Django Basics: myapp mapping in myproject - "myapp" is not defined
Question: I am learning django, and have run `python manage.py startapp myapp`, which
created the following folder structure:

    myapp/
        __init__.py
        admin.py
        models.py
        tests.py
        views.py
I also added `myapp` to `INSTALLED_APPS` in settings:

    INSTALLED_APPS = (
        'django.contrib.admin',
        'django.contrib.auth',
        'django.contrib.contenttypes',
        'django.contrib.sessions',
        'django.contrib.messages',
        'django.contrib.staticfiles',
        'myapp',
    )
then I ran `python manage.py migrate` and `python manage.py createsuperuser`
to create a superuser.
I have also created a view in myapp:

    from django.http import HttpResponse

    def hello(request):
        text = """<h1>welcome to my app !</h1>"""
        return HttpResponse(text)
Finally, here's the URL mapping:

    from django.conf.urls import patterns, include, url
    from django.contrib import admin

    admin.autodiscover()

    urlpatterns = patterns('',
        url(r'^admin', include(admin.site.urls)),
        url(r'^hello/', 'myapp.views.hello', name='hello'),
    )
When run, it throws an error stating that "myapp" is not defined, and I am not
able to access the admin page at <http://127.0.0.1:8000/admin>.
How can I solve this error and get my application to work?
Answer: Possibly `myapp` is not included in `INSTALLED_APPS` in `settings.py`. I
suppose you are following
[this](http://www.tutorialspoint.com/django/django_url_mapping.htm) tutorial.
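If the app is installed, it can also help to import the view and pass the callable instead of the dotted string; a minimal sketch, assuming a Django version where `urlpatterns` can be a plain list:

    from django.conf.urls import include, url
    from django.contrib import admin
    from myapp.views import hello  # import the view callable directly

    admin.autodiscover()

    urlpatterns = [
        url(r'^admin/', include(admin.site.urls)),
        url(r'^hello/', hello, name='hello'),
    ]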
|
Get higher resolution pygame
Question: I am trying to make a game in python and pygame. I want the game to have a
decent resolution, but I can't get a higher resolution. Any ideas?
    import pygame
    import time
    import random
    import os

    pygame.init()

    height = 600
    width = 800

    window = pygame.display.set_mode((width,height))

    white = 255,255,255
    black = 0,0,0
    red = 255,0,0
    blue = 0,0,255

    def update():
        pygame.display.update()

    def game():
        stop_game = False
        while not stop_game:
            window.fill(white)
            loaded_image = pygame.image.load("Player.png")
            loaded_image = pygame.transform.scale(loaded_image,(150,150))
            window.blit(loaded_image,(0,0))
            update()

    game()
Answer:

    print pygame.display.list_modes()

This will list the available display modes for you. Mine goes up to (2646,
1024).
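To actually use one of those modes, pass it to `set_mode`; a minimal sketch (`list_modes` returns the resolutions sorted largest first):

    import pygame

    pygame.init()
    modes = pygame.display.list_modes()  # largest resolutions first
    width, height = modes[0]             # pick the highest one
    window = pygame.display.set_mode((width, height), pygame.FULLSCREEN)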
|
Refer to defined variable from function outside Python class
Question: Excuse the newbie question, but I want to make everything clear for myself. I have a
problem accessing a class variable from a function outside the class. In the
code below I need **`text, text2`**. I get the variables as a tuple, but I need them
separately. How do I update the code to get the variables separately, like **`text = var1`**
and **`text2 = var2`**? Thanks in advance!
    from django.shortcuts import render

    class Someclass():
        def method_1(self):
            self.var2 = 'var2'
            self.var1 = 'var1'
            return self.var1, self.var2

    def func(request):
        cls = Someclass()
        text2 = cls.method_1()
        text = cls.method_1()
        content = {
            'text': text,
            'text2': text2,
        }
        return render(request, "web/page.html", content)
Answer: Since you are returning a tuple, you should assign both values directly to
variables in a single call.
Something like this:

    text, text2 = cls.method_1()
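Applied to the view from the question:

    def func(request):
        cls = Someclass()
        text, text2 = cls.method_1()  # text = 'var1', text2 = 'var2'
        content = {
            'text': text,
            'text2': text2,
        }
        return render(request, "web/page.html", content)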
|
RuntimeError while opening deploy.prototxt
Question: I'm trying to run a simple code with caffe that should open `deploy.prototxt`, but
it couldn't open the file and throws this error:

    RuntimeError: Could not open file /home/ebadawy/git/caffemodels/bvlc_reference_caffenet/deploy.prototxt
This is my code:

    import numpy as np
    import matplotlib.pyplot as plt

    plt.rcParams['figure.figsize'] = (10, 10)        # large images
    plt.rcParams['image.interpolation'] = 'nearest'  # don't interpolate: show square pixels
    plt.rcParams['image.cmap'] = 'gray'              # use grayscale output rather than a (potentially misleading) color heatmap

    caffe_root = '/home/ebadawy/git/caffe'

    import os
    if os.path.isfile(caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'):
        print('CaffeNet found.')
    else:
        print('Downloading pre-trained CaffeNet model...')
        os.system('../scripts/download_model_binary.py ../models/bvlc_reference_caffenet')

    import caffe
    caffe.set_mode_cpu()

    model_def = caffe_root + 'models/bvlc_reference_caffenet/deploy.prototxt'
    model_weights = caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'

    net = caffe.Net(model_def,      # defines the structure of the model
                    model_weights,  # contains the trained weights
                    caffe.TEST)     # use test mode (e.g., don't perform dropout)
I'm using Arch Linux + Python 3.5.
Answer: I found that I forgot to append `/` to `caffe_root` -- a very silly mistake!
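One way to avoid that class of mistake is to build the paths with `os.path.join`, which inserts the separators for you; a minimal sketch:

    import os

    caffe_root = '/home/ebadawy/git/caffe'  # a trailing slash no longer matters
    model_def = os.path.join(caffe_root, 'models/bvlc_reference_caffenet/deploy.prototxt')
    model_weights = os.path.join(caffe_root, 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel')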
|
NameError says variable is not defined, but only in some places
Question: I am trying to implement a keep-alive that sends some data every 30 seconds to
keep a telnet connection open.
My code calls `reinitScore` every second. This function will sometimes call
`calculateWinner`, which sends the data through telnet via
`stelnet.send(data)`.
The problem is, when I call `stelnet.send(data)` inside any function, it
raises a `NameError: global name 'stelnet' is not defined`.
My question is: why would `stelnet.send(data)` work in one place, and not
another?
Here is the part of my code that concerns telnet transfer and function
calling:
    import socket, select, string, sys
    import string
    import threading

    leftKeyCounter = 0
    rightKeyCounter = 0
    frontKeyCounter = 0
    backKeyCounter = 0
    # function called by reinitScore
    def calculateWinner(d):
        scores = {}
        high_score = 0
        for key, value in d.items():
            try:
                scores[value].append(key)
            except KeyError:
                scores[value] = [key]
            if value > high_score:
                high_score = value
        results = scores[high_score]
        if len(results) == 1:
            print results[0]
            stelnet.send(results[0])
            return results[0]
        else:
            print 'TIE'
            return 'TIE', results
    # called once and repeats itself every second
    def reinitScore():
        threading.Timer(1, reinitScore).start()
        # declaring globals so the counters can be reset
        global leftKeyCounter
        global rightKeyCounter
        global frontKeyCounter
        global backKeyCounter
        values = {'left': leftKeyCounter, 'right': rightKeyCounter, 'front': frontKeyCounter, 'back': backKeyCounter}
        if (leftKeyCounter != 0 or rightKeyCounter != 0 or frontKeyCounter != 0 or backKeyCounter != 0):
            calculateWinner(values)
        leftKeyCounter = 0
        rightKeyCounter = 0
        frontKeyCounter = 0
        backKeyCounter = 0
        print "back to 0"

    reinitScore()
    if __name__ == "__main__":
        if (len(sys.argv) < 3):
            print 'Usage : python telnet.py hostname port'
            sys.exit()
        host = sys.argv[1]
        port = int(sys.argv[2])
        stelnet = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        stelnet.settimeout(2)
        # connect to remote host
        try:
            stelnet.connect((host, port))
        except:
            print 'Unable to connect'
            sys.exit()
        print 'Connected to remote host'
        while True:
            # ... some code that has nothing to do with telnet
            while 1:
                socket_list = [sys.stdin, stelnet]
                read_sockets, write_sockets, error_sockets = select.select(socket_list, [], [])
                for sock in read_sockets:
                    if sock == stelnet:
                        data = sock.recv(4096)
                        if not data:
                            print 'Connection closed'
                            sys.exit()
                        else:
                            sys.stdout.write(data)
                    else:
                        msg = sys.stdin.readline()
                        stelnet.send(msg)
I tried to declare `stelnet` as a `global` variable at many places, but it
doesn't change anything --- I always get the "not defined" `NameError`.
Answer: In response to your updated code... The error message is _still_ correct,
because although you have defined `stelnet` at the module level, you've
defined it too late. Its definition occurs _after_ its use in the
`calculateWinner` function.
Stripping your code down to a ridiculously minimal example, you are doing
something like this:
    def calculateWinner():
        # A leap of faith... There is no `stelnet` defined
        # in this function.
        stelnet.send(results[0])

    def reinitScore():
        # Indirectly depends on `stelnet` too.
        calculateWinner()

    # But we haven't defined `stelnet` yet...
    reinitScore()  # Kaboom!

    # These lines will never run, because the NameError has
    # already happened.
    if __name__ == '__main__':
        stelnet = ...  # Too late.
`calculateWinner` depends on a name that _does not exist_ when the function is
compiled. Whether it works or crashes will depend on whether some other code
has defined `stelnet` 1) where `calculateWinner` can get at it, and 2) before
`calculateWinner` is executed.
**Suggestions**
Functions that depend on global mutable state are hard to follow, let alone
code correctly. It's not easy to tell what depends on which variables, or
what's modifying them, or when. Also, coming up with an
[MCVE](https://stackoverflow.com/help/mcve) is more trouble than it should be,
because functions that appear independent might not be.
Stuff as much of your module-level code as you can into a `main` function, and
call it (and nothing else) from the body of `if __name__ == '__main__':`
(since even _that_ is actually at module level).
Consider something like this:
    def reinit_score(output_socket, shared_scores):
        # Ensuring safe concurrent access to the `shared_scores`
        # dictionary is left as an exercise for the reader.
        winner = ...  # Determined from `shared_scores`.
        output_socket.send(winner)
        for key in shared_scores:
            shared_scores[key] = 0
        threading.Timer(
            interval=1,
            function=reinit_score,
            args=[output_socket, shared_scores],
        ).start()

    def main():
        output_socket = ...    # This was `stelnet`.
        shared_scores = {...}  # A dictionary with 4 keys: L/R/U/D.
        reinit_score(output_socket, shared_scores)
        while True:
            play_game(shared_scores)
            # `play_game` mutates the `shared_scores` dictionary...

    if __name__ == '__main__':
        main()
These functions are still connected by the shared dictionary that they pass
around, but only functions that are explicitly passed that dictionary can
change its contents.
|
What's a simple way to build a python web service which will take arguments and return a value?
Question: I want to write a simple python web service which will read the arguments
provided in the called url and based on that return some basic string.
In order to get started and get a better understanding of the whole thing, I'd
like to start by creating a "calculator" web service which will take two
numbers and an operator and based on these inputs return the mathematical
result.
For example if I call from my browser something like:
<http://123.45.67.89:12345/calculate?number1=12&number2=13&operation=addition>
I'd like the python script to figure out (I guess most probably by some simple
switch/case statement) that it should execute something like `return
number1 + number2` and return the result 25 to the caller.
I am sure this shouldn't be too big a problem to implement in python
since it isn't anything too fancy, but, as a beginner, I wasn't able to find
the right starting point.
Any help is appreciated.
Answer: Have a look at the [WSGI](https://www.python.org/dev/peps/pep-0333/)
specification.
You can also use a framework like
[bottle](http://bottlepy.org/docs/dev/index.html), which makes your work easier:
    from bottle import route, run, request

    @route('/calculate')
    def index():
        if request.GET.get('operation') == 'addition':
            return str(int(request.GET.get('number1')) + int(request.GET.get('number2')))
        else:
            return 'Unsupported operation'

    if __name__ == '__main__':
        run(host='123.45.67.89', port=12345)
Or even with [flask](http://flask.pocoo.org/):

    from flask import Flask, request

    app = Flask(__name__)

    @app.route('/calculate')
    def calculate():
        if request.args.get('operation') == 'addition':
            return str(int(request.args.get('number1')) + int(request.args.get('number2')))
        else:
            return 'Unsupported operation'

    if __name__ == '__main__':
        app.run()
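With the bottle version running, opening `http://123.45.67.89:12345/calculate?number1=12&number2=13&operation=addition` in a browser should return 25; the flask version behaves the same once `app.run()` is given a matching host and port.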
|
Identifying false alert using python mapreduce
Question: Can someone help me with the following problem? I am trying to analyze a
security log to find false alerts. The false alerts are those containing "TXT
was not created" and the true ones contain "txt was not created". How can I extract
the particular "txt was not created" entries from the data source (sample input data
given below)?
    from mrjob.job import MRJob

    class MRWordFrequencyCount(MRJob):

        def mapper(self, _, line):
            words = line.split()
            for word in words:
                word = unicode(word, "utf-8", errors="ignore")
                yield word, 1

        def reducer(self, key, values):
            yield key, sum(values)

    if __name__ == '__main__':
        MRWordFrequencyCount.run()
A sample input is given here:
    Mon Feb 1 12:13:59 EST 2016 virtual user etransactiondev started to upload file
    /export/home/pub/etransactiondev/uploads/etransactionenvironment/ABC/rrd/in/WCWT.SMR.XYZ0002.PLSE.INPUT01.LFEP_APOL_D_M_20160201171358.TXT
    /export/home/pub/etransactiondev/uploads/etransactionenvironment/ABC/rrd/in/WCWT.SMR.XYZ0002.PLSE.INPUT01.LFEP_APOL_D_M_20160201171358.txt was not created
Answer: Can you just check the first word?

    word = word.split(' ')
    if word[0] == 'TXT':
        do something...
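Building on that idea, here is a minimal sketch (the class name is made up) that classifies whole lines with a case-sensitive substring test instead of counting every word:

    from mrjob.job import MRJob

    class MRFalseAlertCount(MRJob):

        def mapper(self, _, line):
            # case-sensitive: "TXT" marks a false alert, "txt" a true one
            if "TXT was not created" in line:
                yield "false_alert", 1
            elif "txt was not created" in line:
                yield "true_alert", 1

        def reducer(self, key, values):
            yield key, sum(values)

    if __name__ == '__main__':
        MRFalseAlertCount.run()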
|
Split one txt to several txt files with particular name in Python
Question: I have a txt file which looks like:

    24.03.2016 Peso
    27.03.2016 Ruble
    18.04.2016 Euro
    17.05.2016 Dollar
    16.06.2016 Frank
I need to split it into different files, where the name of each new file is
the date and the content of the file is the rest. For example - the name is
**18.04.2016** and inside the file is **Euro**.
But if it's the same month (like 03.2016 here), I need to put it all in one
file, with the name of the first date of that month. For example - the name is
**24.03.2016**, and inside is **Peso \n Ruble**.
How can I do that? For now I'm only at the step of reading my file line by line:

    with open("Data.txt", 'r', encoding="utf-8") as fp:
        for line in fp:
            read(line)
Answer: Something like this:

    #!python3
    import collections

    seen = collections.defaultdict(list)

    with open("Data.txt", 'r', encoding="utf-8") as fp:
        for line in fp:
            line = line.strip()
            if not line:
                continue
            date, currency = line.split()
            month = date[3:]
            seen[month].append((date, currency))

    for month in seen.keys():
        with open(seen[month][0][0], 'w') as outfile:
            print("\n".join(currency for date, currency in seen[month]),
                  file=outfile)
|
incrementing values of a list at specific indexes python
Question: I am attempting to create a list for each nucleotide (A, G, C, T) in a
sequence, where the index of the list corresponds to the position in the
sequence and the value is the frequency of that nucleotide across all
sequences. Here are 4 sequences as an example:

    >ignore this
    GTAGGGCGA
    >ignore this
    GTATACAGC
    >ignore this
    GTTTCTCTT
    >ignore this
    GTAATCAAA
The code I've written:

    def function(filename, length):
        g,t,c,a = [],[],[],[]
        with open(filename, "r") as f:
            for line in f:
                if line.startswith('GT'):
                    gcount, acount, tcount, ccount = 0, 0, 0, 0
                    g = [gcount + 1 if nuc == 'G' else gcount for nuc in line[:length]]
        return g
Right now, this code just looks at the G nucleotides, and I get a list for
every sequence instead of one list that sums the values at each index:

    [1, 0, 0, 1, 1, 1, 0, 1, 0]
    [1, 0, 0, 0, 0, 0, 0, 1, 0]
    [1, 0, 0, 0, 0, 0, 0, 0, 0]
    [1, 0, 0, 0, 0, 0, 0, 0, 0]

What I would like as my output for g alone:

    [4, 0, 0, 1, 1, 1, 0, 2, 0]
Answer: You can use `numpy` for this. Just convert your lists to `numpy` arrays and
add:

    import numpy as np

    list1 = np.array([1, 0, 0, 1, 1, 1, 0, 1, 0])
    list2 = np.array([1, 0, 0, 0, 0, 0, 0, 1, 0])
    list3 = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0])
    list4 = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0])

    >>> list1 + list2 + list3 + list4  # desired result!
    array([4, 0, 0, 1, 1, 1, 0, 2, 0])
Here's how you can modify your current function to support this:

    import numpy as np

    def function(filename, length):
        g,t,c,a = [],[],[],[]
        # create an array of the expected length of g filled with 0s
        base = np.zeros((1,length))  # 1 row, `length` number of columns
        with open(filename, "r") as f:
            for line in f:
                if line.startswith('GT'):
                    gcount, acount, tcount, ccount = 0, 0, 0, 0
                    g = np.array([gcount + 1 if nuc == 'G' else gcount for nuc in line[:length]])
                    base = base + g  # add this new numpy array
        return base  # return the summed result
Here are the installation instructions for
[`numpy`](http://docs.scipy.org/doc/numpy-1.10.1/user/install.html).
|
Is there a Python class/enum for flag/bit mask operations?
Question: I know of the base classes `Enum` and `IntEnum`. Both are very helpful, but I miss
features for flag operations. I don't expect these two classes to implement
the feature I'm after.
Let's construct an example:
    class NetlistKind(IntEnum):
        Unknown = 0
        LatticeNetlist = 1
        QuartusNetlist = 2
        XSTNetlist = 4
        CoreGenNetlist = 8
        All = 15
As you can see, I'm already using `IntEnum` to get arithmetic features for
this enum. It would be nice to have something like `@unique` to ensure all
values are a power of two. I can do this by forking enum.unique for my needs.
(I'm aware that `All` is an exception to that rule.)
How is such an enum used?

    filter = NetlistKind.LatticeNetlist | NetlistKind.QuartusNetlist

Thanks to the underlying int, bit operations are possible and filter has an
internal value of 3.
It would be nice to have an "is flag X set in filter Y" function, or even better
an operator. I added a magic function for `x in y`:
    @unique
    class NetlistKind(IntEnum):
        Unknown = 0
        LatticeNetlist = 1
        QuartusNetlist = 2
        XSTNetlist = 4
        CoreGenNetlist = 8
        All = 15

        def __contains__(self, item):
            return (self.value & item.value) == item.value
Usage example:

    ....
    def GetNetlists(self, filter=NetlistKind.All):
        for entity in self._entities:
            for nl in entity.GetNetlists():
                if (nl.kind in filter):
                    yield nl

    def GetXilinxNetlists(self):
        return self.GetNetlists(NetlistKind.XSTNetlist | NetlistKind.CoreGenNetlist)
So the questions are:
* Are there better ways to implement bit fields?
* Are there better ways to implement such a 1-D filter? I don't want to use lambdas for such a simple filter condition.
* Is such a solution already included in the Python standard library?
* How do I get this enum extension into the next Python version? :)
Open features:
* return a list of all active flags in `__str__`
* ...?
Answer: I've recently published an open-source package [py-
flags](https://pypi.python.org/pypi/py-flags) that aims to solve this problem. That
library has exactly this functionality, and its design is heavily influenced by
the python3 enum module.
There are debates about whether it is pythonic enough to implement such a
flags class, because its functionality overlaps hugely with other constructs
provided by the language (a collection of bool variables, sets, objects with
bool attributes, dicts with bool items, ...). For this reason I feel a flags
class is too narrow in purpose and/or redundant to make its way into the
standard library, but in some cases it is much better than the previously
listed solutions, so having a "pip install"-able library can come in handy.
Your example would look like the following using the py-flags module:
    from flags import Flags

    class NetlistKind(Flags):
        Unknown = 0
        LatticeNetlist = 1
        QuartusNetlist = 2
        XSTNetlist = 4
        CoreGenNetlist = 8
        All = 15
The above could be tweaked a bit further, because a flags class declared
with the library automatically provides two "virtual" flags:
`NetlistKind.no_flags` and `NetlistKind.all_flags`. These make the already
declared `NetlistKind.Unknown` and `NetlistKind.All` redundant, so we could
leave them out of the declaration, but the problem is that `no_flags` and
`all_flags` don't match your naming convention. To deal with this, we declare a flags
base class in your project instead of `flags.Flags`, and you use
that in the rest of your project:

    from flags import Flags

    class BaseFlags(Flags):
        __no_flags_name__ = 'Unknown'
        __all_flags_name__ = 'All'
Based on the previously declared base class, which can be subclassed by any of
the flags in your project, we can change your flag declaration to:

    class NetlistKind(BaseFlags):
        LatticeNetlist = 1
        QuartusNetlist = 2
        XSTNetlist = 4
        CoreGenNetlist = 8

This way `NetlistKind.Unknown` is automatically declared with a value of zero.
`NetlistKind.All` is also there, and it is automatically the combination of all
of your declared flags. It is possible to iterate enum members with or without
these virtual flags. You can also declare aliases (flags that have the same
value as another previously declared flag).
As an alternative declaration, using the "function-call style" (also provided
by the standard enum module):

    NetlistKind = BaseFlags('NetlistKind', ['LatticeNetlist', 'QuartusNetlist',
                                            'XSTNetlist', 'CoreGenNetlist'])
If a flags class declares members, then it is considered final;
trying to subclass it results in an error. It is semantically undesirable to
allow subclassing a flags class for the purpose of adding new members or changing
functionality.
Besides this, the flags class provides the operators you listed (bool
operators, in, iteration, etc.) in a type-safe way. I'm going to finish the
README.rst along with a little plumbing on the package interface in the next
few days, but the basic functionality is already there and tested with quite
good coverage.
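Given the operators described above, usage should look roughly like this (a sketch based on the description, not tested against the library):

    filter = NetlistKind.XSTNetlist | NetlistKind.CoreGenNetlist
    NetlistKind.XSTNetlist in filter      # True
    NetlistKind.LatticeNetlist in filter  # False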
|
Python3 Unicode Decode Error
Question: I get `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe8 in position 0:
invalid continuation byte`
When I try to call `codecs.decode(X, 'utf-8')` where `X =
b'\xe8\xd0\xca@\xee\xe4\xca\xc6\xd6@\xde\xcc@\xe8\xd0\xca@\xd0\xca\xe6\xe0\xca\xe4\xea\xe6\x14\xc4\xf2@\xd0\xca\xdc\xe4\xf2@\xee\xc2\xc8\xe6\xee\xde\xe4\xe8\xd0@\xd8\xde\xdc\xce\xcc\xca\xd8\xd8\xde\xee\x14\x14\xd2\xe8@\xee\xc2\xe6@\xe8\xd0\xca@\xe6\xc6\xd0\xde\xde\xdc\xca\xe4@\xd0\xca\xe6\xe0\xca\xe4\xea\xe6\x14@@@@@@\xe8\xd0\xc2\xe8@\xe6\xc2\xd2\xd8\xca\xc8@\xe8\xd0\xca@\xee\xd2\xdc\xe8\xe4\xf2@\xe6\xca\xc2\x14\xc2\xdc\xc8@\xe8\xd0\xca@\xe6\xd6\xd2\xe0\xe0\xca\xe4@\xd0\xc2\xc8@\xe8\xc2\xd6\xca\xdc@\xd0\xd2\xe6@\xd8\xd2\xe8\xe8\xd8\xca@\xc8\xc2\xea\xce\xd0\xe8\xca\xe4\x14@@@@@@\xe8\xde@\xc4\xca\xc2\xe4@\xd0\xd2\xda@\xc6\xde\xda\xe0\xc2\xdc\xf2\\\x14\x14\xc4\xd8\xea\xca@\xee\xca\xe4\xca@\xd0\xca\xe4@\xca\xf2\xca\xe6@\xc2\xe6@\xe8\xd0\xca@\xcc\xc2\xd2\xe4\xf2Z\xcc\xd8\xc2\xf0\x14@@@@@@\xd0\xca\xe4@\xc6\xd0\xca\xca\xd6\xe6@\xd8\xd2\xd6\xca@\xe8\xd0\xca@\xc8\xc2\xee\xdc@\xde\xcc@\xc8\xc2\xf2\x14\xc2\xdc\xc8@\xd0\xca\xe4@\xc4\xde\xe6\xde\xda@\xee\xd0\xd2\xe8\xca@\xc2\xe6@\xe8\xd0\xca@\xd0\xc2\xee\xe8\xd0\xde\xe4\xdc@\xc4\xea\xc8\xe6\x14@@@@@@\xe8\xd0\xc2\xe8@\xde\xe0\xca@\xd2\xdc@\xe8\xd0\xca@\xda\xde\xdc\xe8\xd0@\xde\xcc@\xda\xc2\xf2\\\x14\x14\xe8\xd0\xca@\xe6\xd6\xd2\xe0\xe0\xca\xe4@\xd0\xca@\xe6\xe8\xde\xde\xc8@\xc4\xca\xe6\xd2\xc8\xca@\xe8\xd0\xca@\xd0\xca\xd8\xda\x14@@@@@@\xd0\xd2\xe6@\xe0\xd2\xe0\xca@\xee\xc2\xe6@\xd2\xdc@\xd0\xd2\xe6@\xda\xde\xea\xe8\xd0\x14\xc2\xdc\xc8@\xd0\xca@\xee\xc2\xe8\xc6\xd0\xca\xc8@\xd0\xde\xee@\xe8\xd0\xca@\xec\xca\xca\xe4\xd2\xdc\xce@\xcc\xd8\xc2\xee@\xc8\xd2\xc8@\xc4\xd8\xde\xee\x14@@@@@@\xe8\xd0\xca@\xe6\xda\xde\xd6\xca@\xdc\xde\xee@\xee\xca\xe6\xe8@\xdc\xde\xee@\xe6\xde\xea\xe8\xd0\\\x14\x14\xe8\xd0\xca\xdc@\xea\xe0@\xc2\xdc\xc8@\xe6\xe0\xc2\xd6\xca@\xc2\xdc@\xde\xd8\xc8@\xe6\xc2\xd2\xd8\xde\xe4\x14@@@@@@\xd0\xc2\xc8@\xe6\xc2\xd2\xd8\xca\xc8@\xe8\xde@\xe8\xd0\xca@\xe6\xe0\xc2\xdc\xd2\xe6\xd0@\xda\xc2\xd2\xdc\x14\xd2@\xe0\xe4\xc2\xf2@\xe8\xd0\xca\xca@\xe0\xea\xe8@\xd2\xdc\xe8\xde@\xf2\xde\xdc\xc8\xca\xe4@\xe0\xde\xe4\xe8\x14@@@@@@\xcc\xde\xe4@\xd2@\xcc\xca\xc2\xe4@\xc2@\xd0\xea\xe4\xe4\xd2\xc6\xc2\xdc\xca\\\x14\x14\xd8\xc2\xe6\xe8@\xdc\xd2\xce\xd0\xe8@\xe8\xd0\xca@\xda\xde\xde\xdc@\xd0\xc2\xc8@\xc2@\xce\xde\xd8\xc8\xca\xdc@\xe4\xd2\xdc\xce\x14@@@@@@\xc2\xdc\xc8@\xe8\xdeZ\xdc\xd2\xce\xd0\xe8@\xdc\xde@\xda\xde\xde\xdc@\xee\xca@\xe6\xca\xca\x14\xe8\xd0\xca@\xe6\xd6\xd2\xe0\xe0\xca\xe4@\xd0\xca@\xc4'`
I also tried to use `binascii.unhexlify('%x' % (int('0b' + bNum,
2))).decode('utf-8')`, where `bNum` is a long binary string.
The text was originally from a utf-8 encoded `.txt` file.
EDIT: Let's say we have two bit strings; the first is the exact bit string from
converting some text to a bit string. The second is extracted from an image.
The second is exactly the same as the first, up to the point where it was cut
off because the image it was being hidden in didn't have enough pixels.
Example: <http://pastebin.com/NnaH9dEb>
Why would it throw `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe8
in position 0: invalid continuation byte` if they both contain the same
data up to the point where the second one cuts off?
EDIT2: When I convert the two bit strings to hex via `hex(int(<var name>, 2))`
I get different results, but converting only the first couple of bytes returns
the same result.
Answer: The decode of `decMsg` is misaligned. If I add 7 zero bits to the end of the
message or truncate the last bit, it decodes with my method. Your code was
TL;DR.

    import math

    initMsg = '11101000110100001100101...'  # truncated due to post limits.
    decMsg = '11101000110100001100101...'

    # Only printing the first 25 chars of the message for brevity:
    a = int(initMsg, 2)
    print(a.to_bytes(math.ceil(a.bit_length()/8), 'big')[:25])

    a = int(decMsg, 2)
    print(a.to_bytes(math.ceil(a.bit_length()/8), 'big')[:25])

    a = int(decMsg + '0000000', 2)
    print(a.to_bytes(math.ceil(a.bit_length()/8), 'big')[:25])

    a = int(decMsg[:-1], 2)
    print(a.to_bytes(math.ceil(a.bit_length()/8), 'big')[:25])
Output:

    b'the wreck of the hesperus'
    b'\xe8\xd0\xca@\xee\xe4\xca\xc6\xd6@\xde\xcc@\xe8\xd0\xca@\xd0\xca\xe6\xe0\xca\xe4\xea\xe6'
    b'the wreck of the hesperus'
    b'the wreck of the hesperus'

Compare `\xe8` to `t` in binary:

    >>> format(ord('t'),'08b')
    '01110100'
    >>> format(0xe8,'08b')
    '11101000'
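The general fix, then, is to keep the bit string a whole number of bytes before converting. A small helper sketching that (the function name is made up); it right-pads to a byte boundary, which matches the "add 7 zero bits to the end" repair above:

    def bits_to_bytes(bits):
        padded = bits.ljust(-(-len(bits) // 8) * 8, '0')  # right-pad to a multiple of 8 bits
        return int(padded, 2).to_bytes(len(padded) // 8, 'big')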
|
Can I force python array elements to have a specific size?
Question: I am using the array module to store sizable numbers (many gigabytes) of
unsigned 32-bit ints. Rather than using 4 bytes for each element, python is
using 8 bytes, as indicated by `array.itemsize` and verified by pympler,
e.g.:

    >>> array("L", range(10)).itemsize
    8
I have a large number of elements, so I would benefit from storing them within
4 bytes.
Numpy will let me store the values as unsigned 32-bit ints:

    >>> np.array(range(10), dtype = np.uint32).itemsize
    4

But the problem is that any operation using numpy's index operator is about
twice as slow, so operations that aren't vector operations supported by numpy
are slow, e.g.:

    python3 -m timeit -s "from array import array; a = array('L', range(1000))" "for i in range(len(a)): a[i]"
    10000 loops, best of 3: 51.4 usec per loop

vs

    python3 -m timeit -s "import numpy as np; a = np.array(range(1000), dtype = np.uint32)" "for i in range(len(a)): a[i]"
    10000 loops, best of 3: 90.4 usec per loop

So I am forced to either use twice as much memory as I would like, or have the
program run twice as slow as I would like. Is there a way around this?
Can I force python arrays to use a specified itemsize?
Answer: If you want to stick to using `array`, [set the
typecode](https://docs.python.org/3.6/library/array.html) to `I` (`unsigned
int`) rather than `L` (`unsigned long`):

    >>> array.array("I", range(10)).itemsize
    4

That said, I would be very surprised if there wasn't a way to speed up your
calculations way more than the 2x you are losing by using numpy. Hard to tell
without knowing exactly what you are doing with those values.
|
IV must be 16 bytes long error in AES encryption
Question: I am using the [pycrypto](https://pypi.python.org/pypi/pycrypto) module for AES
encryption. Using the documentation I have written the function below, but it
always gives the error `IV must be 16 bytes long`, even though I am using a
16-byte IV.

    def aes_encrypt(plaintext):
        """
        """
        key = **my key comes here**
        iv = binascii.hexlify(os.urandom(16))  # even tried without binascii.hexlify
        aes_mode = AES.MODE_CBC
        obj = AES.new(key, aes_mode, iv)
        ciphertext = obj.encrypt(plaintext)
        return ciphertext
Answer: Use this:

    from Crypto.Cipher import AES
    import binascii, os

    def aes_encrypt(plaintext):
        key = "00112233445566778899aabbccddeeff"
        iv = os.urandom(16)
        aes_mode = AES.MODE_CBC
        obj = AES.new(key, aes_mode, iv)
        ciphertext = obj.encrypt(plaintext)
        return ciphertext

It works as below:

    >>> aes_encrypt("TestTestTestTest")
    'r_\x18\xaa\xac\x9c\xdb\x18n\xc1\xa4\x98\xa6sm\xd3'
    >>>

That's the difference:

    >>> iv = binascii.hexlify(os.urandom(16))
    >>> iv
    '9eae3db51f96e53f94dff9c699e9e849'
    >>> len(iv)
    32
    >>> iv = os.urandom(16)
    >>> iv
    '\x16fdw\x9c\xe54]\xc2\x12!\x95\xd7zF\t'
    >>> len(iv)
    16
    >>>
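Note that CBC decryption needs the same IV, so it is usually stored or sent alongside the ciphertext; a minimal sketch of the matching decrypt, using the same pycrypto API:

    def aes_decrypt(ciphertext, key, iv):
        # supply the same 16-byte IV that was used for encryption
        obj = AES.new(key, AES.MODE_CBC, iv)
        return obj.decrypt(ciphertext)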
|
MySQL queries waiting for other queries to finish
Question: We have 30-40 different projects in Python and PHP that update, insert and
select more than 1 million rows of data in MySQL DB every day.
Currently we use InnoDB Engine for our tables.
The problem: we have peaks in MySQL when almost all projects are working and
lots of queries are processing in DB. There are main queries that are very
important to finish ASAP (high priority) and queries that can wait for finish
of main queries (less priority). But as they go to MySQL concurrent it causes
main queries to wait finishing of less priority queries.
Questions:
1. Is there any possibility to release all lock in tables before executing main queries (so they can finish ASAP)? or create locks if it help?
2. Can we pause the less priority queries execution when start execution main queries automatically?
3. Can use HIGH_PRIORITY and LOW_PRIORITY in queries help?
4. Are there some configurations in MySQL that can help?
5. Can changing tables to MyISAM or other DB engine help?
Let me know your thoughts and ideas.
Answer: 1. No. You might try upgrading to MySQL 5.7, as it allows parallel replication within tables if the transactions do not interfere with each other.
2. See <http://dev.mysql.com/doc/refman/5.7/en/lock-tables.html> about how LOW PRIORITY has no effect.
3. See #2.
4. It would probably be better to look at how you are doing your locking in your application - are you locking rows, making changes, and unlocking quickly, or does the code do this in a leisurely fashion?
5. MyISAM locks at the table level, not the row level, and MyISAM does not support transactions (which is probably why you are locking records).
|
Errno 13 when installing python package
Question: I'm pretty new to programming and I've just downloaded Ubuntu onto my laptop.
The problem I'm having is when I try to install the python package tabulate
(<https://pypi.python.org/pypi/tabulate>) from the terminal it displays an
error telling me I don't have permission to do so.
    kai@kai-HP-Notebook:~$ pip install tabulate
    Downloading/unpacking tabulate
      Downloading tabulate-0.7.5.tar.gz
      Running setup.py (path:/tmp/pip_build_kai/tabulate/setup.py) egg_info for package tabulate
    Installing collected packages: tabulate
      Running setup.py install for tabulate
        error: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/tabulate.py'
        Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_kai/tabulate/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-If5xKf-record/install-record.txt --single-version-externally-managed --compile:
        running install
        running build
        running build_py
        creating build
        creating build/lib.linux-x86_64-2.7
        copying tabulate.py -> build/lib.linux-x86_64-2.7
        running install_lib
        copying build/lib.linux-x86_64-2.7/tabulate.py -> /usr/local/lib/python2.7/dist-packages
        error: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/tabulate.py'
        ----------------------------------------
    Cleaning up...
    Command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_kai/tabulate/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-If5xKf-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip_build_kai/tabulate
    Storing debug log for failure in /home/kai/.pip/pip.log
What am I doing wrong? I'm sure it's quite an easy problem to get around.
Answer: As answered by @JRodDynamite, use:

    sudo pip install tabulate
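If you'd rather not install system-wide with sudo, a per-user install avoids the permission error as well:

    pip install --user tabulate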
|
Compare rows of csv and work out percentage
Question: I'm relatively new to Python. I'm trying to find a way to create a script that
looks at a CSV file called "data_old" from a previous month, and compares it
with the data in a more recent month called "data_new", then finally outputs
that data into a new CSV "data_compare".
The files each month are consistently laid out and look like this (example)
> Month 1
> Company, StaffNumber, NeedToPass, Passed, %age meeting requirement
> xxxxxxxx, 100, 80, 30, 30%
>
> Month 3
> Company, StaffNumber, NeedToPass, Passed, %meeting requirement
> xxxxxxxx, 101, 81, 54, 60%
I'm trying to get the output file to compare the data from all rows and show
me "Percentage improved" instead of "Percentage meeting requirement". Nothing
I try seems to work.
As the numbers change all the time the only common data will be the company
name.
I need a simple, explanatory way with comments... as I'd like to understand
the logic so I can modify it and add functions.
Much appreciated.
Answer: Here is a python code example which might do what you want. This script
assumes that the two input csv files have the same number of lines. In the
function `test` I use the `zip` function, which stops as soon as one list is
exhausted. If your files have a different number of lines, you have to loop
over both manually. But I think it is a good starting point.
    #!/usr/bin/env python
    # -*- coding: utf-8 -*-

    import csv

    def parse_csv(filename, sort_row=0, as_dict=False, delimiter=","):
        r = list()
        with open(filename, "rb") as f:
            # make csv reader object
            reader = csv.reader(f, delimiter=delimiter)
            if as_dict:
                # make dict if desired
                header = [h.strip() for h in reader.next()]
            for row in reader:
                if as_dict:
                    # make dict if desired
                    r.append(dict(zip(header, row)))
                else:
                    # strip each item in the row and append it to the return list
                    r.append([h.strip() for h in row])
        # sort the list by the first item (company name in this example)
        r.sort(key=lambda x: x[sort_row])
        return r

    def write_csv(filename, fieldnames, rows, delimiter=","):
        with open(filename, "w") as f:
            # make csv writer object
            writer = csv.writer(f, delimiter=delimiter)
            # write the first header line
            writer.writerow(fieldnames)
            for row in rows:
                # write each row
                writer.writerow(row)

    def test():
        data_old = parse_csv("m1.csv")
        data_new = parse_csv("m2.csv")
        #write_csv("data_compare.csv", data_old[:1][0], data_old[1:])
        result = list()
        # loop over the items (skipping the first header row)
        for o, n in zip(data_old[1:], data_new[1:]):
            # calculate the improvement (or whatever needs to be calculated)
            value = float(n[4].replace("%", "")) - float(o[4].replace("%", ""))
            # create the row
            result.append([o[0], "%s%%" % value, o[4], n[4]])
            #result.append(["%s%%" % value])
        header = ["Company", "Percentage improved", "old", "new"]
        #header = ["Company", "Percentage improved"]
        write_csv("data_compare.csv", header, result)

    if __name__ == '__main__':
        test()
|
How to run python script on Jupyter in the terminal?
Question: I want to execute a python script in Jupyter, but I don't want to use the
web browser (the IPython interactive terminal); I want to run a single command in
the Linux terminal to load and run the python script, so that I can get the
output from Jupyter.
I tried to run `jupyter notebook %run <my_script.py>`, but it seems jupyter
doesn't recognize `%run`.
Is it possible to do that?
Answer: You can use the `jupyter console -i` command to run an interactive jupyter
session in your terminal. From there you can run `import my_script`. Do
note that this is not the intended use case of either jupyter or the notebook
environment. You should run scripts using your normal python interpreter
instead.
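For example, since the console is an IPython session, the `%run` magic from the question works there too:

    $ jupyter console
    In [1]: %run my_script.py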
|
How can I form a composite Function with Theano?
Question: I would like to compute a composite function f(x, g(x)) with Theano.
Unfortunately, when I try to code a function composition, Python complains
about a TypeError. For example, consider the following simple script:
    import theano
    import theano.tensor as T

    x = T.dscalar('x')

    def g():
        y1 = T.sqr(x)
        return theano.function([x], y1)

    def composition():
        input = g()
        yComp = x * input
        return theano.function([x], yComp)

    def f():
        y1 = T.sqr(x)
        yMult = x * y1
        return theano.function([x], yMult)
When writing `funComp = composition()`, Python returns a TypeError:

    TypeError: unsupported operand type(s) for *: 'TensorVariable' and 'Function'

However, I can compile and calculate the function `fun = f()`. Is there a
way to successfully establish a function composition? I am grateful for any
help!
Answer: You don't actually need multiple Theano functions for this case. This one works well:

    import theano
    import theano.tensor as T

    x = T.dscalar('x')

    def g():
        y1 = T.sqr(x)
        return y1

    def composition():
        input = g()
        yComp = x * input
        return theano.function([x], yComp)

    tfunc = composition()
    print tfunc(4)
|
Value Error: Too many dimensions: 3 > 2
Question: I've tried to resize an image with scipy and everything seems to work fine until
I try to save the image. When I try to save it I get the error that you can see
in the title. The full traceback is available below.

    import numpy as np
    import scipy.misc
    from PIL import Image

    image_path = "img0.jpg"

    def load_image(img_path):
        img = Image.open(img_path)
        img.load()
        data = np.asarray(img, dtype="int32")
        return data

    def save_image(npdata, outfilename):
        img = Image.fromarray(np.asarray(np.clip(npdata, 0, 255), dtype="uint8"), "L")
        img.save(outfilename)

    array_image = load_image(image_path)
    array_resized_image = scipy.misc.imresize(array_image, (320, 240), interp='nearest', mode=None)
    save_image(array_resized_image, "i1.jpg")
Full traceback of the error:

    Traceback (most recent call last):
      File "D:/Python/Playground/resize image with scipy.py", line 26, in <module>
        save_image(array_resized_image, "i1.jpg")
      File "D:/Python/Playground/resize image with scipy.py", line 16, in save_image
        img = Image.fromarray(np.asarray(np.clip(npdata, 0, 255), dtype="uint8"), "L")
      File "C:\Anaconda2\lib\site-packages\PIL\Image.py", line 2154, in fromarray
        raise ValueError("Too many dimensions: %d > %d." % (ndim, ndmax))
    ValueError: Too many dimensions: 3 > 2.
Answer: Don't you need to convert it to a two-dimensional array before doing
`fromarray(..., 'L')`?
You can do that using a scipy function or, actually quicker, by multiplying the
RGB channels by weighting factors, like this:

    npdata = (npdata[:,:,:3] * [0.2989, 0.5870, 0.1140]).sum(axis=2)
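Applied to the `save_image` function from the question, that would look like:

    def save_image(npdata, outfilename):
        # collapse the RGB channels into one grayscale channel first
        npdata = (npdata[:,:,:3] * [0.2989, 0.5870, 0.1140]).sum(axis=2)
        img = Image.fromarray(np.asarray(np.clip(npdata, 0, 255), dtype="uint8"), "L")
        img.save(outfilename)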
|
Replace value in array when greater than x
Question: I have a little problem with a simple idea. I have an array of data and I would
like to replace each value if it is greater than X.
To solve that, I wrote a little example script that shows the same idea:

    import numpy as np

    # Array creation
    array = np.array([0.5, 0.6, 0.9825])
    print array

    # If value > 0.7 replace by 0.
    new_array = array[array > 0.7] == 0
    print new_array

I would like to obtain:

    >>> [0.5, 0.6, 0]  # 0.9825 is replaced by 0 because > 0.7

Thank you if you can help me ;)
**EDIT:**
I didn't see how this subject could help me: [Replace all elements of Python
NumPy Array that are greater than some
value](http://stackoverflow.com/questions/19666626/replace-all-elements-of-python-numpy-array-that-are-greater-than-some-value).
The answer given by @ColonelBeauvel is not mentioned in the previous post.
Answer: I wonder why this solution is not provided in the link @DonkeyKong provided:

    np.where(arr > 0.7, 0, arr)
    # Out[282]: array([ 0.5,  0.6,  0. ])
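If you'd rather modify the array in place, boolean indexing on the left-hand side also works:

    array[array > 0.7] = 0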
|
ImportError: DLL load failed: The specified module could not be found for numpy
Question: I have Python 3.3.2, 64-bit. When I run a script with `import numpy` I get the
following error: `ImportError: DLL load failed: The specified module could not
be found.` The traceback is:
    Traceback (most recent call last):
      File "C:\Users\ZKZJFIO\workspace\FX_FORWARD_FLAG_DETERMINATION\Main.py", line 1, in <module>
        import numpy
      File "C:\Python33\numpy\__init__.py", line 180, in <module>
        from . import add_newdocs
      File "C:\Python33\numpy\add_newdocs.py", line 13, in <module>
        from numpy.lib import add_newdoc
      File "C:\Python33\numpy\lib\__init__.py", line 8, in <module>
        from .type_check import *
      File "C:\Python33\numpy\lib\type_check.py", line 11, in <module>
        import numpy.core.numeric as _nx
      File "C:\Python33\numpy\core\__init__.py", line 14, in <module>
        from . import multiarray
I looked at [this link](http://scipy-user.10969.n7.nabble.com/ImportError-DLL-load-failed-The-specified-module-could-not-be-found-After-update-td16317.html),
which appeared to be dealing with a similar issue, and found that I do actually
have multiarray.pyd, so I am a bit confused as to how to resolve this issue, as
most questions about this error appear to be specific to that module.
**After running Dependency Walker on multiarray.pyd, it appears MSVCR90.DLL and
PYTHON27.DLL are missing. Would it be worth just downloading Python27 to
rectify this issue, as I was told downloading DLLs directly may not be the
best thing?**
Thank you
Thank You
Answer: Since the creator of Numpy started a company that puts out a python distribution
(with Numpy as one of 195 libraries that work on Windows), I would
suggest you pick that one to use: <http://continuum.io/downloads>. You can
pick version 2.7 or 3.x.
|
what does "*" mean in the following line in python?
Question: I was looking into building a graphical interface using python.
What does * mean in the following line?

    from Tkinter import *
Answer: [What exactly does "import *"
import?](http://stackoverflow.com/questions/2360724/what-exactly-does-import-
import)
It basically imports everything that is part of the module.
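As a quick illustration of the difference (a minimal sketch):

    from Tkinter import *   # Tk, Label, Button, ... all land in the current namespace
    root = Tk()

    import Tkinter          # a plain import keeps the names qualified instead
    root = Tkinter.Tk()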
|
Python 2.6.4 - method name is not defined
Question: I am running a python script named `automator.py` from the command line using
PowerShell on Windows 7:

    python .\automator.py

The `automator.py` file looks like this:

    import os

    ipAddressFile = os.path.join("DCM_Info", "ip_address")

    ipAddresses = getIpAddresses()
    for ip in ipAddresses:
        print str(ip)
        cmd = "python run.py " + ip + " get_transrator_settings"
        os.system(cmd)

    def getIpAddresses():
        f = open(ipAddressFile, 'r')
        return f.readlines()
Why am I getting an error that the name of the method is undefined?

    NameError: name 'getIpAddresses' is not defined
I'm used to C#/Java where you have a main method that starts the program and
classes that have constructors. Do I need to have a constructor or a class? Is
that necessary?
Answer: You need to move the function definition to be _before_ the first time it is
used. It can be easy to forget about this, since languages like JavaScript
allow functions to be declared after they're called.
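A minimal reordering of the script from the question:

    import os

    ipAddressFile = os.path.join("DCM_Info", "ip_address")

    # define the function before the module-level code that calls it
    def getIpAddresses():
        f = open(ipAddressFile, 'r')
        return f.readlines()

    ipAddresses = getIpAddresses()
    for ip in ipAddresses:
        print str(ip)
        cmd = "python run.py " + ip + " get_transrator_settings"
        os.system(cmd)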
|
Selecting values from list in dictionary python
Question: I've been working on a small contact importer, and now I'm trying to implement
a block that automatically selects the output file format based on the number
of contacts to be imported.
However, every time it results in the error:

    KeyError: 'q'

I can't figure out for the life of me why this is happening, and I would love
any help offered.
My idea of scalability is that the dictionary `personDict` would be of the
format `personDict = {nameid: [name, email]}`, but nothing works.
Any help is good help,
Thanks
    def autoFormat():
        while True:
            name = input("Enter the person's name \n")
            if name == "q":
                break
            email = input("Enter the person's email \n")
            personDict[name] = [name, email]
        if len(personDict) <= 10:
            keyValue = personDict[name]
            for keyValue in personDict:
                for key, value in personDict.iteritems():
                    combined = "BEGIN:VCARD\nVERSION:4.0\n" + "FN:" + name + "\n" + "EMAIL:" + email + "\n" + "END:VCARD"
                    fileName = name + ".vcl"
                    people = open(fileName, 'a')
                    people.write(combined)
                    people.close()
                    print("Created file for " + name)

    autoFormat()
Answer: The main problem is that when the user types `"q"`, your code leaves the
`while` loop with `name` keeping "q" as its value. So you should remove this
useless line:

    keyValue = personDict[name]

since there is no element with key `"q"` in your dictionary.
Also, in the export part you write values to the file that differ from those you loop
over. Your code becomes:

    if len(personDict) <= 10:
        for name, email in personDict.values():
            combined = "BEGIN:VCARD\nVERSION:4.0\n" + "FN:" + name + "\n" + "EMAIL:" + email + "\n" + "END:VCARD"
            fileName = name + ".vcl"
            people = open(fileName, 'a')
            people.write(combined)
            people.close()
            print("Created file for " + name)
|
Vectorizing for loop with repeated indices in python
Question: I am trying to optimize a snippet that gets called a lot (millions of times)
so any type of speed improvement (hopefully removing the for-loop) would be
great.
I am computing a correlation function of some j'th particle with all others,

    C_j(|r-r'|) = sqrt( E[ (s_j(r') - s_k(r))^2 ] )

averaged over k.
My idea is to have a variable corrfun which bins data into some bins (the r,
defined elsewhere). I find what bin of r each s_k belongs to and this is
stored in ind. So ind[0] is the index of r (and thus the corrfun) for which
the j=0 point corresponds to. Multiple points can fall into the same bin (in
fact I want bins to be big enough to contain multiple points) so I sum
together all of the (s_j(r')-s_k(r))^2 and then divide by number of points in
that bin (stored in variable rw). The code I ended up making for this is the
following (np is for numpy):
    for k, v in enumerate(ind):
        if j == k:
            continue
        corrfun[v] += (s[k]-s[j])**2
        rw[v] += 1

    rw2 = rw
    rw2[rw < 1] = 1
    corrfun = np.sqrt(np.divide(corrfun, rw2))
Note, the rw2 business was because I want to avoid divide by 0 problems but I
do return the rw array and I want to be able to differentiate between the rw=0
and rw=1 elements. Perhaps there is a more elegant solution for this as well.
Is there a way to make the for-loop faster? While I would like to not add the
self interaction (j==k) I am even ok with having self interaction if it means
I can get significantly faster calculation (length of ind ~ 1E6 so self
interaction is probably insignificant anyways).
Thank you!
Ilya
Edit:
Here is the full code. Note, in the full code I am averaging over j as well.
    import numpy as np

    def twopointcorr(x, y, s, dr):
        width = np.max(x) - np.min(x)
        height = np.max(y) - np.min(y)
        n = len(x)
        maxR = np.sqrt((width/2)**2 + (height/2)**2)

        r = np.arange(0, maxR, dr)
        print(r)
        corrfun = r*0
        rw = r*0
        print(maxR)

        ''' go through all points'''
        for j in range(0, n-1):
            hypot = np.sqrt((x[j]-x)**2 + (y[j]-y)**2)
            ind = [np.abs(r-h).argmin() for h in hypot]
            for k, v in enumerate(ind):
                if j == k:
                    continue
                corrfun[v] += (s[k]-s[j])**2
                rw[v] += 1

        rw2 = rw
        rw2[rw < 1] = 1
        corrfun = np.sqrt(np.divide(corrfun, rw2))
        return r, corrfun, rw
I debug-test it the following way:

    from twopointcorr import twopointcorr
    import numpy as np
    import matplotlib.pyplot as plt
    import time

    n = 1000
    x = np.random.rand(n)
    y = np.random.rand(n)
    s = np.random.rand(n)

    print('running two point corr function')
    start_time = time.time()
    r, corrfun, rw = twopointcorr(x, y, s, 0.1)
    print("--- Execution time is %s seconds ---" % (time.time() - start_time))

    fig1 = plt.figure()
    plt.plot(r, corrfun, '-x')
    fig2 = plt.figure()
    plt.plot(r, rw, '-x')
    plt.show()
Again, the main issue is that in the real dataset n~1E6. I can resample to
make it smaller, of course, but I would love to actually crank through the
dataset.
Answer: Your original code on my system runs in about 5.7 seconds. I fully vectorized
the inner loop and got it to run in 0.39 seconds. Simply replace your "go
through all points" loop with this:
    points = np.column_stack((x,y))
    hypots = scipy.spatial.distance.cdist(points, points)
    inds = np.rint(hypots.clip(max=maxR) / dr).astype(np.int)

    # go through all points
    for j in range(n):  # n.b. previously n-1, not sure why
        ind = inds[j]
        np.add.at(corrfun, ind, (s - s[j])**2)
        np.add.at(rw, ind, 1)
        rw[ind[j]] -= 1  # subtract self
The first observation was that your `hypot` code was computing 2D distances,
so I replaced that with `cdist` from SciPy to do it all in a single call. The
second was that the inner `for` loop was slow, and thanks to an insightful
comment from @hpaulj I vectorized that as well using `np.add.at()`.
* * *
Since you asked how to vectorize the inner loop as well, I did that later. It
now takes 0.25 seconds to run, for a total speedup of over 20x. Here's the
final code:
    points = np.column_stack((x,y))
    hypots = scipy.spatial.distance.cdist(points, points)
    inds = np.rint(hypots.clip(max=maxR) / dr).astype(np.int)

    sn = np.tile(s, (n,1))  # n copies of s
    diffs = (sn - sn.T)**2  # squares of pairwise differences
    np.add.at(corrfun, inds, diffs)
    rw = np.bincount(inds.flatten(), minlength=len(r))
    np.subtract.at(rw, inds.diagonal(), 1)  # subtract self
This uses more memory but does produce a substantial speedup vs. the single-
loop version above.
|
Convert csv to json with python
Question: Everyone, I tried to convert my CSV to JSON using the code below:

    import csv
    import json

    f = open('D:\\ResumesClassification\\test2.csv', 'r')
    fieldnames = ("id","basicinformation","workexperience","education","skill","publication","additionalinformation","link","award","certification")
    reader = csv.DictReader(f, fieldnames)
    out = json.dumps([row for row in reader])
    fo = open('D:\\ResumesClassification\\test2.json','w')
    fo.write(out)
    fo.close()
    print out
When I then ran mongoimport, it said `json decoder out of sync - data changing
underfoot`. This is what my .json looks like:
[{"publication": "Structural basis for modulation of a G-protein-coupled receptor by allosteric drugs Nature,http://www.nature.com/nature/journal/v503/n7475/full/nature12595.htmlOctober 13, 2013The design of G-protein-coupled receptor (GPCR) allosteric modulators, an active area of modern pharmaceutical research, has proved challenging because neither the binding modes nor the molecular mechanisms of such drugs are known1, 2. Here we determine binding sites, bound conformations and specific drugreceptor interactions for several allosteric modulators of the M2 muscarinic acetylcholine receptor (M2 receptor), a prototypical family A GPCR, using atomic-level simulations in which the modulators spontaneously associate with the receptor. Despite substantial structural diversity, all modulators form cation interactions with clusters of aromatic residues in the receptor extracellular vestibule, approximately 15 from the classical, orthosteric ligand-binding site. We validate the observed modulator binding modes through radioligand binding experiments on receptor mutants designed, on the basis of our simulations, either to increase or to decrease modulator affinity. Simulations also revealed mechanisms that contribute to positive and negative allosteric modulation of classical ligand binding, including coupled conformational changes of the two binding sites and electrostatic interactions between ligands in these sites. These observations enabled the design of chemical modifications that substantially alter a modulators allosteric effects. Our findings thus provide a structural basis for the rational design of allosteric modulators targeting muscarinic and possibly other GPCRs.High-resolution crystal structure of human protease-activated receptor 1http://www.nature.com/nature/journal/v492/n7429/full/nature11701.htmlDecember 9, 2012Protease-activated receptor 1 (PAR1) is the prototypical member of a family of G-protein-coupled receptors that mediate cellular responses to thrombin and related proteases. Thrombin irreversibly activates PAR1 by cleaving the amino-terminal exodomain of the receptor, which exposes a tethered peptide ligand that binds the heptahelical bundle of the receptor to affect G-protein activation. Here we report the 2.2--resolution crystal structure of human PAR1 bound to vorapaxar, a PAR1 antagonist. The structure reveals an unusual mode of drug binding that explains how a small molecule binds virtually irreversibly to inhibit receptor activation by the tethered ligand of PAR1. In contrast to deep, solvent-exposed binding pockets observed in other peptide-activated G-protein-coupled receptors, the vorapaxar-binding pocket is superficial but has little surface exposed to the aqueous solvent. Protease-activated receptors are important targets for drug development. The structure reported here will aid the development of improved PAR1 antagonists and the discovery of antagonists to other members of this receptor family.Structure and dynamics of the M3 muscarinic acetylcholine receptorhttp://www.nature.com/nature/journal/v482/n7386/full/nature10867.htmlFebruary 22, 2012Acetylcholine, the first neurotransmitter to be identified, exerts many of its physiological actions via activation of a family of G-protein-coupled receptors (GPCRs) known as muscarinic acetylcholine receptors (mAChRs). Although the five mAChR subtypes (M1-M5) share a high degree of sequence homology, they show pronounced differences in G-protein coupling preference and the physiological responses they mediate. 
Unfortunately, despite decades of effort, no therapeutic agents endowed with clear mAChR subtype selectivity have been developed to exploit these differences. We describe here the structure of the G(q/11)-coupled M3 mAChR ('M3 receptor', from rat) bound to the bronchodilator drug tiotropium and identify the binding mode for this clinically important drug. This structure, together with that of the G(i/o)-coupled M2 receptor, offers possibilities for the design of mAChR subtype-selective ligands. Importantly, the M3 receptor structure allows a structural comparison between two members of a mammalian GPCR subfamily displaying different G-protein coupling selectivities. Furthermore, molecular dynamics simulations suggest that tiotropium binds transiently to an allosteric site en route to the binding pocket of both receptors. These simulations offer a structural view of an allosteric binding mode for an orthosteric GPCR ligand and provide additional opportunities for the design of ligands with different affinities or binding kinetics for different mAChR subtypes. Our findings not only offer insights into the structure and function of one of the most important GPCR families, but may also facilitate the design of improved therapeutics targeting these critical receptors.", "certification": "", "basicinformation": "Hillary GreenData Scientist - KnewtonNew York, NY-Authorized to work in the US for any employer", "award": "", "link": "", "workexperience": "Data ScientistKnewton - New York, NYNovember 2014 to PresentData Scientist - Embedded in Direct to Institution Team Use Big Data to advise the Direct to Institution team on how students and teachers are using the product, how to improve analytics for teachers and students, and how changes to the product might affect current users Create informative, beautiful, and dynamic data visualizations that convey key information about large data sets (javascript, d3, matplotlib) Create tools for processing, labeling, and understanding large, messy datasets (python, pandas, numpy) Write blog posts that highlight data insights to a general audience (https://www.knewton.com/blog/adaptive-learning/how-instructors-use-adaptive-assignments-in-the-classroom/https://www.knewton.com/resources/blog/adaptive-learning/visualizing-personalized-learning/https://www.knewton.com/resources/blog/adaptive-learning/friday-effect-students-really-worse/https://www.knewton.com/resources/blog/adaptive-learning/school-holidays-affect-student-scores/) Leadership: occasionally lead team stand-ups and sprint planning; advise other data scientists on best practices; review data science codeData Scientist - Efficacy Research Lead internal efficacy research efforts, including designing observational studies, performing complex data analysis, writing technical papers, and presenting results to both internal and external audiences Advise partner companies on efficacy research strategies, including study design and methodology and data analysis strategies Use Big Data to demonstrate the impact of adaptive learning technology Report research findings to internal and external audiences via papers, presentations, and visualizations(https://www.knewton.com/resources/blog/adaptive-learning/adaptive-advantage-reducing-performance-gap/)Scientific AssociateD.E. 
Shaw Research - New York, NYMay 2011 to November 2014Designed, performed and analyzed molecular dynamics simulations on the Anton supercomputer resulting in three publications in Nature Built tools to automate experimental workflows and report progress by email and text message using Python and shell scripting Collaborated with software engineering team to debug and suggest improvements for company-wide job-scheduler Analyzed experimental data using MATLAB and Python, including clustering (hierarchical and k-means), principal component analysis, and data visualization Studied allosteric modulation of GPCRs by drug-like molecules resulting in a completely in silico designed allosteric modulator of M2 muscarinic actylcholine receptor with novel properties Interpreted radio-ligand binding and electrophysiology data from collaborators' experiments Prepared research for publication in major journals (including manuscript writing and figure creation) Currently working on developing small-molecule inhibitors of voltage-gated potassium ion channels", "addtionalinformation": "Authored an article about life as a female scientist.http://lilith.org/blog/2014/05/for-all-you-aspiring-female-scientists-out-there/Authored a blog post on visualizing data from personalized learning (using Adobe Illustrator, iMovie, and d3 to create animations):https://www.knewton.com/resources/blog/adaptive-learning/visualizing-personalized-learning/Authored a series of blog posts on student performance at different times based on analysis of Big Data:https://www.knewton.com/resources/blog/adaptive-learning/friday-effect-students-really-worse/https://www.knewton.com/resources/blog/adaptive-learning/school-holidays-affect-student-scores/Authored a blog post on how procrastination affects student gradeshttp://www.knewton.com/blog/adaptive-learning/the-early-bird-gets-the-grade-how-procrastination-affects-student-scores/Authored a blog post about how instructors use adaptive assignments in the classroomhttp://www.knewton.com/blog/adaptive-learning/how-instructors-use-adaptive-assignments-in-the-classroom/Spoke about efficacy research at NYC Python Meetuphttp://www.meetup.com/nycpython/events/220735605/Spoke about efficacy research at PyGothamhttps://pygotham.org/2015/speakers/profile/359/", "skill": "Expert user in Python (Pandas, NumPy, SciPy, Matplotlib) (5 years), Expert user of SQL (PostgreSQL/MySQL/RedShift) (2 years), Proficient in bash shell scripting (7 years), Proficient in JavaScript (jQuery, d3) (1 year), Proficient in Unix/Linux & Windows environments (7 years), Familiar with High-Performance Computing (4 years), Expert user of Adobe Illustrator (5 years), Expert user in Maestro, VMD, PyMol (4 years), Proficient in MOE, InstantJChem (2 years), Expert user of Charmm/CGenFF force fields (4 years), Molecular Dynamics (including some Enhanced Sampling techniques, FEP) (7 years)", "education": "B.S. 
in Theoretical and Computational MaterialsUniversity of California, Berkeley - Berkeley, CAUniversity of California, Berkeley2006 to 2010", "id": "1"}, {"publication": "", "certification": "", "basicinformation": "Joseph DaoudNew York, NY-", "award": "", "link": "", "workexperience": "Data Scientist & Quantitative developerSocit Gnrale CIB - New York, NYSeptember 2013 to PresentNew York, USA 2013 - NowDesk Quantitative Developer & Data Scientist - Securitized Products, Exotic Credit Derivatives & Interest Rates Derivatives Designed, implemented and supported several trading pricing, monitoring and reporting tools:o System design and data architecture for financial data repository for data sets spanning multiple asset classes and geographies(data collection, cleaning and analytics)o Quantitative strategy: Negative Basis Trading (generated P&L of $3M in 2015): pattern recognition, opportunity triggero Monitoring: Non-Agency MBS products financing platform (Repo, TRS on Loans, Credit Facility on ABS, CMBS, RMBS, CLO, Loans) with limitsindicators and dynamic haircut computations CMBS Primary (CRE Loans, Hedge with CMBX & IR Swaps)o Big data: Data analysis and Machine Learning of Big Data (Fannie Mae & Freddie Mac MBS): Study of loan data analytics Statistical framework to analyze time series (generated P&L of $10M in 2015): detection of patterns, find relationships, gaininsights / 200K+ time series of 15 years / multi-asset (Rates, Credit, FX & Indices)o Pricing: CRE Loans during warehousing period for CMBS primary market issuance Implied Spread of Markit ABX & CMBX indiceso Market Marking: Aggregation of BWICs, price talk and enrichment of bids with market data Deal Analyzero Risk & PnL system: Structured Products & Exotic Credit Derivativeso Contribution: Prices and other market data in several internal systemso Reports: Aging report, IPV report, Management risk reports, Technologies: .NET C# (+ WPF), Python, SQL. R, SAS, VBA, C++, Spark, Cassandra Methodologies: Agile Development, Continuous Delivery, Git, Jira, Build FactoryQuantitative DeveloperBanque de France - Paris (75)2012 to 2012on High Frequency Trading - Financial Economics Research Build and implemented a new, state of the art, low latency and high frequency trading simulation platform Analyzed the impact of the market making trading and the liquidity of the market Evaluated the consequences on the market functioning and dynamismFinancial Software Engineer - Global Equity DerivativesSocit Gnrale Corporate & Investment Banking - New York, NY2010 to 2011 Designed, implemented, optimized, and maintained several applications including:o Front-end database applicationo Real-time proprietary trade-reporting charts applicationo File-based scheduler for report processing to external regulators (FINRA, SEC)o Real-time and multi-threading application feeding equities database referenceo Seeder process which aims to retrieve data from an immense reference database Wrote technical and business requirements and technical specificationsGRTGaz (via Accenture) - Paris, FranceAlgorithmic Engineer - Customer Information System upgrade (50M project) Designed and developed data processing algorithm Generated test scenarios, test cases and test data. 
Executed tests, created problem reports Conducted various management activities by analyzing and verifying test results, providing status reports Worked with business analysts and developers to resolve issues", "addtionalinformation": "Programming SkillsProgramming C#, Java, C++, VBA, Python, Shell ScriptingWeb HTML, JavaScript, AngularJSDatabases SQL (Microsoft SQL Server, MySQL), NoSQL (MongoDB, Cassandra)Mathematics Matlab, R, SASBig Data Spark, Hadoop", "skill": "Microsoft office, Python, C#, spark, machine learning, Hadoop, SQL, Sql Server, VBA", "education": "MSc. in Applied Mathematics & Quantitative FinanceUniversity of Paris 1 - Paris (75)University of Paris 12011 to 2012MSc. in Computer ScienceENSISA - FranceENSISA2007 to 2010", "id": "2"}, {"publication": "", "certification": "", "basicinformation": "Jason SypniewskiData Scientist - MetisClifton, NJ-Highly motivated Data Scientist with a Bachelors Degree in Computer Science and a Masters Degree in Information Systems. Versatile and reliable professional with a prior background in the government and defense sector. Proven leader with experience managing cross-functional teams in high paced environments. Creative thinker driven by data to solve real-world problems. Polished communicator with ability to effectively convey results to diverse audiences.Authorized to work in the US for any employer", "award": "Commander's Award for Civilian ServiceNovember 2015award description not metioned", "link": "http://jasonsyp.github.iohttps://www.linkedin.com/in/jasonsypniewski", "workexperience": "Data ScientistMetis - New York, NY2016 to PresentMetis is an immersive program focused on teaching end-to-end design, implementation and communication of data science projects. A Metis education covers topics in programming, statistics, data acquisition, machine learning, data visualization, relational and non-relational databases, natural language processing, and iterative design.Analyzed MTA turnstile data to recommend optimal locations and times for a non-profit org to deploy street team members.Built and optimized linear regression models to predict success of sports genre movies in terms of revenues and specific sport featured. Scraped and cleaned data across multiple sources for relevant movie data.Analysis of various supervised classification models for diagnosing heart disease using data from the Cleveland Clinic, Hungarian Institute of Cardiology, Swiss University Hospitals, and Long Beach V.A. 
Medical Center.Utilized unsupervised machine learning and natural language processing to perform topic modeling on Twitter data regarding sentiment towards the European Union.Supervisory Computer ScientistDepartment of the Army RDECOM CERDEC C4ISR Ground Activity - Lakehurst, NJ2008 to 2015Managed team of 25-30 engineers and scientists.Responsible for all project management tasks, reporting directly to the Deputy Director of the organization.Led requirements engineering and analysis according to specific Army C4ISR research and development (R&D) requirements.Led the design, execution and analysis of complex system of systems experiments including multi-tiered RF and satellite communications, intelligence, surveillance and reconnaissance (ISR), information technology (IT), TCP/IP wireless telecommunications, software integration, and mission command applications.Tracked project milestones according to cost, schedule, and performance metrics.Maintained authority for resolving technical issues, conducting analysis of alternatives and making engineering compromises where necessary.Executed all personnel decisions within the branch, including hiring, onboarding, disciplinary actions, training, and performance appraisals.Corresponded with senior management through written and oral communications.Developed briefing materials and presented to internal and external stakeholders across Department of Defense and industry, including briefing senior Army officials, military and civilian.Performed contract management and technical oversight on multmillion dollar support services contracts.Computer ScientistDepartment of the Army RDECOM CERDEC C2D - Fort Monmouth, NJ2001 to 2008Served as the Lead Systems Engineer responsible for the design, integration and testing of system of systems C4ISR architectures.Designed and executed experiments evaluating performance of Army computer systems, mission command applications, ground and airborne platforms, sensors, and tactical wireless radio systems.Executed tasks across the systems engineering lifecycle including software installation, configuration and maintenance, database development, network configuration and monitoring, training, digital terrain generation and mapping, creating and editing application scripts, and test plan development and execution.Served as organization's subject matter expert on Army mission command applications and information systems, writing documentation and giving presentations to internal and external stakeholders.", "addtionalinformation": "SKILLSPROGRAMMING LANGUAGES: Python, JavascriptMACHINE LEARNING: Supervised Learning, Unsupervised Learning, Linear Regression, Classification, Clustering, Natural Language ProcessingSTATISTICAL PACKAGES: scikitlearn, statsmodelsDATA ACQUISITION, STORAGE AND MANAGEMENT: PostgresSQL, Amazon Web ServicesWEB DESIGN: HTML, CSSDATA VISUALIZATION: D3.js, matplotlib, seaborn, CartoDBPROJECT MANAGEMENT: Requirements Analysis, Financial Analysis and Budgeting, Contract Management, Systems Engineering, Integration, Testing, Technical Writing,Oral Communications, Workforce Development, Scheduling", "skill": "skill not metioned", "education": "M.S. in Information SystemsNew Jersey Institute of TechnologyNew Jersey Institute of Technology2004B.S. 
in Computer ScienceNew Jersey Institute of TechnologyNew Jersey Institute of Technology2001", "id": "3"}, {"publication": "", "certification": "", "basicinformation": "Cong WuManager - Big Data Scientist - American Express-", "award": "", "link": "", "workexperience": "Manager - Big Data ScientistAmerican Express - New York, NYMarch 2015 to PresentPartner closely with Business Units to develop Big Data Use Cases to drive growth- Identify business owners who own multiple businesses and added in EPIN features to database of prospectbusinesses, help to target potential high-spend customer.- Build up a small business ecosystem. SBE(small business ecosystem) connects different business together usingtheir addresses, business owners and hierarchy information- Partnered with marketing team, build ATUL (acquisition targeting utilization link), an easy to use tool that canfacilitate marketing team to get necessary information from big data database of prospect businesses Create Frameworks and automation tools to ensure focused approach and disciplined governance on corecapabilities investments around prospects and customers with a Big Data POA- On big data platform, built up connected component, network builder and network visualization, searchingcapability for prospect small businesses.- Partner with analytic team, build geo database capability POA, geo database can enable analytic and all other teams in Amex- Integrate analytic tools (Datameer, RevR ) into enterprise-wise centered big data platformData ScientistSocure - New York, NYAugust 2014 to December 2014 Building machine learning models of fraud predictions. Performing ad hoc analytics using R, Python, SQL, MongoDB query and Unix utilities Maintaining proprietary machine learning R library Developing OFAC fuzzy match algorithm for online identity verification system.", "addtionalinformation": "SkillsProgramming: Python, R, Java, C++, C, Hive, MATLAB, SQL, d3.js, scala", "skill": "skill not metioned", "education": "Master of Arts in StatisticsColumbia University - New York, NYColumbia UniversitySeptember 2013 to December 2014Bachelor of Engineering in Software EngineeringSun Yat-sen University - Guangzhou, CNSun Yat-sen UniversitySeptember 2009 to June 2013", "id": "4"}]
Can anyone help? Thank you.
Answer: I wonder why you are using DictReader. Instead you can do this:
1. read the csv file line by line
2. split each line by tab or comma
3. create a json object and add each element from the split list to it
4. write the json object to your json file
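A minimal sketch of these steps, assuming a tab-separated file and hypothetical file names (`resumes.csv`, `resumes.json`):
    import csv
    import json
    rows = []
    with open('resumes.csv') as f:              # hypothetical input file
        reader = csv.reader(f, delimiter='\t')
        header = next(reader)                   # first line holds the field names
        for line in reader:
            # one json object per row, keyed by the header fields
            rows.append(dict(zip(header, line)))
    with open('resumes.json', 'w') as f:        # hypothetical output file
        json.dump(rows, f)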
|
Django/Python timestamp convert EDT to UTC
Question: I have a Python datetime object, and here are its offset and tzname(). Since Django stores the timezone as UTC, I want to store the tzname separately, so that I can use that field to reconvert to the actual datetime.
>>>from dateutil.parser import parse
>>>dt = parse('Tue Apr 26 2016 08:32:00 GMT-0400 (EDT)')
>>> dt
datetime.datetime(2016, 4, 26, 8, 32, tzinfo=tzoffset('EDT', 14400))
>>> dt.tzinfo
tzoffset('EDT', 14400)
>>> dt.tzname()
'EDT'
Question:-
When I store "dt" object in Django it converts it to UTC format.How do I
reconvert the UTC format to EDT format?
I'm using this link as reference but I'm not sure how to create the to_zone
object for 'EDT'. 'UTC' works fine but tz.gettz('EDT') is always None.
>>> to_zone=tz.gettz('EDT')
>>> to_zone
>>> to_zone=tz.gettz('UTC')
>>> to_zone
tzfile('/usr/share/zoneinfo/UTC')
[Python - Convert UTC datetime string to local
datetime](http://stackoverflow.com/questions/4770297/python-convert-utc-
datetime-string-to-local-datetime)
Answer: The datetime package has a method "astimezone":
[datetime.astimezone](https://docs.python.org/2/library/datetime.html#datetime.datetime.astimezone)
Note that `tz.gettz('EDT')` returns `None` (as you observed) because 'EDT' is an abbreviation, not a zone name; use the IANA name of the zone instead:
    tzone = tz.gettz('America/New_York')
    newdate = olddate.astimezone(tzone)
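For completeness, a small round-trip sketch (assuming `dateutil` is installed):
    from dateutil import tz
    from dateutil.parser import parse
    dt = parse('Tue Apr 26 2016 08:32:00 GMT-0400 (EDT)')
    utc_dt = dt.astimezone(tz.gettz('UTC'))   # what Django stores
    eastern = tz.gettz('America/New_York')    # the zone behind 'EDT'
    local_dt = utc_dt.astimezone(eastern)     # back to Eastern time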
|
Python Rate Limit class based view Flask
Question: I'm following this example:
<http://flask-limiter.readthedocs.org/en/stable/#ratelimit-string>
app = Flask(__name__)
limiter = Limiter(app, key_func=get_remote_address)
class MyView(flask.views.MethodView):
decorators = [limiter.limit("10/second")]
def get(self):
return "get"
def put(self):
return "put"
My problem is that in the example the application, the limiter and the classes are defined in the same file; in my case the application and limiter are defined in the same file, but my classes live in a separate file.
If I import either limiter or app, my Flask app doesn't start because of circular
[dependencies](http://pastebin.com/raw/2yArEjyZ). How can I fix this, and what is the recommended way? I want to apply the limiter to specific endpoints. I tried `from flask import current_app` in order to initialize the limiter, but it was not accepted as a valid parameter. Any recommendations?
**File information:**
* app.py
* api_main.py
Under app.py I have defined my resources:
api_app = Flask(__name__) # Flask Application
api_app.config.from_pyfile("../../../conf/settings.py") # Flask configuration
imbue_api = restful.Api(api_app) # Define API
limiter = Limiter(api_app, key_func=get_remote_address, global_limits=["10 per second"])
imbue_api.add_resource(ApiBase, settings.BASE_API_URL)
In api_main.py I have defined all my classes:
class ApiBase(Resource):
@authenticator.requires_auth
def get(self):
"""
:return:
"""
try:
# =========================================================
# GET API
# =========================================================
log.info(request.remote_addr + ' ' + request.__repr__())
if request.headers['Content-Type'] == 'application/json':
# =========================================================
# Send API version information
# =========================================================
log.info('api() | GET | Version' + settings.api_version)
response = json.dumps('version: ' + settings.api_version)
resp = Response(response, status=200, mimetype='application/json')
return resp
except KeyError:
response = json.dumps('Invalid type headers. Use application/json')
resp = Response(response, status=415, mimetype='application/json')
return resp
except Exception, exception:
log.exception(exception.__repr__())
response = json.dumps('Internal Server Error')
resp = Response(response, status=500, mimetype='application/json')
return resp
Answer: Use the `Resource.method_decorators`:
[https://github.com/flask-restful/flask-restful/blob/master/flask_restful/__init__.py#L574](https://github.com/flask-restful/flask-restful/blob/master/flask_restful/__init__.py#L574)
It is applied for each request. You can override it in your view class:
@property
def method_decorators(self):
# get some limiter bound to the `g` context
# maybe you prefer to get it from `current_app`
return g.limiter
If you prefer, you can append the limit decorator to the existing `method_decorators`
before adding the resource to your restful API:
    ApiBase.method_decorators.append(limiter.limit("10/second"))
imbue_api.add_resource(ApiBase, settings.BASE_API_URL)
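Another way to break the circular import is to give the limiter its own module and bind it to the app later (a sketch, assuming flask-limiter's deferred `init_app` initialization):
    # limits.py -- hypothetical module holding only the limiter
    from flask_limiter import Limiter
    from flask_limiter.util import get_remote_address
    limiter = Limiter(key_func=get_remote_address)
    # app.py -- bind the limiter once the app exists:
    #   from limits import limiter
    #   limiter.init_app(api_app)
`api_main.py` can then safely do `from limits import limiter` without importing `app.py`.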
|
Xpath python is not working
Question: Hi, I am trying to get data from a website through Python and XPath, but it just gives me blank data. I copied the XPath from Chrome. Please let me know what I am doing wrong here. Thank you.
from lxml import html,etree
import requests
import urllib2
def webText(url):
import urllib2
response = urllib2.urlopen(url)
html = response.read()
return html
x=webText("http://www.sportscardforum.com/ttm.php?s=3161e010cc6e6fd80ddb2e6b18ab2c5d&do=listp&pl=13450&sp=4");
f = open("foo.html", "w");
f.write(x)
f.close()
R=open("foo.html").read().strip()
tree =etree.HTML(R)
x = tree.xpath('//*[@id="vbulletin_html"]/body/div[2]/table/tbody/tr/td[3]/table[2]/tbody/tr[1]/td/table[2]/tbody/tr[2]/td[2]/table/tbody/tr/td[1]')
print x
Answer: You can use following xpath:
//b[contains(text(),'Address:')]/parent::td[1]/following-sibling::td[1]
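Applied to your tree, it could look like this (a sketch; which cell you want depends on the page's markup):
    cells = tree.xpath("//b[contains(text(),'Address:')]/parent::td[1]/following-sibling::td[1]")
    for cell in cells:
        print(cell.text_content().strip())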
|
Check if element returned by xpath is correct
Question: I am making my first Python project. I am trying to scrape a web page like
this:
page = requests.get('http://www.mypage.com')
tree = html.fromstring(page.content)
table = tree.xpath('//table[@class="list"]')
However, I'm not sure if the table returned is correct.
Is there a way of checking the `html` content from the table?
If I try doing this:
print str(table)
I get this output, which is not very useful:
[<Element table at 0x10b20b6d8>]
Answer: You can use `tostring()` to print the raw HTML of the element:
    from lxml import html
    # ... the requests / xpath code from the question ...
    print(html.tostring(table[0]))
|
Can't launch selenium tests inside container, WebDriverException:Chrome failed to start: exited abnormally
Question: My problem is - I can't launch selenium tests inside container.
my docker file looks like:
FROM selenium/node-chrome
EXPOSE 9090
USER root
RUN mkdir /code
WORKDIR /code
ADD requirements_tests.txt /code/
RUN apt-get update
RUN apt-get install -y python python-dev python-distribute python-pip
RUN pip install -r requirements_tests.txt
ADD /selenium_tests HTMLTestRunner.py launch_selenium_tests.py chromedriver /code/
`/selenium_tests` contains all my tests; `launch_selenium_tests.py` is my launcher for the tests.
    import os
    import time
    import unittest
    from pyvirtualdisplay import Display
    from selenium import webdriver
class SeleniumTestCase(unittest.TestCase):
def __init__(self, *args, **kwargs):
"""
todo add validation for arguments
:param args:
:param kwargs:
"""
super(SeleniumTestCase, self).__init__(args[0])
self.base_url = args[1]
def setUp(self):
chromedriver = "./chromedriver"
os.environ["webdriver.chrome.driver"] = chromedriver
self.driver = webdriver.Chrome(executable_path='./chromedriver')
self.display = Display(visible=0, size=(800, 800))
self.display.start()
This is my `test_case` file.
So, when I start the docker container with -it /bin/bash (interactive mode with a terminal) and launch the tests, I get this error msg:
WebDriverException: Message: unknown error: Chrome failed to start: exited abnormally
(Driver info: chromedriver=2.20.353124 (035346203162d32c80f1dce587c8154a1efa0c3b),platform=Linux 4.2.0-35-generic x86_64)
I already tried switching the selenium container and rewriting some lines of code, but nothing worked for me.
Any idea how I can fix this?
Answer: I would suggest two things:
Quick - start the `display` before starting the driver, so the code becomes:
def setUp(self):
chromedriver = "./chromedriver"
os.environ["webdriver.chrome.driver"] = chromedriver
self.display = Display(visible=0, size=(800, 800))
self.display.start()
self.driver = webdriver.Chrome(executable_path='./chromedriver')
Second I would strongly suggest to use the `service_log_path` and
`service_args` arguments to the selenium webdriver to see output from the
chromedriver:
service_log_path = "{}/chromedriver.log".format(outputdir)
service_args = ['--verbose']
driver = webdriver.Chrome('/path/to/chromedriver',
service_args=service_args,
service_log_path=service_log_path)
This may provide missing info why the driver failed to start
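If the verbose log still shows Chrome exiting abnormally, a common workaround when Chrome runs as root inside Docker (your Dockerfile switches to root, so this is a plausible but unconfirmed cause) is to disable the sandbox:
    # sketch: --no-sandbox is often required for Chrome running as root in Docker
    options = webdriver.ChromeOptions()
    options.add_argument('--no-sandbox')
    self.driver = webdriver.Chrome(executable_path='./chromedriver',
                                   chrome_options=options)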
|
Can not get table header elements
Question: In Python, I have a variable containing an `html` table element obtained like
this:
page = requests.get('http://www.myPage.com')
tree = html.fromstring(page.content)
table = tree.xpath('//table[@class="list"]')
The `table` variable has this content:
<table class="list">
<tr>
<th>Date(s)</th>
<th>Sport</th>
<th>Event</th>
<th>Location</th>
</tr>
<tr>
<td>Jan 18-31</td>
<td>Tennis</td>
<td><a href="tennis-grand-slam/australian-open/index.htm">Australia Open</a></td>
<td>Melbourne, Australia</td>
</tr>
</table>
I am trying to extract the headers like this:
rows = iter(table)
headers = [col.text for col in next(rows)]
print "headers are: ", headers
However, when I print the `headers` variable I get this:
headers are: ['\n ', '\n ', '\n ', '\n ', '\n ', '\n ', '\n ', '\n ', '\n ', '\n ', '\n ', '\n
', '\n ', '\n ', '\n ', '\n ', '\n ', '\n ', '\n ', '\n ', '\n ', '\n ', '\n ', '\n
', '\n ', '\n ', '\n ', '\n ', '\n ', '\n ', '\n ', '\n ', '\n ', '\n ', '\n ', '\n ', '\n
', '\n ', '\n ']
How can I extract the headers properly?
Answer: **Try this:**
from lxml import html
HTML_CODE = """<table class="list">
<tr>
<th>Date(s)</th>
<th>Sport</th>
<th>Event</th>
<th>Location</th>
</tr>
<tr>
<td>Jan 18-31</td>
<td>Tennis</td>
<td><a href="tennis-grand-slam/australian-open/index.htm">Australia Open</a></td>
<td>Melbourne, Australia</td>
</tr>
</table>"""
tree = html.fromstring(HTML_CODE)
headers = tree.xpath('//table[@class="list"]/tr/th/text()')
print "Headers are: {}".format(', '.join(headers))
**Output:**
Headers are: Date(s), Sport, Event, Location
|
Call predict function for nearest neighbor (knn) classifier with Python scikit sklearn
Question: I've tried to call the predict function of nearest neighbors and got the following error:
AttributeError: 'NearestNeighbors' object has no attribute 'predict'
The code is:
from sklearn.neighbors import NearestNeighbors
samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
neigh = NearestNeighbors()
neigh.fit(samples)
neigh.predict([[1., 1., 1.]]) # this cause error
I've read the documentation and it has a predict function: <http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html>
How to do the predict?
Answer: You are confusing the `NearestNeighbors` class with the `KNeighborsClassifier` class. Only the second one has the `predict` function.
Note the example from the [link](http://scikit-
learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html)
you posted:
X = [[0], [1], [2], [3]]
y = [0, 0, 1, 1]
from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(X, y)
print(neigh.predict([[1.1]]))
print(neigh.predict_proba([[0.9]]))
The `NearestNeighbors` class is unsupervised and can not be used for
classification but only for nearest neighbour searches.
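If you do want to stay with `NearestNeighbors`, it exposes `kneighbors()` for such queries instead of `predict()`:
    distances, indices = neigh.kneighbors([[1., 1., 1.]])
    print(indices)    # positions of the closest samples in `samples`
    print(distances)  # distances to those samples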
|
Egg and basket game in python
Question: I am a beginner in python programming. I got a course project to make a game
using python. I tried making the egg and basket game using pygame. The game is
working to some extent. I am able to make the egg fall and move the basket on
key press, but I am not able to make egg fall continuously i.e. just one egg
falls and after that it stops falling. I am expecting eggs to fall one after
other like in the actual game.
And I have no idea how to know when the egg falls in the basket and how to
increase the score when it falls in the basket.
Can you please help me out??
[Screenshot of my game](http://i.stack.imgur.com/XyPYJ.png)
#The egg and basket game
import pygame
from pygame.locals import *
import time
import random
clock = pygame.time.Clock()
x=260
y=500
#Screen initialize
pygame.init()
pygame.font.init()
screen=pygame.display.set_mode((600,600))
pygame.display.set_caption("egg")
#Background
cloud=pygame.image.load("clouds.jpg")
cloud=pygame.transform.scale(cloud,(600,600))
screen.blit(cloud,(0,0))
#Basket
basket=pygame.image.load("basket.jpg")
basket=pygame.transform.scale(basket,(80,80))
screen.blit(basket,(x,y))
pygame.display.update()
#egg
egg=pygame.image.load("egg.jpg")
egg=pygame.transform.scale(egg,(20,20))
#screen.blit(egg,(290,20))
pygame.display.update()
#Movement of basket
ychange=0
xchange=0
exiting=False
for yegg in range(20,550):
#for i in range(0,100):
xegg=random.randrange(50,550)
while not exiting:
#xegg=random.randrange(50,550)
#for yegg in range(20,550):
if yegg<550:
ychange+=1
pygame.display.update()
clock.tick(60)
screen.blit(egg,(xegg,ychange))
else:
yegg=20
yegg=yegg+ychange
pygame.display.update()
clock.tick(60)
screen.blit(egg,(xegg,yegg))
#yegg=20
pygame.display.update()
clock.tick(60)
#yegg=20
for event in pygame.event.get():
print(event)
if(event.type==pygame.QUIT):
exiting=True
pygame.quit()
quit()
if(event.type==pygame.KEYDOWN):
if(event.key==pygame.K_LEFT):
xchange=-5
if(event.key==pygame.K_RIGHT):
xchange=5
screen.blit(basket,(x,y))
if(event.type==pygame.KEYUP):
if(event.key==pygame.K_LEFT or event.key==pygame.K_RIGHT):
xchange=0
x=x+xchange
print(x)
screen.blit(cloud,(0,0))
screen.blit(basket,(x,y))
pygame.display.update()
clock.tick(60)
i=i+1
ychange=0
#random position of eggs
#MOVEMENT OF egg
Answer: First of all, use .png images to make the white square around the images disappear: `pygame.image.load("myimage.png").convert_alpha()`
    xegg=random.randrange(50,550)
The above line should be inside the while loop so that you get a random x value each time an egg respawns. I've made some changes to your code, and now the eggs fall from random positions. To catch them you must check for collisions between the basket and the eggs (see the sketch after the code).
#The egg and basket game
import pygame
from pygame.locals import *
import time
import random
clock = pygame.time.Clock()
x=260
y=500
#Screen initialize
pygame.init()
pygame.font.init()
screen=pygame.display.set_mode((600,600))
pygame.display.set_caption("egg")
#Background
cloud=pygame.image.load("clouds.png").convert_alpha()
cloud=pygame.transform.scale(cloud,(600,600))
screen.blit(cloud,(0,0))
#Basket
basket=pygame.image.load("basket.png").convert_alpha()
basket=pygame.transform.scale(basket,(80,80))
screen.blit(basket,(x,y))
pygame.display.update()
#egg
egg=pygame.image.load("eggs.png").convert_alpha()
egg=pygame.transform.scale(egg,(20,20))
#screen.blit(egg,(290,20))
pygame.display.update()
#Movement of basket
ychange=0
xchange=0
exiting=False
xegg = random.randrange(50,550)
yegg = 20
while not exiting:
#xegg=random.randrange(50,550)
#for yegg in range(20,550):
    print(yegg)
if yegg<550:
yegg += 5
pygame.display.update()
clock.tick(60)
screen.blit(egg,(xegg,yegg))
else:
yegg=20
xegg = random.randrange(50,550)
yegg=yegg+ychange
pygame.display.update()
clock.tick(60)
screen.blit(egg,(xegg,yegg))
#yegg=20
pygame.display.update()
clock.tick(60)
#yegg=20
for event in pygame.event.get():
print(event)
if(event.type==pygame.QUIT):
exiting=True
pygame.quit()
quit()
if(event.type==pygame.KEYDOWN):
if(event.key==pygame.K_LEFT):
xchange=-5
if(event.key==pygame.K_RIGHT):
xchange=5
screen.blit(basket,(x,y))
if(event.type==pygame.KEYUP):
if(event.key==pygame.K_LEFT or event.key==pygame.K_RIGHT):
xchange=0
x=x+xchange
print(x)
screen.blit(cloud,(0,0))
screen.blit(basket,(x,y))
pygame.display.update()
clock.tick(60)
    ychange=0
#random position of eggs
#MOVEMENT OF egg
Go to [pygame collisions](http://www.pygame.org/docs/ref/sprite.html) and
learn collisons. Go [here](http://www.petercollingridge.co.uk/) for some very
good examples. Also [here](http://thepythongamebook.com/en:start) for an
excellent Pygame guide.
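As a starting point for the collision check mentioned above, here is a minimal sketch using `pygame.Rect` (the sizes match the scaled images; `score` is a hypothetical counter you would initialise yourself):
    # inside the game loop, after moving the basket and the egg
    basket_rect = pygame.Rect(x, y, 80, 80)
    egg_rect = pygame.Rect(xegg, yegg, 20, 20)
    if basket_rect.colliderect(egg_rect):
        score += 1                        # hypothetical score counter
        yegg = 20                         # respawn the egg at the top
        xegg = random.randrange(50, 550)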
|
How to extract the rating of a movie in imdb from an image element using scrapy in python
Question: I am trying to scrape IMDb using Python's Scrapy; however, I am not able to get the rating info from the page, as shown below:
[image](http://i.stack.imgur.com/6OnEM.png)
I am using the below code:
from scrapy.spiders import Spider
from scrapy.selector import Selector
from imdb.items import ImdbItem
class ImdbSpider(Spider):
name = "imdb"
allowed_domains = ["imdb.com"]
start_urls = [
"http://www.imdb.com/title/tt0068646/reviews?ref_=%20best",
]
def parse(self, response):
sel = Selector(response)
ratings = sel.xpath('//div[contains(@id,"tn15content")]/div/img')
items = []
for rating in ratings:
item = ImdbItem()
item['rating'] = rating.xpath('/@alt').extract()
items.append(item)
return items
I am sorry if this is a very basic question, but I am very new to Python and web scraping and can't really figure out how to achieve this, so would someone kindly guide me?
Answer: The `/` is extra, use:
rating.xpath('@alt').extract_first()
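With that change the loop becomes (a sketch; `extract_first()` returns the first match or `None`):
    for rating in ratings:
        item = ImdbItem()
        item['rating'] = rating.xpath('@alt').extract_first()
        items.append(item)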
|
Django - Foreman cannot find installed modules
Question: I am trying to use [Foreman](https://github.com/ddollar/foreman) /
[Honcho](https://github.com/nickstenning/honcho) to manage my Procfile-based
Django application. When I start the app via the normal `python manage.py runserver`, everything works fine. However, when I start the app via `honcho start` or `foreman start web`, I receive this error:
11:59:31 system | web.1 started (pid=27959)
11:59:31 web.1 | [2016-04-26 11:59:31 -0700] [27959] [INFO] Starting gunicorn 19.4.5
11:59:31 web.1 | [2016-04-26 11:59:31 -0700] [27959] [INFO] Listening at: http://0.0.0.0:5000 (27959)
11:59:31 web.1 | [2016-04-26 11:59:31 -0700] [27959] [INFO] Using worker: sync
11:59:31 web.1 | [2016-04-26 11:59:31 -0700] [27962] [INFO] Booting worker with pid: 27962
11:59:31 web.1 | [2016-04-26 18:59:31 +0000] [27962] [ERROR] Exception in worker process:
11:59:31 web.1 | Traceback (most recent call last):
11:59:31 web.1 | File "/Library/Python/2.7/site-packages/gunicorn/arbiter.py", line 515, in spawn_worker
11:59:31 web.1 | worker.init_process()
11:59:31 web.1 | File "/Library/Python/2.7/site-packages/gunicorn/workers/base.py", line 122, in init_process
11:59:31 web.1 | self.load_wsgi()
11:59:31 web.1 | File "/Library/Python/2.7/site-packages/gunicorn/workers/base.py", line 130, in load_wsgi
11:59:31 web.1 | self.wsgi = self.app.wsgi()
11:59:31 web.1 | File "/Library/Python/2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
11:59:31 web.1 | self.callable = self.load()
11:59:31 web.1 | File "/Library/Python/2.7/site-packages/gunicorn/app/wsgiapp.py", line 65, in load
11:59:31 web.1 | return self.load_wsgiapp()
11:59:31 web.1 | File "/Library/Python/2.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
11:59:31 web.1 | return util.import_app(self.app_uri)
11:59:31 web.1 | File "/Library/Python/2.7/site-packages/gunicorn/util.py", line 357, in import_app
11:59:31 web.1 | __import__(module)
11:59:31 web.1 | File "../wsgi.py", line 17, in <module>
11:59:31 web.1 | application = get_wsgi_application()
11:59:31 web.1 | File "/Library/Python/2.7/site-packages/django/core/wsgi.py", line 13, in get_wsgi_application
11:59:31 web.1 | django.setup()
11:59:31 web.1 | File "/Library/Python/2.7/site-packages/django/__init__.py", line 18, in setup
11:59:31 web.1 | apps.populate(settings.INSTALLED_APPS)
11:59:31 web.1 | File "/Library/Python/2.7/site-packages/django/apps/registry.py", line 85, in populate
11:59:31 web.1 | app_config = AppConfig.create(entry)
11:59:31 web.1 | File "/Library/Python/2.7/site-packages/django/apps/config.py", line 90, in create
11:59:31 web.1 | module = import_module(entry)
11:59:31 web.1 | File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
11:59:31 web.1 | __import__(name)
11:59:31 web.1 | ImportError: No module named django_messages
11:59:31 web.1 | [2016-04-26 18:59:31 +0000] [27962] [INFO] Worker exiting (pid: 27962)
11:59:31 web.1 | [2016-04-26 11:59:31 -0700] [27959] [INFO] Shutting down: Master
11:59:31 web.1 | [2016-04-26 11:59:31 -0700] [27959] [INFO] Reason: Worker failed to boot.
11:59:31 system | web.1 stopped (rc=3)
This is with attempting to install the [django-
message](https://github.com/arneb/django-messages) module. I have the same
issues with other modules as well. I'm also running into the same issue with
[django-webpack-loader](https://github.com/owais/django-webpack-loader). I
should also mention that I am receiving the error both within a virtualenv and
when it is deactivated.
Here's the command for installing django-messages:
$> pip install django-messages
Requirement already satisfied (use --upgrade to upgrade): django-messages in ./lib/python2.7/site-packages
Installed Apps;
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'my_app',
'django_messages',
)
I'm not sure what other information I can provide to help troubleshoot, but
the basic question is how do I get installed apps to work with foreman /
honcho?
Answer: Honcho and Foreman don't use the Python executable and libs from your virtualenv, and while you didn't include your Honcho Procfile, just calling `python` will use the system-wide executable and libs.
Unfortunately, you can't just call `/path/to/virtualenv/bin/activate` as part
of the Procfile, because Honcho exits when one of the subprocesses exits, as
discussed [in this Github issue
thread](https://github.com/nickstenning/honcho/issues/127). However, you can
execute this script and your python script in one subshell using the `&&`
operator to chain them together:
    web: source venv/bin/activate && python manage.py runserver
Alternatively, you might have better luck modifying your `wsgi.py` wrapper to
explicitly pull in your virtualenv's libraries before importing your Django
application:
# Activate your virtual env
activate_env=os.path.expanduser("/path/to/virtualenv/bin/activate_this.py")
execfile(activate_env, dict(__file__=activate_env))
These should be executed before importing any modules (other than `os`) to
ensure that your application reads the correct site libraries.
Finally, Honcho itself supports the use of `.env` files alongside the Procfile
which set up the environment the processes are run in. The format of this file
is the same as any bash script. You could use the .env file to set
`PYTHONPATH` and `PYTHONHOME` to point to the libraries in your Virtualenv,
and then specify the explicit Python interpreter inside the Virtualenv from
the Procfile.
**.env File**
    PYTHONHOME=/path/to/virtualenv/lib/python2.7
    PYTHONPATH=/path/to/virtualenv/lib/python2.7/site-packages
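The Procfile can then point at the virtualenv's interpreter explicitly; for the gunicorn setup from the traceback, that might look like this (the module path is an assumption):
    web: /path/to/virtualenv/bin/gunicorn wsgi:application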
|
Importing CSV from URL and displaying rows on Python by using Requests
Question:
import csv
import requests
webpage = requests.get('http://www.pjm.com/pub/account/lmpda/20160427-da.csv')
reader=csv.reader(webpage)
for row in reader:
print(row)
Hi, I'm new to Python and I'm trying to open a CSV file from a URL and then display the rows so I can take the data that I need from it. However, I get an error saying:
> Traceback (most recent call last): File "", line 1, in for row in reader:
> Error: iterator should return strings, not bytes (did you open the file in
> text mode?)
Thank you in advance.
Answer: Use _.text_ as you are getting _bytes_ returned in python3:
webpage = requests.get('http://www.pjm.com/pub/account/lmpda/20160427-da.csv')
reader = csv.reader([webpage.text])
for row in reader:
print(row)
That gives `_csv.Error: new-line character seen in unquoted field`, so split the lines after decoding; also, `stream=True` will let you get the data in chunks rather than all at once, so you can filter by row and write:
import csv
import requests
    webpage = requests.get('http://www.pjm.com/pub/account/lmpda/20160427-da.csv', stream=True)
for line in webpage:
print(list(csv.reader((line.decode("utf-8")).splitlines()))[0])
Which gives you:
['Day Ahead Hourly LMP Values for 20160427', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '']
['', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '']
['00', '600', '700', '800', '900', '1000', '1100', '1200', '1300', '1400', '1500', '1600', '1700', '1800', '1900', '2000', '2100', '2200', '2300', '2400', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '']
['', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '']
['1', '25.13', '25.03', '28.66', '25.94', '21.74', '19.47', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '']
['', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '']
['', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '']
['600', '600', '600', '700', '700', '700', '800', '800', '800', '900', '900', '900', '1000', '1000', '1000', '1100', '1100', '1100', '1200', '1200', '1200', '1300', '1300', '1300', '1400', '1400', '1400', '1500', '']
['1500', '1500', '1600', '1600', '1600', '1700', '1700', '1700', '1800', '1800', '1800', '1900', '1900', '1900']
['', '2000', '2000', '2000', '2100', '2100', '2100', '2200', '2200', '2200', '2300', '2300', '2300', '2400', '2400', '2400', '']
['lLMP', 'CongestionPrice', 'MarginalLossPrice', 'TotalLMP', 'CongestionPrice', 'MarginalLossPrice', 'TotalLMP', 'CongestionPrice', 'MarginalLossPrice', 'Tot']
['alLMP', 'CongestionPrice', 'MarginalLossPrice', 'TotalLMP', 'CongestionPrice', 'MarginalLossPrice', 'TotalLMP', 'CongestionPrice', 'MarginalLossPrice', 'To']
['talLMP', 'CongestionPrice', 'MarginalLossPrice', 'TotalLMP', 'CongestionPrice', 'MarginalLossPrice', 'TotalLMP', 'CongestionPrice', 'MarginalLossPrice', 'T']
.......................................
|
Django - env folder not getting created while using VS 2015 and python 2.7
Question: I am using VS 2015 and Python 2.7 to create a web application using Django. VS 2015 asked to create a virtual env, but I am getting the following error and the env folder is not getting created:
Installing 'pip' package manager.
pip is already available.
'pip' was installed successfully.
Installing 'pip' package manager.
pip is already available.
'pip' was installed successfully.
Collecting https://go.microsoft.com/fwlink/?LinkID=317969
Exception:
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\pip\basecommand.py", line 209, in main
status = self.run(options, args)
File "C:\Python27\lib\site-packages\pip\commands\install.py", line 299, in run
requirement_set.prepare_files(finder)
File "C:\Python27\lib\site-packages\pip\req\req_set.py", line 360, in prepare_files
ignore_dependencies=self.ignore_dependencies))
File "C:\Python27\lib\site-packages\pip\req\req_set.py", line 577, in _prepare_file
session=self.session, hashes=hashes)
File "C:\Python27\lib\site-packages\pip\download.py", line 810, in unpack_url
hashes=hashes
File "C:\Python27\lib\site-packages\pip\download.py", line 649, in unpack_http_url
hashes)
File "C:\Python27\lib\site-packages\pip\download.py", line 842, in _download_http_url
stream=True,
File "C:\Python27\lib\site-packages\pip\_vendor\requests\sessions.py", line 480, in get
return self.request('GET', url, **kwargs)
File "C:\Python27\lib\site-packages\pip\download.py", line 378, in request
return super(PipSession, self).request(method, url, *args, **kwargs)
File "C:\Python27\lib\site-packages\pip\_vendor\requests\sessions.py", line 468, in request
resp = self.send(prep, **send_kwargs)
File "C:\Python27\lib\site-packages\pip\_vendor\requests\sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "C:\Python27\lib\site-packages\pip\_vendor\cachecontrol\adapter.py", line 46, in send
resp = super(CacheControlAdapter, self).send(request, **kw)
File "C:\Python27\lib\site-packages\pip\_vendor\requests\adapters.py", line 447, in send
raise SSLError(e, request=request)
SSLError: ('_ssl.c:574: The handshake operation timed out',)
System.InvalidOperationException: Virtual environment was not created at 'C:\Users\Jeri_Dabba\Google Drive\Python\Django\DjangoWebProject1\DjangoWebProject1\env\'
at Microsoft.PythonTools.Project.VirtualEnv.<CreateAndInstallDependencies>d__5.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.PythonTools.Project.PythonProjectNode.<CreateOrAddVirtualEnvironment>d__148.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.PythonTools.Project.AddVirtualEnvironmentOperation.<Run>d__10.MoveNext()
Could anyone please help me out with this, as I am new to Django?
Answer: You need to set the proxy configuration in your environment. For Windows, this
means setting the environment variables. Simply search for `environment
variables` in the Start menu, and click on `Edit environment variables for
your account`.
You need to add two environment variables, `HTTP_PROXY` and `HTTPS_PROXY`; the value is in the format `http://user:password@proxyserver:port`.
For example, if your proxy server is `proxy.example.com` and is listening on port `8080`, your value is `http://user:password@proxy.example.com:8080`.
Check with your network administrator for the exact settings for your
environment.
Once you have updated the variables (and clicked OK), it is very important to
**restart visual studio** otherwise the variables will not be read.
|
How to append a new column to a CSV file using Python?
Question: I have stored a set of four numbers in an array which I want to add to a CSV
file under the 'Score' column.
with open('Player.csv', 'ab') as csvfile:
fieldnames = ['Score']
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
for i in range(0, l):
writer.writerow({'Score': score[i]})
It appends to the file, but this adds a new row instead of a new column. Could someone guide me on appending it as a new column?
Answer: Probably the simplest solution would be to use
[Pandas](http://pandas.pydata.org/). It's overkill, but it generally is much
cleaner for CSV manipulation that extends beyond straight reading/writing.
Say I have a CSV file as follows:
ASSETNUM ASSETTAG ASSETTYPE AUTOWOGEN
cent45 9164 0
cent45 9164 0
Then, the relevant code to add a column would be as follows:
import pandas as pd
df = pd.read_csv('path/to/csv.csv', delimiter='\t')
# this line creates a new column, which is a Pandas series.
new_column = df['AUTOWOGEN'] + 1
# we then add the series to the dataframe, which holds our parsed CSV file
df['NewColumn'] = new_column
# save the dataframe to CSV
df.to_csv('path/to/file.csv', sep='\t')
This adds a new column, scales well, and is easy to use. The resulting CSV
file then would look as follows:
ASSETNUM ASSETTAG ASSETTYPE AUTOWOGEN NewColumn
0 cent45 9164 0 1
1 cent45 9164 0 1
Compare this to the CSV module code for the same purpose (modified from
[here](http://stackoverflow.com/a/23682707/4131059)):
    with open('path/to/csv.csv', 'r') as fin:
        reader = csv.reader(fin, delimiter='\t')
        with open('path/to/new_csv.csv', 'w') as fout:
            writer = csv.writer(fout, delimiter='\t')
            # set headers here, grabbing headers from reader first
            writer.writerow(next(reader) + ['NewColumn'])
            for row in reader:
                # construct the new value however you like; here, last column + 1
                new_value = int(row[-1]) + 1
                row.append(new_value)
                writer.writerow(row)
|
Python Heads and Tails
Question: I need help with a Python game I am currently creating. When I run the code, it stays open and does nothing after I enter the number of times to flip. Here is what I have so far:
# Heads and Tails generator
# User how many times they wish to flip a coin and will recieve the results
CoinTosses = int(input("How many coins do you wish to flip: "))
Heads = 0
Tails = 0
CurrentCoinToss = 0
from random import randint
while CoinTosses != 0:
CurrentCoinToss == int(randint(1, 2))
if CurrentCoinToss == 1:
Heads += 1
CoinTosses -= 1
if CurrentCoinToss == 2:
Tails += 1
CoinTosses -= 1
print("During this round you recieved: ", Heads, " and", Tails, " Tails!")
input("Press the enter key to exit")
What is wrong with this? I have studied my code and nothing SHOULD be wrong.
Answer: Change this line
    CurrentCoinToss == int(randint(1, 2))
to this
    CurrentCoinToss = int(randint(1, 2))
`==` is a comparison, not an assignment, so the toss result was never stored: `CurrentCoinToss` stayed at 0, neither `if` branch ever ran, and `CoinTosses` was never decremented, leaving the loop running forever.
|
How do I return JSON-data from python to javascript?
Question: I'm working on a pythonfile which returns the variable "distance" and sends it
to a javascriptfile where I can put the value on my webpage.
My problem is that I don't know how to send the value from python to
javascript. I've heard you have to make the pythonfile return in JSON-format
and then make a ajax-request, but I can't find how to do it anywhere.
My question is: How do I set up the connection which makes me get the JSON-
data in javascript? I would really appreciate it if someone showed me using code;
I'm very new to both python and javascript..
Edit: The data comes from a RaspberryPi with a distance-sensor. My python code
is:
import RPi.GPIO as GPIO
import time
import io, json
GPIO.setmode(GPIO.BCM)
TRIG = 14
ECHO = 15
GPIO.setup(TRIG,GPIO.OUT)
GPIO.setup(ECHO,GPIO.IN)
revers = 1
while revers == 1:
GPIO.output(TRIG,0)
time.sleep(0.5)
GPIO.output(TRIG,1)
time.sleep(0.00001)
GPIO.output(TRIG,0)
while GPIO.input(ECHO)==0:
pulse_start = time.time()
while GPIO.input(ECHO)==1:
pulse_end = time.time()
pulse_duration = pulse_end - pulse_start
distance = pulse_duration * 17150
GPIO.cleanup()
I haven't written anything in javascript yet, simply because I don't know how to make the ajax call, but my goal is to get the variable distance into JSON format and up on my webpage.
Answer: The python program can return any value in any format you want, but json is a
convenient format, which can be readily handled by both python programs and
javascript.
Your javascript needs to send a request to a server. The request will specify
that it wants to retrieve the python file. Note that a javascript request is
sent in reaction to some event--like the user clicking on a button.
The python program will reside somewhere in the directory structure of the
server. If your server is setup correctly, then when the request for a python
file is received, instead of returning the text of the python file, the server
will execute the python program and return the output of the python program.
The easiest way to make an ajax (i.e. a javascript) request is with jquery.
There are literally 10,000 tutorials, blogs, etc. about how to make ajax
requests with jquery. Here is one:
<http://www.tutorialspoint.com/jquery/jquery-ajax.htm>
Here is the relevant info in the jquery docs:
<https://api.jquery.com/jquery.get/>
> I'm very new to both python and javascript..
Then your desired program is most likely too complex.
Here is an ajax example:
**page.html**
<!DOCTYPE html>
<html>
<head>
<title>My Web Page</title>
</head>
<body>
<p>Hello</p>
<button id="my_button" type="button">Click me</button>
<div id="ajax_results"></div>
<!-- Download jquery library: -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.2.0/jquery.min.js"></script>
<!-- My jquery code: -->
<script>
$("#my_button").on('click', function() {
$.get("http://localhost:8080/cgi-bin/my_prog.py", function(data) {
$("#ajax_results").text(data);
})
});
</script>
</body>
</html>
I make a request to my local server using the following url in my browser:
http://localhost:8080/page.html
which loads page.html in my browser. As the page loads, my jquery code
executes, which adds an onclick handler to the button:
$("#my_button").on('click', function() {....
Thereafter, if the button is clicked, the function will execute.
The following python program resides in a directory on my local server:
**my_prog.py**
#!/usr/bin/env python3.4
print("Content-Type: text/html\n\n")
distance = 10
print(distance) #This is the body of the response
If I click on the button displayed in the web page, the jquery code sends a
request to the server for the file my_prog.py:
$.get("http://localhost:8080/cgi-bin/my_prog.py", ....
My server is setup to execute that file--rather than return the text of the
file--then return the program's output as the response.
When the jquery code receives the response from the server, jquery calls the
following function:
function(data) {
$("#ajax_results").text(data);
})
passing the body of the response as an argument. The function inserts the body
of the response, data, into the html tag with the id "ajax_results". Because
the body of the response is the string "10", 10 is displayed in the web page.
|
How to get url from a new window by using Selenium with phantomjs
Question: I want to get a new window url by using selenium, and using PhantomJs is more
efficient than Firefox. python code is here:
from selenium import webdriver
renren = webdriver.Firefox()
#renren = webdriver.PhantomJS()
renren.get("file:///home/xjz/Desktop/html/easy.html")
renren.execute_script("windows()")
now_handle1 = renren.current_window_handle
all_handles1 = renren.window_handles
for handle1 in all_handles1:
if handle1 != now_handle1:
renren.switch_to_window(handle1)
print renren.current_url
print renren.page_source
In script "windows()", it will open a new window for <http://www.renren.com/>.
When I use Firefox,I get current url and context of <http://www.renren.com/> .
But I get "about:blank" of the url and "" of the context.It means I get failed
when I use PhantomJS. So how can I get current url when I use selenium with
PhantomJS. Thanks a lot.
Answer: You can add sleep time in your code before getting the current URL.
from selenium import webdriver
renren = webdriver.Firefox()
...
...
...
import time
time.sleep(10)# in seconds
print renren.current_url
..
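A fixed sleep is fragile; a more robust sketch is an explicit wait until the new window has actually navigated (assuming a 10-second timeout is enough):
    from selenium.webdriver.support.ui import WebDriverWait
    WebDriverWait(renren, 10).until(lambda d: d.current_url != "about:blank")
    print renren.current_url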
|
Amazon S3 and Cloudfront - Publish file uploaded as hashed filename
Question: **Technologies:**
* Python3
* Boto3
* AWS
I have a project built using Python3 and Boto3 to communicate with a bucket in
Amazon S3 service.
The process is that a user posts images to the service; these images are uploaded to an S3 bucket and can be served through Amazon CloudFront using a hashed file name instead of the real file name.
**Example:**
* (S3) Upload key: /category-folder/png/image.png
* (CloudFront) Serve: `http://d2949o5mkkp72v.cloudfront.net/d824USNsdkmx824`
I want the file uploaded to S3 to appear under a hashed file name when served through CloudFront.
Does anyone know how to make S3 or CloudFront automatically convert and publish a file name as a hashed name?
Answer: To meet my needs I created the fields required to maintain the keys (to keep them unique, both on S3 and in my MongoDB).
**Fields** :
original_file_name = my_file_name
file_category = my_images, children, fun
file_type = image, video, application
key = uniqueID
With the mentioned fields, one can check whether a key exists by simply searching for the key, the new file name, the category, and the type; if it is found in the database, the file exists.
**To generate the unique id:**
def get_key(self):
from uuid import uuid1
return uuid1().hex[:20]
This limits the ID to the length of 20 characters.
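A sketch of how the pieces fit together (all names here are hypothetical):
    key = self.get_key()                  # e.g. '1f2e3d4c5b6a79881234'
    s3_key = '{0}/{1}/{2}'.format(file_category, file_type, key)
    # store (original_file_name, file_category, file_type, key) in MongoDB,
    # upload the file under s3_key, and serve it via the CloudFront domain + key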
|
Automatically detect identical consecutive std::string::find() calls
Question: During a code review, I found source code like this:
void f_odd(std::string &className, std::string &testName)
{
if (className.find("::") != std::string::npos)
{
testName = className.substr(className.find("::") + 2);
(void)className.erase(className.find("::"), std::string::npos);
}
}
Within this function, std::string::find() is called three times with the same pattern (here "::").
This code can of course be refactored to
void f(std::string &className, std::string &testName)
{
const size_t endOfClassNamePos = className.find("::");
if (endOfClassNamePos != std::string::npos)
{
testName = className.substr(endOfClassNamePos + 2);
(void)className.erase(endOfClassNamePos, std::string::npos);
}
}
where find is called only once.
**Question**
Does anybody know a strategy for detecting a pattern like this? I have a huge code base in which I intend to spot this pattern. I plan to use a Windows or a Linux environment.
**Potential Strategies**
1. Use/adapt a static code analysis tool, like cppcheck to detect these kind of oddities.
2. Search within the code base with regular expression.
3. Use/adapt clang-tidy for detection of this pattern.
4. Write a custom checker in some language (e.g. Python) that detects these issues. In this case, the checking should be performed on pre-processed code.
**No Go's**
* Manual review
* * *
**Update 1**
I decided to start with potential strategy 1). I plan to adapt cppcheck to
catch this issue.
Cppcheck offers the possibility to write customized rules based on PCRE regular expressions. For this, cppcheck has to be compiled with PCRE support enabled.
Since the current test environment is Linux-based, the following commands can
be used to download the latest version of cppcheck:
`git clone https://github.com/danmar/cppcheck.git && cd cppcheck`
After that, compile and install the tool as follows:
`sudo make install HAVE_RULES=yes`
Now the basic tool setup is done. In order to develop a cppcheck rule, I prepared a simple test case (file: test.cpp), similar to the sample code in the first section of this article. This file contains three functions, and the cppcheck rule shall emit a warning on `f_odd` and `f_odd1` about consecutive identical `std::string::find` calls.
test.cpp:
#include <string>
void f(std::string &className, std::string &testName)
{
const size_t endOfClassNamePos = className.find("::");
if (endOfClassNamePos != std::string::npos)
{
testName = className.substr(endOfClassNamePos + 2);
(void)className.erase(endOfClassNamePos, std::string::npos);
}
}
void f_odd(std::string &className, std::string &testName)
{
if (className.find("::") != std::string::npos)
{
testName = className.substr(className.find("::") + 2);
(void)className.erase(className.find("::"), std::string::npos);
}
}
#define A "::"
#define B "::"
#define C "::"
void f_odd1(std::string &className, std::string &testName)
{
if (className.find(A) != std::string::npos)
{
testName = className.substr(className.find(B) + 2);
(void)className.erase(className.find(C), std::string::npos);
}
}
So far so good. Now cppcheck has to be tweaked to catch consecutive identical `std::string::find` calls. For this I have created a [cppcheck_rule-file](https://github.com/orbitcowboy/cppcheck_rules/blob/master/rules/rule.xml) that contains a regular expression that matches consecutive identical `std::string::find` calls:
    <?xml version="1.0"?>
    <rule>
    <tokenlist>normal</tokenlist>
    <pattern><![CDATA[([a-zA-Z][a-zA-Z0-9]*)(\s*\.\s*find)(\s*\(\s*\"[ -~]*\"\s*\))[ -\{\n]*(\1\2\3)+[ -z\n]]]></pattern>
    <message>
    <severity>style</severity>
    <summary>Found identical consecutive std::string::find calls.</summary>
    </message>
    </rule>
This file can be used to extend cppcheck with a new check. Let's try:
`cppcheck --rule-file=rules/rule.xml test/test.cpp`
and the output is
Checking test/test.cpp...
[test/test.cpp:14]: (style) Found identical consecutive std::string::find calls.
[test/test.cpp:26]: (style) Found identical consecutive std::string::find calls.
Now, identical consecutive `std::string::find` calls can be detected in C/C++ code. Does anybody know a better, more efficient, or more clever solution?
References:
* [Checkout the cppcheck_rule file on github](https://github.com/orbitcowboy/cppcheck_rules)
* [How to write custom rules for cppcheck](https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&cad=rja&uact=8&ved=0ahUKEwiwjPSrm7LMAhXMWSwKHX66BH8QFgg_MAQ&url=http%3A%2F%2Fwww.cs.kent.edu%2F~rothstei%2Fspring_12%2Fsecprognotes%2Fcppcheck_writing-rules-1.pdf&usg=AFQjCNFLjfvbJ4NlmqatgOg6TeJ7te2knw&sig2=Cn3z2MZMLH4f1jqOTPa-Og)
* * *
Answer: The main problem with such a tool is that a lexical analysis can only check if
there is _textual_ repetition. Taking your example, calling
`className.find("::")` twice is a potential issue if the variable refers to
the same string twice. But let me add one _small_ change to your code:
`className = className.substr(className.find("::") + 2);`. Suddenly the
meaning of the next `className.find` has changed _dramatically_.
Can you find such changes? You need a full-blown compiler for that, and even
then you have to be pessimistic. Sticking to your example, could `className`
be changed via an iterator? It's not just direct manipulation you need to be
aware of.
Is there no positive news? Well: existing compilers do have a similar
mechanism. It's called Common Subexpression Elimination, and it works
conceptually as you would want it to work in the example above. But that is
also bad news in one way: If the situation is detectable, it's unimportant
because it's already optimized out by the compiler!
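To make the aliasing point concrete, here is a minimal sketch (my own example,
not from the question): two textually identical `find` calls legitimately
return different results because the string is mutated through an iterator in
between, exactly the kind of change a purely lexical rule cannot see:

#include <cassert>
#include <string>

int main()
{
    std::string s = "a::b";
    const auto first = s.find("::");   // returns 1
    *s.begin() = ':';                  // mutate the string via an iterator
    const auto second = s.find("::");  // now returns 0
    assert(first != second);           // identical text, different meaning
    return 0;
}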
|
Flask route to download .xml files leading to 404 Not Found
Question: I have a `flask` (0.10.1) app running on a `Debian Jessie VPS` and powered by
`nginx` (1.6.2). The app is working fine but I have a problem on a specific
`route` I added recently.
The `route` is intended for downloading `.xml` files.
It is dynamic, carrying the directory and the file name:
@app.route('/backups/<dir_key>/<filename>')
And it registers a function based on the `flask` `send_from_directory`
function:
def backups(dir_key,filename):
directory = os.path.join(app.config['BACKUPXML_FOLDER'], dir_key)
return send_from_directory(directory, filename, as_attachment=True)
The route is generated thanks to the `flask` `url_for` function, and returned
to the frontend:
return jsonify({
'backupFileUrl': url_for('backups', dir_key=dir_key, filename = filename, _external=True)
})
where it is stored in an `AngularJS` variable:
$scope.backupFileUrl = response.backupFileUrl;
And finally included in a `<a>` tag for download :
<a class="btn btn-primary"
ng-show="sessionDownload"
ng-href="{{ backupFileUrl }}" target="_blank">
<span class="glyphicon glyphicon-save"></span> Télécharger </a>
But when I click on the button, I get the following error :
[](http://i.stack.imgur.com/SNHYu.png)
What is weird too is that:
1. The download is properly triggered when the app is powered by a small `Python` server on a local `Windows` machine.
2. I have a `route` intended for downloads of `.xlsx` files which is actually working, and both on a local `Windows` machine and on the `Jessie VPS`.
Does anyone see how I can define the `route` to make it work?
Here is the api architecture if needed :
**api/app.py**
import sys
sys.path.append('../')
from flask_script import Server, Manager
from kosapp import app, db
manager = Manager(app)
if __name__ == '__main__':
manager.run()
**api/config.py**
from os.path import abspath, dirname, join
import tempfile
basedir = dirname(abspath(__file__))
BASEDIR = dirname(abspath(__file__))
DEBUG = True
REPORTS_FOLDER = '/tmp/reports'
# on local machine
# REPORTS_FOLDER = os.path.join(tempfile.gettempdir(), 'reports')
BACKUPXML_FOLDER = '/tmp/backups'
# on local machine
# BACKUPXML_FOLDER = os.path.join(tempfile.gettempdir(), 'backups')
**api/kosapp/__init__.py**
from flask import Flask
app = Flask(__name__)
app.url_map.strict_slashes = False
app.config.from_object('config')
from kosapp import views
**api/kosapp/views.py**
import os
from flask import send_file, jsonify, request, render_template, send_from_directory
from kosapp import app
@app.route('/reports/<dir_key>/<filename>')
def reports(dir_key, filename):
directory = os.path.join(app.config['REPORTS_FOLDER'], dir_key)
return send_from_directory(directory, filename)
@app.route('/backups/<dir_key>/<filename>')
def backups(dir_key,filename):
directory = os.path.join(app.config['BACKUPXML_FOLDER'], dir_key)
return send_from_directory(directory, filename, as_attachment=True)
As a note, the route `'/reports/<dir_key>/<filename>'` is intended for
downloading `.xlsx` file and works fine.
Answer: Did you remember to reload the app on the server? That's usually the problem
if I get different results on my development computer and the web server.
For instance, if you deployed with `gunicorn`, you would have to restart
`gunicorn` so the server would know about the changes to your code.
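For example, if the app is served by gunicorn as a systemd service (the
service name below is an assumption; adjust it to your setup):

sudo systemctl restart gunicorn
# or reload the workers in place by signalling the gunicorn master:
sudo pkill -HUP -f gunicorn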
|
Row into column in Python
Question: I have a CSV file containing a time series of daily precipitation. The problem
arises from how the data is organized. Here is a small sample to illustrate:
date p01 p02 p03 p04 p05 p06
01-01-1941 33.6 7.1 22.3 0 0 0
01-02-1941 0 0 1.1 11.3 0 0
So, there is a column for each day of the month (p01 is the precipitation of
day 1, p02 corresponds to day 2, and so on). I'd like to have this
structure: one column for the date and another for the precipitation values.
date p
01-01-1941 33.6
02-01-1941 7.1
03-01-1941 22.3
04-01-1941 0
05-01-1941 0
06-01-1941 0
01-02-1941 0
02-02-1941 0
03-02-1941 1.1
04-02-1941 11.3
05-02-1941 0
06-02-1941 0
I have found some code examples, but without success for this specific problem.
In general they suggest trying pandas or numpy. Does anyone have a
recommendation to solve this issue, or good advice to guide my studies?
Thanks. (I'm sorry for my terrible English)
Answer: I think you can first use [`read_csv`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.read_csv.html), then
[`to_datetime`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.to_datetime.html) with
[`stack`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.DataFrame.stack.html) for reshaping `DataFrame`,
then convert column `days` [`to_timedelta`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.to_timedelta.html) and add it to column `date`:
import pandas as pd
import io
temp=u"""date;p01;p02;p03;p04;p05;p06
01-01-1941;33.6;7.1;22.3;0;0;0
01-02-1941;0;0;1.1;11.3;0;0"""
#after testing replace io.StringIO(temp) to filename
df = pd.read_csv(io.StringIO(temp), sep=";")
print df
date p01 p02 p03 p04 p05 p06
0 01-01-1941 33.6 7.1 22.3 0.0 0 0
1 01-02-1941 0.0 0.0 1.1 11.3 0 0
#convert coolumn date to datetime
df.date = pd.to_datetime(df.date, dayfirst=True)
print df
date p01 p02 p03 p04 p05 p06
0 1941-01-01 33.6 7.1 22.3 0.0 0 0
1 1941-02-01 0.0 0.0 1.1 11.3 0 0
#stack, rename columns
df1 = df.set_index('date').stack().reset_index(name='p').rename(columns={'level_1':'days'})
print df1
date days p
0 1941-01-01 p01 33.6
1 1941-01-01 p02 7.1
2 1941-01-01 p03 22.3
3 1941-01-01 p04 0.0
4 1941-01-01 p05 0.0
5 1941-01-01 p06 0.0
6 1941-02-01 p01 0.0
7 1941-02-01 p02 0.0
8 1941-02-01 p03 1.1
9 1941-02-01 p04 11.3
10 1941-02-01 p05 0.0
11 1941-02-01 p06 0.0
#convert column to timedelta in days
df1.days = pd.to_timedelta(df1.days.str[1:].astype(int) - 1, unit='D')
print df1.days
0 0 days
1 1 days
2 2 days
3 3 days
4 4 days
5 5 days
6 0 days
7 1 days
8 2 days
9 3 days
10 4 days
11 5 days
Name: days, dtype: timedelta64[ns]
#add timedelta
df1['date'] = df1['date'] + df1['days']
#remove unnecessary column
df1 = df1.drop('days', axis=1)
print df1
date p
0 1941-01-01 33.6
1 1941-01-02 7.1
2 1941-01-03 22.3
3 1941-01-04 0.0
4 1941-01-05 0.0
5 1941-01-06 0.0
6 1941-02-01 0.0
7 1941-02-02 0.0
8 1941-02-03 1.1
9 1941-02-04 11.3
10 1941-02-05 0.0
11 1941-02-06 0.0
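As a side note, a shorter alternative sketch using
[`melt`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.melt.html)
should give the same result (starting from the same `df` with the parsed
`date` column as above):

df2 = pd.melt(df, id_vars='date', var_name='days', value_name='p')
#convert 'p01'..'p06' to day offsets and add them to the date
df2['date'] = df2['date'] + pd.to_timedelta(df2['days'].str[1:].astype(int) - 1, unit='D')
df2 = df2.drop('days', axis=1).sort_values('date').reset_index(drop=True)
print df2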
|
DeprecationWarning in sklearn MiniBatchKMeans
Question:
# imports for the names used below; `model` is an already-trained word2vec model
from sklearn.cluster import MiniBatchKMeans
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

vectors = model.syn0
n_clusters_kmeans = 20 # more for visualization 100 better for clustering
min_kmeans = MiniBatchKMeans(init='k-means++', n_clusters=n_clusters_kmeans, n_init=10)
min_kmeans.fit(vectors)
X_reduced = TruncatedSVD(n_components=50, random_state=0).fit_transform(vectors)
X_embedded = TSNE(n_components=2, perplexity=40, verbose=2).fit_transform(X_reduced)
fig = plt.figure(figsize=(10, 10))
ax = plt.axes(frameon=False)
plt.setp(ax, xticks=(), yticks=())
plt.subplots_adjust(left=0.0, bottom=0.0, right=1.0, top=0.9, wspace=0.0, hspace=0.0)
plt.scatter(X_embedded[:, 0], X_embedded[:, 1], c=None, marker="x")
plt.show()
I want to plot vectors. I am using sklearn.cluster MiniBatchKMeans. Above code
is giving me following deprecation error:
> /usr/local/lib/python3.5/site-packages/sklearn/cluster/k_means_.py:1328:
> DeprecationWarning: This function is deprecated. Please call randint(0, 99 +
> 1) instead 0, n_samples - 1, self.batch_size)
Any suggestions are appreciated. Thanks
Answer: ## Temporarily Suppressing Warnings
The best option to suppress this warning has been described in python's
documentation for the
[warnings](https://docs.python.org/3/library/warnings.html#temporarily-
suppressing-warnings) module.
In this case you can just wrap the clusterizer fitting method using _with_
statement like this:
import warnings
....
min_kmeans = MiniBatchKMeans(...)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
min_kmeans.fit(vectors)
# Rest part of the code
|
H2O python rbind error
Question: I have a 2000-row data frame and I'm trying to slice it into
two parts and combine them back together.
t1 = test[:10, :]
t2 = test[20:, :]
temp = t1.rbind(t2)
temp.show()
Then I got this error:
---------------------------------------------------------------------------
EnvironmentError Traceback (most recent call last)
<ipython-input-37-8daeb3375743> in <module>()
2 t2 = test[20:, :]
3 temp = t1.rbind(t2)
----> 4 temp.show()
5 print len(temp)
6 print len(test)
/usr/local/lib/python2.7/dist-packages/h2o/frame.pyc in show(self, use_pandas)
383 print("This H2OFrame has been removed.")
384 return
--> 385 if not self._ex._cache.is_valid(): self._frame()._ex._cache.fill()
386 if H2ODisplay._in_ipy():
387 import IPython.display
/usr/local/lib/python2.7/dist-packages/h2o/frame.pyc in _frame(self, fill_cache)
423
424 def _frame(self, fill_cache=False):
--> 425 self._ex._eager_frame()
426 if fill_cache:
427 self._ex._cache.fill()
/usr/local/lib/python2.7/dist-packages/h2o/expr.pyc in _eager_frame(self)
67 if not self._cache.is_empty(): return self
68 if self._cache._id is not None: return self # Data already computed under ID, but not cached locally
---> 69 return self._eval_driver(True)
70
71 def _eager_scalar(self): # returns a scalar (or a list of scalars)
/usr/local/lib/python2.7/dist-packages/h2o/expr.pyc in _eval_driver(self, top)
81 def _eval_driver(self, top):
82 exec_str = self._do_it(top)
---> 83 res = ExprNode.rapids(exec_str)
84 if 'scalar' in res:
85 if isinstance(res['scalar'], list): self._cache._data = [float(x) for x in res['scalar']]
/usr/local/lib/python2.7/dist-packages/h2o/expr.pyc in rapids(expr)
163 The JSON response (as a python dictionary) of the Rapids execution
164 """
--> 165 return H2OConnection.post_json("Rapids", ast=expr,session_id=H2OConnection.session_id(), _rest_version=99)
166
167 class ASTId:
/usr/local/lib/python2.7/dist-packages/h2o/connection.pyc in post_json(url_suffix, file_upload_info, **kwargs)
515 if __H2OCONN__ is None:
516 raise ValueError("No h2o connection. Did you run `h2o.init()` ?")
--> 517 return __H2OCONN__._rest_json(url_suffix, "POST", file_upload_info, **kwargs)
518
519 def _rest_json(self, url_suffix, method, file_upload_info, **kwargs):
/usr/local/lib/python2.7/dist-packages/h2o/connection.pyc in _rest_json(self, url_suffix, method, file_upload_info, **kwargs)
518
519 def _rest_json(self, url_suffix, method, file_upload_info, **kwargs):
--> 520 raw_txt = self._do_raw_rest(url_suffix, method, file_upload_info, **kwargs)
521 return self._process_tables(raw_txt.json())
522
/usr/local/lib/python2.7/dist-packages/h2o/connection.pyc in _do_raw_rest(self, url_suffix, method, file_upload_info, **kwargs)
592 raise EnvironmentError(("h2o-py got an unexpected HTTP status code:\n {} {} (method = {}; url = {}). \n"+ \
593 "detailed error messages: {}")
--> 594 .format(http_result.status_code,http_result.reason,method,url,detailed_error_msgs))
595
596
EnvironmentError: h2o-py got an unexpected HTTP status code:
500 Server Error (method = POST; url = http://localhost:54321/99/Rapids).
detailed error messages: []
If I count rows (len(temp)), it works fine. Also if I change the slicing index
a little bit, it works fine too. For example, if I change to this, it shows
the data frame.
t1 = test[:10, :]
t2 = test[:5, :]
Do I miss something here? Thanks.
Answer: Unclear what happened without more information (logs would probably say why
the rbind did not take).
What version are you using? I tried your code with iris on the bleeding edge
and it all worked as expected.
By the way, rbind is typically going to be expensive, especially since what
you're semantically after is a subset:
`test[range(10) + range(20,test.nrow),:]`
should also give you the desired subset (with the caveat that you build the full
list of row indices in python and pass it over REST to h2o).
|
Multi-dimension dictionary in configparser
Question: Is it possible to store a multi-dimensional dictionary (3 deep) using the
Python 'configparser' using indentions? The work-around is to split the key
values, but wanted to know if there was a clean way to import directly into a
dictionary.
**DOES NOT WORK - USING SUB-OPTION INDENTION IN CONFIGPARSER**
[OPTIONS]
[SUB-OPTION]
option1 = value1
option2 = value2
option3 = value3
**WORKS - SPLITING USED ON SUB-OPTION VALUES**
[OPTIONS]
SUB-OPTION = 'option1, value1',
'option2, value2',
'option3, value3'
**DICTIONARY VALUES**
dict['OPTIONS']['SUB-OPTION'] = {
option1 : value1,
option2 : value2,
option3 : value3,
}
Answer: AFAIK, there isn't a nested configuration file format like that.
I suggest a JSON-like config file:
{
"OPTIONS": {
"SUB-OPTIONS": {
"option1" : value1,
"option2" : value2,
"option3" : value3,
}
}
}
Then in the code use:
from ast import literal_eval
with open("filename","r") as f:
config = literal_eval(f.read())
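If all the values in the file are valid JSON literals (numbers, quoted
strings), the standard `json` module works as well:

import json
with open("filename","r") as f:
    config = json.load(f)
print(config["OPTIONS"]["SUB-OPTIONS"]["option1"])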
|
A way to set up virtual environment directly in Eclipse for Django project?
Question: Is there a way to have Eclipse directly set up a virtual environment for
Django? I can create one independently and import into Eclipse but wondering
if there is a way to have Eclipse/PyDev set one up internally? Using Python
2.7 and/or 3.5, Django, Eclipse Mars and Virtualenv. Have searched python,
eclipse, and django forums.
edited to add prior search history.
Answer: Perhaps this could be useful:
<http://garmoncheg.blogspot.com.ar/2011/07/django-setting-up-django-
virtual.html>
|
Sending data from Python to R to perform statistical test using rpy2
Question: I want to use the [Fisher's exact test
functionality](https://stat.ethz.ch/R-manual/R-devel/library/stats/html/fisher.test.html)
(specifically, the MC simulation functionality) of R with an interface to
Python. I'm trying to do that using
[rpy2](http://rpy.sourceforge.net/rpy2/doc-dev/html/introduction.html), but
it's more difficult than I thought.
I can get an interface to the Fisher's test method using the following code:
import rpy2.robjects as robjects
fisher = robjects.r['fisher.test']
However, how do I pass a `2xN` matrix to the function and retrieve the
p-value?
Answer: Consider importing R's stats package and running the Fisher Test as a Python
function. Do note, the `result` object is `<class
'rpy2.robjects.vectors.ListVector'>` and hence must be converted to a Python
dictionary as shown below.
import rpy2
from rpy2.robjects.numpy2ri import numpy2ri
from rpy2.robjects.packages import importr
import numpy as np
cont = np.reshape(np.arange(0,4), (2,2))
statspackage = importr('stats', robject_translations={'format_perc': '_format_perc'})
result = statspackage.fisher_test(numpy2ri(cont), simulate_p_value = True, B = 100)
# DEPRECATED CONVERSION
import pandas.rpy.common as com
pyresultdict = com.convert_robj(result)
for k, v in pyresultdict.items():
print(k, v)
# data.name ['structure(c(0L, 2L, 1L, 3L), .Dim = c(2L, 2L))']
# p.value [1.0]
# estimate odds ratio 0.0
# dtype: float64
# null.value odds ratio 1.0
# dtype: float64
# conf.int [0.0, 77.90626902008512]
# alternative ['two.sided']
# method ["Fisher's Exact Test for Count Data"]
* * *
Another note, you may receive a warning about the deprecation of
`com.convert_to_r_dataframe` and `com.convert_robj(rdf)` which should be
replaced with `pandas2ri.pandas2ri()` and `pandas2ri` as suggested
[here](http://pandas.pydata.org/pandas-docs/stable/r_interface.html). However,
the conversion on my end does not work for the _ListVector_ object. Ideally,
above conversion would be replaced with below:
# CURRENT CONVERSION
from rpy2.robjects import pandas2ri
pandas2ri.activate()
pyresultdict = pandas2ri.ri2py(result)
for k, v in pyresultdict.items():
print(k, v)
|
Hadoop returning fewer results than expected
Question: I have two Python scripts, a mapper and a reducer (at this point the reducer
basically just prints, nothing else). While locally I get 4 result strings, on
Hadoop I get only 3. How does this work?
I use Amazon Elastic MapReduce with Hadoop.
mapper.py
#!/usr/bin/env python
import sys
import re
import os
# Constants declaration
WINDOW = 10
OVERLAP = 4
START_POSITION = 0
END_POSITION = 0
# regular expressions
pattern = re.compile("[a-z]*", re.IGNORECASE)
a_to_f_pattern = re.compile("[a-f]", re.IGNORECASE)
g_to_l_pattern = re.compile("[g-l]", re.IGNORECASE)
m_to_r_pattern = re.compile("[m-r]", re.IGNORECASE)
s_to_z_pattern = re.compile("[s-z]", re.IGNORECASE)
# variables initialization
converted_word = ""
next_word = ""
new_character = ""
filename = ""
prev_filename = ""
i = 0
# Read pairs as lines of input from STDIN
for line in sys.stdin:
    line = line.strip()
filename = os.environ['mapreduce_map_input_file']
filename = filename.replace("s3://source123/input/","")
# check if its a new file, and reset start position
if filename != prev_filename:
START_POSITION = 0
next_word = ""
converted_word = ""
prev_filename = filename
# loop through every word that matches the pattern
for word in pattern.findall(line):
new_character = convert(word)
converted_word = converted_word + new_character
if len(converted_word) > (WINDOW - OVERLAP):
next_word = next_word + new_character
# print "word= ", word
# print "converted_word= ", converted_word
else:
END_POSITION = START_POSITION + (len(converted_word) - 1)
print converted_word + "," + str(filename) + "," + str(START_POSITION) + "," + str(END_POSITION)
START_POSITION = START_POSITION + (WINDOW - OVERLAP)
new_character = convert(word)
converted_word = next_word + new_character
log
2016-04-27 19:58:41,293 INFO com.amazon.ws.emr.hadoop.fs.EmrFileSystem (main): Consistency disabled, using com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem as filesystem implementation
2016-04-27 19:58:41,512 INFO amazon.emr.metrics.MetricsSaver (main): MetricsConfigRecord disabledInCluster: false instanceEngineCycleSec: 60 clusterEngineCycleSec: 60 disableClusterEngine: true maxMemoryMb: 3072 maxInstanceCount: 500 lastModified: 1461784308237
2016-04-27 19:58:41,512 INFO amazon.emr.metrics.MetricsSaver (main): Created MetricsSaver j-KCDMFZJGYO89:i-995f5a41:RunJar:16480 period:60 /mnt/var/em/raw/i-995f5a41_20160427_RunJar_16480_raw.bin
2016-04-27 19:58:43,477 INFO org.apache.hadoop.yarn.client.RMProxy (main): Connecting to ResourceManager at ip-172-31-38-52.us-west-2.compute.internal/172.31.38.52:8032
2016-04-27 19:58:43,673 INFO org.apache.hadoop.yarn.client.RMProxy (main): Connecting to ResourceManager at ip-172-31-38-52.us-west-2.compute.internal/172.31.38.52:8032
2016-04-27 19:58:44,156 INFO com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem (main): Opening 's3://source123/mapper.py' for reading
2016-04-27 19:58:44,267 INFO amazon.emr.metrics.MetricsSaver (main): Thread 1 created MetricsLockFreeSaver 1
2016-04-27 19:58:44,439 INFO com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem (main): Opening 's3://source123/source_reducer.py' for reading
2016-04-27 19:58:44,628 INFO com.hadoop.compression.lzo.GPLNativeCodeLoader (main): Loaded native gpl library
2016-04-27 19:58:44,630 INFO com.hadoop.compression.lzo.LzoCodec (main): Successfully loaded & initialized native-lzo library [hadoop-lzo rev 426d94a07125cf9447bb0c2b336cf10b4c254375]
2016-04-27 19:58:45,046 INFO com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem (main): listStatus s3://source123/input with recursive false
2016-04-27 19:58:45,265 INFO org.apache.hadoop.mapred.FileInputFormat (main): Total input paths to process : 1
2016-04-27 19:58:45,336 INFO org.apache.hadoop.mapreduce.JobSubmitter (main): number of splits:9
2016-04-27 19:58:45,565 INFO org.apache.hadoop.mapreduce.JobSubmitter (main): Submitting tokens for job: job_1461784297295_0004
2016-04-27 19:58:45,710 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl (main): Submitted application application_1461784297295_0004
2016-04-27 19:58:45,743 INFO org.apache.hadoop.mapreduce.Job (main): The url to track the job: http://ip-172-31-38-52.us-west-2.compute.internal:20888/proxy/application_1461784297295_0004/
2016-04-27 19:58:45,744 INFO org.apache.hadoop.mapreduce.Job (main): Running job: job_1461784297295_0004
2016-04-27 19:58:53,876 INFO org.apache.hadoop.mapreduce.Job (main): Job job_1461784297295_0004 running in uber mode : false
2016-04-27 19:58:53,877 INFO org.apache.hadoop.mapreduce.Job (main): map 0% reduce 0%
2016-04-27 19:59:11,063 INFO org.apache.hadoop.mapreduce.Job (main): map 11% reduce 0%
2016-04-27 19:59:14,081 INFO org.apache.hadoop.mapreduce.Job (main): map 22% reduce 0%
2016-04-27 19:59:16,094 INFO org.apache.hadoop.mapreduce.Job (main): map 33% reduce 0%
2016-04-27 19:59:18,106 INFO org.apache.hadoop.mapreduce.Job (main): map 56% reduce 0%
2016-04-27 19:59:19,114 INFO org.apache.hadoop.mapreduce.Job (main): map 67% reduce 0%
2016-04-27 19:59:26,159 INFO org.apache.hadoop.mapreduce.Job (main): map 78% reduce 0%
2016-04-27 19:59:29,178 INFO org.apache.hadoop.mapreduce.Job (main): map 89% reduce 0%
2016-04-27 19:59:30,184 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2016-04-27 19:59:32,196 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 33%
2016-04-27 19:59:34,207 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 67%
2016-04-27 19:59:38,228 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 100%
2016-04-27 19:59:40,246 INFO org.apache.hadoop.mapreduce.Job (main): Job job_1461784297295_0004 completed successfully
2016-04-27 19:59:40,409 INFO org.apache.hadoop.mapreduce.Job (main): Counters: 55
File System Counters
FILE: Number of bytes read=190
FILE: Number of bytes written=1541379
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=873
HDFS: Number of bytes written=0
HDFS: Number of read operations=9
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
S3: Number of bytes read=864
S3: Number of bytes written=130
S3: Number of read operations=0
S3: Number of large read operations=0
S3: Number of write operations=0
Job Counters
Killed map tasks=1
Launched map tasks=9
Launched reduce tasks=3
Data-local map tasks=9
Total time spent by all maps in occupied slots (ms)=6351210
Total time spent by all reduces in occupied slots (ms)=2449170
Total time spent by all map tasks (ms)=141138
Total time spent by all reduce tasks (ms)=27213
Total vcore-milliseconds taken by all map tasks=141138
Total vcore-milliseconds taken by all reduce tasks=27213
Total megabyte-milliseconds taken by all map tasks=203238720
Total megabyte-milliseconds taken by all reduce tasks=78373440
Map-Reduce Framework
Map input records=5
Map output records=3
Map output bytes=124
Map output materialized bytes=562
Input split bytes=873
Combine input records=0
Combine output records=0
Reduce input groups=3
Reduce shuffle bytes=562
Reduce input records=3
Reduce output records=6
Spilled Records=6
Shuffled Maps =27
Failed Shuffles=0
Merged Map outputs=27
GC time elapsed (ms)=2785
CPU time spent (ms)=11670
Physical memory (bytes) snapshot=5282500608
Virtual memory (bytes) snapshot=28472725504
Total committed heap usage (bytes)=5977407488
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=864
File Output Format Counters
Bytes Written=130
2016-04-27 19:59:40,409 INFO org.apache.hadoop.streaming.StreamJob (main): Output directory: s3://source123/output/
Answer: The mapper task converts its inputs into lines and feeds the lines to the stdin
of the process.
In this case, you have _multiple_ input files and you're _assuming_ that all
the lines from different files are fed sequentially (i.e. file by file), but
they are likely processed in _parallel_ , so a mapper (getting a couple of
input files) could be resetting its counters more than expected by a
sequential distribution.
|
Fitting an exponential modified gaussian curve to data with Python
Question: I have a data set and a kernel density estimate for those data. I believe the
KDE should be reasonably well described by an [exponentially modified
Gaussian](https://en.wikipedia.org/wiki/Exponentially_modified_Gaussian_distribution),
so I'm trying to sample from the KDE and fit those samples with a function of
that type. However, when I try to fit using scipy.optimize.curve_fit, my fit
doesn't match the data well at all. My code is
import numpy as n
import scipy.special as sse
from scipy.optimize import curve_fit
def fit_func(x, l, s, m):
return 0.5*l*n.exp(0.5*l*(2*m+l*s*s-2*x))*sse.erfc((m+l*s*s-x)/(n.sqrt(2)*s)) # exponential gaussian
popt, pcov = curve_fit(fit_func, n.linspace(0,1,100), data)
My "data set" (from sampling my KDE) is
data = [1.00733940e-09, 1.36882036e-08, 1.44555907e-07, 1.18647634e-06, 7.56926695e-06, 3.75417381e-05, 1.44836578e-04, 4.35259159e-04, 1.02249858e-03, 1.89480681e-03, 2.83377851e-03, 3.60624100e-03, 4.30392052e-03, 5.33527267e-03, 6.95313891e-03, 8.89175932e-03, 1.05631739e-02, 1.15411608e-02, 1.18087942e-02, 1.16473841e-02, 1.14907524e-02, 1.20296850e-02, 1.42949235e-02, 1.90939074e-02, 2.59260288e-02, 3.27250866e-02, 3.73294844e-02, 3.92476016e-02, 3.94803903e-02, 3.88736022e-02, 3.76397612e-02, 3.65042464e-02, 3.72842810e-02, 4.19404962e-02, 5.12185577e-02, 6.39393269e-02, 7.75139966e-02, 8.97085567e-02, 1.00200355e-01, 1.10354564e-01, 1.22123289e-01, 1.37876215e-01, 1.60232917e-01, 1.90218800e-01, 2.25749072e-01, 2.63342328e-01, 3.01468733e-01, 3.41685959e-01, 3.86769102e-01, 4.38219405e-01, 4.95491603e-01, 5.56936603e-01, 6.20721893e-01, 6.85160043e-01, 7.49797233e-01, 8.17175672e-01, 8.92232359e-01, 9.78276608e-01, 1.07437591e+00, 1.17877517e+00, 1.29376679e+00, 1.42302331e+00, 1.56366767e+00, 1.70593547e+00, 1.84278471e+00, 1.97546304e+00, 2.10659735e+00, 2.23148403e+00, 2.34113950e+00, 2.43414110e+00, 2.52261228e+00, 2.62487277e+00, 2.75168928e+00, 2.89831664e+00, 3.04838614e+00, 3.18625230e+00, 3.30842825e+00, 3.42373645e+00, 3.53943425e+00, 3.64686003e+00, 3.72464478e+00, 3.75656044e+00, 3.74189870e+00, 3.68666210e+00, 3.58686497e+00, 3.42241586e+00, 3.16910593e+00, 2.81976459e+00, 2.39676519e+00, 1.94507169e+00, 1.51241642e+00, 1.13287316e+00, 8.22421330e-01, 5.82858108e-01, 4.07338019e-01, 2.84100125e-01, 1.98750792e-01, 1.37317714e-01, 9.01427225e-02, 5.35761233e-02]
and here is my histogram of the real data, the KDE in red, and my attempt at
fitting the KDE in black -
[](http://i.stack.imgur.com/5lR8w.png)
Answer: The exponentially modified Gaussian is defined to be a skewed distribution to
the left and as such, the shape parameter does not change the direction of
this skew.
This is what I have tried.
data.reverse()
popt,pcov=(curve_fit(fit_func, n.linspace(0,1,100), data))
fitted_curve=list(fit_func(n.linspace(0,1,100),popt[0],popt[1],popt[2]))
data.reverse()
fitted_curve.reverse()
[Plot of the data and the fitted curve](http://i.stack.imgur.com/JiVou.png)
|
Ansible correct way to get a virtualenv with a recent version of setuptools and pip
Question: Hello, today to get a virtualenv running with Vagrant (1.7.4)
I first install `python-virtualenv` with apt:
- name: Apt install
apt: name={{ item }} state=installed update_cache=yes
with_items:
## needed to make virtualenv
- python-dev
- python-setuptools
- python-virtualenv
Then with easy_install I get pip:
- easy_install: name=pip
I create the virtualenv with `shell`:
- name: == Create virtualenv
shell: virtualenv "{{ venv_name }}"
args:
chdir: "{{ home }}"
sudo: true
sudo_user: "{{ user }}"
- name: Upgrade pip wheel and setuptools
pip: name={{ item }} virtualenv="{{ home }}/{{ venv_name }}"
extra_args='--upgrade'
with_items:
- pip
- wheel
- setuptools
And end with pip, giving the virtualenv info:
- name: pip Install packages into virtualenv
pip: >
name={{ item }} virtualenv="{{ home }}/{{ venv_name }}"
virtualenv_site_packages="no"
with_items:
- ansicolors
- blist
Is that the correct way to get a virtualenv with a recent version of
setuptools and pip?
(venv)toto@vagrant-ubuntu-wily-64:~$ python -c "import pkg_resources as pkg; print(pkg.require(['setuptools'])[0].version)"
20.10.1
(venv)toto@vagrant-ubuntu-wily-64:~$ pip -V
pip 8.1.1 from /home/toto/venv/local/lib/python2.7/site-packages (python 2.7)
(venv)toto@vagrant-ubuntu-wily-64:~$ wheel version
wheel 0.29.0
(venv)toto@vagrant-ubuntu-wily-64:~$
Answer: You can require the latest version:
- name: Upgrade pip wheel and setuptools
pip: name={{ item }} virtualenv="{{ home }}/{{ venv_name }}" state=latest
extra_args='--upgrade'
with_items:
- pip
- wheel
- setuptools
- name: pip Install packages into virtualenv
pip: >
name={{ item }} virtualenv="{{ home }}/{{ venv_name }}" state=latest
virtualenv_site_packages="no"
with_items:
- ansicolors
- blist
|
Python quit unexpectedly
Question: Can anybody help with this? Whenever I try to launch tkinter, i get this
report:
> Process: Python [1106] Path:
>
> /Library/Frameworks/Python.framework/Versions/3.5/Resources/Python.app/Contents/MacOS/Python
> Identifier: org.python.python Version: 3.5.1 (3.5.1) Code Type: X86-64
> (Native) Parent Process:
> Python [1036] Responsible: Python [1036] User ID:
> 501
>
> Date/Time: 2016-04-28 00:14:59.804 -0500 OS Version:
> Mac OS X 10.10.5 (14F1713) Report Version: 11 Anonymous UUID:
> 8A5EA9E5-B94F-6C3F-2F7E-EC33C5FA8E26
>
> Time Awake Since Boot: 4900 seconds
>
> Crashed Thread: 0 Dispatch queue: com.apple.main-thread
>
> Exception Type: EXC_BAD_ACCESS (SIGSEGV) Exception Codes:
> KERN_INVALID_ADDRESS at 0x00007fff5afffff8
>
> VM Regions Near 0x7fff5afffff8: mapped file
> 000000010a37f000-000000010a409000 [ 552K] rw-/rwx SM=COW
> /System/Library/Fonts/Monaco.dfont \--> __UNIXSTACK
> 00007fff5b000000-00007fff5c000000 [ 16.0M] rw-/rwx SM=COW
> /Library/Frameworks/Python.framework/Versions/3.5/Resources/Python.app/Contents/MacOS/Python
Answer: Python **quit unexpectedly** when running the code below on my Mac (OS X
Yosemite). The offending call was:

root.config(menu_Bar = file_Menu)

Here is the full program:
# Tkinter GUI Menu
from tkinter import *
### Functions ###
# Do Nothing
def do_Nothing():
print('I just did... nothing')
### Create tkinter window ###
# Create Window
root = Tk()
#### Creating the Menu(s) ###
# Create the Menu Bar
menu_Bar = Menu(master = root)
# Create File Menu
file_Menu = Menu(master = menu_Bar)
### Displaying the Menu(s) ###
# Display Menu Bar
root.config(menu = menu_Bar)
# Display File Menu
menu_Bar.add_cascade(label = 'File', menu = file_Menu)
### File Menu Properties ####
# New
file_Menu.add_command(label = 'New', command = do_Nothing)
# Open
file_Menu.add_command(label = 'Open', command = do_Nothing)
# Exit
file_Menu.add_command(label = 'Exit', command = root.quit)
### Display tkinter window ###
root.mainloop()
The issue that made Python **_quit unexpectedly_** was that instead of
# Display Menu Bar
root.config(menu = menu_Bar)
I had originally wrote something like:
# Display Menu Bar
root.config(myMenu = menu_Bar)
In addition to that, I had to update Tcl from version **_`Apple 8.5.9`_**
to **_`ActiveTcl 8.5.18.0`_**. The website for this is here:
<https://www.python.org/download/mac/tcltk/#activetcl-8-5-18-0>
|
faster geometric average on ASCII
Question: Is it possible to speed up the following code, but without using external
modules (NumPy, etc.)? Just plain Python. Two lines of thought: speeding up the
computation in
chr(int(round( multiplOrds**(1.0/DLen), 0) ) )
or faster building of the desired structure. The aim is to find the geometric
average of the ord() values of ASCII symbols and report it as a rounded value
(symbol). The len(InDict) is anything above 1. The outcome of the example
should be
KM<I
The code:
def GA():
InStr="0204507890"
InDict={
0:"ABCDEFGHIJ",
1:"KLMNOPQRST",
2:"WXYZ#&/()?"
}
OutStr = ""
DLen = len(InDict)
for pos in zip(InStr, *InDict.values()):
if pos[0]=="0":
multiplOrds = 1
for mul in (ord(char) for char in pos[1:] if char!="!"): multiplOrds*=mul
OutStr+= chr(int(round( multiplOrds**(1.0/DLen), 0) ) )
return OutStr
if __name__ == '__main__':
import timeit
print(timeit.timeit("GA()", setup="from __main__ import GA"))
Answer: A first thought:
Concatenating strings is slow as they are immutable, therefore each
modification results in creating a new copied instance. That's why you should
not do things like:
s = ""
for i in range(1000000):
s += chr(65)
Each loop creates a new string instance one character larger than the
previous one; the old instance remains until the garbage collector kicks in.
Allocating memory is also slow.
Using a generator expression to store the partial strings and joining them
together in the end is about twice as fast and shorter to code:
s = "".join(chr(65) for i in range(1000000))
|
Exception django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet
Question: I have upgraded the django version from 1.8 to 1.9 and django rest framework to
3.3.3. I am getting this exception:
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
I have tried the following, but the exception is still there.
#__init__.py
default_app_config = 'panel.apps.PanelConfig'
And also
#apps.py
from django.apps import AppConfig
class PanelConfig(AppConfig):
name = 'panel'
def ready(self):
from panel import receivers
I did this for all apps and added these to INSTALLED_APPS:
'api.apps.ApiConfig',
'billing.apps.ApiConfig',
'incoming.apps.IncomingConfig',
'outgoing.apps.OutgoingConfig',
'panel.apps.PanelConfig',
This is my full traceback:
Unhandled exception in thread started by <function wrapper at 0x7f3eec09c7d0>
Traceback (most recent call last):
File "/home/sparrow/virtualenvs/bishnu/local/lib/python2.7/site-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/home/sparrow/virtualenvs/bishnu/local/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 109, in inner_run
autoreload.raise_last_exception()
File "/home/sparrow/virtualenvs/bishnu/local/lib/python2.7/site-packages/django/utils/autoreload.py", line 249, in raise_last_exception
six.reraise(*_exception)
File "/home/sparrow/virtualenvs/bishnu/local/lib/python2.7/site-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/home/sparrow/virtualenvs/bishnu/local/lib/python2.7/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/sparrow/virtualenvs/bishnu/local/lib/python2.7/site-packages/django/apps/registry.py", line 85, in populate
app_config = AppConfig.create(entry)
File "/home/sparrow/virtualenvs/bishnu/local/lib/python2.7/site-packages/django/apps/config.py", line 90, in create
module = import_module(entry)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/home/sparrow/virtualenvs/bishnu/local/lib/python2.7/site-packages/admin_tools/dashboard/__init__.py", line 1, in <module>
from admin_tools.dashboard.dashboards import *
File "/home/sparrow/virtualenvs/bishnu/local/lib/python2.7/site-packages/admin_tools/dashboard/dashboards.py", line 13, in <module>
from django.contrib.contenttypes.models import ContentType
File "/home/sparrow/virtualenvs/bishnu/local/lib/python2.7/site-packages/django/contrib/contenttypes/models.py", line 161, in <module>
class ContentType(models.Model):
File "/home/sparrow/virtualenvs/bishnu/local/lib/python2.7/site-packages/django/db/models/base.py", line 94, in __new__
app_config = apps.get_containing_app_config(module)
File "/home/sparrow/virtualenvs/bishnu/local/lib/python2.7/site-packages/django/apps/registry.py", line 239, in get_containing_app_config
self.check_apps_ready()
File "/home/sparrow/virtualenvs/bishnu/local/lib/python2.7/site-packages/django/apps/registry.py", line 124, in check_apps_ready
raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
The exception is still there. What is the problem? I am not getting it.
Answer: The traceback shows you that the problem is occuring in
[admin_tools](http://django-admin-tools.readthedocs.io/).
from admin_tools.dashboard.dashboards import *
File "/home/sparrow/virtualenvs/bishnu/local/lib/python2.7/site-packages/admin_tools/dashboard/dashboards.py", line 13, in <module>
from django.contrib.contenttypes.models import ContentType
It looks like [it has been fixed](https://github.com/django-admin-
tools/django-admin-tools/commit/9470008dd29db6c3cef2177839a57cdf371d21e5), so
try upgrading to the latest release, currently 0.7.2.
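For example:

pip install --upgrade django-admin-tools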
|
values at coordinates to image
Question: I have a text file with measurement data which looks like this.
x y z
1 3 -2
2 1 -3
3 1 1
2 2 3
1 2 2
2 3 0
This would imply the following measurement (on an x,y grid)
-2 0
2 3
-3 1
I want to create an image from these values where no measurement would mean
that the image is transparent. If possible I would like to map the z values
(from for example -9.4 to +3.2) to a colormap such as colormap.jet
I've tried to do this using the Python Image Library and putpixel but this is
very slow and I'm sure there must be a better way of doing this.
My current code:

basePath = os.path.dirname(os.path.realpath(__file__))  # defines the directory where the current file resides
srcFiles = glob.glob('*.pts')
for fileName in srcFiles:
data = pd.read_csv(os.path.join(basePath, fileName), names=['x', 'y', 'z'], delim_whitespace=True)
print fileName
maxX = data.x.max()
minX = data.x.min()
maxY = data.y.max()
minY = data.y.min()
minZ = data.z.min()
maxZ = data.z.max()
width = maxX-minX
height = maxY-minY
img = Image.new('L', (int(width), int(height)))
for x in range(int(width)):
for y in range(int(height)):
value = data[(data['x'] == (minX+x)) & (data['y'] == (minY+y))]['z']
if len(value) == 0:
value = 99.;
img.putpixel((x,y),int(value))
img.save('test.png')
Answer: Maybe you should just use a numpy matrix to manipulate the image. I didn't do
the csv read part as you already have it. The masked array lets you have
transparent pixels.
import numpy as np
import matplotlib.pyplot as plt
INPUT = np.array(
[[1, 3, -2]
,[2, 1, -3]
,[3, 1, 1]
,[2, 2, 3]
,[1, 2, 2]
,[2, 3, 0]])
# get ranges
xmin = INPUT[:,0].min()
xmax = INPUT[:,0].max()
ymin = INPUT[:,1].min()
ymax = INPUT[:,1].max()
zmin = INPUT[:,2].min()
zmax = INPUT[:,2].max()
# create array for image : zmax+1 is the default value
shape = (xmax-xmin+1,ymax-ymin+1)
img = np.ma.array(np.ones(shape)*(zmax+1))
for inp in INPUT:
img[inp[0]-xmin,inp[1]-ymin]=inp[2]
# set mask on default value
img.mask = (img==zmax+1)
# set a gray background for test
img_bg_test = np.zeros(shape)
cmap_bg_test = plt.get_cmap('gray')
plt.imshow(img_bg_test,cmap=cmap_bg_test,interpolation='none')
# plot
cmap = plt.get_cmap('jet')
plt.imshow(img,cmap=cmap,interpolation='none',vmin=zmin,vmax=zmax)
plt.colorbar()
plt.imsave("test.png",img)
plt.show()
plt.close()
[](http://i.stack.imgur.com/mp04r.png)
Note that imsave does not save the figure I show here but the image as you
want it, which wouldn't be interesting with 3x3 pixels.
|
Python: How to add content of file to list from position
Question: I have a JSON file containing various objects each containing elements. With
my python script, I only keep the objects I want, and then put the elements I
want in a list. But the element has a prefix, which I'd like to strip from
the list. The JSON after my script runs looks like this:
{
"ip_prefix": "184.72.128.0/17",
"region": "us-east-1",
"service": "EC2"
}
The "IP/mask" is what I'd like to keep. The List looks like that:
'"ip_prefix": **"23.20.0.0/14"** ,'
So what can I do to only keep **"23.20.0.0/14"** in the list?
Here is the code:
json_data = open(jsonsourcefile)
data = json.load(json_data)
print (destfile)
d=[]
for objects in (data['prefixes']):
if servicerequired in json.dumps(objects):
#print(json.dumps(objects, sort_keys=True, indent=4))
with open(destfile, 'a') as file:
file.write(json.dumps(objects, sort_keys=True, indent=4 ))
with open(destfile, 'r') as reads:
liste = list()
for strip in reads:
if "ip_prefix" in strip:
strip = strip.strip()
liste.append(strip)
print(liste)
Thanks, dersoi
Answer: I've refactored your code, try this out:
import json
with open('sample.json', 'r') as data:
json_data = json.loads(data.read())
print json_data.get('ip_prefix')
# Output: "184.72.128.0/17"
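For the full file from your question, a sketch (assuming a top-level
`prefixes` list and a `servicerequired` string as in your code, and filtering
on the `service` field directly instead of the dumped JSON):

ips = [p['ip_prefix'] for p in json_data['prefixes']
       if servicerequired in p['service']]
print ips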
|
importing swift file to obj-c
Question: Okay, this one is too much for me. I'm trying to manually import files from
socket-io, which are written in Swift, into my project, which is fully written in
Obj-C.
I have read the doc from [Swift and
obj-c](https://developer.apple.com/library/ios/documentation/Swift/Conceptual/BuildingCocoaApps/MixandMatch.html#//apple_ref/doc/uid/TP40014216-CH10-ID138),
but that's not really helpful at all. And from [socket-
io](https://github.com/socketio/socket.io-client-swift), same thing. All it
says is to download the github project, import the "source" folder to my obj-c
project, and then follow the instructions from the Apple doc, which in my opinion
are definitely not clear at all.
My question is: what is the concept of a Swift module? I definitively haven't
found what a module conceptually is, nor how to build one from a folder
of Swift files. I think it is a collection of Swift classes, but with
Apple, I'm not even sure of this.
My project is structured like this:
.
+-- projectName
+-- Source
| +-- SockectFile_0.swift
| +-- SockectFile_1.swift
| +-- SockectFile_2.swift
| +-- SockectFile_3.swift
+-- ViewController
| +-- viewController_0.h
| +-- viewController_0.m
Then how do I import the whole "Source" folder, with all these files inside, into my
viewController_0.m? I don't even have any idea how to compile it as a module,
or whether that is the right way to do it. Like I said, I'm in a confused state right
now.
Thanks for the answers in advance.
**Note** After further research, I have given up. What I did is compile the
entire Swift project alone and import it as a framework. But of course, it is
too much for Xcode to handle the framework as a fat binary. Sorry if I sound
harsh, but I don't understand why Xcode is such a bad IDE tool. For
info, it took me approximately 1 minute to import socketIO in my Python
project, where it took me 3 hours with Xcode with the source code, and I just
dodged the problem because what I did is the binary solution (Xcode is not
even good at making a simple fat binary by itself, where it took 3 lines of
script to implement). I really hate it.
Answer: # Making Swift Module
You have to follow these steps: go to Build Settings > search "Defines Module" and set it to
**YES**, then search "Product Module Name" and give a name for your module there
[](http://i.stack.imgur.com/Ju2NP.png)
Then
[](http://i.stack.imgur.com/kkoDp.png)
You can use your module like `@import MQTTKit;`
[](http://i.stack.imgur.com/gwTAd.png)
|
Apply power fit to data by using levenberg-marquardt algorithm in python
Question: Hi everybody! I am a beginner in Python and data analysis, and ran into a
problem while fitting a power function to my data. [Here I plotted my
dataset as a scatterplot](http://i.stack.imgur.com/V173w.png)
I want to plot a power function with an exponent around -1, but after I apply
the Levenberg-Marquardt method, using the lmfit library in Python, [I get the
following faulty image.](http://i.stack.imgur.com/FxGWE.png) I tried to modify
the initial parameters, but it didn't help.
Here is my code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from lmfit import minimize, Parameters, Parameter, report_fit
be = pd.read_table('...',
skipinitialspace=True,
names = ["CoM", "slope", "slope2"])
x=be["CoM"]
data=be["slope"]
def fcn2min(params, x, data):
n2 = params['n2'].value
n1 = params['n1'].value
model = n1 * x ** n2
return model - data #that's what you want to minimize
# create a set of Parameters
# 'value' is the initial condition
params = Parameters()
params.add('n2', value= -1.00)
params.add('n1',value= 23.0)
# do fit, here with leastsq model
result = minimize(fcn2min, params, args=(be["CoM"],be["slope"]))
#calculate final result
final = data + result.residual
resid = result.residual
# write error report
report_fit(result)
#plot results
xplot = x
yplot = result.params['n1'].value * x ** result.params['n2'].value
plt.figure(figsize=(15,6))
plt.ylabel('OD-slope',fontsize=18, color='blue')
plt.xlabel('CoM height_Sz [m]',fontsize=18, color='blue')
plt.plot(be["CoM"],be["slope"],"o", label="slope_flat")
plt.plot(be["CoM"],be["slope2"],"+",color='r', label="slope_curv")
plt.plot(xplot,yplot)
plt.legend()
plt.savefig('plot2')
plt.show()
I don't quite understand what the problem is with this, so if you have any
observations, thank you very much.
Answer: It's a little hard to tell what the question is. It looks to me like the fit
completed and gave a reasonably good fit, but you don't provide the fit
statistics or a report of the parameters.
If you're asking about all the green lines for the "COM" array (the best
fit?), this is almost certainly because the starting x axis "height_Sz" data
was not sorted to be strictly increasing. That's OK for the fit, but plotting
an X-Y trace with a line expects the data to be in order.
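A minimal sketch of the fix (column name taken from your code):

be = be.sort_values('CoM') # sort by the x axis before computing/plotting
xplot = be['CoM']
yplot = result.params['n1'].value * xplot ** result.params['n2'].value
plt.plot(xplot, yplot)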
|
Flask-Python import data from csv file
Question: I am trying to import a csv file from a webpage using flask. I am able to
import the data from the csv file and return the data as json. However, I
would like to print only the first sample from the data. I have attached my
code and the flask error below. The csv file I am using is
[csvfile](https://drive.google.com/open?id=0B6-GnPywzeVkeWpVY3F3WllTc3M). The
returned json data looks something like this
{
"result": [
[
"0.011223",
"0.018274",
"0.071568",
"0.3407",
"0.50367",
"0.63498",
"0.45607",
"0.39945",
"0.27201",
"0.23569",
"0.29102",
"0.15327",
"0.095266",
"0.059091",
"0.014877",
"0.00010369",
"0.049384",
"0.12681",
"0.24325",
"0.30725",
"0.4259",
"0.56476",
"0.606",
"0.1001",
"0.5427",
"0.63342",
"0.62526",
"0.59211",
"0.59013",
"0.50669",
"0.42666",
"0.29487",
"0.20149",
Please advise what is wrong with the script.
from flask import Flask, request, jsonify, render_template
from flask.ext import excel
import json, csv
app=Flask(__name__)
app.debug = True
@app.route("/upload", methods=['GET', 'POST'])
def upload_file():
if request.method == 'POST':
a= jsonify({"result": request.get_array(field_name='file')})
entries = json.loads(a)
entry=entries['result'][0]
return "<h2>'entry=%f'</h2>"%entry
return '''
<!doctype html>
<title>Upload an excel file</title>
<h1>Excel file upload (csv, tsv, csvz, tsvz only)</h1>
<form action="" method=post enctype=multipart/form-data><p>
<input type=file name=file><input type=submit value=Upload>
</form>
'''
if __name__ == "__main__":
app.run()
TypeError
TypeError: expected string or buffer
Traceback (most recent call last)
File "C:\Users\Vikrant\Desktop\Flask1\chap1\lib\site-packages\flask\app.py", line 1836, in __call__
return self.wsgi_app(environ, start_response)
File "C:\Users\Vikrant\Desktop\Flask1\chap1\lib\site-packages\flask\app.py", line 1820, in wsgi_app
response = self.make_response(self.handle_exception(e))
File "C:\Users\Vikrant\Desktop\Flask1\chap1\lib\site-packages\flask\app.py", line 1403, in handle_exception
reraise(exc_type, exc_value, tb)
File "C:\Users\Vikrant\Desktop\Flask1\chap1\lib\site-packages\flask\app.py", line 1817, in wsgi_app
response = self.full_dispatch_request()
File "C:\Users\Vikrant\Desktop\Flask1\chap1\lib\site-packages\flask\app.py", line 1477, in full_dispatch_request
rv = self.handle_user_exception(e)
File "C:\Users\Vikrant\Desktop\Flask1\chap1\lib\site-packages\flask\app.py", line 1381, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "C:\Users\Vikrant\Desktop\Flask1\chap1\lib\site-packages\flask\app.py", line 1475, in full_dispatch_request
rv = self.dispatch_request()
File "C:\Users\Vikrant\Desktop\Flask1\chap1\lib\site-packages\flask\app.py", line 1461, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "C:\Users\Vikrant\Desktop\Flask1\Flaskr\flaskr_t2.py", line 12, in upload_file
entries = json.loads(a)
File "c:\python27\Lib\json\__init__.py", line 339, in loads
return _default_decoder.decode(s)
File "c:\python27\Lib\json\decoder.py", line 364, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
TypeError: expected string or buffer
Answer: I guess the issue here is with `jsonify`.
According to [the
docs](http://flask.pocoo.org/docs/0.10/api/#flask.json.jsonify):
> flask.json.jsonify(*args, **kwargs)
>
> Creates a Response with the JSON representation of the given arguments with
> an application/json mimetype.
(See also [this answer](http://stackoverflow.com/a/13172658/4653485).)
You'd typically use it to send a json to a client (like in an API, for
instance) and let it handle the protocol stuff.
When writing this:
a= jsonify({"result": request.get_array(field_name='file')})
entries = json.loads(a)
it looks like you expect it to return just the json data, not a full response.
Did you try to print `a` and see what's in there? You may also want to print
`request.get_array(field_name='file')` as it looks like you are serializing
then deserializing the data.
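A minimal sketch of the fix, skipping the serialize/deserialize round trip
entirely (`get_array` already returns plain Python lists of rows):

if request.method == 'POST':
    entries = request.get_array(field_name='file')
    entry = entries[0][0] # first value of the first row
    return "<h2>entry=%s</h2>" % entry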
|
Python cannot load cx_Oracle on Windows 7
Question: I installed python-2.7.amd64.msi and cx_Oracle-5.1.2-11g.win-amd64-py2.7.msi.
I've poked around a lot with PATH and PYTHONPATH environment variables but
nothing has helped loading the cx_Oracle module. Currently PYTHONPATH is set
to C:\Python27\Lib\site-packages
My exceedingly basic program is
import sys
print sys.path
import cx_Oracle
conn_str = u'xxx/xxx@server/XXX'
conn = cx_Oracle.connect(conn_str)
c = conn.cursor()
c.execute(u'select * from table')
conn.close()
The program output is:
['C:\\Users\\terry\\IdeaProjects\\PythonScripts', 'C:\\Python27\\Lib\\site-packages', 'C:\\WINDOWS\\system32\\python27.zip', 'C:\\Python27\\DLLs', 'C:\\Python27\\lib', 'C:\\Python27\\lib\\plat-win', 'C:\\Python27\\lib\\lib-tk', 'C:\\Python27']
Traceback (most recent call last):
File "OracleTest.py", line 4, in <module>
import cx_Oracle
ImportError: DLL load failed: The specified module could not be found.
I have also added the Registry entries as detailed
[here](http://stackoverflow.com/questions/17872234/how-to-add-python-to-
windows-registry)
This works fine on Linux so it seems I have something wrong with the windows
setup. But I've pretty much run out of ideas.
Answer: This problem turned out (I think) to be that I had not set the ORACLE_HOME
environment variable in Windows. This must point to your Oracle instantclient
directory e.g. ORACLE_HOME=C:\instantclient_11_2
The "I think" part of the story is that even after setting that it did not
help. I uninstalled cx_Oracle and reinstalled it from scratch. This time I
also used the cx_Oracle.EXE from the python web site NOT the cx_Oracle.MSI
file from sourceforge. In theory they would do the same thing. But in theory
it wouldn't have taken me over a day to get the environment set up.
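For reference, a quick way to set this for the current cmd session (the path
is the example from above; adjust it to your install):

set ORACLE_HOME=C:\instantclient_11_2
set PATH=%ORACLE_HOME%;%PATH%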
|
How to do configuration of a python environment
Question: During the last years I mostly used django. Django uses the environment
variable `DJANGO_SETTINGS_MODULE` to find its configuration.
Now I have a project where I don't use django and I would like to have a clean
and easy to understand way to load the configuration.
Example:
I want a global boolean to turn debugging on or off:
settings.DEBUG
I guess there are thousands of ways to do it.
But what is the most pythonic way to store and load project settings?
... maybe the Python community already settled on a well known default way I
just don't know yet.
Answer: From: <https://docs.python.org/2/library/constants.html>
Use the global immutable constant
__debug__
This constant is true if Python was not started with an -O option.
If you mean to set the logging level, use `logging.setLevel(lvl)`:
<https://docs.python.org/2/library/logging.html#logging.Logger.setLevel>
_Update:_
For general global constants I would use a singleton, i.e. a module, to store
the constants you need. I.e. create a module `config.py`
DEBUG = False
OTHER_THING = True
From other modules you can
import config
print(config.DEBUG)
|
How to change the point size for regplot(), seaborn's scatter plot function (python)
Question: I want to be able to set the point size when plotting like this:
sns.regplot(y=[1,3,4,2,5], x=[range(5)], data=df,
marker='o', color='red')
plt.show()
Do you guys know how?
Answer: To do this you can feed the `regplot()` function the `scatter_kws` arg like
so:
import seaborn as sns
tips = sns.load_dataset("tips")
sns.regplot(x="total_bill", y="tip", data=tips,
marker='o', color='red', scatter_kws={'s':2})
[](http://i.stack.imgur.com/MzuGT.png)
sns.regplot(x="total_bill", y="tip", data=tips,
marker='o', color='red', scatter_kws={'s':20})
[](http://i.stack.imgur.com/iXqmd.png)
|
Python Classes order of executing things
Question: I'm trying to make a window full of balloons disappear one by one (clicking on
them), using Python 2, but I'm a little confused about where I could capture my
mouse events; at the moment, clicking does nothing. The code is attached. I'm
inspired by the code at <http://effbot.org/zone/tkinter-canvas-find-
withtag.htm>, but I'm trying to make my own arrangements.
I'm getting a little confused about how Python classes, attributes, and events
are related. Thanks for your ideas.
class Baloons:
def __init__(self,wdw, dimension):
self.canvas= Canvas(wdw, width=dimension, height=dimension)
self.canvas.pack()
self.update(self.canvas)
self.click2(self.canvas)
def click2(event, canvas):
if canvas.find_withtag(CURRENT):
canvas.itemconfig(CURRENT, state="hidden")
canvas.update_idletasks()
canvas.after(200)
canvas.delete("all")
canvas.itemconfig(CURRENT, fill="red")
self.update(canvas)
self.click1(canvas)
def update(self,canvas):
for i in range(200):
x, y = randint(0, 400-1), randint(0, 400-1)
canvas.create_oval(x-5, y-5, x+5, y+5, fill="red")
def click1(self,canvas):
canvas.bind("<Button-1>", click2(canvas))
if __name__ == "__main__":
root=Tk()
dimension = 200
b=Baloons(root,dimension,delay)
root.mainloop()
Answer: > I'm trying to make a window full of balloons disappear one-by-one (clicking
> on it)
That is not what your code is trying to do. Your code is hiding it, destroying
everything, trying to configure something you've destroyed, then recreating
new objects.
To answer the question of how to make balloons disappear by clicking on them
you can throw away about a fourth of your code.
Following is a working example. Mostly I just re-implemented `click2` to
remove all of the code unrelated to deleting the item under the cursor. I also
removed `click1` because I didn't see any reason to put the binding in a
function. You only need to set up the binding once.
I also removed the need to pass `canvas` around. Since you're using objects,
every method already has a way to access the canvas.
from Tkinter import *
from random import randint
class Baloons:
def __init__(self,wdw, dimension):
self.canvas= Canvas(wdw, width=dimension, height=dimension)
self.canvas.pack()
self.update()
self.canvas.bind("<Button-1>", self.click2)
def click2(self, event):
item = self.canvas.find_withtag(CURRENT)
if item:
self.canvas.delete(item)
def update(self):
for i in range(200):
x, y = randint(0, 400-1), randint(0, 400-1)
self.canvas.create_oval(x-5, y-5, x+5, y+5, fill="red")
if __name__ == "__main__":
root=Tk()
dimension = 200
b=Baloons(root,dimension)
root.mainloop()
|
Plot pandas dataframe with varying number of columns along imshow
Question: I want to plot an image and a pandas bar plot side by side in an iPython
notebook. This is part of a function so that the dataframe containing the
values for the bar chart can vary with respect to number of columns.
The libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
Dataframe
faces = pd.DataFrame(...) # return values for 8 characteristics
This returns the bar chart I'm looking for and works for a varying number
of columns.
faces.plot(kind='bar').set_xticklabels(result[0]['scores'].keys())
But I didn't find a way to plot it in a pyplot figure also containing the
image. This is what I tried:
fig, (ax_l, ax_r) = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))
ax_l.imshow( img )
ax_r=faces.plot(kind='bar').set_xticklabels(result[0]['scores'].keys())
The output I get is the image on the left and an empty plot area, with the
correct plot below. There is
ax_r.bar(...)
but I couldn't find a way around having to define the columns to be plotted.
Answer: You just need to specify your axes object in your `DataFrame.plot` calls.
In other words: `faces.plot(kind='bar', ax=ax_r)`
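A minimal sketch of the whole figure, reusing `img`, `faces`, and `result` from
the question:
    fig, (ax_l, ax_r) = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))
    ax_l.imshow(img)                  # image on the left axes
    faces.plot(kind='bar', ax=ax_r)   # bar chart drawn onto the right axes
    ax_r.set_xticklabels(result[0]['scores'].keys())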
|
Calculating vectors with the cosine law (Python)
Question: I was trying to develop a little console application for solving additions
between vectors using the Cosine Law:
sum = sqrt((s1 ** 2) + (s2 ** 2) + (2 * s1 * s2 * cos(angle)))
print(sum)
`# Where s1 and s2 are the sizes of the vectors, respectively.`
But then, the cos in the equation returned a weird value (the angle was 60, so
`cos(angle)` should be 1/2, right?).
Also, I tried changing the `cos` with `acos` after reading other solutions,
but it returned `ValueError: math domain error`.
Does anyone know how to solve this?
Answer: Python's trigonometric functions use radians, rather than degrees.
Fortunately, the `math` module includes a function to perform the conversion
for you:
    from math import sqrt, cos, radians  # sqrt needs to be imported too
    sum = sqrt((s1 ** 2) + (s2 ** 2) + (2 * s1 * s2 * cos(radians(angle))))
    print(sum)
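A quick sanity check in the interpreter (the tiny error is just floating
point):
    >>> from math import cos, radians
    >>> cos(radians(60))
    0.5000000000000001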
|
Parsing Text with Python RegEx re.findall
Question: I have a long string that I need to parse in groups, but need to control it
more.
import re
RAW_Data = "Name Multiple Words Testing With 1234 Numbers and this stuff* ((Bla Bla Bla (Bla Bla) A40 & A41)) Name Multiple Words Testing With 3456 Numbers and this stuff2* ((Bla Bla Bla (Bla Bla) A42 & A43)) Name Multiple Words Testing With 78910 Numbers and this stuff3* ((Bla Bla Bla (Bla Bla) A44 & A45)) Name Multiple Words Testing With 1234 Numbers and this stuff4* ((Bla Bla Bla (Bla Bla) A46 & A47)) Name Multiple Words Testing With 1234 Numbers and this stuff5* ((Bla Bla Bla (Bla Bla) A48 & A49)) Name Multiple Words Testing With 1234 Numbers and this stuff6* ((Bla Bla Bla (Bla Bla) A50 & A51)) Name Multiple Words Testing With 1234 Numbers and this stuff7* ((Bla Bla Bla (Bla Bla) A52 & A53)) Name Multiple Words Testing With 1234 Numbers and this stuff8* ((Bla Bla Bla (Bla Bla) A54 & A55)) Name Multiple Words Testing With 1234 Numbers and this stuff9* ((Bla Bla Bla (Bla Bla) A56 & A57)) Name Multiple Words Testing With 1234 Numbers and this stuff10* ((Bla Bla Bla (Bla Bla) A58 & A59)) Name Multiple Words Testing With 1234 Numbers and this stuff11* ((Bla Bla Bla (Bla Bla) A60 & A61)) Name Multiple Words Testing With 1234 Numbers and this stuff12* ((Bla Bla Bla (Bla Bla) A62 & A63)) Name Multiple Words Testing With 1234 Numbers and this stuff13* ((Bla Bla Bla (Bla Bla) A64 & A65)) Name Multiple Words Testing With 1234 Numbers and this stuff14* ((Bla Bla Bla (Bla Bla) A66 & A67)) Name Multiple Words Testing With 1234 Numbers and this stuff15* ((Bla Bla Bla (Bla Bla) A68 & A69)) Name Multiple Words Testing With 1234 Numbers and this stuff16*"
fromnode = re.findall('(.*?)(?=\*\s)', RAW_Data)
print fromnode
del fromnode
del RAW_Data
The results are: 'Name Multiple Words Testing With 1234 Numbers and this
stuff', '', ' ((Bla Bla Bla (Bla Bla) A40 & A41)) Name Multiple Words Testing
With 3456 Numbers and this stuff2' _........ and so on._
I can't seem to capture only the strings like "Name Multiple Words Testing
With 3456 Numbers and this stuff" and omit all of the strings like "((Bla Bla
Bla (Bla Bla) A40 & A41))". Any help would be much appreciated.
Answer: You can split with
r'\*\s*\({2}.*?\){2}\s*'
The pattern ([see demo](https://regex101.com/r/yH8eJ0/1)) matches:
* `\*` \- a literal asterisk
* `\s*` \- zero or more whitespaces
* `\({2}` \- exactly 2 opening parentheses
* `.*?` \- zero or more characters other than a newline (NOTE: add the `re.S` flag if you need to match across several lines) as few as possible up to the first
* `\){2}` \- double closing parentheses
* `\s*` \- 0+ whitespace.
ALSO: The [same, but unrolled (thus, a bit more efficient)
regex](https://regex101.com/r/yH8eJ0/2):
\*\s*\({2}[^)]*(?:\)(?!\))[^)]*)*\){2}\s*
See [IDEONE demo](http://ideone.com/XjrxIw):
import re
p = re.compile(r'\*\s*\({2}.*?\){2}\s*')
test_str = "Name Multiple Words Testing With 1234 Numbers and this stuff* ((Bla Bla Bla (Bla Bla) A40 & A41)) Name Multiple Words Testing With 3456 Numbers and this stuff2* ((Bla Bla Bla (Bla Bla) A42 & A43)) Name Multiple Words Testing With 78910 Numbers and this stuff3* ((Bla Bla Bla (Bla Bla) A44 & A45)) Name Multiple Words Testing With 1234 Numbers and this stuff4* ((Bla Bla Bla (Bla Bla) A46 & A47)) Name Multiple Words Testing With 1234 Numbers and this stuff5* ((Bla Bla Bla (Bla Bla) A48 & A49)) Name Multiple Words Testing With 1234 Numbers and this stuff6* ((Bla Bla Bla (Bla Bla) A50 & A51)) Name Multiple Words Testing With 1234 Numbers and this stuff7* ((Bla Bla Bla (Bla Bla) A52 & A53)) Name Multiple Words Testing With 1234 Numbers and this stuff8* ((Bla Bla Bla (Bla Bla) A54 & A55)) Name Multiple Words Testing With 1234 Numbers and this stuff9* ((Bla Bla Bla (Bla Bla) A56 & A57)) Name Multiple Words Testing With 1234 Numbers and this stuff10* ((Bla Bla Bla (Bla Bla) A58 & A59)) Name Multiple Words Testing With 1234 Numbers and this stuff11* ((Bla Bla Bla (Bla Bla) A60 & A61)) Name Multiple Words Testing With 1234 Numbers and this stuff12* ((Bla Bla Bla (Bla Bla) A62 & A63)) Name Multiple Words Testing With 1234 Numbers and this stuff13* ((Bla Bla Bla (Bla Bla) A64 & A65)) Name Multiple Words Testing With 1234 Numbers and this stuff14* ((Bla Bla Bla (Bla Bla) A66 & A67)) Name Multiple Words Testing With 1234 Numbers and this stuff15* ((Bla Bla Bla (Bla Bla) A68 & A69)) Name Multiple Words Testing With 1234 Numbers and this stuff16*"
print(re.split(p, test_str))
**UPDATE**
A regex for use with `re.findall`:
(?:\*\s*\(\([^)]*(?:\)(?!\))[^)]*)*\)\))?\s*([^*]*(?:\*(?!\s*\(\()[^*]*)*)\s*
See the [regex demo](https://regex101.com/r/bG6tM6/1)
Horrified at the looks of it? It is just the unrolled version of a much
simpler
[`(?:\*\s*\(\(.*?\)\))?\s*(.*?(?=\*\s*(?:\(\(|$)))`](https://regex101.com/r/oX9fF9/1).
See the [IDEONE demo](http://ideone.com/WoiFJn).
|
Python Rearrange & remove character from html page title
Question: I'm running Python 2.7.11 on Windows 10 using beautifulsoup4 and lxml.
import urllib2
import re
from bs4 import BeautifulSoup
soup = BeautifulSoup(urllib2.urlopen("http://www.daisuki.net/us/en/anime/watch.GUNDAMUNICORNRE0096.13142.html"), "lxml")
Name = soup.title.string
print(Name.replace('#', ""))
Output:
01 DEPARTURE 0096 - MOBILE SUIT GUNDAM UNICORN RE:0096 - DAISUKI
Desired Output:
MOBILE SUIT GUNDAM UNICORN RE:0096 - 01 DEPARTURE 0096
How would I go about removing the "- DAISUKI" at the end and reordering the
string?
Answer: Split by `-` and rearrange parts of the title:
>>> import urllib2
>>> from bs4 import BeautifulSoup
>>>
>>> soup = BeautifulSoup(urllib2.urlopen("http://www.daisuki.net/us/en/anime/watch.GUNDAMUNICORNRE0096.13142.html"), "lxml")
>>> Name = soup.title.string
>>>
>>> " - ".join(Name.replace('#', "").split(" - ")[1::-1])
u'MOBILE SUIT GUNDAM UNICORN RE:0096 - 01 DEPARTURE 0096'
|
Calling variables in another module python
Question: I'm trying to access variables I created in one function from another module
in Python to plot a graph; however, Python can't find them. Here's some example
code:
class1:
def method1
var1 = []
var2 = []
#Do something with var1 and var2
print var1
print var2
return var1,var2
sample = class1()
sample.method1
here is class 2
from class1 import *
class number2:
sample.method1()
This does as intended and prints var1 and var2 but I can't call var1 or var2
inside class number 2
FIXED EDIT: In case anyone else has this issue, I fixed it by importing this
above class two
    from Module1 import Class1, sample
And then inside class2
    var1, var2 = sample.method1()
Answer: The code you posted is full of syntax errors, as Francesco said in his
comment. Perhaps you could paste the correct one.
You don't import from a class but from a package or a module. Plus you don't
"call" a variable unless it's a
[callable](http://stackoverflow.com/a/111255/3156085).
In your case you could just have :
**file1.py :**
class class1:
def __init__(self): # In your class's code, self is the current instance (= this for othe languages, it's always the first parameter.)
self.var = 0
def method1(self):
print(self.var)
sample = class1()
**file2.py :**
from file1 import class1, sample
class class2(class1):
def method2(self):
self.var += 1
print(self.var)
v = class2() # create an instance of class2 that inherits from class1
v.method1() # calls method inherited from class1 that prints the var instance variable
sample.method1() # same
print(v.var) # You can also access it from outside the class definition.
v.var += 2 # You also can modify it.
print(v.var)
v.method2() # Increment the variable, then print it.
v.method2() # same.
sample.method1() # Print var from sample.
#sample.method2() <--- not possible because sample is an instance of class1 and not of class2
Note that to have `method1()` in `class2`, `class2` must inherit from
`class1`. But you can still import variables from other packages/modules.
Note also that `var` is unique for each instance of the class.
|
pylab.show() did not work
Question: I run the python code as it is from this website:
<http://cvxopt.org/examples/book/rls.html>
To show it here:
# Figure 4.11, page 185.
# Regularized least-squares.
....
pylab.figure(1, facecolor='w')
pylab.plot(lbnds, alpha1, 'b-', ubnds, alpha2, 'b-')
kmax = max([ k for k in range(len(alpha1)) if alpha1[k] <
blas.nrm2(xls)**2 ])
pylab.plot( [ blas.nrm2(b)**2 ] + lbnds[:kmax] +
[ blas.nrm2(A*xls-b)**2 ], [0.0] + alpha1[:kmax] +
[ blas.nrm2(xls)**2 ], '-', linewidth=2)
pylab.plot([ blas.nrm2(b)**2, blas.nrm2(A*xls-b)**2 ],
[0.0, blas.nrm2(xls)**2], 'bo')
pylab.fill(lbnds[-1::-1] + ubnds + [ubnds[-1]],
alpha1[-1::-1] + alpha2+ [alpha1[-1]], facecolor = '#D0D0D0')
pylab.axis([0, 15, -1.0, 15])
pylab.xlabel('||A*x-b||_2^2')
pylab.ylabel('||x||_2^2')
pylab.grid()
pylab.title('Regularized least-squares (fig. 4.11)')
pylab.show()
It is supposed to show the plot after I run `python rls.py`, but nothing
appears. Any help? Thank you
Answer: You need to enable an interactive backend to get a plot viewer window when
using `pylab.show()`. The 'agg' backend is non-interactive (although there
are interactive backends based on Agg, e.g. TkAgg, Qt5Agg).
You have a few options, but the simplest option for MacOS X is the 'macosx'
backend. You can enable this by putting the following at the top of your
script, before `pylab` (or `pyplot`) is imported:
    import matplotlib
    matplotlib.use('macosx')
|
How can I add all the images in a directory to a word document using Python
Question: I have been trying to make this code work so I can add hundreds of pictures
into a Microsoft Word document but can't quite get it to work. I think the
problem is in defining the location of the images correctly.
import win32com.client as win32
import os
#creating a word application object
wordApp = win32.gencache.EnsureDispatch('Word.Application') #create a word application object
wordApp.Visible = True # hide the word application
doc = wordApp.Documents.Add() # create a new application
#Formating the document
doc.PageSetup.RightMargin = 20
doc.PageSetup.LeftMargin = 20
doc.PageSetup.Orientation = win32.constants.wdOrientLandscape
# a4 paper size: 595x842
doc.PageSetup.PageWidth = 595
doc.PageSetup.PageHeight = 842
header_range= doc.Sections(1).Headers(win32.constants.wdHeaderFooterPrimary).Range
header_range.ParagraphFormat.Alignment = win32.constants.wdAlignParagraphCenter
header_range.Font.Bold = True
header_range.Font.Size = 20
header_range.Text = "Header Of The Document"
# Inserting Tables
total_column = 2
total_row = 5
rng = doc.Range(0,0)
rng.ParagraphFormat.Alignment = win32.constants.wdAlignParagraphCenter
table = doc.Tables.Add(rng,total_row, total_column)
table.Borders.Enable = False
if total_column > 1:
table.Columns.DistributeWidth()
#Collecting images in the same directory and inserting them into the document
frame_max_width= 167 # the maximum width of a picture
frame_max_height= 125 # the maximum height of a picture
filenames = os.listdir("some_directory") #Do I need this? I think it might be the issue...
for index, filename in enumerate(filenames): # loop through all the files and folders for adding pictures
if os.path.isfile(os.path.join(os.path.abspath("."), filename)): # check whether the current object is a file or not
if filename[len(filename)-3: len(filename)].upper() == 'JPG': # check whether the current object is a JPG file
#calculating the position of each image to be put into the correct table cell
cell_column= index % total_column + 1
cell_row = index / total_column + 1
print 'cell_column=%s,cell_row=%s' % (cell_column,cell_row)
#we are formatting the style of each cell
cell_range= table.Cell(cell_row, cell_column).Range
cell_range.ParagraphFormat.LineSpacingRule = win32.constants.wdLineSpaceSingle
cell_range.ParagraphFormat.SpaceBefore = 0
cell_range.ParagraphFormat.SpaceAfter = 3
#this is where we are going to insert the images
current_pic = cell_range.InlineShapes.AddPicture(os.path.join(os.path.abspath("."), filename))
width, height = (frame_max_height*width/height, frame_max_height)
#changing the size of each image to fit the table cell
current_pic.Height= height
current_pic.Width= width
#putting a name underneath each image which can be handy
table.Cell(cell_row, cell_column).Range.InsertAfter("\n"+filename)
This code gets my doc created and one image inserted, but then I get the
following error:
    Traceback (most recent call last):
      File "pic_dump.py", line 56, in <module>
        width, height = (frame_max_height*width/height, frame_max_height)
    NameError: name 'width' is not defined
Any help gratefully received.
Answer: The problem is on this line: `width, height = (frame_max_height*width/height,
frame_max_height)`.
You're setting `width` to `frame_max_height*width/height`, but it is impossible
for Python to multiply `frame_max_height` by `width` when you haven't told
Python what `width` is. You need to assign `width` some value first to resolve
this error.
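A minimal sketch of one way to fix it, assuming the intent is to scale each
picture to the frame height while preserving its aspect ratio. It reads the
dimensions Word assigned to the `InlineShape` that `AddPicture` just returned:
    # derive the natural aspect ratio from the freshly inserted picture
    aspect = float(current_pic.Width) / current_pic.Height
    width, height = (frame_max_height * aspect, frame_max_height)
    current_pic.Width = width
    current_pic.Height = height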
|
Python checking an Entry isn't working
Question:
__author__ = "Jack Ashton"
from tkinter import *
import random
window = Tk()
canvas = Canvas(window, width=800, height=600, background="dark cyan")
canvas.pack_propagate(False)
placeholder = StringVar
#--------------------------------------------------------------------
#images
title_image = PhotoImage(file="title_image.png")
button_bg_image = PhotoImage(file="button_bg_image.png")
game_bg_image = PhotoImage(file="Untitled-3.png")
sprite_bg_image = PhotoImage(file="Untitled-4.png")
knight = None
dragon = None
#--------------------------------------------------------------------
def start(canvas):
canvas.pack()
main_menu()
window.mainloop()
def main_menu():
#--- globals ---#
global title_image
global button_bg_image
global difficulty
#--- images ---#
title = canvas.create_image(402, 152, image=title_image)
button_bg = canvas.create_image(402, 452, image=button_bg_image)
#--- label ---#
instruction_label = Label(text="To start the game choose a difficulty!")
instruction_label.pack(side=TOP)
#--- buttons ---#
#- difficulty buttons -#
difficulty_button_1 = Button(canvas, text="Easy", command=lambda: play_game("Easy", button_list))
difficulty_button_1.place(width=100, x=100, y=450)
difficulty_button_2 = Button(canvas, text="Medium", command=lambda: play_game("Medium", button_list))
difficulty_button_2.place(width=100, x=275, y=450)
difficulty_button_3 = Button(canvas, text="Hard", command=lambda: play_game("Hard", button_list))
difficulty_button_3.place(width=100, x=425, y=450)
difficulty_button_4 = Button(canvas, text="Insta-Death", command=lambda: play_game("Insta-Death", button_list))
difficulty_button_4.place(width=100, x=600, y=450)
#--- list of buttons ---#
button_list = [instruction_label, difficulty_button_1, difficulty_button_2, difficulty_button_3, difficulty_button_4]
#--- end of function ---#
def play_game(difficulty, button_list):
#--- globals ---#
global game_bg_image
global sprite_bg_image
global word_create
global user_input
clear_canvas(button_list) #removes start menu before the game is started
#--- images ---#
top_half_bg = canvas.create_image(402, 152, image=game_bg_image)
bottom_half_bg = canvas.create_image(402, 452, image=sprite_bg_image)
#--- labels and entry_field ---#
labels = display_labels()
user_input = create_entry_field()
#--- get words ---#
word_list = get_words() #stores a list of all the words from words.txt
print("word list: ", word_list)
word_create = (random.choice(word_list))
create_text = canvas.create_text(415, 125, text=word_create, fill="black", font=("Arial", 20))
#--- check word ---#
window.bind("<Return>", check_word)
#--- end of function ---#
def clear_canvas(button_list):
canvas.delete(ALL)
for list_item in button_list:
list_item.destroy()
#--- end of function ---#
def display_labels():
level = 1
display_level = Label(canvas, text="Level: {}".format(level), fg="white", bg="black")
display_level.place(width=100, x=10, y=10)
time = 1
display_time = Label(canvas, text="Time: {}".format(time), fg="white", bg="black")
wpm = 1
display_wpm = Label(canvas, text="WPM: {}".format(wpm), fg="white", bg="black")
points = 1
display_points = Label(canvas, text="Points: {}".format(points), fg="white", bg="black")
accuracy = 1
display_accuracy = Label(canvas, text="Accuracy: {}".format(accuracy), fg="white", bg="black")
label_list = [display_time, display_wpm, display_points, display_accuracy] #list of labels
y_value = -15
for list_item in label_list: #places labels on canvas at equal distance apart
y_value += 25
list_item.place(width=100, x=690, y=y_value)
#--- end of function ---#
def create_entry_field():
entry_field = Entry(canvas, textvariable=placeholder)
entry_field.place(x=350, y=200)
entry_field.focus_set()
entered_text = entry_field.get()
return entered_text
#--- end of function ---#
def get_words():
word_list = [] #stores a list of all the words from words.txt
f = open("words.txt")
for word in f.readlines(): #appends all words from words.txt to the word list for use in the program
word_list.append(word.replace("\n", ""))
return word_list
def check_word(event):
global user_input
global word_create
if user_input == word_create:
print("yup")
else:
print("wrong")
def end_menu():
#--- end of function ---#
pass
start(canvas)
When I enter the text into the entry field, even though it is the same as the
word created, it still doesn't recognize this.
This program is supposed to be a typing game for a school project, it is
supposed to get the text from a file and then display it on screen, when text
is entered into the entry field it is supposed to check it, if correct the
word changes, if false you must rewrite your word and submit it again. I
cannot get it to check if the word is correct.
Answer: You are calling `entry_field.get()` immediately after creating it. You must
wait to call that function until the user has a chance to type. In this case,
you should call it from inside `check_word`.
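A minimal sketch of that change, reusing the names from the question (note that
`placeholder = StringVar` is also missing parentheses, so the `textvariable` is
dropped here):
    def create_entry_field():
        entry_field = Entry(canvas)
        entry_field.place(x=350, y=200)
        entry_field.focus_set()
        return entry_field  # return the widget, not its (still empty) text
    def check_word(event):
        # user_input now holds the Entry widget, so read it at check time
        if user_input.get() == word_create:
            print("yup")
        else:
            print("wrong")
`play_game` already assigns the return value to the global `user_input`, so
nothing else needs to change.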
|
how to write a multithread kivy game(on rasp Pi) that can listen to a port at the same time
Question: I am writing a remote-control snake game on a Raspberry Pi using kivy (output
to the 7" display). The socket is supposed to listen on the port while the
game is running. However, it turns out that the game loop and socketIO's wait
loop cannot run together. I tried multithreading but it didn't work as
expected.
from socketIO_client import SocketIO, BaseNamespace
class Namespace(BaseNamespace):
def on_connect(self):
print('[Connected]')
def on_message(self,packet):
print packet
self.get_data(packet)
def get_data(self, packet):
if(type(packet) is str):
matches = re.findall(PATTERN, packet)
if(matches[0][0]=='2'):
dataMatches = re.findall(DATAPATTERN, matches[0][4])
print dataMatches
......
Code for main that definitely does not work:
if __name__ == '__main__':
MyKeyboardListener() #keyboard listener, works fine
SnakeApp().run()
socketIO = SocketIO('10.0.0.4',8080,Namespace)
socketIO.wait()
I tried the following multithreading, but it didn't work:
if __name__ == '__main__':
MyKeyboardListener() #keyboard listener, works fine
threading.Thread(target = SnakeApp().run).start() #results in abort
socketIO = SocketIO('10.0.0.4',8080,Namespace)
socketIO.wait()
The above code results in making program to abort with error message :"Fatal
Python error: (pygame parachute) Segmentation Fault Aborted"
I also tried another multithreading method, but it didn't work either. This is
really frustrating. Is there any way to let the game loop and socketIO's wait
loop run at the same time, or did I just miss something?
UPDATE: working code for main:
def connect_socket():
socketIO = SocketIO('10.0.0.4',8080,Namespace)
socketIO.wait()
if __name__ == '__main__':
MyKeyboardListener() #keyboard listener, works fine
    socketThread = threading.Thread(target = connect_socket) # create thread for socket
    socketThread.daemon = True # set daemon flag
    socketThread.start()
    SnakeApp().run()
Answer: You should run the kivy main loop in the primary thread, and the socket
listening in a secondary thread (the reverse of your second try that didn't work).
But it will leave your app hanging when you simply close it, because the
secondary thread will keep it alive despite the primary thread being dead.
The easiest solution to this problem is to start the secondary thread with a
`daemon = True` flag, so it will be killed as soon as the primary thread is
dead.
|
Unable to see widgets on second window after successful login. Python and PyQt4
Question: There are two classes for two windows. Upon successful login it should launch
MainWindow. The code is able to launch MainWindow, but it does not show any
widgets on it. There are two categories of users: 1) admin, 2) other user. I
want to show two different windows for the admin and the other user. How do I
fix the above problem?
from PyQt4 import QtGui
import sys
class LoginDialog(QtGui.QDialog):
'''This is login window class'''
def __init__(self):
super().__init__()
self.username = QtGui.QLineEdit()
self.password = QtGui.QLineEdit()
self.login = QtGui.QPushButton('Login')
self.reset = QtGui.QPushButton('Reset')
loginLayout = QtGui.QFormLayout()
loginLayout.addRow("Username", self.username)
loginLayout.addRow("Password", self.password)
loginLayout.addRow(self.login, self.reset)
self.login.clicked.connect(self.onlogin)
self.reset.clicked.connect(self.onreset)
self.setGeometry(200,200,500,300)
self.setWindowTitle('test')
self.setWindowIcon(QtGui.QIcon('pythonlogo.png'))
## layout = QtGui.QVBoxLayout()
##
## layout.addLayout(loginLayout)
## layout.addWidget(self.buttons)
self.setLayout(loginLayout)
self.show()
def onlogin(self):
''''When login button is pressed '''
uname = str(self.username.text())
pwd = str(self.password.text())
if uname == 'admin' and pwd == 'someone':
self.accept()
else:
QtGui.QMessageBox.warning(self, 'Error', 'incorrect cred')
def onreset(self):
'''When reset button is called '''
self.username.setText('')
self.password.setText('')
class MainWindow(QtGui.QMainWindow):
'''This is main window class'''
def __init__(self):
super(MainWindow, self).__init__()
self.setGeometry(200,200,500,300)
self.home()
# print('yetotofnck nkdfnk')
# self.label = QtGui.QLabel()
# self.setCentralWidget(self.label)
self.searchbar = QtGui.QLineEdit()
self.searchbtn = QtGui.QPushButton('Search')
self.logoutbtn = QtGui.QPushButton('Logout')
self.searchbtn.clicked.connect(self.onsearch)
self.logoutbtn.clicked.connect(self.onlogout)
self.layout = QtGui.QFormLayout()
self.layout.addRow(self.searchbar, self.searchbtn)
self.layout.addRow(self.logoutbtn)
## wlayout = QtGui.QVBoxLayout()
## wlayout.addLayout(layout)
self.setLayout(self.layout)
def home(self):
btn = QtGui.QPushButton('Logout')
btn.clicked.connect(self.close_app)
self.show()
def close_app(self):
sys.exit(-1)
def onsearch(self):
print('serach successful')
def onlogout(self):
pass
def setusername(self, username):
self.username = username
self.label.setText("Username entered:%s"%self.username)
if __name__ == "__main__":
app = QtGui.QApplication(sys.argv)
login = LoginDialog()
if not login.exec_():
sys.exit(-1)
main = MainWindow()
main.home()
## main.setusername(login.username.text())
## main.show()
sys.exit(app.exec_())
Answer: `QMainWindow` is a specialized window that has its own layout by default, so
you should be seeing a warning:
> QWidget::setLayout: Attempting to set QLayout "" on MainWindow "", which
> already has a layout
See the [QMainWindow docs](http://doc.qt.io/qt-4.8/qmainwindow.html#details)
for a description of the layout.
If you want to use a FormLayout, then use a QWidget as your main window:
> A widget that is not embedded in a parent widget is called a window
See the [QWidget docs](http://doc.qt.io/qt-4.8/qwidget.html#details).
Here's an example:
> **On OSX** 10.10.5 with Qt 4.8.6, PyQt 4.11.4, and python 2.7, the code
> below produces the warning:
>
>> modalSession has been exited prematurely - check for a reentrant call to
endModalSession:
>
> Apparently that is a bug. See <https://bugreports.qt.io/browse/QTBUG-37699>.
> With Qt 5.6.0, PyQt 5.5.1, python3.4, I don't see that warning.
from PyQt4.QtGui import (QMainWindow, QDialog, QApplication,
QLineEdit, QPushButton, QFormLayout, QMessageBox, QWidget)
#from PyQt5.QtWidgets import (QMainWindow, QDialog, QApplication,
# QLineEdit, QPushButton, QFormLayout, QMessageBox, QWidget)
from PyQt4.QtCore import pyqtSignal
#from PyQt5.QtCore import pyqtSignal
import sys
class LoginDialog(QDialog):
loginSignal = pyqtSignal(str, str) #Create a custom signal, which you can use
#to send two string arguments to a connected function.
def __init__(self, mainWindow):
super(LoginDialog, self).__init__()
self.setGeometry(200,200,500,300)
self.setWindowTitle('Login')
self.usernameInput = QLineEdit()
self.passwordInput = QLineEdit()
self.loginButton = QPushButton('Login')
self.resetButton = QPushButton('Reset')
#*****> CONNECT BUTTON TO A FUNCTION THAT EMITS CUSTOM SIGNAL <*****
self.loginButton.clicked.connect(self.emitLoginSignal)
self.resetButton.clicked.connect(self.onclickReset)
loginLayout = QFormLayout()
loginLayout.addRow("Username", self.usernameInput)
loginLayout.addRow("Password", self.passwordInput)
loginLayout.addRow(self.loginButton, self.resetButton)
self.setLayout(loginLayout)
def emitLoginSignal(self):
#***> EMIT CUSTOM SIGNAL <****
self.loginSignal.emit(
self.usernameInput.text(),
self.passwordInput.text()
)
def onclickReset(self):
pass
class MyAdminWindow(QWidget):
def __init__(self):
super(MyAdminWindow, self).__init__()
self.setGeometry(200,200,500,300)
self.setWindowTitle("MainWindow")
self.searchbar = QLineEdit()
self.searchbtn = QPushButton('Search')
self.logoutbtn = QPushButton('Logout')
self.searchbtn.clicked.connect(self.onsearch)
self.logoutbtn.clicked.connect(self.onlogout)
formLayout = QFormLayout()
formLayout.addRow(self.searchbar, self.searchbtn)
formLayout.addRow(self.logoutbtn)
self.setLayout(formLayout)
self.loginDialog = LoginDialog(self)
#*****> CONNECT TO CUSTOM SIGNAL HERE: <*******
self.loginDialog.loginSignal.connect(self.validateUser)
self.loginDialog.exec_()
def onsearch(self):
pass
def onlogout(self):
pass
def validateUser(self, username, password):
if username == 'admin' and password == 'someone':
self.loginDialog.close()
self.show() #Now show the window.
else:
QMessageBox.warning(self, 'Error', 'incorrect cred')
app = QApplication([])
window = MyAdminWindow()
#Don't show() the window
sys.exit(app.exec_())
|
subprocess.CalledProcessError In python when using unrar
Question: Python IDLE shows an error when I am trying to extract files using
WinRAR (UnRAR.exe):
"Traceback (most recent call last):
File "<pyshell#32>", line 1, in <module>
response=subprocess.check_output(['"C:\\Users\\B74Z3\\Desktop\\Test\\UnRAR.exe" e -p123 "C:\\Users\\B74Z3\\Desktop\\Test\\Test.rar"'], shell=True)
File "C:\Program Files\Python 3.5\lib\subprocess.py", line 629, in check_output
**kwargs).stdout
File "C:\Program Files\Python 3.5\lib\subprocess.py", line 711, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['"C:\\Users\\B74Z3\\Desktop\\Test\\UnRAR.exe" e -p123 "C:\\Users\\B74Z3\\Desktop\\Test\\Test.rar"']' returned non-zero exit status 1"
* * *
What is the problem with this code:
import subprocess
response=subprocess.check_output(['"C:\\Users\\B74Z3\\Desktop\\Test\\UnRAR.exe" e -p123 "C:\\Users\\B74Z3\\Desktop\\Test\\Test.rar"'], shell=True)
Answer: I'd comment on this, but I don't have enough reputation to do so.
Try running the command without the shell interface, that is,
    response = subprocess.check_output([r"C:\Users\B74Z3\Desktop\Test\UnRAR.exe", "e", "-p123", r"C:\Users\B74Z3\Desktop\Test\Test.rar"])
I've also removed the need for doubled backslashes in your command by using raw
strings. This is more precise in that you know exactly what command and
arguments are being run.
Also on windows the shell=True is not needed unless you're running a shell
built in command, <https://docs.python.org/3/library/subprocess.html#popen-
constructor>:
> On Windows with shell=True, the COMSPEC environment variable specifies the
> default shell. The only time you need to specify shell=True on Windows is
> when the command you wish to execute is built into the shell (e.g. dir or
> copy). You do not need shell=True to run a batch file or console-based
> executable.
|
Method returns blank even though it isn't blank inside method
Question: I have written a small script (this is partial); the full code should search a
bunch of .c files and check whether the parameters within them are being used
or not. This particular code is responsible for grabbing the parameter from a
row, so it can be used to search the .c files for identical parameter names and
their values.
The issue is that the first print (inside the `takeTheParam` method) shows the
correct parameter in the command prompt, while the second print (after the call
to the `takeTheParam` method) shows a blank.
import os
theParam = ""
def takeTheParam(row, theParam):
for item in row.split():
if "_" in item:
theParam = item
print theParam
return theParam
for root, dirs, files in os.walk('C:/pathtoworkdir'):
for cFile in files:
if cFile.endswith('.c'):
with open(os.path.join(root, cFile), 'r') as this:
for row in this:
if '=' in row:
takeTheParam(row, theParam)
print theParam
while theParam not in usedParameters: # Has the param already been checked?
value(row, savedValue, statements, cur)
searchAndValueExtract(theParam, parameterCounter, compareValue)
while isEqual(savedValue, compareValue, equalValueCounter):
searchAndValueExtract(theParam, parameterCounter, compareValue)
else:
# If IsEqual returns false, that means a param has different values
# and it's therefore being used
usedParameters.append(theParam)
pass
I haven't got enough experience in Python to figure out why this happens, but
I suspect that when `theParam` is used outside of the method, its value is
retrieved from its definition at the beginning of the code (`theParam = ""`),
and I have no idea why, if this is the case.
Answer: Change
takeTheParam(row, theParam)
to
theParam = takeTheParam(row, theParam)
In your case the returned value is never assigned to `theParam`: inside the
function, `theParam` is just a local parameter, so reassigning it does not
touch the global. Assigning the function's return value fixes that.
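A tiny demonstration of why reassigning the parameter inside the function does
not change the caller's variable:
    def f(s):
        s = "changed"  # rebinds only the local name
        return s
    s = ""
    f(s)      # return value thrown away
    print s   # still ""
    s = f(s)  # capture the return value
    print s   # "changed"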
|
Distributing a Python script to unpack .tar.xz
Question: Is there a way to distribute a Python script that can unpack a .tar.xz file?
Specifically:
* This needs to run on other people's machines, not mine, so I can't require any extra modules to have been installed.
* I can get away with assuming the presence of Python 2.7, but not 3.x.
So that seems to amount to asking whether out-of-the-box Python 2.7 has such a
feature, and as far as I can tell the answer is no, but is there anything I'm
missing?
Answer: First decompress the `xz` file into `tar` data and then extract the `tar`
data:
    import lzma
    import tarfile
    with lzma.open("file.tar.xz") as f:
        with tarfile.open(fileobj=f) as tar:
            tar.extractall('/path/to/extract')
(not tested but it should be pretty much it)
Note that `lzma` is only in the standard library from Python 3.3 onwards; on a
stock Python 2.7 install you would need an extra package such as
`backports.lzma`, so this doesn't quite meet the "no extra modules" constraint.
|
Python click module for creating a CLI
Question: I am using the python click module to create a CLI. The fact is that I want to
have category commands with arguments, for example:
myawesomecli env info
myawesomecli env clean
myawesomecli env ...
myawesomecli database create-table <name>
myawesomecli database insert <entry>
myawesomecli database ...
So far, I can come up with this:
import click
@click.group()
@click.version_option(version='0.1 ')
def cli():
pass
@cli.command()
@click.argument('option')
def env(option):
if option == 'info':
click.echo("run env info command")
elif option == 'clean':
click.echo("run env clean command")
...
@cli.command()
@click.argument('option')
def database(option):
if option == 'create-table':
click.echo("run database create-table command")
elif option == 'clean':
click.echo("run database clean command")
...
Is there a way to have a function for each subcommand instead of using the if-
else's?
Answer:
    import click
    @click.group()
    @click.version_option(version='0.1 ')
    def cli():
        pass
    @cli.group()
    def env():
        """ Define the environment of the product """
        pass
    @env.command()
    def info():
        click.echo("run env info command")
    @env.command()
    def clean():
        click.echo("run env clean command")
    ...
I'm also struggling to understand a few things, but this is one idea that I
hope helps.
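With nested groups like this, each subcommand gets its own function and the
CLI dispatches as you'd expect (assuming the script is installed as
`myawesomecli`):
    $ myawesomecli env info
    run env info command
    $ myawesomecli env clean
    run env clean command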
|
Python BeautifulSoup doesn't works on URL
Question: I'm happy to join Stack Overflow :) It's the first time I haven't found an
answer to my problem :)
I would like to scrape the "meta description" from a list of URLs (in a SQL
database). When I start my script, it gets "killed" without any error. It gets
killed while reading the 11th URL.
I ran some tests and identified the URL: "<http://www.les-
calories.com/famille-4.html>"
So I made this test, reducing my code to a minimum:
from bs4 import BeautifulSoup
import urllib
html = urllib.urlopen(" http://www.les-calories.com/famille-4.html").read()
soup = BeautifulSoup(html)
And this code gets "killed" by the shell.
[screen](http://i.stack.imgur.com/XMdGI.jpg)
I don't understand why...
Thank you for your help :)
Answer: It could be that you've not specified the parser, in which case do the
following.
soup = BeautifulSoup(html, "html.parser")
However, I think what is more likely is that there was just too much
information in the HTML page. What I'd do is use the python-requests package,
and in the GET request, I'd set `stream` to `True`. Like so:
>>> import requests
>>> resp = requests.get("http://www.les-calories.com/famille-4.html", stream=True)
>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup(resp.text, "html.parser")
>>> soup.find("a")
<a href="http://www.fitadium.com/79-seche-et-definition-musculaire" target="_blank"><img border="0" height="60px" src="h
ttp://www.les-calories.com/images/234x60_pack-minceur-brule-graisses.gif" width="234px"/></a>
|
How can I fit images with different resolutions?
Question: How can I fit and then overlay 2 images which have different resolutions?
This is the main image: 
I have this one, which has the correct mesh to the image above:

#!/usr/bin/python
import cv2
from matplotlib import pyplot as plt
import numpy as np
img1 = cv2.imread('transparency.jpg')
img2 = cv2.imread('La1.png')
row1,cols1, ch1 = img1.shape
row2,cols2, ch2 = img2.shape
res = cv2.resize(img2, None , fx = (1.* row1 /row2 ), fy =(1.* cols1 /cols2 ), interpolation = cv2.INTER_CUBIC)
Answer: It is pretty unclear from your question how it is supposed to come out! I am
just doing this at the command line using ImageMagick which is installed on
most Linux distros and is available for OSX and Windows - there are Python
bindings if that floats your boat though.
Anyway, let's get the size of the images:
identify m*
main.png PNG 1790x4098 1790x4098+0+0 8-bit sRGB 942KB 0.000u 0:00.000
mesh.jpg JPEG 2537x5703 2537x5703+0+0 8-bit sRGB 3.493MB 0.000u 0:00.000
So, let's load up the main image and resize it to match the mesh. Then let's
load up the mesh, and make everything transparent that is within 10% of white
- that will leave just the black lines from the mesh. But we can't see black
on black, so let's make the black lines in the mesh red. Then splat that
(technical term meaning _"composite"_) on top of the main image:
convert main.png -resize 2537x5703! \( mesh.jpg -fuzz 10% -transparent white -fill red -colorize 100% \) -composite result.png
Here's what you get.
[](http://i.stack.imgur.com/P5Noq.jpg)
Looks like your mesh needs cropping down the left side to shift it over, so
try:
convert main.png -resize 2480x5703! \( mesh.jpg -crop +57 -fuzz 10% -transparent white -fill red -colorize 100% \) -composite result.png
[](http://i.stack.imgur.com/fyixS.jpg)
|
What is the practical difference between these two ways of making web connections in Python?
Question: I have noticed there are several ways to initiate http connections for web
scraping. I am not sure if some are more recent and up-to-date ways of coding,
or if they are just different modules with different advantages and
disadvantages. More specifically, I am trying to understand what the
differences between the following two approaches are, and what would you
recommend?
**1) Using urllib3:**
http = PoolManager()
r = http.urlopen('GET', url, preload_content=False)
soup = BeautifulSoup(r, "html.parser")
**2) Using requests**
html = requests.get(url).content
soup = BeautifulSoup(html, "html5lib")
What sets these two options apart, besides the simple fact that they require
importing different modules?
Answer: Under the hood, `requests` uses `urllib3` to do most of the http heavy
lifting. When used properly, it should be mostly the same unless you need more
advanced configuration.
Except, in your particular example they're **not** the same:
In the urllib3 example, you're re-using connections whereas in the requests
example you're not re-using connections. Here's how you can tell:
>>> import requests
>>> requests.packages.urllib3.add_stderr_logger()
2016-04-29 11:43:42,086 DEBUG Added a stderr logging handler to logger: requests.packages.urllib3
>>> requests.get('https://www.google.com/')
2016-04-29 11:45:59,043 INFO Starting new HTTPS connection (1): www.google.com
2016-04-29 11:45:59,158 DEBUG "GET / HTTP/1.1" 200 None
>>> requests.get('https://www.google.com/')
2016-04-29 11:45:59,815 INFO Starting new HTTPS connection (1): www.google.com
2016-04-29 11:45:59,925 DEBUG "GET / HTTP/1.1" 200 None
To start re-using connections like in a urllib3 PoolManager, you need to make
a requests _session_.
>>> session = requests.session()
>>> session.get('https://www.google.com/')
2016-04-29 11:46:49,649 INFO Starting new HTTPS connection (1): www.google.com
2016-04-29 11:46:49,771 DEBUG "GET / HTTP/1.1" 200 None
>>> session.get('https://www.google.com/')
2016-04-29 11:46:50,548 DEBUG "GET / HTTP/1.1" 200 None
_Now_ it's equivalent to what you were doing with `http = PoolManager()`. One
more note: urllib3 is a lower-level more explicit library, so you explicitly
create a pool and you'll explicitly need to specify [your SSL certificate
location](https://urllib3.readthedocs.io/en/latest/security.html#using-
certifi-with-urllib3), for example. It's an extra line or two of more work but
also a fair bit more control if that's what you're looking for.
All said and done, the comparison becomes:
**1) Using urllib3:**
import urllib3, certifi
http = urllib3.PoolManager(ca_certs=certifi.where())
html = http.request('GET', url).read()
soup = BeautifulSoup(html, "html5lib")
**2) Using requests** :
import requests
session = requests.session()
html = session.get(url).content
soup = BeautifulSoup(html, "html5lib")
|
Creating a Bandwidth Pool in Python
Question: I am trying to create a new bandwidth pool using Python. When I run the
following code I get what I believe is the proper response:
import SoftLayer
from pprint import pprint as pp
import logging
logger = logging.getLogger()
logger.addHandler(logging.StreamHandler())
logger.setLevel(3)
client = SoftLayer.Client()
templateObject = client['SoftLayer_Network_Bandwidth_Version1_Allotment'].createObject({
"accountId": 11111,
"bandwidthAllotmentTypeId": 2,
"createDate": "04/28/2016 16:18:03",
"endDate": "04/28/2017 16:18:03",
"locationGroupId": 1,
"name": "RtiffanyTest1",
"serviceProviderId": 1
})
pp(templateObject)
The issue is when I log in to the customer portal the new pool is marked as
pending deletion.

Can you point me in the right direction to have a new bandwidth pool created?
I am using
[createObject](http://sldn.softlayer.com/reference/services/SoftLayer_Network_Bandwidth_Version1_Allotment/createObject)
on the [Network bandwidth
allotment](http://sldn.softlayer.com/reference/services/SoftLayer_Network_Bandwidth_Version1_Allotment)
Service.
Answer: Please try the following example:
"""
Create Bandwidth Pool
Important manual pages:
http://sldn.softlayer.com/reference/services/SoftLayer_Network_Bandwidth_Version1_Allotment/createObject/
License: http://sldn.softlayer.com/article/License
Author: SoftLayer Technologies, Inc. <[email protected]>
"""
import SoftLayer
# For nice debug output:
from pprint import pprint as pp
API_USERNAME = 'set me'
API_KEY = 'set me'
# Set the needed values to create a new item
accountId = 307600
# The values for bandwidthAllotmentTypeId are: (1) and (2)
# where: (1) means this allotment is marked as a virtual private rack or
# (2) bandwidth pooling
bandwidthAllotmentTypeId = 2
# To get locationGroupId, execute: SoftLayer_Location_Group::getAllObjects
locationGroupId = 1
newBandwithPoolName = 'testPool02'
# Create an object template to create the item.
objectTemplate = {
'accountId': accountId,
'bandwidthAllotmentTypeId': bandwidthAllotmentTypeId,
'locationGroupId': locationGroupId,
'name': newBandwithPoolName
}
# Creates a new connection to the API service.
client = SoftLayer.Client(
username=API_USERNAME,
api_key=API_KEY
)
try:
result = client['SoftLayer_Network_Bandwidth_Version1_Allotment'].createObject(objectTemplate)
pp(result)
except SoftLayer.SoftLayerAPIError as e:
pp('Failed ... Unable to create a new Bandwidth Pool faultCode=%s, faultString=%s'
% (e.faultCode, e.faultString))
|
python: Multiple plotting in one subplot2grid-image
Question: Let f be a function that depends on an independent variable x and on one (or
more) parameter b. My goal is to draw f for several values of b into the same
image, and I want b to be a global value (so that I can quickly change it).
Consider the code
import numpy as np
import pylab as pl
b=0.5
def f(x,b):
return np.sin(x*b)*b
X = np.linspace(0,10,1000)
F = f(X,b)
pl.figure(figsize=(12, 16), num="My sine")
pl.plot(X, F, label='b = %f' %b)
pl.legend(loc="best")
This code can be evaluated multiple times, just with another value for b. Then
all the curves are drawn into the same figure.
Now the problem: If in the above code I put
pl.subplot2grid((4, 1), (0, 0), rowspan=3)
between `pl.plot` and `pl.legend`, every new evaluation of the code is drawn
into a new blank figure.
So how can I arrange that, when using `pl.subplot2grid`, each evaluation is
drawn into the same figure?
Answer: Maybe you are looking for something like this?:
import numpy as np
import pylab as pl
def f(x,b):
return np.sin(x*b)*b
def draw(b):
X = np.linspace(0,10,1000)
F = f(X,b)
ax1.plot(X, F, label='b = %f' %b)
# Main program
fig = pl.figure(figsize=(12, 16), num="My sine")
ax1 = pl.subplot2grid((4,3), (0,0), colspan=2)
for b in [0.4, 0.5, 0.6]:
draw(b)
pl.legend(loc="best")
pl.show()
[](http://i.stack.imgur.com/HYAEl.png)
|
Selecting specific elements that contain a certain word from a list in python
Question: I want to do a sentiment analysis, but only want to use elements of a list
that contain a certain word. It's about comments, and I only want to analyse
the comments that mention the brand.
For example, my list is:
comments = ["nice blog","i like your blog","nivea is a nice product","i like nivea"]
How do I create a list where only the comments that contain the word 'nivea'
are added?
So I want my final list to be:
commentsfinal = ["nivea is a nice product","i like nivea"]
* * *
I tried, in different ways, to count the total number of comments (so not the
total number of nivea mentions, but really the comments) in which nivea is
mentioned. All the different ways resulted in different outcomes; could anyone
tell me which one is the right one and why?
First try:
niveaucountlist=[]
match="nivea"
for comment in allcomments:
niveacount=0
for word in comment.split():
if word in match:
niveacount+=1
niveacountlist.append(niveacount)
total=sum(niveacount)
This got me an outcome of 4547 comments
Second try: The second thing I tried was to make a list, whereby every comment
is valued with the total of times that nivea is mentioned. I got a list like:
niveacountlist=[1,0,0,1,2,0]
Then I removed all the elements that had the value zero (because those are the
comments that are not about nivea):
niveacountlistpos=[x for x in niveacountlist if x != 0]
print(len(niveacountlistpos))
This resulted in 3771 comments.
Last try: My last try was what you guys answered me in my first question, so I
used regexp and did:
import re
nivealist=[x for x in allcomments if re.search("nivea",x)]
This resulted in 2583 comments.
So, what is happening right here? Can someone explain to me why the outcomes
are all different?
--- Another (last) question that I have is about the way I counted the total
number of nivea mentions (so the sum of all the times nivea was in the
comments). I tried to do this by joining all the comments together into one
string (called allwords) and then did this:
match="nivea"
niveacount1=0
for word in allwords:
niveacount1+=1
print(niveacount1)
Is this correct? Or can I do this in a better way?
Answer: You can use a [list
comprehension](https://docs.python.org/3.5/tutorial/datastructures.html#list-
comprehensions) and `in` to test for substring-ness.
nivea_comments = [c for c in comments if "nivea" in c]
If you're into functional programming you'll recognise this as a
[_filter_](https://docs.python.org/3.5/library/functions.html#filter).
nivea_comments = filter(lambda c: "nivea" in c, comments)
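As for counting the total number of nivea mentions (the last part of the
question): the loop `for word in allwords` iterates over the characters of the
string, and the body never compares anything to `match`, so it just counts the
length of `allwords`. A simple way to count occurrences across all comments is:
    total_mentions = sum(c.count("nivea") for c in comments)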
|
I can't get my python turtle to work
Question: Every time I put `import turtle` it always gives me this:
Traceback (most recent call last):
File "<pyshell#6>", line 1, in <module>
import turtle
File "C:/Users/Notandi/AppData/Local/Programs/Python/Python35-32\turtle.py", line 2, in <module>
wn = turtle.Screen()
AttributeError: module 'turtle' has no attribute 'Screen'
Answer: You named a file `turtle.py`, specifically this one:
C:/Users/Notandi/AppData/Local/Programs/Python/Python35-32\turtle.py
so Python thinks that file is the `turtle` module. Pick a different name.
(Incidentally, that seems like a weird place for you to put your files in.)
|
How do I apply a function to all items in a list in a text or csv file with Python?
Question: So I've built a function that will look through all the xml files in a folder,
and look for a node attribute (speaker name) and write to a row in a csv file.
Note, at the moment, it appends them all to the same csv file, but I'm looking
to get it to change up the file name after I've figured out the next step.
The next step that I was trying to do is to supply those speaker names from a
list in a text file (I've also tried a csv file, and a list of dictionaries)
and have the function applied to each of those speaker names individually.
I'm doing it with a function because I figured a for-loop iterating through a
set of items within another for-loop iterating through a different set of
items was kind of chancy, and a preliminary test I did with that, didn't prove
that worry wrong.
When I paste in any of the items in this list individually as the argument in
the function, it works. When I print the list after accessing through any of
the ways I've tried, it works, I just can't seem to get the two to talk.
I've tried to apply the function to each of the items in the following way,
but all it does is print out the error I gave to my except statement, and
write in the header column in the csv (so I know it's at least accessing the
function)
speaker_list = open("UAS_Speakers.csv","r").readlines()
for item in speaker_list:
look_for_speaker_in_files(item)
or
with open("speaking.txt","r") as f:
for x in f:
look_for_speaker_in_files(x)
for the heck of it, I even tried to open it as a list of dictionaries since
the data already had curly brackets around it. No change.
speaker_list = open("speaking.py","r")
for x in speaker_list:
look_for_speaker_in_files(x)
I also, modeled on a script that I did that was taking urls from a list and
performing a couple of urllib functions on them, tried this:
def main():
with open("speaking.py","r") as speaker_list:
for x in speaker_list:
look_for_speaker_in_files(x)
if __name__ == "__main__":
main()
I'm not sure if the issue is that the whole list is being fed into the function
at once when I do any of these, but in case there's something wrong with the
function itself, preventing this from working, it's here:
def look_for_speaker_in_files(speakerAttrib):
c = csv.writer(open("allspeakers.csv","w"))
c.writerow(["Name", "Filename", "Text"])
for cr_file in glob.iglob('parsed/*.xml'):
try:
tree = etree.parse(cr_file)
for node in tree.iter('speaking'):
if node.attrib == speakerAttrib:
c.writerow([node.attrib, cr_file, node.text])
else:
continue
except:
print "bad string " + cr_file
continue
Any help on this would be greatly appreciated, otherwise I'll just be stuck
sorting this out by hand from OpenRefine or copy and pasting from a
spreadsheet by the hundreds, and the thought of that makes my eyeballs burn.
Sample list items:
{'name': 'Mr. BEGICH'}
{'name': 'The SPEAKER pro tempore (Mr. Miller of Florida)'}
{'name': 'The Acting CHAIR'}
{'name': 'Mr. McKINLEY'}
{'quote': 'true', 'speaker': 'recorder'}
{'name': 'Mr. WAXMAN'}
{'name': 'Mr. MORAN'}
{'name': 'Mr. McKEON'}
{'quote': 'true', 'speaker': 'The Acting CHAIR'}
{'name': 'Mr. RIGELL'}
{'name': 'Mr. SMITH of Washington'}
{'name': 'Mr. KILMER'}
{'name': 'Mr. LAMBORN'}
{'name': 'Mr. CLEAVER'}
{'name': 'Mr. MICA'}
{'name': 'Ms. SPEIER'}
{'name': 'Mrs. ELLMERS'}
Sample files are in this folder:
[https://drive.google.com/folderview?id=0B7lGA34vOZItREhRbmF6Z3YtTnM&usp=sharing](https://drive.google.com/folderview?id=0B7lGA34vOZItREhRbmF6Z3YtTnM&usp=sharing)
Answer: Please see if this works for you.
I believe you need to open the allspeakers.csv file in append mode, otherwise
it would be replaced on each iteration of the loop in main(). Alternatively,
you would have to write each iteration's results into a new file.
import csv
import glob
import ast
from os.path import isfile
from lxml import etree
def look_for_speaker_in_files(speakerAttrib):
speakerDict = ast.literal_eval(speakerAttrib)
l_file_exists = False
if isfile("allspeakers.csv"):
l_file_exists = True
c = csv.writer(open("allspeakers.csv","a"))
if not l_file_exists:
c.writerow(["Name", "Filename", "Text"])
lparser = etree.XMLParser(recover=True)
for cr_file in glob.iglob('parsed/*.xml'):
try:
tree = etree.parse(cr_file,parser=lparser)
for node in tree.iter('speaking'):
if node.keys() == speakerDict.keys():
c.writerow([node.attrib, cr_file, node.text])
else:
continue
except:
print "bad string " + cr_file
raise
def main():
with open("UAS_speakers.txt","r") as speaker_list:
for x in speaker_list:
print x
look_for_speaker_in_files(x)
if __name__ == "__main__":
main()
|
How do I deploy a function in python with its dependencies?
Question: I'm trying to use the `serverless` framework to create and deploy an AWS Lambda
function. I created a folder named `vendored` in the root of the project and
installed (using `pip install`) the function dependencies. However, whenever I
try to run it (using `serverless function run`) I get an error:
> Serverless: Running isNewUser...
> Serverless: WARNING: This variable is not defined: region
> Serverless: -----------------
> Serverless: Failed - This Error Was Returned:
> Serverless: {"exception": ["Traceback (most recent call last):\n", " File
> \"/home/fernando/.nvm/versions/node/v5.10.1/bin/serverless-run-python-
> handler\", line 170, in \n handler = import_program_as_module(path)\n", "
> File \"/home/fernando/.nvm/versions/node/v5.10.1/bin/serverless-run-python-
> handler\", line 149, in import_program_as_module\n module =
> make_module_from_file('lambda_handler', handler_file)\n", " File
> \"/home/fernando/.nvm/versions/node/v5.10.1/bin/serverless-run-python-
> handler\", line 129, in make_module_from_file\n py_source_description\n", "
> File \"/home/fernando/workspace/os-cac/isNewUser/handler.py\", line 11, in
> \n from vtex.order import Order\n", "ImportError: No module named
> vtex.order\n"], "success": false} Serverless: Exception message from Python
> Serverless: Traceback (most recent call last): , File
> "/home/fernando/.nvm/versions/node/v5.10.1/bin/serverless-run-python-
> handler", line 170, in handler = import_program_as_module(path) , File
> "/home/fernando/.nvm/versions/node/v5.10.1/bin/serverless-run-python-
> handler", line 149, in import_program_as_module module =
> make_module_from_file('lambda_handler', handler_file) , File
> "/home/fernando/.nvm/versions/node/v5.10.1/bin/serverless-run-python-
> handler", line 129, in make_module_from_file py_source_description , File
> "/home/fernando/workspace/os-cac/isNewUser/handler.py", line 11, in from
> vtex.order import Order ,ImportError: No module named vtex.order `
`vtex.order` is a module imported in handler.py
The structure of my project is something like:
/root/
|
|--_meta/
|--vendored/
|--dependencies...
|--function-name/
|--handler.py
|--event.json
|--s-function.json
|--requirements.txt
|--admin.env
|--package.json
|--s-project.json
|--s-resources-cf.json
|--s-project.json
Is there anything I'm doing wrong? Should I somehow configure my function to
include the dependencies from vendored?
Answer: Here are a few steps that should make it work:
1. Make sure that the handler entry in `s-function.json` has the function-name in its path: `"handler": "function-name/handler.handler",`
2. in `handler.py` add the following:
import os
import sys
here = os.path.dirname(os.path.realpath(__file__))
sys.path.append(os.path.join(here, "../vendored"))
from vtex.order import Order
That's it. Let me know if it worked.
|
Python - multithreaded sockets
Question: From my understanding, Python can only run 1 thread at a time, so if I were to
do something like this
import socket, select
from threading import Thread
import config
class Source(Thread):
def __init__(self):
self._wait = False
self._host = (config.HOST, config.PORT + 1)
self._socket = socket.socket()
self._socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self._sock = None
self._connections = []
self._mount = "None"
self._writers = []
self._createServer()
Thread.__init__(self)
def _createServer(self):
self._socket.bind(self._host)
self._socket.listen(2)
self._connections.append(self._socket)
self._audioPackets=[]
def _addPacket(self, packet):
self._audioPackets.append(packet)
def _removePacket(self, packet):
self._audioPackets.remove(packet)
def _getPacket(self):
if len(self._audioPackets) > 0:
return self._audioPackets[0]
else:
return None
def _sendOK(self, sock):
sock.send("OK")
def _sendDenied(self, sock):
sock.send("DENIED")
def _sendMount(self, sock):
sock.send("mount:{0}".format(self._mount))
def _sendBufPacket(self, sock, packet):
packet = "buffer:%s" % packet
sock.send(packet)
def recv(self, sock, data):
data = data.split(":", 1)
if data[0] == "WAIT": self._wait = True
elif data[0] == "STOP_WAITING": self._wait = False
elif data[0] == "LOGIN":
if data[1] == config.SOURCE_AUTH:
self._source = sock
self._sendOK(sock)
else:
self._sendClose(sock)
elif data[0] == "MOUNT":
if self._source == sock:
self._mount = data[1]
else:
self._sendClose(sock)
elif data[0] == "CLIENT":
self._sendMount(sock)
self._writers.append(sock)
def _sendCloseAll(self):
for sock in self._connections:
sock.send("CLOSE")
sock.close()
def _sendClose(self, sock):
sock.send("CLOSE")
sock.close()
def main(self):
while True:
rl, wl, xl = select.select(self._connections, self._writers, [], 0.2)
for sock in rl:
if sock == self._socket:
con, ip = sock.accept()
self._connections.append(con)
else:
data = sock.recv(config.BUFFER)
if data:
self.recv(sock, data)
else:
if sock in self._writers:
self._writers.remove(sock)
if sock in self._connections:
self._connections.remove(sock)
for sock in wl:
packet = self._getPacket()
if packet != None:
self._sendBufPacket(sock, packet)
def run(self):
self.main()
class writeThread(Thread):
def __init__(self):
self.running = False
def make(self, client):
self.client = client
self.running = True
def run(self):
host = (config.HOST, config.PORT+1)
sock = socket.socket()
sock.connect(host)
sock.send("CLIENT")
sock.send("MOUNT:mountpoint")
while self.running:
data = sock.recv(config.BUFFER)
if data:
data = data.split(":", 1)
if data[0] == "buffer":
self.client.send(data[1])
elif data[0] == "CLOSE":
self.client.close()
break
if __name__=="__main__":
source = Source()
source.start()
webserver = WebServer()
webserver.runloop()
If I need to build the webserver part I will, but I'll explain it. Basically, when someone connects to the webserver under the mountpoint that was set, they will get their own personal thread that grabs the data from
`Source()` and sends it to them. Now say another person connects to the mount point while the last client as well as the source is still going. Wouldn't the new client be blocked from getting the Source data, considering there are two active threads?
Answer: Your understanding of how threads work in Python seems to be incorrect, based on the question you are asking. If used correctly, threads will not be blocking: you can instantiate many threads and each will make progress. The limitation is that, due to the [Global Interpreter Lock](http://www.dabeaz.com/python/UnderstandingGIL.pdf) (GIL), only one thread executes Python bytecode at any instant, so you cannot get the full parallelism expected in thread programming (simultaneous execution and, thus, reduced runtime): two CPU-bound threads take, together, about the same time as running them one after the other. Your workload, however, is I/O-bound. Blocking calls like `socket.recv` and `socket.send` release the GIL while they wait, so one client's thread does not prevent another client's thread from being served, and a new client will not be blocked just because other threads are active.
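Here is a minimal sketch of that behavior (the `serve` function and `time.sleep` are hypothetical stand-ins for your per-client handlers and their blocking socket calls):

import threading
import time

def serve(name):
    # A blocking call (sleep, recv, send) releases the GIL,
    # so the other thread keeps running in the meantime.
    for _ in range(3):
        time.sleep(0.5)
        print("%s sent a packet" % name)

threads = [threading.Thread(target=serve, args=("client-%d" % n,)) for n in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

Both "clients" finish in roughly 1.5 seconds total rather than 3, because the waits overlap.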
|
Python Regex over list of strings
Question: I'm trying to extract a url from a list of strings. Sample list:
import re
p = ['<img class="alignnone size-full wp-image-2087" src="http://www.sample.com/test.jpg" alt="0wCR41v" width="540" height="720" srcset="http://www.sample.com/test-225x300.jpg 225w, http://www.sample.com/test.jpg 540w" sizes="(max-width: 540px) 100vw, 540px" />', '<img class="alignnone size-large wp-image-2133" src="http://www.sample.com/test2.jpg" alt="NtAboHF" width="583" height="1024" srcset="http://www.happyfridaygents.com/wp-content/uploads/2016/04/NtAboHF-768x1349.jpg 768w, http://www.sample.com/test2.jpg 583w, http://www.happyfridaygents.com/wp-content/uploads/2016/04/NtAboHF.jpg 828w" sizes="(max-width: 583px) 100vw, 583px" />']
I'd like to extract the `http://www.sample.com/test.jpg` part that comes right
after the src=" part.
I can use findall if p is just one string like so:
t = re.findall('src="(.+)" alt', p)
print t
But how can I iterate over the list and return a list of all the urls in P?
Answer: Does this do what you'd like? Note the non-greedy `(.+?)`: a greedy `.+` would match up to the _last_ `" alt` in the string, which happens to work on your samples but breaks as soon as that sequence appears more than once.

import re
p = ['<img class="alignnone size-full wp-image-2087" src="http://www.sample.com/test.jpg" alt="0wCR41v" width="540" height="720" srcset="http://www.sample.com/test-225x300.jpg 225w, http://www.sample.com/test.jpg 540w" sizes="(max-width: 540px) 100vw, 540px" />', '<img class="alignnone size-large wp-image-2133" src="http://www.sample.com/test2.jpg" alt="NtAboHF" width="583" height="1024" srcset="http://www.happyfridaygents.com/wp-content/uploads/2016/04/NtAboHF-768x1349.jpg 768w, http://www.sample.com/test2.jpg 583w, http://www.happyfridaygents.com/wp-content/uploads/2016/04/NtAboHF.jpg 828w" sizes="(max-width: 583px) 100vw, 583px" />']
outList = [re.findall('src="(.+?)" alt', pp)[0] for pp in p]
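For your sample this prints:

print outList
# ['http://www.sample.com/test.jpg', 'http://www.sample.com/test2.jpg']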
|
From text to K-Means Vectors input
Question: I've just started diving into Machine Learning, specifically into Clustering.
(I'm using Python but this is irrelevant) My goal is, starting from a
collection of tweets (100K) about fashion world, to perform KMeans over their
text.
Till now I've filtered texts, truncating stopwords, useless terms,
punctuation; done lemmatization (exploiting Part Of Speech tagging for better
results).
I show the user the most frequent terms, hashtags, bigrams, trigrams,..9grams
so that he can refine preprocessing adding words to useless terms.
My initial idea was to use the top n (1K) terms as features, creating for each tweet a vector of fixed size n (1K), with a cell set to a value if that cell's top term appears in the tweet (maybe calculating the cell's value with TF-IDF).

Am I missing something (the 0 values will be considered)? Can I exploit n-grams in some way?
This [scikit article](http://scikit-
learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html#sklearn.feature_extraction.text.TfidfVectorizer)
is pretty general and I'm not understanding the whole thing.
(Is LSA dimensionality reduction useful, or is it better to reduce the number of features (the vector dimension) manually?)
Answer: This [other sklearn page](http://scikit-
learn.org/stable/auto_examples/text/document_clustering.html#example-text-
document-clustering-py) contains an example of k-means clustering of texts.
But to address some of your specific questions:
> My initial idea was to use the top n (1K) terms as features, creating for
> each tweet a vector of fixed size n (1K), with a cell set to a value if that
> cell's top term appears in the tweet (maybe calculating the cell's value
> with TF-IDF).
A standard approach to achieve that is to use sklearn's
[CountVectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)
and to play with its `min_df` parameter.
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(min_df=10)
X = cv.fit_transform(texts)
The above piece of code converts a list of texts into a feature matrix X. Setting
`min_df=10` will ignore all words with fewer than 10 occurrences. If you literally
want the top n terms, `CountVectorizer` also accepts a `max_features` parameter
that keeps only the n most frequent terms in the corpus.
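For instance, to keep only your top 1K terms:

cv = CountVectorizer(max_features=1000)
X = cv.fit_transform(texts)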
> Can I exploit n-grams in some way?
Yes, CountVectorizer can deal with n-grams. The `ngram_range` parameter specifies the range of n-grams to consider (the minimum and the maximum "n"). For instance,
cv = CountVectorizer(min_df=10, ngram_range=(2,2))
will build features based on bigrams instead of individual words (unigrams).
For mixing unigrams and bigrams
cv = CountVectorizer(min_df=10, ngram_range=(1,2))
Then you can replace the CountVectorizer with a TfidfVectorizer, which reweights the raw counts so that more informative (rarer) words carry more weight.
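Putting the pieces together, here is a minimal sketch of the whole pipeline (assuming `texts` is your list of preprocessed tweets, with 20 clusters as an arbitrary choice):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

vec = TfidfVectorizer(min_df=10, ngram_range=(1, 2))
X = vec.fit_transform(texts)    # sparse tf-idf matrix, one row per tweet
km = KMeans(n_clusters=20)
labels = km.fit_predict(X)      # cluster id for each tweet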
> Is LSA dimensionality reduction useful or is it better reducing the number
> of features (so vectors dimension) manually?
Short answer: it depends on your purpose. The example in the link I mentioned above does apply LSA first. But also, in my experience, "topic model" methods like LSA or NMF can already be considered a clustering into latent semantic topics. For instance,
from sklearn.decomposition import NMF
nmf = NMF(n_components=20)
mu = nmf.fit_transform(X)
This will convert the features X into projected feature vectors mu of 20
dimensions. Each dimension d can be interpreted as the score of the text in
topic d. By assigning each sample to the dimension with max score, this can
also be interpreted as a clustering.
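Under that reading, the cluster assignment is just (using the `mu` from above):

labels = mu.argmax(axis=1)    # index of the strongest topic for each text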
|
no need of name mangling for another object of the same class in python?
Question: I'm new to Python and learning about name mangling (double underscore). I've done my search, but there is one question I couldn't find the answer to: we don't need to worry about name mangling when dealing with another object of the same class inside a class method, right? See my test code:
import math
class Point:
def __init__(self, loc_x, loc_y):
self.__x = loc_x
self.__y = loc_y
def distance(self, other):
return math.sqrt((self.__x - other.__x) * (self.__x - other.__x) + (self.__y - other.__y) * (self.__y - other.__y))
class Point1:
def __init__(self, loc_x, loc_y):
self.__x = loc_x
self.__y = loc_y
#two points of same class
p1 = Point(1,2)
p2 = Point(2,3)
print (p1.distance(p2))
#object of another point class
p3 = Point1(4,5)
print (p1.distance(p3))
Answer: Correct. Name mangling happens at compile time, based on the class whose body the reference appears in, not on which object you access it through. Anytime you write

obj.__var

inside a class body, Python replaces it with

obj._ClassName__var

where `ClassName` is the class the method is defined in (note the single leading underscore before the class name). Since `distance` is defined in `Point`, `other.__x` becomes `other._Point__x`, which any `Point` instance has, so `p1.distance(p2)` works. Your second call, `p1.distance(p3)`, will raise an `AttributeError`, though: `p3` is a `Point1`, and its attribute was mangled to `_Point1__x` instead.
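You can see the mangled names from outside the class (useful for inspection, though accessing them this way defeats the purpose):

p1 = Point(1, 2)
print(p1._Point__x)                          # 1
print([n for n in dir(p1) if '_Point' in n]) # ['_Point__x', '_Point__y']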
|
Command glossary for dataflow?
Question: I'm experimenting with the Dataflow Python SDK and would like some sort of
reference as to what the various commands do, their required args and their
recommended syntax.
So after `import google.cloud.dataflow as df`
Where can I read up on df.Create, df.Write, df.FlatMap, df.CombinePerKey, etc.
? Has anybody put together such a reference?
Is there anyplace (link please) where all the possible Apache Beam / Dataflow
commands are collected and explained?
Answer: There is not yet a pydoc server running for Dataflow Python. However, you can
easily run your own in order to browse:
<https://github.com/GoogleCloudPlatform/DataflowPythonSDK#a-quick-tour-of-the-
source-code>
|
Python defaultdict with string as type of value
Question: I need to create a defaultdict whose value type is a normal Python string, but my method below does not work (error message posted below). I'm using Python 2.7; any good ideas on how to fix this? Thanks.
**Code**
import collections
a = collections.defaultdict("")
a[1]="Hello"
a[2]="World"
print a
**Error Message**
a = collections.defaultdict("")
TypeError: first argument must be callable
Answer: As the error says, the first argument has to be a callable that produces the
value you want. Use `str`:
a = collections.defaultdict(str)
If necessary, you can create a wrapper with a `lambda` function:
a = collections.defaultdict(lambda: 'initial')
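The default factory only kicks in on missing keys, which makes append-style updates convenient:

a = collections.defaultdict(str)
a[1] += "Hello"    # missing key starts from "" and becomes "Hello"
a[2] = "World"
print a            # defaultdict(<type 'str'>, {1: 'Hello', 2: 'World'})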
|
Python: Deleting a list element based on elements from another list
Question: I have two lists here. I am shuffling list `i`, checking it's first element
and then deleting the corresponding element from list `j` e.g. if after a
shuffle, `i[0] == 3`, then i want to delete element 3 from list '`j`'.
i = ['a','b','c','d','e','f','g','h']
random.shuffle(i)
j = [1,2,3,4,5,6,7,8]
if i[0] == 'a':
del j[0]
elif i[0] == 'b':
del j[1]
elif i[0] == 'c':
del j[2]
elif i[0] == 'd':
del j[3]
elif i[0] == 'e':
del j[4]
elif i[0] == 'f':
del j[5]
elif i[0] == 'g':
del j[6]
elif i[0] == 'h':
del j[7]
Is there a way to do this task without listing out if statements like this? As
it stands, if `i[0] == 6`, then several if statements need to be checked which
is a waste of processing power, in principle.
Thanks, Steve
Answer: `j.remove(x)` removes the element with _value_ `x`, but your `i` holds letters while `j` holds numbers, so you need the position instead. Keep a copy of `i` in its original order, look up where the shuffled first element used to be, and delete that index from `j`:

import random

i = ['a','b','c','d','e','f','g','h']
k = i[:]                 # copy of the original order
random.shuffle(i)
j = [1,2,3,4,5,6,7,8]

print i
print k
print i[0]
print k.index(i[0])      # original position of the shuffled first element
del j[k.index(i[0])]
print j
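An alternative that avoids the index bookkeeping is to pair the letters with their values up front (a sketch using the same lists):

mapping = dict(zip(['a','b','c','d','e','f','g','h'], [1,2,3,4,5,6,7,8]))
j.remove(mapping[i[0]])    # remove the value paired with the shuffled letter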
|
How do I export a two dimensional list in Python to excel?
Question: I have a list that looks like this:
[[[u'example', u'example2'], [u'example', u'example2'], [u'example', u'example2'], [u'example', u'example2'], [u'example', u'example2']], [[5.926582278481011, 10.012500000000001, 7.133823529411763, 8.257352941176471, 7.4767647058823545]]]
I want to save this list to an Excel file in the following way:
Column 1: [example, example, ..., example]
Column 2: [example2, example2, ..., example2]
Column 3: [5.926582278481011, 10.012500000000001, ..., 7.4767647058823545]
Answer: Please use below link to explore different ways:
<http://www.python-excel.org/>
xlwt is one of the ways:
<http://xlwt.readthedocs.io/en/latest/>
<https://yuji.wordpress.com/2012/04/19/python-xlwt-writing-excel-files/>
If you want to use xlwt, the code below writes out `rows`, assumed here to be a list of row lists (one inner list per spreadsheet row):

import xlwt

workbook = xlwt.Workbook()
sheet = workbook.add_sheet("Sheet")
for i in range(len(rows)):            # i is the row index
    for j in range(len(rows[i])):     # j is the column index
        sheet.write(i, j, rows[i][j])
workbook.save("test.xls")

You have to install xlwt first if you want to use the above code. For more information, please refer to the xlwt documentation.
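Assuming your nested list has exactly the shape shown in the question (a list of `[example, example2]` pairs plus a singly wrapped list of floats), you could build `rows` like this (`data` stands for your outer list):

pairs, (values,) = data
rows = [[a, b, v] for (a, b), v in zip(pairs, values)]
# each row is now [u'example', u'example2', 5.926...]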
|
Multiprocessing in python, multiple process running same instructions
Question: I'm using multiprocessing in Python for parallelizing. I'm trying to
parallelize the process on chunks of data read from an excel file using
pandas.
I'm new to multiprocessing and parallel processing. While trying it out on the simple code below,
import time;
import os;
from multiprocessing import Process
import pandas as pd
print os.getpid();
df = pd.read_csv('train.csv', sep=',',usecols=["POLYLINE"],iterator=True,chunksize=2);
print "hello";
def my_function(chunk):
print chunk;
count = 0;
processes = [];
for chunk in df:
if __name__ == '__main__':
p = Process(target=my_function,args=(chunk,));
processes.append(p);
if(count==4):
break;
count = count + 1;
The print "hello" is being executed multiple times, I'm guessing the
individual process created should work on the target rather than main code.
Can anyone suggest me where I'm wrong.
[](http://i.stack.imgur.com/i3vU6.jpg)
Answer: With start methods that spawn a fresh interpreter (the default on Windows), `multiprocessing` creates a new process and then imports the file containing the target function. Since your outermost scope has print statements (and the `read_csv` call), that module-level code gets executed once for every process, which is why you see "hello" repeated. Moving the setup under `if __name__ == '__main__':` prevents this.
By the way you should use a `Pool` instead of `Process`es directly. Here's a
cleaned up example:
from multiprocessing import Pool

import pandas as pd

NUM_PROCESSES = 4

def process_chunk(chunk):
    # do something with the chunk (this runs in a worker process)
    return chunk

if __name__ == '__main__':
    df = pd.read_csv('train.csv', sep=',', usecols=["POLYLINE"], iterator=True, chunksize=2)
    pool = Pool(NUM_PROCESSES)
    for result in pool.map(process_chunk, df):
        print result
    pool.close()    # no more tasks will be submitted
    pool.join()     # wait for the workers to finish
|
Python, looking to extract values from a masked array, then rebuild an array
Question: I’m currently writing something that involves a lot of noise I’m attempting to
remove, but in order do this I initially used masks, but the way in which I’m
analysing the data breaks using a mask.
The masking is done, I’m looking to extract the data that is not masked, run
analysis on this, then rebuild the array with the original order.
array([[3, 0, 3],
[6, 7, 2],
[2, 5, 0],
[2, 1, 4]])
Make Mask
array([[-, -, -],
[6, 7, 2],
[-, -, -],
[2, 1, 4]])
Extract Values
array([[6, 7, 2],
[2, 1, 4]])
Do analysis
Rebuild Array
array([[-, -, -],
[6, 7, 2],
[-, -, -],
[2, 1, 4]])
I’m hoping for an efficient way of doing this as I’m dealing with 100 million
data points. Any suggestions are appreciated.
Answer: You could use
masked[~masked.mask] = analyzed.ravel()
to reassign the analyzed values to the masked array.
* * *
import numpy as np

arr = np.array([[3, 0, 3],
                [6, 7, 2],
                [2, 5, 0],
                [2, 1, 4]])

masked = np.ma.masked_array(arr, mask=False)
masked.mask[::2] = True                  # mask every other row

extracted = np.ma.compress_rows(masked)  # keep only the unmasked rows
analyzed = extracted*10                  # stand-in for your analysis

masked[~masked.mask] = analyzed.ravel()  # write the results back in place
print(masked)
yields
[[-- -- --]
[60 70 20]
[-- -- --]
[20 10 40]]
|
Install/import sklearn module on Spyder 2.1? (Ubuntu)
Question: I am trying to import the sklearn module into `Spyder 2.1 (for Python 2.7)`. I
have installed it on `Anaconda` through the terminal, but when I try to import
it from the console in `Spyder` I get this error:
> ImportError: No module named sklearn.linear_model
I installed sklearn using the command from [this link](http://scikit-
learn.org/stable/install.html), and it seemed to install fine from the
terminal.
I am running Spyder and Anaconda on Ubuntu Crouton (in case that is relevant).
Answer: I think this may help: [Adding a module (Specifically pymorph) to Spyder (Python IDE)](http://stackoverflow.com/questions/10729116/adding-a-module-specifically-pymorph-to-spyder-python-ide). If you can locate where Anaconda installed sklearn, it's just a matter of pointing Spyder's path at that directory.
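A quick check from the Spyder console (the site-packages path below is hypothetical; use wherever Anaconda actually put sklearn on your machine):

import sys
sys.path.append("/home/you/anaconda2/lib/python2.7/site-packages")  # hypothetical path
from sklearn.linear_model import LinearRegression  # should import now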
|