conditional product in numpy
Question: I have a list which controls what terms in the data list have to be multiplied
control_list = [1, 0, 1, 1, 0]
data_list = [5, 4, 5, 5, 4]
I need to find product of elements in the `data_list` for which the
`control_list` has `1`. My current attempt is naive and looks ugly!
product = 1
for i in range(len(control_list)):
if control_list[i]:
product *= data_list[i]
I looked at `numpy.where()` to get the required elements in `data_list` but it
looks like I did not get it right:
numpy.where(control_list, data_list)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-12-1534a6838544> in <module>()
----> 1 numpy.where(control_list, data_list)
ValueError: either both or neither of x and y should be given
My question is, can I do this somehow using numpy more efficiently?
Answer: Try this out. You can convert `control_list` to a boolean array and use it to index into `data_list`; then `np.prod` gives the product of the selected values.
>>> import numpy as np
>>> cList = np.array(control_list, dtype=bool)  # np.bool is a deprecated alias of plain bool
>>> cList
array([ True, False,  True,  True, False])
>>> data_list = np.array(data_list)
>>> data_list[cList]  # numpy supports boolean (fancy) indexing
array([5, 5, 5])
>>> np.prod(data_list[cList])  # np.product is a deprecated alias of np.prod
125
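For completeness, the original `numpy.where` attempt can be repaired as well: the `ValueError` came from passing only one of `x`/`y`. With the three-argument form you supply a fallback value, and since `1` is the multiplicative identity, the whole thing becomes a one-liner (a sketch, assuming plain Python lists as in the question):

```python
import numpy as np

control_list = [1, 0, 1, 1, 0]
data_list = [5, 4, 5, 5, 4]

# where(cond, x, y): take x where cond is truthy, y (the identity 1) elsewhere,
# then multiply everything together.
product = np.prod(np.where(control_list, data_list, 1))
print(product)  # -> 125
```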
|
Python regex dealing with "?"
Question: I know it's a pretty simple question. I happened to look at a regex example.
import re
pattern = r'^M?M?M?$'
s = "MDM"
re.search(pattern, s)
May I know why it doesn't match the string `s`? AFAIK, `?` specifies 0 or 1
occurrences, and the pattern does match `'MMM'`.
However, the same string matches when the pattern is `r'M?M?M?$'` or
`r'^M?M?M?'`. I don't see what makes the difference here. Could someone
please explain?
Answer: `r'^M?M?M?$'` is the same as `r'^M{0,3}$'`. So your pattern accepts only the
strings `''`, `'M'`, `'MM'`, `'MMM'`.
`r'M?M?M?$'` is the same as `r'M{0,3}$'` and actually matches every string,
since there is always an empty substring at the end of the string:
In [21]: pattern = r'M?M?M?$'
In [22]: re.search(pattern, 'A string without capital m at all')
Out[22]: <_sre.SRE_Match object; span=(33, 33), match=''>
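To make the difference concrete, here is a small check of the fully anchored versus end-anchored patterns against the strings from the question:

```python
import re

# Fully anchored: the whole string must consist of zero to three 'M's.
assert re.search(r'^M?M?M?$', 'MDM') is None       # the 'D' can never match
assert re.search(r'^M?M?M?$', 'MMM') is not None

# Anchored only at the end: a (possibly empty) run of 'M's before the end
# always exists, so every string matches.
m = re.search(r'M?M?M?$', 'MDM')
print(m.span())  # -> (2, 3): it grabbed the final 'M'
```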
|
Flask-PyMongo and init_app()
Question: I'm doing a clean-up of my development environment. I have code that was
running fine, but I wanted to remove any conflict between the various mongo
drivers. However now I'm perplexed by the error generated from the following
set-up
<app.py>
from database import mongo
app = Flask(__name__)
app.config.from_object('config')
mongo.init_app(app)
and `<database.py>`
from flask.ext.pymongo import PyMongo
mongo = PyMongo()
gives the following error:
mongo.init_app(app)
File "/home/x/venv/local/lib/python2.7/site-packages/flask_pymongo/__init__.py", line 232, in init_app
cx = connection_cls(*args, **kwargs)
File "/home/x/venv/local/lib/python2.7/site-packages/pymongo/mongo_client.py", line 342, in __init__
for k, v in keyword_opts.items())
File "/home/x/venv/local/lib/python2.7/site-packages/pymongo/mongo_client.py", line 342, in <genexpr>
for k, v in keyword_opts.items())
File "/home/x/venv/local/lib/python2.7/site-packages/pymongo/common.py", line 465, in validate
value = validator(option, value)
File "/home/x/venv/local/lib/python2.7/site-packages/pymongo/common.py", line 107, in raise_config_error
raise ConfigurationError("Unknown option %s" % (key,))
pymongo.errors.ConfigurationError: Unknown option auto_start_request
in my requirements.txt I have: `Flask-PyMongo==0.3.1`
Answer: You are probably running `PyMongo>=3.0`.
The `auto_start_request` client option [was
removed](https://github.com/mongodb/mongo-python-driver/blob/22fd629968527edcacf08322df60a4bf92572f65/doc/changelog.rst#changes-in-version-30) in the 3.0 release, and `Flask-PyMongo` only stopped passing it as of version `0.4.1`.
So you should either upgrade `Flask-PyMongo` or downgrade the `PyMongo` package.
|
Are there any Kimonolabs alternatives?
Question: Recently kimonolabs announced they will be shutting down, which is a major
let-down, as my app relies heavily on this service for getting data. It's
really disappointing that they're just shutting this service down. I've been using
import.io in the meantime, but it's nowhere near the standard of Kimono and
is missing some features.
I was wondering if there are any services similar to Kimono that have
the following features:
* Scheduled crawls, i.e. schedule a crawl every 24 hours; alternatively, call a link to update the latest data for a crawl.
* Bulk or single-URL crawls, i.e. enter a list or a single URL to scrape.
* Call a link to get the results of the crawl in JSON.
* Use a single API key to make a call for the API.
* It's free for most of these features.
Alternatively, I may be tempted to create my own; it's just that I don't want to
increase my dev time learning Node.js or Python, which is why I'm asking this
question.
Answer: If you are looking for a desktop app, Data Scraping Studio has the same
features as Kimono, plus more. Or you may install it on a Windows server to make
your own Kimono++.
[](http://i.stack.imgur.com/X3QfL.png)
FYI: we also plan to launch a hosted solution and REST API by April 2016.
You may see more details on the website (www.datascraping.co)
Disclosure: I'm one of the founding members
|
Cannot transfer pixels to an image in the right shape when porting cv code to cv2
Question: Recently I have been trying to do some image processing for my work. Unfortunately,
I keep failing to port my old C++ code (old `cv` API) to Python code with `cv2`.
It doesn't work very well... Can anyone help me?
Original C++ Code:
#define IMAGE_WIDE 40
#define IMAGE_LENGTH 30
#define CHANNELS 3
DNN_image_out = cvCreateImage(cvSize(IMAGE_WIDE, IMAGE_LENGTH), IPL_DEPTH_8U, 3);
for(int k = 0; k < IMAGE_LENGTH; k++){ // rows (vertical)
for(int l = 0; l < IMAGE_WIDE; l++){ // columns (horizontal)
DNN_image_out[i]->imageData[(k * IMAGE_WIDE + l)*3 +0] = DNN_image_tmp[(k * IMAGE_WIDE + l)*3 + 0 ];
DNN_image_out[i]->imageData[(k * IMAGE_WIDE + l)*3 +1] = DNN_image_tmp[(k * IMAGE_WIDE + l)*3 + 1 ];
DNN_image_out[i]->imageData[(k * IMAGE_WIDE + l)*3 +2] = DNN_image_tmp[(k * IMAGE_WIDE + l)*3 + 2 ];
}
}
* * *
My Python CV2 code:
import numpy as np
import cv2
def split_channel3(array,width,height):
R=[]
G=[]
B=[]
for k in range(height):
for l in range(width):
R.append(array[(k * width + l)*3 +0])
G.append(array[(k * width + l)*3 +1])
B.append(array[(k * width + l)*3 +2])
R = np.asarray(R)
G = np.asarray(G)
B = np.asarray(B)
return [R,G,B]
[R,G,B] = split_channel3(img,40,30)
R = R.reshape(40,30,1)
G = G.reshape(40,30,1)
B = B.reshape(40,30,1)
Color_img = np.dstack((R,G))
Color_img = np.dstack((Color_img,B))
cv2.imshow('image',Color_img)
cv2.waitKey(0)
Is my logic wrong? Or what should I change in python code?
Answer: You can simply use `cv2.split`, without the need for your custom function or
`reshape`:
B,G,R = cv2.split(img)
and then, if needed, merge the channels back with:
Color_img = cv2.merge((B,G,R))
Remember that the channel order is B,G,R by default in OpenCV, not R,G,B.
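If the input really is a flat interleaved pixel buffer like `imageData` in the C++ code, there is an even shorter route: a single `reshape` (note that NumPy orders axes as rows/height first, which is what the original `reshape(40,30,1)` got backwards). A sketch with synthetic data standing in for the real buffer:

```python
import numpy as np

HEIGHT, WIDTH, CHANNELS = 30, 40, 3   # rows (height) come first in NumPy

# Stand-in for the flat interleaved byte buffer from the C++ code.
flat = np.arange(HEIGHT * WIDTH * CHANNELS, dtype=np.uint8)

# One reshape replaces the whole per-pixel copy loop.
color_img = flat.reshape(HEIGHT, WIDTH, CHANNELS)
print(color_img.shape)  # -> (30, 40, 3)
```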
|
Configuring either threading or multiprocessing to run multiple scripts python
Question: I am trying to run multiple scripts. I have one master script in which I just
replace the name and run.
The threading method I'm trying looks like this:
from threading import Thread
import sys
sys.path.append('/python/loanrates/master')
names = ['BTS', 'ETH', 'CLAM']#, 'DOGE', 'FCT', 'MAID', 'STR', 'XMR', 'XRP' ]
threads = []
for name in names:
sys.path.append('/python/loanrates/'+name)
import Master
for name in names:
T = Thread(target=Master.main(name))
print T
threads.append(T)
for thread_ in threads:
thread_.start()
for thread_ in threads:
thread_.join()
But this only starts the first script, i.e. the first name in `names`, `'BTS'`.
Using multiprocessing seems a lot simpler, but this time it doesn't recognize
`Pool`:
import multiprocessing
import Master
pool = Pool(processes= 2)
names = ['BTS', 'ETH']#, 'CLAM', 'DOGE', 'FCT', 'MAID', 'STR', 'XMR', 'XRP' ]
pool.map(Master.main(), names)
Which would you recommend and what do I need to change the code to for it to
work ?
Answer: You want to send a function and its arguments to the thread, not actually
call it in your context.
So change this:
T = Thread(target=Master.main(name)) # this actually calls Master.main(name) right here
To this:
# send Master.main and its arguments to the thread
T = Thread(target=Master.main, args=(name,))
Same goes for `Pool.map`:
Change this:
pool.map(Master.main(), names)
To this:
pool.map(Master.main, names)
(As for `Pool` not being recognized: with plain `import multiprocessing` you must write `multiprocessing.Pool(processes=2)`, or use `from multiprocessing import Pool`.)
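Applying that fix end-to-end, the threading version can be sketched as a self-contained script (with a trivial stand-in for `Master.main`, since that module isn't shown in the question):

```python
from threading import Thread

results = {}

def main(name):
    # Stand-in for Master.main(name); the real one parses loan rates.
    results[name] = 'done'

names = ['BTS', 'ETH', 'CLAM']

# Pass the function and its args; never call main(name) in Thread(...).
threads = [Thread(target=main, args=(name,)) for name in names]

for t in threads:
    t.start()      # all three start before any join blocks
for t in threads:
    t.join()

print(results)
```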
|
Python changing file name
Question: My new application offers the user the ability to export the results. My
application exports text files named Exp_Text_1, Exp_Text_2, etc. If a file
with the same name already exists on the Desktop, I want to start counting from
that number upwards. For example, if a file named Exp_Text_3 is already
on the Desktop, I want the file that will be created to be named Exp_Text_4.
This is my code:
if len(str(self.Output_Box.get("1.0", "end"))) == 1:
self.User_Line_Text.set("Nothing to export!")
else:
import os.path
self.txt_file_num = self.txt_file_num + 1
file_name = os.path.join(os.path.expanduser("~"), "Desktop", "Exp_Txt" + "_" + str(self.txt_file_num) + ".txt")
file = open(file_name, "a")
file.write(self.Output_Box.get("1.0", "end"))
file.close()
self.User_Line_Text.set("A text file has been exported to Desktop!")
Answer: you likely want `os.path.exists`:
>>> import os
>>> help(os.path.exists)
Help on function exists in module genericpath:
exists(path)
Test whether a path exists. Returns False for broken symbolic links
a very basic example would be to create a file name with a formatting mark so we can
insert the number for multiple checks:
import os
name_to_format = os.path.join(os.path.expanduser("~"), "Desktop", "Exp_Txt_{}.txt")
#the "{}" is a formatting mark so we can do file_name.format(num)
num = 1
while os.path.exists(name_to_format.format(num)):
num+=1
new_file_name = name_to_format.format(num)
this would check each filename starting with `Exp_Txt_1.txt` then
`Exp_Txt_2.txt` etc. until it finds one that does not exist.
However the format mark may cause a problem if curly brackets `{}` are part of
the rest of the path, so it may be preferable to do something like this:
import os
def get_file_name(num):
return os.path.join(os.path.expanduser("~"), "Desktop", "Exp_Txt_" + str(num) + ".txt")
num = 1
while os.path.exists(get_file_name(num)):
num+=1
new_file_name = get_file_name(num)
* * *
EDIT: answer to **why don't we need the `get_file_name` function in the first
example?**
First off, if you are unfamiliar with `str.format` you may want to look at
[Python doc - common string
operations](https://docs.python.org/2/library/string.html#format-string-syntax) and/or this simple example:
text = "Hello {}, my name is {}."
x = text.format("Kotropoulos","Tadhg")
print(x)
print(text)
The path string is figured out with this line:
name_to_format = os.path.join(os.path.expanduser("~"), "Desktop", "Exp_Txt_{}.txt")
But it has `{}` in the place of the desired number (since we don't know what
the number should be at this point), so if the path was, for example:
name_to_format = "/Users/Tadhg/Desktop/Exp_Txt_{}.txt"
then we can insert a number with:
print(name_to_format.format(1))
print(name_to_format.format(2))
and this does not change `name_to_format`, since `str` objects are
[immutable](http://stackoverflow.com/questions/1538663/why-are-python-strings-and-tuples-are-made-immutable), so `.format` returns a new string without
modifying `name_to_format`. However, we would run into a problem if our path
was something like these:
name_to_format = "/Users/Bob{Cat}/Desktop/Exp_Txt_{}.txt"
#or
name_to_format = "/Users/Bobcat{}/Desktop/Exp_Txt_{}.txt"
#or
name_to_format = "/Users/Smiley{:/Desktop/Exp_Txt_{}.txt"
Since the formatting mark we want to use is no longer the only curly brackets
and we can get a variety of errors:
KeyError: 'Cat'
IndexError: tuple index out of range
ValueError: unmatched '{' in format spec
So you only want to rely on `str.format` when you know it is safe to use. Hope
this helps, have fun coding!
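One more escape hatch worth knowing: literal braces can be doubled (`{{` and `}}`) so that `str.format` still works even when the path itself contains curly brackets. A quick sketch with one of the problem paths from above:

```python
# Doubling the braces escapes them, so only the single {} is a formatting mark.
template = "/Users/Bob{{Cat}}/Desktop/Exp_Txt_{}.txt"
print(template.format(3))  # -> /Users/Bob{Cat}/Desktop/Exp_Txt_3.txt
```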
|
mysqldb: error with Select/execute/escape_string
Question: **executing this code on Python 2.7.10 [GCC 5.2.1 20151010] on linux2**
import flask
import MySQLdb
from MySQLdb import escape_string as thwart
username="abc"
conn = MySQLdb.connect(host="localhost",user="root", passwd="xxxxxxx", db="pythonprogramming")
c = conn.cursor()
x = c.execute("SELECT * FROM users WHERE username = (%s)", (thwart(username)))
I get the following error:
Traceback (most recent call last):
File "", line 1, in
TypeError: must be impossible, not str
**this is MySQL version on my PC**
+-------------------------+------------------------------+
| Variable_name           | Value                        |
+-------------------------+------------------------------+
| innodb_version          | 5.7.11                       |
| protocol_version        | 10                           |
| slave_type_conversions  |                              |
| tls_version             | TLSv1,TLSv1.1                |
| version                 | 5.7.11                       |
| version_comment         | MySQL Community Server (GPL) |
| version_compile_machine | x86_64                       |
| version_compile_os      | Linux                        |
+-------------------------+------------------------------+
Answer: You are aware of the dangers of SQL injection. That's good.
You even use a very secure form of `execute`: a _parametrized query_. When you
do your query this way, you do not need escaping at all; `execute` does it all
for you. Thus the solution is:
x = c.execute("SELECT * FROM users WHERE username = %s", (username,))
You would only need escaping if you did something like this (with the needed
`import`):
x = c.execute("SELECT * FROM users WHERE username = %s" % escape_string(username))
For further discussion, have a look at [Python MySQL with
variables](http://stackoverflow.com/questions/775296/python-mysql-with-variables)
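The placeholder mechanism is the same across DB-API drivers; here is a self-contained sketch using the stdlib `sqlite3` driver (whose placeholder is `?` rather than MySQLdb's `%s`) showing that the driver does the quoting for you, even for values containing quotes:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute('CREATE TABLE users (username TEXT)')

# The driver binds the value safely; note the 1-tuple, as with (username,) above.
c.execute('INSERT INTO users VALUES (?)', ("o'brien",))
c.execute('SELECT username FROM users WHERE username = ?', ("o'brien",))
rows = c.fetchall()
print(rows)  # -> [("o'brien",)]
```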
|
Either the websocket or the Tornado server goes down every time.
Question: I am new to asynchronous programming. I have been using python 3.5 asyncio for
a few days. I wanted to make a server capable of receiving data from a
websocket machine client (GPS) as well as rendering an HTML page as the browser
client for the websocket server. I have used websockets for the connection
between my machine client and server at port 8765. For rendering the webpage I
have used Tornado at port 8888 (the HTML file is at ./views/index.html). The
code works fine with only the websocket server. When I added the Tornado
server, the code behaved weirdly and I don't know why. There must be something
wrong with my asyncio usage. If I place
app = make_app()
app.listen(8888)
tornado.ioloop.IOLoop.current().start()
just before
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
, the websocket server doesn't connect. If I do the reverse, the Tornado
server doesn't run.
Please help me out, as I am new to asynchronous programming. The server.py,
index.html and client.py (the machine client) are given below.
server.py
#!/usr/bin/env python
import tornado.ioloop
import tornado.web
import asyncio
import websockets
class MainHandler(tornado.web.RequestHandler):
def get(self):
self.render("./views/index.html", title = "GPS")
def make_app():
return tornado.web.Application([
(r"/", MainHandler),
])
clients = []
async def hello(websocket, path):
clients.append(websocket)
while True:
name = await websocket.recv()
print("< {}".format(name))
print(clients)
greeting = "Hello {}!".format(name)
for each in clients:
await each.send(greeting)
print("> {}".format(greeting))
start_server = websockets.serve(hello, 'localhost', 8765)
print("Listening on *8765")
app = make_app()
app.listen(8888)
print("APP is listening on *8888")
tornado.ioloop.IOLoop.current().start()
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
client.py
#!/usr/bin/env python
import serial
import time
import asyncio
import websockets
ser =serial.Serial("/dev/tty.usbmodem1421", 9600, timeout=1)
async def hello():
async with websockets.connect('ws://localhost:8765') as websocket:
while True:
data = await retrieve()
await websocket.send(data)
print("> {}".format(data))
greeting = await websocket.recv()
print("< {}".format(data))
async def retrieve():
data = ser.readline()
return data #return the location from your example
asyncio.get_event_loop().run_until_complete(hello())
asyncio.get_event_loop().run_forever()
./views/index.html
<html>
<head>
<title>{{ title }}</title>
</head>
<body>
<script>
var ws = new WebSocket("ws://localhost:8765/"),
messages = document.createElement('ul');
ws.onopen = function(){
ws.send("Hello From Browser")
}
ws.onmessage = function (event) {
var messages = document.getElementsByTagName('ul')[0],
message = document.createElement('li'),
content = document.createTextNode(event.data);
message.appendChild(content);
messages.appendChild(message);
};
document.body.appendChild(messages);
</script>
</body>
</html>
Answer: You can only run one event loop at a time (unless you give each one its own
thread, but that's significantly more complicated). Fortunately, there's a
bridge between Tornado and asyncio to let them share the same IOLoop.
Early in your program (before any tornado-related code like `app =
make_app()`), do this:
import tornado.platform.asyncio
tornado.platform.asyncio.AsyncIOMainLoop().install()
and do not call `IOLoop.current().start()`. This will redirect all Tornado-
using components to use the asyncio event loop instead.
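The single-loop idea can be seen with plain stdlib `asyncio` alone: one event loop happily owns two servers at once, which is the same relationship the websocket server and the Tornado app need. In this sketch the ports are chosen by the OS (port 0) so nothing collides; in the question they would be 8765 and 8888:

```python
import asyncio

async def echo(reader, writer):
    # Minimal handler: echo back whatever arrives, then hang up.
    data = await reader.read(100)
    writer.write(b'echo:' + data)
    await writer.drain()
    writer.close()

async def ask(port, msg):
    reader, writer = await asyncio.open_connection('127.0.0.1', port)
    writer.write(msg)
    await writer.drain()
    reply = await reader.read()   # read to EOF: the server closes after replying
    writer.close()
    return reply

async def main():
    # Two independent servers sharing one event loop.
    s1 = await asyncio.start_server(echo, '127.0.0.1', 0)
    s2 = await asyncio.start_server(echo, '127.0.0.1', 0)
    p1 = s1.sockets[0].getsockname()[1]
    p2 = s2.sockets[0].getsockname()[1]
    replies = [await ask(p1, b'ws'), await ask(p2, b'http')]
    s1.close()
    s2.close()
    return replies

replies = asyncio.run(main())
print(replies)  # -> [b'echo:ws', b'echo:http']
```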
|
MultiThreading with a python loop
Question: I am trying to run this Python code on several threads of my processor, but I
can't find how to allocate multiple threads. I am using **python 2.7** in
Jupyter (formerly IPython). The initial code is below (and this part works
perfectly). It is a web parser which takes `x`, i.e. a URL from `my_list`,
a list of URLs, and then writes a CSV (where `out_string` is a line).
# Code without MultiThreading
my_list = ['http://stackoverflow.com/', 'http://google.com']
def main():
with open('Extract.csv', 'w') as out_file:
count_loop = 0
for x in my_list:
#================ Get title ==================#
out_string = ""
campaign = parseCampaign(x)
out_string += ';' + str(campaign.getTitle())
#================ Get Profile ==================#
if campaign.getTitle() != 'NA':
creator = parseCreator(campaign.getCreatorUrl())
out_string += ';' + str(creator.getCreatorProfileLinkUrl())
else:
pass
#================ Write ==================#
out_string += '\n'
out_file.write(out_string)
count_loop +=1
print '---- %s on %s ------- ' %(count_loop, len(my_list))
# Code with MultiThreading but not working
from threading import Thread
my_list = ['http://stackoverflow.com/', 'http://google.com']
def main(x):
with open('Extract.csv', 'w') as out_file:
count_loop = 0
for x in my_list:
#================ Get title ==================#
out_string = ""
campaign = parseCampaign(x)
out_string += ';' + str(campaign.getTitle())
#================ Get Profile ==================#
if campaign.getTitle() != 'NA':
creator = parseCreator(campaign.getCreatorUrl())
out_string += ';' + str(creator.getCreatorProfileLinkUrl())
else:
pass
#================ Write ==================#
out_string += '\n'
out_file.write(out_string)
count_loop +=1
print '---- %s on %s ------- ' %(count_loop, len(my_list))
for x in my_list:
t = Thread(target=main, args=(x,))
t.start()
t2 = Thread(target=main, args=(x,))
t2.start()
I cannot find a good way to implement more than one thread to run this piece
of code, and I am a bit confused because the documentation is not very easy to
understand. With one core, this code takes 2 hours long, multi-threading will
save me lot of time!
Answer: Well... if the answer to:
> Why would you assign two threads for the same exact task?
is:
> to run the loop faster
(see the comments of the original post), then something is pretty wrong here.
Dear OP, both of the threads will do _exactly_ the same thing! This means that
the first thread will do exactly the same work as the second.
What you can do is something like the following:
import multiprocessing
nb_cores = 2 # Put the correct amount
def do_my_process_for(this_argument):
# Add the actual code
pass
def main():
pool = multiprocessing.Pool(processes=nb_cores)
results_of_processes = [pool.apply_async(
do_my_process_for,
args=(an_argument, ),
callback=None
) for an_argument in arguments_list]
pool.close()
pool.join()
Basically, you can think each process/thread as having its own "mind". This
means that in your code the first thread will do the process defined in
`main()` for the argument `x` (taken from your iteration on your list) and the
second one will do the same task (the one in the `main()`) again for `x`.
What you need is to formulate your process as a procedure having a set of
input parameters and a set of output. Then you can create multiple processes,
to each of them give one of the desired input parameters and then the process
will execute your main routine with the proper parameter.
Hope it helps. See also the code and I think you will understand it.
Also, see:
`Pool.map` and its asynchronous counterpart `Pool.map_async`
and
`functools.partial`
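A concrete worker-pool sketch for the URL loop (hedged: `process_one` stands in for the `parseCampaign`/`parseCreator` work, which isn't shown; a thread pool is used because web parsing is I/O-bound, and the CSV lines are collected in the main thread so workers never share the file handle):

```python
from multiprocessing.pool import ThreadPool

my_list = ['http://stackoverflow.com/', 'http://google.com']

def process_one(url):
    # Stand-in for the real per-URL work (parseCampaign/parseCreator).
    return url + ';NA\n'

pool = ThreadPool(processes=2)
# Each URL is handled exactly once, in parallel.
async_results = [pool.apply_async(process_one, args=(url,)) for url in my_list]
pool.close()
pool.join()

lines = [r.get() for r in async_results]   # results come back in input order
print(lines)
```

The main thread would then write `lines` to `Extract.csv` in one go.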
|
My typing simulator runs in the python shell but not in real life?
Question: I am writing a program to simulate typing. It runs in the Python shell but
not when double-clicked. Any ideas?
My code is as follows:
import sys,time
def slow_text(str):
for letter in str:
sys.stdout.write(letter)
sys.stdout.flush
time.sleep(0.1)
print("")
slow_text('Hello')
I am using python 3.5.
Answer: You're not actually calling `sys.stdout.flush`. That line should be:
sys.stdout.flush()
Without flushing, what's actually happening is that the script delays for some
seconds with a blank console window (while the characters go into the output
buffer) and then they all appear at once and the script ends and the window
immediately closes, before you have a chance to see them.
That it worked in the Python shell was just a coincidence.
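With the call fixed, the whole script can be sketched as follows (`delay` is a parameter introduced for this sketch, set to 0 so it finishes instantly; the trailing `input()` is a common, optional trick to keep a double-clicked console window open):

```python
import sys
import time

def slow_text(text, delay=0.0):
    for letter in text:
        sys.stdout.write(letter)
        sys.stdout.flush()   # note the parentheses: this actually calls flush
        time.sleep(delay)
    print("")

slow_text('Hello')
# input("Press Enter to exit...")  # optional: keeps the window open when double-clicked
```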
|
cv2.FeatureDetector_create('SIFT') causes segmentation fault
Question: I am using opencv 2.4.11 and python 2.7 for a computer vision project. I am
trying to obtain the SIFT descriptors:
ima = cv2.imread('image.jpg')
gray = cv2.cvtColor(ima,cv2.COLOR_BGR2GRAY)
detector = cv2.FeatureDetector_create('SIFT') # or 'SURF' for that matter
descriptor = cv2.DescriptorExtractor_create('SIFT')
kpts = detector.detect(gray)
When calling the last instruction, it throws an ugly segmentation fault. I have
to use a 2.4.x version, so upgrading to the 3.x version of OpenCV to use the
SIFT or SURF methods is not an option. I previously downgraded from 3.1 using
`sudo make uninstall` and installed the current OpenCV version from scratch.
Does anyone have an idea why this happens?
Answer: Try:
import cv2
ima = cv2.imread('image.jpg')
gray = cv2.cvtColor(ima, cv2.COLOR_BGR2GRAY)
detector = cv2.SIFT()
kp1, des1 = detector.detectAndCompute(gray, None)
`detector = cv2.FeatureDetector_create('SIFT')` should also work for creating
the SIFT object.
|
Keeping socket connection alive with Python client and Node.js server
Question: I'm trying to combine Node.js with Python to create a socket connection.
The problem is that I can send data, but I can't maintain the connection.
This is my server in Node.js
var net = require('net');
var HOST = '127.0.0.1';
var PORT = 1337;
net.createServer(function(sock) {
console.log('CONNECTED: ' + sock.remoteAddress +':'+ sock.remotePort);
sock.on('data', function(data) {
console.log('DATA ' + sock.remoteAddress + ': ' + data);
sock.write('You said "' + data + '"');
});
sock.on('close', function(data) {
console.log('CLOSED: ' + sock.remoteAddress +' '+ sock.remotePort);
});
}).listen(PORT, HOST);
console.log('Server listening on ' + HOST +':'+ PORT);
and this is my client side in Python
import socket
import sys
# Create a TCP/IP socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Connect the socket to the port where the server is listening
server_address = ('localhost', 1337)
print >>sys.stderr, 'connecting to %s port %s' % server_address
sock.connect(server_address)
try:
# Send data
message = 'This is the message.'
print >>sys.stderr, 'sending "%s"' % message
sock.sendall(message)
finally:
print >>sys.stderr, 'closing socket'
sock.close()  # actually close the socket, matching the message
This works great but the client disconnects right after it has sent the data.
Ultimately, I want to be able to give user-input to send data, and also
receive data.
Any suggestions on how to do this would be appreciated.
Answer: I'll approach the user-input scenario. As of now, your program simply runs its
course and exits.
You want to be able to combine two naively blocking operations, running some
sort of input loop (e.g. `while True: data = input()`) but handle incoming
traffic as well.
The basic way to do this is to have 2 threads: one for user input and the
other for socket connections in a similar `while True: data = socket.recv(buff)`
loop. But there's another catch here, as you might block on a single connection
-- you'll have to dedicate a thread per connection. In order to avoid this,
you could use [select](http://man7.org/linux/man-pages/man2/select.2.html),
which maintains socket connections for you asynchronously.
If there's no user input, then you can just use `select` \-- that will be
sufficient to handle multiple connections in a concise manner.
Either way, I suggest you take a look at some asynchronous event-driven
frameworks that are `select`-based, such as
[asyncio](https://docs.python.org/3/library/asyncio.html) and
[Twisted](http://twisted.readthedocs.org/en/latest/core/howto/internet-overview.html).
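A tiny self-contained taste of `select`: two local socket pairs stand in for "user input" and "server traffic", and `select` reports which one actually has data, so a single thread never blocks on the wrong source:

```python
import select
import socket

# Two bidirectional local channels; only the 'server' one will have pending data.
user_rx, user_tx = socket.socketpair()
srv_rx, srv_tx = socket.socketpair()

srv_tx.sendall(b'from server')

# Block (up to 1s) until at least one of the watched sockets is readable.
readable, _, _ = select.select([user_rx, srv_rx], [], [], 1.0)
messages = [sock.recv(1024) for sock in readable]
print(messages)  # -> [b'from server']

for s in (user_rx, user_tx, srv_rx, srv_tx):
    s.close()
```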
|
Multiprocesing pool.join() hangs under some circumstances
Question: I am trying to create a simple producer/consumer pattern in Python using
`multiprocessing`. It works, but it hangs on `pool.join()`.
from multiprocessing import Pool, Queue
que = Queue()
def consume():
while True:
element = que.get()
if element is None:
print('break')
break
print('Consumer closing')
def produce(nr):
que.put([nr] * 1000000)
print('Producer {} closing'.format(nr))
def main():
p = Pool(5)
p.apply_async(consume)
p.map(produce, range(5))
que.put(None)
print('None')
p.close()
p.join()
if __name__ == '__main__':
main()
Sample output:
~/Python/Examples $ ./multip_prod_cons.py
Producer 1 closing
Producer 3 closing
Producer 0 closing
Producer 2 closing
Producer 4 closing
None
break
Consumer closing
**However** , it works perfectly when I change one line:
que.put([nr] * 100)
It is 100% reproducible on Linux system running Python 3.4.3 or Python 2.7.10.
Am I missing something?
Answer: There is quite a lot of confusion here. What you are writing is not a
producer/consumer scenario but a mess misusing another pattern
usually referred to as a "pool of workers".
The pool of workers pattern is an application of the producer/consumer one, in
which there is one producer which schedules the work and many consumers which
consume it. In this pattern, the owner of the `Pool` ends up being the producer
while the workers are the consumers.
In your example instead you have a hybrid solution where one worker ends up
being a consumer and the others act as a sort of middleware. The whole design
is very inefficient: it duplicates most of the logic already provided by the
`Pool` and, more importantly, is very error-prone. What you end up suffering
from is a [deadlock](https://en.wikipedia.org/wiki/Deadlock).
Putting an object into a `multiprocessing.Queue` is an asynchronous operation.
It blocks only if the `Queue` is full and your `Queue` has infinite size.
This means your `produce` function returns immediately therefore the call to
`p.map` is not blocking as you expect it to do. The related worker processes
instead, wait until the actual message goes through the `Pipe` which the
`Queue` uses as communication channel.
What happens next is that you terminate your consumer prematurely, as you put
the `None` "message" into the `Queue` and it gets delivered before all the lists
your `produce` function creates are properly pushed through the `Pipe`.
You notice the issue once you call `p.join` but the real situation is the
following.
* the `p.join` call is waiting for all the worker processes to terminate.
* the worker processes are waiting for the big lists to go through the `Queue`'s `Pipe`.
* as the consumer worker is long gone, nobody drains the `Pipe`, which is obviously full.
The issue does not show if your lists are small enough to go through before
you actually send the termination message to the `consume` function.
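For contrast, the pattern done safely can be sketched like this (shown with threads and the stdlib `queue` so it stays self-contained: the producer schedules all the work, there is exactly one sentinel per consumer, and `join` returns because the consumers can actually drain the queue):

```python
import queue
import threading

que = queue.Queue()
results = []

def consume():
    while True:
        element = que.get()
        if element is None:        # one sentinel stops one consumer
            break
        results.append(sum(element))

consumers = [threading.Thread(target=consume) for _ in range(2)]
for c in consumers:
    c.start()

for nr in range(5):                # the producer: schedules all the work first
    que.put([nr] * 1000)
for _ in consumers:
    que.put(None)                  # exactly one sentinel per consumer

for c in consumers:
    c.join()                       # consumers drain everything, so this returns

print(sorted(results))  # -> [0, 1000, 2000, 3000, 4000]
```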
|
Making POST request in Python
Question: This is the form I am working with.
<form name="form1" method="post" action="http://lumbininet.com.np/eservice/index.php/login/processLogin" id="login_form">
<input type="hidden" name="logout" value=0>
<table>
<tr>
<td colspan="2" class="sub-body-txt" height="10">
</td>
</tr>
<tr>
<td width="135" colspan="2" class="sub-body-txt"><p>Username</p></td>
</tr>
<tr>
<td colspan="2"><input name="username" type="text" id="username" class="sub-body-txt" size="20"></td>
</tr>
<tr>
<td colspan="2" class="sub-body-txt"><p>Password:</p></td>
</tr>
<tr>
<td colspan="2"><input name="password" type="password" id="password" AUTOCOMPLETE="off" class="sub-body-txt" size="20"></td>
</tr>
<tr>
<td>
<input type="hidden" name="port" value="110">
<input type="hidden" name="rootdir" value="">
</td>
</tr>
<tr>
<!-- <td colspan="2"><input type="button" value="Login" src="/main/icons/buttn_login.gif" style="width:100px; " name="Submit" id="login_btn"></td> -->
<td colspan="2"><input type="submit" value="Login" src=/main/icons/buttn_login.gif" style="width:100px; border-radius:2px; background:#000fff; color:white "name="Submit" id="login_btn"></td>
</tr>
<tr>
<td height="15" colspan="2" class="s-body-txt-lnk"> </td>
</tr>
</table>
</form>
It has four fields to fill in viz 'username', 'password', 'port' and
'rootdir'. I am making POST request to this page as:
import requests
proxies = {
"http":"http://heed:[email protected]:3128",
"https":"https://heed:[email protected]:3128"
}
headers = { 'Accept':'*/*',
'Accept-Language':'en-US,en;q=0.8',
'Cache-Control':'max-age=0',
'Connection':'keep-alive',
'Proxy-Authorization':'Basic ZWRjZ3Vlc3Q6ZWRjZ3Vlc3Q=',
'If-Modified-Since':'Fri, 13 Nov 2015 17:47:23 GMT',
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36'
}
with requests.Session() as c:
url = 'http://lumbininet.com.np/eservice/index.php/login'
c.get(url, proxies=proxies, headers=headers)
payload = {'username': 'myusername', 'password': 'mypassword', 'port':'110', 'rootdir':''}
c.post(url, data = payload, proxies=proxies, headers=headers)
r = c.get('http://lumbininet.com.np/eservice/index.php/login/processLogin', proxies=proxies, headers=headers)
print (r.content)
But nothing gets printed when I run this. What am I doing wrong? Is the link
I am making the POST request to wrong? And what about the field values
(particularly those of 'hidden' type)?
Please help.
Answer: According to the HTML, the `action` defined for the `form` is
`action="http://lumbininet.com.np/eservice/index.php/login/processLogin"`
Thus, your `POST` should be sent to this URL, and not the one you defined
(which was `http://lumbininet.com.np/eservice/index.php/login`)
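As for the hidden fields: the form also carries `logout`, `port` and `rootdir`, which should be submitted along with the credentials. For illustration, the body to POST to that action URL can be built with the stdlib `urlencode` (the credential values here are placeholders):

```python
from urllib.parse import urlencode

# The form's action attribute, i.e. where the POST must go:
action_url = 'http://lumbininet.com.np/eservice/index.php/login/processLogin'

# All form fields, including the hidden ones from the HTML above.
payload = {'username': 'myusername', 'password': 'mypassword',
           'logout': '0', 'port': '110', 'rootdir': ''}

body = urlencode(payload)
print(body)
```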
|
Python: How to revolve a surface around z axis and make a 3d plot?
Question: I want to get 2d and 3d plots as shown below.
The equation of the curve is given.
How can we do so in python?
I know there may be duplicates, but at the time of posting I could not find any
useful posts.
My initial attempt is like this:
# Imports
import numpy as np
import matplotlib.pyplot as plt
# to plot the surface rho = b*cosh(z/b) with rho^2 = r^2 + b^2
z = np.arange(-3, 3, 0.01)
rho = np.cosh(z) # take constant b = 1
plt.plot(rho,z)
plt.show()
Some related links are following:
[Rotate around z-axis only in
plotly](http://stackoverflow.com/questions/27677556/rotate-around-z-axis-only-
in-plotly)
The 3d-plot should look like this:
[](http://i.stack.imgur.com/u3EIn.png)
Answer: OK, so I think you are really asking how to revolve a 2D curve around an axis to
create a surface. I come from a CAD background, so that is how I explain
things, and I am not the greatest at math, so forgive any clunky terminology.
Unfortunately, you have to do the rest of the math to get all the points for
the mesh.
Here's your code:
#import for 3d
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import matplotlib.pyplot as plt
Change `arange` to `linspace`, which captures the endpoint; otherwise `arange` will be
missing the 3.0 at the end of the array:
z = np.linspace(-3, 3, 600)
rho = np.cosh(z) # take constant b = 1
Since rho is your radius at every z height, we need to calculate x,y points
around that radius. And before that we have to figure out at what positions on
that radius to get x,y coordinates:
#steps around circle from 0 to 2*pi(360degrees)
#reshape at the end is to be able to use np.dot properly
revolve_steps = np.linspace(0, np.pi*2, 600).reshape(1,600)
The trig way of getting points around a circle is:
x = r*cos(theta)
y = r*sin(theta)
For you, r is your rho, and theta is revolve_steps.
By using np.dot to do matrix multiplication, you get a 2d array back where the
rows of x's and y's will correspond to the z's:
theta = revolve_steps
#convert rho to a column vector
rho_column = rho.reshape(600,1)
x = rho_column.dot(np.cos(theta))
y = rho_column.dot(np.sin(theta))
# expand z into a 2d array that matches dimensions of x and y arrays..
# i used np.meshgrid
zs, rs = np.meshgrid(z, rho)
#plotting
fig, ax = plt.subplots(subplot_kw=dict(projection='3d'))
fig.tight_layout(pad = 0.0)
#transpose zs or you get a helix not a revolve.
# you could add rstride = int or cstride = int kwargs to control the mesh density
ax.plot_surface(x, y, zs.T, color = 'white', shade = False)
#view orientation
ax.elev = 30 #30 degrees for a typical isometric view
ax.azim = 30
#turn off the axes to closely mimic picture in original question
ax.set_axis_off()
plt.show()
#ps 600x600x600 pts takes a bit of time to render
I am not sure if it's been fixed in the latest version of matplotlib, but
setting the aspect ratio of 3d plots with:
ax.set_aspect('equal')
has not worked very well. You can find solutions at [this stack overflow
question](http://stackoverflow.com/questions/13685386/matplotlib-equal-unit-
length-with-equal-aspect-ratio-z-axis-is-not-equal-to)
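As a side note, the reshape-plus-`np.dot` dance above can also be written with `np.outer`, which builds the same product grid and may read more clearly. A small sketch on a coarse grid:

```python
import numpy as np

z = np.linspace(-3, 3, 5)            # coarse grid, just for illustration
rho = np.cosh(z)
theta = np.linspace(0, 2 * np.pi, 8)

# np.outer(a, b)[i, j] == a[i] * b[j], i.e. the same 2d x/y grids as
# rho.reshape(-1, 1).dot(np.cos(theta).reshape(1, -1))
x = np.outer(rho, np.cos(theta))
y = np.outer(rho, np.sin(theta))
print(x.shape, y.shape)  # (5, 8) (5, 8)
```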
|
what's the difference between rdd from PythonRDD and ParallelCollectionRDD
Question: I am learning how to program with Spark in Python and struggle with one
problem.
The problem is that I have a PythonRDD loaded as id and description:
pythonRDD.take(1)
## [('b000jz4hqo', ['clickart', '950', '000', 'premier', 'image', 'pack', 'dvd', 'rom', 'broderbund'])]
And ParallelCollectionRDD loaded as id and description:
paraRDD.take(1)
## [('b000jz4hqo', ['clickart', '950', '000', 'premier', 'image', 'pack', 'dvd', 'rom', 'broderbund'])]
I can do a count on the paraRDD like this:
paraRDD.map(lambda l: (l[0],len(l[1]))).reduce(lambda a,b: a[1] + b[1])
or simply
paraRDD.reduce(lambda a,b: len(a[1]) + len(b[1]))
but on pythonRDD it ran into bug, the bug says:
> "TypeError: 'int' object has no attribute '__getitem__'".
def countTokens(vendorRDD):
    return vendorRDD.map(lambda l: (l[0],len(l[1]))).reduce(lambda a,b: a[1] + b[1])
Any idea on how this happened would be appreciated?!
Answer: Difference between `PythonRDD` and `ParallelCollectionRDD` is completely
irrelevant here. Your code is just wrong.
`reduce` method takes an associative and commutative function with the
following signature:
(T, T) => T
In other words both arguments and returned object have to be of the same type
and an order of operations and a parenthesizing cannot affect the final
result. Function you pass to the `reduce` simply doesn't satisfy these
criteria.
To make it work you'll need something like this:
rdd.map(lambda l: len(l[1])).reduce(lambda x, y: x + y)
or even better:
from operator import add
rdd.values().map(len).reduce(add)
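The same constraint can be demonstrated without Spark at all, using plain Python's `functools.reduce` on made-up data shaped like the RDD elements:

```python
from functools import reduce

data = [('a', [1, 2]), ('b', [3]), ('c', [4, 5, 6])]

# Wrong: the first call returns an int, so the second call tries int[1]
# reduce(lambda a, b: len(a[1]) + len(b[1]), data)  # TypeError

# Right: map every element to an int first, then reduce ints to an int
total = reduce(lambda x, y: x + y, map(lambda l: len(l[1]), data))
print(total)  # 6
```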
|
Python Counter() adding value to existing keys
Question:
developer_base = Counter({
    'user1': {'XS': 0, 'S': 0, 'M': 0, 'L': 0, 'XL': 0},
    'user2': {'XS': 0, 'S': 0, 'M': 0, 'L': 0, 'XL': 0},
    'user3': {'XS': 0, 'S': 0, 'M': 0, 'L': 0, 'XL': 0},
    'user4': {'XS': 0, 'S': 0, 'M': 0, 'L': 0, 'XL': 0},
})
Loop to gather Counter data:
for y in time_list:
    story_size = models.get_specific_story(y['story_id'])
    if story_size is not "?":
        counts = Counter(y['minutes_spent'])
        print(counts)
        developer_base = developer_base + counts
Should `Counter` be part of a for loop? story_size always equals one of the
keys in the nested dict (S, XS, M etc). `time_list` has the ['minutes_spent']
value that needs to be added into the dictionary. The problem seems to
be that time_list has a nested dict, ['user']['first_name'], which is
equal to the developer_base keys for user1 through user4.
So I need to add up all the 'minutes_spent' in time_list for each user.
Update: JSON data
[{'project_slug': 'test', 'project_id': 19855, 'date': '2016-02-11', 'task_name': None, 'iteration_name': 'test', 'notes': '', 'user_id': 81946, 'story_id': 392435, 'iteration_id': 76693, 'story_name': 'test', 'user': {'id': 81946, 'last_name': 'test', 'first_name': 'user1', 'email': 'test', 'username': 'test'}, 'project_name': 'Development', 'id': 38231, 'minutes_spent': 240}]
The data is much larger but this is one whole section.
Answer: In the first snippet, you are abusing `Counter`. That snippet only works due
to quirks in Python 2, where one can compare dicts. The values of a counter
are supposed to be numbers.
Similarly, `y['minutes_spent']` is an integer, and
`Counter(y['minutes_spent'])` will just throw an error. In addition,
`story_size is not "?"` [does not do what you
expect](http://stackoverflow.com/q/132988/35070).
Assuming the real problem is
> add up all the 'minutes_spent' in time_list for each user.
then you can use a Counter:
from collections import Counter
c = Counter()
for y in time_list:
    c[y['user']['id']] += y['minutes_spent']
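For example, with made-up entries in the shape of the JSON sample, keying on `first_name` (which matches the user1..user4 keys of developer_base):

```python
from collections import Counter

# Made-up time entries shaped like the JSON in the question
time_list = [
    {'user': {'id': 81946, 'first_name': 'user1'}, 'minutes_spent': 240},
    {'user': {'id': 81946, 'first_name': 'user1'}, 'minutes_spent': 60},
    {'user': {'id': 12345, 'first_name': 'user2'}, 'minutes_spent': 30},
]

c = Counter()
for y in time_list:
    c[y['user']['first_name']] += y['minutes_spent']

print(c)  # Counter({'user1': 300, 'user2': 30})
```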
|
How do I get the entire selected row in a Qt proxy model?
Question: The code below is a working QTableView, which is using a QAbstractTableModel,
which is using a QSortFilterProxyModel. I've managed to figure out how to get
data out of a _single_ _cell_ of a selected row, but not the entire row at
once (e.g. as a list of strings). Please can I have some suggestions of what
to try?
(I'm using Python, but if someone knows how to do it in another language I'll
_try_ to translate it...)
from PyQt4.QtCore import *
from PyQt4.QtGui import *
import sys
class CustomTableModel(QAbstractTableModel):
    def __init__(self, cells=[[]], headers=[]):
        super(CustomTableModel, self).__init__()
        self._cells = cells
        self._headers = headers

    def data(self, index, role):
        if index.isValid() and (role == Qt.DisplayRole):
            return self._cells[index.row()][index.column()]

    def rowCount(self, parent=None):
        return len(self._cells)

    def columnCount(self, parent=None):
        return len(self._headers)

    def flags(self, index):
        return Qt.ItemIsEnabled | Qt.ItemIsSelectable

class CustomSortFilterProxyModel(QSortFilterProxyModel):
    def __init__(self):
        super(CustomSortFilterProxyModel, self).__init__()

    def get_selected_row(self):
        # Unsure what to put here??
        pass

def table_clicked():
    selected_indexes = table_view.selectionModel().selectedRows()
    first_cell_selected = proxy_model.data(proxy_model.index(selected_indexes[0].row(), 0), Qt.DisplayRole).toString()
    print(first_cell_selected)
    # But rather than the above I would like to be able to do something like:
    print(proxy_model.get_selected_row())
    # and for it to print out everything in the row e.g. ['Cell 1', 'Cell 2', 'Cell 3']
app = QApplication(sys.argv)
table_data = [["Cell 1", "Cell 2", "Cell 3"], ["Cell 4", "Cell 5", "Cell 6"]]
table_headers = ["Header 1", "Header 2", "Header 3"]
model = CustomTableModel(table_data, table_headers)
proxy_model = CustomSortFilterProxyModel()
proxy_model.setDynamicSortFilter(True)
proxy_model.setSourceModel(model)
table_view = QTableView()
table_view.setModel(proxy_model)
table_view.setSelectionBehavior(QAbstractItemView.SelectRows)
table_view.setSelectionMode(QAbstractItemView.SingleSelection)
table_view.setSortingEnabled(True)
table_view.clicked.connect(table_clicked)
table_view.show()
sys.exit(app.exec_())
It might be related to [How to get Index Row number from Source
Model](http://stackoverflow.com/questions/28160584/how-to-get-index-row-
number-from-source-model)
Answer: There's no ready-made method to get the data of the full row. You'll have to
loop over the existing columns.
So basically, you can transform your existing line:
first_cell_selected = proxy_model.data(proxy_model.index(selected_indexes[0].row(), 0), Qt.DisplayRole).toString()
into a list comprehension to get the content of each individual cell:
row = selected_indexes[0].row()
row_data = [proxy_model.index(row, col).data().toString()
            for col in xrange(proxy_model.columnCount())]
By the way: A `QModelIndex` also offers a `data()` method and `Qt.DisplayRole`
is the default role here. So you can simplify
proxy_model.data(proxy_model.index(row, col), Qt.DisplayRole)
to
proxy_model.index(row, col).data()
which is somewhat easier to read.
|
Example for setting (multiple) parameters in Python LXML XSLT
Question: Looked for the solution to this problem for a while since [the
documentation](http://lxml.de/xpathxslt.html#xslt) isn't really clear on it.
I ended up using the method below, and thought I'd share back.
Answer: Apparently you can [chain parameter
arguments](https://docs.python.org/2/tutorial/controlflow.html#keyword-
arguments) when applying the XSLT to the original xml tree. I found the most
reliable way is to always use the `etree.XSLT.strparam()` method for wrapping
the argument values. It's probably not needed for simpler types like strings
or integers, but this method works regardless.
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<xsl:output method="xml" omit-xml-declaration="no"/>
<xsl:param name="var1"/>
<xsl:param name="var2"/>
<!-- actual sheet omitted because example -->
</xsl:stylesheet>
from lxml import etree
var = "variable string"
original_tree = etree.parse("original.xml")
xslt_tree = etree.parse("transform.xsl")
xslt = etree.XSLT(xslt_tree)
lom_tree = xslt(original_tree, var1=etree.XSLT.strparam("str_example"), var2=etree.XSLT.strparam(var))
print(etree.tostring(lom_tree, pretty_print=True))
|
Python: the fastest way to translate numpy string array to a number array
Question: Can anyone tell me the fastest way to translate this string array into
a number array, as below:
import numpy as np
strarray = np.array([["123456"], ["654321"]])
to
numberarray = np.array([[1,2,3,4,5,6], [6,5,4,3,2,1]])
Mapping str to list and then mapping str to int is too slow for a large array!
Please help!
Answer: You can split the strings into single characters with the array `view` method:
In [18]: strarray = np.array([[b"123456"], [b"654321"]])
In [19]: strarray.dtype
Out[19]: dtype('S6')
In [20]: strarray.view('S1')
Out[20]:
array([['1', '2', '3', '4', '5', '6'],
['6', '5', '4', '3', '2', '1']],
dtype='|S1')
See
[here](http://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html#specifying-
and-constructing-data-types) for data type character codes.
Then the most obvious next step is to use `astype`:
In [23]: strarray.view('S1').astype(int)
Out[23]:
array([[1, 2, 3, 4, 5, 6],
[6, 5, 4, 3, 2, 1]])
However, it's a lot faster to reinterpret (view) the memory underlying the
strings as single byte integers and subtract 48. This works because ASCII
characters take up a single byte and the characters `'0'` through `'9'` are
binary equivalent to (u)int8's 48 through 57 (check the [`ord`
builtin](https://docs.python.org/2/library/functions.html#ord)).
Speed comparison:
In [26]: ar = np.array([[''.join(np.random.choice(list('123456789'), size=320))] for _ in range(1000)], bytes)
In [27]: %timeit _ = ar.view('S1').astype(np.uint8)
1 loops, best of 3: 284 ms per loop
In [28]: %timeit _ = ar.view(np.uint8) - ord('0')
1000 loops, best of 3: 1.07 ms per loop
If you have Unicode instead of ASCII, you need to do these steps slightly
differently, or just convert to ASCII first with `astype(bytes)`.
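For instance, on Python 3, where plain string literals produce Unicode arrays, the `astype(bytes)` route looks like this (a sketch):

```python
import numpy as np

# On Python 3, plain string literals give a Unicode ('<U6') array
ustr = np.array([["123456"], ["654321"]])

# Convert to ASCII bytes first, then reinterpret the buffer as uint8;
# viewing 'S6' as uint8 turns the last axis into 6 single-byte columns
b = ustr.astype(bytes)                 # dtype 'S6'
digits = b.view(np.uint8) - ord('0')   # shape becomes (2, 6)
print(digits.tolist())  # [[1, 2, 3, 4, 5, 6], [6, 5, 4, 3, 2, 1]]
```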
|
Python 3.4.3 & Bottle with CGI - environ['REQUEST_METHOD']
Question: I am trying to use Python 3.4.3 and Bottle 0.12.8 to run a simple web service
using CGI. I am running the below script from my Linux system. I was able to
run the same service without CGI.
======================================
import bottle
from bottle import route, run, request, response
host = 'XXXX'
port = '8080'
debug = 'False'
@route('/hello/', method=['OPTIONS','GET'])
def hello():
    return("Success")
bottle.run(host=host, port=port,debug=debug,server='cgi')
#bottle.run(host=host, port=port,debug=debug)
======================================
I get the below error when I run the service with CGI:
Traceback (most recent call last):
File "/usr/local/lib/python3.4/site-packages/bottle.py", line 858, in _handle
route, args = self.router.match(environ)
File "/usr/local/lib/python3.4/site-packages/bottle.py", line 413, in match
verb = environ['REQUEST_METHOD'].upper()
KeyError: 'REQUEST_METHOD'
<h1>Critical error while processing request: </h1><h2>Error:</h2>
<pre>
KeyError('REQUEST_METHOD',)
</pre>
<h2>Traceback:</h2>
<pre>
Traceback (most recent call last):
File "/usr/local/lib/python3.4/site-packages/bottle.py", line 957, in wsgi
or environ['REQUEST_METHOD'] == 'HEAD':
KeyError: 'REQUEST_METHOD'
</pre>
Status: 500 INTERNAL SERVER ERROR
Content-Type: text/html; charset=UTF-8
Content-Length: 374
<h1>Critical error while processing request: </h1><h2>Error:</h2>
<pre>
KeyError('REQUEST_METHOD',)
</pre>
<h2>Traceback:</h2>
<pre>
Traceback (most recent call last):
File "/usr/local/lib/python3.4/site-packages/bottle.py", line 957, in wsgi
or environ['REQUEST_METHOD'] == 'HEAD':
KeyError: 'REQUEST_METHOD'
</pre>
Any pointers would help. Thanks
Answer: You're getting this error because the [basic CGI HTTP server in
Python](https://hg.python.org/cpython/file/3.4/Lib/http/server.py) only
supports GET, HEAD and POST commands. CGI server is throwing the KeyError when
it attempts to validate the OPTIONS command. If you want to use the OPTIONS
method in your server code, you'll need to implement it yourself or switch to a
different server.
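You can see this limitation directly in the standard library (Python 3 shown; a sketch):

```python
from http.server import CGIHTTPRequestHandler

# The handler only defines do_GET, do_HEAD and do_POST; any other verb
# (like OPTIONS) has no matching do_* method, so the request is rejected
# before your WSGI app ever sees a usable REQUEST_METHOD.
verbs = [m for m in dir(CGIHTTPRequestHandler) if m.startswith('do_')]
print(verbs)  # ['do_GET', 'do_HEAD', 'do_POST']
```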
|
Running Django 1.9 on CentOS 7/Apache 2.4/Python 3.4
Question: I have successfully created my Django (1.9) site on my computer and am now
trying to move it to a web server (CentOS 7). After sitting a whole day
searching the web, I have found many guides on how to do this, but in the
midst of all of this I have probably confused some of them together, since it
seems there is no "one way" to make Django run on a webserver.
After a long struggle, I have actually managed to get Apache (2.4.6)
running, but I am now seeing an Internal 500 error. It took a while, but I
found the log files. For other readers: in my case they were in
/etc/httpd/logs/error_log.
[Wed Feb 24 18:00:05.475116 2016] [mpm_prefork:notice] [pid 4641] AH00163: Apache/2.4.6 (CentOS) mod_wsgi/3.4 Python/2.7.5 configured -- resuming normal operations
[Wed Feb 24 18:00:05.475162 2016] [core:notice] [pid 4641] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
[Wed Feb 24 18:00:12.867329 2016] [:error] [pid 4642] [client x:54699] mod_wsgi (pid=4642): Target WSGI script '/var/www/sites/mysite.com/mysite/wsgi.py' cannot be loaded as Python module.
[Wed Feb 24 18:00:12.867484 2016] [:error] [pid 4642] [client x:54699] mod_wsgi (pid=4642): Exception occurred processing WSGI script '/var/www/sites/mysite.com/mysite/wsgi.py'.
[Wed Feb 24 18:00:12.867570 2016] [:error] [pid 4642] [client x:54699] Traceback (most recent call last):
[Wed Feb 24 18:00:12.867664 2016] [:error] [pid 4642] [client x:54699] File "/var/www/sites/mysite.com/mysite/wsgi.py", line 12, in <module>
[Wed Feb 24 18:00:12.868020 2016] [:error] [pid 4642] [client x:54699] from django.core.wsgi import get_wsgi_application
[Wed Feb 24 18:00:12.868109 2016] [:error] [pid 4642] [client x:54699] ImportError: No module named django.core.wsgi
I assume I need some kind of reference to the Django code, though I cannot
figure out why, how or where to put this.
Out of interest of any future readers, and also to see what I have done, I
will try to retrace my steps I have done to be able to see what I might have
missed or done incorrectly.
NB: I am not using virtualenv to run my Django project.
1. Installed python 3.4
2. Installed pip for python 3.4 (using curl)
3. Installed Apache 2.4 (from yum)
4. Installed mod_wsgi (from yum) (many sites said to compile it directly from code, did not venture into this, anyone recommending highly to do this?)
5. Installed Django with pip (NB, without virtualenv)
After the above was done, I used SVN client to checkout my code on the server
to the folder /var/www/sites/mysite.com. Folder structure looks like this.
(Note: still using SQLite; did not get to the migration to PostgreSQL yet.
That is the next step, once I see my site online.)
(Side note: I spent a lot of time figuring out where to put the Django code,
since everywhere I looked it was placed differently. I ended up deciding to
put it directly in /var/www since it is the site code, and it is needed
here, it seems. Any comments are welcome.)
+---var
| \---www
| \---sites
| \---static
| \---mysite.com
| +---db.sqlite3
| +---manage.py
| \---.svn
| \---mysite
| +---settings.py
| +---urls.py
| +---wsgi.py
| +---...
| \---mysiteapp
| +---urls.py
| +---admin.py
| +---...
I have used "sudo python3.4 manage.py collectstatic" to move the static files
to the /var/www/sites/static/ folder. Since I did not want this to be inside
the folder where my .svn files are. I could ignore the folder, but for now
this is how it is.
The Apache installations is pretty much standard, I have changed a few things,
but not something which should have an impact, as far as I am concerned, so I
am just showing here the conf file I am using in the "/etc/httpd/conf.d"
folder. (please note I have replace name of the project with mysite)
WSGIPythonPath /var/www/sites/mysite.com
ServerName sub.server.com
<VirtualHost *:80>
    Alias /static/ /var/www/sites/static/
    <Directory /var/www/sites/static/>
        Options -Indexes
        Require all granted
    </Directory>
    WSGIScriptAlias / /var/www/sites/mysite.com/mysite/wsgi.py
    <Directory /var/www/sites/mysite.com/mysite>
        <Files wsgi.py>
            Require all granted
        </Files>
    </Directory>
</VirtualHost>
My wsgi.py file is the standard one which Django creates when creating the
initial project. This works with Djangos own web server when running on my
computer, could not see if I might have to change something here to make it
work when using Apache?
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
application = get_wsgi_application()
For the interested I have also included the settings.py file, to see what is
here.
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.9/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = <removed>
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
    'mysiteapp.apps.mysiteappConfig',
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]
MIDDLEWARE_CLASSES = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'mysite.urls'
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [os.path.join(BASE_DIR, 'templates')],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]
WSGI_APPLICATION = 'mysite.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.9/ref/settings/#databases
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}
# Password validation
# https://docs.djangoproject.com/en/1.9/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
    {
        'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
    },
]
# Internationalization
# https://docs.djangoproject.com/en/1.9/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'CET'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Logging
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse'
        }
    },
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'filters': ['require_debug_false'],
            'class': 'django.utils.log.AdminEmailHandler'
        },
        'logfile': {
            'class': 'logging.handlers.WatchedFileHandler',
            'filename': '/var/log/django/error.log'
        },
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
        'django': {
            'handlers': ['logfile'],
            'level': 'ERROR',
            'propagate': False,
        },
    }
}
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.9/howto/static-files/
STATIC_ROOT = '/var/www/sites/static/'
STATIC_URL = '/static/'
I hope someone can point me in the right direction. I appreciate any help I
can get, or comments to the setup. After a full day of trying to make this
work, and searching the web, I really miss a good guide on how to make Django
run on the server side. There are so many guides, but you need to combine a
lot of different once to get the full picture, since each guide makes so many
assumptions, and you more or less have to have prior knowledge to use them.
And when you do combine the guides, each guide is doing it a bit differently,
making your brain work overtime to piece it all together. :)
Answer: _(Posted on behalf of OP)._
Sometimes you just need to step away from things and view them from a different
angle. It was just a reference that was missing. In the conf I needed to also
add the reference to the Django libs. To the WSGIPythonPath I added
"/usr/lib64/python3.4/site-packages:", so it now looks like the below. And
then it all worked. I hope this at least now will help someone else out there.
WSGIPythonPath /usr/lib64/python3.4/site-packages:/var/www/sites/mysite
If anyone stumbles across this post, and feels like commenting on any of the
other questions I have asked, please feel free. I would still like to know, if
my approach could be improved, as this is just a staging server, and I have to
do it all again for production. Might as well learn to do it better.
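One way to find the right site-packages path for an interpreter, rather than hard-coding it, is to ask Python itself. A sketch; run it with the same python binary that mod_wsgi embeds:

```python
import sysconfig

# 'purelib' is where pip installs pure-Python packages (e.g. Django)
# for this interpreter; this is the path that belongs in WSGIPythonPath
purelib = sysconfig.get_paths()['purelib']
print(purelib)
```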
|
python's ggplot does not use year number as label on axis
Question: In the following MWE, my `year` variable is shown on the x-axis as 0 to 6
instead of the actual year number. Why is this?
import pandas as pd
from pandas_datareader import wb
from ggplot import *
dat = wb.download(
indicator=['BX.KLT.DINV.CD.WD', 'BX.KLT.DINV.WD.GD.ZS'],
country='CN', start=2005, end=2011)
dat.reset_index(inplace=True)
print ggplot(aes(x='year', y='BX.KLT.DINV.CD.WD'),
data=dat) + \
geom_line() + theme_bw()
Answer: All you need to do is convert the `year` column from an `object` `dtype` to
`datetime64`:
dat['year'] = pd.to_datetime(dat['year'])
[](http://i.stack.imgur.com/3fHgf.png)
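A minimal illustration of the dtype change with a made-up frame in place of the `wb.download` result:

```python
import pandas as pd

# Year labels arrive as strings, like wb.download returns them
dat = pd.DataFrame({'year': ['2005', '2006', '2007'],
                    'value': [1.0, 2.0, 3.0]})
assert dat['year'].dtype == object      # plotted as categories 0..n-1

dat['year'] = pd.to_datetime(dat['year'])
print(dat['year'].dtype)  # datetime64[ns]
```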
|
Pyinstaller compile to exe
Question: I am trying to compile a Kivy application to a Windows exe, but I keep
receiving an attribute error: AttributeError: 'str' object has no attribute
'items'
I have compiled other applications, and followed the instructions line for
line per the [kivy page](https://kivy.org/docs/guide/packaging-windows.html)
(completing the demo), but when I try to do the same to my application I
receive the above error. I'm not sure where to go; I've been trying for several
hours now and I can't seem to make any headway. Any help would be greatly
appreciated.
Edit: Below is the tail of the stack trace, the whole thing is long and so I
pasted in what I think may be relevant, but frankly I'm a bit out of my depth
here :)
6363 WARNING: stderr: File "c:\python27\lib\site-packages\PyInstaller\depend\a
nalysis.py", line 198, in _safe_import_module
hook_module.pre_safe_import_module(hook_api)
6375 WARNING: stderr: hook_module.pre_safe_import_module(hook_api)
File "c:\python27\lib\site-packages\PyInstaller\hooks\pre_safe_import_module\
hook-six.moves.py", line 55, in pre_safe_import_module
6378 WARNING: stderr: File "c:\python27\lib\site-packages\PyInstaller\hooks\pr
e_safe_import_module\hook-six.moves.py", line 55, in pre_safe_import_module
for real_module_name, six_module_name in real_to_six_module_name.items():
6388 WARNING: stderr: for real_module_name, six_module_name in real_to_six_m
odule_name.items():
AttributeError: 'str' object has no attribute 'items'
6396 WARNING: stderr: AttributeError: 'str' object has no attribute 'items'
My Spec:
# -*- mode: python -*-
from kivy.deps import sdl2, glew
block_cipher = None
a = Analysis(['face.py'],
             pathex=['c:\\Users\\Home\\PycharmProjects\\MSICheck\\Images'],
             binaries=None,
             datas=None,
             hiddenimports=['sqlite3','kivy.app','six','packaging','packaging.version','packaging.specifiers'],
             hookspath=[],
             runtime_hooks=[],
             excludes=[],
             win_no_prefer_redirects=False,
             win_private_assemblies=False,
             cipher=block_cipher)
pyz = PYZ(a.pure, a.zipped_data,
          cipher=block_cipher)
exe = EXE(pyz,
          a.scripts,
          exclude_binaries=True,
          name='face',
          debug=True,
          strip=False,
          upx=True,
          console=True )
coll = COLLECT(exe, Tree('c:\\Users\\Home\\PycharmProjects\\MSICheck\\Images\\'),
               a.binaries,
               a.zipfiles,
               a.datas,
               *[Tree(p) for p in (sdl2.dep_bins + glew.dep_bins)],
               strip=False,
               upx=True,
               name='face')
EDIT: Apparently it has nothing to do with Kivy, as I have rewritten the front
end to use Tkinter and I'm still having the issue.
Answer: I have faced a similar error when using PyInstaller. Part of my error
message is shown below:
File "C:\Python27\lib\site-packages\pyinstaller-3.1.1-py2.7.egg\PyInstaller\depend\analysis.py", line 198, in _safe_import_module
hook_module.pre_safe_import_module(hook_api)
File "C:\Python27\lib\site-packages\pyinstaller-3.1.1-py2.7.egg\PyInstaller\hooks\pre_safe_import_module\hook-six.moves.py", line 55, in pre_safe_import_module
for real_module_name, six_module_name in real_to_six_module_name.items():
AttributeError: 'str' object has no attribute 'items'
When I scrolled up in the message, I found this:
18611 INFO: Processing pre-find module path hook distutils
20032 INFO: Processing pre-safe import module hook _xmlplus
23532 INFO: Processing pre-safe import module hook six.moves
Traceback (most recent call last):
File "<string>", line 2, in <module>
ImportError: No module named six
So I installed the six module, and once it was installed, my PyInstaller build
ran successfully.
Hope this can help you.
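A quick, hypothetical pre-flight check for whether the interpreter PyInstaller uses can actually import six:

```python
import importlib.util

# find_spec returns None if the module cannot be located by this interpreter
spec = importlib.util.find_spec("six")
print("six is installed" if spec else "six is missing -> pip install six")
```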
|
Spyder crashes with error: "It seems the kernel died unexpectedly. Use 'Restart kernel' to continue using this console"
Question: I have a 64-bit Windows 7 machine. I am using Spyder 2.3.8 with Python 2.7 and
Matplotlib 1.4.2 (I tried Matplotlib 1.5.1 and got the same error).
Every time I import matplotlib and then try to plot with it, a window will pop
up and has a few times displayed the figure, but more often than not I get
the restart-kernel error.
The code is super simple:
from matplotlib import pyplot
x_values = [0, 4, 7, 20, 22, 25]
y_values = [0, 2, 4, 8, 16, 32]
pyplot.plot(x_values, y_values, "o-")
pyplot.ylabel("Value")
pyplot.xlabel("Time")
pyplot.title("Test plot")
pyplot.show()
Answer: You can solve this issue by updating matplotlib; in my case that fixed it.
To update matplotlib, just type `conda update matplotlib` in your terminal.
Hope it works!
|
Removing punctuation except intra-word dashes Python
Question: There is already a close
[answer](http://stackoverflow.com/questions/24550620/removing-punctuation-
except-for-apostrophes-and-intra-word-dashes-in-r) in R,
`gsub("[^[:alnum:]['-]", " ", my_string)`, but it does not work in Python:
my_string = 'compactified on a calabi-yau threefold @ ,.'
re.sub("[^[:alnum:]['-]", " ", my_string)
gives `'compactified on a calab yau threefold @ ,.'`
So not only does it remove the intra-word dash, it also removes the last
letter of the word preceding the dash, and it does not remove punctuation.
Expected result (string without any punctuation except the intra-word dash):
`'compactified on a calabi-yau threefold'`
Answer: R uses the TRE (POSIX) or PCRE regex engine depending on the `perl` option (or
the function used). Python's `re` library uses a modified, much poorer
Perl-like flavor. Python does not support _POSIX character classes_ such as
**[`[:alnum:]`](http://www.regular-expressions.info/posixbrackets.html)**,
which matches _alpha_ (letters) and _num_ (digits).
In Python, `[:alnum:]` can be replaced with `[^\W_]` (or ASCII only
`[a-zA-Z0-9]`) and the negated `[^[:alnum:]]` \- with `[\W_]` (`[^a-zA-Z0-9]`
ASCII only version).
The `[^[:alnum:]['-]` matches _any 1 symbol other than alphanumeric (letter or
digit),`[`, `'`, or `-`_. **That means the R question you refer to does not
provide a correct answer**.
You can use the [following solution](http://ideone.com/m2OdLQ):
import re
p = re.compile(r"(\b[-']\b)|[\W_]")
test_str = "No - d'Ante compactified on a calabi-yau threefold @ ,."
result = p.sub(lambda m: (m.group(1) if m.group(1) else " "), test_str)
print(result)
The [`(\b[-']\b)|[\W_]` regex](https://regex101.com/r/bS5cY6/1) matches and
captures intraword `-` and `'` and we restore them in the `re.sub` by checking
if the capture group matched and re-inserting it with `m.group(1)`, and the
rest (all non-word characters and underscores) are just replaced with a space.
If you want to replace sequences of non-word characters with a single space, use
p = re.compile(r"(\b[-']\b)|[\W_]+")
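Applied to the original question's string, this variant produces the expected result:

```python
import re

p = re.compile(r"(\b[-']\b)|[\W_]+")
s = "compactified on a calabi-yau threefold @ ,."
# Keep captured intra-word dashes/apostrophes; collapse everything else
out = p.sub(lambda m: m.group(1) if m.group(1) else " ", s).strip()
print(out)  # compactified on a calabi-yau threefold
```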
|
Convert json to python list
Question: I'm new to JSON and am trying to save the results of the following JSON response
into lists, in order to compute some stats. Specifically, I'd like to save the
'results'.
{u'draws':
{u'draw':
[{u'drawTime': u'22-02-2016T09:00:00', u'drawNo': 542977, u'results': [72, 47, 10, 48, 65, 54, 55, 12, 73, 1, 2, 26, 13, 5, 46, 30, 62, 19, 66, 14]},
{u'drawTime': u'22-02-2016T09:05:00', u'drawNo': 542978, u'results': [71, 24, 4, 72, 14, 7, 63, 70, 3, 10, 42, 22, 15, 19, 79, 47, 1, 43, 55, 77]}, {u'drawTime': u'22-02-2016T09:10:00', u'drawNo': 542979, u'results': [24, 80, 45, 73, 72, 1, 41, 23, 56, 59, 31, 55, 29, 49, 51, 63, 40, 9, 21, 79]}
and it continues like that. Any advice would be very much appreciated.
Answer: Python has nice modules to parse JSON. Take a look here for quick examples:
<http://docs.python-guide.org/en/latest/scenarios/json/>
I'm not really sure what you want to do from your question though. What you
show in your snippet _is_ a Python data structure already... so you could just
access the list in question directly, e.g.:
results_list = your_data['draws']['draw'][0]['results']
and/or make a copy of it if you need:
new_list = old_list[:]
If you get JSON from somewhere as a string, you can just parse it and do the
same:
import json
your_data = json.loads(json_string)
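If you want every draw's results rather than just the first, a list comprehension over the same structure works (sample data abbreviated from the question):

```python
# Abbreviated version of the structure shown in the question.
data = {'draws': {'draw': [
    {'drawTime': '22-02-2016T09:00:00', 'drawNo': 542977, 'results': [72, 47, 10]},
    {'drawTime': '22-02-2016T09:05:00', 'drawNo': 542978, 'results': [71, 24, 4]},
]}}

# One 'results' list per draw, ready for stats.
all_results = [draw['results'] for draw in data['draws']['draw']]
print(all_results)  # [[72, 47, 10], [71, 24, 4]]
```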
|
subprocess error in python
Question: I am trying to run a praat file from python itself with subprocess but
python(subprocess) can't seem to find the directory. I don't understand why
since when I run the command in the terminal, it works perfectly fine. Can anyone
guide me to where I am going wrong? This is the subprocess code:
import shlex
import subprocess as sb
cmd_line = raw_input()
args = shlex.split(cmd_line)
p = sb.Popen(args)
When I run it with the input
Praat /Users/admirmonteiro/tmp/tmp.praat
this is the error that I get :
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Users/admirmonteiro/anaconda/lib/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
File "/Users/admirmonteiro/anaconda/lib/python2.7/subprocess.py", line 1335, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
As mentioned, I run the commands and they run fine in the terminal. I have
also tried to run subprocess.call but the same error occurs. I have also tried
with with shell=True as an argument but that also outputs the same error.
Please Help !
Answer: Type the following in the shell to get the full path of the
`Praat`application.
whereis Praat
Then use the full path in your Python program.
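A sketch of doing the lookup from Python itself rather than pasting the path by hand; the `which` helper (with a Python 2 fallback) is an assumption, and the Praat paths are the asker's:

```python
import subprocess as sb

try:
    from shutil import which                      # Python 3.3+
except ImportError:
    from distutils.spawn import find_executable as which  # Python 2 fallback

praat_path = which("Praat")  # e.g. "/usr/local/bin/Praat", or None if absent
if praat_path is None:
    print("Praat is not on the PATH seen by this process")
else:
    # Pass an argument list; no shell parsing needed.
    p = sb.Popen([praat_path, "/Users/admirmonteiro/tmp/tmp.praat"])
    p.wait()
```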
|
How to get DLLs, function names, and addresses from the Import Table with PortEx?
Question: I'm using the [PortEx Java library for PE32
parsing](https://github.com/katjahahn/PortEx "PortEx") with the Capstone
disassembler, and I'd like to be able to have the disassembly replace the
appropriate `call 0x404040` lines to be something like `call SomeDLL:TheFunc`.
To do this, I need the imports from the Import Table. I am able to get the DLL
name and function, but the address reported by PortEx is way off, ex: 0x32E8
vs. 0x402004 as reported by the pefile Python module. I have tried looking at
some of the offsets as part of the `ImportSection`, `ImportDLL`, and
`NameImport` classes in PortEx, but it doesn't get close. Any thoughts?
import com.github.katjahahn.parser.*;
import java.io.IOException;
import java.util.*;
public class ImportsExtractor {
public static Map<Integer,String> extract(PEData exe) throws IOException {
Map<Integer,String> importList = new HashMap<>();
SectionLoader loader = new SectionLoader(exe);
ImportSection idata = loader.loadImportSection();
List<ImportDLL> imports = idata.getImports();
for(ImportDLL dll : imports) {
for(NameImport nameImport : dll.getNameImports()) {
long addr = nameImport.getRVA(); // Some offset needed?
System.out.format("0x%X\t%s:%s%n", addr, dll.getName(), nameImport.getName());
importList.put((int)addr, dll.getName() + ":" + nameImport.getName());
}
}
return importList;
}
}
I'd like to be able to grab the address from a line of assembly, see if it's
in `importList`, and if so, replace the address with the value in
`importList`.
Answer: From the author:
public static Map<Integer,String> extract(PEData exe) throws IOException {
Map<Integer,String> importList = new HashMap<>();
SectionLoader loader = new SectionLoader(exe);
ImportSection idata = loader.loadImportSection();
List<ImportDLL> imports = idata.getImports();
for(ImportDLL dll : imports) {
for(NameImport nameImport : dll.getNameImports()) {
long iat = nameImport
.getDirEntryValue(DirectoryEntryKey.I_ADDR_TABLE_RVA);
long ilt = nameImport
.getDirEntryValue(DirectoryEntryKey.I_LOOKUP_TABLE_RVA);
long imageBase = exe.getOptionalHeader().get(
WindowsEntryKey.IMAGE_BASE);
long addr = nameImport.getRVA() + imageBase;
if(ilt != 0) addr = addr - ilt + iat;
System.out.format("0x%X\t%s:%s%n", addr, dll.getName(), nameImport.getName());
importList.put((int)addr, dll.getName() + ":" + nameImport.getName());
}
}
return importList;
}
|
Django StaticCompilationError, extend ' .clearfix' has no matches
Question: I'm working on a Django application and all of a sudden I'm getting the error
`extend ' .clearfix' has no matches` and this occurs at `Exception Location:
/Library/Python/2.7/site-packages/static_precompiler/compilers/less.py in
compile_file, line 41`, when I try to connect to the server.
I have never had this issue before, and I even added in `@import (reference)
"utilities.less";` to the `bootstrap.less` file ("utilities.less" contains the
definition of .clearfix), yet I still get the issue.
Any insight would be greatly appreciated.
For reference, my lessc version is `lessc 2.6.0`
Answer: I solved my issue by downgrading to less 2.1.
I did so by running the following commands:
sudo npm uninstall -g less
sudo npm install -g [email protected]
It seems that part of our codebase isn't compatible with less 2.6!
|
When does the order of multiplication matter in python?
Question: I wrote a program using dynamic programming and it took me quite a long time
to find that something is wrong with the different versions of my code. It is
as follows:
#the old version
if probs[i][k]*probs[k+1][j]*prob > tmp_prob:
tmp_prob = prob*probs[i][k]*probs[k+1][j]
#the new version
res = probs[i][k]*probs[k+1][j]*prob
if res > tmp_prob:
tmp_prob = res
I thought the results should be the same but, in fact, they are not. I also tried
if probs[i][k]*probs[k+1][j]*prob > tmp_prob:
tmp_prob = probs[i][k]*probs[k+1][j]*prob
and the result is the same as the new version. So I know the problem is that
`probs[i][k]*probs[k+1][j]*prob` is not always equal to
`prob*probs[i][k]*probs[k+1][j]`.
But when are they not equal? I think it is when there may be overflow, i.e.,
when `inf` occurs. But since `probs[i][k]`, `probs[k+1][j]`, and `prob` are all
probabilities, they are all less than 1, so I don't think this is the case.
Are there any other possibilities?
Answer: With floating point values, you may see slight differences by changing the
order of the multiplications, although I would normally expect the difference
to be very small (relative to the magnitude of the value), unless an
intermediate result overflows or underflows, but I would expect those cases to
be more obvious.
Here's a simple example:
>>> import math
>>> t = math.sqrt(3.0)
>>> 3*t*t
9.0
>>> t*t*3
8.999999999999998
>>> 3*t*t - t*t*3
1.7763568394002505e-15
>>>
Mathematically, both products should be `9.0` and their difference should be
`0.0`, but due to floating point roundoff, that is not the case. The actual
results may differ from one platform to another, but this is what I get on my
computer, and it illustrates one of the difficulties with floating point
arithmetic.
|
How to accept twitter stream using tweepy in streamparse spout and pass the tweets to bolt?
Question: Recently, I started working on storm and being more comfortable with python, I
decided to use streamparse for working with storm. I am planning to accept a
twitter stream in spout and perform some computations in bolt. But I cannot
figure out how I would code that in spout. I have gone through various
streamparse tutorials but they all show spout emitting tuples from static list
and do not have stream like twitter streaming api provides. This is my code
for storm:
import itertools

from streamparse.spout import Spout

class WordSpout(Spout):
def initialize(self, stormconf, context):
self.words = itertools.cycle(['dog', 'cat','zebra', 'elephant'])
def next_tuple(self):
word = next(self.words)
self.emit([word])
This is my code for tweepy:
from tweepy import OAuthHandler, Stream
from tweepy.streaming import StreamListener

class listener(StreamListener):
def on_status(self,status):
print(status.text)
print "--------------------------------"
return(True)
def on_error(self, status):
print "error"
def on_connect(self):
print "CONNECTED"
auth = OAuthHandler(ckey, csecret)
auth.set_access_token(atoken, asecret)
twitterStream = Stream(auth, listener())
twitterStream.filter(track=["california"])
How should I integrate both these codes?
Answer: To do this, I set up a Kafka queue into which the tweepy listener wrote
status.text using
[pykafka](https://github.com/Parsely/pykafka). The spout then constantly reads
data from the queue to perform the analytics. My code looks a bit like this:
listener.py:
class MyStreamListener(tweepy.StreamListener):
def on_status(self, status):
# print(status.text)
client = KafkaClient(hosts='127.0.0.1:9092')
topic = client.topics[str('tweets')]
with topic.get_producer(delivery_reports=False) as producer:
# print status.text
sentence = status.text
for word in sentence.split(" "):
if word is None:
continue
try:
word = str(word)
producer.produce(word)
except:
continue
def on_error(self, status_code):
if status_code == 420: # exceed rate limit
return False
else:
print("Failing with status code " + str(status_code))
return False
auth = tweepy.OAuthHandler(API_KEY, API_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)
myStreamListener = MyStreamListener()
myStream = tweepy.Stream(auth=api.auth, listener=myStreamListener)
myStream.filter(track=['is'])
Spout File:
from streamparse.spout import Spout
from pykafka import KafkaClient
class TweetSpout(Spout):
words = []
def initialize(self, stormconf, context):
client = KafkaClient(hosts='127.0.0.1:9092')
self.topic = client.topics[str('tweets')]
def next_tuple(self):
consumer = self.topic.get_simple_consumer()
for message in consumer:
if message is not None:
self.emit([message.value])
else:
self.emit()
|
How to crawl pagination pages? There is no url change when I Click next page
Question: I use Python 3.5 and Windows 10.
When I crawl pages, I usually iterate over changing URLs with urlopen and a 'for'
loop, like the code below.
from bs4 import BeautifulSoup
import urllib
f = open('Slave.txt','w')
for i in range(1,42):
html = urllib.urlopen('http://xroads.virginia.edu/~hyper/JACOBS/hjch'+str(i)+'.htm')
soup = BeautifulSoup(html,"lxml")
text = soup.getText()
f.write(text.encode("utf-8"))
f.close()
But I am stuck because the URL does not change when I click the next pages, even
though the web contents change, as in the picture. [enter image description
here](http://i.stack.imgur.com/wkVGV.png)
There is no signal in the URL that I can use to detect the website's change.
<http://eungdapso.seoul.go.kr/Shr/Shr01/Shr01_lis.jsp>
The web site is here. The clue I found was in the pagination class. I found some
links that go to the next pages, but I don't know how I can use these links with
BeautifulSoup. I think commonPagingPost is a function defined by the developer.
<span class="number"><a href="javascript:;"
class="on">1</a>
<a href="javascript:commonPagingPost('2','10','Shr01_lis.jsp');">2</a>
<a href="javascript:commonPagingPost('3','10','Shr01_lis.jsp');">3</a>
<a href="javascript:commonPagingPost('4','10','Shr01_lis.jsp');">4</a>
<a href="javascript:commonPagingPost('5','10','Shr01_lis.jsp');">5</a></span>
How can I open or crawl all these pages using BeautifulSoup4? I just get the
first page when I use urlopen.
Answer: You won't be able to do this with beautifulsoup alone as it doesn't support
ajax. You'll need to use something like
[selenium](http://www.seleniumhq.org/),
[ghost.py](http://jeanphix.me/Ghost.py/) or other web browser with javascript
support.
Using these libraries you'll be able to simulate a click on these links and
then grab the newly loaded content.
|
Getting python2.7 path in django app for subprocess call
Question: I am using Linux. I am trying to run a daemon from a function in Django views. I
want to run a shell command from a view in a Django app. I am using Python 2.7.
The command needs the python2.7 path.
My app will be plug and play, so the system it is installed on may have Python
in a different location. I therefore want to make the Python path dynamic.
Command will be
usr/bin/python2.7 filename.py --start
On my system path is usr/bin/python2.7.
I found the following using os.
In the Python shell I tried the following code and I got what I want:
import os
getPyPath = os.popen('which python2.7', 'r')
pyPath = getPyPath.read()
pyPath.rstrip()
I got the expected output as below:
> usr/bin/python2.7
So now, how do I get this code into a Django app view function and run it so
that I can get the Python path in a variable?
I found Python's subprocess module, with which we can run a command through the
shell using shell=True.
So can I get the above code running in a Django view function using a
subprocess call?
If not, what are the other ways to get the Python path into a variable in a
Django view function?
Thanks in advance.
Answer: To view the full path to the current Python interpreter, use `sys.executable`
import sys
print(sys.executable)
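Building on that, the daemon command can be assembled without hard-coding any interpreter location - a sketch, keeping the asker's `filename.py` as a placeholder:

```python
import sys

# sys.executable is the absolute path of the interpreter running this code,
# so the same view works no matter where Python is installed.
cmd = [sys.executable, "filename.py", "--start"]
print(cmd)
# A view could hand this list to subprocess.Popen(cmd) to start the daemon.
```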
|
Convert images drawn by turtle to PNG in Python
Question: I'm making an abstract art template generator in Python that takes inputs of
minimum radius, maximum radius, and number of circles. It draws random circles
in random places, also meeting the user's specifications. I want to convert
the Turtle graphics into a PNG so that the user can then edit the template
however he/she wants to, but I don't know how to proceed. Here's my code:
import random
import time
import turtle
print("Abstract Art Template Generator")
print()
print("This program will generate randomly placed and sized circles on a blank screen.")
num = int(input("Please specify how many circles you would like to be drawn: "))
radiusMin = int(input("Please specify the minimum radius you would like to have: "))
radiusMax = int(input("Please specify the maximum radius you would like to have: "))
screenholder = input("Press ENTER when you are ready to see your circles drawn: ")
t = turtle.Pen()
win = turtle.Screen()
def mycircle():
x = random.randint(radiusMin,radiusMax)
t.circle(x)
t.up()
y = random.randint(0,360)
t.seth(y)
if t.xcor() < -300 or t.xcor() > 300:
t.goto(0, 0)
elif t.ycor() < -300 or t.ycor() > 300:
t.goto(0, 0)
z = random.randint(0,100)
t.forward(z)
t.down()
for i in range(0, num):
mycircle()
turtle.done()
Answer: You can use `turtle.getcanvas()` to get the underlying Tkinter canvas. Then save
it as a PostScript file.
...
cv = turtle.getcanvas()
cv.postscript(file="file_name.ps", colormode='color')
turtle.done()
Then you can convert it to PNG (for example with Ghostscript or ImageMagick). Or
use PIL with Tkinter - more about this method
[here](http://stackoverflow.com/questions/9886274/how-can-i-convert-canvas-content-to-an-image)
|
Getting SyntaxError while using pdfcrowd with python
Question: I am trying to learn pdfcrowd with Python 3.4, so I checked out their website
and copied the following example:
import pdfcrowd
try:
# create an API client instance
client = pdfcrowd.Client("username", "apikey")
# convert a web page and store the generated PDF into a pdf variable
pdf = client.convertURI('http://www.google.com')
# convert an HTML string and save the result to a file
output_file = open('html.pdf', 'wb')
html="<head></head><body>My HTML Layout</body>"
client.convertHtml(html, output_file)
output_file.close()
# convert an HTML file
output_file = open('file.pdf', 'wb')
client.convertFile('/path/to/MyLayout.html', output_file)
output_file.close()
except pdfcrowd.Error, why:
print('Failed: {}'.format(why))
When I try to run it, I get the following error:
File "pf.py" line 21
except pdfcrowd.Error, why:
^
SyntaxError: invalid syntax
Can anyone please tell me how to fix this?
Answer: The `except pdfcrowd.Error, why:` form assigns the caught error to the variable
`why`. That is valid syntax in Python 2, but not in Python 3. Use
`except pdfcrowd.Error as why:` instead; that form is valid syntax in Python 2
_and_ Python 3.
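A minimal demonstration of the portable form:

```python
# "as" binds the caught exception object; this parses on Python 2 and 3 alike.
try:
    raise ValueError("demo failure")
except ValueError as why:
    print('Failed: {}'.format(why))  # Failed: demo failure
```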
|
Python Flask cannot get element from form
Question: I'm having trouble getting anything from the HTML form shown below.
I always get "ValueError: View function did not return a response"
Can somebody help me out here please? I have tried every variation of
request.get that I can find on the web. Also, if I specify that my form should
use POST, it uses GET anyway - does anybody know why this is?
I'm new to Flask, so forgive my ignorance!
Thanks in advance.
The python file (routes.py)
from flask import Flask, render_template, request
import os
app = Flask(__name__)
musicpath = os.listdir(r"C:\Users\Oscar\Music\iTunes\iTunes Media\Music")
lsize = str(len(musicpath))
looper = len(musicpath)
@app.route('/')
def home():
return render_template('home.html', lsize=20, looper=looper, musicpath=musicpath)
@app.route('/pop', methods=['POST', 'GET'])
def pop():
if request.method == "GET":
text = request.args.get('som')
return text
#Have tried every variation of request.get
@app.route('/about')
def about():
name = "Hello!"
return render_template('about.html', name=name)
if __name__ == '__main__':
app.run(debug=True)
The html file (home.html)
{% extends "layout.html" %}
{% block content %}
<div class="jumbo">
<h2>A Music app!<h2>
</div>
<div>
{% if lsize %}
<form action="/pop">
<select id="som" size="20">
{% for i in range(looper):%}
<option value="{{i}}">{{ musicpath[i] }}</option>
{% endfor %}
</select>
</form>
{% endif %}
</div>
<a href="{{ url_for('pop') }}">Select,</a>
{% endblock %}
Answer: You don't have a `name` attribute on your `select` element. That is the
attribute that browsers use to send form data; without it no data will be sent.
Change it to `<select id="som" name="som" size="20">` (and add `method="post"`
to the form if you want a POST request).
Note also that your `pop` handler does not return anything if the method is POST,
even though you explicitly say you accept that method - a view that returns
`None` is exactly what raises "View function did not return a response".
|
Python Object Property Sharing
Question: I have a class that keeps track of several other classes. Each of these other
classes all need to access the value of a particular variable, and any one of
these other classes must also be able to modify this particular variable such
that all other classes can see the changed variable.
I tried to accomplish this using properties. An example is as follows:
class A:
def __init__(self, state):
self._b_obj = B(self)
self._state = state
@property
def state(self):
return self._state
@state.setter
def state(self,val):
self._state = val
@property
def b_obj(self):
return self._b_obj
@b_obj.setter
def b_obj(self,val):
self._b_obj = val
class B:
def __init__(self, a_obj):
self.a_obj = a_obj
@property
def state(self):
return self.a_obj.state
@state.setter
def state(self,val):
self.a_obj.state = val
I want it to work as follows:
>>> objA = A(4)
>>> objB = objA.b_obj
>>> print objA.state
4
>>> print objB.state
4
>>> objA.state = 10
>>> print objA.state
10
>>> print objB.state
10
>>> objB.state = 1
>>> print objA.state
1
>>> print objB.state
1
Everything works as I want it to except for the last 3 commands. They give:
>>> objB.state = 1
>>> print objA.state
10
>>> print objB.state
1
Why do the last 3 commands return these values? How can I fix this so that
they return the desired values?
Thanks
Answer: So it seems all you needed to do is have your classes inherit from `object`
:-) That gives you [new-style
classes](https://docs.python.org/2/reference/datamodel.html#newstyle) and [all
their benefits](https://www.python.org/doc/newstyle/).
class A(object):
... # rest is as per your code
class B(object):
... # rest is as per your code
>>> objA = A(4)
>>> objB = objA.b_obj
>>> print objA.state
4
>>> print objB.state
4
>>> objA.state = 10
>>> print objA.state
10
>>> print objB.state
10
>>> objB.state = 1
>>> print objA.state
1
>>> print objB.state
1
The specific reasons for why this would work only with new-style classes,
[from here](https://docs.python.org/2/howto/descriptor.html#invoking-
descriptors):
> For objects, the machinery is in `object.__getattribute__()` which
> transforms `b.x` into `type(b).__dict__['x'].__get__(b, type(b))`.
>
> For classes, the machinery is in `type.__getattribute__()` which transforms
> `B.x` into `B.__dict__['x'].__get__(None, B)`.
>
> (from "important points to remember")
>
> * `__getattribute__()` is only available with new style classes and
> objects
>
> * `object.__getattribute__()` and `type.__getattribute__()` make different
> calls to `__get__()`.
>
>
|
Passing variables in python from radio buttons
Question: I want to set values depending on the selected radio button and to use those
values in another function. Whatever I try, I always get the same error:
# NameError: global name 'tX' is not defined #
import maya.cmds as cmds
from functools import partial
winID='MSDKID'
def init(*args):
print tX
print tY
print tZ
print rX
print rY
print rZ
return
def prozor():
if cmds.window(winID, exists = True):
cmds.deleteUI(winID);
cmds.window()
cmds.columnLayout( adjustableColumn=True, rowSpacing=10 )
cmds.button(label = "Init")
cmds.button(label = "MirrorSDK",command=init)
cmds.setParent( '..' )
cmds.setParent( '..' )
cmds.frameLayout( label='Position' )
cmds.columnLayout()
collection2 = cmds.radioCollection()
RButton0 = cmds.radioButton( label='Behavior' )
RButton1 = cmds.radioButton( label='Orientation' )
cmds.button(l='Apply', command = partial(script,RButton0,RButton1,))
cmds.setParent( '..' )
cmds.setParent( '..' )
print script(RButton0,RButton1)
cmds.showWindow()
def script(RButton0,RButton1,*_cb_val):
X = 0
rb0 = cmds.radioButton(RButton0, q = True, sl = True)
rb1 = cmds.radioButton(RButton1,q = True, sl = True)
if (rb0 == True):
tX = -1
tY = -1
tZ = -1
rX = 1
rY = 1
rZ = 1
if (rb1 == True):
tX = -1
tY = 1
tZ = 1
rX = 1
rY = -1
rZ = -1
return tX,tY,tZ,rX,rY,rZ
prozor()
Answer: The variables you are defining in `script()` are local to that function. The
other functions don't see them.
If you need multiple UI elements to share data, you'll probably need to create
a class to let them share variables. Some reference
[here](http://theodox.github.io/2014/maya_callbacks_cheat_sheet) and
[here](http://tech-artists.org/forum/showthread.php?3292-maya-python-UI-
acessing-controls-from-external-functions)
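As a minimal, Maya-independent sketch of that idea: the mirror values live on one object that every callback reads and writes, instead of in locals inside `script()`. The class and method names here are illustrative, not Maya API:

```python
class MirrorSDKState(object):
    """Shared state so the Apply and MirrorSDK callbacks see the same values."""

    def __init__(self):
        self.values = (0, 0, 0, 0, 0, 0)  # tX, tY, tZ, rX, rY, rZ

    def apply(self, behavior_selected):
        # Same branch logic as script(), but stored on self instead of locals.
        if behavior_selected:
            self.values = (-1, -1, -1, 1, 1, 1)
        else:
            self.values = (-1, 1, 1, 1, -1, -1)

    def init(self):
        # The MirrorSDK button callback reads the shared values here.
        return self.values

state = MirrorSDKState()
state.apply(True)          # 'Behavior' radio button selected
print(state.init())        # (-1, -1, -1, 1, 1, 1)
```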
|
How do I reload a python submodule?
Question: I'm loading a submodule in python (2.7.10) with `from app import sub` where
`sub` has a `config` variable. So I can run `print sub.config` and see a bunch
of config variables. Not super complex.
If I change the config variables in the script, there must be a way to reload
the module and see the change. I found a few instructions that indicated that
`reload(app.sub)` would work, but I get an error:
NameError: name 'app' is not defined
And if I do just `reload(sub)` the error is:
TypeError: reload() argument must be module
If I do `import app` I can view the config with `print app.sub.config` and
reload with `reload(app)`
I found instructions to automate reloading: [Reloading submodules in
IPython](http://stackoverflow.com/questions/5364050/reloading-submodules-in-
ipython)
but is there no way to reload a submodule manually?
Answer: When you `from foo import bar`, you now have a module object named `bar` in
your namespace, so you can
from foo import bar
bar.do_the_thing() # or whatever
reload(bar)
If you want some more details on how different import forms work, I personally
found [this answer](http://stackoverflow.com/a/2725668/939586) particularly
helpful, myself.
|
python lambda can't detect packaged modules
Question: I'm trying to create a lambda function by uploading a zip file with a single
.py file at the root and 2 folders which contain the requests lib downloaded
via pip.
Running the code local works file. When I zip and upload the code I very often
get this error:
`Unable to import module 'main': No module named requests`
Sometimes I do manage to fix this, but it's inconsistent and I'm not sure how
I'm doing it. I'm using the following command:
in root dir `zip -r upload.zip *`
This is how I'm importing requests:
`import requests`
FYI:
1. I have attempted a number of different import methods using the exact path,
which have failed, so I wonder if that's the problem?
2. Every time this has failed and I've been able to make it work in Lambda, it
has involved a lot of fiddling with the zip command, as I thought the problem
was that I was zipping the contents incorrectly and hiding them behind an extra
parent folder.
Looking forward to seeing the silly mistake I've been making!
Adding code snippet:
import json ##Built In
import requests ##Packaged with
import sys ##Built In
def lambda_function(event, context):
alias = event['alias']
message = event['message']
input_type = event['input_type']
if input_type == "username":
username = alias
elif input_type == "email":
username = alias.split('@',1)[0]
elif input_type is None:
print "input_type 'username' or 'email' required. Closing..."
sys.exit()
payload = {
"text": message,
"channel": "@" + username,
"icon_emoji": "<an emoji>",
"username": "<an alias>"
}
r = requests.post("<slackurl>",json=payload)
print(r.status_code, r.reason)
Answer: I got some help outside the stackoverflow loop and this seems to consistently
work.
`zip -r upload.zip main.py requests requests-2.9.1.dist-info`
|
PyMongo Collection Object Not Callable
Question: I'm trying to create a reddit scraper that takes the first 100 pages from the
reddit home page and stores them into MongoDB. This is my first post on
stackoverflow, so I apologize if my post is not formatted correctly. I keep
getting the error:
TypeError: 'Collection' object is not callable. If you meant to call the 'insert_one' method on a 'Collection' object it is failing because no such method exists.
Here is my code
import os
import praw
import pymongo
import sys
import time
def main():
fpid = os.fork()
if fpid!=0:
# Running as daemon now. PID is fpid
sys.exit(0)
user_agent = ("Python Scraper by djames v0.1")
r = praw.Reddit(user_agent = user_agent) #Reddit API requires user agent
conn=pymongo.MongoClient()
db = conn.reddit
threads = db.threads
while 1==1: #Runs in an infinite loop, loop repeats every 30 seconds
frontpage_pull = r.get_front_page(limit=100) #get first 100 posts from reddit.com
for posts in frontpage_pull: #repeats for each of the 100 posts pulled
data = {}
data['title'] = posts.title
data['text'] = posts.selftext
threads.insert_one(data)
time.sleep(30)
if __name__ == "__main__":
main()
Answer: `insert_one()` was not added to pymongo until version 3.0. If you try calling
it on a version before that, you will get the error you are seeing.
To check you version of pymongo, open up a python interpreter and enter:
import pymongo
pymongo.version
The legacy way of inserting documents with pymongo is just with
`Collection.insert()`. So in your case you can change your insert line to:
threads.insert(data)
For more info, see [pymongo 2.8
documentation](https://api.mongodb.org/python/2.8/api/pymongo/collection.html?highlight=insert#pymongo.collection.Collection.insert)
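If the script has to run against either driver generation, a small shim can pick whichever method exists at runtime (a sketch; `insert_doc` is not part of the pymongo API):

```python
def insert_doc(collection, data):
    """Insert one document on pymongo 3.x or the legacy 2.x driver."""
    if hasattr(collection, 'insert_one'):   # pymongo >= 3.0
        return collection.insert_one(data)
    return collection.insert(data)          # legacy pymongo < 3.0
```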
|
Reading in csv file as dataframe from hdfs
Question: I'm using pydoop to read in a file from hdfs, and when I use:
import pydoop.hdfs as hd
with hd.open("/home/file.csv") as f:
print f.read()
It shows me the file in stdout.
Is there any way for me to read in this file as dataframe? I've tried using
pandas' read_csv("/home/file.csv"), but it tells me that the file cannot be
found. The exact code and error is:
>>> import pandas as pd
>>> pd.read_csv("/home/file.csv")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.7/site-packages/pandas/io/parsers.py", line 498, in parser_f
return _read(filepath_or_buffer, kwds)
File "/usr/lib64/python2.7/site-packages/pandas/io/parsers.py", line 275, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "/usr/lib64/python2.7/site-packages/pandas/io/parsers.py", line 590, in __init__
self._make_engine(self.engine)
File "/usr/lib64/python2.7/site-packages/pandas/io/parsers.py", line 731, in _make_engine
self._engine = CParserWrapper(self.f, **self.options)
File "/usr/lib64/python2.7/site-packages/pandas/io/parsers.py", line 1103, in __init__
self._reader = _parser.TextReader(src, **kwds)
File "pandas/parser.pyx", line 353, in pandas.parser.TextReader.__cinit__ (pandas/parser.c:3246)
File "pandas/parser.pyx", line 591, in pandas.parser.TextReader._setup_parser_source (pandas/parser.c:6111)
IOError: File /home/file.csv does not exist
Answer: I know next to nothing about `hdfs`, but I wonder if the following might work:
with hd.open("/home/file.csv") as f:
df = pd.read_csv(f)
I assume `read_csv` works with a file handle, or in fact any iterable that
will feed it lines. I know the `numpy` csv readers do.
`pd.read_csv("/home/file.csv")` would work if the regular Python file `open`
works - i.e. if it reads the file as a regular local file.
with open("/home/file.csv") as f:
print f.read()
But evidently `hd.open` is using some other location or protocol, so the file
is not local. If my suggestion doesn't work, then you (or we) need to dig more
into the `hdfs` documentation.
|
Python - Store variables in a list that save each time program restarts
Question: I am stuck on a seemingly simple task with a Python Twitch IRC Bot I'm
developing for my channel. I have a points system all figured out, and I
thought it was working, but I found out that every time I restart the program,
the list that contains balances of users resets.
This is because I declare the empty list at the beginning of the script each
time the program is run. If a user chats and they aren't in the list of
welcomed users, then the bot will welcome them and add their name to a list,
and their balance to a corresponding list.
Is there some way I can work around this resetting problem and make it so that
it won't reset the list every time the program restarts? Thanks in advance,
and here's my code:
import threading

welcomed = []
balances = []
def givePoints():
global balances
threading.Timer(60.0, givePoints).start()
i = 0
for users in balances:
balances[i] += 1
i += 1
def welcomeUser(user):
global welcomed
global balances
sendMessage(s, "Welcome, " + user + "!")
welcomed.extend([user])
balances.extend([0])
givePoints()
#other code here...
if '' in message:
if user not in welcomed:
welcomeUser(user)
break
(I had attempted to use global variables to overcome this problem, however
they didn't work, although I'm guessing I didn't use them right :P)
Answer: Try using the
[`json`](https://docs.python.org/3/library/json.html?highlight=json#module-
json) module to dump and load your list. You can catch file open problems when
loading the list, and use that to initialize an empty list.
import json
def loadlist(path):
try:
with open(path, 'r') as listfile:
saved_list = json.load(listfile)
except Exception:
saved_list = []
return saved_list
def savelist(path, _list):
    try:
        with open(path, 'w') as listfile:
            json.dump(_list, listfile)  # dump needs the target file object too
    except Exception:
        print("Oh, no! List wasn't saved! It'll be empty tomorrow...")
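A self-contained round trip showing what those helpers boil down to (a temp file is used here so nothing in your project is touched):

```python
import json
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "balances_demo.json")

balances = [10, 3, 7]
with open(path, "w") as f:
    json.dump(balances, f)   # json.dump takes the file object as well

with open(path) as f:
    restored = json.load(f)

print(restored)  # [10, 3, 7]
```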
|
How to print the complete json array using python?
Question: I have a JSON array and need to print only the id using Python. How do I do it?
This is my JSON array:
{
"messages":
[
{
"id": "1531cf7d9e03e527",
"threadId": "1531cf7d9e03e527"
},
{
"id": "1531cdafbcb4a0e6",
"threadId": "1531bfccfceb1ed7"
}
],
"nextPageToken": "01647645424797380808",
"resultSizeEstimate": 103
}
**EDIT:**
I am actually writing Python code to get messages from the Gmail API. This
program gives the message ids and thread ids of a Gmail account's messages in
JSON format. I only need the message id and not the thread id.
from __future__ import print_function
import httplib2
import os
from apiclient import discovery
import oauth2client
from oauth2client import client
from oauth2client import tools
import json
try:
import argparse
flags = argparse.ArgumentParser(parents=[tools.argparser]).parse_args()
except ImportError:
flags = None
SCOPES = 'https://www.googleapis.com/auth/gmail.readonly'
CLIENT_SECRET_FILE = 'client_server.json'
APPLICATION_NAME = 'Gmail API Python Quickstart'
def get_credentials():
home_dir = os.path.expanduser('~')
credential_dir = os.path.join(home_dir, '.credentials')
if not os.path.exists(credential_dir):
os.makedirs(credential_dir)
credential_path = os.path.join(credential_dir,
'gmail-python-quickstart.json')
store = oauth2client.file.Storage(credential_path)
credentials = store.get()
if not credentials or credentials.invalid:
flow = client.flow_from_clientsecrets(CLIENT_SECRET_FILE, SCOPES)
flow.user_agent = APPLICATION_NAME
if flags:
credentials = tools.run_flow(flow, store, flags)
else:
credentials = tools.run(flow, store)
print('Storing credentials to ' + credential_path)
return credentials
def main():
credentials = get_credentials()
http = credentials.authorize(httplib2.Http())
service = discovery.build('gmail', 'v1', http=http)
message = service.users().messages().get(userId='me',id='').execute()
response = service.users().messages().list(userId='me',q='').execute()
messages = []
if 'messages' in response:
messages.extend(response['messages'])
print(messages)
result = loads(messages)
ids = [message['id'] for message in result['messages']]
print (ids)
while 'nextPageToken' in response:
page_token = response['nextPageToken']
response = service.users().messages().list(userId='me', q='',pageToken=page_token).execute()
messages.extend(response['messages'])
print(message['id'])
print (message['snippet'])
if __name__ == '__main__':
main()
Answer:
import json
dict_result = json.loads(your_json)
ids = [message['id'] for message in dict_result['messages']]
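Note that in the Gmail API code above, `response` is already a Python dict (the client library decodes the JSON for you), so no `json.loads` call is needed; the list comprehension works on it directly:

```python
response = {
    "messages": [
        {"id": "1531cf7d9e03e527", "threadId": "1531cf7d9e03e527"},
        {"id": "1531cdafbcb4a0e6", "threadId": "1531bfccfceb1ed7"},
    ],
    "nextPageToken": "01647645424797380808",
    "resultSizeEstimate": 103,
}

# pull out only the message ids, ignoring the thread ids
ids = [message["id"] for message in response["messages"]]
print(ids)  # ['1531cf7d9e03e527', '1531cdafbcb4a0e6']
```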
|
how to upload multiple files using flask in python
Question: Here is my code for multiple files upload:
**HTML CODE:**
Browse <input type="file" name="pro_attachment1" id="pro_attachment1" multiple>
**PYTHON CODE:**
pro_attachment = request.files.getlist('pro_attachment1')
for upload in pro_attachment:
filename = upload.filename.rsplit("/")[0]
destination = os.path.join(application.config['UPLOAD_FOLDER'], filename)
print "Accept incoming file:", filename
print "Save it to:", destination
upload.save(destination)
But it uploads a single file instead of multiple files.
Answer: You can use [Flask-Uploads](https://pythonhosted.org/Flask-Uploads/).
Here is a image(multiple) upload demo.
View:
from flask_uploads import UploadSet, configure_uploads, IMAGES
from flask import Flask, request, render_template, flash
app = Flask(__name__)
app.config['UPLOADED_PHOTOS_DEST'] = 'static/img'  # not '\static\img': backslashes form escape sequences
photos = UploadSet('photos', IMAGES)
configure_uploads(app, (photos,))
@app.route('/upload', methods=['GET', 'POST'])
def upload():
form = UploadForm()
if form.validate_on_submit():
if request.method == 'POST' and 'photo' in request.files:
for img in request.files.getlist('photo'):
photos.save(img)
flash("Photo saved.")
return render_template('upload.html', form=form)
Form:
from flask_wtf import Form
from wtforms import SubmitField
from flask_wtf.file import FileField, FileAllowed, FileRequired
class UploadForm(Form):
photo = FileField('Image', validators=[
FileRequired(),
FileAllowed(photos, 'Image only!')
])
submit = SubmitField('Submit')
Template(use [Flask-Bootstrap](https://pythonhosted.org/Flask-
Bootstrap/forms.html)):
{% import "bootstrap/wtf.html" as wtf %}
<form class="form form-horizontal" method="POST" enctype="multipart/form-data">
{{ form.hidden_tag() }}
{{ wtf.form_field(form.photo, multiple="multiple") }}
{{ wtf.form_field(form.submit) }}
</form>
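If you would rather not add Flask-Uploads, the plain `getlist` approach from the question also works. A minimal sketch (the upload folder and field names are assumptions matching the question; `secure_filename` is werkzeug's standard helper for sanitizing client filenames):

```python
import os
from flask import Flask, request
from werkzeug.utils import secure_filename

app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = 'uploads'

@app.route('/upload', methods=['POST'])
def upload():
    os.makedirs(app.config['UPLOAD_FOLDER'], exist_ok=True)
    saved = []
    # getlist returns every file posted under the same field name
    for upload in request.files.getlist('pro_attachment1'):
        filename = secure_filename(upload.filename)
        upload.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
        saved.append(filename)
    return ', '.join(saved)
```

Remember the HTML input also needs the `multiple` attribute (as in the question) or the browser will only send one file.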
|
How to remove a specific element from a python list?
Question: I want to remove an element from an array A using an array B of IDs, given a
specific scalar ID C.
In matlab I can do this:
A(B == C) = []
This is an example of my code:
boxes = [[1,2,20,20],[4,8,20,20],[8,10,40,40]]
boxIDs = [1,2,3]
IDx = 2
I want to delete the second box completely out of the list.
How can I do this in python? I have numpy.
Answer: Without importing `numpy`, you can `pop` the element out. Try:
boxes = [[1,2,20,20],[4,8,20,20],[8,10,40,40]]
IDx = 1
pop_element = boxes.pop(IDx)
the list `boxes` now is `[[1, 2, 20, 20], [8, 10, 40, 40]]` and `pop_element`
is `[4, 8, 20, 20]`
PS: in python indexes start from `0` instead of `1`.
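Since you do have numpy, the MATLAB-style `A(B == C) = []` also translates almost directly to boolean-mask indexing (variable names follow the question):

```python
import numpy as np

boxes = np.array([[1, 2, 20, 20], [4, 8, 20, 20], [8, 10, 40, 40]])
boxIDs = np.array([1, 2, 3])
C = 2

# boolean mask: keep every row whose ID differs from C
boxes = boxes[boxIDs != C]
print(boxes.tolist())  # [[1, 2, 20, 20], [8, 10, 40, 40]]
```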
|
Looping through scrapped data and outputting the result
Question: I am trying to scrape the BBC football results website to get teams, shots,
goals, cards and incidents. I currently have 3 teams data passed into the URL.
I writing the script in Python and using the Beautiful soup `bs4` package.
When outputting the results to screen, the first team is printed, then the
first and second team, then the first, second and third team. So the first
team is effectively printed 3 times, when I am trying to get the 3 teams
just once.
Once I have this problem sorted I will write the results to file. I am adding
the teams data into data frames then into a list (I am not sure if this is the
best method). I am sure if is something to do with the `for` loops, but I am
unsure how to resolve the problem. Code:
from bs4 import BeautifulSoup
import urllib2
import pandas as pd
out_list = []
for numb in('EFBO839787', 'EFBO839786', 'EFBO815155'):
url = 'http://www.bbc.co.uk/sport/football/result/partial/' + numb + '?teamview=false'
teams_list = []
inner_page = urllib2.urlopen(url).read()
soupb = BeautifulSoup(inner_page, 'lxml')
for report in soupb.find_all('td', 'match-details'):
home_tag = report.find('span', class_='team-home')
home_team = home_tag and ''.join(home_tag.stripped_strings)
score_tag = report.find('span', class_='score')
score = score_tag and ''.join(score_tag.stripped_strings)
shots_tag = report.find('span', class_='shots-on-target')
shots = shots_tag and ''.join(shots_tag.stripped_strings)
away_tag = report.find('span', class_='team-away')
away_team = away_tag and ''.join(away_tag.stripped_strings)
df = pd.DataFrame({'away_team' : [away_team], 'home_team' : [home_team], 'score' : [score], })
out_list.append(df)
for shots in soupb.find_all('td', class_='shots'):
home_shots_tag = shots.find('span',class_='goal-count-home')
home_shots = home_shots_tag and ''.join(home_shots_tag.stripped_strings)
away_shots_tag = shots.find('span',class_='goal-count-away')
away_shots = away_shots_tag and ''.join(away_shots_tag.stripped_strings)
dfb = pd.DataFrame({'home_shots': [home_shots], 'away_shots' : [away_shots] })
out_list.append(dfb)
for incidents in soupb.find("table", class_="incidents-table").find("tbody").find_all("tr"):
home_inc_tag = incidents.find("td", class_="incident-player-home")
home_inc = home_inc_tag and ''.join(home_inc_tag.stripped_strings)
type_inc_goal_tag = incidents.find("td", "span", class_="incident-type goal")
type_inc_goal = type_inc_goal_tag and ''.join(type_inc_goal_tag.stripped_strings)
type_inc_tag = incidents.find("td", class_="incident-type")
type_inc = type_inc_tag and ''.join(type_inc_tag.stripped_strings)
time_inc_tag = incidents.find('td', class_='incident-time')
time_inc = time_inc_tag and ''.join(time_inc_tag.stripped_strings)
away_inc_tag = incidents.find('td', class_='incident-player-away')
away_inc = away_inc_tag and ''.join(away_inc_tag.stripped_strings)
df_incidents = pd.DataFrame({'home_player' : [home_inc],'event_type' : [type_inc_goal],'event_time': [time_inc],'away_player' : [away_inc]})
out_list.append(df_incidents)
print "end"
print out_list
I am new to Python and Stack Overflow; any suggestions on formatting my
question are also welcome.
Thanks in advance!
Answer: Those 3 for loops should be inside your main for loop.
out_list = []
for numb in('EFBO839787', 'EFBO839786', 'EFBO815155'):
url = 'http://www.bbc.co.uk/sport/football/result/partial/' + numb + '?teamview=false'
teams_list = []
inner_page = urllib2.urlopen(url).read()
soupb = BeautifulSoup(inner_page, 'lxml')
for report in soupb.find_all('td', 'match-details'):
# your code as it is
for shots in soupb.find_all('td', class_='shots'):
# your code as it is
for incidents in soupb.find("table", class_="incidents-table").find("tbody").find_all("tr"):
# your code as it is
It works just fine - shows up a team just once.
Here's output of first for loop:
[{'score': ['1-3'], 'away_team': ['Man City'], 'home_team': ['Dynamo Kiev']},
{'score': ['1-0'], 'away_team': ['Zenit St P'], 'home_team': ['Benfica']},
{'score': ['1-2'], 'away_team': ['Boston United'], 'home_team': ['Bradford Park Avenue']}]
|
How to make python spot a folder and print its files
Question: I want to be able to make Python print everything in my C drive. I have
figured out how to print what's on the first "layer" of the drive,
def filespotter():
import os
path = 'C:/'
dirs = os.listdir( path )
for file in dirs:
print(file)
but I want it to go into the other folders and print what is in every other
folder.
Answer: **Disclaimer** `os.walk` is just fine, I'm here to provide a easier solution.
If you're using python 3.5 or above, you can use
[`glob.glob`](https://docs.python.org/3/library/glob.html#glob.glob) with
`'**'` and `recursive=True`
For example: `glob.glob(r'C:\**', recursive=True)`
Please note that getting the entire file list of C:\ drive can take a lot of
time.
If you don't need the entire list at the same time,
[`glob.iglob`](https://docs.python.org/3/library/glob.html#glob.glob) is a
reasonable choice. The usage is the same, except that you get an iterator
instead of a list.
To print everything under C:\
for filename in glob.iglob(r'C:\**', recursive=True):
print(filename)
It gives you output as soon as possible.
However if you don't have python 3.5 available, you can see [the source of
glob.py](https://hg.python.org/cpython/file/3.5/Lib/glob.py) and adapt it for
your use case.
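If `glob` with `recursive=True` isn't available, `os.walk` gives the same recursive listing on any Python version. A sketch (demonstrated on a small temporary directory; point `top` at `'C:/'` for the real thing, which will be slow):

```python
import os
import tempfile

# build a tiny directory tree to walk (a stand-in for C:/)
top = tempfile.mkdtemp()
os.mkdir(os.path.join(top, 'sub'))
for name in ('a.txt', os.path.join('sub', 'b.txt')):
    open(os.path.join(top, name), 'w').close()

found = []
for dirpath, dirnames, filenames in os.walk(top):
    for filename in filenames:
        path = os.path.join(dirpath, filename)
        print(path)
        found.append(path)
```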
|
Update QlistView with python list updated from another thread (pyqt5)
Question: I try to create a GUI for displaying a Python list of 512 values (0/255).
It's simple with PyQt to setup a QListWidget or QListView to display this kind
of list
from sys import argv, exit
from PyQt5.QtWidgets import QListWidgetItem, QListWidget, QApplication
class Universe(QListWidget):
"""This is a class for a DMX Universe of 512 dimmers"""
def __init__(self):
super(Universe, self).__init__()
for i in range(512):
item = QListWidgetItem('dimmer n° '+str(i+1)+' : 0')
self.addItem(item)
if __name__ == "__main__":
app = QApplication(argv)
list_widget = Universe()
list_widget.show()
exit(app.exec_())
I can create a button to send random values. No latency, everything is nice.
from random import randrange
from sys import argv, exit
from PyQt5.QtWidgets import QListWidgetItem, QListWidget, QApplication, QGroupBox, QVBoxLayout, QPushButton
class DisplayGroup(QGroupBox):
"""This is a group of widgets to display"""
def __init__(self):
super(DisplayGroup, self).__init__()
# create a vertical layout
vbox = QVBoxLayout()
# create an universeand and add it to the layout
universe = Universe()
self.universe = universe
vbox.addWidget(universe)
# create a button to make some noise
button = QPushButton('make some noise')
vbox.addWidget(button)
button.released.connect(self.make_some_noise)
# set the layout on the groupbox
vbox.addStretch(1)
self.setLayout(vbox)
def make_some_noise(self):
self.universe.update([randrange(0, 101, 2) for i in range(512)])
class Universe(QListWidget):
"""This is a class for a DMX Universe of 512 dimmers"""
def __init__(self):
super(Universe, self).__init__()
for index in range(512):
item = QListWidgetItem('dimmer n° '+str(index+1)+' : 0')
self.addItem(item)
def update(self, data):
for index, value in enumerate(data):
item = self.item(index)
item.setText('dimmer n° '+str(index+1)+' : '+str(value))
if __name__ == "__main__":
app = QApplication(argv)
group_widget = DisplayGroup()
group_widget.show()
exit(app.exec_())
My problem is that the lib I use for listening for new frames needs to be in a
separate thread. I created a thread to listen for updates to the list, but I
cannot manage to have my QListView / QListWidget updated each time a value
changes. The widgets are only updated when I click on the widget itself.
I tried everything I found on forums, but I could not make it work. I tried
using a signal for dataChanged.emit and even an (ugly) global, but the value
is not updated in my view.
Here is the latest code with the ugly global.
If anyone could help me on this point.
cheers !!
from random import randrange
from time import sleep
from sys import argv, exit
from PyQt5.QtCore import QThread, QAbstractListModel, Qt, QVariant
from PyQt5.QtWidgets import QListView, QApplication, QGroupBox, QVBoxLayout, QPushButton
universe_1 = [0 for i in range(512)]
class SpecialProcess(QThread):
def __init__(self):
super(SpecialProcess, self).__init__()
self.start()
def run(self):
global universe_1
universe_1 = ([randrange(0, 101, 2) for i in range(512)])
sleep(0.1)
self.run()
class Universe(QAbstractListModel):
def __init__(self, parent=None):
super(Universe, self).__init__(parent)
def rowCount(self, index):
return len(universe_1)
def data(self, index, role=Qt.DisplayRole):
index = index.row()
if role == Qt.DisplayRole:
try:
return universe_1[index]
except IndexError:
return QVariant()
return QVariant()
class Viewer(QGroupBox):
def __init__(self):
super(Viewer, self).__init__()
list_view = QListView()
self.list_view = list_view
# create a vertical layout
vbox = QVBoxLayout()
universe = Universe()
vbox.addWidget(list_view)
# Model and View setup
self.model = Universe(self)
self.list_view.setModel(self.model)
# meke a process running in parallel
my_process = SpecialProcess()
# set the layout on the groupbox
vbox.addStretch(1)
self.setLayout(vbox)
if __name__ == "__main__":
app = QApplication(argv)
group_widget = Viewer()
group_widget.show()
exit(app.exec_())
Answer: You need to tell the model that the underlying data has changed so that it can
update the view. This can be done with a custom signal, like this:
from PyQt5.QtCore import pyqtSignal
universe_1 = [0 for i in range(512)]
class SpecialProcess(QThread):
universeChanged = pyqtSignal()
def __init__(self):
super(SpecialProcess, self).__init__()
self.start()
def run(self):
while True:  # loop rather than recursing into run(), which would eventually overflow the stack
universe_1[:] = [randrange(0, 101, 2) for i in range(512)]
self.universeChanged.emit()
sleep(0.1)
Which is then connected to the model like this:
my_process = SpecialProcess()
my_process.universeChanged.connect(self.model.layoutChanged.emit)
|
Python 2.7 bottle web
Question: I’m trying to figure out how to rename an existing text file when I change the
title of the text file. If I change the title now, it’s going to create a new
text file with the new title. The "old text file" that I wanted to save with a
new name still exists but with the original name. So I end up with two files with the
same content.
I’m creating new articles (text files) through @route('/update/',
method='POST') in my ”edit templet” where title=title, text=text. Let’s say
after I have created a new article with the name(title) = ”Key” and wrote a
bit in that text file. Then If I want to edit/change my ”Key” article I click
on that article and present the article in @route('/wiki/',)def
show_article(article):. title = article, text = text)
In this template I can change my ”Key” name(title) to ”Lock”. I’m still using
the same form @route('/update/', method='POST') to post my changes. Here is
the problem, it creates a new text file instead of renaming the ”Key” article
to ”Lock”.
How can I change the @route('/update/', method='POST') to make it realise that
I’m working with an already existing text file and only wants to rename that
file. I have tried to use two different method=’POST’ but only gets method not
allowed error all the time.
from bottle import route, run, template, request, static_file
from os import listdir
import sys
host='localhost'
@route('/static/<filname>')
def serce_static(filname):
return static_file(filname, root="static")
@route("/")
def list_articles():
files = listdir("wiki")
articles = []
for i in files:
lista = i.split('.')
word = lista[0]
lista1 = word.split('/')
articles.append(lista1[0])
return template("index", articles=articles)
@route('/wiki/<article>',)
def show_article(article):
wikifile = open('wiki/' + article + '.txt', 'r')
text = wikifile.read()
wikifile.close()
return template('page', title = article, text = text)
@route('/edit/')
def edit_form():
return template('edit')
@route('/update/', method='POST')
def update_article():
title = request.forms.title
text = request.forms.text
tx = open('wiki/' + title + '.txt', 'w')
tx.write(text)
tx.close()
return template('thanks', title=title, text=text)
run(host='localhost', port=8080, debug=True, reloader=True)
Answer: You can rename the file with `os.rename('old_name', 'new_name')`; note that
`os.replace` only exists on Python 3.3+, while this question uses Python 2.7:
import os
...
tx = open('wiki/' + title + '.txt', 'w')
tx.write(text)
tx.close()
os.rename(tx.name, 'name_you_want.txt') # rename after the file is closed
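To actually fix the duplicate-file problem, the edit form would also need to post the original title (e.g. in a hidden input), so the update handler can rename when the title changes. A sketch of the file handling only (the function signature and the hidden-field idea are assumptions, not part of the original code):

```python
import os

def update_article(old_title, new_title, text, wiki_dir='wiki'):
    """Write text under new_title; if the title changed, rename the
    existing file instead of leaving a duplicate behind."""
    old_path = os.path.join(wiki_dir, old_title + '.txt')
    new_path = os.path.join(wiki_dir, new_title + '.txt')
    if old_title != new_title and os.path.exists(old_path):
        os.rename(old_path, new_path)  # works on Python 2.7 as well
    with open(new_path, 'w') as f:
        f.write(text)
```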
|
In AWS, how to create elastic ip with boto3 ? or more generaly with python?
Question: I'd like to create an elastic IP with a python script. I didn't find a way in
the docs.
Answer: Use [Allocate
Address](http://boto3.readthedocs.org/en/latest/reference/services/ec2.html#EC2.Client.allocate_address)
> Acquires an Elastic IP address.
>
> An Elastic IP address is for use either in the EC2-Classic platform or in a
> VPC. For more information, see Elastic IP Addresses in the Amazon Elastic
> Compute Cloud User Guide
import boto3
client = boto3.client('ec2')
addr = client.allocate_address(Domain='vpc')
print addr['PublicIp']
|
Datalab does not populate bigQuery tables
Question: Hi I have a problem while using ipython notebooks on datalab.
I want to write the result of a table into a bigQuery table but it does not
work and anyone says to use the insert_data(dataframe) function but it does
not populate my table. To simplify the problem I try to read a table and write
it to a just created table (with the same schema) but it does not work. Can
anyone tell me where I am wrong?
import gcp
import gcp.bigquery as bq
#read the data
df = bq.Query('SELECT 1 as a, 2 as b FROM [publicdata:samples.wikipedia] LIMIT 3').to_dataframe()
#creation of a dataset and extraction of the schema
dataset = bq.DataSet('prova1')
dataset.create(friendly_name='aaa', description='bbb')
schema = bq.Schema.from_dataframe(df)
#creation of the table
temptable = bq.Table('prova1.prova2').create(schema=schema, overwrite=True)
#I try to put the same data into the temptable just created
temptable.insert_data(df)
Answer: Calling insert_data will do an HTTP POST and return once that is done. However,
it can take some time for the data to show up in the BQ table (up to several
minutes). Try waiting a while before using the table. We may be able to address
this in a future update, [see
this](https://github.com/GoogleCloudPlatform/datalab/issues/754)
The hacky way to block until ready right now should be something like:
import time
while True:
info = temptable._api.tables_get(temptable._name_parts)
if 'streamingBuffer' not in info:
break
if info['streamingBuffer']['estimatedRows'] > 0:
break
time.sleep(5)
|
os.chdir working once, then not working after called a second time; python script
Question: in the following script, I try to clone all projects except two, then clone
those two into homepath, not my projects dir:
#!/usr/bin/env python
import os, sys, subprocess, time, re
from my_scripting_library import *
bucket_hoss = BitbucketAPIHoss()
all_project_names = bucket_hoss.get_bitbucket_project_names()
print(all_project_names)
os.chdir(PROJECT_PATH)
print(PROJECT_PATH)
TOP_LEVEL_PROJECTS = ['scripts', 'my_documents']
for project in all_project_names:
if project not in TOP_LEVEL_PROJECTS:
clone_project(project=project)
os.chdir(HOMEPATH)
print (HOMEPATH)
print(os.getcwd())
for project in TOP_LEVEL_PROJECTS:
clone_project(project=project)
output
cchilders:~/scripts [master]$ ./git_and_bitbucket/clone_all.py
[u'autohelper', u'bookwormbuddy', u'buildyourownlisp_in_C', u'bytesized_python', u'craigslist', u'foodpro', u'govdict', u'javascript-practice', u'learn_c_the_hard_way', u'my_documents', u'neo4j_sandbox', u'notes_at_work', u'poker_program_demo', u'portfolio', u'scriptcity_demo', u'scripts', u'transmorg_django', u'writing']
/home/cchilders/projects
fatal: destination path '/home/cchilders/projects/autohelper' already exists and is not an empty directory.
fatal: destination path '/home/cchilders/projects/bookwormbuddy' already exists and is not an empty directory.
fatal: destination path '/home/cchilders/projects/buildyourownlisp_in_C' already exists and is not an empty directory.
fatal: destination path '/home/cchilders/projects/bytesized_python' already exists and is not an empty directory.
fatal: destination path '/home/cchilders/projects/craigslist' already exists and is not an empty directory.
fatal: destination path '/home/cchilders/projects/foodpro' already exists and is not an empty directory.
fatal: destination path '/home/cchilders/projects/govdict' already exists and is not an empty directory.
fatal: destination path '/home/cchilders/projects/javascript-practice' already exists and is not an empty directory.
fatal: destination path '/home/cchilders/projects/learn_c_the_hard_way' already exists and is not an empty directory.
fatal: destination path '/home/cchilders/projects/neo4j_sandbox' already exists and is not an empty directory.
fatal: destination path '/home/cchilders/projects/notes_at_work' already exists and is not an empty directory.
fatal: destination path '/home/cchilders/projects/poker_program_demo' already exists and is not an empty directory.
fatal: destination path '/home/cchilders/projects/portfolio' already exists and is not an empty directory.
fatal: destination path '/home/cchilders/projects/scriptcity_demo' already exists and is not an empty directory.
fatal: destination path '/home/cchilders/projects/transmorg_django' already exists and is not an empty directory.
fatal: destination path '/home/cchilders/projects/writing' already exists and is not an empty directory.
/home/cchilders
/home/cchilders
fatal: destination path '/home/cchilders/projects/scripts' already exists and is not an empty directory.
fatal: destination path '/home/cchilders/projects/my_documents' already exists and is not an empty directory.
Since the script is now showing to be in the correct dir (my HOMEPATH), why do
these projects still have a mind of their own to be cloned into the projects
dir? Thank you
Answer: The problem is somewhere in your `clone_project` function. My guess (since you
didn't post that code) is that the clone destination is either directly
hard-coded or built from `PROJECT_PATH`.
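A likely fix is to build the `git clone` command without an absolute destination path, so git clones into the current working directory and the `os.chdir()` calls take effect. A sketch (the Bitbucket URL format, username, and helper names are assumptions, since `clone_project` was not posted):

```python
import subprocess

def build_clone_command(project, user='cchilders'):
    # no destination argument: git clones into the current working
    # directory, so the caller's os.chdir() actually matters
    url = 'git@bitbucket.org:{}/{}.git'.format(user, project)
    return ['git', 'clone', url]

def clone_project(project):
    subprocess.call(build_clone_command(project))
```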
|
Python script to parse text file and execute inline python code
Question: I am new to python and trying to create a script that could do the following:
infile = open("input.txt", "r")
outfile = open("output.txt", "w")
print ("Starting file transformation...")
for line in infile:
# If the line is not encapsulated in between two markers:
# //START_PY & //END_PY then print the line verbatim. If
# it is between the markers then execute the inline python code
infile.close()
outfile.close()
So an input.txt like the following:
* * *
Hello World
//START_PY
for count in range(1,2):
print ("More Hello World");
//END_PY
Even More Hello World
* * *
This would output the following:
Hello World
More Hello World
More Hello World
Even More Hello World
I could create an intermediate python script and execute that from within the
parent script. Is there a way to do this in one pass i.e. directly create the
output file?
Answer: It can be done with the help of the abstract syntax trees ("ast") module:
>>> import ast
>>> # let's assume you already parsed your code sections
>>> code_text = "for count in range(2):\n\tprint('[%d] More hello worlds' % count)\n"
>>> po = ast.parse(code_text)
>>> co = compile(po, "<unknown>", "exec")
>>> namespace = {}
>>> exec(co, namespace)
[0] More hello worlds
[1] More hello worlds
>>>
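Extending that idea to the whole question (copy lines through verbatim, but substitute the output of any code between the markers) can be done in one pass. A sketch, where the `transform` helper and the in-memory line list are illustrative assumptions; with a real file you would pass `open("input.txt")` and write the result to `output.txt`:

```python
import ast
import io
from contextlib import redirect_stdout

def transform(lines):
    """Copy lines through verbatim, but execute anything between
    //START_PY and //END_PY markers and emit its stdout instead."""
    out = io.StringIO()
    code, in_code = [], False
    for line in lines:
        marker = line.strip()
        if marker == '//START_PY':
            in_code, code = True, []
        elif marker == '//END_PY':
            in_code = False
            compiled = compile(ast.parse(''.join(code)), '<inline>', 'exec')
            with redirect_stdout(out):  # capture the inline code's prints
                exec(compiled, {})
        elif in_code:
            code.append(line)
        else:
            out.write(line)
    return out.getvalue()
```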
|
Python addition math quiz
Question: Just learning Python, and I'm trying to make an extremely simple math quiz,
but when running I get a syntax error. Please explain what I have done wrong:
from random import randint
inf = 0
while inf < 10:
num1 = randint(0,5000)
num2 = randint(0,5000)
ans = num1+num2
print(num1,"+",num2)
plrans = input(int()"What's the answer?")
if plrans = ans
print("Correct!")
else
print("Incorrect :(")
Answer: There are a few things wrong here:
1. The conversion to `int` should be done after getting the input from the user
2. `=` is the assignment operator. In order to check for equality you should use the `==` operator
3. Your `if` statement is missing a colon (`:`)
4. So is your `else` statement:
from random import randint
inf = 0
while inf < 10:
num1 = randint(0,5000)
num2 = randint(0,5000)
ans = num1+num2
print(num1,"+",num2)
plrans = int(input("What's the answer?")) # issue 1
if plrans == ans:  # issues 2 and 3
print("Correct!")
else: # issue 4
print("Incorrect :(")
|
Get the name of the current module that has been failed to import
Question: I need help to get the name of the module that failed to import while doing
this. The code is:
#!/usr/bin/env python
bla=[]
try:
import os
import sys
import somethings
import blabla
except:
bla.append(NOT_IMPORTED_MODULE_NAME) # it should be here
if len(bla)>0:
exit("not imported:%s" % " ".join(bla))
Thank you in advance.
Answer: Although what I am going to say is more appropriate as a comment, I am
typing it here since I am not allowed to comment due to my low reputation
(I am very new here).
Let's say both the sys and blabla modules are missing. With the current setup,
the exception is thrown when the import of sys fails, and `import blabla` is
never even reached; so you cannot tell whether a later module is missing when
an earlier one is. If you really want this feature, you can wrap every single
`import module` in its own exception handler, but that seems like overkill to
me. As idjaw mentioned, you should be able to see from the stack trace which
module is missing.
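If you do want the per-module behaviour from the question, importing by name with `__import__` keeps that loop short (module names copied from the question; printing instead of exiting so the result is easy to inspect):

```python
missing = []
for name in ('os', 'sys', 'somethings', 'blabla'):
    try:
        __import__(name)          # attempt one module at a time
    except ImportError:
        missing.append(name)      # record exactly which one failed

if missing:
    print("not imported: %s" % " ".join(missing))
```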
|
unable to run mongo-connector
Question: I have installed mongo-connector in the mongodb server.
I am executing by giving the command
mongo-connector -m [remote mongo server IP]:[remote mongo server port] -t [elastic search server IP]:[elastic search server Port] -d elastic_doc_manager.py
I also tried with this since mongo is running in the same server with the
default port.
mongo-connector -t [elastic search server IP]:[elastic search server Port] -d elastic_doc_manager.py
I am getting error
Traceback (most recent call last):
File "/usr/local/bin/mongo-connector", line 9, in <module>
load_entry_point('mongo-connector==2.3.dev0', 'console_scripts', 'mongo-connector')()
File "/usr/local/lib/python2.7/dist-packages/mongo_connector-2.3.dev0-py2.7.egg/mongo_connector/util.py", line 85, in wrapped
func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/mongo_connector-2.3.dev0-py2.7.egg/mongo_connector/connector.py", line 1037, in main
conf.parse_args()
File "/usr/local/lib/python2.7/dist-packages/mongo_connector-2.3.dev0-py2.7.egg/mongo_connector/config.py", line 118, in parse_args
option, dict((k, values.get(k)) for k in option.cli_names))
File "/usr/local/lib/python2.7/dist-packages/mongo_connector-2.3.dev0-py2.7.egg/mongo_connector/connector.py", line 820, in apply_doc_managers
module = import_dm_by_name(dm['docManager'])
File "/usr/local/lib/python2.7/dist-packages/mongo_connector-2.3.dev0-py2.7.egg/mongo_connector/connector.py", line 810, in import_dm_by_name
"Could not import %s." % full_name)
**mongo_connector.errors.InvalidConfiguration: Could not import mongo_connector.doc_managers.elastic_doc_manager.py.**
NOTE: I am using python2.7 and mongo-connector 2.3
Elastic search server is 2.2
Any suggestions ?
**[edit]** After applying `Val`'s suggestion:
> 2016-02-29 19:56:59,519 [CRITICAL] mongo_connector.oplog_manager:549 -
> Exception during collection dump
>
> Traceback (most recent call last):
>
> File "/usr/local/lib/python2.7/dist-
> packages/mongo_connector-2.3.dev0-py2.7.egg/mongo_connector/oplog_manager.py",
> line 501, in do_dump
>
> upsert_all(dm)
>
> File "/usr/local/lib/python2.7/dist-
> packages/mongo_connector-2.3.dev0-py2.7.egg/mongo_connector/oplog_manager.py",
> line 485, in upsert_all dm.bulk_upsert(docs_to_dump(namespace), mapped_ns,
> long_ts)
>
> File "/usr/local/lib/python2.7/dist-
> packages/mongo_connector-2.3.dev0-py2.7.egg/mongo_connector/util.py", line
> 32, in wrapped
>
> return f(*args, **kwargs)
>
> File "/usr/local/lib/python2.7/dist-
> packages/mongo_connector-2.3.dev0-py2.7.egg/mongo_connector/doc_managers/elastic_doc_manager.py",
> line 190, in bulk_upsert
>
> for ok, resp in responses:
>
> File "/usr/local/lib/python2.7/dist-
> packages/elasticsearch-1.9.0-py2.7.egg/elasticsearch/helpers/init.py", line
> 160, in streaming_bulk
>
> for result in _process_bulk_chunk(client, bulk_actions, raise_on_exception,
> raise_on_error, **kwargs):
>
> File "/usr/local/lib/python2.7/dist-
> packages/elasticsearch-1.9.0-py2.7.egg/elasticsearch/helpers/init.py", line
> 132, in _process_bulk_chunk
>
> raise BulkIndexError('%i document(s) failed to index.' % len(errors),
> errors)
>
> BulkIndexError: (u'2 document(s) failed to index.',..document_class=dict,
> tz_aware=False, connect=True, replicaset=u'mss'), u'local'), u'oplog.rs')
>
> 2016-02-29 19:56:59,835 [ERROR] mongo_connector.connector:302 -
> MongoConnector: OplogThread unexpectedly stopped! Shutting down
* * *
Hi Val,
I connected with another mongodb instance, which had only one database, having
one collection with 30,000+ records and I was able to execute it succesfully.
The previous mongodb collection has multiple databases (around 7), which
internally had multiple collections (around 5 to 15 per databases) and all
were having good amount of documents (ranging from 500 to 50,000) in the
collections.
Was mongo-connector failing because of the huge amount of data residing in the mongo
database?
I have further queries
a. Is it possible to index only specific collections in MongoDB, residing in
different databases? I want to index only specific collections (not the entire
database). How can I achieve this?
b. In Elasticsearch I can see duplicate indexes for one collection. The first
is named after the database (as expected), the other is named mongodb_meta;
both have the same data, and if I change the collection, the update happens in
both of them.
c. Is it possible to configure the output index name or any other parameters
somehow?
Answer: I think the only issue is that you have the `.py` extension on the doc manager
(it was needed before mongo-connector 2.0), you simply need to remove it:
mongo-connector -m [remote mongo server IP]:[remote mongo server port] -t [elastic search server IP]:[elastic search server Port] -d elastic_doc_manager
|
Problems with installing scikit-learn on Fedora
Question: I have some problems while installing scikit-learn on Fedora 23 using pip
`pip install scikit-learn`
Here's what I get
> Command "/usr/bin/python -u -c "import setuptools, tokenize;**file**
> ='/tmp/pip-build-MPbvR0/scikit-
> learn/setup.py';exec(compile(getattr(tokenize, 'open',
> open)(**file**).read().replace('\r\n', '\n'), **file** , 'exec'))" install
> --record /tmp/pip-k_kxgh-record/install-record.txt --single-version-
> externally-managed --compile" failed with error code 1 in /tmp/pip-build-
> MPbvR0/scikit-learn
What may the problem be?
Answer: There is a way to get around the problem. Download and install
[Anaconda](https://www.continuum.io/downloads) which is Python + 195 packages
including scikit-learn.
|
wsgi breaks on ec2 django installation
Question: I'm getting the following error when testing a start django app: ImportError:
No module named django.core.wsgi
[Fri Feb 26 23:23:33 2016] [error] [client 100.9.129.136] mod_wsgi (pid=25312): Target WSGI script '/var/www/html/app_mgmt/app_core/wsgi.py' cannot be loaded as Python module.
[Fri Feb 26 23:23:33 2016] [error] [client 100.9.129.136] mod_wsgi (pid=25312): Exception occurred processing WSGI script '/var/www/html/app_mgmt/app_core/wsgi.py'.
[Fri Feb 26 23:23:33 2016] [error] [client 100.9.129.136] Traceback (most recent call last):
[Fri Feb 26 23:23:33 2016] [error] [client 100.9.129.136] File "/var/www/html/app_mgmt/app_core/wsgi.py", line 16, in <module>
[Fri Feb 26 23:23:33 2016] [error] [client 100.9.129.136] from django.core.wsgi import get_wsgi_application
[Fri Feb 26 23:23:33 2016] [error] [client 100.9.129.136] ImportError: No module named django.core.wsgi
my httpd.conf looks like this:
LoadModule wsgi_module modules/mod_wsgi.so
#loaded above
NameVirtualHost *:80
WSGIScriptAlias / /var/www/html/app_mgmt/app_core/wsgi.py
WSGIPythonPath /var/www/html/app_mgmt
<Directory /var/www/html/app_mgmt>
<Files wsgi.py>
Order deny,allow
Allow from all
</Files>
</Directory>
My wsgi.py looks like:
import os
import sys
sys.path.append("/var/www/html/app_mgmt")
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "app_mgmt.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
I installed mod_wsgi with the following command:
sudo yum install mod_wsgi
Does anyone know what I am doing wrong?
Answer: `sudo yum install mod_wsgi` does not work. It has to be compiled on your
instance. Here are the complete steps:
sudo yum install httpd httpd-devel
sudo yum install mysql mysql-server
sudo pip install django
sudo yum group install "Development Tools"
download the latest version of modwsgi here: http://code.google.com/p/modwsgi/downloads/list
https://code.google.com/archive/p/modwsgi/wikis/QuickInstallationGuide.wiki this is a good ref
tar xvfz mod_wsgi-X.Y.tar.gz
cd into mod_wsgi
./configure
make
make install
pip install MySQL-python
|
Python: Rock Paper Scissors While Loop Issue
Question: I'm having an issue with my programming of Rock, Paper, Scissors for Python.
My issue occurs when there is a tie. When there is a tie, my program is
supposed to go into a while loop within the tie's if statement and re-ask the
player the same question (rock, paper or scissors) until it breaks out of the
tie. I have attached a link to an image of the issue:

In Round 5: you can see the issue.
I am taking an intro to programming class so I'm still a beginner and I do not
know what I am doing wrong.
# A Python program for the Rock, Paper, Scissors game.
import random
def rock_paper_scissors():
playerscore = 0
computerscore = 0
rounds = input('\nHow many points does it take to win?: ')
count = 1
while playerscore or computerscore != int(rounds):
print('\n********************* ROUND #',count,'*********************')
player = input('\nPick your throw: [r]ock, [p]aper, or [s]cissors?: ')
computerthrow = random.randint(0,2)
if (computerthrow == 0):
computer = "rock"
computer = 'r'
elif (computerthrow == 1):
computer = "paper"
computer = 'p'
elif (computerthrow == 2):
computer = "scissors"
computer = 's'
if (player == computer):
print('Tie!')
while (player == computer):
player = input('\nPick your throw: [r]ock, [p]aper, or [s]cissors?: ')
computerthrow = random.randint(0,2)
if (computerthrow == 0):
computer = "rock"
computer = 'r'
elif (computerthrow == 1):
computer = "paper"
computer = 'p'
elif (computerthrow == 2):
computer = "scissors"
computer = 's'
print(computer)
elif (player == 'r'):
if (computer == "p"):
print('Computer threw paper, you lose!')
computerscore=computerscore+1
else:
print('Computer threw scissors, you win!')
playerscore = playerscore+1
#count = count + 1
elif (player == 'p'):
if (computer == "r"):
print('Computer threw rock, you win!')
playerscore = playerscore+1
else:
print('Computer threw scissors, you lose!')
computerscore=computerscore+1
#count = count + 1
elif (player == 's'):
if (computer == "p"):
print('Computer threw paper, you win!')
playerscore = playerscore+1
else:
print('Computer threw rock, you lose!')
computerscore=computerscore+1
count = count + 1
print('\nYour score: ',playerscore)
print('Computer''s score: ',computerscore,'\n')
print('********************* GAME OVER ********************')
def main():
print('ROCK PAPER SCISSORS in Python')
print()
print('Rules: 1) Rock wins over Scissors.')
print(' 2) Scissors wins over Paper.')
print(' 3) Paper wins over Rock.')
rock_paper_scissors()
main()
Answer: Your problem comes from the way you have structured your control
statements (if, elif, else). Your tie while loop runs until the two throws
differ, and that part works fine. The issue is that once the `if (player ==
computer)` branch has executed, the Python interpreter skips all the `elif` and
`else` branches of that control block, so the round is never scored. (If I say
`if x == 3: do this, else: do that`, Python will not follow through with the
else branch when x does equal 3.) Long story short: you need to make sure that
even when your tie block executes, you still move on to scoring the round and
starting a new one. The easy way to do that is to change the `elif (player ==
'r')` to an `if` statement. That way the interpreter treats the scoring
sequence as its own block, and it is always executed once both throws are
assigned.
# Example:
def f(x):
if (x == 0):
print("1")
x += 1
elif (x == 1):
print("2")
print("Done!")
def g(x):
if (x == 0):
print("1")
x += 1
if (x == 1):
print("2")
print("Done!")
if you call f(0): Python will print out 1 and then Done!
if you call g(0): Python will print out 1 then 2 then Done!
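As a side note, the duplication that makes this bug easy to introduce can be reduced by factoring the throw generation and the win/lose comparison into small helpers; a minimal sketch (the single-letter codes follow the question's input convention):

```python
import random

def computer_throw():
    """Pick the computer's throw as one of 'r', 'p', 's'."""
    return random.choice(['r', 'p', 's'])

def round_result(player, computer):
    """Return 'tie', 'win' or 'lose' from the player's point of view."""
    beats = {'r': 's', 'p': 'r', 's': 'p'}  # each key beats its value
    if player == computer:
        return 'tie'
    return 'win' if beats[player] == computer else 'lose'
```

With this, the main loop only needs one `while` that keeps calling both helpers until the result is not a tie, and the scoring code runs unconditionally afterwards.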
|
HTML and Python : How to make variables in a html code written in a python script
Question:
from bs4 import BeautifulSoup
import os
import re
htmlDoc="""
<html>
<body>
<table class="details" border="1" cellpadding="5" cellspacing="2" style="width:95%">
<tr>
<td>Roll No.</td>
<td><b>Subject 1</b></td>
<td>Subject 2</td>
</tr>
<tr>
<td>01</td>
<td>Absent</td>
<td>Present</td>
</tr>
<tr>
<td>02</td>
<td>Absent</td>
<td>Absent</td>
</tr>
</table>
</body>
</html>
"""
soup = BeautifulSoup(htmlDoc,"lxml")
#table = soup.find("table",attrs={class:"details"})
html = soup.prettify("utf-8")
with open("/home/alan/html_/output.html", "wb") as file:
file.write(html)
I've used BeautifulSoup to write the HTML code. In the code, the values I have
to make variable are Present and Absent. When some parameter changes, I have
to change the values: change present to absent and vice versa. I have to make
present/absent a variable 'a'.
Answer: Do you mean composing your own html from some form on python data using
BeautifulSoup? If so, the example below may be useful to you:
from bs4 import BeautifulSoup, Tag
import os
subjects = ['subject1', 'subject2']
vals = [['absent','present'],['absent','absent']] #rows
titles = ['Roll No.'] + subjects
html = """<html>
<body>
<table class="details" border="1" cellpadding="5" cellspacing="2" style="width:95%">
</table>
</body>
</html>"""
    soup = BeautifulSoup(html, 'lxml')
#find table tag
table = soup.find('table')
#add header to table tag
tr = Tag(table, name = 'tr')
for title in titles:
td = Tag(tr, name = 'td')
td.insert(0, title)
tr.append(td)
table.append(tr)
#add data to table one row at a time
    for i in range(len(vals)):  # one iteration per row
tr = Tag(table, name = 'tr')
td = Tag(tr, name = 'td')
td.string = str(i+1)
tr.append(td)
for val in vals[i]:
td = Tag(tr, name = 'td')
td.string = val
tr.append(td)
table.append(tr)
os.chdir(os.getcwd())
f = open('test.html','w')
f.write(soup.prettify())
f.close()
|
Use string in subprocess
Question: I've written Python code to compute an IP programmatically, that I then want
to use in an external connection program.
I don't know how to pass it to the subprocess:
import subprocess
from subprocess import call
some_ip = "192.0.2.0" # Actually the result of some computation,
# so I can't just paste it into the call below.
subprocess.call("given.exe -connect host (some_ip)::5631 -Password")
I've read what I could and found similar questions but I truly cannot
understand this step, to use the value of `some_ip` in the subprocess. If
someone could explain this to me it would be greatly appreciated.
Answer: If you don't use it with `shell=True` (and I don't recommend `shell=True`
unless you really know what you're doing, as shell mode can have security
implications) `subprocess.call` takes the command as an sequence (e.g. a
`list`) of its components: First the executable name, then the arguments you
want to pass to it. All of those should be strings, but whether they are
string literals, variables holding a string or function calls returning a
string doesn't matter.
Thus, the following should work:
import subprocess
some_ip = "192.0.2.0" # Actually the result of some computation.
subprocess.call(
["given.exe", "-connect", "host", "{}::5631".format(some_ip), "-Password"])
* I'm using [`str`'s `format` method](https://docs.python.org/3.4/library/stdtypes.html#str.format) to replace the `{}` placeholder in `"{}::5631"` with the string in `some_ip`.
* If you invoke it as `subprocess.call(...)`, then
import subprocess
is sufficient and
from subprocess import call
is unnecessary. The latter would be needed if you want to invoke the function
as just `call(...)`. In that case the former import would be unneeded.
|
Python: How to get rid of the sequences according to the sequence bases rather than their header name?
Question: I would like to subtract the records of one file from another based
on the sequence content rather than using the header name to get rid of the
sequences. Is there any other way I can remove the sequences? Can anyone help
me? If the fasta header below is replaced with >human then the following code
cannot function.
**Code**
from Bio import SeqIO
input_file = 'a.fasta'
merge_file = 'original.fasta'
output_file = 'results.fasta'
exclude = set()
fasta_sequences = SeqIO.parse(open(input_file),'fasta')
for fasta in fasta_sequences:
exclude.add(fasta.id)
fasta_sequences = SeqIO.parse(open(merge_file),'fasta')
with open(output_file, 'w') as output_handle:
for fasta in fasta_sequences:
if fasta.id not in exclude:
SeqIO.write([fasta], output_handle, "fasta")
**a.fasta**
>chr12:15747942-15747949
TGACATCA
>chr2:130918058-130918065
TGACCTCA
**original.fasta**
>chr3:99679938-99679945
TGACGTAA
>chr9:135822160-135822167
TGACCTCA
>chr12:15747942-15747949
TGACATCA
>chr2:130918058-130918065
TGACCTCA
>chr2:38430457-38430464
TGACCTCA
>chr1:112381724-112381731
TGACATCA
**results.fasta**
>chr3:99679938-99679945
TGACGTAA
>chr9:135822160-135822167
TGACCTCA
>chr2:38430457-38430464
TGACCTCA
>chr1:112381724-112381731
TGACATCA
Answer: You can check the sequences against one another. Be careful though, the
sequences may not be 100% matches and they need to be for this method to give
you the desired result. Access the sequence with `str(your_obj.seq)`.
In your code, implement the changes here:
for fasta in fasta_sequences:
exclude.add(str(fasta.seq))
and here:
for fasta in fasta_sequences:
if str(fasta.seq) not in exclude:
In your example, you should note that the `results.fasta` file will only
contain the following line because it is the only sequence in `original.fasta`
that doesn't match a sequence from `a.fasta`.
>chr3:99679938-99679945
TGACGTAA
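If Biopython is not available, the same sequence-based filtering can be sketched with plain string handling; this assumes single-line sequences, as in the example files:

```python
def read_fasta(text):
    """Yield (header, sequence) pairs from single-line-sequence FASTA text."""
    lines = [l.strip() for l in text.strip().splitlines() if l.strip()]
    for header, seq in zip(lines[::2], lines[1::2]):
        yield header, seq

def filter_by_sequence(original_text, exclude_text):
    """Keep records of original_text whose sequence is absent from exclude_text."""
    exclude = {seq for _, seq in read_fasta(exclude_text)}
    kept = [(h, s) for h, s in read_fasta(original_text) if s not in exclude]
    return '\n'.join('{}\n{}'.format(h, s) for h, s in kept)

# Shortened versions of the question's files:
a_fasta = ">chr12:15747942-15747949\nTGACATCA\n>chr2:130918058-130918065\nTGACCTCA"
original = (">chr3:99679938-99679945\nTGACGTAA\n"
            ">chr9:135822160-135822167\nTGACCTCA\n"
            ">chr12:15747942-15747949\nTGACATCA")
result = filter_by_sequence(original, a_fasta)
```

As in the Biopython version, only the record whose sequence does not appear in `a.fasta` survives.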
|
Adding short-hostname into the /etc/hosts file with python
Question: Currently my /etc/hosts file is missing the short hostname (last
column). Is there a way to take the FQDN value in the file, remove
'.pdp.wdf.ltd', and add the short hostname as the last column? To get this far
I wrote a small Python script and wrote the output to a file, but I am unable
to proceed to get the short hostname added.
#!/usr/bin/env python
import re,subprocess,os,socket
a=subprocess.Popen('ifconfig -a', stdout=subprocess.PIPE, shell=True)
_a, err= a.communicate()
_ou=dict(re.findall(r'^(\S+).*?inet addr:(\S+)', _a, re.S | re.M))
_ou=_ou.values()
_ou.remove('127.0.0.1')
y=[]
for i in _ou:
_z = '{0} ' .format (i), socket.getfqdn(i)
y.append(_z)
_y=dict(y)
_z=(' \n'.join('{0} \t {1}'.format(key, val)for (key,val) in _y.iteritems()))
# cat /etc/hosts
#IP-Address Full-Qualified-Hostname Short-Hostname
10.68.80.28 dewdfgld00035.pdp.wdf.ltd
10.68.80.45 lddbrdb.pdp.wdf.ltd
10.68.80.46 ldcirdb.pdp.wdf.ltd
10.72.176.28 dewdfgfd00035b.pdp.wdf.ltd
**Output needed in the /etc/hosts file**
##IP-Address Full-Qualified-Hostname Short-Hostname
10.68.80.28 dewdfgld00035.pdp.wdf.ltd dewdfgld00035
10.68.80.45 lddbrdb.pdp.wdf.ltd lddbrdb
10.68.80.46 ldcirdb.pdp.wdf.ltd ldcirbd
10.72.176.28 dewdfgfd00035b.pdp.wdf.ltd dewdfgfd00035b
Answer: You can use the following to match (with `g`lobal and `m`ultiline flags) :
(^[^\s#]+\s+([^.\n]+).*)
And replace with the following:
\1\2
See [RegEX DEMO](https://regex101.com/r/cP2fA1/1)
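The same substitution can be done from Python with `re.sub`; here a tab separates the new column (the exact whitespace is a choice), and the second group also excludes whitespace so the short name stops at the first dot:

```python
import re

hosts = ("10.68.80.28 dewdfgld00035.pdp.wdf.ltd\n"
         "10.68.80.45 lddbrdb.pdp.wdf.ltd")

# Group 1 is the whole line, group 2 the label before the first dot.
fixed = re.sub(r'(?m)^([^\s#]+\s+([^.\s]+)\S*)$', r'\1\t\2', hosts)
```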
|
pass data frame from one function to another in python
Question: I am using two functions, one to load data and another to get a summary of the
same data. However, in the second function, analyze(), I get the error that df
is not defined. How do I pass df from Loader() to analyze()?
from xlwings import Workbook, Range
import pandas as pd
def Loader():
wb = Workbook.caller()
file_path = Range(1,(1,1)).value
file=pd.read_excel(file_path, sheetname='Sheet1')
df = pd.DataFrame(file)
def analyze():
Range('C1').value=df.describe()
Answer: There are several ways, depending on what you want to do. The simplest
way is to `return` the `df` from `Loader()` and then give it to `analyze()` as
an argument:
def Loader():
wb = Workbook.caller()
file_path = Range(1,(1,1)).value
file=pd.read_excel(file_path, sheetname='Sheet1')
df = pd.DataFrame(file)
return df
def analyze(df):
Range('C1').value=df.describe()
# Use it this way
dataFrame = Loader()
    analyze(dataFrame)
Then another way is to have a Loader class like this:
class Loader(object):
def __init__(self):
wb = Workbook.caller()
file_path = Range(1,(1,1)).value
file=pd.read_excel(file_path, sheetname='Sheet1')
self.df = pd.DataFrame(file)
# 1) If you want to analyse when you create the object
# call analyze() here
self.analyze()
def analyze(self):
Range('C1').value=self.df.describe()
loader = Loader()
# 2) Otherwise you can keep control of analyze()
# and call it whenever you want, like this:
loader.analyze()
Of course there are other ways too (like having a global variable for the df).
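The global-variable option mentioned above looks like this; a minimal sketch with a plain list standing in for the DataFrame (returning the frame, as in the first snippet, is usually the cleaner choice):

```python
df = None  # module-level name shared by both functions

def loader(data):
    global df
    df = data  # in the real code this would be the DataFrame read from Excel

def analyze():
    return len(df)  # reads the module-level name set by loader()

loader([1, 2, 3])
```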
|
Python Pandas Pivot Table Sort by Date
Question: I have the following code:
data_df = pandas.read_csv(filename, parse_dates = True)
groupings = np.unique(data_df[['Ind']])
for group in groupings:
data_df2 = data_df[data_df['Ind'] == group]
table = pandas.pivot_table(data_df2, values='Rev', index=['Ind', 'Month'], columns=['Type'], aggfunc=sum)
table = table.sort_index(ascending=[0, 0])
print(table)
How can I sort the pivot 'table' by month and year (e.g. when I print 'table'
I want Dec-14 to be the first row of output for each group)?
Below is a sample of the data in 'data_df':
Ind Type Month Rev
0 A Voice Dec-14 10.00
1 A Voice Jan-15 8.00
2 A Voice Feb-15 13.00
3 A Voice Mar-15 9.00
4 A Voice Apr-15 11.00
5 A Voice May-15 14.00
6 A Voice Jun-15 6.00
7 A Voice Jul-15 4.00
8 A Voice Aug-15 12.00
9 A Voice Sep-15 7.00
10 A Voice Oct-15 5.00
11 A Elec Dec-14 8.04
12 A Elec Jan-15 6.95
13 A Elec Feb-15 7.58
14 A Elec Mar-15 8.81
15 A Elec Apr-15 8.33
16 A Elec May-15 9.96
17 A Elec Jun-15 7.24
18 A Elec Jul-15 4.26
19 A Elec Aug-15 10.84
20 A Elec Sep-15 4.82
21 A Elec Oct-15 5.68
22 B Voice Dec-14 10.00
23 B Voice Jan-15 8.00
24 B Voice Feb-15 13.00
25 B Voice Mar-15 9.00
26 B Voice Apr-15 11.00
27 B Voice May-15 14.00
28 B Voice Jun-15 6.00
29 B Voice Jul-15 4.00
.. .. ... ... ...
The output is (I was playing with ascending but it only wants to sort alpha):
Type Elec Voice
Ind Month
A Sep-15 4.82 7
Oct-15 5.68 5
May-15 9.96 14
Mar-15 8.81 9
Jun-15 7.24 6
Jul-15 4.26 4
Jan-15 6.95 8
Feb-15 7.58 13
Dec-14 8.04 10
Aug-15 10.84 12
Apr-15 8.33 11
I want the output to be sorted by date:
Type Elec Voice
Ind Month
A Dec-14 8.04 10
Jan-15 6.95 8
Feb-15 7.58 13
...
Answer: You need to convert your 'Month' column to datetime after creating the
DataFrame from CSV file:
df['Month'] = pd.to_datetime(df['Month'], format="%b-%y")
Because currently it's a string...
Or you can use the following trick (`date_parser`) in order to parse dates,
during "read_csv":
from __future__ import print_function
import pandas as pd
dateparser = lambda x: pd.datetime.strptime(x, '%b-%y')
df = pd.read_csv('data.csv', delimiter=r'\s+', parse_dates=['Month'], date_parser=dateparser)
print(df.sort_values(['Month']))
PS: I don't know what your preferred output date format is...
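The underlying issue is that string month labels sort lexicographically, while parsed datetimes sort chronologically; a small stdlib-only illustration:

```python
from datetime import datetime

months = ["Sep-15", "Oct-15", "Dec-14", "Jan-15"]
alpha = sorted(months)  # lexicographic order: wrong for dates
chrono = sorted(months, key=lambda m: datetime.strptime(m, "%b-%y"))
```

Once the 'Month' column holds real datetimes, the pivot table's index sort produces the chronological order you want.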
|
Python if else statement
Question: My if statement works but my else doesn't; can anyone help me? This
is my code. Btw, if anyone knows how to ask for a retry after one round, that
would be awesome!
import random
print('choose a number between 1 and 10,if you guess right you get 10 points if you guess wrong you lose 15points')
answer = input()
randint = random.randint(0,2)
print('the answer is ',randint)
if [answer == randint]:
print('gratz! you win 10points!')
else:
print('you lose 15points!')
Answer: Don't put brackets around your `if` statement. When you do that, you are
creating a new list. Change it to this:
if answer == randint:
You could put parentheses around it if you wanted to, but not `[]`. Your
second problem is that `random.randint()` returns an integer, but `input()`
returns a string (in Python3). You could say `if int(answer) == randint:`
instead, or you could say `if answer == str(randint):`. Your third problem, as
@cricket_007 pointed out is `randint(0, 2)` will return an integer between `0`
and `2`, not `1` and `10`. Just change that line to `randint =
random.randint(1, 10)`.
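For the retry part of the question, one common pattern is to put the comparison in a small function and wrap the game in a loop; a sketch (the point values follow the question's prompt):

```python
def score_guess(answer, secret):
    """Return the point change for one guess; input() gives a string, so convert it."""
    return 10 if int(answer) == secret else -15

# A retry loop around it could look like:
#   points = 0
#   while True:
#       secret = random.randint(1, 10)
#       points += score_guess(input('guess a number 1-10: '), secret)
#       if input('play again? (y/n): ') != 'y':
#           break
```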
|
Implementing a basic graph database engine
Question: I need to implement a simple graph database engine; what should I
consider? First, I am confused about which data structure to use, I mean
graph representation (like adjacency matrix or adjacency list) or the actual
graph itself? I need this to be scalable. Later how do I store the graph in
the hard disk as files? After I store the graph data in the form of files, I
would also need a way to selectively load only certain files into the graph,
since I can not load everything at once into the RAM. Sorry for being vague,
but I need someone to point me in the right direction. Also please suggest the
language I can use, can I use python for this project? Thank you.
Answer: Depending on your needs you will implement a different interface to the
database, i.e. an adjacency matrix or the graph itself.
Instead of using a file-based database, the important step forward you can
take is to use a key/value store like
[bsddb](http://stackoverflow.com/questions/tagged/bsddb),
[leveldb](http://stackoverflow.com/questions/tagged/leveldb) or
[wiredtiger](http://stackoverflow.com/questions/tagged/wiredtiger) (preferred).
This will deal with caching often-accessed files, provide ACID semantics, and
give you indices if you use wiredtiger.
The storage layer made upon the key/value store, can have several layout. It
depends on the final interface you need.
To get started with developing custom databases using key/value stores I
recommend you read questions answered about mostly leveldb and bsddb on SO.
Like the following:
* [store list in key value database](http://stackoverflow.com/questions/18513419/store-list-in-key-value-database)
* [How to give multiple values to a single key using a dictionary?](http://stackoverflow.com/questions/19873620/how-to-give-multiple-values-to-a-single-key-using-a-dictionary/32147466#32147466)
* [Use integer keys in Berkeley DB with python (using bsddb3)](http://stackoverflow.com/questions/18664940/use-integer-keys-in-berkeley-db-with-python-using-bsddb3/32147596#32147596)
* [Expressing multiple columns in berkeley db in python?](http://stackoverflow.com/questions/2399643/expressing-multiple-columns-in-berkeley-db-in-python)
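To make the storage layout concrete, one common scheme keeps each edge of the adjacency list as its own key/value pair, so a prefix scan enumerates a node's neighbours without loading the whole graph; a plain dict stands in here for the key/value store (bsddb/leveldb/wiredtiger expose essentially the same get/put interface):

```python
store = {}  # stand-in for an ordered key/value store

def add_edge(src, dst):
    # One key per edge lets a scan over the 'edge:<src>:' prefix
    # enumerate the neighbours of a node.
    store['edge:{}:{}'.format(src, dst)] = b'1'

def neighbours(src):
    prefix = 'edge:{}:'.format(src)
    return sorted(k[len(prefix):] for k in store if k.startswith(prefix))

add_edge('a', 'b')
add_edge('a', 'c')
add_edge('b', 'c')
```

In a real store the keys are sorted on disk, so the prefix scan is a cheap range query rather than a full iteration as in this dict sketch.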
|
How to exclude multiple columns in Spark dataframe in Python
Question: I found pyspark has a method called `drop` but it seems it can only drop one
column at a time. Any ideas about how to drop multiple columns at the same
time?
df.drop(['col1','col2'])
TypeError Traceback (most recent call last)
<ipython-input-96-653b0465e457> in <module>()
----> 1 selectedMachineView = machineView.drop([['GpuName','GPU1_TwoPartHwID']])
/usr/hdp/current/spark-client/python/pyspark/sql/dataframe.pyc in drop(self, col)
1257 jdf = self._jdf.drop(col._jc)
1258 else:
-> 1259 raise TypeError("col should be a string or a Column")
1260 return DataFrame(jdf, self.sql_ctx)
1261
TypeError: col should be a string or a Column
Answer: Simply with `select`:
df.select([c for c in df.columns if c not in {'GpuName','GPU1_TwoPartHwID'}])
or if you really want to use `drop` then `reduce` should do the trick:
from functools import reduce
from pyspark.sql import DataFrame
reduce(DataFrame.drop, ['GpuName','GPU1_TwoPartHwID'], df)
**Note** :
(_difference in execution time_):
There should be no difference when it comes to data processing time. While
these methods generate different logical plans physical plans are exactly the
same.
There is a difference however when we analyze driver-side code:
* the first method makes only a single JVM call while the second one has to call JVM for each column that has to be excluded
* the first method generates logical plan which is equivalent to physical plan. In the second case it is rewritten.
* finally comprehensions are significantly faster in Python than methods like `map` or `reduce`
|
How to add legend/label in python animation
Question: I want to add a legend in a python animation, like the `line.set_label()`
below. It is similar to `plt.plot(x,y,label='%d' %*variable*)`.
However, I find that this code does not work here. The animation only shows the
lines changing, with no label or legend shown. How can I fix this problem?
from matplotlib import pyplot as plt
from matplotlib import animation
fig = plt.figure()
ax = plt.axes(xlim=(0, 2), ylim=(0, 100))
N = 3
lines = [plt.plot([], [])[0] for _ in range(N)]
def init():
for line in lines:
line.set_data([], [])
return lines
def animate(i):
for j,line in enumerate(lines):
line.set_data([0, 2], [10*j,i])
line.set_label('line %d, stage %d'%(j,i))
return lines
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=100, interval=20, blit=True)
plt.show()
Answer: I'm no expert on matplotlib at all, but the [Double Pendulum
animation](https://jakevdp.github.io/blog/2012/08/18/matplotlib-animation-
tutorial/) displays text which changes, and this leads to some variations
which can help you.
To get legends with the actual color of the lines, you can either change the
initial setting `lines` to:
lines = [plt.plot([], [], label = 'line {}'.format(i))[0] for i in range(N)]
or add a `line.set_label()` to the `for` loop in the `init()` function. Both
these seem to work as expected. At least if you add `plt.legend(loc="upper
left")` right before `plt.show()`.
However the `set_label` doesn't work within the `animate()` function, but
according to the linked animation you can use specific text fields added to
the animation, and that seems to work nicely. Add the following code after
initialisation of `lines`:
texts = [ax.text(0.80, 0.95-i*0.05, '', transform=ax.transAxes) for i in range(N)]
And change `animate()` to be:
def animate(i):
for j in range(N):
lines[j].set_data([0, 2], [10*j,i]) #, label="hei {}".format(i))
texts[j].set_text('line %d, stage %d'%(j,i))
return lines
This places the text close to the upper right corner, and is updated for each
animation step. Since the lines still have their legend displayed, you
possibly simplify into one text only displaying the stage. But I leave the
fine tuning of messages to your discretion.
## Addendum: Extend Line2D
Another alternative could _possibly_ be to extend `lines.Line2D` and use these
lines in your animation, something similar to [this
article](http://matplotlib.org/examples/api/line_with_text.html). Not sure if
this would work with animation, but if you can't get the above to work, this
might be worth a try.
|
How can I use the value from a spin box (or other Tkinter widget) properly in a calculation?
Question: I am writing a program in Python/Tkinter where I need to get the user's
inputted value from a spin box and use it in a mathematical calculation (to
calculate the cost of an item, more specifically). This is triggered by
pressing a button.
from tkinter import *
root = Tk()
root.wm_title("Kiosk")
root.geometry("300x75")
root.resizable(0, 0)
popcorn = Spinbox(root, from_=0, to=10, state="readonly")
popcorn.pack()
def getvalue():
print(popcorn.get()*9)
button = Button(root, text="Get value", command=getvalue)
button.pack()
root.mainloop()
However, the problem I end up running into is the program not multiplying the
numbers together, but printing the number nine times. The output when I click
the button ends up something like "777777777". I set the spinbox to "readonly"
so the user can't input text, only the values I assigned.
Obviously this isn't my entire project, just an example of what I'm trying to
achieve.
Total newbie question, I know, but I can't seem to find the answer anywhere...
Any help is appreciated.
Answer: `popcorn.get()` returns a string; you need to convert it to an integer
using `int` or to a floating point number using `float`.
def getvalue():
print(int(popcorn.get()) * 9)
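The '777777777' output is ordinary string repetition at work, which makes the fix easy to see:

```python
value = "7"                # what Spinbox.get() returns: a string
repeated = value * 9       # string * int repeats the string
correct = int(value) * 9   # convert first to get real multiplication
```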
|
Python Slicing text file into arrays based on field value
Question: I am new to Python and I want to read a text file that has three fields; `X`,
`Y` and `Time`. I want to form arrays from the x and y fields as long as the
time field stays constant. For example:
X Y Time
1 2 100
3 4 100
5 6 100
7 8 200
9 10 200
therefore, for `time = 100`, I build `x_array = [1,3,5]` and `y_array =
[2,4,6]` as input to other processes and then go back to update them when time
change to `200`. Thank you for your help.
Answer: ## Solution
You can use [pandas](http://pandas.pydata.org/):
>>> import pandas as pd
>>> df = pd.read_csv('my_data.csv', delim_whitespace=True)
>>> df.groupby('Time')['X'].apply(list).to_dict()
{100: [1, 3, 5], 200: [7, 9]}
>>> df.groupby('Time')['Y'].apply(list).to_dict()
{100: [2, 4, 6], 200: [8, 10]}
## Explanation
This reads you file:
df = pd.read_csv('my_data.csv', delim_whitespace=True)
into such a dataframe:
[](http://i.stack.imgur.com/f0uVD.png)
Now, you group by `Time` and convert the entries in `X` into lists:
df.groupby('Time')['X'].apply(list)
This gives you this pandas series:
Time
100 [1, 3, 5]
200 [7, 9]
Name: X, dtype: object
Finally, use `to_dict()` to convert it to a dictionary:
>>> df.groupby('Time')['X'].apply(list).to_dict()
{100: [1, 3, 5], 200: [7, 9]}
## Alternative Solution:
This gives you a different arrangement of the result:
>>> df.groupby('Time').apply(lambda x: {'X': list(x['X']), 'Y': list(x['Y'])}).to_dict()
{100: {'X': [1, 3, 5], 'Y': [2, 4, 6]}, 200: {'X': [7, 9], 'Y': [8, 10]}}
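If pandas is not available, the standard library's `itertools.groupby` gives the same grouping, assuming the rows are already ordered by time as in the example:

```python
from itertools import groupby

rows = [(1, 2, 100), (3, 4, 100), (5, 6, 100), (7, 8, 200), (9, 10, 200)]

grouped = {}
for time, group in groupby(rows, key=lambda r: r[2]):
    # zip(*group) transposes the (x, y, time) tuples into three columns
    xs, ys, _ = zip(*group)
    grouped[time] = {'X': list(xs), 'Y': list(ys)}
```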
|
Why copied objects have the same id as previously copied ones in Python?
Question: I am trying to understand one observation. I have an application that loads
various `Canvas` classes which a user can later work with. These classes are
located in several files. For example.
canvas/
bw.py
colored.py
oil.py
I import, instantiate and copy these objects like this:
canvas_files = os.listdir('images')
imported_canvs = []
for canv in canvas_files:
canv = __import__(canv.split('.')[0], fromlist=['Canvas'])
try:
new_canv = canv.Canvas()
new_canv_copy = copy.copy(new_canv)
imported_canvs.append(new_canv_copy)
except AttributeError as ex:
pass
Afterwards, a user works with each `Canvas` object from `imported_canvs` list.
However, when I import and instantiate these objects twice (run the `for` loop
again) I can see `id(new_canv_copy)` is the same as previously imported and
instantiated ones. This would not be a problem unless that each `Canvas` has
settings which should be unique for each instance and this is not currently
happening. Whenever a user changes the settings in one `Canvas` they are
automatically changed in the copied one.
Why is this happening and what am I doing wrong?
Answer: Using just `copy.copy()` creates a shallow copy. You probably want to use deep
copy when copying objects using
[`copy.deepcopy()`](https://docs.python.org/2/library/copy.html#copy.deepcopy).
You can read in detail what's the difference here:
<https://docs.python.org/2/library/copy.html>
I don't know what `canv.Canvas()` does inside, so it's hard to tell what's
going on when you run the same code twice, since I can't try it myself.
|
Python image recognition with pyautogui
Question: When I try to recognize an image with `pyautogui` it just says: `None`
import pyautogui
s = pyautogui.locateOnScreen('Dark.png')
print s
When I ran this code the picture was on my screen but it still failed.
Answer: On my system, I get this if the picture is on a second monitor. If I move it
to the main screen, the image is located successfully.
It looks like multiple-monitor functionality is not yet implemented: From
<http://pyautogui.readthedocs.org/en/latest/roadmap.html>
> Future features planned (specific versions not planned yet):
>
> * Find a list of all windows and their captions.
> * Click coordinates relative to a window, instead of the entire screen.
> * Make it easier to work on systems with multiple monitors.
> * ...
>
|
Using Python to use a website's search function
Question: I am trying to use a search function of a website with this code structure:
<div class='search'>
<div class='inner'>
<form accept-charset="UTF-8" action="/gr/el/products" method="get"><div style="margin:0;padding:0;display:inline"><input name="utf8" type="hidden" value="✓" /></div>
<label for='query'>Ενδιαφέρομαι για...</label>
<fieldset>
<input class="search-input" data-search-url="/gr/el/products/autocomplete.json" id="text_search" name="query" placeholder="Αναζητήστε προϊόν" type="text" />
<button type='submit'>Αναζήτηση</button>
</fieldset>
</form>
</div>
</div>
with this python script:
import requests
from bs4 import BeautifulSoup
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.1'}
payload = {
'query':'test'
}
r = requests.get('http://www.pharmacy295.gr',data = payload ,headers = headers)
soup = BeautifulSoup(r.text,'lxml')
products = soup.findAll('span', {'class':'name'})
print(products)
This code came as a result of extensive searches on this website on how to do
this task, however I never seem to manage to get any search results - just the
main page of the website.
Answer: Add `products` to your URL and it will work fine; the form's method is
GET and its action attribute also shows the URL. If you are unsure, open the
developer console in Firefox or Chrome and you can see exactly how the request
is made.
payload = {
'query':'neutrogena',
}
r = requests.get('http://www.pharmacy295.gr/products',data = payload ,headers = headers)
soup = BeautifulSoup(r.text,'lxml')
products = soup.findAll('span', {'class':'name'})
print(products)
Output:
[<span class="name">NEUTROGENA - Hand & Nail Cream - 75ml</span>, <span class="name">NEUTROGENA - Hand Cream (Unscented) - 75ml</span>, <span class="name">NEUTROGENA - PROMO PACK 1+1 \u0394\u03a9\u03a1\u039f Lip Moisturizer - 4,8gr</span>, <span class="name">NEUTROGENA - Lip Moisturizer with Nordic Berry - 4.9gr</span>]
Also if you prefer you can get the data as json:
In [13]: r = requests.get('http://www.pharmacy295.gr/el/products/autocomplete.json',data = payload ,headers = headers)
In [14]: print(r.json())
[{u'title': u'NEUTROGENA - Hand & Nail Cream - 75ml', u'discounted_price': u'5,31 \u20ac', u'photo': u'/system/uploads/asset/data/12584/tiny_108511.jpg', u'brand': u'NEUTROGENA ', u'path': u'/products/7547', u'price': u'8,17 \u20ac'}, {u'title': u'NEUTROGENA - Hand Cream (Unscented) - 75ml', u'discounted_price': u'4,03 \u20ac', u'photo': u'/system/uploads/asset/data/4689/tiny_102953.jpg', u'brand': u'NEUTROGENA ', u'path': u'/products/3958', u'price': u'6,20 \u20ac'}, {u'title': u'NEUTROGENA - PROMO PACK 1+1 \u0394\u03a9\u03a1\u039f Lip Moisturizer - 4,8gr', u'discounted_price': u'3,91 \u20ac', u'photo': u'/system/uploads/asset/data/5510/tiny_118843.jpg', u'brand': u'NEUTROGENA ', u'path': u'/products/4644', u'price': u'4,60 \u20ac'}, {u'title': u'NEUTROGENA - Lip Moisturizer with Nordic Berry - 4.9gr', u'discounted_price': u'2,91 \u20ac', u'photo': u'/system/uploads/asset/data/12761/tiny_126088.jpg', u'brand': u'NEUTROGENA ', u'path': u'/products/7548', u'price': u'4,48 \u20ac'}]
|
Get absolute path of shared library in Python
Question: Let's say I wanted to use libc in Python. This can be easily done by
from ctypes import CDLL
from ctypes.util import find_library
libc_path = find_library('c')
libc = CDLL(libc_path)
Now, I know I could use ldconfig to get libc's abspath, but is there a way to
acquire it from the CDLL object? Is there something that can be done with its
`_handle`?
_Update_ : Ok.
libdl = find_library('dl')
RTLD_DI_LINKMAP = 2
# libdl.dlinfo(libc._handle, RTLD_DI_LINKMAP, ???)
I need to redefine the `link_map` struct then?!
Answer: A handle in this context is basically a reference to the memory mapped library
file.
However, there are existing ways to achieve what you want with the help of OS
functions.
**windows:** Windows provides an API for this purpose called
`GetModuleFileName`. Some example of usage is already
[here](http://stackoverflow.com/questions/11007896/how-can-i-search-and-get-
the-directory-of-a-dll-file-in-python).
**linux:** There is a `dlinfo` function for this purpose, see
[here](http://man7.org/linux/man-pages/man3/dlinfo.3.html).
* * *
I played around with ctypes and here is my solution for Linux-based systems. I
have zero knowledge of ctypes so far, so if there are any suggestions for
improvement, I appreciate them.
from ctypes import *
from ctypes.util import find_library
#linkmap structure, we only need the second entry
class LINKMAP(Structure):
_fields_ = [
("l_addr", c_void_p),
("l_name", c_char_p)
]
libc = CDLL(find_library('c'))
libdl = CDLL(find_library('dl'))
dlinfo = libdl.dlinfo
dlinfo.argtypes = c_void_p, c_int, c_void_p
dlinfo.restype = c_int
    #gets typecast later; I don't know how to create a ctypes struct pointer instance
lmptr = c_void_p()
#2 equals RTLD_DI_LINKMAP, pass pointer by reference
dlinfo(libc._handle, 2, byref(lmptr))
#typecast to a linkmap pointer and retrieve the name.
abspath = cast(lmptr, POINTER(LINKMAP)).contents.l_name
print(abspath)
|
Getting an empty list as attribute when parsing XML with xml.etree.ElementTree
Question: So I use python 3 to parse an XML.
text = '''
<body>
<list>
<item>
<cmid>16934673</cmid>
<day>29.02.2016</day>
<relay>1</relay>
<num>1</num>
<starttime>08:15</starttime>
<endtime>08:55</endtime>
<subjid>81327</subjid>
<subjname>Литературное чтение</subjname>
<subjabbr>Лит.чт.</subjabbr>
<sgid>447683</sgid>
<sgname>Литературное чтение</sgname>
<tid>551817</tid>
<tlastname>Фамилия</tlastname>
<tfirstname>Имя</tfirstname>
<tmidname>Отчество</tmidname>
<roomid>68672</roomid>
<roomname>Филиал 1 кабинет</roomname>
</item>
</list>
</body>'''
I try to get `subjname`, using `xml.etree.ElementTree` this way.
>>> import xml.etree.ElementTree as ET
>>> doc = ET.fromstring(text)
>>> print(doc[0][0][7].tag)
subjname
>>> print(doc[0][0][7].attrib)
{}
So I always get an empty dict, but I can't find the problem. I thought the
problem was that the attributes are Cyrillic, but the same problem occurs when
I try to get the `cmid` attribute
>>> doc = ET.fromstring(r.text.encode('utf-8'))
>>> print(doc[0][0][0].attrib)
{}
Answer: `.attrib` is an empty dictionary in your case since _the tags you show don't
have any attributes at all_. You probably meant to get the
[`.text`](https://docs.python.org/2/library/xml.etree.elementtree.html#xml.etree.ElementTree.Element.text)
instead:
    doc.find(".//subjname").text  # ".//" searches all descendants; a bare "subjname" only matches direct children of <body>
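For contrast, here is a minimal sketch (made-up XML) showing when `.attrib` is actually populated: only when the tag itself carries XML attributes, which is separate from its text content:

```python
import xml.etree.ElementTree as ET

# .attrib holds XML attributes of the tag; .text holds its text content
doc = ET.fromstring('<item id="42"><subjname>Math</subjname></item>')
print(doc.attrib)                 # {'id': '42'}
print(doc.find("subjname").text)  # Math
```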
|
How to use Python 3.5.1 with a MySQL database
Question: I have been trying to use MySQL in a python project I've been working on. I
downloaded the connector: mysql-connector-python-2.1.3-py3.4-winx64
[here.](https://dev.mysql.com/downloads/connector/python/)
I already had Python 3.5.1 installed. When I tried to install the connector,
it didn't work because it required Python 2.7 instead. I have searched many
sites, even Stack Overflow, but couldn't find a solution. Thanks for any help.
Answer: I did the steps below with Python 3.5.1 and it works:
* Download driver from [here](https://pypi.python.org/pypi/PyMySQL)
* Driver installation in cmd, in this folder Python\Python35\PyMySQL-0.7.4\pymysql
python setup.py build
python setup.py install
* Copy folder Python\Python35\PyMySQL-0.7.4\pymysql to Python\Python35\pymysql
* Sample code in python IDE
import pymysql
    import pymysql.cursors
conn= pymysql.connect(host='localhost',user='user',password='user',db='testdb',charset='utf8mb4',cursorclass=pymysql.cursors.DictCursor)
a=conn.cursor()
sql='CREATE TABLE `users` (`id` int(11) NOT NULL AUTO_INCREMENT,`email` varchar(255) NOT NULL,`password` varchar(255) NOT NULL,PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;'
a.execute(sql)
* Enjoy It!
|
Python iteration: sorting through a .txt file extract wanted data
Question: I have a sample inputfile.txt:
chr1 34870071 34899867 pi-Fam168b.1 -
chr11 98724946 98764609 pi-Wipf2.1 +
chr11 105898192 105920636 pi-Dcaf7.1 +
chr11 120486441 120495268 pi-Mafg.1 -
chr12 3891106 3914443 pi-Dnmt3a.1 +
chr12 82815946 82882157 pi-Map3k9.1 -
chr13 23855536 23856215 pi-Hist1h1a.1 +
chr13 55206682 55236190 pi-Zfp346.1 +
chr1 95700553 95718679 pi-Ing5.1 +
chr13 55313417 55419685 pi-Nsd1.1 +
chr14 27852218 27920472 pi-Il17rd.1 +
chr14 65430438 65568699 pi-Hmbox1.1 -
chr1 120524521 120581739 pi-Tfcp2l1.1 +
chr15 81633147 81657289 pi-Tef.1 +
chr15 89331804 89390691 pi-Shank3.1 +
chr15 103021983 103070259 pi-Cbx5.1 -
chr16 16896549 16927451 pi-Ppm1f.1 +
chr16 17233679 17263523 pi-Hic2.1 +
chr16 17452059 17486929 pi-Crkl.1 +
chr16 24393531 24992661 pi-Lpp.1 +
chr16 43964878 43979143 pi-Zdhhc23.1 -
chr17 25098236 25152532 pi-Cramp1l.1 -
chr17 27993451 28036985 pi-Uhrf1bp1.1 +
chr17 83973363 84031786 pi-Kcng3.1 -
chr1 133904194 133928161 pi-Elk4.1 +
chr18 60844148 60908308 pi-Ndst1.1 -
chr19 10057193 10059582 pi-Fth1.1 +
chr19 44637337 44650762 pi-Hif1an.1 +
chr1 135027714 135036359 pi-Ppp1r15b.1 +
chr2 28677821 28695861 pi-Gtf3c4.1 -
chr1 136651241 136852527 pi-Ppp1r12b.1 -
chr2 154262219 154365092 pi-Cbfa2t2.1 +
chr2 156022393 156135687 pi-Phf20.1 +
chr3 51028854 51055547 pi-Ccrn4l.1 +
chr3 94985683 95021902 pi-Gabpb2.1 -
chr1 158488203 158579750 pi-Abl2.1 +
chr4 45411294 45421633 pi-Mcart1.1 -
chr4 56879897 56960355 pi-D730040F13Rik.1 -
chr4 59818521 59917612 pi-Snx30.1 +
chr4 107847846 107890527 pi-Zyg11a.1 -
chr4 107900359 107973695 pi-Zyg11b.1 -
chr4 132195002 132280676 pi-Eya3.1 +
chr4 134968222 134989706 pi-Rcan3.1 -
chr4 136025678 136110697 pi-Luzp1.1 +
chr1 162933052 162964958 pi-Zbtb37.1 -
chr5 38591490 38611628 pi-Zbtb49.1 -
chr5 67783388 67819359 pi-Bend4.1 -
chr5 114387108 114443767 pi-Ssh1.1 -
chr5 115592990 115608225 pi-Mlec.1 -
chr5 143628624 143656891 pi-Fbxl18.1 -
chr1 172123561 172145541 pi-Uhmk1.1 -
chr6 83312367 83391602 pi-Tet3.1 -
chr6 85419571 85434653 pi-Fbxo41.1 -
chr6 116288039 116359551 pi-March08.1 +
chr6 120786229 120842859 pi-Bcl2l13.1 +
chr7 71031236 71083761 pi-Klf13.1 -
chr7 107068766 107128968 pi-Rnf169.1 -
chr7 139903770 140044311 pi-Fam53b.1 -
chr8 72285224 72298794 pi-Zfp866.1 -
chr8 106872110 106919708 pi-Cmtm4.1 -
chr8 112250549 112261649 pi-Atxn1l.1 -
chr10 41901651 41911816 pi-Foxo3.1 -
chr8 119682164 119739895 pi-Gan.1 +
chr8 125406988 125566154 pi-Ankrd11.1 -
chr9 27148219 27165314 pi-Igsf9b.1 +
chr9 44100521 44113717 pi-Hinfp.1 -
chr9 61761092 61762348 pi-Rplp1.1 -
chr9 106590412 106691503 pi-Rad54l2.1 -
chr9 114416339 114473487 pi-Trim71.1 -
chr9 119311403 119351032 pi-Acvr2b.1 +
chr9 119354082 119373348 pi-Exog.1 +
chr10 82822985 82831579 pi-D10Wsu102e.1 +
chr10 126415753 126437016 pi-Ctdsp2.1 +
chr1 90159688 90174093 pi-Hjurp.1 -
chr11 60591039 60597792 pi-Smcr8.1 +
chr11 69209318 69210176 pi-Lsmd1.1 +
chr11 75345218 75391069 pi-Slc43a2.1 +
chr11 79474214 79511524 pi-Rab11fip4.1 +
chr11 95818479 95868022 pi-Igf2bp1.1 -
chr11 97223641 97259855 pi-Socs7.1 +
chr11 97524530 97546757 pi-Mllt6.1 +
chr1 120355721 120355843 1-qE2.3-2.1 -
chr2 120518324 120540873 2-qE5-4.1 +
chr7 82913927 82926993 7-qD2-40.1 -
Column1=chromosome_number
Column2=start
Column3=end
Column4=gene_name
Column5=Orientation (either + or -)
1.) I need to extract lines that have the **same chromosome number**
(column1), whose **start sites differ by at most 200** (column2), and that are
in **opposite** orientations (one plus, one minus).
This is what I have so far, and I'm not sure where my mistake is:
import csv
import itertools as it
f=open('inputfile.txt', 'r')
def getrecords(f):
for line in open(f):
yield line.strip().split()
key=lambda x: x[0]
for i, rec in it.groupby(sorted(getrecords('inputfile.txt'), key=key), key=key):
for c0, c1 in it.combinations(rec, 2):
if (c0[4]!= c1[4] and (abs(int(c0[1])-int(c1[1]))) < 200):
print ("%s\t%s\t%s" % (c0[0], c0[1], c0[3]))
print("%s\t%s\t%s" % (c1[0], c1[1], c1[3]))
_Please note: this code runs, but does not give any output, when I am certain
there should be some._ I am expecting there to be around 15 unique sequence
lines. Expected output:
ChrX start_number1 gene_name1
ChrX start_number1+/-200 gene_name2
ChrY start_number2 gene_name3
ChrY start_number2+/-200 gene_name4
Then I'd sort through these lines to get rid of duplicates.
Answer: There are no values in your example that meet your specified criteria, so I
added a single line to your `inputfile.txt`:
chr1 34870091 34899887 pi-Fam168b.1 +
I copied the first line of your `inputfile.txt` and added `20` to the integers
in the second and third columns.
To begin, you don't need to import `csv`, you won't use it. You should import
[`groupby`](https://docs.python.org/2/library/itertools.html#itertools.groupby)
and
[`product`](https://docs.python.org/2/library/itertools.html#itertools.product)
and [`itemgetter`](https://docs.python.org/2/library/operator.html), I'll
explain below.
from itertools import groupby,product
from operator import itemgetter
This block is just parsing your `inputfile.txt` into a usable data structure
(list of dictionaries) where each record in the file will be a `dictionary`
element in the `sites` list.
with open('/home/kevin/inputfile.txt', 'rb') as f: # should use with open()
sites = [] #list to hold each record as a dictionary
for row in f:
row = tuple(row.strip().split())
d = {'chr': row[0], 'start': row[1], 'stop':row[2], 'gene_name':row[3], 'strand':row[4]}
sites.append(d)
I chose to first sort by _strand_ using `itemgetter`; then, when you `groupby`
strand, we can separate the dictionaries into a list of all the `plus` strands
and a list of all the `minus` strands. Note the explicit sort: `groupby` only
groups _consecutive_ items, so its input must be sorted by the same key:
    sites.sort(key=itemgetter('strand')) # groupby needs its input sorted by the key
    plus = []
    minus = []
    for elmt,grp in groupby(sites, itemgetter('strand')): # sites is now sorted by strand
        for item in grp:
            if elmt == '+':
                plus.append(item)
            else:
                minus.append(item)
Now you can iterate through `plus` and `minus` using `product`, which acts
like a nested for loop and compare `start` positions:
    for p,m in product(plus,minus):
        if p['chr'] == m['chr'] and abs(int(p['start']) - int(m['start'])) < 200:
            print("%s\t%s\t%s" % (p['chr'], p['start'], p['gene_name']))
            print("%s\t%s\t%s" % (m['chr'], m['start'], m['gene_name']))
This returned:
chr1 34870091 pi-Fam168b.1 #remember I artificially added this one
chr1 34870071 pi-Fam168b.1
As a reference, this type of task may be more elegantly achieved with the
Python library [pandas](http://pandas.pydata.org/).
[Bedtools](http://bedtools.readthedocs.org/en/latest/) (C++, I think) is
specifically designed to work with `.bed` files, which is the format you're
working with. HTH!
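For larger files, the all-pairs `product` scan can also be cut down by bucketing records per chromosome first, so only records on the same chromosome are ever compared. A standard-library sketch with a tiny made-up record list (the field layout mirrors the input file):

```python
from collections import defaultdict

# minimal stand-in for the parsed inputfile.txt records
records = [("chr1", 34870071, "pi-Fam168b.1", "-"),
           ("chr1", 34870091, "pi-Fam168b.2", "+"),
           ("chr2", 120518324, "2-qE5-4.1", "+")]

# bucket by (chromosome, strand) so comparisons stay within one chromosome
buckets = defaultdict(list)
for chrom, start, gene, strand in records:
    buckets[(chrom, strand)].append((start, gene))

hits = []
for (chrom, strand), plus_recs in buckets.items():
    if strand != "+":
        continue
    for p_start, p_gene in plus_recs:
        for m_start, m_gene in buckets.get((chrom, "-"), []):
            if abs(p_start - m_start) < 200:
                hits.append((chrom, p_gene, m_gene))

print(hits)  # [('chr1', 'pi-Fam168b.2', 'pi-Fam168b.1')]
```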
|
python left and right arrow key event not working
Question: I am new to Python and am trying to create a turtle shape; once the user
presses the left or right arrow key on the keyboard, the shape should move in
that direction. However, nothing is happening.
I am trying to move the player using the left and right arrow keys, but it's
not working. Please help and advise.
#Create the player turtle
player = turtle.Turtle()
player.color("blue")
player.shape("triangle")
player.penup()
player.speed(0)
player.setposition(0, -235)
player.setheading(90)
playerspeed = 15
#Move the player Left and Right
def move_left():
x = player.xcor()
x -= playerspeed
if x < -200:
x = - 200
player.setx(x)
def move_right():
x = player.xcor()
x +- playerspeed
if x < -200:
x = - 280
player.setx(x)
#Create Keyboard Bindings
turtle.listen()
turtle.onkey(move_left,"Left")
turtle.onkey(move_right, "Right")
Answer: You need to call `turtle.mainloop()` at the end of the script, see
<https://docs.python.org/2/library/turtle.html#turtle.mainloop>
This works (it shows a blue triangle turtle that moves left or right depending
in the cursor keys pressed; it includes the fix suggested by
[@zondo](http://stackoverflow.com/users/5827958/zondo)):
import turtle
#Create the player turtle
player = turtle.Turtle()
player.color("blue")
player.shape("triangle")
player.penup()
player.speed(0)
player.setposition(0, -235)
player.setheading(90)
playerspeed = 15
#Move the player Left and Right
def move_left():
x = player.xcor()
x -= playerspeed
if x < -200:
x = - 200
player.setx(x)
def move_right():
x = player.xcor()
x += playerspeed
if x < -200:
x = - 280
player.setx(x)
#Create Keyboard Bindings
turtle.listen()
turtle.onkey(move_left, "Left")
turtle.onkey(move_right, "Right")
#Start the main loop
turtle.mainloop()
|
TclError: can't invoke "destroy" command: application has been destroyed
Question: I am a Python beginner. I am trying to make a new button to close the window. I got
the error message:
> Exception in Tkinter callback Traceback (most recent call last): File
> "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-
> tk/Tkinter.py", line 1536, in **call** return self.func(*args) File
> "tk_cp_successful.py", line 138, in buttonPushed self.root.destroy() File
> "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-
> tk/Tkinter.py", line 1859, in destroy self.tk.call('destroy', self._w)
> TclError: can't invoke "destroy" command: application has been destroyed
class LoginPage(tk.Frame):
def __init__(self, parent, controller):
self.controller = controller
self.root = tk.Tk()
global entry_1
global entry_2
tk.Frame.__init__(self, parent)
label = tk.Label(self, text="Welcome to VISA Login Page",fg="blue")
label.pack(pady=10,padx=10)
label_1 = Label(self, text="Username")
label_1.pack()
label_2 = Label(self, text="Password")
label_2.pack()
entry_1 = Entry(self)
entry_1.pack()
entry_2 = Entry(self, show="*")
entry_2.pack()
label_1.grid(row=0, sticky=E)
label_1.pack()
label_2.grid(row=1, sticky=E)
label_2.pack()
entry_1.grid(row=0, column=1)
entry_1.pack()
entry_2.grid(row=1, column=1)
entry_2.pack()
checkbox = Checkbutton(self, text="Keep me logged in")
checkbox.grid(columnspan=2)
checkbox.pack()
logbtn = Button(self, text="Login", command = self._login_btn_clickked)
logbtn.grid(columnspan=2)
logbtn.pack()
myButton = Button(self, text="Exit",command = self.buttonPushed)
myButton.pack()
def buttonPushed(self):
self.root.destroy()
def _login_btn_clickked(self):
#print("Clicked")
username = entry_1.get()
password = entry_2.get()
#print(username, password)
if username == "test" and password == "test":
#box.showinfo("Login info", "Welcome Tester")
button1 = ttk.Button(self, text="Please click, Welcome to login!!!",
command=lambda: self.controller.show_frame(StartPage))
button1.pack()
else:
box.showerror("Login failed", "Incorrect username")
Answer: There are many problems with your code:
1. Indentation errors
2. Mixing `grid()` and `pack()`
3. Do you `import tkinter as tk` or `from tkinter import *`, i.e.
`self.root = tk.Tk()` (`import as tk`) or
`label_1 = Label(self, text="Username")` (`from tkinter import *`)
4. No `mainloop` in program
5. Use of global in a class is not necessary and poor style
In any case, the following modified code runs so hopefully it will help.
import sys
if sys.version_info[0] < 3:
import Tkinter as tk ## Python 2.x
else:
import tkinter as tk ## Python 3.x
class LoginPage():
def __init__(self):
self.root=tk.Tk()
label = tk.Label(self.root, text="Welcome to VISA Login Page",fg="blue")
label.grid(row=0)
label_1 = tk.Label(self.root, text="Username")
label_2 = tk.Label(self.root, text="Password")
self.entry_1 = tk.Entry(self.root)
self.entry_2 = tk.Entry(self.root, show="*")
label_1.grid(row=1, sticky="e")
label_2.grid(row=2, sticky="e")
self.entry_1.grid(row=1, column=1)
self.entry_2.grid(row=2, column=1)
## doesn't do anything at this time
##checkbox = tk.Checkbutton(self.root, text="Keep me logged in")
##checkbox.grid(row=3, columnspan=2)
logbtn = tk.Button(self.root, text="Login", command = self._login_btn_clickked)
logbtn.grid(row=9, columnspan=2)
myButton = tk.Button(self.root, text="Exit",command = self.buttonPushed)
myButton.grid(row=10)
self.root.mainloop()
def buttonPushed(self):
self.root.destroy()
def _login_btn_clickked(self):
#print("Clicked")
username = self.entry_1.get()
password = self.entry_2.get()
#print(username, password)
if username == "test" and password == "test":
                print("OK login")
#box.showinfo("Login info", "Welcome Tester")
#button1 = ttk.Button(self.root, text="Please click, Welcome to login!!!",
# command=lambda: self.controller.show_frame(StartPage))
#button1.pack()
else:
#box.showerror("Login failed", "Incorrect username")
                print("Error")
LP=LoginPage()
|
Sorting a list alphabetically from a CSV in Python by column
Question: I've written a piece of code that sends data to a .csv file, sorting by name,
and then 3 scores from a quiz. I need to call that data from the .csv file
created and sort the data alphabetically by name, numerically, and by average.
However, when I try to sort the names alphabetically, nothing comes up. I'm
quite new to Python so I can't see where my error is.
This is the part of my code that saves and (tries) to print the data.
if class_number == 2:
f = open("Class 2" + ".csv", 'a')
writer = csv.writer(f, delimiter =',')
writer.writerow([name, count1, count2, count3])
print ("Your scores were", count1, ",", count2, ", and", count3)
print("Would you like to see previous results?")
print("Press 1 to see previous results for your class. Press 2 to close the program")
answer = int(input())
if answer == 1:
print("How would you like data to be sorted?")
print("Press 1 for alphabetically")
print("Press 2 for highest to lowest")
print("Press 3 for average")
score = input()
if score == 1:
sample = open("Class 2.csv", "r")
csv1 = csv.reader(sample, delimiter=",")
sort = sorted(csv1, key=operator.itemgetter(0))
for eachline in sort:
print("Class 2.csv", "r")
I'm really confused about what I'm doing wrong.
EDIT: The part of code I need help with is
if score == 1:
sample = open("Class 2.csv", "r")
csv1 = csv.reader(sample, delimiter=",")
sort = sorted(csv1, key=operator.itemgetter(0))
for eachline in sort:
print("Class 2.csv", "r")
This part will not display for some reason, but without the "If score" part it
will display.
Answer: I've tested this with a csv file with one column and it worked. I'm assuming
you only want to display your result. (Note, too, that `input()` returns a
string in Python 3, so your `if score == 1:` branch can never run; compare
against `"1"` or convert with `int(input())`.)
    import csv
    import operator
    sample = open("file.csv", "r")
    csv_file = csv.reader(sample, delimiter=",")
    sort = sorted(csv_file)
    for eachline in sort:
        print(eachline)
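Since the question also asks for numeric and average ordering, the same `sorted` call covers those with different `key` functions. A sketch with made-up rows shaped like the question's CSV (name, then three scores stored as strings):

```python
import operator

rows = [["Zoe", "3", "5", "4"],
        ["Amy", "9", "1", "3"],
        ["Ben", "6", "6", "6"]]

# alphabetical by the name in column 0
by_name = sorted(rows, key=operator.itemgetter(0))
# highest single score first - csv values are strings, so convert to int
by_high = sorted(rows, key=lambda r: max(int(x) for x in r[1:]), reverse=True)
# by average of the three scores
by_avg = sorted(rows, key=lambda r: sum(int(x) for x in r[1:]) / 3.0, reverse=True)

print([r[0] for r in by_name])  # ['Amy', 'Ben', 'Zoe']
print([r[0] for r in by_high])  # ['Amy', 'Ben', 'Zoe']
print([r[0] for r in by_avg])   # ['Ben', 'Amy', 'Zoe']
```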
|
Looping in Python
Question: I'm trying to figure out a way to loop this code so that it restarts once all
three of the calculations are complete. I have figured out a way to restart
the program itself; however, I can't manage to make it return to the first
calculation step. Can anyone help a brother out? Thanks in
advance.
The code that I have used to restart the program:
def restart_program():
python = sys.executable
os.execl(python, python, * sys.argv)
if __name__ == "__main__":
answer = input("Do you want to restart this program?")
if answer.lower().strip() in "y, yes".split():
restart_program()
My program without the restart code:
import math
import sys
import os
print ("This program will calculate the area, height and perimeter of the Triangles: Scalene, Isosceles, Equilateral and a Right Angled Triangle.")
# calculate the perimeter
print ("Please enter each side for the perimeter of the triangle")
a = float(input("Enter side a: "))
b = float(input("Enter side b: "))
c = float(input("Enter side c "))
perimeter = (a + b + c)
print ("The perimeter for this triangle is: " ,perimeter)
# calculate the area
print ("Please enter each side for the area of the triangle")
a = float(input("Enter side a: "))
b = float(input("Enter side b: "))
c = float(input("Enter side c "))
s = (a + b + c) / 2
sp = (a + b + c) / 2
area = (s*(s-a)*(s-b)*(s-c)) ** 0.5 #area = math.sqrt(sp*(sp - a)*(sp - b)*(sp - c))#
print ("The area for this triangle is %0.2f: " %area)
# calculate the height
height = area / 2
print ("The height of this triangle is: ", height)
Answer: You could put everything in a while loop which could repeat forever or until a
user types a certain phrase.
import math
import sys
import os
print ("This program will calculate the area, height and perimeter of the Triangles: Scalene, Isosceles, Equilateral and a Right Angled Triangle.")
while True:
# calculate the perimeter
print ("Please enter each side for the perimeter of the triangle")
a = float(input("Enter side a: "))
b = float(input("Enter side b: "))
c = float(input("Enter side c "))
perimeter = (a + b + c)
print ("The perimeter for this triangle is: " ,perimeter)
# calculate the area
print ("Please enter each side for the area of the triangle")
a = float(input("Enter side a: "))
b = float(input("Enter side b: "))
c = float(input("Enter side c "))
s = (a + b + c) / 2
sp = (a + b + c) / 2
area = (s*(s-a)*(s-b)*(s-c)) ** 0.5 #area = math.sqrt(sp*(sp - a)*(sp - b)*(sp - c))#
print ("The area for this triangle is %0.2f: " %area)
# calculate the height
height = area / 2
print ("The height of this triangle is: ", height)
or
    answer = "yes"  # initialize so the loop runs at least once
    while answer.lower() in ("yes", "y"):
        # code
        answer = input("Would you like to repeat?")
You could also put it all into a function `def main():` and then do some form
of recursion (calling the function from within itself).
Those are just a few ways. There are a ton of ways you can get what you want.
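Pulling the three calculations into one function keeps the loop body short, whichever looping style you pick. A sketch (the helper name `triangle_stats` is my own choice):

```python
def triangle_stats(a, b, c):
    """Return perimeter, Heron's-formula area, and 'height' as the program defines it."""
    perimeter = a + b + c
    s = perimeter / 2.0
    area = (s * (s - a) * (s - b) * (s - c)) ** 0.5
    return perimeter, area, area / 2

print(triangle_stats(3, 4, 5))  # (12, 6.0, 3.0)
```

The `while` loop then reduces to reading three floats, calling `triangle_stats`, printing the results, and asking whether to repeat.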
|
Are there raw strings in R for regular expressions?
Question: In Python you can use raw strings:
import re
re.sub(r"\\", ":", "back\\slash") # r"\\" instead of "\\\\"
Does this exist in **R** as well? For example, here is an equivalent code
snippet without raw strings in **R** :
library(stringr)
str_replace("back\\slash", "\\\\", ":")
I would love to be able to do this:
str_replace("back\\slash", raw("\\"), ":")
Does this functionality already exist, or should I just implement my own
function `raw()`?
Answer: Not in exactly the way you want (i.e. not a language-wide specification for
raw strings), and not clearly documented, but you can achieve what I interpret
as your **broader goal** (doing string replacement without fighting with
unwieldy specifications of target patterns) in `stringr` functions by setting
the class of the search pattern.
s <- "\\"
class(s) <- "fixed"
str_replace("back\\slash", s, ":")
Unfortunately `setClass()` means something other than "set the class attribute
of an object on the fly", so you might want to define
str_replace_fixed <- function(string, pattern, replacement) {
class(pattern) <- "fixed"
return(str_replace(string,pattern,replacement))
}
There is a `str_split_fixed()` in `stringr` already; you could request a
`str_replace_fixed` be added to the package ... (if I were posting an [issue
on the stringr Github repo](https://github.com/hadley/stringr/issues) I might
ask first that this feature be clearly documented, and secondarily that
`str_replace_fixed` be implemented ...)
And as pointed out in the comments, most of the base-R string manipulation
functions (`sub`, `gsub`, `grep`, `grepl`) already have a `fixed` argument ...
|
Issues with pyinstaller and reportlab
Question: Alright so I have a python project that I want to compile, so I decided to use
pyinstaller (first time compiling python). Now it compiled fine but when I run
the exe it returns -1. So after a bit of messing around I figured out that it
was related to reportlab.platypus.
So my first instinct was to check to see if using hooks changed anything, so I
tried adding the `reportlab.pdfbase._fontdata` and `reportlab.lib.utils` hooks
(these were the only hook files I could find related to reportlab). Despite
this effort it still failed.
Here is the output when the exe is run from the terminal:
Traceback (most recent call last):
File "<string>", line 12, in <module>
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 664, in _load_unlocked
File "<frozen importlib._bootstrap>", line 634, in _load_backward_compatible
File "C:\Users\Jon\Desktop\PyInstaller-3.1.1\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
exec(bytecode, module.__dict__)
File "Board_builder.py", line 5, in <module>
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 664, in _load_unlocked
File "<frozen importlib._bootstrap>", line 634, in _load_backward_compatible
File "C:\Users\Jon\Desktop\PyInstaller-3.1.1\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
exec(bytecode, module.__dict__)
File "site-packages\reportlab\platypus\__init__.py", line 7, in <module>
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 664, in _load_unlocked
File "<frozen importlib._bootstrap>", line 634, in _load_backward_compatible
File "C:\Users\Jon\Desktop\PyInstaller-3.1.1\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
exec(bytecode, module.__dict__)
File "site-packages\reportlab\platypus\flowables.py", line 32, in <module>
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 664, in _load_unlocked
File "<frozen importlib._bootstrap>", line 634, in _load_backward_compatible
File "C:\Users\Jon\Desktop\PyInstaller-3.1.1\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
exec(bytecode, module.__dict__)
File "site-packages\reportlab\lib\styles.py", line 28, in <module>
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 664, in _load_unlocked
File "<frozen importlib._bootstrap>", line 634, in _load_backward_compatible
File "C:\Users\Jon\Desktop\PyInstaller-3.1.1\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
exec(bytecode, module.__dict__)
File "site-packages\reportlab\rl_config.py", line 131, in <module>
File "site-packages\reportlab\rl_config.py", line 102, in _startUp
File "site-packages\reportlab\lib\utils.py", line 695, in rl_isdir
AttributeError: 'FrozenImporter' object has no attribute '_files'
main returned -1
From this I gather that it crashes on running line 5 in "Board_builder.py"
(the file that handles reportlab in my project) here are the first 5 lines of
that file:
import subprocess
import datetime
from reportlab.lib.units import mm, inch
from reportlab.lib.pagesizes import legal, landscape
from reportlab.platypus import SimpleDocTemplate, Table
I have no idea what the AttributeError it is throwing means, any advice would
be very welcome!
Answer: Well I got it working,
Decided to go look at where exactly the AttributeError was being thrown from,
so I inspected the `reportlab/rl_config.py` and `reportlab/lib/utils.py` files
and found that the code was recursively checking objects while looking for
directories (as the name `rl_isdir` suggests). Somehow the FrozenImporter got
stuck being checked with a list of other objects,
so I replaced the line:
return len(list(filter(lambda x,pn=pn: x.startswith(pn),list(__loader__._files.keys()))))>0
with:
try:
return len(list(filter(lambda x,pn=pn: x.startswith(pn),list(__loader__._files.keys()))))>0
except AttributeError:
return False
This may not have been the cleanest, most efficient way to resolve the issue,
but it only touches one line of the original code, so I found this to be the
most straightforward solution.
|
Running a python script with nose.run(...) from outside of the script's directory results in AttributeError: 'module' object has no attribute 'tests'
Question: I have a python application with a few sub directories. Each subdirectory has
its own `tests.py` file.
I use nose to run all of the unittests across all of these files in one shot,
by creating a script `run_unit_tests.py` that calls `nose.run(...)`.
If I am inside of the directory containing `run_unit_tests.py`, everything
works fine. However, if I am anywhere else on the file system, it fails with
AttributeError: 'module' object has no attribute 'tests'.
Here is something similar to my directory structure:
MyApp/
foo/
__init__.py
tests.py
bar/
__init__.py
tests.py
run_unit_tests.py
In my `run_unit_tests.py`:
class MyPlugin(Plugin):
...
if __name__ == '__main__':
nose.run(argv=['', 'foo.tests', '--with-my-plugin'])
nose.run(argv=['', 'foo.bar.tests', '--with-my-plugin'])
If I run `run_unit_tests.py` while inside the top `MyApp` directory,
everything works fine.
However, if I run the script while in some other folder on the file system, it
fails with:
======================================================================
ERROR: Failure: AttributeError ('module' object has no attribute 'tests')
----------------------------------------------------------------------
Traceback (most recent call last):
File "/apps/Python/lib/python2.7/site-packages/nose/loader.py", line 407, in loadTestsFromName
module = resolve_name(addr.module)
File "/apps/Python/lib/python2.7/site-packages/nose/util.py", line 322, in resolve_name
obj = getattr(obj, part)
AttributeError: 'module' object has no attribute 'tests'
* * *
In fact, if I add the following to `run_unit_tests.py`, it works fine:
import os
os.chdir('/path/to/MyApp')
What can I change inside of my nose script such that I can run the script from
outside of the directory?
Answer: Actually, you want to be careful here, because the reason this is
happening is that your imports in your tests are resolved relative to:
`/path/to/MyApp`.
So, when you run your tests from that working directory, your unit test files
are all importing with respect to that directory being the project source. If
you change directories and run from another location, that now becomes your
root, and your imports will surely fail.
This could bring different opinions, but I usually make sure my sources are
all referenced from the same project root. So if we are here:
MyApp/
foo/
__init__.py
tests.py
bar/
__init__.py
tests.py
run_unit_tests.py
I would run everything from within `MyApp`
Furthermore, I would consider creating a `tests` directory and putting all your
tests in that directory, making your imports easier to manage and better
segregating your code. However, this is just an opinion, please don't feel
like this is a necessity. Whatever works for you, go with it.
Hope this helps.
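If being runnable from other directories is a requirement, one common workaround (not specific to nose) is to anchor everything to the script's own location rather than the current working directory. A sketch for the top of `run_unit_tests.py`:

```python
import os
import sys

# directory containing run_unit_tests.py, no matter where it is invoked from
project_root = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, project_root)  # lets imports like 'foo.tests' resolve
os.chdir(project_root)            # keeps any relative file access working
print(project_root)
```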
|
Python constructor
Question: I have this constructor for a line class in Python, and it takes two points as
parameters. The problem is my constructor is only copying the references, so
`self.point0` and `point0` are pointing to the same object. I am not really
sure how to change that so that I am not just copying the reference. Line class:
def __init__(self, point0, point1):
self.point0 = point0
self.point1 = point1
Point class:
def __init__(self, x, y):
self.x = x
self.y = y
Answer: Use the [`copy`](https://docs.python.org/2/library/copy.html#module-copy)
module:
import copy
def __init__(self, point0, point1):
self.point0 = copy.copy(point0)
self.point1 = copy.copy(point1)
This is required if your point objects are mutable, such as lists or
dictionaries. If you are using immutable types, such as a `tuple`, then it
would not be required to make a copy.
If your points are represented as lists, you can also make a copy of the list
using this syntax:
self.point0 = point0[:]
self.point1 = point1[:]
I could advise you with more certainty if you provided the definition of your
point class.
* * *
**Update** after OP has posted `Point` class definition:
If `copy.copy()` is undesirable (why?) you can manually copy the attributes to
the new `Point` instances:
class Line(object):
def __init__(self, point0, point1):
self.point0 = Point(point0.x, point0.y)
self.point1 = Point(point1.x, point1.y)
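As a quick check (reusing the `Point` class from the question), copying in the constructor makes the stored points independent of the caller's objects:

```python
import copy

class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

class Line(object):
    def __init__(self, point0, point1):
        # shallow copies: the Line no longer shares objects with the caller
        self.point0 = copy.copy(point0)
        self.point1 = copy.copy(point1)

p = Point(1, 2)
line = Line(p, Point(3, 4))
p.x = 99  # mutating the original point...
print(line.point0.x)  # ...does not affect the copy stored in the line: still 1
```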
|
Python error: argument -c/--conf is required
Question:
I'm new to Python; my native language is C. I'm writing code in Python for a
surveillance system triggered by motion using OpenCV. I based my code on the
one made by Adrian Rosebrock in his blog [
pyimagesearch.com](http://www.pyimagesearch.com/2015/06/01/home-surveillance-
and-motion-detection-with-the-raspberry-pi-python-and-opencv/). Originally the
code was developed for a Raspberry Pi with a Pi Camera module attached to it;
now I'm trying to adapt it to my notebook's webcam. He made an easier tutorial
about a simple motion detection script and it worked very nicely on my PC.
But I'm having a hard time with this other code. It's probably a silly mistake,
but as a beginner I couldn't find a specific answer to this issue.
This image shows the part of the code that is causing the error (line 15) and
the structure of the project on the left side of the screen.
[Image of python project for
surveillance](http://i.stack.imgur.com/fWwBZ.png).
Similar part, originall code:
# import the necessary packages
from pyimagesearch.tempimage import TempImage
from dropbox.client import DropboxOAuth2FlowNoRedirect
from dropbox.client import DropboxClient
from picamera.array import PiRGBArray
from picamera import PiCamera
import argparse
import warnings
import datetime
import imutils
import json
import time
import cv2
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-c", "--conf", required=True,
help="path to the JSON configuration file")
args = vars(ap.parse_args())
# filter warnings, load the configuration and initialize the Dropbox
# client
warnings.filterwarnings("ignore")
conf = json.load(open(args["conf"]))
client = None
Until now I have only changed these things:
  * Removed the imports related to the Pi camera.
  * Changed `camera = PiCamera()` to `camera = cv2.VideoCapture(0)`. This way I use the notebook's webcam.
  * Removed:
camera.resolution = tuple(conf["resolution"])
camera.framerate = conf["fps"]
rawCapture = PiRGBArray(camera, size=tuple(conf["resolution"]))
  * Substituted the line `for f in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):` with `while True:`.
  * Removed the two lines in the program that were `rawCapture.truncate(0)`.
There are probably more things to fix; if you know of any, please tell me, but first
I'd like to understand how to solve that error message. I use PyCharm on Windows
7 with Python 2.7 and OpenCV 3.1. Sorry for not posting the entire code, but since
this is my first question on the site and I have 0 reputation, apparently
I can only post 2 links. The entire original code is on
pyimagesearch.com. Thank you for your time!
Answer: You are probably not running it properly. The error message is clear: you
defined a required argument, which means you must provide it on the command
line when running the script, and you are not doing that.
Check how he runs the script in the tutorial you linked:
<http://www.pyimagesearch.com/2015/06/01/home-surveillance-and-motion-
detection-with-the-raspberry-pi-python-and-
opencv#crayon-56d3c5551ac59089479643>
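For illustration, the argparse behavior can be reproduced in isolation: the required `--conf` flag must be supplied at run time (here simulated by passing an explicit argv list; `conf.json` is just a placeholder filename):

```python
import argparse

ap = argparse.ArgumentParser()
ap.add_argument("-c", "--conf", required=True,
                help="path to the JSON configuration file")

# simulating the command line `python script.py --conf conf.json`;
# calling parse_args([]) with no arguments would exit with the
# "argument -c/--conf is required" error from the question
args = vars(ap.parse_args(["--conf", "conf.json"]))
print(args["conf"])  # conf.json
```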
|
transform new dataset for prediction in Python
Question: I train a model (for example _linear_model.LinearRegression_) after some
transformation like `*pd.get_dummies*`,
which gives the data a new structure. Now I take a new dataset & want to predict. I
can't use _`predict`_ directly because the structures are different: `*pd.get_dummies*` for
the new data will give a different number of columns.
How can I transform this dataset? By appending it to the previous dataset and
training again with the new data? Or can I use "transform" for the new data?
import pandas as pd
import numpy as np
from sklearn import linear_model
df1 = pd.DataFrame({ 'y' : np.array([1., 1., 2., 3., 1.] ,dtype='int32'),
....: 'X' : np.array(["1","1","2","2", "1"])})
y = df1[df1.columns[0]]
X = pd.get_dummies(df1['X'])
lr = linear_model.LinearRegression()
lr = lr.fit(X, y)
lr.predict(X)
Now i have
df2 = pd.DataFrame({ 'y' : 'nan',
....: 'X' : np.array(["3"])})
Xnew = pd.get_dummies(df2['X'])
lr.predict(Xnew)
ValueError: shapes (1,1) and (2,) not aligned: 1 (dim 1) != 2 (dim 0)
Answer: I see only this way:
import numpy as np
import pandas as pd
from sklearn import linear_model, metrics, pipeline, preprocessing
df = pd.DataFrame({'a':range(12), 'b':[1,2,3,1,2,3,1,2,3,3,1,2], 'c':['a', 'b', 'c']*4, 'd': ['m', 'f']*6})
y = df.a
num = df[['b']]
cat = df[['c', 'd']]
from sklearn.feature_extraction import DictVectorizer
enc = DictVectorizer(sparse = False)
enc_data = enc.fit_transform(cat .T.to_dict().values())
crat = pd.DataFrame(enc_data, columns=enc.get_feature_names())
X = pd.concat([crat, num], axis=1)
cat_columns = ['c=a', 'c=b', 'c=c', 'd=f', 'd=m']
cat_indices = np.array([(column in cat_columns) for column in X.columns], dtype = bool)
numeric_col = ['b']
num_indices = np.array([(column in numeric_col) for column in X.columns], dtype = bool)
reg = linear_model.SGDRegressor()
estimator = pipeline.Pipeline(steps = [
('feature_processing', pipeline.FeatureUnion(transformer_list = [
('categorical', preprocessing.FunctionTransformer(lambda data: data[:, cat_indices])),
#numeric
('numeric', pipeline.Pipeline(steps = [
('select', preprocessing.FunctionTransformer(lambda data: data[:, num_indices])),
('scale', preprocessing.StandardScaler())
]))
])),
('model', reg)
]
)
estimator.fit(X, y)
and now we work witn a new dataset
test = pd.DataFrame({ 'b':[1], 'c':['a'], 'd': ['f']})
cat = test[['c', 'd']]
num = test[['b']]
enc_data = enc.transform(cat.T.to_dict().values())
crat = pd.DataFrame(enc_data, columns=enc.get_feature_names())
test = pd.concat([crat, num], axis=1)
estimator.predict(test)
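An alternative pattern (not part of the answer above, just a common trick): keep using `pd.get_dummies` and then `reindex` the new data's dummy columns to match the training columns, filling the missing ones with 0:

```python
import pandas as pd

train = pd.DataFrame({'X': ['1', '1', '2', '2', '1']})
X_train = pd.get_dummies(train['X'])  # columns: '1', '2'

new = pd.DataFrame({'X': ['3']})
# get_dummies on the new data alone yields only a '3' column;
# reindex aligns it to the training columns, filling missing ones with 0
# (an unseen category like '3' becomes an all-zero row)
X_new = pd.get_dummies(new['X']).reindex(columns=X_train.columns, fill_value=0)
print(list(X_new.columns))  # same columns as the training data
```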
|
AWS S3 policies confusions
Question: I would like to give read (download) right to a single user. I am confused
about what I should use:
Should I use
* The Bucket Policy Editor from the S3 interface
* The inline policies for the user and specify read permissions (from IAM interface)
* Activate "Any Authenticated AWS User" has the right to read (from s3 interface) and then use inline permissions for more granularity ?
I used the inline policies and it doesn't work:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowUserToReadObject",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion",
"s3:GetObjectTorrent"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::staging/*",
"arn:aws:s3:::prod/*"
]
}
]
}
When I use Boto:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from boto.s3.connection import S3Connection
from boto.s3.key import Key
import sys, os
AWS_KEY = ''
AWS_SECRET = ''
from boto.s3.connection import S3Connection
conn = S3Connection(AWS_KEY, AWS_SECRET)
bucket = conn.get_bucket('staging')
for key in bucket.list():
print key.name.encode('utf-8')
I got the following error:
Traceback (most recent call last):
File "listing_bucket_files.py", line 20, in <module>
bucket = conn.get_bucket('staging')
File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 503, in get_bucket
return self.head_bucket(bucket_name, headers=headers)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 536, in head_bucket
raise err
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
Answer: You didn't assign the `"s3:ListBucket"` permission, so the account is denied
access to the `staging` and `prod` buckets themselves and therefore cannot
list the files/folders in those buckets.
Remember that you have to separate this into its own statement as below, and don't add `/*` after
the bucket name.
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::staging",
"arn:aws:s3:::prod",
]
},
|
Python: How to set values of zero in a list/array/pd.Series to be the next non-zero value?
Question: I have a Python list-like structure with more than 1 million elements. Each
element takes one of three possible values, namely `-1`, `0`, or `1`. What I'm
trying to achieve is to replace all the zeros with the next non-zero value.
For instance, if I have
[1, 0, 0, -1, 0, 1, 0, 0, 0, -1]
after the operation I'll have
[1, **_-1_** , **_-1_** , -1, **_1_** , 1, **_-1_** , **_-1_** , **_-1_** ,
-1].
I can have a nested loop structure to achieve this goal, but with more than 1
million elements in the list, it's taking forever to run. Does anyone know a
faster algorithm that'll achieve this goal?
Answer: You can first create a `Series`, then
[`replace`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.Series.replace.html) `0` with `NaN`, and finally use
[`fillna`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.Series.fillna.html):
import pandas as pd
import numpy as np
li = [1, 0, 0, -1, 0, 1, 0, 0, 0, -1]
s = pd.Series(li)
print s
0 1
1 0
2 0
3 -1
4 0
5 1
6 0
7 0
8 0
9 -1
dtype: int64
print s.replace({0:np.nan})
0 1
1 NaN
2 NaN
3 -1
4 NaN
5 1
6 NaN
7 NaN
8 NaN
9 -1
dtype: float64
print s.replace({0:np.nan}).fillna(method='bfill')
0 1
1 -1
2 -1
3 -1
4 1
5 1
6 -1
7 -1
8 -1
9 -1
dtype: float64
Or, instead of `replace`, use [`loc`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.Series.loc.html), then convert to int with
[`astype`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.Series.astype.html), and finally use
[`tolist`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.Series.tolist.html):
    s.loc[s == 0] = np.nan
print s.fillna(method='bfill').astype(int).tolist()
[1, -1, -1, -1, 1, 1, -1, -1, -1, -1]
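If you prefer to stay in plain NumPy, a vectorized backfill can be sketched like this (assuming, as in the example, that the array ends with a non-zero value; trailing zeros would otherwise be left unchanged):

```python
import numpy as np

a = np.array([1, 0, 0, -1, 0, 1, 0, 0, 0, -1])
# own index where non-zero, out-of-range sentinel otherwise
idx = np.where(a != 0, np.arange(a.size), a.size)
# reverse cumulative minimum gives, for each position,
# the index of the next non-zero element
nxt = np.minimum.accumulate(idx[::-1])[::-1]
out = a[np.clip(nxt, 0, a.size - 1)]
print(out.tolist())  # [1, -1, -1, -1, 1, 1, -1, -1, -1, -1]
```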
|
MIMEMultipart() in Python
Question: Why do I always get `From nobody` when creating a message with
`MIMEMultipart()` in Python? Is this changeable?
msg2 = MIMEMultipart('csv')
print msg2
From nobody Mon Feb 29 11:38:50 2016
Content-Type: multipart/csv; boundary="===============3465836505230217811=="
MIME-Version: 1.0
Answer: You can set it to whatever you want by using
[`set_unixfrom`](https://docs.python.org/2.7/library/email.message.html#email.message.Message.set_unixfrom):
>>> from email.mime.multipart import MIMEMultipart
>>> msg = MIMEMultipart('csv')
>>> msg.set_unixfrom('From user')
>>> print msg
From user
Content-Type: multipart/csv; boundary="===============9221413516749323109=="
MIME-Version: 1.0
`From nobody` is [hardcoded
value](https://github.com/python/cpython/blob/master/Lib/email/generator.py#L113)
if `unixfrom` is not set for a given message.
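Alternatively, if you only want to avoid the envelope line in the output: in Python 2, `print msg` calls `as_string(unixfrom=True)` under the hood, while calling `as_string()` directly (where `unixfrom` defaults to `False`) leaves the line out entirely:

```python
from email.mime.multipart import MIMEMultipart

msg = MIMEMultipart('csv')
text = msg.as_string()  # unixfrom defaults to False: no "From ..." line
print(text.splitlines()[0])  # starts with the Content-Type header instead
```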
|
About lists in python
Question: I have an Excel file with a column in which values are in multiple rows in
this format: 25/02/2016. I want to save all these rows of dates in a list. Each
row is a separate value. How do I do this? So far this is my code:
import openpyxl
wb = openpyxl.load_workbook ('LOTERIAREAL.xlsx')
sheet = wb.get_active_sheet()
rowsnum = sheet.get_highest_row()
wholeNum = []
for n in range(1, rowsnum):
wholeNum = sheet.cell(row=n, column=1).value
print (wholeNum[0])
When I use the print statement, instead of printing the value of the first row,
which should be the first item in the list (e.g. 25/02/2016), it prints the
first character of the row, which is the number 2. Apparently it is slicing
through the date. I want the first row and subsequent rows saved as separate
items in the list. What am I doing wrong? Thanks in advance.
Answer: `wholeNum = sheet.cell(row=n, column=1).value` assigns the value of the cell
to the variable wholeNum, so you're never adding anything to the initial empty
list; you just overwrite the value each time. When you call `wholeNum[0]` at
the end, wholeNum is the last string that was read, and you're getting the
first character of it.
You probably want `wholeNum.append(sheet.cell(row=n, column=1).value)` to
accumulate a list.
|
script in python: Template is not defined
Question: I am using the following Python script:
import numpy as np
import matplotlib.pyplot as plt
import nibabel
import os
def collapse_probtrack_results(waytotal_file, matrix_file):
with open(waytotal_file) as f:
waytotal = int(f.read())
data = nibabel.load(matrix_file).get_data()
collapsed = data.sum(axis=0) / waytotal * 100.
return collapsed
matrix_template = 'results/{roi}.nii.gz.probtrackx2/matrix_seeds_to_all_targets.nii.gz'
processed_seed_list = [s.replace('.nii.gz','').replace('label/', '')
for s in open('/home/salvatore/tirocinio/aal_rois_diff_space/aal.txt').read().split('\n')
if s]
N = len(processed_seed_list)
conn = np.zeros((N, N))
rois=[]
idx = 0
for roi in processed_seed_list:
matrix_file = template.format(roi=roi)
seed_directory = os.path.dirname(result)
roi = os.path.basename(seed_directory).replace('.nii.gz.probtrackx2', '')
waytotal_file = os.path.join(seed_directory, 'waytotal')
rois.append(roi)
try:
# if this particular seed hasn't finished processing, you can still
# build the matrix by catching OSErrors that pop up from trying
# to open the non-existent files
conn[idx, :] = collapse_probtrack_results(waytotal_file, matrix_file)
except OSError:
pass
idx += 1
# figure plotting
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(conn, interpolation='nearest', )
cax.set_cmap('hot')
caxes = cax.get_axes()
When I try to run it I get the following error: NameError: name 'template' is
not defined. This refers to line 22 of the script above. Can you please help
me to figure out what is this about?
Answer: There is no variable like `template`, so `template` cannot be used in the
right side of the expression. Try
matrix_file = matrix_template.format(roi=roi)
instead.
|
Python, is there a easier way to add values to a default key?
Question: The program I am working on does the following:
* Grabs stdout from a .perl program
* Builds a nested dict from the output
I'm using the AutoVivification approach found
[here](http://stackoverflow.com/questions/635483/what-is-the-best-way-to-
implement-nested-dictionaries-in-python) to build a default nested dictionary.
I'm using this method of defaultdict because it's easier for me to follow as a
new programmer.
I'd like to add one key value to a declared key per pass of the loop in
the code below. Is there an easier way to add values to a key beyond making a
`[list]` of values and then adding said values as a group?
import pprint
class Vividict(dict):
def __missing__(self, key):
value = self[key] = type(self)()
return value
reg = 'NtUser'
od = Vividict()
od[reg]
def run_rip():
os.chdir('/Users/ME/PycharmProjects/RegRipper2.8') # Path to regripper dir
for k in ntDict:
run_command = "".join(["./rip.pl", " -r
/Users/ME/Desktop/Reg/NTUSER.DAT -p ", str(k)])
process = subprocess.Popen(run_command,
shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
out, err = process.communicate() # wait for the process to terminate
parse(out)
# errcode = process.returncode // used in future for errorcode checking
ntDict.popitem(last=False)
def parse(data):
pattern = re.compile('lastwrite|(\d{2}:\d{2}:\d{2})|alert|trust|Value')
    grouping = re.compile('(?P<first>.+?)(\n)(?P<second>.+?)([\n]{2})(?P<rest>.+[\n])', re.MULTILINE | re.DOTALL)
if pattern.findall(data):
match = re.search(grouping, data)
global first
first = re.sub("\s\s+", " ", match.group('first'))
od[reg][first]
second = re.sub("\s\s+", " ", match.group('second'))
parse_sec(second)
def parse_sec(data):
pattern = re.compile(r'^(\(.*?\)) (.*)$')
date = re.compile(r'(.*?\s)(.*\d{2}:\d{2}:\d{2}.*)$')
try:
if pattern.match(data):
result = pattern.match(data)
hive = result.group(1)
od[reg][first]['Hive'] = hive
desc = result.group(2)
od[reg][first]['Description'] = desc
elif date.match(data):
result = date.match(data)
hive = result.group(1)
od[reg][first]['Hive'] = hive
time = result.group(2)
od[reg][first]['Timestamp'] = time
else:
od[reg][first]['Finding'] = data
except IndexError:
print('error w/pattern match')
run_rip()
pprint.pprint(od)
Sample Input:
bitbucket_user v.20091020
(NTUSER.DAT) TEST - Get user BitBucket values
Software\Microsoft\Windows\CurrentVersion\Explorer\BitBucket
LastWrite Time Sat Nov 28 03:06:35 2015 (UTC)
Software\Microsoft\Windows\CurrentVersion\Explorer\BitBucket\Volume
LastWrite Time = Sat Nov 28 16:00:16 2015 (UTC)
Answer: If I understand your question correctly, you want to change the lines where
you're actually adding values to your dictionary (e.g. the
`od[reg][first]['Hive'] = hive` line and the similar one for `desc` and
`time`) to create a list for each `reg` and `first` value and then extend that
list with each item being added. Your dictionary subclass takes care of
creating the nested dictionaries for you, but it won't build a list at the
end.
I think the best way to do this is to use the `setdefault` method on the inner
dictionary:
od[reg][first].setdefault("Hive", []).append(hive)
The `setdefault` will add the second value (the "default", here an empty list)
to the dictionary if the first argument doesn't exist as a key. It preempts
the dictionary's `__missing__` method creating the item, which is good, since
we want the value to be a list rather than another layer of dictionary. The
method returns the value for the key in all cases (whether it added a new
value or if there was one already), so we can chain it with `append` to add
our new `hive` value to the list.
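A minimal sketch of the pattern (reusing the `Vividict` class from the question, with made-up key and value names):

```python
class Vividict(dict):
    def __missing__(self, key):
        value = self[key] = type(self)()
        return value

od = Vividict()
# repeated appends accumulate in one list instead of overwriting the value
od['NtUser']['some key'].setdefault('Hive', []).append('(NTUSER.DAT)')
od['NtUser']['some key'].setdefault('Hive', []).append('(SYSTEM)')
print(od['NtUser']['some key']['Hive'])  # ['(NTUSER.DAT)', '(SYSTEM)']
```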
|
Pythonic way to generate a list of a certain size with no duplicates?
Question: I'm trying to generate a list of `(x, y)` tuples of size `num_cities` with the
constraint that no two tuples are the same. Is there a shorter, Pythonic way
to do this using a set comprehension or `itertools`? I currently have:
def make_random_cities(num_cities, max_x, max_y):
cities = set()
while len(cities) < num_cities:
x, y = randint(0, max_x), randint(0, max_y)
cities.add((x, y))
return list(cities)
Answer: If the maximum values aren't too large to store the complete set of
possibilities in memory (and it won't take forever to generate them),
[`random.sample`](https://docs.python.org/3/library/random.html#random.sample)
and
[`itertools.product`](https://docs.python.org/3/library/itertools.html#itertools.product)
can be used effectively here:
import itertools
import random
def make_random_cities(num_cities, max_x, max_y):
return random.sample(list(itertools.product(range(max_x+1), range(max_y+1))), num_cities)
If the `product` of the inputs gets too large though, you could easily exceed
main memory; in that case, your approach of looping until you get sufficient
unique results is probably the best approach.
You could do samples of each `range` independently and then combine them
together, but that would add uniqueness constraints to each axis, which I'm
guessing you don't want.
For this specific case (unique numbers following a predictable pattern), you
could use a trick to make this memory friendly while still avoiding the issue
of arbitrarily long loops. Instead of taking the `product` of two `range`s,
you'd generate a single `range` (or in Py2, `xrange`) that encodes both unique
values from the `product` in a single value:
def make_random_cities(num_cities, max_x, max_y):
max_xy = (max_x+1) * (max_y+1)
xys = random.sample(range(max_xy), num_cities)
return [divmod(xy, max_y+1) for xy in xys]
This means you have no large intermediate data to store (because Py3
`range`/Py2 `xrange` are "virtual" sequences, with storage requirements
unrelated to the range of values they represent, and `random.sample` produces
samples without needing to read all the values of the underlying sequence).
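As a quick check of the encoding trick (same function as above): every sampled integer decodes to a distinct in-bounds `(x, y)` pair, because `divmod` is a bijection between `range(max_xy)` and the grid:

```python
import random

def make_random_cities(num_cities, max_x, max_y):
    max_xy = (max_x + 1) * (max_y + 1)
    xys = random.sample(range(max_xy), num_cities)
    return [divmod(xy, max_y + 1) for xy in xys]

cities = make_random_cities(50, 9, 9)
print(len(set(cities)))  # 50 -- all pairs are unique
```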
|
How to send None with Signals across threads?
Question: I've implemented a version of the worker pattern that is described in the [Qt
Threading docs](http://doc.qt.io/qt-4.8/qthread.html).
I'm using `Signals/Slots` to send data between the worker thread and the main
thread.
When defining the `Signal`, I've set the argument signature type to `object`
since I believe it should allow me to pass any python object through the
`Signal`.
result_ready = QtCore.Signal(object)
However, when I try to pass `None` through the `Signal` it crashes python.
This only happens when trying to pass the `Signal` across threads. If I
comment out the `self.worker.moveToThread(self.thread)` line, it works and
`None` is successfully passed through the `Signal`.
Why am I unable to pass `None` in this instance?
I'm using `PySide 1.2.2` and `Qt 4.8.5`.
import sys
from PySide import QtCore, QtGui
class Worker(QtCore.QObject):
result_ready = QtCore.Signal(object)
@QtCore.Slot()
def work(self):
print 'In Worker'
# This works
self.result_ready.emit('Value')
# This causes python to crash
self.result_ready.emit(None)
class Main(QtGui.QWidget):
def __init__(self):
super(Main, self).__init__()
self.ui_lay = QtGui.QVBoxLayout()
self.setLayout(self.ui_lay)
self.ui_btn = QtGui.QPushButton('Test', self)
self.ui_lay.addWidget(self.ui_btn)
self.ui_lay.addStretch()
self.setGeometry(400, 400, 400, 400)
self.worker = Worker()
self.thread = QtCore.QThread(self)
self.worker.moveToThread(self.thread)
self.thread.start()
self.ui_btn.clicked.connect(self.worker.work)
self.worker.result_ready.connect(self.handle_worker_result)
@QtCore.Slot(object)
def handle_worker_result(self, result=None):
print 'Handling output', result
def closeEvent(self, event):
self.thread.quit()
self.thread.wait()
super(Main, self).closeEvent(event)
if __name__ == '__main__':
app = QtGui.QApplication(sys.argv)
obj = Main()
obj.show()
app.exec_()
Answer: This looks like a PySide bug. The same example code works exactly as expected
with PyQt4.
The issue is with the [type of signal
connection](http://doc.qt.io/qt-4.8/qt.html#ConnectionType-enum). For cross-
thread signals, this will use a `QueuedConnection` unless you specify
otherwise. If the connection type is changed to `DirectConnection` in the
example code, it will work as expected - but of course it won't be thread-safe
anymore.
A `QueuedConnection` will post an event to the event-queue of the receiving
thread. But in order for this to be thread-safe, Qt has to serialize the
emitted arguments. However, PySide will obviously need to inject some magic
here to deal with python types that Qt doesn't know anything about. If I had
to guess, I would bet that PySide is mistakenly converting the python `None`
object to a C++ NULL pointer, which will obviously have nasty consequences
later on.
If you want to work around this, I suppose you could emit your own sentinel
object as a placeholder for `None`.
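The sentinel workaround can be sketched without Qt at all (`SENTINEL` and the helper names here are made up for illustration): emit the sentinel in place of `None`, and map it back on the receiving side:

```python
# module-level sentinel; identity comparison makes it unambiguous,
# since no other object can be `is`-equal to it
SENTINEL = object()

def to_signal(value):
    """Substitute the sentinel before emitting across threads."""
    return SENTINEL if value is None else value

def from_signal(value):
    """Restore None in the receiving slot."""
    return None if value is SENTINEL else value

assert from_signal(to_signal(None)) is None
assert from_signal(to_signal('Value')) == 'Value'
```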
**UPDATE** :
Found the bug, [PYSIDE-17](http://bugreports.qt.io/browse/PYSIDE-17), which
was posted in March 2012! Sadly, the suggested patch seems to have never been
reviewed.
|
PhantomJS stability when rendering multiple pages
Question: I am running PhantomJS on a big set of pages to scrape some specific JS-
generated content. I am using the Python Selenium bindings with which it's
easy to perform XPath queries on the results. I have noticed that if I try to
instantiate a single `webdriver.PhantomJS` object and perform the entire job
with it (by "reusing" it so to speak), my script soon becomes unstable, with
sporadic memory and connectivity issues. My next attempt has been to try to
instantiate a new driver for every render call (and by calling `quit()` on it
when it's done), which also didn't work for more than a few requests. My final
attempt was to use `subprocess` to insulate the rendering call in its own
process space. But even with this technique, which is the stablest by far, I
still need to wrap my entire script in `supervisor`, to handle occasional
hiccups. I am really wondering if I might be doing something wrong, or if
there is something I should be aware of. I understand that PhantomJS (and
other automated browsers) are not really meant for scraping per se (more for
testing), but is there a way to make it work with great stability
nevertheless?
Answer: I use Selenium with `pyvirtualdisplay` with a normal browser in a manner
similar to this: [Python - Headless Selenium WebDriver Tests using
PyVirtualDisplay](http://coreygoldberg.blogspot.jp/2011/06/python-headless-
selenium-webdriver.html) (though I'm using Chrome; just a matter of a
different driver).
Much more stable than my experience with PhantomJS from both node and Python.
You'll still likely want to use a process manager, just in case, but this way
has been far less error-prone for me.
Also, I suggest writing a little Python wrapper class so you can use a `with`
block and ensure your environment always gets cleaned up; if you don't kill
the session appropriately you can end up with an orphaned browser eating
memory.
From my project:
import os, time
from selenium import webdriver
from pyvirtualdisplay import Display
class ChromeSession(object):
def __enter__(self):
self.display = Display(visible=0, size=(1024, 768))
self.display.start()
chromedriver = "/usr/lib/chromium/chromedriver"
os.environ["websession.chrome.driver"] = chromedriver
self.driver = webdriver.Chrome(chromedriver)
# Tell the driver to wait (if necessary) in case UI rendering takes a while...
self.driver.implicitly_wait(5)
return self.driver
def __exit__(self, exc_type, exc_val, exc_tb):
if exc_type:
print exc_type, exc_val
print exc_tb
self.driver.quit()
self.display.stop()
|