Python: deleting string from first numeric character
Question: How could I delete/split the string below from the first numeric character?
> Good 11 hdle
would become
> Good
The only solutions I can seem to find are for removing numbers OR letters from the
whole string.
Answer:
import re
s = "some text 12345 other text"
result = re.split(r"\d+", s)[0]
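Applied to the asker's example (a minimal check; `.strip()` is added here to drop
the space left before the first digit):

    import re
    s = "Good 11 hdle"
    print(re.split(r"\d+", s)[0].strip())  # Good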
|
Quandl is not being imported
Question: I'm getting started with machine learning in Python and would like to use
Quandl for computing. I installed Quandl using `pip install Quandl` and
also pandas using `pip install pandas`. The `import` for pandas
succeeds, but I can't import quandl. I get the following error:
`ImportError: No module named Quandl`
I use Python 2.7, and Quandl supports both versions 2 and 3 of Python. How do I do
the import properly?
Answer: Looking at [the docs](https://www.quandl.com/tools/python), it is lower case:
import quandl
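Once the lower-case import works, a minimal fetch looks like this (the dataset
code is a hypothetical example; `quandl.get` returns a pandas DataFrame):

    import quandl
    # fetch a dataset by its Quandl code (hypothetical example)
    data = quandl.get('WIKI/AAPL')
    print(data.head())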
|
Timing Modular Exponentiation in Python: syntax vs function
Question: In Python, if the builtin `pow()` function is used with 3 arguments, the last
one is used as the modulus of the exponentiation, resulting in a [Modular
exponentiation](https://en.wikipedia.org/wiki/Modular_exponentiation)
operation.
In other words, `pow(x, y, z)` is equivalent to `(x ** y) % z`, but
according to the Python help, `pow()` may be more efficient.
When I timed the two versions, I got the opposite result, the `pow()` version
seemed slower than the equivalent syntax:
Python 2.7:
>>> import sys
>>> print sys.version
2.7.11 (default, May 2 2016, 12:45:05)
[GCC 4.9.3]
>>>
>>> help(pow)
Help on built-in function pow in module __builtin__:
pow(...)
pow(x, y[, z]) -> number
With two arguments, equivalent to x**y. With three arguments,
equivalent to (x**y) % z, but may be more efficient (e.g. for longs).
>>>
>>> import timeit
>>> st_expmod = '( 65537 ** 767587 ) % 14971787'
>>> st_pow = 'pow(65537, 767587, 14971787)'
>>>
>>> timeit.timeit(st_expmod)
0.016651153564453125
>>> timeit.timeit(st_expmod)
0.016621112823486328
>>> timeit.timeit(st_expmod)
0.016611099243164062
>>>
>>> timeit.timeit(st_pow)
0.8393168449401855
>>> timeit.timeit(st_pow)
0.8449611663818359
>>> timeit.timeit(st_pow)
0.8767969608306885
>>>
Python 3.4:
>>> import sys
>>> print(sys.version)
3.4.3 (default, May 2 2016, 12:47:35)
[GCC 4.9.3]
>>>
>>> help(pow)
Help on built-in function pow in module builtins:
pow(...)
pow(x, y[, z]) -> number
With two arguments, equivalent to x**y. With three arguments,
equivalent to (x**y) % z, but may be more efficient (e.g. for ints).
>>>
>>> import timeit
>>> st_expmod = '( 65537 ** 767587 ) % 14971787'
>>> st_pow = 'pow(65537, 767587, 14971787)'
>>>
>>> timeit.timeit(st_expmod)
0.014722830994287506
>>> timeit.timeit(st_expmod)
0.01443593599833548
>>> timeit.timeit(st_expmod)
0.01485627400688827
>>>
>>> timeit.timeit(st_pow)
3.3412855619972106
>>> timeit.timeit(st_pow)
3.2800855879904702
>>> timeit.timeit(st_pow)
3.323372773011215
>>>
Python 3.5:
>>> import sys
>>> print(sys.version)
3.5.1 (default, May 2 2016, 14:34:13)
[GCC 4.9.3]
>>>
>>> help(pow)
Help on built-in function pow in module builtins:
pow(x, y, z=None, /)
Equivalent to x**y (with two arguments) or x**y % z (with three arguments)
Some types, such as ints, are able to use a more efficient algorithm when
invoked using the three argument form.
>>>
>>> import timeit
>>> st_expmod = '( 65537 ** 767587 ) % 14971787'
>>> st_pow = 'pow(65537, 767587, 14971787)'
>>>
>>> timeit.timeit(st_expmod)
0.014827249979134649
>>> timeit.timeit(st_expmod)
0.014763347018742934
>>> timeit.timeit(st_expmod)
0.014756042015505955
>>>
>>> timeit.timeit(st_pow)
3.6817933860002086
>>> timeit.timeit(st_pow)
3.6238356370013207
>>> timeit.timeit(st_pow)
3.7061628740048036
>>>
What is the explanation for the above numbers?
* * *
**Edit** :
After the answers, I see that in the `st_expmod` version the computation was
not being executed at runtime: the parser folded the whole expression into a
constant.
Using the fix suggested by @user2357112 in Python 2:
>>> timeit.timeit('(a**b) % c', setup='a=65537; b=767587; c=14971787', number=150)
370.9698350429535
>>> timeit.timeit('pow(a, b, c)', setup='a=65537; b=767587; c=14971787', number=150)
0.00013303756713867188
Answer: You're not actually timing the computation with `**` and `%`, because the
result gets constant-folded by the bytecode compiler. Avoid that:
timeit.timeit('(a**b) % c', setup='a=65537; b=767587; c=14971787')
and the `pow` version will win hands down.
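You can see the folding directly with the `dis` module (smaller numbers are used
here so compilation stays fast; on CPython the disassembly is just a
`LOAD_CONST` followed by `RETURN_VALUE`):

    import dis
    # the whole literal expression is reduced to a single constant at
    # compile time, so timeit only measures loading that constant
    dis.dis(compile('(3 ** 5) % 7', '<demo>', 'eval'))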
|
Dealing with mis-escaped characters in JSON
Question: I am reading a JSON file into Python which contains escaped single quotes
(`\'`). This leads to all kinds of hiccups, as nicely discussed e.g.
[here](http://stackoverflow.com/questions/2275359/jquery-single-quote-in-json-
response). However, I could not find anything on how to **address** the issue.
I just did a
newstring=originalstring.replace(r"\'", "'")
and things worked out. But this seems rather ugly. I could not really find
much material on how to deal with this kind of thing (creating an exception,
or something) in the json [docs](https://docs.python.org/2/library/json.html)
either.
* Is there a good, clean procedure for such an issue?
Going back to the source is not possible, unfortunately.
Thanks for your help!
Answer: The right thing would be to fix whatever is creating the invalid JSON file.
But if that's not possible, I guess the replace is needed. But you should use
a regular expression so it doesn't replace `\\'` with `\'` -- in this case
the first backslash is escaping the second backslash; they're not escaping the
quote. A negative lookbehind will prevent this.
import re
newstring = re.sub(r"(?<!\\)\\'", "'", originalstring)
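A quick check (the input string is a made-up example):

    import re
    s = r"don\'t touch \\'this"
    print(re.sub(r"(?<!\\)\\'", "'", s))  # don't touch \\'this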
|
can't download and install python image library
Question: I'm trying to download and install Python's image library, PIL or Pillow. I've
looked at this question ([No module named
Image](http://stackoverflow.com/questions/12024397/no-module-named-image)) and
this question ([Can't install Python Imaging Library using
pip](http://stackoverflow.com/questions/20614185/cant-install-python-imaging-
library-using-pip)) and although I seem to be having the same problem, none
of the answers helped me.
I use a Mac with OS X 10.11.4 and my Python interpreter is version 2.7.10.
### Here is what I have tried to do:
I downloaded the tarball (Python Imaging Library 1.1.7 Source Kit from
<http://www.pythonware.com/products/pil/#pil117>) and unzipped it (the result is a
folder called Imaging-1.1.7). I have this folder in my downloads folder. I
then ran this in the command line:
pip install pillow
and this is what I got back:
Requirement already satisfied (use --upgrade to upgrade): pillow in /usr/local/lib/python3.5/site-packages
I then tried to run this python script:
from PIL import Image
but I got this error:
python test.py
Traceback (most recent call last):
File "test.py", line 1, in <module>
import Image
ImportError: No module named Image
I am very confused. I have never been able to download and install any modules
before because I have had similar problems, so your help will be greatly
appreciated, as it will allow me to download other modules too. Thanks in
advance.
Thanks so far for the help but nothing suggested has worked. I tried to
download python 3.5.1 from this site (<https://www.python.org/downloads/>) but
when I run this command (python -V) in command line it still tells me I am
using version 2.7.10
Also, I went into my applications folder to see if I had PIL and uninstall it
if I did because it cannot coexist with pillow (according to one the answers
so far) but I couldn't find it there. Am I looking in the wrong place or do I
simply not have it?
Anyway, still haven't figured it out yet. It would be great if I could have
some advice on downloading and installing stuff in general because, like I
said before, I've never been able to download anything and have it actually
work.
Answer: Python 2.7 and 3.5 have their own package locations. Since you have both
versions installed, you must do the following:
pip2 install pillow
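A quick way to confirm which interpreter a package landed in is to run this with
the Python you actually use (the import fails if Pillow is not installed for
_that_ interpreter):

    import sys
    print(sys.executable)   # path of the interpreter you are running
    print(sys.version)
    from PIL import Image   # raises ImportError if Pillow is missing here
    print(Image)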
|
Add a list to a numpy array
Question: Right now I'm writing a function that reads data from a file, with the goal
being to add that data to a numpy array and return said array.
I would like to return the array as a 2D array, however I'm not sure what the
complete shape of the array will be (I know the amount of columns, but not
rows). What I have right now is:
columns = _____
for line in currentFile:
currentLine = line.split()
data = np.zeros(shape=(columns),dtype=float)
tempData = []
for i in range(columns):
tempData.append(currentLine[i])
data = np.concatenate((data,tempData),axis=0)
However, this makes a 1D array.
Essentially what I'm asking is:
Is there any way to add a Python list as a row to a NumPy array with a
variable number of rows?
Answer: If your file `data.txt` is
1 2 3 4
1 2 3 4
1 2 3 4
1 2 3 4
1 2 3 4
1 2 3 4
All you need to do is
>>> import numpy as n
>>> data_array = n.loadtxt("data.txt")
>>> data_array
array([[1., 2., 3., 4.],
[1., 2., 3., 4.],
[1., 2., 3., 4.],
[1., 2., 3., 4.],
[1., 2., 3., 4.],
[1., 2., 3., 4.]])
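If you do need to build the array row by row while parsing (rather than loading
the whole file at once), a common pattern is to collect plain lists and convert
once at the end; concatenating inside the loop is slow and easy to get wrong.
A sketch, assuming `currentFile` is an open file of whitespace-separated
numbers:

    import numpy as np

    rows = []
    for line in currentFile:
        rows.append([float(x) for x in line.split()])
    data = np.array(rows)   # shape: (number_of_rows, columns)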
|
How to upload a file to a server?
Question: I want to upload files to my servers at Digital Ocean and AWS. I can do that
via the terminal using scp or sftp, but I want to automate this and do it in
Python or any other programming language. In the case of Python, how can I upload
a file to a server at a high level? Should I use an SFTP client? **Any other
options?**
Answer: You can use the pysftp package:
import pysftp
with pysftp.Connection('hostname', username='me', password='secret') as sftp:
    with sftp.cd('public'):              # temporarily chdir to public
        sftp.put('/my/local/filename')   # upload file to public/ on remote
        sftp.get_r('myfiles', '/backup') # recursively copy myfiles/ to local
<https://pypi.python.org/pypi/pysftp>
It uses paramiko internally, I believe, which can also be used directly for SSH, SFTP, etc.
<http://docs.paramiko.org/en/1.17/api/sftp.html>
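A minimal paramiko equivalent, if you'd rather not add pysftp (the hostname,
credentials, and paths below are placeholders):

    import paramiko

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect('hostname', username='me', password='secret')
    sftp = ssh.open_sftp()
    sftp.put('/my/local/filename', '/remote/path/filename')
    sftp.close()
    ssh.close()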
|
Running python program on linux
Question: I'm not very familiar with Linux or Python. I'm taking a class that
has example code of an inverted index program in Python. I would like to know
how to run and test the code. Here's the code that was provided to me.
This is the code for the mapping file. (inverted_index_map.py)
import sys
for line in sys.stdin:
#print(line)
key, value = line.split('\t', 1)
for word in value.strip().split():
if len(word) <=5 and len(word) >= 3:
print '%s\t%s' % (word, key.split(':', 1)[0]) #what are we emitting?
This is the code for the reduce program. (inverted_index_reduce.py)
import sys
key = None
total = ''
for line in sys.stdin:
k, v = line.split('\t', 1)
if key == k:
total += v.strip() #what are we accumulating?
else:
if key:
print '%s\t%s' % (key, total) #what are we printing?
key = k
total = v
if key:
print '%s\t%s' % (key, total) #what are we printing?
It wasn't an executable file so I tried
chmod +x inverted_index_map.py
Then I tried to run the program with:
./inverted_index_map.py testfilename.txt
But I'm not sure if the program is waiting for some kind of input from the
keyboard or something. So my question is how do I test this code and see the
result? I'm really not familiar with python.
Answer: These two programs are written as command-line tools, meaning they take their input from the stdin and display it to stdout. By default, that means that they take input from the keyboard and display output on the screen. In most Linux shells, you can change where input comes from and output goes to by using `<file.txt` to get input from `file.txt` and `>file.txt` to write output in `file.txt`. Additionally, you can make the output of one command become the input of another command by using `firstcommand | secondcommand`.
Another problem is that the scripts you posted don't have a `#!` (shebang)
line, which means that you will need to use `python inverted_index_map.py` to
run your programs.
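For reference, a shebang is just a first line like the one below (assuming the
system `python` is the one you want); with it, plus the `chmod +x` you already
did, `./inverted_index_map.py` works directly:

    #!/usr/bin/env python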
If you want to run `inverted_index_map.py` with input from `testfilename.txt`
and see the output on the screen, you should try running:
python inverted_index_map.py <testfilename.txt
To run `inverted_index_map.py` followed by `inverted_index_reduce.py` with
input from `testfilename.txt` and output written to `outputfile.txt`, you
should try running:
python inverted_index_map.py <testfilename.txt | python inverted_index_reduce.py >outputfile.txt
|
Python: How to prevent python dictionary from putting quotes around my json?
Question: I am using requests to create a post request on a contractor's API. I have a
JSON variable `inputJSON` that undergoes formatting like so:
def dolayoutCalc(inputJSON):
inputJSON = ast.literal_eval(inputJSON)
inputJSON = json.dumps(inputJSON)
url='http://xxyy.com/API'
payload = {'Project': inputJSON, 'x':y, 'z':f}
headers = {'content-type': 'application/json', 'Accept': 'text/plain'}
r = requests.post(url, data=json.dumps(payload), headers=headers)
My issue arises when I define `payload={'Project':inputJSON, 'x':y, 'z':f}`
What ends up happening is Python places a pair of quotes around the inputJSON
structure. The API I am hitting is not able to handle this. It needs Project
value to be the exact same inputJSON value just without the quotes.
What can I do to prevent python from placing quotes around my `inputJSON`
object? Or is there a way to use requests library to handle such POST request
situation?
Answer: inputJSON gets quotes around it because it's a string. `json.dumps()`
always returns a string, and when that string is later converted to JSON inside
the payload, it gets quotes around it. e.g.:
>>> import json
>>> json.dumps('this is a string')
'"this is a string"'
I'm with AKS in that you should be able to remove this line:
inputJSON = json.dumps(inputJSON)
From your description, inputJSON sounds like a Python literal (e.g. {'blah':
True} instead of {"blah": true}). So you've used the ast module to convert it
into a Python value, and then in the final json.dumps() it should be converted
to JSON along with everything else.
Example:
>>> import ast
>>> import json
>>> input = "{'a_var': True}" # A string that looks like a Python literal
>>> input = ast.literal_eval(input) # Convert to a Python dict
>>> print input
{'a_var': True}
>>> payload = {'Project': input} # Add to payload as a dict
>>> print json.dumps(payload)
{"Project": {"a_var": true}} # In the payload as JSON without quotes
|
How to get offset position of a text in html page in python
Question: I am doing web scraping to extract some text using Beautiful Soup.
I am successfully extracting the required text from the webpage, but my new
requirement is that along with the text I need to extract the offset
number/position where the text actually starts and ends in the document.
Is there any possibility of doing this using Beautiful Soup, or any helpful
packages for this?
Please provide your thoughts and suggestions...
Thanks
Answer: Try the following code:
import re
DATA = "This is test message"
for match in re.finditer(r'(?s)((?:[^\n][\n]?)+)', DATA):
print match.start(), match.end()
Output
0 20
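If you need the offsets of text that Beautiful Soup extracted, one simple
approach is to search for that string in the raw HTML (a sketch; this is only
reliable when the extracted string occurs once in the document):

    from bs4 import BeautifulSoup

    html = "<p>Hello <b>world</b></p>"
    soup = BeautifulSoup(html, "html.parser")
    text = soup.b.get_text()   # 'world'
    start = html.find(text)    # offset in the original document
    end = start + len(text)
    print(start, end)          # 12 17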
|
Python3 Converting Non-English Chars to English Chars
Question: I have a text file. I read the file and after some operations I put its lines
into another file. But the input file has some Turkish chars such as
"İ,Ü,Ö,Ş,Ç,Ğ". I want these chars to be converted to English chars because
when I open the files in UTF-8 encoding, these chars are not shown. My code is
below:
for i in range (len(singleLine)):
    if singleLine[i] == "İ":
        singleLine.replace(singleLine[i:i+1],"I")
    if singleLine[i] == "Ü":
        singleLine.replace(singleLine[i:i + 1], "U")
    if singleLine[i] == "Ö":
        singleLine.replace(singleLine[i:i + 1], "O")
    if singleLine[i] == "Ç":
        singleLine.replace(singleLine[i:i + 1], "C")
    if singleLine[i] == "Ş":
        singleLine.replace(singleLine[i:i + 1], "S")
    if singleLine[i] == "Ğ":
        singleLine.replace(singleLine[i:i + 1], "G")
return singleLine
But the code does not recognize these Turkish chars in the input file and puts
them into the output file without any change.
What is the way to recognize these chars? Is there any special way for an
ASCII-based search or something like this?
Answer: `str` instances are immutable so `str.replace()` does not operate in-place but
instead returns the result.
But [don't do things the hard way](https://pypi.python.org/pypi/Unidecode).
>>> import unidecode
>>> unidecode.unidecode('İ,Ö,Ü,Ş,Ç,Ğ')
'I,O,U,S,C,G'
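If you'd rather not add a dependency and only need a fixed set of characters,
`str.translate` with a mapping table also works (Python 3; the lower-case pairs
here are an assumption about your data):

    table = str.maketrans('İÜÖÇŞĞıüöçşğ', 'IUOCSGiuocsg')
    print('Ağrı Dağı'.translate(table))   # Agri Dagi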
|
python numpy strange boolean arithmetic behaviour
Question: Why is it, in python/numpy:
from numpy import asarray
bools=asarray([False,True])
print(bools)
[False True]
print(1*bools, 0+bools, 0-bools) # False, True are valued as 0, 1
[0 1] [0 1] [ 0 -1]
print(-2*bools, -bools*2) # !? expected same result! :-/
[0 -2] [2 0]
print(-bools) # this is the reason!
[True False]
I consider it weird that `-bools` returns `logical_not(bools)`, because in all
other cases the behaviour is "arithmetic", not "logical".
One who wants to use an array of booleans as a 0/1 mask (or "characteristic
function") is forced to use somehow involute expressions such as `(0-bools)`
or `(-1)*bools`, and can easily incur into bugs if he forgets about this.
Why is it so, and what would be the best acceptable way to obtain the desired
behaviour? (beside commenting of course)
Answer: It's all about operator order and data types.
>>> import numpy as np
>>> B = np.array([0, 1], dtype=np.bool)
>>> B
array([False, True], dtype=bool)
With numpy, boolean arrays are treated as that, boolean arrays. Every
operation applied to them will first try to maintain the data type. That is
why:
>>> -B
array([ True, False], dtype=bool)
and
>>> ~B
array([ True, False], dtype=bool)
which are equivalent, return the element-wise negation of its elements. Note
however that using `-B` throws a warning, as the function is deprecated.
When you use things like:
>>> B + 1
array([1, 2])
`B` and `1` are first casted under the hood to the same data type. In data-
type promotions, the `boolean` array is always casted to a `numeric` array. In
the above case, `B` is casted to `int`, which is similar as:
>>> B.astype(int) + 1
array([1, 2])
In your example:
>>> -B * 2
array([2, 0])
First the array `B` is negated by the operator `-` and then multiplied by 2.
The desired behaviour can be adopted either by explicit data conversion, or
adding brackets to ensure proper operation order:
>>> -(B * 2)
array([ 0, -2])
or
>>> -B.astype(int) * 2
array([ 0, -2])
Note that `B.astype(int)` can be replaced without data-copy by
`B.view(np.int8)`, as boolean are represented by `characters` and have thus 8
bits, the data can be viewed as integer with the `.view` method without
needing to convert it.
>>> B.view(np.int8)
array([0, 1], dtype=int8)
So, in short, `B.view(np.int8)` or `B.astype(yourtype)` will always ensure
that `B` is a `[0,1]` numeric array.
|
python3 and RO package and DS9
Question: How do I get the RO package working in Python 3? I managed to get it to work in
Python 2.7, but when I install it manually as `python3 setup.py install` and
then do `import RO.DS9` I get this:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.4/dist-packages/RO-3.6.9-py3.4.egg/RO/DS9.py", line 160, in <module>
import RO.OS
File "/usr/local/lib/python3.4/dist-packages/RO-3.6.9-py3.4.egg/RO/OS/__init__.py", line 7, in <module>
from .OSUtil import *
File "/usr/local/lib/python3.4/dist-packages/RO-3.6.9-py3.4.egg/RO/OS/OSUtil.py", line 31, in <module>
import RO.SeqUtil
File "/usr/local/lib/python3.4/dist-packages/RO-3.6.9-py3.4.egg/RO/SeqUtil.py", line 33, in <module>
import UserString
ImportError: No module named 'UserString'
>>> exit()
Answer: In Python 3, [`UserString` is part of the `collections`
module](https://docs.python.org/3/library/collections.html?highlight=userstring#collections.UserString).
As you can see on [the RO page](https://pypi.python.org/pypi/RO), this library
only supports Python 2.6 and 2.7.
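If you want to experiment anyway, you could try aliasing the old module name
before RO is imported. This is a fragile shim, not a real fix: it assumes RO
only needs `UserString.UserString`, which the `collections` module also
provides, and other Python-2-only code in RO may still break:

    import collections
    import sys

    sys.modules['UserString'] = collections  # make 'import UserString' resolve
    import RO.DS9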
|
How to access a List of Objects from outside a class in python
Question: Hope you can help me on this one. I have created a list of objects because the
program that I use creates lots of agents and it is easier to keep track of them. I
want to access that information from outside the class, so I need to call that
list and pass in the agent number (which is created by the simulator). I have
put together a simplified version so you can understand better.
This is the Main Class
from StoreCar import *
carObject = []
class Machine:
def calculation():
VehicleID = 2 # this is genarated Austomatically from system
#and increases every time a vehicle enters
Fuel = 15 # this is also calculated automatically from system.
carObject.append(StoreCar(VehicleID,'car'))
carObject[VehicleID-1].setFC(Fuel)
This is the Class StoreCar which stores all the info
class StoreCar:
def __init__(self, id_,name):
self.id_ = id_
self.name= name
self.FCList= []
def setFC(self,Fuel):
self.FCList.append(Fuel)
This is the outside class that I want to access data from
from Machine import *
class outsideclass:
def getVehiData():
# I want to access the data which was saved in Machine class from here.
Answer: You're not actually storing anything inside the `Machine` class. The only
thing that you _are doing_ is storing values in the (confusingly named)
`carObject`:
from StoreCar import *
carObject = []
class Machine:
def calculation():
VehicleID = 2 # this is genarated Austomatically from system
#and increases every time a vehicle enters
Fuel = 15 # this is also calculated automatically from system.
# You're putting things in the `carObject` *list*, which
# should probably just be called `cars`
carObject.append(StoreCar(VehicleID,'car'))
carObject[VehicleID-1].setFC(Fuel)
Your code, in general, has a few problems that is probably making your life
more difficult that it needs to be right now, and will certainly make things
worse down the road. I'm _assuming_ that you're in some kind of class and this
is homework given with some specific constraints because otherwise there is
absolutely no reason to do a lot of the things that you're doing.
Here are the things I'm changing:
* `from <module> import *` is _very_ rarely what you want to do. Just `import module`. Or, `import super_long_annoying_to_type_module as slattm` and use dot access.
* You don't _need_ a `Machine` class, unless that's one of the parameters of your assignment. It's not doing anything except cluttering up your code. `calculation` doesn't even take `self`, so either it should be decorated with `@classmethod`, or just be a function.
* Python naming conventions - modules (files), variables, and functions/methods should be `snake_cased`, classes should be `StudlyCased`. This won't kill you, but it's a convention that you'll see in most other Python code, and if you follow it will make your code easier to read by other Python programmers.
**cars.py**
class StoreCar:
def __init__(self, id_,name):
self.id_ = id_
self.name= name
self.fc_list= []
# If you're *setting* the fuel capacity, it shouldn't be a list.
# (assuming that's what FC stands for)
def add_fuel(self, fuel):
self.fc_list.append(fuel)
**factory.py**
import cars
class Machine:
def __init__(self):
self.cars = []
# Assuming that the vehicle ID shouldn't
# be public knowledge. It can still be got
# from outside the class, but it's more difficult now
self.__vehicle_id = 0
def calculation(self):
self.__vehicle_id += 1
fuel = 15 # this is also calculated automatically from system.
car = cars.StoreCar(self.__vehicle_id, 'car')
# Typically, I'd actually have `fuel` as a parameter
# for the constructor, i.e.
# cars.StoreCar(self.__vehicle_id, 'car', fuel)
car.add_fuel(fuel)
self.cars.append(car)
**somethingelse.py**
import factory
class SomeOtherClass:
def get_vehicle_data(self):
machine = factory.Machine()
machine.calculation()
print(machine.cars)
Note, that if I were unconstrained by any kind of assignment, I would probably
just do something like this:
from collections import namedtuple
Car = namedtuple('Car', ('id', 'fuel_capacity', 'name'))
def gen_vehicle_ids():
id = 0
while True:
id += 1
yield id
vehicle_id = gen_vehicle_ids()
def build_car():
return Car(id=next(vehicle_id), name='car', fuel_capacity=15)
# If you don't want a namedtuple, you *could* just
# use a dict instead
return {'id': next(vehicle_id), 'type': 'car', 'fuel_capacity': 15}
cars = []
for _ in range(20): # build 20 cars
cars.append(build_car())
# an alternative approach, use a list comprehension
cars = [build_car() for _ in range(20)]
print(cars) # or do whatever you want with them.
For a comparison between what you can do with the namedtuple approach vs. dict
approach:
# dict approach
for car in cars:
print('Car(id={}, name={}, fuel_capacity={})'.format(
car['id'], car['name'], car['fuel_capacity']))
# namedtuple approach
for car in cars:
print('Car(id={}, name={}, fuel_capacity{})'.format(
car.id, car.name, car.fuel_capacity))
Check out <http://pyformat.info> for more string formatting tricks.
|
Python: Using Pandas, how do I choose the columns in my output?
Question: I am running my whole Active Directory against user accounts trying to find
what doesn't belong. Using my code, my output gives me the words that only
occur once in the User Name column. Even though I am analyzing one column of
data, I want to keep all of the columns that go with the data.
from pandas import DataFrame, read_csv
import pandas as pd
f1 = pd.read_csv('lastlogonuser.txt', sep='\t', encoding='latin1')
f2 = pd.read_csv('UserAccounts.csv', sep=',', encoding ='latin1')
f2 = f2.rename(columns={'Shortname':'User Name'})
f = pd.concat([f1, f2])
counts = f['User Name'].value_counts()
f = counts[counts == 1]
f
I get something like this when I run my code:
sample534 1
sample987 1
sample342 1
sample321 1
sample123 1
I would like ALL of the data from the txt files to come out in my output, but
I still want only the username column analyzed. How do I keep all of the data
in all columns, or do I have to use a different word count to include all
columns of data?
I would like something like:
User Name Description
1 sample534 Journal Mailbox managed by
1 sample987 Journal Mailbox managed by
1 sample342 Journal Mailbox managed by
1 sample321 Journal Mailbox managed by
1 sample123 Journal Mailbox managed by
Sample of data I am using:
Account User Name User CN Description
ENABLED MBJ29 CN=MBJ29,CN=Users Journal Mailbox managed by
ENABLED MBJ14 CN=MBJ14,CN=Users Journal Mailbox managed by
ENABLED MBJ08 CN=MBJ30,CN=Users Journal Mailbox managed by
ENABLED MBJ07 CN=MBJ07,CN=Users Journal Mailbox managed by
Answer: Based on your description, I guess you want to use the counts of unique
elements as an index to select rows in your dataframe. Maybe you can try this:
df2 = pd.DataFrame()
counts = f['User Name'].value_counts()
counts = counts[counts == 1].index
for index in counts:
df2 = df2.append(f[f['User Name'] == index])
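An equivalent, loop-free version filters with `isin` (same assumption as above:
`f` is the concatenated frame and you keep rows whose user name occurs exactly
once):

    counts = f['User Name'].value_counts()
    unique_names = counts[counts == 1].index
    df2 = f[f['User Name'].isin(unique_names)]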
|
bug while trying to create a function from other functions in python
Question: I have been trying to create a calculator and, for practical reasons, I
tried to import functions from a separate Python file. It works to some extent,
but it breaks when it tries to do the calculations. The bug is that `add` is
not defined, even though I did define it while importing the function. Here is
the code:
class Calculator(object):
import a10 as add
import d10 as div
import m10 as mult
import s10 as sub
def choice(self):
print("A. Addition\l B. Substraction\l C. Division\l D. Multiplication")
xn = input("What do you want to do? ")
if xn == "a":
addition = add.addition
x = self.addition()
self.x = x
return x
elif xn == "b":
subtraction = sub.subtraction
z = self.subtraction()
self.z = z
return z
elif xn == "c":
division = div.division
y = self.division()
self.y = y
return y
elif xn == 'd':
Multiplication = mult.multiplication
v = self.Multiplication()
self.v = v
return v
objcalc = Calculator()
print(objcalc.choice())
Here is a10:
def addition(self):
try:
n = int(input("enter number: "))
n_for_add = int(input("What do you want to add on " + str(n) + " ? "))
except ValueError:
print("you must enter an integer!")
n_from_add = n + n_for_add
print(str(n) + " plus " + str(n_for_add) + " equals to " + str(n_from_add))
s10
def subtraction(self):
try:
nu = int(input("enter number: "))
nu_for_sub = int(input("What do you want to take off " + str(nu) + " ? "))
except ValueError:
print("you must enter an integer!")
nu_from_sub = nu - nu_for_sub
print(str(nu) + " minus " + str(nu_for_sub) + " equals to " + str(nu_from_sub))
m10
def Multiplication(self):
try:
numb = int(input("enter number: "))
numb_for_multi = int(input("What do you want to multiply " + str(numb) + " on? "))
except ValueError:
print("you must enter an integer!")
numb_from_multi = numb * numb_for_multi
print(str(numb) + " multiplied by " + str(numb_for_multi) + " equals to " + str(numb_from_multi))
d10
def division(self):
try:
num = int(input("enter number: "))
num_for_div = int(input("What do you want to divide " + str(num) + " off? "))
except ValueError:
print("you must enter an integer!")
num_from_div = num / num_for_div
print(str(num) + " divided by " + str(num_for_div) + " equals to " + str(num_from_div))
Answer: In the `if` statements, like this:
if xn == "a":
addition = add.addition
x = self.addition()
self.x = x
return x
`addition` is created as a variable local to the function `choice`, but you're
then setting `x` to be `self.addition()`, which isn't defined.
If you mean to write `x = add.addition()` then be warned that your `addition`
function doesn't return anything, it just prints out a value. The same for the
other functions - none of them return anything. So `self.addition` is not
defined, and `x` will be a `NoneType` object
Your `addition`, `subtraction` and other functions also take `self` as an
argument, when they're not methods in a class, so this doesn't make much
sense.
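A sketch of one way to restructure it (this assumes the module functions are
rewritten to take no `self` argument and to `return` their results):

    import a10

    class Calculator(object):
        def choice(self):
            xn = input("What do you want to do? ")
            if xn == "a":
                # call the module function directly; it must return a value
                return a10.addition()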
|
Can a set() be shared between Python processes?
Question: I am using multiprocessing in Python 2.7 to process a very large set of data.
As each process runs, it adds integers to a shared mp.Manager.Queue(), but
only if some other process hasn't already added the same integer. Since you
can't do an "in"-style membership test for Queues, the way I'm doing it is to
check each int for membership in a shared mp.Manager.list(). The list will
eventually have ~30 million entries, and so membership tests will be extremely
slow, nullifying the advantage of multiprocessing.
Here's a much simplified version of what I'm doing:
import multiprocessing as mp
def worker(shared_list, out_q, lock):
# Do some processing and get an integer
result_int = some_other_code()
# Use a lock to ensure nothing is added to the list in the meantime
lock.acquire()
# This lookup can take forever when the list is large
if result_int not in shared_list:
out_q.put(result_int)
shared_list.append(result_int)
lock.release()
manager = mp.Manager()
shared_list = manager.list()
lock = manager.Lock()
out_q = manager.Queue()
for i in range(8):
p = mp.Process(target=worker, args=(shared_list, out_q, lock))
p.start()
I previously tried using a set() instead of an mp.Manager.list(), but it seems
that each process has its own memory space, and so when I updated the set, it
didn't synchronize across processes. Hence, I switched to the current
approach.
Here's roughly how I previously tried using a set():
import multiprocessing as mp
def worker(shared_set, out_q, lock):
# Do some processing and get an integer
result_int = some_other_code()
# Use a lock to ensure nothing is added to the set in the meantime
lock.acquire()
# This lookup is fast, but the set doesn't reflect additions made by other processes.
if result_int not in shared_set:
out_q.put(result_int)
shared_set.add(result_int)
lock.release()
manager = mp.Manager()
lock = manager.Lock()
out_q = manager.Queue()
# This set will NOT synchronize between processes
shared_set = set()
for i in range(8):
p = mp.Process(target=worker, args=(shared_set, out_q, lock))
p.start()
Note: these examples are untested and simply represent the relevant parts of
my code.
Is there a way to share sets across processes, or otherwise do faster
membership lookups?
EDIT: A little more information: the out_q is consumed by another process
which writes the data to a single output file. There can be no duplicates. If
I generate an integer and it's found to be a duplicate, the process needs to
go back and generate the next-best integer.
Answer: An obvious tweak is to use an `mp.Manager.dict()` instead of the set, and use
arbitrary values (say, set `the_dict[result_int] = 1` to indicate membership
in the set). BTW, this is how "everyone" implemented sets before Python added
the `set` type, and even now dicts and sets are implemented by basically the
same code under the covers.
Added later: I confess I don't grasp why you used both a set and a list in the
original code, since the set's keys are identical to the list's contents. If
order of entry isn't important, why not forget the list entirely? Then you
could also drop the layer of locking needed in the original to keep the set
and the list in synch.
Fleshing that out, with the dict suggestion, the whole function would become
just like:
def worker(shared_dict):
# Do some processing and get an integer
result_int = some_other_code()
shared_dict[result_int] = 1
Other processes could do `shared_dict.pop()` then to get one value at a time
(although, no, they couldn't wait on `.pop()` as they do for a queue's
`.get()`).
And one more: consider using local (process-local) sets? They'll run much
faster. Then each worker won't add any duplicates _it_ knows about, but there
may be duplicates _across_ processes. Your code didn't give any hints about
what the `out_q` consumer does, but if there's only one then a local set in
that too could weed out cross-process duplicates. Or perhaps the memory burden
gets too high then? Can't guess from here ;-)
## BIG EDIT
I'm going to suggest a different approach: don't use `mp.Manager` at all. Most
times I see people use it, they regret it, because it's not doing what they
_think_ it's doing. What they think: it's supplying physically shared objects.
What it's doing: it's supplying _semantically_ shared objects. Physically,
they live in Yet Another, under-the-covers, process, and operations on the
objects are forwarded to that latter process, where they're performed by that
process in its own address space. It's not _physically_ shared at all. So,
while it can be very convenient, there are substantial interprocess overheads
for even the simplest operations.
So I suggest instead using a single, ordinary set in one process, which will
be the sole code concerned with weeding out duplicates. The worker processes
produce ints with no concern for duplicates - they just pass the ints on. An
`mp.Queue` is fine for that (again, no real need for an `mp.Manager.Queue`).
Like so, which is a complete executable program:
N = 20
def worker(outq):
from random import randrange
from time import sleep
while True:
i = randrange(N)
outq.put(i)
sleep(0.1)
def uniqueifier(inq, outq):
seen = set()
while True:
i = inq.get()
if i not in seen:
seen.add(i)
outq.put(i)
def consumer(inq):
n = 0
for _ in range(N):
i = inq.get()
print(i)
if __name__ == "__main__":
import multiprocessing as mp
q1 = mp.Queue()
q2 = mp.Queue()
consume = mp.Process(target=consumer, args=(q2,))
consume.start()
procs = [mp.Process(target=uniqueifier, args=(q1, q2))]
for _ in range(4):
procs.append(mp.Process(target=worker, args=(q1,)))
for p in procs:
p.start()
consume.join()
for p in procs:
p.terminate()
The second queue passed to `uniqueifier` plays the role of your original
queue: it delivers only unique integers. No attempt is made to "share memory",
and so no costs due to that are paid. The only interprocess communication is
via easy, explicit `mp.Queue` operations. There is only one set, and since
it's not shared in any way it runs as fast as possible.
In effect, this just sets up a simple pipeline, although with multiple inputs.
|
How to crawl each and every link given on a website and collect all the text using scrapy
Question: I followed the link
<https://stackoverflow.com/questions/19254630/how-to-use-scrapy-to-crawl-all-items-in-a-website>
but things did not work out for me.
I am trying to learn scraping data over the web. I was implementing the
tutorial given on <http://scrapy.readthedocs.io/en/latest/intro/examples.html>
and was able to crawl over a given link. Here is a sample code snippet:
from scrapy.spiders import Spider
from scrapy.selector import Selector
from dirbot.items import Website
class DmozSpider(Spider):
name = "dmoz"
allowed_domains = ["dmoz.org"]
start_urls = [
"http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
"http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
]
def parse(self, response):
"""
The lines below is a spider contract. For more info see:
http://doc.scrapy.org/en/latest/topics/contracts.html
@url http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/
@scrapes name
"""
sel = Selector(response)
sites = sel.xpath('//ul[@class="directory-url"]/li')
items = []
for site in sites:
item = Website()
item['name'] = site.xpath('a/text()').extract()
item['url'] = site.xpath('a/@href').extract()
item['description'] = site.xpath('text()').re('-\s[^\n]*\\r')
items.append(item)
return items
and the code snippet for the Item is:
from scrapy.item import Item, Field
class Website(Item):
name = Field()
description = Field()
url = Field()
I am able to run the crawler using `scrapy crawl dmoz`
but am not able to achieve the following:
1. Get all the links given on the website
2. Get all the text from all possible links
3. Save them to a file
Can someone guide me on what changes I need to make in my code so that I can
achieve my objective?
Answer: 1. Get all the links given on the website
The response doesn't have `class="directory-url"` in it. You can get all the
given links from the website using `sites = sel.xpath('//a/@href')` instead,
and then filter the urls you need.
Or, if you can start from the main domain (`http://www.dmoz.org/`), something
like:
    doc = html.fromstring(response.body)
    sites = doc.xpath('.//section[@id="category-section"]//aside')
    for site in sites:
        item = StackDemoItem()
        item['name'] = site.xpath('.//div/h2/a/text()')
        item['url'] = site.xpath('.//div/h2/a/@href')
You have to append the domain name to `item['url']` to get the proper url. Do
the same for the other pages, respective of their url path.
2. Get all the text from all possible links
Most of the links don't have text along with them, so you have to strip the
contents from the url itself. Otherwise, use `text = sel.xpath('//a/text()')`.
3. Save them to a file
You can simply save the contents using `scrapy crawl your_crawler_name -o
out.csv`; use json or txt instead of csv for that kind of file.
|
Singleton/Borg pattern based on different parameters passed while creating the object
Question: I am using borg pattern to share state amongst the objects:
class Borg:
__shared_state = {}
def __init__(self):
self.__dict__ = self.__shared_state
Now let's assume that I want to create context-based objects of the Borg class,
based on parameters I pass while creating an object. Is this a correct way to
create the Borg pattern (state sharing) for multiple contexts?
import random
import cPickle
class Borg:
__shared_state = { }
def __init__(self,*args,**kwargs):
context_key = hash('{0}{1}'.format(cPickle.dumps(args),cPickle.dumps(kwargs)))
self.__shared_state.setdefault(context_key, {})
self.__dict__ = self.__shared_state[context_key]
print(self.__shared_state)
def set_random_property(self):
self.num = str(random.randint(1,100000))
a = Borg(x='ONE')
a.set_random_property()
b = Borg(x = 'TWO')
b.set_random_property()
c = Borg(x = 'TWO')
print('a with ONE has num:{0}'.format(a.num))
print('b with TWO has num:{0}'.format(b.num))
print('c with TWO has num:{0}'.format(c.num))
**output**
{7373348246660160089: {}}
{7373348246660160089: {'num': '18322'}, 3334843421982509183: {}}
{7373348246660160089: {'num': '18322'}, 3334843421982509183: {'num': '33084'}}
a with ONE has num:18322
b with TWO has num:33084
c with TWO has num:33084
Works correctly. Is there a way to improve this pattern? Or are there any better
alternatives available for Python 2.7?
Answer: No, what you use is what I'd use; use a dictionary for the shared states.
You can simplify it _slightly_ by using the return value of
`dict.setdefault()` rather than ignore it:
def __init__(self, *args, **kwargs):
context_key = hash('{0}{1}'.format(cPickle.dumps(args),cPickle.dumps(kwargs)))
self.__dict__ = self.__shared_state.setdefault(context_key, {})
All this can be encapsulated in a metatype:
class PerArgsBorgMeta(type):
def __new__(mcls, name, bases, attrs):
cls = super(PerArgsBorgMeta, mcls).__new__(mcls, name, bases, attrs)
setattr(cls, '_{}__shared_state'.format(name), {})
return cls
def __call__(cls, *args, **kwargs):
instance = super(PerArgsBorgMeta, cls).__call__(*args, **kwargs)
context_key = hash('{0}{1}'.format(cPickle.dumps(args),cPickle.dumps(kwargs)))
state = getattr(cls, '_{}__shared_state'.format(cls.__name__))
instance.__dict__ = state.setdefault(context_key, {})
return instance
Then use this as a `__metaclass__` attribute on the class:
class SomeBorgClass:
__metaclass__ = PerArgsBorgMeta
Do note that using `hash(cPickle.dumps(kwargs))` will still create distinct
hashes for dictionaries with collisions:
>>> import cPickle
>>> hash(cPickle.dumps({'a': 42, 'i': 81}))
-7392919546006502834
>>> hash(cPickle.dumps({'i': 81, 'a': 42}))
2932616521978949826
The same applies to _sets_. Sorting (recursively if you must be exhaustive)
can help here, but be careful that you don't then produce false-positives
between, say, a set passed in as a value, and a tuple with the same values in
it used instead. There are increasingly convoluted work-arounds possible for
each of these, but at some point you just have to accept the limitation rather
than complicate the hashing code more still.
|
Python relative/absolute import (again)
Question: This topic has been covered several times but I still can't get my package to
work. Here is the situation: I've got a package in which a `logging` module
takes care of setting up the logging. So clearly, `mypackage.logging`
conflicts with Python `logging` from the standard library.
The directory' structure:
├── mypackage
│   ├── __init__.py
│   └── logging.py
└── script.py
**mypackage.__init__**
import logging
from . import logging as _logging
logger = logging.getLogger(__name__)
def main():
_logging.init_logging()
logger.info("hello")
**mypackage.logging**
"""logging - Setup logging for mypackage."""
import copy
import logging
import logging.config
_DEFAULT_LOGGING_CONFIG_DICT = {
'version': 1,
'formatters': {
'verbose': {
'format': '%(asctime)s - %(name)s::%(levelname)s: %(message)s',
},
'simple': {
'format': '-- %(message)s',
},
},
'handlers': {
'console': {
'class': 'logging.StreamHandler',
'level': 'DEBUG',
'formatter': 'simple',
},
'file': {
'class': 'logging.FileHandler',
'filename': 'oprpred.log',
'mode': 'w',
'formatter': 'verbose',
},
},
'loggers': {
'oprpred': {
'level': 'INFO',
},
},
'root': {
'level': 'INFO',
'handlers': ['console', 'file'],
},
}
def init_logging(verbose=False):
"""Initialize logging.
Set the log level to debug if verbose mode is on.
Capture warnings.
"""
d = default_logging_dict()
if verbose:
d['root']['level'] = 'DEBUG'
d['loggers']['oprpred']['level'] = 'DEBUG'
logging.config.dictConfig(d)
logging.captureWarnings(True)
def default_logging_dict():
return copy.deepcopy(_DEFAULT_LOGGING_CONFIG_DICT)
**script.py**
import mypackage
mypackage.main()
Finally, this is the error message I'm getting:
$ python3 script.py [11:09:01]
Traceback (most recent call last):
File "script.py", line 4, in <module>
mypackage.main()
File "/Users/benoist/Desktop/test_logging/mypackage/__init__.py", line 8, in main
_logging.init_logging()
AttributeError: module 'logging' has no attribute 'init_logging'
Final remark: I noticed that if in `mypackage/__init__.py` I import
`mypackage.logging` prior to the standard library `logging`, it works. I don't
want to do that since it is against the PEP 8 recommendations:
> Imports should be grouped in the following order:
>
> 1. standard library imports
> 2. related third party imports
> 3. local application/library specific imports
>
Any help would be greatly appreciated.
Ben.
P.S. I'm using Python 3.5.1.
Answer: The way I deal with this specific issue of using a custom logging module is to
import all the logging functions into my custom module. Now you can also
reimplement module level functions with customized versions as well.
For example:
"""logging - Setup logging for mypackage."""
import copy
from logging import *
import logging.config
_DEFAULT_LOGGING_CONFIG_DICT = { ... }
def init_logging(verbose=False):
...
def default_logging_dict():
...
Now you only need to import your custom module.
from . import logging
log = logging.getLogger()
An alternative to using `from logging import *` is to reimplement any commonly
used functions from the built in logging module.
import copy
import logging
def getLogger(*args, **kwargs):
    return logging.getLogger(*args, **kwargs)
If you need to reach a logging function that you have not reimplemented, you
can call through to the builtin logging module, for example:
`logging.logging.addLevelName(...)`.
|
python ldap3 search LDAPOperationsErrorResult
Question: I would like to get all PCs in the local network from ldap, so I tried
(variations of) this:
import ldap3
from ldap3 import ALL_ATTRIBUTES, SUBTREE, ALL
import dns.resolver
import socket
def get_ldap_server():
domain_name = socket.getfqdn().lstrip( socket.gethostname() )
answers = dns.resolver.query( '_ldap._tcp'+domain_name, rdtype='srv' )
#for srv in answers:
return answers[0].target.to_text()[:-1]
srv_name = get_ldap_server()
print srv_name
server = ldap3.Server( srv_name, get_info=ALL )
with ldap3.Connection( server ) as c:
print "Bound", c.bound
c.search( search_base='dc='+', dc='.join(srv_name.split('.')[1:]),
search_filter='(objectCategory=computer)',
search_scope=SUBTREE,
attributes=ALL_ATTRIBUTES,
get_operational_attributes=True)
print(c.response)
But all I get is: LDAPOperationsErrorResult: LDAPOperationsErrorResult - 1 -
operationsError - None - 000004DC: LdapErr: DSID-0C090748, comment: In order
to perform this operation a successful bind must be completed on the
connection., data 0, v2580 - searchResDone - None
Despite "Bound" being "True".
I'm using Python 2.7. Any help would be greatly appreciated!
Answer: You didn't provide any username or password in the connection object, so an
anonymous bind is performed.
Try adding `user=xxx` and `password=yyy` to the Connection definition in the
`with` statement (ldap3's keyword for the bind identity is `user`).
|
Python limit on input function in terminal
Question: I am currently using the input function to capture user input in the terminal
and copy it to the clipboard, where it is then used by another application.
Weirdly, it appears that there is a limit to the number of characters that you
can enter when using input in the terminal when running the script in batch
mode (~100). I was hoping someone could let me know what controls this limit
and how to adjust it, as there doesn't appear to be any limit when I run the
code interactively.
Using Python 3.4 running in PowerShell on Windows 7 64-bit.
Edit: image to help clarify. When running in batch, the "d"s were capped; I
could not add any more to the input. However, when running interactively I had
no limit on how many "k"s I could type.
Testing.py is simply
x = input("Enter string:")
[screenshot](http://i.stack.imgur.com/zMYa9.png)
Thanks
C
Answer: Just doing it in the command prompt, I'm not seeing any limits for either the
input prompt or the value given to the input; it might be a PowerShell issue.
Test script I used:
import os
var = ""
for i in range(0,500):
var += "Input"
var += "?: "
var2 = input(var)
print(var2)
os.system('pause')
Edit: I don't see it on the value-given side either.
|
Why is subprocess.run output different from shell output of same command?
Question: I am using `subprocess.run()` for some automated testing. Mostly to automate
doing:
dummy.exe < file.txt > foo.txt
diff file.txt foo.txt
If you execute the above redirection in a shell, the two files are always
identical. But whenever `file.txt` is too long, the below Python code does not
return the correct result.
This is the Python code:
import subprocess
import sys
def main(argv):
exe_path = r'dummy.exe'
file_path = r'file.txt'
with open(file_path, 'r') as test_file:
stdin = test_file.read().strip()
p = subprocess.run([exe_path], input=stdin, stdout=subprocess.PIPE, universal_newlines=True)
out = p.stdout.strip()
err = p.stderr
if stdin == out:
print('OK')
else:
print('failed: ' + out)
if __name__ == "__main__":
main(sys.argv[1:])
Here is the C++ code in `dummy.cc`:
#include <iostream>
int main()
{
int size, count, a, b;
std::cin >> size;
std::cin >> count;
std::cout << size << " " << count << std::endl;
for (int i = 0; i < count; ++i)
{
std::cin >> a >> b;
std::cout << a << " " << b << std::endl;
}
}
`file.txt` can be anything like this:
1 100000
0 417
0 842
0 919
...
The second integer on the first line is the number of lines following, hence
here `file.txt` will be 100,001 lines long.
**Question:** Am I misusing subprocess.run() ?
**Edit**
My exact Python code after comment (newlines,rb) is taken into account:
import subprocess
import sys
import os
def main(argv):
base_dir = os.path.dirname(__file__)
exe_path = os.path.join(base_dir, 'dummy.exe')
file_path = os.path.join(base_dir, 'infile.txt')
out_path = os.path.join(base_dir, 'outfile.txt')
with open(file_path, 'rb') as test_file:
stdin = test_file.read().strip()
p = subprocess.run([exe_path], input=stdin, stdout=subprocess.PIPE)
out = p.stdout.strip()
if stdin == out:
print('OK')
else:
with open(out_path, "wb") as text_file:
text_file.write(out)
if __name__ == "__main__":
main(sys.argv[1:])
Here is the first diff:
[](http://i.stack.imgur.com/Fk2IW.jpg)
Here is the input file: <https://drive.google.com/open?id=0B--
mU_EsNUGTR3VKaktvQVNtLTQ>
Answer: To reproduce, the shell command:
subprocess.run("dummy.exe < file.txt > foo.txt", shell=True, check=True)
without the shell in Python:
with open('file.txt', 'rb', 0) as input_file, \
open('foo.txt', 'wb', 0) as output_file:
subprocess.run(["dummy.exe"], stdin=input_file, stdout=output_file, check=True)
It works with arbitrary large files.
You could use `subprocess.check_call()` in this case (available since Python
2), instead of `subprocess.run()` that is available only in Python 3.5+.
> Works very well, thanks. But then why was the original failing? Pipe buffer
> size as in Kevin's answer?
It has nothing to do with OS pipe buffers. The warning from the subprocess
docs that @Kevin J. Chase cites is unrelated to `subprocess.run()`. You should
care about OS pipe buffers only if you use `process = Popen()` and _manually_
read()/write() via multiple pipe streams (`process.stdin/.stdout/.stderr`).
It turns out that the observed behavior is due to [Windows bug in the
Universal
CRT](https://connect.microsoft.com/VisualStudio/feedback/details/1902345/regression-
fread-on-a-pipe-drops-some-newlines). Here's the same issue that is reproduced
without Python: [Why would redirection work where piping
fails?](http://stackoverflow.com/q/36781891/4279)
As said in [the bug
description](https://connect.microsoft.com/VisualStudio/feedback/details/1902345/regression-
fread-on-a-pipe-drops-some-newlines), to workaround it:
* _"use a binary pipe and do text mode CRLF => LF translation manually on the reader side"_ or use `ReadFile()` directly instead of `std::cin`
* or wait for Windows 10 update this summer (where the bug should be fixed)
* or use a different C++ compiler e.g., there is [no issue if you use `g++` on Windows](https://gist.github.com/zed/dd44ade13d313ceb8ba8e384ba1ff1ac)
The bug affects only text pipes i.e., the code that uses `<>` should be fine
(`stdin=input_file, stdout=output_file` should still work or it is some other
bug).
|
Reading with xlrd in python
Question: I wrote this program to read a column from an Excel file and then write it into
a txt file:
import xlrd, sys
text_file = open("Output.txt", "w")
isotope = xlrd.open_workbook(sys.argv[1])
first_sheet=isotope.sheet_by_index(0)
x= []
for rownum in range(first_sheet.nrows):
x.append(first_sheet.cell(rownum, 1))
for item in x:
text_file.write("%s\n" % item)
text_file.close()
It reads the column correctly but writes it like so:
number:517.0
number:531.0
number:517.0
number:520.0
number:513.0
number:514.0
number:522.0
Can I read it in a way that it just writes the value and not "number:"? I
could just cut out the first 7 characters of every line, but that seems kind
of inefficient. Thanks for the help!
Answer: Also, if you want a way to read all the values of a row in one shot,
you can take `first_sheet` and do:
first_sheet.row_values(index_of_row)
This will return a list with all the values of the index_of_row.
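As for the `number:` prefix itself: it appears because the loop collects and
prints the `Cell` objects; a cell's `.value` attribute holds just the number,
so in the original loop you can write:

    x.append(first_sheet.cell(rownum, 1).value)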
|
Pandas GroupBy Two Text Columns And Return The Max Rows Based On Counts
Question: I'm trying to figure out the max `(First_Word, Group)` pairs:
import pandas as pd
df = pd.DataFrame({'First_Word': ['apple', 'apple', 'orange', 'apple', 'pear'],
'Group': ['apple bins', 'apple trees', 'orange juice', 'apple trees', 'pear tree'],
'Text': ['where to buy apple bins', 'i see an apple tree', 'i like orange juice',
'apple fell out of the tree', 'partrige in a pear tree']},
columns=['First_Word', 'Group', 'Text'])
First_Word Group Text
0 apple apple bins where to buy apple bins
1 apple apple trees i see an apple tree
2 orange orange juice i like orange juice
3 apple apple trees apple fell out of the tree
4 pear pear tree partrige in a pear tree
Then I do a `groupby`:
grouped = df.groupby(['First_Word', 'Group']).count()
Text
First_Word Group
apple apple bins 1
apple trees 2
orange orange juice 1
pear pear tree 1
And I now want to filter it down to only unique index rows that have the max
`Text` counts. Below you'll notice `apple bins` was removed because `apple
trees` has the max value.
Text
First_Word Group
apple apple trees 2
orange orange juice 1
pear pear tree 1
This [max value of group](http://stackoverflow.com/questions/15707746/python-
how-can-i-get-rows-which-have-the-max-value-of-the-group-to-which-they)
question is similar but when I try something like this:
df.groupby(["First_Word", "Group"]).count().apply(lambda t: t[t['Text']==t['Text'].max()])
I get an error: `KeyError: ('Text', 'occurred at index Text')`. If I add
`axis=1` to the `apply` I get `IndexError: ('index out of bounds', 'occurred
at index (apple, apple bins)')`
Answer: Given `grouped`, you now want to group by the `First Word` index level, and
find the index labels of the maximum row for each group (using
[`idxmax`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.DataFrame.idxmax.html)):
In [39]: grouped.groupby(level='First_Word')['Text'].idxmax()
Out[39]:
First_Word
apple (apple, apple trees)
orange (orange, orange juice)
pear (pear, pear tree)
Name: Text, dtype: object
You can then use [`grouped.loc`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.DataFrame.loc.html) to select rows from `grouped`
by index label:
import pandas as pd
df = pd.DataFrame(
{'First_Word': ['apple', 'apple', 'orange', 'apple', 'pear'],
'Group': ['apple bins', 'apple trees', 'orange juice', 'apple trees', 'pear tree'],
'Text': ['where to buy apple bins', 'i see an apple tree', 'i like orange juice',
'apple fell out of the tree', 'partrige in a pear tree']},
columns=['First_Word', 'Group', 'Text'])
grouped = df.groupby(['First_Word', 'Group']).count()
result = grouped.loc[grouped.groupby(level='First_Word')['Text'].idxmax()]
print(result)
yields
Text
First_Word Group
apple apple trees 2
orange orange juice 1
pear pear tree 1
|
virtualenv isolated app somehow finds global django installation instead of local one
Question: 1. I have globally installed django v1.8 on (ubuntu + apache + mod_wsgi)
2. I have a virtualenv _'myenv'_ with --no-site-packages (which means it is isolated from global packages) with django 1.9 installed inside
here is my app's apache config
WSGIPythonPath /var/djp/myapp:/root/.virtualenvs/myapp/lib/python2.7/site-packages
<VirtualHost *:80>
WSGIDaemonProcess mydomain.com python-path=/var/djp/myapp:/root/.virtualenvs/myenv/lib/python2.7/site-packages
WSGIProcessGroup mydomain.com
WSGIPassAuthorization On
WSGIScriptAlias / /var/djp/myapp/myapp/wsgi.py
ServerName mydomain.com
ServerAlias *.mydomain.com
ErrorLog ${APACHE_LOG_DIR}/myapp/myapp_error.log
LogLevel info
</VirtualHost>
If I then switch to myenv and check the version in Python, I get:
>>> import django
>>> django.VERSION
(1, 9, 7, 'final', 0)
>>> import sys
>>> print sys.path
*the path is ok*
But if I open up a webpage of my app I see the following
Django Version: 1.8.3
Python Executable: /usr/bin/python
Python Version: 2.7.3
Python Path:
['/var/djp/myapp', # - ok
'/root/.virtualenvs/myenv/lib/python2.7/site-packages', # - ok
'/usr/lib/python2.7', # - not ok (global)
'/usr/lib/python2.7/plat-linux2', # - not ok (global)
'/usr/lib/python2.7/lib-tk', # - not ok (global)
'/usr/lib/python2.7/lib-old', # - not ok (global)
'/usr/lib/python2.7/lib-dynload', # - not ok (global)
'/usr/local/lib/python2.7/dist-packages', # - not ok (global)
'/usr/lib/python2.7/dist-packages', # - not ok (global)
'/usr/lib/python2.7/dist-packages/PIL', # - not ok (global)
'/usr/lib/pymodules/python2.7']
I just don't get it: why does it execute Django 1.8 first? My local site-packages
should be found first. My first thought was that Python just couldn't find
Django 1.9 in myenv. But I can easily import it from the Python shell as shown
above!
Here is the output of pip freeze in myenv:
Django==1.9.7
argparse==1.2.1
distribute==0.6.24
django-crispy-forms==1.6.0
djangorestframework==3.3.3
psycopg2==2.6.1
wsgiref==0.1.2
Everything is in its place. I have no idea what is happening. Please help.
Answer: Try:
WSGIRestrictEmbedded On
<VirtualHost *:80>
WSGIDaemonProcess mydomain.com python-home=/root/.virtualenvs/myenv python-path=/var/djp/myapp
WSGIProcessGroup mydomain.com
WSGIPassAuthorization On
WSGIScriptAlias / /var/djp/myapp/myapp/wsgi.py
ServerName mydomain.com
ServerAlias *.mydomain.com
ErrorLog ${APACHE_LOG_DIR}/myapp/myapp_error.log
LogLevel info
</VirtualHost>
That is, turn off interpreter initialisation in embedded-mode processes and
then use the `python-home` option to say where the virtual environment is.
The remaining question is whether you are using a non system Python
installation. If you are and mod_wsgi was actually compiled for system Python
and not your separate one, more work is needed.
A further issue may also be that the `/root` directories are not actually
readable by the Apache user.
|
Send file contents over ftp python
Question: I have this Python Script
import os
import random
import ftplib
from tkinter import Tk
# now, we will grab all Windows clipboard data, and put to var
clipboard = Tk().clipboard_get()
# print(clipboard)
# this feature will only work if a string is in the clipboard. not files.
# so if "hello, world" is copied to the clipboard, then it would work. however, if the target has copied a file or something
# then it would come back an error, and the rest of the script would come back false (therefore shutdown)
random_num = random.randrange(100, 1000, 2)
random_num_2 = random.randrange(1, 9999, 5)
filename = "capture_clip" + str(random_num) + str(random_num_2) + ".txt"
file = open(filename, 'w') # clears file, or create if not exist
file.write(clipboard) # write all contents of var "foo" to file
file.close() # close file after printing
# let's send this file over ftp
session = ftplib.FTP('ftp.example.com','ftp_user','ftp_password')
session.cwd('//logs//') # move to correct directory
f = open(filename, 'r')
session.storbinary('STOR ' + filename, f)
f.close()
session.quit()
The script should send the file it creates (named by the variable
`filename`, e.g. "capture_clip5704061.txt") to my FTP server, but the
contents of the file on the local system do not match the file on the FTP
server. As you can see, I use the ftplib module. Here is my error:
Traceback (most recent call last):
File "script.py", line 33, in<module>
session.storbinary('STOR ' + filename, f)
File "C:\Users\willi\AppData\Local\Programs\Python\Python36\lib\ftplib.py", line 507, in storbinary
conn.sendall(buf)
TypeError: a bytes-like object is required, not 'str'
Answer: Your library expects the file to be open in binary mode, it appears. Try the
following:
f = open(filename, 'rb')
This ensures that the data read from the file is a `bytes` object rather than
`str` (for text).
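For example, the upload portion of the question's script then becomes:

    f = open(filename, 'rb')
    session.storbinary('STOR ' + filename, f)
    f.close()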
|
Bytecode optimization
Question: Here are 2 simple examples. In the first example the `append` method produces
a LOAD_ATTR instruction inside the loop; in the second it is only produced once
and the result is saved in a variable (i.e. cached). _Reminder: I know there is an
`extend` method for this task, which is much faster than this._
setup = \
"""LIST = []
ANOTHER_LIST = [i for i in range(10**7)]
def appender(list, another_list):
for elem in another_list:
list.append(elem)
def appender_optimized(list, another_list):
append_method = list.append
for elem in another_list:
append_method(elem)"""
import timeit
print(timeit.timeit("appender(LIST, ANOTHER_LIST)", setup=setup, number=10))
print(timeit.timeit("appender_optimized(LIST, ANOTHER_LIST)", setup=setup, number=10))
Results:
11.92684596051036
7.384205785584728
A 4.6 second difference (even for such a big list) is no joke; in my opinion
such a difference cannot be counted as "micro-optimization". Why does Python
not do it for me? Because bytecode must be an exact reflection of the source code?
Does the compiler even optimize anything? For example,
def te():
a = 2
a += 1
a += 1
a += 1
a += 1
produces
LOAD_FAST 0 (a)
LOAD_CONST 2 (1)
INPLACE_ADD
STORE_FAST 0 (a)
4 times instead of optimizing it into `a += 4`. Or does it optimize some well-known
things, like producing a bit shift instead of multiplying by 2? Am I misunderstanding
something about basic language concepts?
Answer: Python is a dynamic language. This means that you have _a lot_ of freedom in
how you write code. Due to the crazy amounts of introspection that python
exposes (which are incredibly useful BTW), many optimizations simply cannot be
performed. For example, in your first example, python has no way of knowing
what datatype `list` is going to be when you call it. I could create a really
weird class:
class CrazyList(object):
def append(self, value):
def new_append(value):
print "Hello world"
self.append = new_append
Obviously this isn't useful, but I _can_ write this and it _is_ valid python.
If I were to pass this type to your above function, the code would be
different than the version where you "cache" the `append` function.
We could write a similar example for `+=` (it could have side-effects that
wouldn't get executed if the "compiler" optimized it away).
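For instance, a minimal sketch (a hypothetical class, not from the question) of a `+=` with a visible side effect that folding the four `a += 1` statements into one `a += 4` would change:

    class Noisy(object):
        def __iadd__(self, other):
            # side effect: runs once per += statement
            print("adding", other)
            return self

    a = Noisy()
    a += 1
    a += 1  # prints twice; a folded "a += 2" would print only once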
In order to optimize efficiently, python would have to know your types ... And
for a vast majority of your code, it has no (fool-proof) way to get the type
data so it doesn't even try for most optimizations.
* * *
Please note that this _is_ a micro optimization (and a [well
documented](https://www.python.org/doc/essays/list2str/) one). It is useful in
some cases, but in most cases it is unnecessary if you write idiomatic python.
e.g. your `list` example is best written using the `.extend` method as you've
noted in your post. Most of the time, if you have a loop that is tight enough
for the lookup time of a method to matter in your overall program runtime,
then either you should find a way to rewrite just that loop to be more
efficient or even push the computation into a faster language (e.g. `C`). Some
libraries are _really_ good at this (`numpy`).
* * *
With that said, there are some optimizations that _can_ be done safely by the
"compiler" in a stage known as the "peephole optimizer". e.g. It will do some
simple constant folding for you:
>>> import dis
>>> def foo():
... a = 5 * 6
...
>>> dis.dis(foo)
2 0 LOAD_CONST 3 (30)
3 STORE_FAST 0 (a)
6 LOAD_CONST 0 (None)
9 RETURN_VALUE
In some cases, it'll cache values for later use, or turn one type of object
into another:
>>> def translate_tuple(a):
... return a in [1, 3]
...
>>> import dis
>>> dis.dis(translate_tuple)
2 0 LOAD_FAST 0 (a)
3 LOAD_CONST 3 ((1, 3))
6 COMPARE_OP 6 (in)
9 RETURN_VALUE
(Note the list got turned into a `tuple` and cached -- In python3.2+ `set`
literals can also get turned into `frozenset` and cached).
|
python nosetests AssertionError: None != 'hmmm...'
Question: below is the test I am trying to run:
def test_hmm_method_returns_hmm(self):
#set_trace()
assert_equals( orphan_elb_finder.hmm(), 'hmmm...')
When I run the code I get the following output:
D:\dev\git_repos\platform-health\tests\unit\test_orphan_elb_finder>nosetests
.F
======================================================================
FAIL: test_hmm_method_returns_hmm (test_orphan_elb_finder.test_basic.BasicTestSuite)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\dev\git_repos\platform-health\tests\unit\test_orphan_elb_finder\test_basic.py", line 18, in test_hmm_method_returns_hmm
self.assertEqual(orphan_elb_finder.hmm(), 'hmmm...')
AssertionError: None != 'hmmm...'
-------------------- >> begin captured stdout << ---------------------
hmmm...
--------------------- >> end captured stdout << ----------------------
----------------------------------------------------------------------
Ran 2 tests in 0.002s
FAILED (failures=1)
It seems that orphan_elb_finder.hmm() evaluates to None, which is weird
because when I uncomment the set_trace and run the command manually it gives
me the correct output:
-> assert_equals( orphan_elb_finder.hmm(), 'hmmm...')
(Pdb) orphan_elb_finder.hmm()
hmmm...
But when I try and run the same assertion in the debugger:
(Pdb) assert_equals(orphan_elb_finder.hmm(), 'hmmm...')
hmmm...
*** AssertionError: None != 'hmmm...'
I have a feeling that it has something to do with the way stdout is used but
I'm a little bit lost as to how to find out more information / fix this
problem.
Below is the orphan_elb_finder methods:
# -*- coding: utf-8 -*-
def get_hmm():
"""Get a thought."""
return 'hmmm...'
def hmm():
"""Contemplation..."""
print get_hmm()
Any help would be greatly appreciated
UPDATE:
So, following Blckknght's response, I have tried to call get_hmm instead of hmm().
But when I try to call the method I get the below error
assert_equals(orphan_elb_finder.get_hmm(), 'hmmm...')
AttributeError: 'module' object has no attribute 'get_hmm'
Then I try and check available methods
(Pdb) dir(orphan_elb_finder)
['__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', 'core', 'hmm']
It seems that the module does not expose the get_hmm() method for some
reason?
UPDATE 2:
Found out what was going on. Inside my orphan_elb_finder package, inside
`__init__.py`, I had
from .core import hmm
changed it to
from .core import get_hmm
and it seemed to work. Somehow, though, I think the author of the package
construction intended get_hmm to be a private method. Not sure how I would
have tested it if that is the case, seeing as get_hmm returns None?
Answer: The `hmm` method, unlike the `get_hmm` method, does not have a `return`
statement. It `print`s the string `"hmmm..."`, but returns `None`.
Compare calling `get_hmm()` and calling `hmm()`. The former will print
`'hmmm...'` with the quotation marks. That's because it's returning the
string, and the interactive console is printing the `repr` of the return
value. In contrast, when you call `hmm()`, it does its own printing (with no
quotation marks), then returns `None` (the default return value when nothing
else is specified). The interactive console skips printing out the `repr` of
the return value when it is `None`, so there's nothing extra printed.
>>> get_hmm()
'hmmm...'
>>> hmm()
hmmm...
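If you do want `hmm()` itself to be testable by its return value, one option (a sketch, not necessarily what the package author intended) is to have it return the thought instead of printing it:

    def hmm():
        """Contemplation..."""
        return get_hmm()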
|
Python import old version package instead of new one
Question: I installed the libraries numpy 1.11.0, pandas 0.18.1 and scipy 0.17.1 with pip
into site-packages. The problem is that when I import numpy and scipy in
my project, an old version which has also been installed is imported instead
of the new version:
import numpy as np
import pandas as pd
import scipy as sc
print(np.__version__)
print(np.__file__)
print(pd.__version__)
print(pd.__file__)
print(sc.__version__)
print(sc.__file__)
output:
1.8.0rc1
/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/__init__.pyc
0.18.1
/Library/Python/2.7/site-packages/pandas/__init__.pyc
0.13.0b1
/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/__init__.pyc
As only one pandas is installed, the newest version is imported correctly.
[](http://i.stack.imgur.com/lIo0j.jpg)
Both the system Python and site-packages have numpy and scipy.
How can I fix the problem? Thanks!
Answer: You can use [virtualenv](https://virtualenv.pypa.io/en/stable/), install the
libraries you want in the version you want.
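For example, a minimal sketch of that workflow in the terminal (the environment name `myenv` is just a placeholder), pinning the versions from the question:

    virtualenv myenv
    source myenv/bin/activate
    pip install numpy==1.11.0 scipy==0.17.1 pandas==0.18.1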
|
Selecting values from a JSON file in Python
Question: I am getting JIRA data using the following python code,
how do I store the response for more than one key (my example shows only one
KEY but in general I get a lot of data) and print **only** the values
corresponding to `total, key, customfield_12830, summary`?
import requests
import json
import logging
import datetime
import base64
import urllib
serverURL = 'https://jira-stability-tools.company.com/jira'
user = 'username'
password = 'password'
query = 'project = PROJECTNAME AND "Build Info" ~ BUILDNAME AND assignee=ASSIGNEENAME'
jql = '/rest/api/2/search?jql=%s' % urllib.quote(query)
response = requests.get(serverURL + jql,verify=False,auth=(user, password))
print response.json()
`response.json()` OUTPUT:-
<http://pastebin.com/h8R4QMgB>
Answer: From the link you pasted to pastebin and from the JSON that I saw, the response
has an `issues` list, each entry containing `key`, `fields` (which holds the
custom fields), `self`, `id` and `expand`.
You can simply iterate through this response and extract values for the keys
you want, like this:
data = response.json()
issues = data.get('issues', list())
x = list()
for issue in issues:
temp = {
'key': issue['key'],
'customfield': issue['fields']['customfield_12830'],
'total': issue['fields']['progress']['total']
}
x.append(temp)
print(x)
**x** is a list of dictionaries containing the data for the fields you mentioned.
Let me know if I have been unclear somewhere or if what I have given is not what
you are looking for.
**PS:** It is always advisable to use **dict.get('keyname', None)** to get
values, as you can always supply a default value if the key is not found. For this
solution I didn't do it, as I just wanted to show the approach.
**Update**: In the comments you (OP) mentioned that it gives an AttributeError. Try
this code:
data = response.json()
issues = data.get('issues', list())
x = list()
for issue in issues:
temp = dict()
key = issue.get('key', None)
if key:
temp['key'] = key
fields = issue.get('fields', None)
if fields:
customfield = fields.get('customfield_12830', None)
temp['customfield'] = customfield
progress = fields.get('progress', None)
if progress:
total = progress.get('total', None)
temp['total'] = total
x.append(temp)
print(x)
|
How to apply a python file execution over selected file in OSX Terminal?
Question: I am asked to create a Python file abc.py, and then execute that Python
file over filename.txt.
The code in terminal (OSX):
$ python abc.py filename.txt
How do I write the code in the abc.py file such that it will read
"filename.txt" from the command line, as an input to the Python code?
Thanks much.
Answer: The simplest way is with the `sys` library. Arguments to the python
interpreter are stored in
[`sys.argv`](https://docs.python.org/2/library/sys.html#sys.argv)
import sys
def main():
# sys.argv[0] is the path to this script (i.e. /path/to/abc.py)
filepath = sys.argv[1]
print filepath
if __name__ == '__main__':
main()
If you wanted to get fancier, you could use the
[`argparse`](https://docs.python.org/2/library/argparse.html) library
import argparse
def main():
parser = argparse.ArgumentParser()
parser.add_argument('filepath')
args = parser.parse_args()
print args.filepath
|
Python3 convert Julian date to standard date
Question: I have a string with a Julian date like `"16152"`, meaning the 152nd day of 2016, or
`"15234"`, meaning the 234th day of 2015.
How can I convert these Julian dates to format like `20/05/2016` using Python
3 standard library?
I can get the year 2016 like this: `date = 20 + julian[0:1]`, where `julian`
is the string containing the Julian date, but how can I calculate the rest
according to the 1st of January?
Answer: The
[`.strptime()`](https://docs.python.org/3.5/library/datetime.html#datetime.datetime.strptime)
method supports the _day of year_ format:
>>> import datetime
>>>
>>> datetime.datetime.strptime('16234', '%y%j').date()
datetime.date(2016, 8, 21)
And then you can use
[`strftime()`](https://docs.python.org/3.5/library/datetime.html#datetime.date.strftime)
to reformat the date
>>> date = datetime.date(2016, 8, 21)
>>> date.strftime('%d/%m/%Y')
'21/08/2016'
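Putting both steps together for the `'16152'` example from the question:

    >>> import datetime
    >>> datetime.datetime.strptime('16152', '%y%j').date().strftime('%d/%m/%Y')
    '31/05/2016'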
|
Different models with gensim Word2Vec on python
Question: I am trying to apply the word2vec model implemented in the library gensim in
python. I have a list of sentences (each sentence is a list of words).
For instance let us have:
sentences=[['first','second','third','fourth']]*n
and I implement two identical models:
model = gensim.models.Word2Vec(sentences, min_count=1,size=2)
model2=gensim.models.Word2Vec(sentences, min_count=1,size=2)
I realize that the models sometimes are the same, and sometimes are different,
depending on the value of n.
For instance, if n=100 I obtain
print(model['first']==model2['first'])
True
while, for n=1000:
print(model['first']==model2['first'])
False
How is it possible?
Thank you very much!
Answer: Looking at the `gensim`
[documentation](https://radimrehurek.com/gensim/models/word2vec.html), there
is some randomization when you run `Word2Vec`:
> `seed` = for the random number generator. Initial vectors for each word are
> seeded with a hash of the concatenation of word + str(seed). Note that for a
> fully deterministically-reproducible run, you must also limit the model to a
> single worker thread, to eliminate ordering jitter from OS thread
> scheduling.
Thus if you want to have reproducible results, you will need to set the seed:
In [1]: import gensim
In [2]: sentences=[['first','second','third','fourth']]*1000
In [3]: model1 = gensim.models.Word2Vec(sentences, min_count = 1, size = 2)
In [4]: model2 = gensim.models.Word2Vec(sentences, min_count = 1, size = 2)
In [5]: print(all(model1['first']==model2['first']))
False
In [6]: model3 = gensim.models.Word2Vec(sentences, min_count = 1, size = 2, seed = 1234)
In [7]: model4 = gensim.models.Word2Vec(sentences, min_count = 1, size = 2, seed = 1234)
In [11]: print(all(model3['first']==model4['first']))
True
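Per the docs quoted above, a fully deterministic run also needs a single worker thread; a minimal sketch (`seed` and `workers` are both standard `Word2Vec` parameters):

    model5 = gensim.models.Word2Vec(sentences, min_count=1, size=2, seed=1234, workers=1)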
|
Golang: How can I write a map which is mixed with string and array?
Question: I am a beginner with `Go`. I wrote this code, but an error occurred. How should
I write a map which contains `string` and `[]string` properties?
package main
import (
"fmt"
)
func main() {
prof := make(map[string]map[string]interface{})
prof["me"] = map[string]string{
"name": "John Lennon",
"email": "[email protected]",
"phone": "090-0000-0000",
"occupation": []string{"Programmer", "System Engineer"},
"language": []string{"Go", "Java", "Python", "PHP", "JavaScript", "SQL"},
"hobby": []string{"Photography", "Traveling", "Fishing", "Eating"},
}
fmt.Println(prof)
}
This error is from [Ideone](https://ideone.com/Q0roBI).
# _/home/NcWlmE
./prog.go:14: cannot use []string literal (type []string) as type string in map value
./prog.go:15: cannot use []string literal (type []string) as type string in map value
./prog.go:16: cannot use []string literal (type []string) as type string in map value
Answer: You're assigning the wrong kind of map. Try:
prof["me"] = map[string]interface{}{
^^^^^^^^^^^ instead of string
|
update tkinter label with mouse click
Question: I am a beginner in Python and am trying to code a tic-tac-toe game with
`tkinter`. My class named `Cell` extends `Tkinter.Label`. The `Cell` class
contains data fields `emptyLabel`, `xLabel` and `oLabel`. This is my code so
far for class `Cell`:
from tkinter import *
class Cell(Label):
def __init__(self,container):
super().__init__(container)
self.emptyImage=PhotoImage(file="C:\\Python34\\image\\empty.gif")
self.x=PhotoImage(file="C:\\Python34\\image\\x.gif")
self.o=PhotoImage(file="C:\\Python34\\image\\o.gif")
def getEmptyLabel(self):
return self.emptyImage
def getXLabel(self):
return self.x
def getOLabel(self):
return self.o
and my main class is as follows:
from tkinter import *
from Cell import Cell
class MainGUI:
def __init__(self):
window=Tk()
window.title("Tac Tic Toe")
self.frame1=Frame(window)
self.frame1.pack()
for i in range (3):
for j in range (3):
self.cell=Cell(self.frame1)
self.cell.config(image=self.cell.getEmptyLabel())
self.cell.grid(row=i,column=j)
self.cell.bind("<Button-1>",self.flip)
frame2=Frame(window)
frame2.pack()
self.lblStatus=Label(frame2,text="Game Status").pack()
window.mainloop()
def flip(self,event):
self.cell.config(image=self.cell.getXLabel())
MainGUI()
The code displays an empty cell image in each cell of the 3x3 grid, but when I click a
cell to update the empty cell image to the X image, it currently only happens on
the empty label in row 3, column 3.
My question is: how do I change the label on a mouse click?
Answer: You keep reassigning `self.cell`, and then when that part is done you bind a
mouse button to the last cell. Bind the mouse button to each cell within the
loop.
The callback function is also hard-coded to only look at `self.cell`, which
you kept reassigning to end up with only the last one. In addition to binding
the mouse button to each cell, you'll have to change the callback function to
look at the proper cell.
In `__init__`:
for i in range (3):
for j in range (3):
cell=Cell(self.frame1)
cell.config(image=cell.getEmptyLabel())
cell.grid(row=i,column=j)
cell.bind("<Button-1>", lambda event, cell=cell: self.flip(cell))
Or, without using `lambda`:
for i in range (3):
for j in range (3):
cell=Cell(self.frame1)
cell.config(image=cell.getEmptyLabel())
cell.grid(row=i,column=j)
def temp(event, cell=cell):
self.flip(cell)
cell.bind("<Button-1>", temp)
In `flip`:
def flip(self, cell):
cell.config(image=cell.getXLabel())
|
Kivy - Touch not answer everytime on android
Question: I have an app which is working fine, but sometimes a touch gets no response,
no matter where (Button, Tabbed Panel...). This happens on the other Android devices I
tested, with different versions and different phones. Sometimes I touch once
and the response is fine; sometimes I need to touch two or three times. It's not just
me: other people using the same app on other phones had the same problem. I
built it with buildozer and have no idea why it behaves this way. I built and
installed touch tracer (the demo app) and all the touches were recognized, so I
suppose the problem is not with buildozer, but just in case, this is my
buildozer.spec (for my app, not for touch tracer):
[app]
# (str) Title of your application
title = DAP
# (str) Package name
package.name = DAP
# (str) Package domain (needed for android/ios packaging)
package.domain = com.doatlanticoaopacifico
# (str) Source code where the main.py live
source.dir = .
# (list) Source files to include (let empty to include all the files)
source.include_exts = py,png,jpg,kv,atlas,ttf,db
# (list) Source files to exclude (let empty to not exclude anything)
#source.exclude_exts = spec
# (list) List of directory to exclude (let empty to not exclude anything)
#source.exclude_dirs = tests, bin
# (list) List of exclusions using pattern matching
#source.exclude_patterns = license,images/*/*.jpg
# (str) Application versioning (method 1)
version = 0.1
# (str) Application versioning (method 2)
# version.regex = __version__ = ['"](.*)['"]
# version.filename = %(source.dir)s/main.py
# (list) Application requirements
# comma seperated e.g. requirements = sqlite3,kivy
requirements = sqlite3,kivy,datetime,plyer,ecdsa,paramiko
# (str) Custom source folders for requirements
# Sets custom source for any requirements with recipes
# requirements.source.kivy = ../../kivy
# (list) Garden requirements
#garden_requirements =
# (str) Presplash of the application
presplash.filename = %(source.dir)s/data/figura.png
# (str) Icon of the application
icon.filename = %(source.dir)s/data/logo.png
# (str) Supported orientation (one of landscape, portrait or all)
orientation = portrait
# (list) List of service to declare
#services = NAME:ENTRYPOINT_TO_PY,NAME2:ENTRYPOINT2_TO_PY
#
# OSX Specific
#
#
# author = Β© Copyright Info
#
# Android specific
#
# (bool) Indicate if the application should be fullscreen or not
fullscreen = 0
# (list) Permissions
android.permissions = INTERNET,ACCESS_NETWORK_STATE,CAMERA
# (int) Android API to use
#android.api = 19
# (int) Minimum API required
#android.minapi = 9
# (int) Android SDK version to use
#android.sdk = 20
# (str) Android NDK version to use
#android.ndk = 9c
# (bool) Use --private data storage (True) or --dir public storage (False)
android.private_storage = False
# (str) Android NDK directory (if empty, it will be automatically downloaded.)
#android.ndk_path =
# (str) Android SDK directory (if empty, it will be automatically downloaded.)
#android.sdk_path =
# (str) ANT directory (if empty, it will be automatically downloaded.)
#android.ant_path =
# (str) python-for-android git clone directory (if empty, it will be automatically cloned from github)
#android.p4a_dir =
# (list) python-for-android whitelist
#android.p4a_whitelist =
# (bool) If True, then skip trying to update the Android sdk
# This can be useful to avoid excess Internet downloads or save time
# when an update is due and you just want to test/build your package
# android.skip_update = False
# (str) Android entry point, default is ok for Kivy-based app
#android.entrypoint = org.renpy.android.PythonActivity
# (list) List of Java .jar files to add to the libs so that pyjnius can access
# their classes. Don't add jars that you do not need, since extra jars can slow
# down the build process. Allows wildcards matching, for example:
# OUYA-ODK/libs/*.jar
#android.add_jars = foo.jar,bar.jar,path/to/more/*.jar
# (list) List of Java files to add to the android project (can be java or a
# directory containing the files)
#android.add_src =
# (str) python-for-android branch to use, if not master, useful to try
# not yet merged features.
#android.branch = master
# (str) OUYA Console category. Should be one of GAME or APP
# If you leave this blank, OUYA support will not be enabled
#android.ouya.category = GAME
# (str) Filename of OUYA Console icon. It must be a 732x412 png image.
#android.ouya.icon.filename = %(source.dir)s/data/ouya_icon.png
# (str) XML file to include as an intent filters in <activity> tag
#android.manifest.intent_filters =
# (list) Android additionnal libraries to copy into libs/armeabi
#android.add_libs_armeabi = libs/android/*.so
#android.add_libs_armeabi_v7a = libs/android-v7/*.so
#android.add_libs_x86 = libs/android-x86/*.so
#android.add_libs_mips = libs/android-mips/*.so
# (bool) Indicate whether the screen should stay on
# Don't forget to add the WAKE_LOCK permission if you set this to True
#android.wakelock = False
# (list) Android application meta-data to set (key=value format)
#android.meta_data =
# (list) Android library project to add (will be added in the
# project.properties automatically.)
#android.library_references =
# (str) Android logcat filters to use
#android.logcat_filters = *:S python:D
# (bool) Copy library instead of making a libpymodules.so
#android.copy_libs = 1
#
# iOS specific
#
# (str) Name of the certificate to use for signing the debug version
# Get a list of available identities: buildozer ios list_identities
#ios.codesign.debug = "iPhone Developer: <lastname> <firstname> (<hexstring>)"
# (str) Name of the certificate to use for signing the release version
#ios.codesign.release = %(ios.codesign.debug)s
[buildozer]
# (int) Log level (0 = error only, 1 = info, 2 = debug (with command output))
log_level = 2
# (int) Display warning if buildozer is run as root (0 = False, 1 = True)
warn_on_root = 1
# -----------------------------------------------------------------------------
# List as sections
#
# You can define all the "list" as [section:key].
# Each line will be considered as a option to the list.
# Let's take [app] / source.exclude_patterns.
# Instead of doing:
#
#[app]
#source.exclude_patterns = license,data/audio/*.wav,data/images/original/*
#
# This can be translated into:
#
#[app:source.exclude_patterns]
#license
#data/audio/*.wav
#data/images/original/*
#
# -----------------------------------------------------------------------------
# Profiles
#
# You can extend section / key with a profile
# For example, you want to deploy a demo version of your application without
# HD content. You could first change the title to add "(demo)" in the name
# and extend the excluded directories to remove the HD content.
#
#[app@demo]
#title = My Application (demo)
#
#[app:source.exclude_patterns@demo]
#images/hd/*
#
# Then, invoke the command line with the "demo" profile:
#
#buildozer --profile demo android debug
I can provide all the code and the apk for testing if necessary. I found similar
problems in other forums (Stack Overflow too), but in all of them the touch just
doesn't work at all; in my case it fails many times, though apparently without a
pattern.
EDIT
This is a short example of a code which have the same problem:
import kivy
kivy.require('1.0.5')
from kivy.uix.floatlayout import FloatLayout
from kivy.app import App
from kivy.properties import ObjectProperty, StringProperty
from kivy.uix.tabbedpanel import TabbedPanel
from kivy.core.window import Window
Window.clearcolor = (1, 1, 1, 1)
import kivy.metrics as conv
class Dap(FloatLayout):
telal,telaa = Window.size
class DapApp(App):
def build(self):
return Dap()
if __name__ == '__main__':
DapApp().run()
kv file:
#:kivy 1.0
#:import conv kivy.metrics
<Dap>:
TabbedPanel:
do_default_tab: False
background_color: (1, 1, 1, 0)
background_normal: ''
background_disabled_normal:''
background_down: ''
background_disabled_down: ''
tab_width:root.telal/4
tab_height:conv.cm(1.25)
TabbedPanelItem:
background_color: (0, 0, 1, 0.7)
background_normal: ''
background_disabled_normal:''
background_down: ''
background_disabled_down: ''
font_size: 18
color: (1,1,1,1)
text: 'Login'
Label:
text: 'Login tab content area'
background_color: (1, 1, 1, 1)
background_normal: ''
size:(root.telal,conv.cm(3))
color:(0, 0, 1, 1)
TabbedPanelItem:
background_color: (0, 0, 1, 0.7)
background_normal: ''
background_disabled_normal:''
background_down: ''
background_disabled_down: ''
font_size: 18
color: (1,1,1,1)
text: 'Home'
Label:
text: 'Home tab content area'
TabbedPanelItem:
background_color: (0, 0, 1, 0.7)
background_normal: ''
background_disabled_normal:''
background_down: ''
background_disabled_down: ''
font_size: 18
color: (1,1,1,1)
text: 'Pass'
Label:
text: 'Pass tab content area'
TabbedPanelItem:
background_color: (0, 0, 1, 0.7)
background_normal: ''
background_disabled_normal:''
background_down: ''
background_disabled_down: ''
font_size: 18
color: (1,1,1,1)
text: 'Fotos'
Label:
text: 'Fotos tab content area'
Answer: Some other people I know and I had the same problem, and we got it fixed by
upgrading to kivy 1.9.2_dev.
Try changing the requirements line to `requirements = kivy==master, ...`
|
How to disregard the NaN data point in numpy array and generate the normalized data in Python?
Question: Say I have a numpy array that contains some `float('nan')` values. I don't want
to impute those data points now; I want to first normalize the rest and keep the
NaN entries in their original positions. Is there any way I can do that?
Previously I used the `normalize` function in `sklearn.preprocessing`, but that
function apparently can't take an array containing NaN as input.
Answer: You can mask your array using the `numpy.ma.array` function and subsequently
apply any `numpy` operation:
import numpy as np
a = np.random.rand(10) # Generate random data.
a = np.where(a > 0.8, np.nan, a) # Set all data larger than 0.8 to NaN
a = np.ma.array(a, mask=np.isnan(a)) # Use a mask to mark the NaNs
a_norm = a / np.sum(a) # The sum function ignores the masked values.
a_norm2 = a / np.std(a) # The std function ignores the masked values.
You can still access your raw data:
print a.data
|
Load a cache file in Maya using Python and create the same render output
Question: I try to load a cache file in Maya using a Python script. I used the code
snippet posted here: [importing multiple cache files in Maya using
Python](http://stackoverflow.com/questions/20174424/importing-multiple-cache-
files-in-maya-using-python?answertab=active#tab-top)
My code looks like this:
pm.mel.doImportCacheFile(myCachePath, "", [selectedObject], list())
`myCachePath`: Stores the path to the xml file `selectedObject`: e.g.
`flameShepe1` (represents the fluid container)
First I thought that it finally worked, but whenever I press the play button
and render an image again I don't get the same output. The simulation has the
same shape but the colors are not the same.
When I use `Fluid nCache -> Attache Existing` ... everything works.
How is that possible?
Answer: Reading the attach cache command, attaching a cache to a fluid is different. Try:
pm.mel.doImportFluidCacheFile(pathCache, "xmlcache", ['fluid1'], [])
Hope it will do the trick!
EDIT:
Note that you could do this without pymel by formatting a string like this:
lineToEval = 'doImportFluidCacheFile("{0}", "xmlcache", {{"{1}"}}, {{}});'.format( pathCache, fluidsSel[0])
mel.eval(lineToEval)
|
Restart ipython Kernel with a command from a cell
Question: Is it possible to restart an `ipython` Kernel NOT by selecting `Kernel` >
`Restart` from the notebook GUI, but from executing a command in a notebook
cell?
Answer: As Thomas K. suggested, here is the way to restart the `ipython` kernel from
your keyboard:
import os
os._exit(00)
|
Tkinter GUI Freezes - Tips to Unblock/Thread?
Question: New to python3 and started my first project of using a raspberry pi 3 to
create an interface to monitor and control elements in my greenhouse.
Currently the program reads Temperature and Humidity via a DHT11 sensor, and
controls a number of relays and servo via the GPIO pins.
I created a GUI to display the Temperature and Humidity that reads and updates
every 250ms. There is also a number of buttons that control the specific
relays/servo.
I am now running into some issues with the tkinter GUI freezing on a button
press. I have looked on the forum a bit but don't understand how to implement
threading or a check function to keep my GUI from freezing.
Code Below:
from tkinter import *
import tkinter.font
import RPi.GPIO as GPIO
import time
import Adafruit_DHT
#Logic Setup
temp = 0
humd = 0
#GPIO Setup
GPIO.setwarnings(False)
GPIO.setmode(GPIO.BOARD)
GPIO.setup(16, GPIO.OUT) #Water Pump
GPIO.setup(18, GPIO.IN) #Tank Float Switch
GPIO.output(16, GPIO.LOW)
#Window Setup
win = Tk()
win.title("Test")
win.geometry("200x300+0+0")
#Label Setup
Label (win, text="Temperature", fg="red", bg="black", font="24").grid(row=0, column=0)
Label (win, text="Humidity", fg="red", bg="black", font="24").grid(row=0, column=2)
Label (win, text="Water System", fg="red", bg="black", font="24").grid(row=3, column=0)
TEMP = Label (win, text="", fg="black", bg="white", font="36")
TEMP.grid(row=1, column=0)
HUMD = Label (win, text="", fg="black", bg="white", font="36")
HUMD.grid(row=1, column=2)
#Functions
def wait(time_lapse):
time_start = time.time()
time_end = (time_start+time_lapse)
while time_end >= time.time():
pass
def RTEMP ():
global temp
humidity, temperature = Adafruit_DHT.read_retry(11, 27)
temp = temperature * 9/5.0 + 32
TEMP.configure(text=str(temp))
def RHUMD ():
global humd
humidity, temperature = Adafruit_DHT.read_retry(11, 27)
humd = humidity
HUMD.configure(text=str(humd))
def READ ():
RTEMP()
RHUMD()
win.after(250, READ)
def PUMP ():
if GPIO.input(18):
WTR.config(bg="green")
GPIO.output(16, GPIO.HIGH)
wait (10)
GPIO.output(16, GPIO.LOW)
WTR.config(text="Water", bg="grey")
else:
GPIO.output(16, GPIO.LOW)
WTR.config(text="LOW WATER", bg="red")
#Buttons
WTR = Button(win, text="Water", bg="grey", command = PUMP, height = 2, width = 8)
WTR.grid(row=4, column=0) #Water Pump Control
#Function Calls
READ()
mainloop()
Answer: Tkinter GUIs (as well as most other GUIs) are in a perpetual wait state.
There's no reason to introduce explicit waiting for events.
If you need to run some function, and then run some other function 10ms later,
you would schedule the other code to run with `after`. For example:
GPIO.output(16, GPIO.HIGH)
win.after(10, GPIO.output, 16, GPIO.LOW)
Naturally, if you want to do more than one thing you can use `lambda` or write
another function.
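Applied to the question's `PUMP` function, a sketch (same pins and `WTR` button as above) that schedules the switch-off instead of blocking the mainloop for 10 seconds:

    def PUMP():
        if GPIO.input(18):
            WTR.config(bg="green")
            GPIO.output(16, GPIO.HIGH)
            # switch the pump off 10 s later without blocking the GUI
            win.after(10000, PUMP_OFF)
        else:
            GPIO.output(16, GPIO.LOW)
            WTR.config(text="LOW WATER", bg="red")

    def PUMP_OFF():
        GPIO.output(16, GPIO.LOW)
        WTR.config(text="Water", bg="grey")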
|
How do I extract only the file of a .tar.gz member?
Question: My goal is to unpack the files in a `.tar.gz` archive without recreating the
sub-directories leading up to each file.
My code is based off this
[question](http://stackoverflow.com/questions/4917284/extract-files-from-zip-
without-keeping-the-structure-using-python-zipfile) except instead of
unpacking a `.zip` I am unpacking a `.tar.gz` file.
I am asking this question because the error I'm getting is very vague and
doesn't identify the problem in my code:
import os
import shutil
import tarfile
with tarfile.open('RTLog_20150425T152948.gz', 'r:gz') as tar:
for member in tar.getmembers():
filename = os.path.basename(member.name)
if not filename:
continue
# copy file (taken from zipfile's extract)
source = member
target = open(os.path.join(os.getcwd(), filename), "wb")
with source, target:
shutil.copyfileobj(source, target)
As you can see I copied the code from the linked question and tried to change
it to deal with .tar.gz members instead of .zip members. Upon running the code
I get the following error:
Traceback (most recent call last):
File "C:\Users\dzhao\Desktop\123456\444444\blah.py", line 27, in <module>
with source, target:
AttributeError: __exit__
From the reading I've done, `shutil.copyfileobj` takes as input two "file-
like" objects. `member` is a `TarInfo` object. I'm not sure if a `TarInfo`
object is a file-like object so I tried changing this line from:
source = member #to
source = open(os.path.join(os.getcwd(), member.name), 'rb')
But this understandably raised an error where the file wasn't found.
What am I not understanding?
Answer: This code has worked for me:
import os
import shutil
import tarfile
with tarfile.open(fname, "r|*") as tar:
counter = 0
for member in tar:
if member.isfile():
filename = os.path.basename(member.name)
if filename != "myfile": # do your check
continue
with open("output.file", "wb") as output:
shutil.copyfileobj(tar.fileobj, output, member.size)
break # got our file
counter += 1
if counter % 1000 == 0:
tar.members = [] # free ram... yes we have to do this manually
But your problem might not be the extraction, but rather that your file is
actually not a .tar.gz but just a .gz file.
Edit: Also, you're getting the error on the `with` line because Python is trying to
call the [`__enter__`](http://stackoverflow.com/questions/1984325/explaining-
pythons-enter-and-exit) function of the member object (which does not exist).
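Alternatively, assuming the archive really is a tar file, `extractfile()` returns a file-like object for a member (the `TarInfo` itself is only metadata), which keeps the shape of the code from the question:

    import os
    import shutil
    import tarfile

    with tarfile.open('RTLog_20150425T152948.gz', 'r:gz') as tar:
        for member in tar.getmembers():
            filename = os.path.basename(member.name)
            if not filename or not member.isfile():
                continue
            source = tar.extractfile(member)  # file-like object (or None)
            with open(os.path.join(os.getcwd(), filename), 'wb') as target:
                shutil.copyfileobj(source, target)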
|
Python for Android : apk stuck on loading screen
Question: I am trying to convert my Python 3 code into an apk using python-for-android's
tool. They have recently added Python 3 support, albeit experimental.
It may be of importance to note that my whole program is written in pure
Python and uses no Kivy frameworks; its graphical interface is all done in
tkinter, and no extra modules apart from the ones that already come with Python
have been used.
I have compiled my programs (the user interface references the 'brains')
stored in the following directiories
package\
__pycache__
__init__.py
Solver.py
main.py
__pycache__
with python-for-android, and I've then got the resultant apk. This is on Debian,
by the way, if it makes any difference, and the apk has installed without any
problems so far on my phone...
**It's only when I launch the application, which installed without problems,
that it goes to a white loading screen with "loading" in the top left corner but
never gets past it.**
I read somewhere that it's because of a Java error; I understand Java may have been
used to compile the programs...
**My question, after all of this background, is how do I fix it, as I
don't know much about Java?**
EDIT: I've ran this on a virtual machine and got an error... please see
[here](http://i.stack.imgur.com/ZqCk5.png)
EDIT 2: [Javac warnings whilst compiling](http://i.stack.imgur.com/7K4hw.png)
Answer: This is not a supported use of python-for-android. In order for your app to
function, you need to interact with one of the available bootstraps - sdl2,
pygame or webview. Kivy knows how to interact with the sdl2 and pygame
bootstraps, and the webview bootstrap just uses an Android Webview to display
content from a local web server (flask). If you want to use Tkinter, you would
need to create a bootstrap for it (either a new bootstrap in p4a itself, or
some python code to connect Tkinter to an existing bootstrap like sdl2).
|
How to take a word from a dictionary by its definition
Question: I am creating a program where I need to take a string of words and convert it
into numbers, where `hi bye hi hello` would turn into `0 1 0 2`. I have used
dictionaries to do this, which is why I am having trouble with the next part.
I then need to compress this into a text file, to later decompress and
reconstruct it into a string again. This is the bit I am stumped on.
The way I would like to do it is by writing the indexes of the numbers, the
`0 1 0 2` bit, into the text file along with the dictionary contents, so the
text file would contain `0 1 0 2` and `{hi: 0, bye: 1, hello: 2}`.
Now, to decompress or read this back into the Python program, I would like to use
the _indexes_ (this is how I will refer to the `0 1 0 2` from now on) to
take each word out of the dictionary and reconstruct the sentence. So if a
**0** came up, it would look into the dictionary, find what has a
`0` definition, then pull that out to put into the string; so it would find
`hi` and take that.
I hope that this is understandable and that at least one person knows how to
do it, because I am sure it is possible, however I have been unable to find
anything here or on the internet mentioning this subject.
Answer: Yes, you can just use regular dicts and lists to store the data. And use
`json` or `pickle` to persist the data to disk.
import pickle
s = 'hi hello hi bye'
words = s.split()
d = {}
for word in words:
if word not in d:
d[word] = len(d)
data = [d[word] for word in words]
with open('/path/to/file', 'w') as f:
pickle.dump({'lookup': d, 'data': data}, f)
Then read it back in
with open('/path/to/file', 'r') as f:
dic = pickle.load(f)
d = dic['lookup']
reverse_d = {v: k for k, v in d.iteritems()}
data = dic['data']
words = [reverse_d[index] for index in data]
line = ' '.join(words)
print line
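If you'd rather have a human-readable file, a sketch of the same round trip with `json` instead of `pickle` (the structure is identical; `/path/to/file.json` is a placeholder):

    import json

    with open('/path/to/file.json', 'w') as f:
        json.dump({'lookup': d, 'data': data}, f)

    with open('/path/to/file.json') as f:
        dic = json.load(f)
    reverse_d = {v: k for k, v in dic['lookup'].items()}
    print ' '.join(reverse_d[i] for i in dic['data'])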
|
How to use SyntaxNet output to operate an executive command ,for example save a file in a folder, on Linux system
Question: Having downloaded and trained
[SyntaxNet](https://github.com/tensorflow/models/tree/master/syntaxnet), I am
trying to write a program that can open new or existing files, for example AutoCAD
files, and save the files in a specific directory by analyzing text such as
**open LibreOffice file X**. Consider the output of SyntaxNet:
echo "save AUTOCAD file X in directory Y" | ./test.sh > output.txt
Input: save AUTOCAD file X in directory Y
Parse:
save VB ROOT
+-- X NNP dobj
| +-- file NN compound
| +-- AUTOCAD CD nummod
+-- directory NN nmod
+-- in IN case
+-- Y CD nummod
First I thought about changing the parsed text to XML format, then parsing the
XML file with semantic analysis (like `SPARQL`) to find ROOT=save, dobj=X and
nummod=Y, and writing a Python program that can do the same thing said
in the text.
1. I don't know whether, if I change the parsed text to XML and then use semantic analysis with querying, I can match `ROOT` with a counterpart function or script that saves `dobj` in the directory mentioned in `nummod`.
2. I have some ideas about connecting Python to the terminal with the `subprocess` package, but I didn't find anything that can help me save, for example, an AutoCAD file, or any other file, from the terminal. Or do I need to write a shell script, `.sh`, with the help of Python?
I did extensive research on syntactic and semantic analysis of texts, such as
[Christian Chiarcos,
2011](http://svn.aksw.org/papers/2012/LDL/ldl2012_proceedings/public/chiarcos.pdf),
[Hunter and Cohen, 2006](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1702322/)
and [Verspoor et al.,
2015](https://www.semanticscholar.org/paper/Interoperability-of-text-corpus-
annotations-with-Verspoor-Kim/65c80f186ff7da0c7e4e847856d9acf57e479061/pdf),
and also studied [Microsoft Cortana](http://windows.microsoft.com/en-
us/windows-10/getstarted-what-is-cortana), [Sirius](http://sirius.clarity-
lab.org/tutorial/), [google now](https://en.wikipedia.org/wiki/Google_Now) but
none of them went through the details of how they change a **parsed text** into an
**executive command**, which brought me to the conclusion that this work is considered
too easy to be talked about; but as I am not a computer science major I cannot
figure out what I can do about it.
Answer: I am a beginner in the computer science world and in SyntaxNet. I wrote a simple
SyntaxNet-Python algorithm which uses SyntaxNet to analyze a text command the
user enters, "open the file book which I have written with laboratory writer
with LibreOffice writer", and then analyzes the SyntaxNet output with a Python
algorithm in order to turn it into an executive command, in this case opening a
file, in any supported format, with LibreOffice in a Linux (Ubuntu 14.04)
environment. You can see
[here](https://help.libreoffice.org/Common/Starting_the_Software_With_Parameters)
the different command lines defined by LibreOffice in order to use the different
applications in this package.
1. After installing and running SyntaxNet (the installation process is explained [here](https://github.com/JoshData/models/blob/b72274d38f169f77e6a15e54834f463f627dc82a/syntaxnet/build/ubuntu-14.04_x64.sh)), open the shell script [demo.sh](https://github.com/tensorflow/models/blob/master/syntaxnet/syntaxnet/demo.sh) in the `~/models/syntaxnet/syntaxnet/` directory and erase the `conll2tree` call (`lines 54 to 56`) in order to get a `tab-delimited` output from SyntaxNet instead of a tree-format output.
2. Type this command in the terminal window:
echo 'open the file book which I have writtern with the laboratory writer with libreOffice writer' | syntaxnet/demo.sh > output.txt
The `output.txt` document is saved in the directory where `demo.sh` lives and
will look something like the figure below:
[](http://i.stack.imgur.com/KrzfB.png)
3. Use `output.txt` as the input file for the Python algorithm below, which analyzes the SyntaxNet output and identifies the name of the file the user wants, the target application from the LibreOffice package, and the command the user wants to run.
`#!/bin/sh`
import csv
import subprocess
import sys
import os
#get SyntaxNet output as the Python algorithm input file
filename='/home/username/models/syntaxnet/work/output.txt'
#all possible executive commands for opening any file with any format with Libreoffice file
commands={
('open', 'libreoffice', 'writer'): ('libreoffice', '--writer'),
('open', 'libreoffice', 'calculator'): ('libreoffice' ,'--calc'),
('open', 'libreoffice', 'draw'): ('libreoffice' ,'--draw'),
('open', 'libreoffice', 'impress'): ('libreoffice' ,'--impress'),
('open', 'libreoffice', 'math'): ('libreoffice' ,'--math'),
('open', 'libreoffice', 'global'): ('libreoffice' ,'--global'),
('open', 'libreoffice', 'web'): ('libreoffice' ,'--web'),
('open', 'libreoffice', 'show'): ('libreoffice', '--show'),
}
#all of the possible synonyms of the application from Libreoffice
comments={
'writer': ['word','text','writer'],
'calculator': ['excel','calc','calculator'],
'draw': ['paint','draw','drawing'],
'impress': ['powerpoint','impress'],
'math': ['mathematic','calculator','math'],
'global': ['global'],
'web': ['html','web'],
'show':['presentation','show']
}
root ='ROOT' #ROOT of the senctence
noun='NOUN' #noun tagger
verb='VERB' #verb tagger
adjmod='amod' #adjective modifier
dirobj='dobj' #direct objective
apposmod='appos' # appositional modifier
prepos_obj='pobj' # prepositional objective
app='libreoffice' # name of the package
preposition='prep' # preposition
noun_modi='nn' # noun modifier
#read from Syntaxnet output tab delimited textfile
def readata(filename):
file=open(filename,'r')
lines=file.readlines()
lines=lines[:-1]
data=csv.reader(lines,delimiter='\t')
lol=list(data)
return lol
# identifies the action, the name of the file and whether the user mentioned the name of the application implicitely
def exe(root,noun,verb,adjmod,dirobj,apposmod,commands,noun_modi):
interprete='null'
lists=readata(filename)
for sublist in lists:
if sublist[7]==root and sublist[3]==verb: # when the ROOT is verb the dobj is probably the name of the file you want to have
action=sublist[1]
dep_num=sublist[0]
for sublist in lists:
if sublist[6]==dep_num and sublist[7]==dirobj:
direct_object=sublist[1]
dep_num=sublist[0]
dep_num_obj=sublist[0]
for sublist in lists:
if direct_object=='file' and sublist[6]==dep_num_obj and sublist[7]==apposmod:
direct_object=sublist[1]
elif direct_object=='file' and sublist[6]==dep_num_obj and sublist[7]==adjmod:
direct_object=sublist[1]
for sublist in lists:
if sublist[6]==dep_num_obj and sublist[7]==adjmod:
for key, v in comments.iteritems():
if sublist[1] in v:
interprete=key
for sublist in lists:
if sublist[6]==dep_num_obj and sublist[7]==noun_modi:
dep_num_nn=sublist[0]
for key, v in comments.iteritems():
if sublist[1] in v:
interprete=key
print interprete
if interprete=='null':
for sublist in lists:
if sublist[6]==dep_num_nn and sublist[7]==noun_modi:
for key, v in comments.iteritems():
if sublist[1] in v:
interprete=key
elif sublist[7]==root and sublist[3]==noun: # you have to find the word which is in a adjective form and depends on the root
dep_num=sublist[0]
dep_num_obj=sublist[0]
direct_object=sublist[1]
for sublist in lists:
if sublist[6]==dep_num and sublist[7]==adjmod:
actionis=any(t1==sublist[1] for (t1, t2, t3) in commands)
if actionis==True:
action=sublist[1]
elif sublist[6]==dep_num and sublist[7]==noun_modi:
dep_num=sublist[0]
for sublist in lists:
if sublist[6]==dep_num and sublist[7]==adjmod:
if any(t1==sublist[1] for (t1, t2, t3) in commands):
action=sublist[1]
for sublist in lists:
if direct_object=='file' and sublist[6]==dep_num_obj and sublist[7]==apposmod and sublist[1]!=action:
direct_object=sublist[1]
if direct_object=='file' and sublist[6]==dep_num_obj and sublist[7]==adjmod and sublist[1]!=action:
direct_object=sublist[1]
for sublist in lists:
if sublist[6]==dep_num_obj and sublist[7]==noun_modi:
dep_num_obj=sublist[0]
for key, v in comments.iteritems():
if sublist[1] in v:
interprete=key
else:
for sublist in lists:
if sublist[6]==dep_num_obj and sublist[7]==noun_modi:
for key, v in comments.iteritems():
if sublist[1] in v:
interprete=key
return action, direct_object, interprete
action, direct_object, interprete = exe(root,noun,verb,adjmod,dirobj,apposmod,commands,noun_modi)
# find the application (we assume the user wants LibreOffice but we do not know which sub-application should be used)
def application(app,prepos_obj,preposition,noun_modi):
lists=readata(filename)
subapp='not mentioned'
for sublist in lists:
if sublist[1]==app:
dep_num=sublist[6]
for sublist in lists:
if sublist[0]==dep_num and sublist[7]==prepos_obj:
actioni=any(t3==sublist[1] for (t1, t2, t3) in commands)
if actioni==True:
subapp=sublist[1]
else:
for sublist in lists:
if sublist[6]==dep_num and sublist[7]==noun_modi:
actioni=any(t3==sublist[1] for (t1, t2, t3) in commands)
if actioni==True:
subapp=sublist[1]
elif sublist[0]==dep_num and sublist[7]==preposition:
sublist[6]=dep_num
for subline in lists:
if subline[0]==dep_num and subline[7]==prepos_obj:
if any(t3==sublist[1] for (t1, t2, t3) in commands):
subapp=sublist[1]
else:
for subline in lists:
if subline[0]==dep_num and subline[7]==noun_modi:
if any(t3==sublist[1] for (t1, t2, t3) in commands):
subapp=sublist[1]
return subapp
sub_application=application(app,prepos_obj,preposition,noun_modi)
if sub_application=='not mentioned' and interprete!='null':
sub_application=interprete
elif sub_application=='not mentioned' and interprete=='null':
sub_application=interprete
# the format of file
def format_function(sub_application):
subapp=sub_application
Dobj=exe(root,noun,verb,adjmod,dirobj,apposmod,commands,noun_modi)[1]
if subapp!='null':
if subapp=='writer':
a='.odt'
Dobj=Dobj+a
elif subapp=='calculator':
a='.ods'
Dobj=Dobj+a
elif subapp=='impress':
a='.odp'
Dobj=Dobj+a
elif subapp=='draw':
a='.odg'
Dobj=Dobj+a
elif subapp=='math':
a='.odf'
Dobj=Dobj+a
elif subapp=='math':
a='.odf'
Dobj=Dobj+a
elif subapp=='web':
a='.html'
Dobj=Dobj+a
else:
Dobj='null'
return Dobj
def get_filepaths(directory):
myfile=format_function(sub_application)
file_paths = [] # List which will store all of the full filepaths.
# Walk the tree.
for root, directories, files in os.walk(directory):
for filename in files:
# Join the two strings in order to form the full filepath.
if filename==myfile:
filepath = os.path.join(root, filename)
file_paths.append(filepath) # Add it to the list.
return file_paths # Self-explanatory.
# Run the above function and store its results in a variable.
full_file_paths = get_filepaths("/home/ubuntu/")
if full_file_paths==[]:
print 'No file with name %s is found' % format_function(sub_application)
if full_file_paths!=[]:
path=full_file_paths
prompt='> '
if len(full_file_paths) >1:
print full_file_paths
print 'which %s do you mean?'% subapp
inputname=raw_input(prompt)
if inputname in full_file_paths:
path=inputname
#the main code structure
if sub_application!='null':
command= commands[action,app,sub_application]
subprocess.call([command[0],command[1],path[0]])
else:
print "The sub application is not mentioned clearly"
I again say I am a beginner, and the code might not seem tidy or
professional, but I just tried to apply all my knowledge about this fascinating
`SyntaxNet` to a practical algorithm. **This simple algorithm can open a
file:**
1. with any format which is supported by `LibreOffice` e.g. `.odt,.odf,.ods,.html,.odp`.
2. it can understand implicit references to the different applications in `LibreOffice`, for example: "open the text file book with libreoffice" instead of "open the file book with libreoffice writer"
3. it can overcome the problem of SyntaxNet interpreting the names of files which are referred to as adjectives.
|
Python lxml getpath error
Question: I'm trying to get a full list of XPaths from a device config in XML.
When I run it, though, I get:
AttributeError: 'Element' object has no attribute 'getpath'
Code is just a few lines
import xml.etree.ElementTree
import os
from lxml import etree
file1 = 'C:\Users\test1\Desktop\test.xml'
file1_path = file1.replace('\\','/')
e = xml.etree.ElementTree.parse(file1_path).getroot()
for entry in e.iter():
print e.getpath(entry)
Has anyone come across this before?
Thanks
Richie
Answer: You are doing it incorrectly; don't call getroot, just parse and iterate using
`lxml.etree`:
import lxml.etree as et
file1 = 'C:/Users/test1/Desktop/test.xml'
root = et.parse(file1)
for e in root.iter():
print root.getpath(e)
If you are dealing with namespaces you may find `getelementpath` useful:
root.getelementpath(e)
|
Python, using tkinter how to customize where classes of ui components are displayed?
Question: I am very new to python, and am currently trying to organize my tkinter app in
a slightly different way. I'm trying to use classes to make the app more
modular and be able to use methods in the class in multiple places in the app.
Here is the updated code that I have:
from tkinter import *
class Application(Frame):
def __init__(self, master=None):
Frame.__init__(self, master)
self.pack()
self.createWidgets()
self.QUIT.pack(side=LEFT)
self.hi_there.pack(side=RIGHT)
Frame.__init__(self, master)
self.pack()
self.createAnotherWidget()
self.title_label.pack(side=LEFT)
self.title_entry.pack(side=RIGHT)
def say_hi(self):
print("hi there, everyone!")
def createWidgets(self):
self.QUIT = Button(self)
self.QUIT["text"] = "QUIT"
self.QUIT["fg"] = "red"
self.QUIT["command"] = self.quit
self.hi_there = Button(self)
self.hi_there["text"] = "Hello",
self.hi_there["command"] = self.say_hi
def createAnotherWidget(self):
self.title_label = Label(self)
self.title_label["text"] = "Title: "
self.title_entry = Entry(self)
def __init__(self, master=None):
Frame.__init__(self, master)
self.pack()
self.createWidgets()
self.createAnotherWidget()
root = Tk()
app = Application(master=root)
app.mainloop()
root.destroy()
This runs without errors, but nothing shows in the window. How can I control
where the widgets created by these methods are placed when rendered? For this
example, I simply want the widgets from `createAnotherWidget` to display below
the two buttons from `createWidgets`.
Answer: One solution is to have the caller be responsible for calling `pack` or `grid`
on the individual widgets. This requires that you save references to the
widgets somehow, such as attributes of the object. You do this already for
some widgets, but not for others. You need to be consistent.
Do this by moving the calls to `pack()` from the functions and into your
`__init__`:
def __init__(self, master=None):
    Frame.__init__(self, master)
    self.pack()
    self.createWidgets()
    self.createAnotherWidget()

    self.QUIT.pack(...)
    self.hi_there.pack(...)
    self.title_label.pack(...)
    self.title_entry.pack(...)
Of course, you'll need to modify `createAnotherWidget` to save references to
the widgets.
The point being, _creating_ widgets and _laying out widgets on the screen_ are
two separate problems that should be solved separately.
* * *
That being said, it's a bit unusual to have functions that create multiple
widgets that are designed to be stitched together by some other function.
A more common pattern is for functions to create widgets that are related, and
to manage the layout of the related widgets itself. That way the caller only
has to worry about organizing the groups of widgets rather than a bunch of
individual widgets.
For example, you might have one function that creates a toolbar. Another that
creates the main area with scrollbars. Another function would create a footer.
Another that creates a form with a bunch of label and entry widgets. Another
one that creates a label/entry combination. And so on.
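For illustration, here is a minimal sketch of that pattern applied to the code above (names like `createButtons` and `createForm` are my own): each helper creates one group of related widgets, lays them out inside its own frame, and `__init__` only positions the frames.

    from tkinter import *

    class Application(Frame):
        def __init__(self, master=None):
            Frame.__init__(self, master)
            self.pack()
            # each helper returns one frame; the caller only positions frames
            self.createButtons().pack(side=TOP)
            self.createForm().pack(side=BOTTOM)

        def createButtons(self):
            group = Frame(self)
            Button(group, text="QUIT", fg="red", command=self.quit).pack(side=LEFT)
            Button(group, text="Hello", command=self.say_hi).pack(side=RIGHT)
            return group

        def createForm(self):
            group = Frame(self)
            Label(group, text="Title: ").pack(side=LEFT)
            Entry(group).pack(side=RIGHT)
            return group

        def say_hi(self):
            print("hi there, everyone!")

    root = Tk()
    Application(master=root).mainloop()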
|
unbound method must be called with instance as first argument
Question: I am trying to build simple fraction calculator in python2.x
from fractions import Fraction

class Thefraction:
    def __init__(self,a,b):
        self.a = a
        self.b =b
    def add(self):
        return a+b
    def subtract(self,a,b):
        return a-b
    def divide(self,a,b):
        return a/b
    def multiply(self,a,b):
        return a/b

if __name__=='__main__':
    try:
        a = Fraction(input('Please type first fraction '))
        b = Fraction(input('Please type second fraction '))
        choice = int(input('Please select one of these 1. add 2. subtract 3. divide 4. multiply '))
        if choice ==1:
            print(Thefraction.add(a,b))
        elif choice==2:
            print(Thefraction.subtract(a,b))
        elif choice==3:
            print(Thefraction.divide(a,b))
        elif choice==4:
            print(Thefraction.multiply(a,b))
    except ValueError:
        print('Value error!!!!!')
I am not sure that I made correct class that can be instantiated, however I
used it like,`Thefraction.add` in side of `__name__=='__main__'`. Did I miss
something?
Answer: It's meant to be done like this:
thefraction = Thefraction(a, b)

if choice == 1:
    print(thefraction.add())
Then in your class:
def add(self):
    return self.a + self.b
and so on. Don't include `a` and `b` as parameters in the methods.
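For completeness, a minimal sketch of the corrected class (note that `multiply` in the original returned `a/b` by mistake):

    class Thefraction:
        def __init__(self, a, b):
            self.a = a
            self.b = b
        def add(self):
            return self.a + self.b
        def subtract(self):
            return self.a - self.b
        def divide(self):
            return self.a / self.b
        def multiply(self):
            return self.a * self.b  # the original returned a/b here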
And yes, go through a tutorial on classes again. Thoroughly.
|
Python: Return all Indices of every occurrence of a Sub List within a Main List
Question: I have a Main List and a Sub List and I want to locate the indices of every
occurrence of the Sub List that are found in the Main List, in this example, I
want the following list of indices returned.
>>> main_list = [1,2,3,4,4,4,1,2,3,4,4,4]
>>> sub_list = [4,4,4]
>>> function(main_list, sub_list)
>>> [3,9]
Ideally, the function should also ignore fragments of the sub_list, in this
case [4,4] would be ignored. Also, I expect the elements to all be single
digit integers. Here is a second example, for clarity:
>>> main_list = [9,8,7,5,5,5,5,5,4,3,2,5,5,5,5,5,1,1,1,5,5,5,5,5]
>>> sub_list = [5,5,5,5,5]
>>> function(main_list, sub_list)
>>> [3,11,19]
Answer: Maybe using strings is the way to go?
import re
original = ''.join([str(x) for x in main_list])
matching = ''.join([str(x) for x in sub_list])
starts = [match.start() for match in re.finditer(re.escape(matching), original)]
The only problem with this one is that it doesn't account for overlapping matches; a wrapped-up version is sketched below.
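Wrapped into the `function(main_list, sub_list)` form used in the question (a sketch that relies on the single-digit integers the question guarantees):

    import re

    def function(main_list, sub_list):
        # join the single digits into strings and find non-overlapping matches
        original = ''.join(str(x) for x in main_list)
        matching = ''.join(str(x) for x in sub_list)
        return [m.start() for m in re.finditer(re.escape(matching), original)]

    print(function([1,2,3,4,4,4,1,2,3,4,4,4], [4,4,4]))  # [3, 9]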
|
MongoDB query filters using Stratio's Spark-MongoDB library
Question: I'm trying to query a MongoDB collection using Stratio's Spark-MongoDB
[library](https://github.com/Stratio/Spark-MongoDB). I followed
[this](http://stackoverflow.com/questions/33391840/getting-spark-python-and-
mongodb-to-work-together) thread to get started with and I'm currently running
the following piece of code:
reader = sqlContext.read.format("com.stratio.datasource.mongodb")
data = reader.options(host='<ip>:27017', database='<db>', collection='<col>').load()
This will load the whole collection into a Spark dataframe, and as the collection is large, it's taking a lot of time. Is there any way to specify query filters and load only selected data into Spark?
Answer: Spark dataframe processing requires schema knowledge. When working with data
sources with flexible and/or unknown schema, before Spark can do anything with
the data, it has to discover its schema. This is what `load()` does. It looks
at the data only for the purpose of discovering the schema of `data`. When you
perform an action on `data`, e.g., `collect()`, Spark will actually read the
data for processing purposes.
There is only one way to radically speed up `load()` and that's by providing
the schema yourself and thus obviating the need for schema discovery. Here is
an example taken from [the library
documentation](https://github.com/Stratio/spark-
mongodb/blob/master/doc/src/site/sphinx/First_Steps.rst#scala-api):
import org.apache.spark.sql.types._
val schemaMongo = StructType(StructField("name", StringType, true) :: StructField("age", IntegerType, true ) :: Nil)
val df = sqlContext.read.schema(schemaMongo).format("com.stratio.datasource.mongodb").options(Map("host" -> "localhost:27017", "database" -> "highschool", "collection" -> "students")).load
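Since the question uses the Python API, a rough PySpark equivalent would be the following sketch (the field names and types are placeholders; they must match your collection):

    from pyspark.sql.types import StructType, StructField, StringType, IntegerType

    schema = StructType([StructField("name", StringType(), True),
                         StructField("age", IntegerType(), True)])
    reader = sqlContext.read.schema(schema).format("com.stratio.datasource.mongodb")
    data = reader.options(host='<ip>:27017', database='<db>', collection='<col>').load()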
You can get a slight gain by sampling only a fraction of the documents in the
collection by setting the `schema_samplingRatio` configuration parameter to a
value less than the `1.0` default. However, since Mongo doesn't have sampling
built in, you'll still be accessing potentially a lot of data.
|
Cannot pickle Scikit learn NearestNeighbor classifier - can't pickle instancemethod objects
Question: I'm trying to pickle a NearestNeighbors model, but it says it can't pickle instancemethod objects.
The code:
import cPickle as pickle
from sklearn.neighbors import NearestNeighbors
nbrs = NearestNeighbors(n_neighbors=50, algorithm='ball_tree', metric=self.distanceCIE2000_classifier)
nbrs.fit(allValues)
with open('/home/ubuntu/nbrs.p','wb') as f:
    pickle.dump(nbrs, f)
The full traceback:
File "/home/ubuntu/colorSetter.py", line 82, in createClassifier
pickle.dump(nbrs, f)
File "/usr/lib/python2.7/copy_reg.py", line 70, in _reduce_ex
raise TypeError, "can't pickle %s objects" % base.__name__
TypeError: can't pickle instancemethod objects
Answer: Somewhere within the `NearestNeighbors` instance is an attribute that refers
to the instance method that you passed to it in the `metric` argument. Pickle
won't pickle instance methods, hence the error.
One way around it is to move method `distanceCIE2000_classifier()` out of your
class to a regular standalone function, if that is possible.
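A minimal sketch of that workaround; `distance_cie2000` here is a hypothetical stand-in for your metric's body:

    import cPickle as pickle
    from sklearn.neighbors import NearestNeighbors

    def distance_cie2000(x, y):
        # a plain module-level function pickles by reference;
        # a bound instance method does not
        return 0.0  # placeholder for the real CIEDE2000 computation

    nbrs = NearestNeighbors(n_neighbors=50, algorithm='ball_tree',
                            metric=distance_cie2000)
    nbrs.fit(allValues)

    with open('/home/ubuntu/nbrs.p', 'wb') as f:
        pickle.dump(nbrs, f)

Keep in mind the function must still be importable under the same name when you unpickle.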
|
Python OpenCV face detection code sometimes raises `'tuple' object has no attribute 'shape'`
Question: I am trying to build a face detection application in python using opencv.
Please see below for my code snippets:
# Loading the Haar Cascade Classifier
cascadePath = "/home/work/haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)

# Dictionary to store image name & number of face detected in it
num_faces_dict = {}

# Iterate over image directory.
# Read the image, convert it in grayscale, detect faces using HaarCascade Classifier
# Draw a rectangle on the image
for img_fname in os.listdir('/home/work/images/caltech_face_dataset/'):
    img_path = '/home/work/images/caltech_face_dataset/' + img_fname
    im = imread(img_path)
    gray = cv2.cvtColor(im, cv2.COLOR_RGB2GRAY)
    faces = faceCascade.detectMultiScale(im)
    print "Number of faces found in-> ", img_fname, " are ", faces.shape[0]
    num_faces_dict[img_fname] = faces.shape[0]
    for (x,y,w,h) in faces:
        cv2.rectangle(im, (x,y), (x+w,y+h), (255,255,255), 3)
    rect_img_path = '/home/work/face_detected/rect_' + img_fname
    cv2.imwrite(rect_img_path,im)
This code works fine for most of the images but for some of them it throws an
error -
> AttributeError: 'tuple' object has no attribute 'shape' [](http://i.stack.imgur.com/DTjKC.png)
I get error in the line where I print the number of faces. Any help would be
appreciated.
Answer: From your error I understand that you are trying to read the `shape`. But
[shape](http://docs.scipy.org/doc/numpy-1.10.1/reference/arrays.ndarray.html)
is an attribute of `numpy.ndarray`, and you are trying to read it from the
result of face detection, which only returns the positions of the detected
faces. Look at the types: here `img` is an image and `faces` is the result of
face detection. I hope you see the problem.
**Updated with full code. For more clarification**
In [1]: import cv2
In [2]: cap = cv2.VideoCapture(0)
In [3]: ret,img = cap.read()
In [4]: cascadePath = "/home/bikz05/Desktop/SNA_work/opencv-2.4.9/data/haarcascades/haarcascade_frontalface_default.xml"
In [5]: faceCascade = cv2.CascadeClassifier(cascadePath)
In [6]: faces = faceCascade.detectMultiScale(img)
In [7]: type(img)
Out[1]: numpy.ndarray
In [8]: type(faces)
Out[2]: tuple
Look at the difference.
In [9]: img.shape
Out[3]: (480, 640, 3)
In [10]: faces.shape
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-40-392225a0e11a> in <module>()
----> 1 faces.shape
AttributeError: 'tuple' object has no attribute 'shape'
If you want the number of faces, note that the result comes back as a tuple (or an array of rectangles). You can count the faces with `len`, as in `len(faces)`.
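Applied to the loop in the question, a hedged fix is to count with `len()`, which works both when `detectMultiScale` returns an empty tuple and when it returns an Nx4 array (note too that the question computes `gray` but then passes `im`):

    faces = faceCascade.detectMultiScale(gray)   # use the grayscale image
    num_faces = len(faces)                       # len() handles tuple and ndarray alike
    print "Number of faces found in-> ", img_fname, " are ", num_faces
    num_faces_dict[img_fname] = num_faces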
|
Django website on Apache with wsgi failing
Question: So I'm about to launch my first Django website. I currently have a server that has been configured to host PHP websites, and I've decided to test a simple empty project to get familiar with the process.
The Python version on this server is a bit old (2.6), so I couldn't install the latest version of Django; I installed 1.6, and since this is just a test that's not important (I'm going to upgrade the Python version when my website is ready to launch).
So I've installed Django and created a new project called testing in this directory:
/home/sdfds34fre/public_html/
which you can see using this domain
<http://novadmin20.com>
After reading the documentation on Django and WSGI (unfortunately they have removed the
docs for 1.6 and I had to use
[1.9](https://docs.djangoproject.com/en/1.9/howto/deployment/wsgi/modwsgi/#using-
mod-wsgi-daemon-mode)), I've updated my httpd.conf like this:
<VirtualHost 111.111.111.111:80>
    ServerName 111.111.111.111
    DocumentRoot /usr/local/apache/htdocs
    ServerAdmin [email protected]
    <IfModule mod_suphp.c>
        suPHP_UserGroup nobody nobody
    </IfModule>
    <Directory /home/sdfds34fre/public_html/testing/testing>
        <Files wsgi.py>
            Require all granted
        </Files>
    </Directory>
    WSGIDaemonProcess testing python-path=/home/sdfds34fre/public_html/testing:/usr/lib64/python2.6/site-packages/
    WSGIProcessGroup testing
    WSGIScriptAlias / /home/sdfds34fre/public_html/testing/testing/wsgi.py
</VirtualHost>
But even after restarting the httpd service, when I go to
    http://novadmin20.com/testing/
all I see is a directory listing. Am I missing something?
Here is my wsgi.py file:
"""
WSGI config for testing project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/1.6/howto/deployment/wsgi/
"""
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(__file__)))
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "testing.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
Answer: `DocumentRoot` directive is the main root of your problem.
([ref](https://httpd.apache.org/docs/current/mod/core.html#documentroot))
try this config:
<VirtualHost 111.111.111.111:80>
    ServerName novadmin20.com

    WSGIDaemonProcess testing python-path=/home/sdfds34fre/public_html/testing:/usr/lib64/python2.6/site-packages/
    WSGIScriptAlias / /home/sdfds34fre/public_html/testing/testing/wsgi.py

    <Directory /home/sdfds34fre/public_html/testing/testing>
        <Files wsgi.py>
            Order deny,allow
            Require all granted
            WSGIProcessGroup testing
        </Files>
    </Directory>
</VirtualHost>
|
How to properly do importing during development of a python package?
Question: I am a first year computer science student currently working on a small
project that I save to dropbox for school.
I apologize in advance for a potentially trivial question, but having little to no experience, and after trying all the debugging techniques I have been taught, I'm really stuck!
It has the following file structure
school_project/
    __init__.py  #(empty)
    main_functions/
        __init__.py  #(empty)
        render.py
        filter.py
    helper_functions/
        __init__.py  #(empty)
        string.py
        utility.py
Currently, I need to use functions founded in `utility.py` in the file
`render.py`. My first attempt at solving this problem was to do `import
..helper_functions.utility` in the file `render.py`.
Unfortunately, it was met with the following error message.
import ..helper_functions.utility
^
SyntaxError: invalid syntax
First off, I have no idea why this relative import is not working.
Secondly, should I just use an absolute import instead? In the form `import
school_project.helper_functions.utility`? If so, would I then need to add the
directory that `school_project/` is currently in to **PYTHONPATH**? How would
I do this?
Would I just modify my computer's **PATH** and **PYTHONPATH** will adapt
accordingly? Or are they separate entities and the process is a bit more
involved? I've looked at other threads, but they all seem to modify
**PYTHONPATH** at run time in the Python script itself, something I see as a
giant potential source of bugs in the future.
Answer: This is the way you should do it:
from ..helper_functions import utility
This will not work if you run `render.py` directly as a script, because of how
relative imports are resolved. You are supposed to run it as a module, from the
directory that contains `school_project/`:
    python3 -m school_project.main_functions.render
But that is somewhat verbose, and doesn't mix well with a shebang line like
`#!/usr/bin/env python3`.
Alternatively, you can use absolute imports. You'll need to **include the
directory containing your package directory (`school_project/`) in PYTHONPATH**,
and then import like this:
    from school_project.helper_functions import utility
You can also frob the PYTHONPATH in code first, though this is not recommended
for beginners:
import sys
import os
PACKAGE_PARENT = '..'
SCRIPT_DIR = os.path.dirname(os.path.realpath(os.path.join(os.getcwd(), os.path.expanduser(__file__))))
sys.path.append(os.path.normpath(os.path.join(SCRIPT_DIR, PACKAGE_PARENT)))
from school_project.helper_functions import utility
|
Tensorflow error: InvalidArgumentError: Different number of component types.
Question: I want to input batches of shuffled images for training, and I wrote the
code according to [the generic input images in
TensorVision](https://github.com/TensorVision/TensorVision/blob/master/examples/inputs/generic_input.py),
but I get an error. I cannot figure out where it is wrong. This is my code:
import os
import tensorflow as tf
def read_labeled_image_list(image_list_file):
"""
Read a .txt file containing paths and labels.
Parameters
----------
image_list_file : a .txt file with one /path/to/image per line
label : optionally, if set label will be pasted after each line
Returns
-------
List with all filenames in file image_list_file
"""
f = open(image_list_file, 'r')
filenames = []
labels = []
for line in f:
filename, label = line[:-1].split(' ')
filenames.append(filename)
labels.append(int(label))
return filenames, labels
def read_images_from_disk(input_queue):
"""Consumes a single filename and label as a ' '-delimited string.
Parameters
----------
filename_and_label_tensor: A scalar string tensor.
Returns
-------
Two tensors: the decoded image, and the string label.
"""
label = input_queue[1]
file_contents = tf.read_file(input_queue[0])
example = tf.image.decode_png(file_contents, channels=3)
# example = rescale_image(example)
# processed_label = label
return example, label
def random_resize(image, lower_size, upper_size):
"""Randomly resizes an image
Parameters
----------
lower_size:
upper_size:
Returns
-------
a randomly resized image
"""
new_size = tf.to_int32(
tf.random_uniform([], lower_size, upper_size))
return tf.image.resize_images(image, new_size, new_size,
method=0)
def _input_pipeline(filename, batch_size,
processing_image=lambda x: x,
processing_label=lambda y: y,
num_epochs=None):
"""The input pipeline for reading images classification data.
The data should be stored in a single text file of using the format:
/path/to/image_0 label_0
/path/to/image_1 label_1
/path/to/image_2 label_2
...
Args:
filename: the path to the txt file
batch_size: size of batches produced
num_epochs: optionally limited the amount of epochs
Returns:
List with all filenames in file image_list_file
"""
# Reads paths of images together with their labels
image_list, label_list = read_labeled_image_list(filename)
images = tf.convert_to_tensor(image_list, dtype=tf.string)
labels = tf.convert_to_tensor(label_list, dtype=tf.int32)
# Makes an input queue
input_queue = tf.train.slice_input_producer([images, labels],
num_epochs=num_epochs,
shuffle=True)
# Reads the actual images from
image, label = read_images_from_disk(input_queue)
pr_image = processing_image(image)
pr_label = processing_label(label)
image_batch, label_batch = tf.train.batch([pr_image, pr_label],
batch_size=batch_size,
shapes = [256,256,3])
# Display the training images in the visualizer.
tensor_name = image.op.name
tf.image_summary(tensor_name + 'images', image_batch)
return image_batch, label_batch
def test_pipeline():
data_folder = '/home/kang/Documents/work_code_PC1/data/UCLandUsedImages/'
data_file = 'UCImage_Labels.txt'
filename = os.path.join(data_folder, data_file)
image_batch, label_batch = _input_pipeline(filename, 75)
# Create the graph, etc.
init_op = tf.initialize_all_variables()
sess = tf.InteractiveSession()
sess.run(init_op)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
a = sess.run([image_batch, label_batch])
coord.request_stop()
coord.join(threads)
print("Finish Test")
return a
if __name__ == '__main__':
# aa = test_preprocc()
# matplotlib.pyplot.imshow(aa[1])
a1 = test_pipeline()
a2 = test_pipeline()
but it raises an error that has confused me for a long time:
Traceback (most recent call last):
File "<ipython-input-7-e24901ce3365>", line 1, in <module>
runfile('/home/kang/Documents/work_code_PC1/VGG_tensorflow_UCMerced/readUClandUsedImagetxt1.py', wdir='/home/kang/Documents/work_code_PC1/VGG_tensorflow_UCMerced')
File "/usr/local/lib/python2.7/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 714, in runfile
execfile(filename, namespace)
File "/usr/local/lib/python2.7/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 81, in execfile
builtins.execfile(filename, *where)
File "/home/kang/Documents/work_code_PC1/VGG_tensorflow_UCMerced/readUClandUsedImagetxt1.py", line 254, in <module>
a1 = test_pipeline()
File "/home/kang/Documents/work_code_PC1/VGG_tensorflow_UCMerced/readUClandUsedImagetxt1.py", line 244, in test_pipeline
a = sess.run([image_batch, label_batch])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 340, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 564, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 637, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 659, in _do_call
e.code)
InvalidArgumentError: Different number of component types. Types: uint8, int32, Shapes: [[256,256,3]]
[[Node: batch_11/fifo_queue = FIFOQueue[capacity=32, component_types=[DT_UINT8, DT_INT32], container="", shapes=[[256,256,3]], shared_name="", _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Caused by op u'batch_11/fifo_queue', defined at:
Answer: The error is due to wrong argument `shapes` for function
[`tf.train.batch`](https://www.tensorflow.org/versions/r0.9/api_docs/python/io_ops.html#batch).
The argument `shapes` should be left to default, or should be:
> shapes: (Optional) The shapes for each example. Defaults to the inferred
> shapes for tensor_list
Here you are giving `shapes = [256, 256, 3]`, but you should give the shape
for `pr_image` and `pr_label` in a list:
image_batch, label_batch = tf.train.batch(
    [pr_image, pr_label],
    batch_size=batch_size,
    shapes=[[256,256,3], pr_label.get_shape()])
|
Assigning 2d array in vector of indices
Question: Given a 2d array `k = np.zeros((M, N))` and a list of indices in the range `0, 1, ..., M-1` of size `N`, called `places = np.random.random_integers(0, M-1, N)`, how do I assign 1 in each column `i` of `k` at the row index `places[i]`? I would like to achieve that in compact Python style and without any loops.
Examples:
N = 5, M =3
places= 0, 0, 1, 1, 2
Then:
k = [1, 1, 0, 0, 0
0, 0, 1, 1, 0
0, 0, 0, 0, 1]
Answer:
rslt = np.zeros((M, N))
for i, v in enumerate(places): rslt[v,i]=1
Full code:
import numpy as np
N = 5
M=3
#places = np.random.random_integers(0, M-1, N)
places= 0, 0, 1, 1, 2
rslt = np.zeros((M, N))
for i, v in enumerate(places): rslt[v,i]=1
print(rslt)
Out [34]:
[[ 1. 1. 0. 0. 0.]
[ 0. 0. 1. 1. 0.]
[ 0. 0. 0. 0. 1.]]
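Since the question asks for a version without loops, the same assignment can also be written with NumPy fancy indexing; a sketch:

    import numpy as np

    N, M = 5, 3
    places = np.array([0, 0, 1, 1, 2])

    k = np.zeros((M, N))
    k[places, np.arange(N)] = 1  # row taken from places, one column per entry
    print(k)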
|
Why isn't kv binding of the screen change working?
Question: I've defined two buttons: one in kv and one in Python. They are located in
different screens and are used to navigate between them. What I found strange
is that the button that was defined in Python successfully switched the
screen, while the one defined in kv did not. Perhaps I'm not accessing the
`App` class method properly?
Here is the code of the issue:
from kivy.app import App
from kivy.lang import Builder
from kivy.uix.screenmanager import Screen, ScreenManager
from kivy.uix.button import Button
Builder.load_string('''
<MyScreen1>:
    Button:
        id: my_bt
        text: "back"
        on_release: app.back
''')

class MyScreen1(Screen):
    pass

class TestApp(App):
    def here(self, btn):
        self.sm.current = "back"

    def back(self, btn):
        self.sm.current = "here"

    def build(self):
        self.sm = ScreenManager()
        s1 = Screen(name = "here")
        bt = Button(text = "here",
                    on_release = self.here)
        s2 = MyScreen1(name = "back")
        #s2.ids['my_bt'].bind(on_release = self.back)
        self.sm.add_widget(s1)
        s1.add_widget(bt)
        self.sm.add_widget(s2)
        return self.sm

TestApp().run()
So if I define the switching function in kv (`on_release`), I can't go to the
`"here"` screen. But if I uncomment that line in Python and comment the
`on_release: app.back` instead, everything works fine.
I'm pretty sure that this is the correct way to access the current app, since
it doesn't give me any errors (which means that the method was successfully
located)
Answer: That's a subtle difference between kv and python: In kv you actually have to
write the [callback as a function call (a python
expression)](https://kivy.org/docs/api-kivy.lang.html#value-expressions-on-
property-expressions-ids-and-reserved-keywords), in this case:
on_release: app.back(self)
|
Which number is bigger and by how much for random numbers
Question: I'm doing an online tutorial on Python, and it's asking me to write a program
that takes two random integers as parameters and displays which integer is
larger and by how much, using a void function. If both random integers are the
same, `show_larger` should handle that too. In the main section I have written
the code to generate the two random numbers; I'm not sure how to do the next
step and call `show_larger` with the integers as arguments. The example
solutions that are given are "3 is larger than 1 by 2" and "The integers are
equal, both are 3". This is what I have so far:
def main():
    value_1=random.randrange(1,6)
    value_2=random.rangrange(1,6)

def show_larger():
    difference= value_1=-value_2
    if value_1 == value_2:
        print('The integers are equal, both are' + str(value_1))
Answer: This would be a simple way of doing it.
import random

def main():
    value_1 = random.randrange(1,6)
    value_2 = random.randrange(1,6)
    show_larger(value_1, value_2)

def show_larger(value_1, value_2):
    if value_1 == value_2:
        print('The integers are equal, both are ' + str(value_1))
        return
    else:
        print(("value_1" if value_1 > value_2 else "value_2") + " is bigger by " + str(abs(value_1 - value_2)))

main()
|
Push button GPIO.FALLING event getting triggered twice
Question: This is my first attempt at coding a Raspberry Pi and a hardware push button
on a breadboard. The program is simple, when a button press is detected, turn
on an LED on the breadboard for 1 second. My code seems to work, but strangely
every so often one button push triggers the callback function twice. I'm a
total programming noob, so I'm not sure if the problem is with my code, or if
the HW or button is somehow actually falling twice. I'm hoping someone here
can help me troubleshoot this strangeness. Here is my code:
#!/usr/bin/env python
import RPi.GPIO as GPIO
import time

LedPin = 11    # pin11 --- led
BtnPin = 12    # pin12 --- button

def setup():
    GPIO.setmode(GPIO.BOARD)        # Numbers GPIOs by physical location
    GPIO.setup(LedPin, GPIO.OUT)    # Set LedPin's mode is output
    GPIO.setup(BtnPin, GPIO.IN, pull_up_down=GPIO.PUD_UP)  # Set BtnPin's mode is input, and pull up to high level(3.3V)
    GPIO.output(LedPin, GPIO.HIGH)  # Set LedPin high(+3.3V) to off led

def Light(ev=None):
    print 'A button press was detected'
    GPIO.output(LedPin, 0)  # switch led status on
    time.sleep(1)
    GPIO.output(LedPin, 1)  # switch led status off

def loop():
    GPIO.add_event_detect(BtnPin, GPIO.FALLING, callback=Light)  # wait for Button Press (GPIO Falling)
    while True:
        pass  # Don't do anything, sit forever

def destroy():
    GPIO.output(LedPin, GPIO.HIGH)  # led off
    GPIO.cleanup()                  # Release resource

if __name__ == '__main__':      # Program start from here
    setup()
    try:
        loop()
    except KeyboardInterrupt:   # When 'Ctrl+C' is pressed, the child program destroy() will be executed.
        destroy()
Answer: I found a solution: using the code
[here](http://raspberrypi.stackexchange.com/questions/28955/unwanted-multiple-
presses-when-using-gpio-button-press-detection) solved my problem.
If you're like me, and getting random duplicate button press events when using
GPIO.add_event_detect, try the linked code instead.
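For reference, the core of that fix is software debouncing through the `bouncetime` argument of `add_event_detect`; a minimal sketch (the 200 ms value is an assumption you may need to tune):

    # ignore further edges for 200 ms after each detected falling edge
    GPIO.add_event_detect(BtnPin, GPIO.FALLING, callback=Light, bouncetime=200)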
|
are elements of an array in a set?
Question:
import numpy
data = numpy.random.randint(0, 10, (6,8))
test = set(numpy.random.randint(0, 10, 5))
I want an expression whose value is a Boolean array, with the same shape as
`data` (or, at least, one that can be reshaped to the same shape), that tells me
if the corresponding term in `data` is in the set `test`.
E.g., if I want to know which elements of `data` are strictly less than `6`, I
can use a single _vectorized_ expression,
a = data < 6
that computes a `6x8` boolean ndarray. On the contrary, when I try an
apparently equivalent boolean expression
b = data in test
what I get is an exception:
TypeError: unhashable type: 'numpy.ndarray'
* * *
### Addendum: benchmarking different solutions
_Edit: the possibility #4 below gives wrong results, thanks to hpaulj and
Divakar for getting me on the right track._
Here I compare four different possibilities,
1. What was proposed by Divakar, `np.in1d(data, np.hstack(test))`.
2. One proposal by hpaulj, `np.in1d(data, np.array(list(test)))`.
3. Another proposal by hpaulj, `np.in1d(data, np.fromiter(test, int))`.
4. ~~What was proposed in an answer removed by its author, whose name I don't remember, `np.in1d(data, test)`.~~
Here it is the Ipython session, slightly edited to avoid blank lines
In [1]: import numpy as np
In [2]: nr, nc = 100, 100
In [3]: top = 3000
In [4]: data = np.random.randint(0, top, (nr, nc))
In [5]: test = set(np.random.randint(0, top, top//3))
In [6]: %timeit np.in1d(data, np.hstack(test))
100 loops, best of 3: 5.65 ms per loop
In [7]: %timeit np.in1d(data, np.array(list(test)))
1000 loops, best of 3: 1.4 ms per loop
In [8]: %timeit np.in1d(data, np.fromiter(test, int))
1000 loops, best of 3: 1.33 ms per loop
~~`In [9]: %timeit np.in1d(data, test)`
`1000 loops, best of 3: 687 Β΅s per loop`~~
In [10]: nr, nc = 1000, 1000
In [11]: top = 300000
In [12]: data = np.random.randint(0, top, (nr, nc))
In [13]: test = set(np.random.randint(0, top, top//3))
In [14]: %timeit np.in1d(data, np.hstack(test))
1 loop, best of 3: 706 ms per loop
In [15]: %timeit np.in1d(data, np.array(list(test)))
1 loop, best of 3: 269 ms per loop
In [16]: %timeit np.in1d(data, np.fromiter(test, int))
1 loop, best of 3: 274 ms per loop
~~`In [17]: %timeit np.in1d(data, test)`
`10 loops, best of 3: 67.9 ms per loop`~~
In [18]:
~~The better times are given by the (now) anonymous poster's answer.~~
It turns out that the anonymous poster had a good reason to remove their
answer, the results being wrong!
As commented by hpaulj, the documentation of `in1d` contains a warning against
using a `set` as the second argument, but I'd prefer an explicit failure if the
computed results could be wrong.
That said, the solution using `numpy.fromiter()` has the best numbers...
Answer: I am assuming you are looking to find a boolean array to detect the presence
of the `set` elements in `data` array. To do so, you can extract the elements
from `set` with
[`np.hstack`](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.hstack.html)
and then use
[`np.in1d`](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.in1d.html)
to detect presence of **any** element from `set` at **each** position in
`data`, giving us a boolean array of the same size as `data`. Since, `np.in1d`
flattens the input before processing, so as a final step, we need to reshape
the output from `np.in1d` back to its original `2D` shape. Thus, the final
implementation would be -
np.in1d(data,np.hstack(test)).reshape(data.shape)
Sample run -
In [125]: data
Out[125]:
array([[7, 0, 1, 8, 9, 5, 9, 1],
[9, 7, 1, 4, 4, 2, 4, 4],
[0, 4, 9, 6, 6, 3, 5, 9],
[2, 2, 7, 7, 6, 7, 7, 2],
[3, 4, 8, 4, 2, 1, 9, 8],
[9, 0, 8, 1, 6, 1, 3, 5]])
In [126]: test
Out[126]: {3, 4, 6, 7, 9}
In [127]: np.in1d(data,np.hstack(test)).reshape(data.shape)
Out[127]:
array([[ True, False, False, False, True, False, True, False],
[ True, True, False, True, True, False, True, True],
[False, True, True, True, True, True, False, True],
[False, False, True, True, True, True, True, False],
[ True, True, False, True, False, False, True, False],
[ True, False, False, False, True, False, True, False]], dtype=bool)
|
string (file1.txt) search from file2.txt
Question: `file1.txt` contains usernames, i.e.
tony
peter
john
...
`file2.txt` contains user details, just one line for each user details, i.e.
alice 20160102 1101 abc
john 20120212 1110 zjc9
mary 20140405 0100 few3
peter 20140405 0001 io90
tango 19090114 0011 n4-8
tony 20150405 1001 ewdf
zoe 20000211 0111 jn09
...
I want to get a shortlist of user details from `file2.txt` for the users listed
in `file1.txt`, i.e.
john 20120212 1110 zjc9
peter 20140405 0001 io90
tony 20150405 1001 ewdf
How to use python to do this?
Answer:
import pandas as pd

df1 = pd.read_csv('df1.txt', header=None)
df2 = pd.read_csv('df2.txt', header=None)

df1[0] = df1[0].str.strip()  # strip the whitespace around the username field
df2 = df2[0].str[0:-2].str.split(' ').apply(pd.Series)  # drop the trailing whitespace and split the line into columns
df = df1.merge(df2)
Out[26]:
0 1 2 3
0 tony 20150405 1001 ewdf
1 peter 20140405 0001 io90
2 john 20120212 1110 zjc9
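If you'd rather not depend on pandas, a plain-Python sketch (assuming whitespace-separated fields with the username first):

    with open('file1.txt') as f1:
        usernames = set(line.strip() for line in f1)

    with open('file2.txt') as f2:
        for line in f2:
            fields = line.split()
            if fields and fields[0] in usernames:
                print(line.rstrip())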
|
Python: Encode ordered categories/factors to numeric w/ specific encoding conversion
Question: TLDR: What's the most concise way to encode ordered categories to numeric w/ a
particular encoding conversion? (i.e. one that preserves the ordered nature of
the categories).
["Weak","Normal","Strong"] --> [0,1,2]
* * *
Assuming I have an **ordered** categorical variable like similar to the
example from
[here](http://chrisalbon.com/python/convert_categorical_to_numeric_naively.html):
import pandas as pd
raw_data = {'patient': [1, 1, 1, 2, 2],
            'obs': [1, 2, 3, 1, 2],
            'treatment': [0, 1, 0, 1, 0],
            'score': ['strong', 'weak', 'normal', 'weak', 'strong']}
df = pd.DataFrame(raw_data, columns = ['patient', 'obs', 'treatment', 'score'])
df
   patient  obs  treatment   score
0        1    1          0  strong
1        1    2          1    weak
2        1    3          0  normal
3        2    1          1    weak
4        2    2          0  strong
I can create a function and apply it across my dataframe to get the desired
conversation:
def score_to_numeric(x):
    if x=='strong':
        return 3
    if x=='normal':
        return 2
    if x=='weak':
        return 1

df['score_num'] = df['score'].apply(score_to_numeric)
df
   patient  obs  treatment   score  score_num
0        1    1          0  strong          3
1        1    2          1    weak          1
2        1    3          0  normal          2
3        2    1          1    weak          1
4        2    2          0  strong          3
**My question: Is there any way I can do this inline, without having to specify
a separate `score_to_numeric` function?**
Maybe using some kind of lambda or replace functionality? Alternatively, this
[SO](http://stackoverflow.com/questions/24458645/label-encoding-across-
multiple-columns-in-scikit-learn) article suggests that Sklearn's
LabelEncoder() is pretty powerful, and by extension may somehow have a way of
handling this, but I haven't figured it out...
Answer: you can use `map()` in conjunction with a dictionary, containing your mapping:
In [5]: d = {'strong':3, 'normal':2, 'weak':1}
In [7]: df['score_num'] = df.score.map(d)
In [8]: df
Out[8]:
patient obs treatment score score_num
0 1 1 0 strong 3
1 1 2 1 weak 1
2 1 3 0 normal 2
3 2 1 1 weak 1
4 2 2 0 strong 3
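Since the categories are ordered, another option worth knowing is pandas' ordered `Categorical`, whose integer codes respect the ordering you specify (here 0 = weak, 1 = normal, 2 = strong):

    df['score_num'] = pd.Categorical(df['score'],
                                     categories=['weak', 'normal', 'strong'],
                                     ordered=True).codes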
|
How do I print the output of the exec() function in python 3.5?
Question: How can I pass a Python command to the `exec()` function, wait for completion,
and print out the output of everything that just happened?
Much of the code out there uses the `StringIO` module, which no longer exists
under that name in Python 3.5.
Answer: You can't. [Exec just executes in place and returns
nothing](https://docs.python.org/3/library/functions.html#exec). Your best bet
would be to write the command into a script and execute it with
[subprocess](https://docs.python.org/3/library/subprocess.html) if you really
want to catch all the output.
Here's an example for you:
#!/usr/bin/env python3
from sys import argv, executable
from tempfile import NamedTemporaryFile
from subprocess import check_output

with NamedTemporaryFile(mode='w') as file:
    file.write('\n'.join(argv[1:]))
    file.write('\n')
    file.flush()

    output = check_output([executable, file.name])

print('output from command: {}'.format(output))
And running it:
$ ./catchandrun.py 'print("hello world!")'
output from command: b'hello world!\n'
$
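That said, if an in-process capture is acceptable, `contextlib.redirect_stdout` combined with `io.StringIO` (both in the 3.5 standard library; `StringIO` lives in the `io` module in Python 3) can collect whatever `exec()` prints. A sketch:

    import io
    from contextlib import redirect_stdout

    buf = io.StringIO()
    with redirect_stdout(buf):
        exec('print("hello world!")')
    print('output from command: {!r}'.format(buf.getvalue()))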
|
Python shell is restarted every time I do βrun moduleβ inside editor?
Question: I am using Python 2 on Ubuntu. When I write `import webbrowser` followed by
`webbrowser.open("fb.com")` and run the module, the shell restarts and nothing
happens. What is the problem here?
Answer: It's hard to say without any code presented, but most likely you have not
defined which browser it should use and/or you don't have one set as a
default.
Try registering a controller for the browser you want:
import webbrowser
ff_controller = webbrowser.get("firefox")
ff_controller.open("fb.com")
See additional available browser controllers [in the
manual](https://docs.python.org/2/library/webbrowser.html#webbrowser.register).
If this isn't what's wrong post some code.
|
How to display graph and Video file in a single frame/Window in python?
Question: [I want something similar to this image, which shows the layout I'm
after.](http://i.stack.imgur.com/m4FXc.jpg)
As an additional note, I want to generate a graph based on the video file's
timing. For example, at 10 sec one graph should be generated, and after 20 sec
another graph should be generated.
Is this possible?
Answer: I wanted to show that it's even possible to update the plot for each frame at
video rate.
This example will calculate the average pixel intensity along the x-axis and
update the plot for every frame. Since you want to update every 10 sec, you
will need some modification. This Clip (Jenny Mayhem) is taken from
<https://www.youtube.com/watch?v=cOcgOnBe5Ag>
[](http://i.stack.imgur.com/s3heu.jpg)
import cv2
import numpy as np
import matplotlib
matplotlib.use('WXAgg')  # not sure if this is needed
from matplotlib.figure import Figure
from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg as FigureCanvas
import wx

class VideoPanel(wx.Panel):
    def __init__(self, parent, size):
        wx.Panel.__init__(self, parent, -1, size=size)
        self.Bind(wx.EVT_PAINT, self.OnPaint)
        self.parent = parent
        self.SetDoubleBuffered(True)

    def OnPaint(self, event):
        dc = wx.BufferedPaintDC(self)
        dc.Clear()
        if self.parent.bmp:
            dc.DrawBitmap(self.parent.bmp, 0, 0)

class MyFrame(wx.Frame):
    def __init__(self, fp):
        wx.Frame.__init__(self, None)

        self.bmp = None
        self.cap = cv2.VideoCapture(fp)
        ret, frame = self.cap.read()
        h, w, c = frame.shape
        print w, h, c

        videoPanel = VideoPanel(self, (w, h))

        self.videotimer = wx.Timer(self)
        self.Bind(wx.EVT_TIMER, self.OnUpdateVideo, self.videotimer)
        self.videotimer.Start(1000/30.0)

        self.graph = Figure()  # matplotlib figure
        plotPanel = FigureCanvas(self, -1, self.graph)
        self.ax = self.graph.add_subplot(111)
        y = frame.mean(axis=0).mean(axis=1)
        self.line, = self.ax.plot(y)
        self.ax.set_xlim([0, w])
        self.ax.set_ylim([0, 255])

        sizer = wx.BoxSizer(wx.HORIZONTAL)
        sizer.Add(videoPanel)
        sizer.Add(plotPanel)
        self.SetSizer(sizer)
        self.Fit()
        self.Show(True)

    def OnUpdateVideo(self, event):
        ret, frame = self.cap.read()
        if ret:
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            img_buf = wx.ImageFromBuffer(frame.shape[1], frame.shape[0], frame)
            self.bmp = wx.BitmapFromImage(img_buf)

            # modify this part to update every 10 sec etc...
            # right now, it's realtime update (every frame)
            y = frame.mean(axis=0).mean(axis=1)
            self.line.set_ydata(y)
            self.graph.canvas.draw()

        self.Refresh()

if __name__ == '__main__':
    fp = "Jenny Mayhem and The Fuzz Orchestrator - Gypsy Gentleman (Live at the Lodge on Queen).mp4"
    app = wx.App(0)
    myframe = MyFrame(fp)
    app.MainLoop()
|
Python and CSV; how to truncate all values in a column?
Question: Given a simple CSV file like this:
Django,Gunslinger,101-707
KingSchultz,Dentist,205-707
Tatum,Marshall,615-707
Broomhilda,Wife,910-707
...,...,...
How do you truncate all the values in the last column so that only the first
three digits remain? (unrelated: so they can be used in math operations)
Desired CSV:
Django,Gunslinger,101
KingSchultz,Dentist,205
Tatum,Marshall,615
Broomhilda,Wife,910
...,...,...
Here is what I have tried so far:
import csv
import re

r = csv.reader(open(input.csv))
for row in r:
    re.sub('\-.*', '', row[3])
    writer = csv.writer(open('output.csv', 'w'))
    writer.writerow(row)
I've verified that the `regex` in `re.sub` works correctly. I have tried dozens
of variations and spent many hours searching, but cannot get the desired output.
Answer: `re.sub` returns a new string with the substitution applied; it does not modify
the string passed as the third argument. You have to assign the result back.
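For reference, a corrected sketch: assign the result back, index the third column as `row[2]` (the file has only three columns), and create the writer once outside the loop:

    import csv
    import re

    with open('input.csv') as src, open('output.csv', 'w') as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            row[2] = re.sub('\-.*', '', row[2])  # keep only the digits before '-'
            writer.writerow(row)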
|
How to Reduce Running (for loop), Python
Question: 1. The following code takes too much running time (more than 5 min).
2. Are there any good ways to reduce the running time?
data.head() # more than 10 year data, Total iteration is around 4,500,000
Open High Low Close Volume Adj Close \
Date
2012-07-02 125500.0 126500.0 124000.0 125000.0 118500 104996.59
2012-07-03 126500.0 130000.0 125500.0 129500.0 239400 108776.47
2012-07-04 130000.0 132500.0 128500.0 131000.0 180800 110036.43
2012-07-05 129500.0 131000.0 127500.0 128500.0 118600 107936.50
2012-07-06 128500.0 129000.0 126000.0 127000.0 149000 106676.54
3. My Code is
import pandas as pd
import numpy as np
from pandas.io.data import DataReader
import matplotlib.pylab as plt
from datetime import datetime

def DataReading(code):
    start = datetime(2012,7,1)
    end = pd.to_datetime('today')
    data = DataReader(code,'yahoo',start=start,end=end)
    data = data[data["Volume"] != 0]
    return data

data['Cut_Off'] = 0
Cut_Pct = 0.85
for i in range(len(data['Open'])):
    if i==0:
        pass
    for j in range(0,i):
        if data['Close'][j]/data['Close'][i-1]<=Cut_Pct:
            data['Cut_Off'][j] = 1
            data['Cut_Off'][i] = 1
        else
            pass
4. The above code takes more than 5 min. Of course, there are "elif" branches that follow (I didn't include them above); I just tested the code shown.
Are there any good ways to reduce its running time?
5. Additionally, the buying list is:
Open High Low Close Volume Adj Close \
Date
2012-07-02 125500.0 126500.0 124000.0 125000.0 118500 104996.59
2012-07-03 126500.0 130000.0 125500.0 129500.0 239400 108776.47
2012-07-04 130000.0 132500.0 128500.0 131000.0 180800 110036.43
2012-07-05 129500.0 131000.0 127500.0 128500.0 118600 107936.50
2012-07-06 128500.0 129000.0 126000.0 127000.0 149000 106676.54
2012-07-09 127000.0 133000.0 126500.0 131500.0 207500 110456.41
2012-07-10 131500.0 135000.0 130500.0 133000.0 240800 111716.37
2012-07-11 133500.0 136500.0 132500.0 136500.0 223800 114656.28
For example, I bought 10 shares on 2012-07-02 at 125,500, and as time goes by
daily, if the close price drops under 85% of the buying price (125,500), then I
will sell the 10 shares at 85% of the buying price.
To reduce running time I also built a buying list (not shown here), but that
also takes more than 2 min using a for loop.
Answer: Rather than iterating over the 4.5MM rows in your data, use pandas' built-in
indexing features. I've re-written the loop at the end of your code as below:
data.loc[data.Close/data.Close.shift(1) <= Cut_Pct,'Cut_Off'] = 1
.loc locates rows that meet the criteria in the first argument. .shift shifts
the rows up or down depending on the argument passed.
|
pass json file as command line argument through parser , is this possible?
Question: I need to overwrite parameters of a Python dictionary from a JSON file passed
through a command-line argument parser. The JSON file is located in the current
working directory, but its name can be dynamic, so I want something like below:
> python python_script --infile json_file
# python_script:
if __name__ == "__main__":
    profileInfo = dict()
    profileInfo['profile'] = "enterprisemixed"
    profileInfo['nodesPerLan'] = 50
# json_file:
{
    "profile":"adhoc",
    "nodesPerLan" : 4
}
I tried to add the following lines, but don't know how to load this JSON data
into the Python dictionary:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--infile', nargs = 1, help="JSON file to be processed",type=argparse.FileType('r'))
arguments = parser.parse_args()
Answer: Read the JSON file with the name given to `--infile` and update your
`profileInfo`:
import json
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--infile', nargs=1,
                    help="JSON file to be processed",
                    type=argparse.FileType('r'))
arguments = parser.parse_args()

# Loading a JSON object returns a dict.
d = json.load(arguments.infile[0])

profileInfo = {}
profileInfo['profile'] = "enterprisemixed"
profileInfo['nodesPerLan'] = 50
print(profileInfo)

# Overwrite the profileInfo dict
profileInfo.update(d)
print(profileInfo)
|
using string.strip() in python to extract specific coloumns
Question:
import requests
from bs4 import BeautifulSoup

f = open('path to create /Price.csv','w')
errorFile = open('path to create /errorPrice.txt','w')
year = 2012; month = 1; day =1

if year<= 2016:
    if day > 32:
        month += 1
        day = 1
    if month >12:
        year += 1
        month = 1
    url = 'http://nepalstock.com.np/main/todays_price/index/1/stock-name/desc/YTozOntzOjk6InN0YXJ0RGF0ZSI7czoxMDoiMjAxNi0wNi0wOSI7czoxMjoic3RvY2stc3ltYm9sIjtzOjA6IiI7czo2OiJfbGltaXQiO3M6MjoiNTAiO30?startDate='+str(year)+'-'+str(month)+'-'+str(day)+'&stock-symbol=&_limit=500'
    res = requests.get(url)
    soup = BeautifulSoup(res.text, 'lxml')
    for child in soup.findAll('table'):
        for row in child.findAll('tr')[2:]:
            for col in row.findAll('td'):
                try:
                    SN = col[3].string.strip()
                    f.write(SN+'\n')
                except Exception as e:
                    errorFile.write (str(day) + '*************'+ str(e)+'***********************'+ str(col)+'\n')
                    pass
    #day += 1
f.close
errorFile.close
I wanted to extract col[3], but it wouldn't work and shows nothing I can trace
back in the error file, although I am a complete noob and may be mistaken on
that bit.
Answer: A few things about your code before your actual error:
Use the [`with`](http://effbot.org/zone/python-with-statement.htm) statement
to open files. Manually opening and closing files is unnecessary.
Use `res.content` instead of `res.text` if you do not plan on printing the web
page. If you are passing the page source to another function like `soup.parse`
always use `res.content`.
About your problem:
`row.findAll('td')` is the list of all the table cells, from which you need the
element at index 3, so you do not need to iterate over it.
Just use it like this:
for child in soup.findAll('table'):
    for row in child.findAll('tr')[2:-4]:
        cols = row.findAll('td')
        SN = cols[3].string.strip()
        print(SN)
Also, as you can see from the `-4`, the last 4 rows do not contain any data either.
|
dnspython not updated when changing resolv.conf
Question: This snippet works perfect
import dns
import dns.resolver
default = dns.resolver.get_default_resolver()
nameserver = default.nameservers[0]
except that if I change /etc/resolv.conf by hand and call the
get_default_resolver function again, it doesn't give me the updated address. I
need to restart the Python console to see the change take effect.
What am I missing? Should I make the change to resolv.conf using the same
library?
Thanks in advance,
Answer: If you're on a non-Debian based Linux and using glibc then you have to be
aware that glibc caches resolv.conf and won't look at it again unless
explicitly told to. Essentially it is up to your application to tell glibc if
resolv.conf has changed and needs to be reloaded by calling `__res_init`. See
[Python not getting IP if cable connected after script has
started](http://stackoverflow.com/questions/13606584/python-not-getting-ip-if-
cable-connected-after-script-has-started) and
<https://sourceware.org/bugzilla/show_bug.cgi?id=984> for details.
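A sketch of forcing that reload from Python, assuming glibc on Linux:

    import ctypes

    # ask glibc to re-read /etc/resolv.conf
    libc = ctypes.CDLL('libc.so.6')
    libc.__res_init()

Note too that `dns.resolver.get_default_resolver()` caches its result in a module-level global, so constructing a fresh `dns.resolver.Resolver()` is another way to make dnspython itself re-read the file.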
|
Trouble importing shared object in Python
Question: I am attempting to import a shared object into my python code, like so:
import bz2
to which I get the following error:
> ImportError: ./bz2.so: cannot open shared object file: No such file or
> directory
Using the imp module, I can verify that Python can actually find it:
>>> import imp
>>> imp.find_module('bz2')
(<open file 'bz2.so', mode 'rb' at 0xb6f085f8>, 'bz2.so', ('.so', 'rb', 3))
The shared object file is in my PYTHONPATH and my LD_LIBRARY_PATH.
Any insights into why I can't import this shared object? Thanks!
Answer: bz2.so is the shared object that provides the bzip2 functionality (which was
written in C) for the Python module. You don't import it directly; when you do
`import bz2`, you are actually importing a Python module called bz2 which then
uses the .so file.
This usually means you haven't got the development version of the bzip2 library
installed, or you don't have a C compiler set up for the pip installer to use
to build this for you.
You don't say which Linux you are using, but the general pattern is to look in
the package manager for bzip2 dev or devel packages (e.g. libbz2-dev on
Debian/Ubuntu or bzip2-devel on RHEL/CentOS) and install those.
How do I implement this similarity measure in Python?
Question: I tried implementing the distance measure shown in the image, in Python as
such:
import numpy as np
A = [1, 2, 3, 4, 5, 6, 7, 8, 1]
B = [1, 2, 3, 2, 4, 6, 7, 8, 2]
A = np.asarray(A).flatten()
B = np.asarray(B).flatten()
x = np.sum(1 - np.divide((1 + np.minimum(A, B)), (1 + np.maximum(A, B))))
print("Distance: {}".format(x))
but after testing, it doesn't seem to be the right approach. The maximum value
returned if there's no similarity at all between the given vectors should be
1, with 0 as perfect similarity. A and B in the image are both vectors of
size m.
Edit: forgot to add that I ignored the part for min(A, B) < 0 as that won't
ever happen for my purposes.
[](http://i.stack.imgur.com/QH48e.png)
Answer: This should work. First, we create a matrix `AB` by stacking the columns and
calculate the minimum vector `AB_min` and maximum vector `AB_max` out of that.
Then, we compute `D` as you defined it, making use of `numpy.where` to specify
the two conditions. After that, we sum the elements to get the `D_proposed` as
you defined it. It gives a value of `0.9` for this example.
import numpy as np
A = [1, 2, 3, 4, 5, 6, 7, 8, 1]
B = [1, 2, 3, 2, 4, 6, 7, 8, 2]
AB = np.column_stack((A,B))
AB_min = np.min(AB,1)
AB_max = np.max(AB,1)
print AB_min
print AB_max
D = np.where(AB_min >= 0.,
             1. - (1. + AB_min) / (1. + AB_max),
             1. - (1. + AB_min + abs(AB_min)) / (1. + AB_max + abs(AB_min)))
print D
D_proposed = np.sum(D)
print D_proposed
|
Python 3 - Limiting Memory Usage from a Script
Question: I am using the itertools module to create a list of possible permutations for
the order of letters in a rather long sentence. However, every time I do so I
run out of memory (I have 16GB RAM before anyone asks).
I don't have the code on this machine; however, it is not inefficient code,
since it is a carbon copy of one of the examples in the documentation. There
are simply too many permutations, which Python is trying to do all at once.
The question is: is there a way of limiting the amount of memory that Python
uses, maybe by giving it a pool? I know I should probably change the code, but
I would benefit from a pool of memory for other projects as well.
I cannot use the Theano module because I am using Conda, which is
incompatible. I have tried the gc module with little effect, but again, the
code is an example, and the sentence is about a dozen characters, printing the
list on screen.
**Edit:**
Here's the main section of my code. _I do not suggest running it_ since it
causes my machine to crash.
import itertools
f = open('File.txt','w')
for key, value in dict.items():
    print(list(itertools.permutations((str(counter-value)))), file=f)
The dict variable is a 76 element dictionary containing different characters
which the code checks. The actual function of the code is complicated and fits
into a hundred or so line script, but this is the point that I'm having
problems with. If the code works, it should be calculating literally millions
of permutations. My problem is that it tries to do them all at once. I want to
know if there is some way I can limit it, even if it means the code will run
slower.
Answer: You can just loop over the permutations and write each of them to the file,
like this:
import itertools
f = open('File.txt','w')
for key, value in dict.items():
    for i in itertools.permutations((str(counter-value))):
        print(i, file=f)
As it is a generator, the items are retrieved one-by-one so your memory won't
be exhausted.
|
Python Matplotlib histogram bin shift
Question: I have created a cumulative (CDF) histogram from a list which is what I
wanted. Then I subtracted a fixed value (by using `x = [ fixed_value - i for i
in myArray]`) from each element in the list to essentially just shift the bins
over a fixed amount. This however makes my CDF histogram inverted in the
y-axis. I thought it should look identical to the original except the x-axis
(bins) are shifted by a fixed amount.
So can someone explain what I am doing wrong or give a solution to just
shifting the bins over instead of recreating another histogram with a new
array?
EDIT:
Sometimes I see this error:
>>> plt.hist(l,bins, normed = 1, cumulative = True)
C:\Python27\lib\site-packages\matplotlib\axes.py:8332: RuntimeWarning: invalid value encountered in true_divide
m = (m.astype(float) / db) / m.sum()
But it is not exclusive to the second subtracting case. And plt.hist returns
an NaN array. Not sure if this helps but I am getting closer to figuring it
out I think.
EDIT: ~~Here are my two graphs. The first is the "good" one. The second is the
shifted "bad" one:~~
All I want to do is shift the first histogram's bins over by a fixed amount.
However, when I subtract the same value from each element of the list behind
the histogram, it seems to alter the histogram in both the y-direction and the
x-direction. Also, note how the first histogram contains all negative values
while the second is positive. I seemed to fix it by keeping the values negative
(I use `original_array[i] - fixed_value < 0` instead of `fixed_value -
original_array[i] > 0`).
Answer: I think that the problem might be in how you calculate the shifted values.
This example works fine for me:
import numpy as np
import matplotlib.pylab as pl
original_array = np.random.normal(size=100)
bins = np.linspace(-5,5,11)
pl.figure()
pl.subplot(121)
pl.hist(original_array, bins, normed=1, cumulative=True, histtype='step')
offset = -2
modified_array = [original_value + offset for original_value in original_array]
pl.subplot(122)
pl.hist(modified_array, bins, normed=1, cumulative=True, histtype='step')
[](http://i.stack.imgur.com/QYQOO.png)
Note that `numpy` might make your life easier (and for large sizes of
`original_array`, a _lot_ faster); for example if your data is a
`numpy.array`, you can also write it as:
modified_array = original_array + offset
|
cx_Oracle: DLL load failed
Question: I'm trying to `import cx_Oracle` in Python and getting an:
ImportError: DLL load failed: The specified procedure could not be found.
[This post](http://stackoverflow.com/questions/24124110/cx-oracle-dll-load-
failed) suggests that there's a mismatch between the bits of cx_Oracle and the
Oracle Client, but I don't believe that's the case in my situation. I
downloaded cx_Oracle for 64-bit Python 3.5 from the [Unofficial Windows
Binaries page](http://www.lfd.uci.edu/~gohlke/pythonlibs/) and have confirmed
that the 64-bit install of Oracle is the first one on my `PATH` (I also have a
32-bit copy, but it comes after). I am using the "standard" Oracle package
FWIW, not the Instant Client. Also, I have 11g Oracle but the only available
binary of cx_Oracle was 12c. Will that make a difference?
Answer: I've had a few DLL Load failures myself when trying to use cx_Oracle (also
using 11g).
1. I've fixed it by downloading **instant_client-basic (12)**. (I assume you're using windows.)
If you use Linux, there will be some environment variables you will need to
change (you can read all about it here
<https://blogs.oracle.com/opal/entry/configuring_python_cx_oracle_and>).
2. I don't know why you downloaded cx_Oracle from that unofficial website, but I'd give the official Python package index, <https://pypi.python.org/pypi/cx_Oracle>, a try.
Hope this helps.
|
Python function to return listed imports gives empty result, but works line by line
Question: Adapting code from
[How to list imported
modules?](http://stackoverflow.com/questions/4858100/how-to-list-imported-
modules)
to look like
def imports():
    import types
    Module = None
    Modules = list()
    for name, val in globals().items():
        if isinstance(val, types.ModuleType):
            Module = val.__name__
            Modules.append(Module)
    return Modules
and saved as imports.py. Intended to be activated in the form
Modules = imports.imports()
Instead, returns an empty list `Modules`.
Have looked here
[Python.org classes +
generators](https://docs.python.org/3.4/tutorial/classes.html#generators)
here
[Python.org Data structures: list
comprehensions](https://docs.python.org/3.4/tutorial/datastructures.html#list-
comprehensions)
and here
[Python return list from
function](http://stackoverflow.com/questions/9317025/python-return-list-from-
function)
and not getting it.
When I run the function body line by line I get the desired result (a list of
the imported modules stored in `Modules`). When it's run as a defined function
it gives an empty list. Why is my returned list variable empty? I've also
tried `yield` with the same result.
Answer: The [`globals()`
function](https://docs.python.org/3/library/functions.html#globals) returns
the global namespace for the _module it is used in_. You are seeing the
modules that are imported in your `imports` module, and there are 0 such
imports. You can't use this function if you wanted to access the globals of
the code that called your function.
You'd have to use the globals of the _calling frame_ instead; in CPython you
can do this with the [`sys._getframe()`
function](https://docs.python.org/3/library/sys.html#sys._getframe), which
returns a frame object; the `f_globals` attribute on that frame is the global
namespace of the caller of your function:
caller_frame = sys._getframe(1)
for name, val in caller_frame.f_globals.items():
Alternatively, have the caller pass in a namespace; that way you can list the
modules used in _any_ module:
def imports(namespace=None):
    import types, sys
    if namespace is None:
        # default: caller globals
        namespace = sys._getframe(1).f_globals
    modules = []
    for name, val in namespace.items():
        if isinstance(val, types.ModuleType):
            module_name = val.__name__
            modules.append(module_name)
    return modules
The above version still uses `sys._getframe(1)` if you call the function
without arguments. But you could use it on any dictionary now:
import string
print(imports(vars(string)))
This uses the [`vars()`
function](https://docs.python.org/3/library/functions.html#vars) to grab the
namespace dictionary of the `string` module, for example. This produces:
>>> import string
>>> imports(vars(string))
['re', '_string']
|
Python / Elaphe generates broken barcodes
Question: I am trying to generate code128 barcodes using Python/Elaphe, which is based
on Barcode Writer In Pure Postscript (BWIPP). Strangely, the barcodes
generated by Elaphe don't match the ones generated by BWIPP and do not conform
to code 128 standard.
In particular, I tried a simple example, the generation of a barcode for the
letter 'A' (capital A):
from elaphe import barcode
b = barcode('code128', 'A')
b.show()
That works just fine, but the generated barcode is missing the right part. It
is 35 pixels wide, where it should be 46. The left part of the barcode matches
the one generated by BWIPP and every other code128 generator - it's only the
right section that is missing.
Anyone know what's wrong?
(Using elaphe 0.6.0 with python 2.7.10 on Kubuntu 15.10)
Answer: See this bug report:
<https://bitbucket.org/whosaysni/elaphe/issues/84/code-128-generation-
produces-unreadable>
It seems that this bug is fixed in the current source version, although the
bug is still marked as new. This is the patch which fixed it, imho:
<https://bitbucket.org/whosaysni/elaphe/commits/19dd8f58c76ac75914e3e4d8ae7db1b9489cbcb8?at=develop>
This patch is from 2014-10-22; the current version, elaphe 0.6.0 on PyPI, is
from 2013-12-05. If you installed via pip, you have the buggy version.
There is a Python 3 enabled fork of this project,
<https://pypi.python.org/pypi/elaphe3>, which was uploaded on 2016-05-25.
So this fork might contain the necessary bugfix. You could remove elaphe and
install elaphe3.
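If you want to try that route, the switch would look roughly like this (a
sketch; the package names are the ones mentioned above):

    pip uninstall elaphe
    pip install elaphe3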
However, considering that elaphe (at least the non-3 version) looks pretty
abandoned and has GhostScript and PIL as dependencies, I would look for
another solution.
|
Protect against null environment variables when using os.path.expandvars
Question: How can I protect against Python's `os.path.expandvars()` treatment of
null/unset environment variables?
From
[os.path](https://docs.python.org/2/library/os.path.html#os.path.expandvars):
> Malformed variable names and references to non-existing variables are left
> unchanged.
>>> os.path.expandvars('$HOME/stuff')
'/home/dennis/stuff'
>>> os.path.expandvars('foo/$UNSET/bar')
'foo/$UNSET/bar'
I could perform this step separately from other path processing
(`expanduser()`, `realpath()`, `normpath()`, etc.) instead of chaining them
all together and check to see if the result is unchanged, but that is normal
when there are no variables present - so I would also have to parse the string
to see if it has any variables. I fear that may not be robust enough.
The issue comes into play when creating a file using the result. I end up with
a file with the variable name as a literal part of the file's name. I want to
instead reject the input with an exception.
Answer: You could use `string.Template`, which uses a similar dollar-sign syntax for
interpolation of variables but will raise `KeyError` if something doesn't
exist rather than leaving it in.
import os
from string import Template
print(Template('$HOME/stuff').substitute(os.environ))
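To turn the `KeyError` into the rejection you want, a thin wrapper is enough
(a sketch; `expand_or_raise` is a hypothetical helper name):

    import os
    from string import Template

    def expand_or_raise(path):
        # Raises ValueError if a referenced variable is not set,
        # instead of leaving the '$NAME' literal in the result.
        try:
            return Template(path).substitute(os.environ)
        except KeyError as exc:
            raise ValueError('Unset environment variable: %s' % exc)

    expand_or_raise('$HOME/stuff')     # expands normally
    expand_or_raise('foo/$UNSET/bar')  # raises ValueError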
|
Gunicorn Django [CRITICAL] WORKER TIMEOUT
Question: Since I did a pip install google-api-python-client I have my Gunicorn workers
stoping after timeout.
Django==1.5.3
Gunicorn==0.12.2
I'm not really sure it comes from the pip install, but I did nothing else in
particular except a database migration, which ran without error.
I use this command for Gunicorn:
gunicorn_django myapp.py --bind 127.0.0.1:8181 --timeout 120 --log-file /tmp/myapp.gunicorn.log --log-level info --workers 8 --pid /tmp/myapp.pid
I tried the --spew parameter to get some trace, but it doesn't help me:
[2016-06-13 21:09:52 +0000] [15602] [INFO] Worker exiting (pid: 15602)
[2016-06-13 21:09:52 +0000] [15601] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/home/myapp/.local/share/virtualenvs/myapp/lib/python2.7/site-packages/gunicorn/arbiter.py", line 557, in spawn_worker
worker.init_process()
File "/home/myapp/.local/share/virtualenvs/myapp/lib/python2.7/site-packages/gunicorn/workers/base.py", line 126, in init_process
self.load_wsgi()
File "/home/myapp/.local/share/virtualenvs/myapp/lib/python2.7/site-packages/gunicorn/workers/base.py", line 136, in load_wsgi
self.wsgi = self.app.wsgi()
File "/home/myapp/.local/share/virtualenvs/myapp/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/home/myapp/.local/share/virtualenvs/myapp/lib/python2.7/site-packages/gunicorn/app/djangoapp.py", line 106, in load
return mod.make_wsgi_application()
File "/home/myapp/.local/share/virtualenvs/myapp/lib/python2.7/site-packages/gunicorn/app/django_wsgi.py", line 37, in make_wsgi_application
if get_validation_errors(s):
File "/home/myapp/.local/share/virtualenvs/myapp/lib/python2.7/site-packages/django/core/management/validation.py", line 35, in get_validation_errors
for (app_name, error) in get_app_errors().items():
File "/home/myapp/.local/share/virtualenvs/myapp/lib/python2.7/site-packages/django/db/models/loading.py", line 166, in get_app_errors
self._populate()
File "/home/myapp/.local/share/virtualenvs/myapp/lib/python2.7/site-packages/django/db/models/loading.py", line 72, in _populate
self.load_app(app_name, True)
File "/home/myapp/.local/share/virtualenvs/myapp/lib/python2.7/site-packages/django/db/models/loading.py", line 96, in load_app
models = import_module('.models', app_name)
File "/home/myapp/.local/share/virtualenvs/myapp/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/home/myapp/prod/apps/admin/models.py", line 5, in <module>
from django.contrib.auth.models import User
File "/home/myapp/.local/share/virtualenvs/myapp/lib/python2.7/site-packages/django/contrib/auth/models.py", line 18, in <module>
from django.contrib.auth.hashers import (
File "/home/myapp/.local/share/virtualenvs/myapp/lib/python2.7/site-packages/django/contrib/auth/hashers.py", line 8, in <module>
from django.test.signals import setting_changed
File "/home/myapp/.local/share/virtualenvs/myapp/lib/python2.7/site-packages/django/test/__init__.py", line 6, in <module>
from django.test.testcases import (TestCase, TransactionTestCase,
File "/home/myapp/.local/share/virtualenvs/myapp/lib/python2.7/site-packages/django/test/testcases.py", line 35, in <module>
from django.test import _doctest as doctest
File "/home/myapp/.local/share/virtualenvs/myapp/lib/python2.7/site-packages/django/test/_doctest.py", line 104, in <module>
import unittest, difflib, pdb, tempfile
File "/home/myapp/.local/share/virtualenvs/myapp/lib/python2.7/site-packages/pdbpp-0.7.2-py2.7.egg/pdb.py", line 38, in <module>
pdb = import_from_stdlib('pdb')
File "/home/myapp/.local/share/virtualenvs/myapp/lib/python2.7/site-packages/pdbpp-0.7.2-py2.7.egg/pdb.py", line 35, in import_from_stdlib
mydict = execfile(pyfile, result.__dict__)
File "/usr/local/lib/python2.7/pdb.py", line 3, in <module>
"""A Python debugger."""
File "/usr/local/lib/python2.7/pdb.py", line 3, in <module>
"""A Python debugger."""
File "/home/myapp/.local/share/virtualenvs/myapp/lib/python2.7/site-packages/gunicorn/debug.py", line 40, in __call__
line = src[lineno]
IndexError: tuple index out of range
[2016-06-13 21:09:52 +0000] [15601] [INFO] Worker exiting (pid: 15601)
As the problem appeared at the same time I installed the google api client, I
suspect pip upgraded some libs that are not compatible with my Gunicorn or
Django. I also checked the pip log, without success.
If I run my Django app with runserver I can't see any bug, so it seems very
much related to Gunicorn.
Is there a deeper way to debug Gunicorn?
Answer: After struggling for hours I finally found a clue in the pip log
(HOME/.pip/pip.log).
Installing google api client upgraded some of my previous libs like these:
Installing collected packages: pyopenssl, six, cryptography, idna, pyasn1, setuptools, enum34, ipaddress, cffi, pycparser
Found existing installation: pyOpenSSL 0.14
Uninstalling pyOpenSSL:
...
Found existing installation: six 1.9.0
Uninstalling six:
...
Found existing installation: cryptography 0.7.1
Uninstalling cryptography:
I also noticed some install warnings for cryptography. I decided to put back
the old libs (see the sketch after this list):
* pyOpenSSL 0.14
* six 1.9.0
* cryptography 0.7.1
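One way to pin them back (a sketch, assuming pip; the versions are exactly
the ones listed above):

    pip install pyOpenSSL==0.14 six==1.9.0 cryptography==0.7.1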
And it solved the problem. I don't know if the culprit is pyopenssl or
cryptography, but it is getting really tiresome to deal with all these
library problems.
Hope this will help someone next time.
|
Recognition faces on a video using python
Question: I have this code:
import cv2
import sys
# Get user supplied values
imagePath = sys.argv[1]
cascPath = sys.argv[2]
# Create the haar cascade
faceCascade = cv2.CascadeClassifier(cascPath)
# Read the image
image = cv2.imread(imagePath)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Detect faces in the image
faces = faceCascade.detectMultiScale(
gray,
scaleFactor=1.1,
minNeighbors=5,
minSize=(30, 30),
flags = cv2.cv.CV_HAAR_SCALE_IMAGE
)
print "Found {0} faces!".format(len(faces))
# Draw a rectangle around the faces
for (x, y, w, h) in faces:
cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2)
cv2.imshow("Faces found", image)
cv2.waitKey(0)
This code detects faces and draws a rectangle around them using my cam. I
want to take the detected face as an image and save it as /test/1.jpg, with
the same size as the rectangle, in order to compare it with saved photos and
get the person's name. How can this happen?
Answer: Here is a way to save the cropped face image (note that the slices
must be `y:y+h` and `x:x+w`; also, the variable holding the image in your
code is `image`, not `frame`):
    for (x, y, w, h) in faces:
        # Clamp to the image bounds in case a coordinate is negative
        x, y = max(x, 0), max(y, 0)
        # Crop the face region: rows y..y+h, columns x..x+w
        face = image[y:y+h, x:x+w]
        cv2.imwrite("/test/1.jpg", face)
|
Selecting a Face and Extruding a Cube in Blender Via Python API
Question: I am working on a project in which I will need to be able to extrude the faces
of a cube via the python API.
I have managed to extrude a plane via the API:
import bpy
bpy.data.objects['Cube'].select = True # Select the default Blender Cube
bpy.ops.object.delete() # Delete the selected objects (default blender Cube)
#Define vertices and faces
verts = [(0,0,0),(0,5,0),(5,5,0),(5,0,0)]
faces = [(0,1,2,3)]
# Define mesh and object variables
mymesh = bpy.data.meshes.new("Plane")
myobject = bpy.data.objects.new("Plane", mymesh)
#Set scene of object
bpy.context.scene.objects.link(myobject)
#Create mesh
mymesh.from_pydata(verts,[],faces)
mymesh.update(calc_edges=True)
bpy.context.scene.objects.active = bpy.context.scene.objects['Plane']
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT')
bpy.data.objects['Plane'].select = True # Select the default Blender Cube
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.extrude_region_move(TRANSFORM_OT_translate={"value":(0, 0, 2)})
I have built my Cube in a similar way, but my issue is that I can't work out
how to select a face to extrude via the Python API.
Please find my Cube Code <http://pastebin.com/PQtMcRAh>
All Help is Appreciated :)
Answer: I'm not too sure what you need here, but if you need this:
[](http://i.stack.imgur.com/8jJVM.png)
Then this is the code you need:
import bpy
import bmesh
bpy.data.objects['Cube'].select = True # Select the default Blender Cube
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.delete() # Delete the selected objects (default blender Cube)
#Define vertices, faces, edges
verts = [(0,0,0),(0,5,0),(5,5,0),(5,0,0),(0,0,5),(0,5,5),(5,5,5),(5,0,5)]
faces = [(0,1,2,3), (4,5,6,7), (0,4,5,1), (1,5,6,2), (2,6,7,3), (3,7,4,0)]
#Define mesh and object
mesh = bpy.data.meshes.new("Cube")
object = bpy.data.objects.new("Cube", mesh)
#Set location and scene of object
object.location = bpy.context.scene.cursor_location
bpy.context.scene.objects.link(object)
#Create mesh
mesh.from_pydata(verts,[],faces)
mesh.update(calc_edges=True)
bpy.data.objects['Cube'].select = True
bpy.context.scene.objects.active = bpy.context.scene.objects['Cube'] # Select the default Blender Cube
#Enter edit mode to extrude
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.normals_make_consistent(inside=False)
bm = bmesh.from_edit_mesh(mesh)
for face in bm.faces:
face.select = False
bm.faces[1].select = True
# Show the updates in the viewport
bmesh.update_edit_mesh(mesh, True)
bpy.ops.mesh.extrude_faces_move(MESH_OT_extrude_faces_indiv={"mirror":False}, TRANSFORM_OT_shrink_fatten={"value":-5, "use_even_offset":True, "mirror":False, "proportional":'DISABLED', "proportional_edit_falloff":'SMOOTH', "proportional_size":1, "snap":False, "snap_target":'CLOSEST', "snap_point":(0, 0, 0), "snap_align":False, "snap_normal":(0, 0, 0), "release_confirm":False})
It expands upon your code. To explain, after your code it:
1. Uses `bmesh` to modify the mesh (`bm = bmesh.from_edit_mesh(mesh)`)
2. Deselect all faces (`for face in bm.faces: face.select = False`)
3. Selects the top face (`bm.faces[1].select = True`)
4. Updates the viewport so you can see it (`bmesh.update_edit_mesh(mesh, True)`)
5. Extrudes the top face by 5 units (`bpy.ops.mesh.extrude_faces_move(MESH_OT_extrude_faces_indiv={"mirror":False}, TRANSFORM_OT_shrink_fatten={"value": -VALUE, "use_even_offset":True, "mirror":False, "proportional":'DISABLED', "proportional_edit_falloff":'SMOOTH', "proportional_size":1, "snap":False, "snap_target":'CLOSEST', "snap_point":(0, 0, 0), "snap_align":False, "snap_normal":(0, 0, 0), "release_confirm":False})`)
In order to change the number of units extruded, you can modify the `VALUE`
variable.
|
Skip variable number of iterations in Python for loop
Question: I have a list and a for loop such as these:
mylist = ['foo','foo','foo','bar','bar','hello']
for item in mylist:
cp = mylist.count(item)
print("You "+item+" are present in "+str(cp)+" copy(ies)")
Output:
You foo are present in 3 copy(ies)
You foo are present in 3 copy(ies)
You foo are present in 3 copy(ies)
You bar are present in 2 copy(ies)
You bar are present in 2 copy(ies)
You hello are present in 1 copy(ies)
**Expected output:**
You foo are present in 3 copy(ies)
You bar are present in 2 copy(ies)
You hello are present in 1 copy(ies)
The idea is thus to skip a variable number of iterations within the for loop,
using something like this script (**not working**):
for item in mylist:
cp = mylist.count(item)
print("You "+item+" are present in "+str(cp)+" copy(ies)")
continue(cp)
The script would thus "jump" `cp` elements ahead in the for loop on every
round and resume its work at the element `cp` positions later.
I know that you can use `continue` to skip multiple iterations (such as in
[this post](http://stackoverflow.com/questions/22295901/skip-multiple-
iterations-in-loop-python)) but I cannot figure out how to use `continue` to
skip a **variable number of iterations**.
Thanks for your answer! :)
* * *
Edit: similar items are always next to each other.
Answer: You could use a `Counter`:
from collections import Counter
mylist = ['foo','foo','foo','bar','bar','hello']
c = Counter(mylist)
for item, cp in c.items():
print("You "+item+" are present in "+str(cp)+" copy(ies)")
|
Access Issue on Jira/Atlassian with R
Question: I got a Atlassian/Jira account where projects are listed on. I would like to
import the various issues in order to make some extra analysis. I found a way
to connect to Atlassian/Jira and to import what I want on Python:
from jira import JIRA
import os
import sys
options = {'server': 'https://xxxxxxxx.atlassian.net'}
jira = JIRA(options, basic_auth=('admin_email', 'admin_password'))
issues_in_proj = jira.search_issues('project=project_ID')
It works very well, but I would like to do the same thing in R. Is that
possible? I found the RJIRA package, but there are three problems for me:
1. It's still on a dev version
2. I am unable to install it as the DESCRIPTION file is "malformed".
3. It's based on a jira server URL: "<https://JIRAServer:port/rest/api/>" and I have a xxxxx.atlassian.net URL
I also found out that there are curl queries :
curl -u username:password -X GET -H 'Content-Type: application/json'
"http://jiraServer/rest/api/2/search?jql=created%20>%3D%202015-11-18"
but again it is based on the "<https://JIRAServer:port/rest/api/>" form and,
in addition, I am using Windows.
Does someone have an idea?
Thank you!
Answer: The "<https://JIRAServer:port/rest/api/>" form is the Jira REST API
<https://docs.atlassian.com/jira/REST/latest/>
As a REST API, it just serves HTTP method calls and gives you data.
All Jira instances should expose the REST API; just point your browser to
your Jira domain like this:
<https://xxxxx.atlassian.net/rest/api/2/field>
and you will see, for example, all the fields you have access to.
This means you can use PHP, Java or a simple curl call from Linux to get your
Jira data. I have not used RJIRA, but if you don't want to use it, you can
still use R (which I have not used) and make an HTTP call to the REST API.
These two links on my blog might give you more insight:
<http://javamemento.blogspot.no/2016/06/rest-api-calls-with-resttemplate.html>
<http://javamemento.blogspot.no/2016/05/jira-confluence-3.html>
Good luck :)
|
Fuzzer for Python dictionaries
Question: I am currently looking for a fuzzer for Python dictionaries. I am already
aware of some fuzzing tools such as:
* [Burp](https://portswigger.net/burp/)
* [Peach](http://www.peachfuzzer.com/)
However, they seem a bit broader than what I am looking for. Actually, my
goal is to provide a Python dictionary to a given tool and obtain a new
dictionary very similar to the input one, but with some values changed.
For instance, providing
{k1: "aaa", k2: "bbb", k3: "ccc"}
I intend to obtain the following new dictionaries:
{k1: "aaj", k2: "bbb", k3: "ccc"}
{k1: "aaa", k2: "bbr", k3: "ccc"}
{k1: "aaa", k2: "bbb", k3: "ccp"}
...
Are you aware of this kind of tool? Any suggestion is welcome.
In the best-case scenario, this would be an open source tool.
EDIT1: Here is the code I have tried so far:
    def change_randomly(self, v):
        from random import randint
        import string
        # Note: string.letters is Python 2 only; use string.ascii_letters on Python 3
        new_v = list(v)
        pos_value = randint(0, len(v)-1)
        random_char = string.letters[randint(0, len(string.letters)-1)]
        new_v[pos_value] = str(random_char)
        return ''.join(new_v)
For sure, it may be improved, so I look forward to any thoughts regarding it.
Thanks!
Answer: Based on the comments to the question, why not simply write a
fixed-length, template-based fuzzer like this:
#! /usr/bin/env python
"""Minimal template based dict string value fuzzer."""
from __future__ import print_function
import random
import string
def random_string(rng, length, chars=string.printable):
"""A random string with given length."""
return ''.join(rng.choice(chars) for _ in range(length))
def dict_string_template_fuzz_gen(rng, dict_in):
"""Given a random number generator rng, and starting from
template dict_in expected to have only strings as values,
this generator function yields derived dicts with random
variations in the string values keeping the length of
those identical."""
while True:
yield dict((k, random_string(rng, len(v))) for k, v in dict_in.items())
def main():
"""Drive a test run of minimal template fuzz."""
k1, k2, k3 = 'ka', 'kb', 'kc'
template = {k1: "aaa", k2: "bbb", k3: "ccc"}
print("# Input(template):")
print(template)
rng = random.SystemRandom()
print("# Output(fuzz):")
for n, fuzz in enumerate(dict_string_template_fuzz_gen(rng,
template), start=0):
print(fuzz)
if n > 3:
break
if __name__ == '__main__':
main()
On the use case input it might yield this:
# Input(template):
{'kc': 'ccc', 'kb': 'bbb', 'ka': 'aaa'}
# Output(fuzz):
{'kc': '6HZ', 'kb': 'zoD', 'ka': '5>b'}
{'kc': '%<\r', 'kb': 'g>v', 'ka': 'Mo0'}
{'kc': 'Y $', 'kb': '4z.', 'ka': '0".'}
{'kc': '^M.', 'kb': 'QY1', 'ka': 'P0)'}
{'kc': 'FK4', 'kb': 'oZW', 'ka': 'G1q'}
So this should give the OP something to start with, as this might be a
bootstrapping problem where Python knowledge is only starting to build up.
I just hacked it in - PEP8 compliant though - and it should work on both
Python v2 and v3.
There are many open ends to work on, but this should get one going and help
evaluate whether a library or some simple enhanced coding might suffice. Only
the OP will know, but they are welcome to comment on this answer proposal or
update the question.
Hints: I nearly always use SystemRandom so you can parallelize more robustly.
There may be faster ways, but performance was not mentioned in the
specification. The prints are of course sprinkled in, as this is educational
at best. HTH
**Update**: Having read the OP's comment on changing only part of the strings
to preserve some similarity, one could replace the above fuzzer function
with e.g.:
def dict_string_template_fuzz_len_gen(rng, dict_in, f_len=1):
"""Given a random number generator rng, and starting from
template dict_in expected to have only strings as values,
this generator function yields derived dicts with random
variations in the string values keeping the length of
those identical.
Added as hack the f_len parameter that counts the
characters open to be fuzzed from the end of the string."""
        r_s = random_string  # shorten for line readability below
        while True:
            # Keep the prefix and fuzz only the last f_len characters,
            # preserving the overall string length
            yield dict(
                (k, v[:len(v) - f_len] + r_s(rng, f_len)) for k, v in dict_in.items())
and then have as sample output:
# Input(template):
{'kc': 'ccc', 'kb': 'bbb', 'ka': 'aaa'}
# Output(fuzz):
{'kc': 'cc\t', 'kb': 'bbd', 'ka': 'aa\\'}
{'kc': 'cc&', 'kb': 'bbt', 'ka': 'aa\\'}
{'kc': 'ccg', 'kb': 'bb_', 'ka': 'aaJ'}
{'kc': 'ccc', 'kb': 'bbv', 'ka': 'aau'}
{'kc': 'ccw', 'kb': 'bbs', 'ka': "aa'"}
This is the output produced when the new function is called instead of the
original one.
|
Python 2.6 : piping bash commands containing python variables(inside python script)
Question: I want to run the below bash command from my python script:
stat --printf='%U%G%a' /tmp/file1.csv &&md5sum /tmp/file1.csv |awk '{print $1}'
I have done it using `subprocess.Popen` as below:
Command=subprocess.Popen(["stat --printf='%U%G%a' file1.csv &&md5sum file1.csv|awk '{print $1}'"],stdout=subprocess.PIPE,shell=True)
But instead of hard coding the filename I need to pass a python variable. I
tried
filevar="/tmp/file.csv"
Command=subprocess.Popen(["stat --printf='%U%G%a' filevar &&md5sum filevar|awk '{print $1}'"],stdout=subprocess.PIPE,shell=True)
But the above code is not working.
I have been through all the answers related to `How to pass a python variable
to subprocess`
The best answer I got till now is [piping python variable value to bash script
(inside python script)](http://unix.stackexchange.com/a/227351)
Based on this I tried:
Command=subprocess.Popen(["stat","--printf='%U%G%a'",filevar],stdout=subprocess.PIPE)
Which works great. But when I try to include more commands, like `md5sum`, it
throws an error.
Command=subprocess.Popen(["stat","--printf='%U%G%a'",filevar,"&&","md5sum",filevar],stdout=subprocess.PIPE)
Please suggest how this could be done.
Answer: To support spaces and other shell meta-characters, use
[`pipes.quote()`](https://docs.python.org/2/library/pipes.html#pipes.quote):
#!/usr/bin/env python
import pipes
from subprocess import check_output
path = "/path/to/file.csv"
output = check_output("stat --printf='%U%G%a' {path} && md5sum {path}"
.format(path=pipes.quote(path))
+ "|awk '{print $1}'", shell=True)
To get `check_output()` on Python 2.6, see [What's a good equivalent to
python's subprocess.check_call that returns the contents of
stdout?](http://stackoverflow.com/a/2924457/4279)
Note: `pipes.quote()` is not bullet-proof. Don't pass `path` to the shell
unless it comes from a trusted source otherwise you risk an arbitrary shell
command being executed ([shell
injection](https://en.wikipedia.org/wiki/Code_injection#Shell_injection)).
As an alternative, you could [use `plumbum` to emulate the
pipeline](http://plumbum.readthedocs.io/en/latest/index.html):
#!/usr/bin/env python
from plumbum.cmd import stat, md5sum, awk # $ pip install plumbum
path = "/path/to/file.csv"
stat["--printf=%U%G%a", path]()
output = (md5sum[path] | awk['{print $1}'])()
See [How do I use subprocess.Popen to connect multiple processes by
pipes?](http://stackoverflow.com/q/295459/4279)
Depending on your case, it might make sense to implement the command in pure
Python without external commands.
|
Python: Multivariate Linear Regression: statsmodels.formula.api.ols()
Question: I was trying to find the dependence of total power from various factors like
temperature, humidity etc and had the following code:
from functools import reduce
dfs=[df1,df2,df4,df7]
df_final = reduce(lambda left,right:pd.merge(left,right,left_index=True,right_index=True), dfs)
df_final=df_final.drop(["0_x","0_y",0,4],1)
df_final.columns=["OT","HP","H","TP"]
# df_final.shape output is (8790, 4)
import statsmodels.formula.api as smf
lm = smf.ols(formula='TP ~ OT+HP+H',data=df_final).fit()
lm.summary()
Output:
ValueError Traceback (most recent call last)
<ipython-input-45-c09782ec7959> in <module>()
3 lm = smf.ols(formula='TP ~ OT+HP+H',data=df_final).fit()
4
----> 5 lm.summary()
C:\Anaconda3\lib\site-packages\statsmodels\regression\linear_model.py in summary(self, yname, xname, title, alpha)
1948 top_left.append(('Covariance Type:', [self.cov_type]))
1949
-> 1950 top_right = [('R-squared:', ["%#8.3f" % self.rsquared]),
1951 ('Adj. R-squared:', ["%#8.3f" % self.rsquared_adj]),
1952 ('F-statistic:', ["%#8.4g" % self.fvalue] ),
C:\Anaconda3\lib\site-packages\statsmodels\tools\decorators.py in __get__(self, obj, type)
92 if _cachedval is None:
93 # Call the "fget" function
---> 94 _cachedval = self.fget(obj)
95 # Set the attribute in obj
96 # print("Setting %s in cache to %s" % (name, _cachedval))
C:\Anaconda3\lib\site-packages\statsmodels\regression\linear_model.py in rsquared(self)
1179 def rsquared(self):
1180 if self.k_constant:
-> 1181 return 1 - self.ssr/self.centered_tss
1182 else:
1183 return 1 - self.ssr/self.uncentered_tss
C:\Anaconda3\lib\site-packages\statsmodels\tools\decorators.py in __get__(self, obj, type)
92 if _cachedval is None:
93 # Call the "fget" function
---> 94 _cachedval = self.fget(obj)
95 # Set the attribute in obj
96 # print("Setting %s in cache to %s" % (name, _cachedval))
C:\Anaconda3\lib\site-packages\statsmodels\regression\linear_model.py in ssr(self)
1151 def ssr(self):
1152 wresid = self.wresid
-> 1153 return np.dot(wresid, wresid)
1154
1155 @cache_readonly
ValueError: shapes (8790,4294) and (8790,4294) not aligned: 4294 (dim 1) != 8790 (dim 0)
I dont know why I am getting the shape mismatch here. I even tried it with
smaller datasets and was still getting a similar error. Thanks for reading
through. Any comments on how to share my ipython notebook effectively would
also be helpful.
Answer: One of my data columns was a string instead of a float and was thus
throwing this error.
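A quick way to spot and fix this kind of problem (a sketch using pandas,
assuming the `df_final` from the question):

    print(df_final.dtypes)  # any 'object' column likely holds strings
    # Convert all columns to numeric; unparseable values become NaN
    df_final = df_final.apply(pd.to_numeric, errors='coerce')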
|
AttributeError: 'bool' object has no attribute 'count'
Question: I am new to Python and I am writing this code below.
fileName = input("Enter the file name: ")
InputFile = open(fileName, 'r')
text=InputFile.readable()
sentences = text.count('.') + text.count('?') + \
text.count(':') + text.count(';') + \
text.count('!')
I can't get past the count function because of this error below. I have done
some research and tried importing some libraries but that didn't work. Can
someone guide me in the right direction? I feel so lost.
text.count(':') + text.count(';') + \
AttributeError: 'bool' object has no attribute 'count'
Answer: There is a buggy line in your code:
text = InputFile.readable()
which returns a `boolean`, and a boolean has no attribute `count`. It should
have been:
text = InputFile.read()
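For completeness, the corrected code in context (a sketch; the `with` block
is a stylistic choice that also closes the file):

    fileName = input("Enter the file name: ")
    with open(fileName, 'r') as InputFile:
        text = InputFile.read()
    sentences = text.count('.') + text.count('?') + \
                text.count(':') + text.count(';') + \
                text.count('!')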
|
pytest: how to make dedicated test directory
Question: I want next project structure:
|--folder/
| |--tests/
| |--project/
Lets write simple example:
|--test_pytest/
| |--tests/
| | |--test_sum.py
| |--t_pytest/
| | |--sum.py
| | |--__init__.py
sum.py:
def my_sum(a, b):
return a + b
test_sum.py:
from t_pytest.sum import my_sum
def test_my_sum():
assert my_sum(2, 2) == 5, "math still works"
Let's run it:
test_pytest$ py.test ./
========== test session starts ===========
platform linux -- Python 3.4.3, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: /home/step/test_pytest, inifile:
collected 0 items / 1 errors
================= ERRORS =================
___ ERROR collecting tests/test_sum.py ___
tests/test_sum.py:1: in <module>
from t_pytest import my_sum
E ImportError: No module named 't_pytest'
======== 1 error in 0.01 seconds =========
It can't see t_pytest module. It was made like httpie
<https://github.com/jkbrzt/httpie/>
<https://github.com/jkbrzt/httpie/blob/master/tests/test_errors.py>
Why? How can I correct it?
Answer: Thank you, jonrsharpe. Py.test does no magic; I need to make my
packages importable myself. Here is one possible solution:
    $ export PYTHONPATH="${PYTHONPATH}:[path to folder with my module]"
(PYTHONPATH is one of the sources for sys.path.)
To make this change permanent, I need to add this line to `~/.bashrc`.
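Another common option (not from the original answer; it assumes the project
has a setup.py) is to install the package in development mode, which makes it
importable from anywhere:

    $ pip install -e .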
|
Google TTS- Has anyone had any luck with it recently?
Question: I've been trying my hand at a "JARVIS" like system on my Raspberry Pi 2. I've
tinkered around with eSpeak, Festival and pico but I've found pico to be the
best out of these.
However, pico is very boring to listen to and is completely monotonous. Some
sites have posts from 2013-14, wherein users could use the TTS of Google
Translate. However, recently Google changed some of their policies so these
unofficial apps won't work, so now it asks for a CAPTCHA and an HTTP 503
error.
This is an example of code I found on a website that used to work but stopped
ever since Google's policy changes.
#!/usr/bin/python
import urllib, pycurl, os
def downloadFile(url, fileName):
fp = open(fileName, "wb")
curl = pycurl.Curl()
curl.setopt(pycurl.URL, url)
curl.setopt(pycurl.WRITEDATA, fp)
curl.perform()
curl.close()
fp.close()
def getGoogleSpeechURL(phrase):
googleTranslateURL = "http://translate.google.com/translate_tts? tl=en&"
parameters = {'q': phrase}
data = urllib.urlencode(parameters)
googleTranslateURL = "%s%s" % (googleTranslateURL,data)
return googleTranslateURL
def speakSpeechFromText(phrase):
googleSpeechURL = getGoogleSpeechURL(phrase)
downloadFile(googleSpeechURL,"tts.mp3")
os.system("mplayer tts.mp3 -af extrastereo=0 &")
speakSpeechFromText("testing, testing, 1 2 3.")
Has anybody had any luck with Google TTS?
Answer: You can install the gTTS package for Python. Using it, you can save
your text to an mp3 file and then play it. A simple example where I have used
gTTS to say "hello world":
    from gtts import gTTS
    import os

    tts = gTTS(text="hello world", lang="en")
    tts.save("hello.mp3")
    os.system("mpg321 hello.mp3")
|
Import Cython exposed class from another directory
Question: I have a Python project in which I want to make use of a `C++` class that I
exposed through Cython (really, I just need a specific instance of the class,
as the code below will demonstrate). Because there were a bunch of files
associated with the class I decided to put it in its own package.
In the `__init__.py` file of this package, I have what amounts to the
following code:
from foo import Foo # import the class
bar = Foo(some_parameters)
__all__ = ["bar"]
This works fine when I run `__init__.py` by itself. However, when I try to
access it from outside the directory:
from qux import bar # inside main.py in the parent directory
I get the error traced back to the _same_ `__init__.py`:
File "D:\path\to\qux\\__init__.py", line 2, in <module>
from foo import Foo
ImportError: No module named 'foo'
Recall that `foo` is a Cython file, not pure Python code.
The directory structure looks like this:
main_project\
main.py
(more supporting files here)
qux\
__init__.py
cy_foo.cpp
cy_foo.pyx
foo.cpp
foo.h
foo.cp35-win_amd64.pyd
(more supporting files here)
What's going on?
Answer: I don't think this has anything to do with `Cython` per se, rather, this issue
is due to the fact that when you execute `main.py` in the top level directory,
`Python` will execute `__init__.py` and search in the same directory failing
to locate the `foo` module inside `qux`.
As a solution, change the `import` statement in `__init__.py` to:
from qux.foo import Foo
If for some reason you still need to run `__init__.py` as the `__main__`
script, you can use the oh so familiar `if` clause to check the `__name__`:
if __name__ == "__main__":
from foo import Foo
else:
from qux.foo import Foo
bar = Foo("arguments")
__all__ = ["bar"]
Now, if run as the `__main__` module, `__init__.py` will find `foo`, if not,
it allows others to find it.
|
Combine multiple .csv files with python from different directory paths
Question: I am trying to combine multiple .csv files into one .csv file using
a pandas dataframe. The tricky part is that I need to grab multiple files
from multiple days. Please let me know if this does not make sense. As it
currently stands, I cannot figure out how to loop through the directories.
Could you offer some assistance?
import csv
import pandas as pd
import datetime as dt
import glob, os
startDate = 20160613
endDate = 20160614
dateRange = endDate - startDate
dateRange = dateRange + 1
todaysDateFilePath = startDate
for x in xrange(dateRange):
print startDate
startDate = startDate + 1
filePath = os.path.join(r"\\export\path", startDate, "preprocessed")
os.chdir(filePath)
interesting_files = glob.glob("trade" + "*.csv")
print interesting_files
df_list = []
for filename in sorted(interesting_files):
df_list.append(pd.read_csv(filename))
full_df = pd.concat(df_list)
saveFilepath = r"U:\Chris\Test_Daily_Fails"
fileList = []
full_df.to_csv(saveFilepath + '\\Files_For_IN' + "_0613_" + ".csv", index = False)
Answer: IIUC you can create a list `all_files` and, inside the loop, append
the output from `glob` to it:
    all_files = []
    for x in xrange(dateRange):
        print startDate
        startDate = startDate + 1
        # os.path.join needs strings, so convert the integer date
        filePath = os.path.join(r"\\export\path", str(startDate), "preprocessed")
        os.chdir(filePath)
        all_files = all_files + glob.glob("trade" + "*.csv")
    print all_files
Also you need to first append all values to `df_list` and only then call
`concat`, once (note that `concat` sits outside the loop):
    df_list = []
    for filename in sorted(all_files):
        df_list.append(pd.read_csv(filename))
    full_df = pd.concat(df_list)
|
Print time value in python
Question: I am trying to print the contents of my dictionary with actual time values
(for example, '6:00 AM') from my workbook. I get a different time format when
I print from 'TimeSheet' than I do 'From'. How can I get the actual time value
to print.
[](http://i.stack.imgur.com/XPHRA.jpg)
import openpyxl
wb = openpyxl.load_workbook('Sample.xlsx')
sheet = wb.get_sheet_by_name('Sheet2')
for i in range(1, 57):
From = sheet.cell(row=i, column=1).value
To = sheet.cell(row=i, column=2).value
Activity = sheet.cell(row=i, column=3).value
TimeSheet = {'From': From, 'To': To, 'Activity': Activity}
print(TimeSheet)
Current output:
{'Activity': 'ACTIVITY', 'From': 'FROM', 'To': 'TO'}
{'Activity': None, 'From': datetime.time(6, 0), 'To': datetime.time(6, 15)}
{'Activity': None, 'From': datetime.time(6, 15), 'To': datetime.time(6, 30)}
{'Activity': None, 'From': datetime.time(6, 30), 'To': datetime.time(6, 45)}
{'Activity': None, 'From': datetime.time(6, 45), 'To': datetime.time(7, 0)}
{'Activity': None, 'From': datetime.time(7, 0), 'To': datetime.time(7, 15)}
{'Activity': None, 'From': datetime.time(7, 15), 'To': datetime.time(7, 30)}
{'Activity': None, 'From': datetime.time(7, 30), 'To': datetime.time(7, 45)}
Answer: You're looking for `strftime` (string-format-time).
>>> from datetime import datetime
>>> datetime.now()
datetime.datetime(2016, 6, 14, 17, 24, 27, 735835)
>>> datetime.now().strftime("%Y %m %d")
'2016 06 14'
The [python
documentation](https://docs.python.org/2/library/datetime.html#strftime-
strptime-behavior) on the subject is pretty extensive, and there's also this
[convenient reference table](http://strftime.org/) for the format language.
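Applied to the question's loop, it would look roughly like this (a sketch;
`datetime.time` objects also support `strftime`, and the `hasattr` check
skips the header row's plain strings):

    From = sheet.cell(row=i, column=1).value
    if hasattr(From, 'strftime'):
        From = From.strftime('%I:%M %p')  # e.g. '06:00 AM'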
|
TemplateNotFound when using Airflow's PostgresOperator with Jinja templating and SQL
Question: When trying to use Airflow's templating capabilities (via Jinja2) with the
PostgresOperator, I've been unable to get things to render. It's quite
possible I'm doing something wrong, but I'm pretty lost as to what the issue
might be. Here's an example to reproduce the TemplateNotFound error I've been
getting:
**airflow.cfg**
airflow_home = /home/gregreda/airflow
dags_folder = /home/gregreda/airflow/dags
**relevant DAG and variables**
default_args = {
'owner': 'gregreda',
'start_date': datetime(2016, 6, 1),
'schedule_interval': None,
'depends_on_past': False,
'retries': 3,
'retry_delay': timedelta(minutes=5)
}
this_dag_path = '/home/gregreda/airflow/dags/example_csv_to_redshift'
dag = DAG(
dag_id='example_csv_to_redshift',
schedule_interval=None,
default_args=default_args
)
**/example_csv_to_redshift/csv_to_redshift.py**
copy_s3_to_redshift = PostgresOperator(
task_id='load_table',
sql=this_dag_path + '/copy_to_redshift.sql',
params=dict(
AWS_ACCESS_KEY_ID=Variable.get('AWS_ACCESS_KEY_ID'),
AWS_SECRET_ACCESS_KEY=Variable.get('AWS_SECRET_ACCESS_KEY')
),
postgres_conn_id='postgres_redshift',
autocommit=False,
dag=dag
)
**/example_csv_to_redshift/copy_to_redshift.sql**
COPY public.table_foobar FROM 's3://mybucket/test-data/import/foobar.csv'
CREDENTIALS 'aws_access_key_id={{ AWS_ACCESS_KEY_ID }};aws_secret_access_key={{ AWS_SECRET_ACCESS_KEY }}'
CSV
NULL as 'null'
IGNOREHEADER as 1;
Calling `airflow render example_csv_to_redshift load_table 2016-06-14` throws
the exception below. Note I'm running into this issue for another DAG as well,
which is why you see the path with `example_redshift_query_to_csv` mentioned.
[2016-06-14 21:24:57,484] {__init__.py:36} INFO - Using executor SequentialExecutor
[2016-06-14 21:24:57,565] {driver.py:120} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/Grammar.txt
[2016-06-14 21:24:57,596] {driver.py:120} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/PatternGrammar.txt
[2016-06-14 21:24:57,763] {models.py:154} INFO - Filling up the DagBag from /home/gregreda/airflow/dags
[2016-06-14 21:24:57,828] {models.py:2040} ERROR - /home/gregreda/airflow/dags/example_redshift_query_to_csv/export_query_to_s3.sql
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/airflow/models.py", line 2038, in resolve_template_files
setattr(self, attr, env.loader.get_source(env, content)[0])
File "/usr/local/lib/python2.7/dist-packages/jinja2/loaders.py", line 187, in get_source
raise TemplateNotFound(template)
TemplateNotFound: /home/gregreda/airflow/dags/example_redshift_query_to_csv/export_query_to_s3.sql
[2016-06-14 21:24:57,834] {models.py:2040} ERROR - /home/gregreda/airflow/dags/example_csv_to_redshift/copy_to_redshift.sql
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/airflow/models.py", line 2038, in resolve_template_files
setattr(self, attr, env.loader.get_source(env, content)[0])
File "/usr/local/lib/python2.7/dist-packages/jinja2/loaders.py", line 187, in get_source
raise TemplateNotFound(template)
TemplateNotFound: /home/gregreda/airflow/dags/example_csv_to_redshift/copy_to_redshift.sql
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 15, in <module>
args.func(args)
File "/usr/local/lib/python2.7/dist-packages/airflow/bin/cli.py", line 359, in render
ti.render_templates()
File "/usr/local/lib/python2.7/dist-packages/airflow/models.py", line 1409, in render_templates
rendered_content = rt(attr, content, jinja_context)
File "/usr/local/lib/python2.7/dist-packages/airflow/models.py", line 2017, in render_template
return jinja_env.get_template(content).render(**context)
File "/usr/local/lib/python2.7/dist-packages/jinja2/environment.py", line 812, in get_template
return self._load_template(name, self.make_globals(globals))
File "/usr/local/lib/python2.7/dist-packages/jinja2/environment.py", line 774, in _load_template
cache_key = self.loader.get_source(self, name)[1]
File "/usr/local/lib/python2.7/dist-packages/jinja2/loaders.py", line 187, in get_source
raise TemplateNotFound(template)
jinja2.exceptions.TemplateNotFound: /home/gregreda/airflow/dags/example_csv_to_redshift/copy_to_redshift.sql
Any ideas towards a fix are much appreciated.
Answer: Standard [PEBCAK error](https://en.wikipedia.org/wiki/User_error).
There was an issue specifying the path to the SQL template within the given
Airflow task, which needed to be relative.
copy_s3_to_redshift = PostgresOperator(
task_id='load_table',
sql='/copy_to_redshift.sql',
params=dict(
AWS_ACCESS_KEY_ID=Variable.get('AWS_ACCESS_KEY_ID'),
AWS_SECRET_ACCESS_KEY=Variable.get('AWS_SECRET_ACCESS_KEY')
),
postgres_conn_id='postgres_redshift',
autocommit=False,
dag=dag
)
Additionally, the SQL template needed to be changed slightly (note the
`params. ...` this time):
COPY public.pitches FROM 's3://mybucket/test-data/import/heyward.csv'
CREDENTIALS 'aws_access_key_id={{ params.AWS_ACCESS_KEY_ID }};aws_secret_access_key={{ params.AWS_SECRET_ACCESS_KEY }}'
CSV
NULL as 'null'
IGNOREHEADER as 1;
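An alternative worth noting (not part of the original fix, so treat this as a
sketch; `template_searchpath` is a DAG argument in Airflow) is to tell the
DAG where to look for templates, so template file names can stay short:

    dag = DAG(
        dag_id='example_csv_to_redshift',
        schedule_interval=None,
        default_args=default_args,
        template_searchpath=[this_dag_path]
    )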
|
Find numpy array values bounding an input value
Question: I have a value, say 2016, and a sorted numpy array: `[2005, 2010,
2015, 2020, 2025, 2030]`. What is the pythonic way to find the 2 values in
the array that bound 2016? In this case, the answer would be the array
[2015, 2020].
I am not sure how to do it other than with a loop, but I am hoping for a more
numpy-based solution.
\--EDIT:
you can assume that you will never get a value that is in the array, I
prefilter for that
Answer: You could do something like this:
In[1]: import numpy as np
In[2]: x = np.array([2005, 2010, 2015, 2020, 2025, 2030])
In[3]: x
Out[3]: array([2005, 2010, 2015, 2020, 2025, 2030])
In[4]: x[x > 2016].min()
Out[4]: 2020
In[5]: x[x < 2016].max()
Out[5]: 2015
In[6]: def bound(value, arr):
return arr[arr < value].max(), arr[arr > value].min()
In[7]: bound(2016, x)
Out[7]: (2015, 2020)
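Since the array is sorted, `np.searchsorted` avoids the two boolean scans (a
sketch; as the question states, the value is assumed to lie strictly inside
the array's range and not be present in it):

    In[8]: i = np.searchsorted(x, 2016)
    In[9]: x[i - 1], x[i]
    Out[9]: (2015, 2020)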
|
Python 3.5 install pyvenv
Question: I am trying to get a virtual environment for a repo that requires python 3.5.
I am using Debian, and from what I can tell, python 3.5 does not have an
aptitude package. After reading some posts, it was recommended to download 3.5
source code and compile it.
After running the make and install, python3.5 was installed to /usr/local/bin.
I added that to the $PATH variable.
Here is where I ran into problems. After I ran:
$ cd project-dir
$ pyvenv env
$ source env/bin/activate
$ pip install -r requirements.txt
I was getting issues with needing sudo to install the proper packages. I ran:
$ which pip
and it turns out that pip was still using the /usr/local/bin version of pip.
$ echo $PATH
returned
/home/me/project-dir/env/bin:/usr/local/bin:/usr/bin:/bin: ...
I am assuming that because the /usr/local path came after the virtual
environment's path in my PATH variable, it is using that version of pip
instead of my virtual environment's.
What would be the best way to run the correct version of pip within the
virtualenv? The two options I can think of are moving the binaries over to
/usr/bin or modifying the activate script in my virtual env to place the
virtualenv path after /usr/local.
Answer: **Option 1** You can upgrade pip in a virtual environment manually by
executing
pip install -U pip
**Option 2** A good method to upgrade pip is `python -m ensurepip --upgrade`,
which does indeed upgrade the pip version in the system (if it is lower than
the version bundled with ensurepip).
You are facing this problem because venv uses
[ensurepip](https://docs.python.org/dev/library/ensurepip.html#module-
ensurepip) to add pip into new environments:
> Unless the --without-pip option is given, ensurepip will be invoked to
> bootstrap pip into the virtual environment.
The ensurepip package won't download from the internet or grab files from
anywhere else, because all required components are already included in the
package. Doing so would add security flaws and is thus unsupported.
Ensurepip is not designed to give you the newest pip, but just "a" pip. To get
the newest one use the manual way at the beginning of this post.
To check the ensurepip version you can type into a Python console: `import
ensurepip; print(ensurepip.version())`
**More Findings for further reading:**
1. To upgrade ensurepip manually using files - <https://github.com/python/cpython/commit/f649e9c44631c07e707842c42747b651b986dcc4>
2. [What's the proper way to install pip, virtualenv, and distribute for Python?](http://stackoverflow.com/questions/4324558/whats-the-proper-way-to-install-pip-virtualenv-and-distribute-for-python)
3. [Comprehensive beginner's virtualenv tutorial?](http://stackoverflow.com/questions/5844869/comprehensive-beginners-virtualenv-tutorial/14717552#14717552)
|
Redirect log to file before process finished
Question: test.py (work):
import time
_, a, b = [1, 2, 3]
print a
print b
run the code: python test.py > test.log
you will get the log in test.log
test.py (not work):
import time
_, a, b = [1, 2, 3]
print a
print b
while True:
time.sleep(5)
But with this one you get nothing in the log.
How do I get the log before the program finishes, without the Python logging
module (just using the redirect '>')?
Answer: Python buffers `stdout` by default so the log gets written to disk in chunks.
You can turn off the buffering a few different ways, here are two. You can use
the `-u` option when you call the script, ie:
python -u test.py
You can use the environment variable `PYTHONUNBUFFERED`:
export PYTHONUNBUFFERED=true
python test.py
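A third option (not in the original answer) is to flush stdout explicitly in
the script itself:

    import sys, time
    _, a, b = [1, 2, 3]
    print a
    print b
    sys.stdout.flush()
    while True:
        time.sleep(5)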
|
Tensorflow ImportError on OS X
Question: TL;DR getting `ImportError: cannot import name pywrap_tensorflow ` when trying
to use TensorFlow on El Capitan.
Details: I followed the TensorFlow installation instructions for Mac OS X from
[here](https://www.tensorflow.org/versions/r0.9/get_started/os_setup.html#pip-
installation).
> Mac OS X, CPU only, Python 2.7:
>
>
> $ export
> TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/tensorflow-0.9.0rc0-py2-none-
> any.whl
>
> $ export
> TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/tensorflow-0.9.0rc0-py2-none-
> any.whl
>
> $ sudo pip install --upgrade $TF_BINARY_URL
>
These steps were successful.
So let's try it:
22:54:00/tensorflow $ipython
Python 2.7.11 (default, Jan 22 2016, 08:29:18)
Type "copyright", "credits" or "license" for more information.
IPython 4.2.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
[TerminalIPythonApp] WARNING | File not found: '/shared/.pythonstartup'
In [1]: import tensorflow as tf
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-41389fad42b5> in <module>()
----> 1 import tensorflow as tf
/git/tensorflow/tensorflow/__init__.py in <module>()
21 from __future__ import print_function
22
---> 23 from tensorflow.python import *
/git/tensorflow/tensorflow/python/__init__.py in <module>()
46 _default_dlopen_flags = sys.getdlopenflags()
47 sys.setdlopenflags(_default_dlopen_flags | ctypes.RTLD_GLOBAL)
---> 48 from tensorflow.python import pywrap_tensorflow
49 sys.setdlopenflags(_default_dlopen_flags)
50
ImportError: cannot import name pywrap_tensorflow
Answer: **TL;DR:** Don't run `ipython` (or `python`) from the root of the TensorFlow
git repository when you want to `import tensorflow`.
I answered a similar question
[here](http://stackoverflow.com/a/35963479/3574081). The easiest solution is
to `cd` out of the current directory (e.g. `cd ~`) before running `ipython`.
This will prevent Python from being confused by the `tensorflow` source
subdirectory in the current path. The `./tensorflow` directory in the git
repository doesn't contain all of the generated code (such as
`pywrap_tensorflow`) that is needed to run TensorFlow, but does contain a file
called `__init__.py`, and this confuses the Python interpreter.
|
Opencv python error
Question: I followed [this](http://www.samontab.com/web/2014/06/installing-
opencv-2-4-9-in-ubuntu-14-04-lts/#comment-72178) to install opencv. When I
tested the C and Java samples, they worked fine. But the python samples
resulted in a
import cv2
ImportError: No module named cv2
How can I fix this?
I am using python 2.7 and ubuntu 14.04.
Answer: OK. To use OpenCV from Python, there is one more step. Find the file
called cv2.so (cv2.pyd on Windows) in the OpenCV build directory and copy it
into the site-packages folder inside your Python installation directory.
|
how to convert this curl command to some Python codes that do the same thing?
Question: I am trying to download my data by using Fitbit API. I have figured out how to
obtain a certain day's data, which is good. And here is the curl command I
used:
curl -i -H "Authorization: Bearer (here goes a very long token)" https://api.fitbit.com/1/user/-/activities/heart/date/2016-6-14/1d/1sec/time/00:00/23:59.json >> heart_rate_20160614.json
However, I would like to collect hundreds of days' data and I don't want to do
that manually. So I think I could use a Python loop. I read some other topics
like [this one](http://stackoverflow.com/questions/3246021/python-equivalent-
of-curl-http-post) and [this
one](http://stackoverflow.com/questions/31507988/trying-to-convert-a-curl-
post-with-authorization-to-python) but still don't know how to 'translate'
these curl commands into python language by using urllib2.
I have tried this:
import urllib2
url = 'https://api.fitbit.com/1/user/-/activities/heart/date/today/1d/1sec/time/00:00/00:01.json'
data = '{Authorization: Bearer (here goes a very long token)}'
req = urllib2.Request(url,data)
f = urllib2.urlopen(req)
but got an error saying "HTTP Error 404: Not Found".
So what is the correct way to 'translate' this curl command to python
language? Thanks!
Answer: The problem comes from the construction of the `Request` object : by default,
the second parameter is the data that you want to pass along with the request.
Instead, you have to specify that you want to pass headers. This is the
correct way to do it :
import urllib2
url = 'https://api.fitbit.com/1/user/-/activities/heart/date/2016-6-14/1d/1sec/time/00:00/23:59.json'
hdr = {'Authorization': 'Bearer (token)'}
req = urllib2.Request(url,headers=hdr)
f = urllib2.urlopen(req)
This yields a 401 on my side, but should work with your token.
You can have more informations on urllib2 (and the Request class)
[here](https://docs.python.org/2/library/urllib2.html#urllib2.Request)
However, I suggest you take a look at [Requests](http://docs.python-
requests.org/en/master/), which is in my opinion easier to use, and very well
documented.
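For the multi-day loop from the question, a Requests-based version could look
roughly like this (a sketch; the token placeholder and the date range are
assumptions to fill in):

    import datetime
    import requests

    headers = {'Authorization': 'Bearer (here goes a very long token)'}
    start = datetime.date(2016, 6, 1)
    for offset in range(14):  # number of days to fetch
        day = start + datetime.timedelta(days=offset)
        url = ('https://api.fitbit.com/1/user/-/activities/heart/date/'
               '%s/1d/1sec/time/00:00/23:59.json' % day.isoformat())
        r = requests.get(url, headers=headers)
        with open('heart_rate_%s.json' % day.strftime('%Y%m%d'), 'w') as f:
            f.write(r.text)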
Hope it'll be helpful.
|
Pandas python updating values in a table based on preexisting values and conditions
Question: I have a dataframe:
import pandas as pd
df=pd.DataFrame({
'Player': ['John','John','John','Steve','Steve','Ted', 'James','Smitty','SmittyJr','DJ'],
'Name': ['A','B', 'A','B','B','C', 'A','D','D','D'],
'Group':['2A','1B','2A','2A','1B','1C','2A','1C','1C','2A'],
'Medal':['G', '?', '?', 'S', 'B','?','?','?','G','?']
})
df = df[['Player','Group', 'Name', 'Medal']]
print(df)
I want to update all the '?' in the column `Medal` with values for any of the
rows with matching `Name` & `Group` columns that are already filled in.
For example, since `row 0` is `Name:A, Group:2A, Medal:G`, the '?' on rows
`6` and `2` would become 'G'.
The results should look like:
res=pd.DataFrame({
'Player': ['John','John','John','Steve','Steve','Ted', 'James','Smitty','SmittyJr','DJ'],
'Name': ['A','B', 'A','B','B','C', 'A','D','D','D'],
'Group':['2A','1B','2A','2A','1B','1C','2A','1C','1C','2A'],
'Medal':['G', 'B', 'G', 'S', 'B','?','G','G','G','?']
})
res = res[['Player','Group', 'Name', 'Medal']]
print(res)
What is the most efficient way to do this?
Answer: Another solution with [`replace`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.Series.replace.html): replace `?` by the last
value (with [`iloc`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.Series.iloc.html)) of the sorted `Medal` (with
[`sort_values`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.Series.sort_values.html)) in each group:
    df['Medal'] = (df.groupby(['Group','Name'])['Medal']
                   .apply(lambda x: x.replace('?', x.sort_values().iloc[-1])))
print(df)
Player Group Name Medal
0 John 2A A G
1 John 1B B B
2 John 2A A G
3 Steve 2A B S
4 Steve 1B B B
5 Ted 1C C ?
6 James 2A A G
7 Smitty 1C D G
8 SmittyJr 1C D G
9 DJ 2A D ?
**Timings** :
In [81]: %timeit (df.groupby(['Group','Name'])['Medal'].apply(lambda x: x.replace('?', x.sort_values().iloc[-1])))
100 loops, best of 3: 4.13 ms per loop
In [82]: %timeit (df.replace('?', np.nan).groupby(['Name', 'Group']).apply(lambda df: df.ffill().bfill()).fillna('?'))
100 loops, best of 3: 11.3 ms per loop
|