Pull JSON data from internet and print in Python?
Question: I'm developing a Twitch chat bot in Python. However, I'm having some trouble
with a feature that has been requested a lot. I need to pull the
"gameserverid" and "gameextrainfo" data from a JSON file. [example
file](http://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/?key=605C90955CFDE6B1CB7D2EFF5FE824A0&steamids=76561198022404556)
import urllib2
import json
req = urllib2.Request("http://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/?key=605C90955CFDE6B1CB7D2EFF5FE824A0&steamids=76561198022404556")
opener = urllib2.build_opener()
f = opener.open(req)
json = json.loads(f.read())
currentlyPlaying = json[gameextrainfo]
gameServer = json[gameserverid]
This is the code I've got at the moment. I want to get it so that other
commands can print the variables "currentlyPlaying" and "gameServer" to the
IRC chat. However, when I do this, I get this in the console:
Traceback (most recent call last):
File "N:/_DEVELOPMENT/Atlassian Cloud/TwitchChatBot/Testing/grabplayerinfofromsteam.py", line 1, in <module>
import urllib2
ImportError: No module named 'urllib2'
Any ideas? I'm in a Windows environment, running on the latest version of
Python (Python 3.5.1)
Answer:
try:
import urllib.request as urllib2
except ImportError:
import urllib2
but don't use `urllib2`, use requests!
pip install requests
<http://docs.python-requests.org/en/master/>
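For reference, a minimal sketch of the same lookup using requests (key elided; note the JSON keys in the original snippet also need to be quoted strings, and the Steam API nests the players under `response` / `players`; the two game keys only exist while the user is in a game, hence `.get()`):
    import requests
    url = ("http://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/"
           "?key=YOUR_KEY&steamids=76561198022404556")
    data = requests.get(url).json()
    player = data["response"]["players"][0]
    currentlyPlaying = player.get("gameextrainfo")
    gameServer = player.get("gameserverid")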
|
Can't execute Python Pandas set_value
Question: Got a problem with Pandas in Python 3.5
I read a local csv using Pandas; the csv contains pure data, no header involved.
Then I assigned column names using
df= pd.read_csv(filePath, header=None)
df.columns=['XXX', 'XXX'] #for short, totally 11 cols
The csv has 11 columns; one of them is string, the others are integer.
Then I tried to replace the string column with integer values in a loop, cell by cell:
for i, row in df.iterrows():
print(i, row['Name'])
df.set_value(i, 'Name', 123)
Integer 123 is an example; not every cell under this column is 123. The print
function works well if I remove set_value, but with
df.set_value(i, 'Name', 123)
Then error info:
> Traceback (most recent call last):
>   File "D:/xxx/test.py", line 20, in <module>
>     df.set_value(i, 'Name', 233)
>   File "E:\Users\XXX\Anaconda3\lib\site-packages\pandas\core\frame.py", line 1862, in set_value
>     series = self._get_item_cache(col)
>   File "E:\Users\XXX\Anaconda3\lib\site-packages\pandas\core\generic.py", line 1351, in _get_item_cache
>     res = self._box_item_values(item, values)
>   File "E:\Users\XXX\Anaconda3\lib\site-packages\pandas\core\frame.py", line 2334, in _box_item_values
>     return self._constructor(values.T, columns=items, index=self.index)
> AttributeError: 'BlockManager' object has no attribute 'T'
But if I create a dataframe manually in code
df = pd.DataFrame(index=[0, 1, 2], columns=['x', 'y'])
df['x'] = 2
df['y'] = 'BBB'
print(df)
for i, row in df.iterrows():
df.set_value(i, 'y', 233)
print('\n')
print(df)
It worked. I am wondering whether there is something I am missing?
Thanks!
Answer: The cause of the original error:
Pandas DataFrame set_value(index, col, value) method will return the posted
obscure AttributeError: 'BlockManager' object has no attribute 'T' when the
dataframe being modified has duplicate column names.
The error can be reproduced using the code above by @Windtalker where the only
change made is that the column names are now both 'x' rather than 'x' and 'y'.
import pandas as pd
df = pd.DataFrame(index=[0, 1, 2], columns=['x', 'x'])
df['x'] = 2
df['y'] = 'BBB'
print(df)
for i, row in df.iterrows():
df.set_value(i, 'y', 233)
print('\n')
print(df)
Hopefully this helps someone else diagnose the same issue.
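As a side note, `set_value` was deprecated in later pandas releases in favour of the `.at` indexer, so the question's loop can also be written as (same behaviour assumed):
    for i, row in df.iterrows():
        df.at[i, 'Name'] = 123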
|
Python Read Fortran Binary File
Question: I'm trying to read the binary file output by the Fortran code below, but the
results are not the same as in the output file. Can anyone help me? Thanks in
advance.
Fortran 77 code:
program test
implicit none
integer i,j,k,l
real*4 pcp(2,3,4)
open(10, file='pcp.bin', form='unformatted')
l = 0
do i=1,2
do j=1,2
do k=1,2
print*,k+l*2
pcp(i,j,k)=k+l*2
l = l + 1
enddo
enddo
enddo
do k=1,4
write(10)pcp(:,:,k)
enddo
close(10)
stop
end
I'm trying to use the Python code below:
from scipy.io import FortranFile
f = FortranFile('pcp.bin', 'r')
a = f.read_reals(dtype=float)
print(a)
Answer: Because you are writing `real*4` data to a sequential file, simply try
replacing `dtype=float` with `dtype='float32'` (or `dtype=np.float32`) in
read_reals():
>>> from scipy.io import FortranFile
>>> f = FortranFile( 'pcp.bin', 'r' )
>>> print( f.read_reals( dtype='float32' ) )
[ 1. 9. 5. 13. 0. 0.]
>>> print( f.read_reals( dtype='float32' ) )
[ 4. 12. 8. 16. 0. 0.]
>>> print( f.read_reals( dtype='float32' ) )
[ 0. 0. 0. 0. 0. 0.]
>>> print( f.read_reals( dtype='float32' ) )
[ 0. 0. 0. 0. 0. 0.]
The obtained data correspond to each `pcp(:,:,k)` in Fortran, as verified by
do k=1,4
print "(6f8.3)", pcp(:,:,k)
enddo
which gives (with `pcp` initialized to zero)
1.0 9.0 5.0 13.0 0.0 0.0
4.0 12.0 8.0 16.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0
But because `>>> help( FortranFile )` says
> An example of an unformatted sequential file in Fortran would be written
> as::
>
> `OPEN(1, FILE=myfilename, FORM='unformatted')`
>
> `WRITE(1) myvariable`
>
> Since this is a non-standard file format, whose contents depend on the
> compiler and the endianness of the machine, caution is advised. Files from
> gfortran 4.8.0 and gfortran 4.1.2 on x86_64 are known to work.
>
> Consider using Fortran direct-access files or files from the newer Stream
> I/O, which can be easily read by `numpy.fromfile`.
it may be simpler to use `numpy.fromfile()` depending on cases (as shown in
StanleyR's answer).
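For completeness, a sketch of reading the same file with `numpy.fromfile()`, assuming gfortran's default little-endian 4-byte record markers around each sequential record:
    import numpy as np
    records = []
    with open('pcp.bin', 'rb') as f:
        while True:
            head = np.fromfile(f, dtype=np.int32, count=1)  # leading marker: record length in bytes
            if head.size == 0:  # end of file
                break
            n = head[0] // 4  # number of float32 values in this record
            records.append(np.fromfile(f, dtype=np.float32, count=n))
            np.fromfile(f, dtype=np.int32, count=1)  # skip trailing marker
    for rec in records:
        print(rec)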
|
Pythonic way to access Flask request arguments
Question: In my Flask app I am setting a number of local variables that have come in via
an API call, using the
from flask import request
.
.
submission_id = request.args.get('submission_id')
grader = request.args.get('grader')
grading_factor = float(request.args.get('grading_factor'))
answer_key = request.args.get('answer_key')
submission_key = request.args.get('submission_key')
What is a less repetitive or otherwise more Pythonic way of setting these 5
variables?
Answer:
from flask import Flask, request
app = Flask(__name__)
class DotDict(object):
def __init__(self, inner):
self._inner = inner
def __getattr__(self, item):
return self._inner.get(item)
def get(self, item, default=None):
return self._inner.get(item, default)
class LazyAttribute(object):
def __init__(self, obj, attr):
self.obj = obj
self.attr = attr
def __getattribute__(self, item):
return getattr(getattr(object.__getattribute__(self, 'obj'),
object.__getattribute__(self, 'attr')),
item)
rargs = DotDict(LazyAttribute(request, 'args'))
@app.route("/")
def hello():
print rargs.a, rargs.c, rargs.get('d', 3)
return "Hello World!"
if __name__ == "__main__":
app.run(debug=True)
Accessing <http://localhost:5000/?a=1> prints `1 None 3` in the terminal.
The `LazyAttribute` class is because calling just `DotDict(request.args)`
outside of a request context throws an error. The alternative is to make a
function:
def rargs():
return DotDict(request.args)
but I wanted to make usage as smooth as possible.
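For comparison, if the helper classes feel heavyweight, a plain dict comprehension also removes most of the repetition (a hedged alternative, not this answer's approach; the float conversion still needs its own line):
    names = ('submission_id', 'grader', 'answer_key', 'submission_key')
    params = {name: request.args.get(name) for name in names}
    grading_factor = float(request.args.get('grading_factor'))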
|
python array form input
Question: In Python, the input is in the form `[[1,2,3],[2,3,4]]`,
not in plain integer form like `1 2 3 2 3 4`,
but with brackets.
In code:
input = []
inputfunc()  # get input array
print input
If you input
[[1,2,3],[2,3,4]]
the Python result will print
[[1,2,3],[2,3,4]]
So can I save this to a variable, or must I use string parsing?
Answer: You can use
[`ast.literal_eval()`](https://docs.python.org/2/library/ast.html?highlight=literal_eval#ast.literal_eval)
to convert that string into a python object like this:
>>> import ast
>>> ast.literal_eval('[[1,2,3],[2,3,4]]')
[[1, 2, 3], [2, 3, 4]]
>>> L=ast.literal_eval('[[1,2,3],[2,3,4]]')
>>> type(L)
<class 'list'>
>>> L
[[1, 2, 3], [2, 3, 4]]
>>>
`ast` stands for Abstract Syntax Tree. `literal_eval()` is much safer than
`eval()`.
Quoting from official doc:
> ast.literal_eval(node_or_string) Safely evaluate an expression node or a
> Unicode or Latin-1 encoded string containing a Python literal or container
> display. The string or node provided may only consist of the following
> Python literal structures: strings, numbers, tuples, lists, dicts, booleans,
> and None.
>
> This can be used for safely evaluating strings containing Python values from
> untrusted sources without the need to parse the values oneself. It is not
> capable of evaluating arbitrarily complex expressions, for example involving
> operators or indexing.
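Tying this back to the question, the input can be read and parsed in one step (a sketch; `raw_input` is Python 2, use plain `input` on Python 3):
    import ast
    data = ast.literal_eval(raw_input())  # user types [[1,2,3],[2,3,4]]
    print data[0][2]  # prints 3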
|
How to extract the line in the text file that contains matching phrases?
Question: I have a text corpus which contains many lines of sentences. I was hoping to
extract the lines that contains the key words.
I wrote a simple Python script but I get no output at all.
My python Script:
corpus = []
with open('CatList2.text') as f:
for line in f:
corpus.append(line.rstrip())
with open('Test.text') as f1:
with open('Text', 'a') as f2:
for line in f1.readlines():
for phrase in corpus:
if phrase in line:
f2.write(line)
The following is an example of wiki.en.text:
Alluvium (from the Latin, alluvius, from alluere, "to wash against") is loose, unconsolidated (not cemented together into a solid rock) soil or sediments, which has been eroded, reshaped by water in some form, and redeposited in a non-marine setting
Geoarchaeology is a multi-disciplinary approach which uses the techniques and subject matter of geography, geology and other Earth sciences to examine topics which inform archaeological knowledge and thought. Geoarchaeologists study the natural physical processes that affect archaeological sites such as geomorphology, the formation of sites through geological processes and the effects on buried sites and artifacts post-deposition. Geoarchaeologists' work frequently involves studying soil and sediments as well as other geographical concepts to contribute an archaeological study. Geoarchaeologists may also use computer cartography, geographic information systems (GIS) and digital elevation models (DEM) in combination with disciplines from human and social sciences and earth sciences.[1] Geoarchaeology is important to society because it informs archaeologists about the geomorphology of the soil, sediments and the rocks on the buried sites and artifacts they're researching on. By doing this we are able locate ancient cities and artifacts and estimate by the quality of soil how "prehistoric" they really are.
A Geopark is a unified area that advances the protection and use of geological heritage in a sustainable way, and promotes the economic well-being of the people who live there.[1] There are Global Geoparks and National Geoparks.
Spatial analysis or spatial statistics includes any of the formal techniques which study entities using their topological, geometric, or geographic properties. Spatial analysis includes a variety of techniques, many still in their early development, using different analytic approaches and applied in fields as diverse as astronomy, with its studies of the placement of galaxies in the cosmos, to chip fabrication engineering, with its use of "place and route" algorithms to build complex wiring structures. In a more restricted sense, spatial analysis is the technique applied to structures at the human scale, most notably in the analysis of geographic data.
Spatial mismatch is the mismatch between where low-income households reside and suitable job opportunities. In its original formulation (see below) and in subsequent research, it has mostly been understood as a phenomenon affecting African-Americans, as a result of residential segregation, economic restructuring, and the suburbanization of employment.
Distance decay is a geographical term which describes the effect of distance on cultural or spatial interactions. The distance decay effect states that the interaction between two locales declines as the distance between them increases. Once the distance is outside of the two locales' activity space, their interactions begin to decrease.
Cold is the presence of low temperature, especially in the atmosphere.[4] In common usage, cold is often a subjective perception. A lower bound to temperature is absolute zero, defined as 0.00 °K on the Kelvin scale, an absolute thermodynamic temperature scale. This corresponds to −273.15 °C on the Celsius scale, −459.67 °F on the Fahrenheit scale, and 0.00 °R on the Rankine scale.
My CatList which contains my search phrases is as follows:
Alluvium
Anatopism
The result I am hoping for is :
Alluvium (from the Latin, alluvius, from alluere, "to wash against") is loose, unconsolidated (not cemented together into a solid rock) soil or sediments, which has been eroded, reshaped by water in some form, and redeposited in a non-marine setting
As only Alluvium, which is contained in CatList, also appears in wiki.en.text.
I have no idea why I am not able to get the result. Please help me. Thank you.
Weirdly, I got this error:
Traceback (most recent call last):
File "JRTry.py", line 2, in <module>
phrases = open("Test.text").readLines()
AttributeError: 'file' object has no attribute 'readLines'
I read [Error while using '<file>.readlines()'
function](http://stackoverflow.com/questions/21544486/error-while-using-file-readlines-function)
and I have placed `for line in f1.readlines():`, yet it still gives me an
error. Any idea?
Answer: The problem is that when you read the keywords from the file, you are also
getting the newlines.
You can use `rstrip` to remove it (see [this SO
post](http://stackoverflow.com/questions/275018/how-can-i-remove-chomp-a-
newline-in-python)).
Python interpreter:
>>> a = []
>>> with open("test") as f:
...     for line in f:
...         a.append(line)
...
>>> a
['foo\n'] #see that there's a newline?
Instead, use
a.append(line.rstrip()) #this will remove the newline
|
python stacktrace at firefoxdriver.prototype.findelement internal
Question: I'm using `selenium` with `firefox` to load a webpage. The page uses `ajax` to
load new content on clicking the **Display More Results** button.
However when I try to find this button and simulate a click, it gives the
following `Stacktrace` error. Can anyone tell me what I'm doing wrong ?
**Here's my code:**
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
driver = webdriver.Firefox()
driver.get("http://dir.indiamart.com/search.mp?ss=plastic+machinery")
try:
elem = elem = WebDriverWait(driver, 20).until(lambda driver : driver.find_element_by_id('scroll2'))
elem.click()
print "clicked"
except Exception as e:
print e
driver.close()
**Here's the error I'm getting:**
Message:
Stacktrace:
at FirefoxDriver.prototype.findElementInternal_ (file:///tmp/tmpAQcOR7/extensions/[email protected]/components/driver-component.js:10770)
at FirefoxDriver.prototype.findElement (file:///tmp/tmpAQcOR7/extensions/[email protected]/components/driver-component.js:10779)
at DelayedCommand.prototype.executeInternal_/h (file:///tmp/tmpAQcOR7/extensions/[email protected]/components/command-processor.js:12661)
at DelayedCommand.prototype.executeInternal_ (file:///tmp/tmpAQcOR7/extensions/[email protected]/components/command-processor.js:12666)
at DelayedCommand.prototype.execute/< (file:///tmp/tmpAQcOR7/extensions/[email protected]/components/command-processor.js:12608)
Answer: I'm not sure what the problem is because there is no error message. However,
this syntax looks rather strange:
elem = elem = WebDriverWait(...
Also, I noticed that when I went to that page the div with the id "scroll2"
was not visible. The visible one was "scroll4". You'll probably have to write
some other code to look for the visible div with "scroll" in it.
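Once you do target the visible div, an explicit wait for clickability tends to be more robust than waiting for mere presence (a sketch, assuming "scroll4" is the id of the visible button):
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.common.by import By
    elem = WebDriverWait(driver, 20).until(
        EC.element_to_be_clickable((By.ID, "scroll4")))
    elem.click()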
|
Why is this SPARQL query missing so many results?
Question: (First off, my apologies as this is a blatant cross-post. I thought
opendata.SE would be the place for this, but it's gotten barely any views
there and it appears to not be a very active site in general, so I figure I
ought to try it here as it's programming-related.)
I'm trying to get a list of major cities in the world: their name, population,
and location. I found what looked like a good query on
[Wikidata](https://query.wikidata.org/#%23Largest%20cities%20of%20the%20world%0A%23defaultView%3ABubbleChart%0ASELECT%20DISTINCT%20%3FcityLabel%20%3Fpopulation%20%3Fgps%20WHERE%20%7B%0A%20%20%3Fcity%20%28wdt%3AP31%2Fwdt%3AP279*%29%20wd%3AQ515.%0A%20%20%3Fcity%20wdt%3AP1082%20%3Fpopulation.%0A%20%20%3Fcity%20wdt%3AP625%20%3Fgps.%0A%20%20FILTER%20%28%3Fpopulation%20%3E%3D%20500000%29%20.%0A%20%20SERVICE%20wikibase%3Alabel%20%7B%20bd%3AserviceParam%20wikibase%3Alanguage%20%22en%22.%20%7D%0A%7D%0AORDER%20BY%20DESC%28%3Fpopulation%29),
slightly tweaking one of their built-in query examples:
SELECT DISTINCT ?cityLabel ?population ?gps WHERE {
?city (wdt:P31/wdt:P279*) wd:Q515.
?city wdt:P1082 ?population.
?city wdt:P625 ?gps.
FILTER (?population >= 500000) .
SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
ORDER BY DESC(?population)
The results, at first glance, appear to be good, but it's missing a ton of
important cities. For example, San Francisco (population 800,000+) and Seattle
(population 650,000+) are not in the list, when I specifically asked for all
cities with a population greater than 500,000.
Is there something wrong with my query? If not, there must be something wrong
with the data Wikidata is using. Either way, how can I get a valid data set,
with an API I can query from a Python script? (I've got the script all working
for this; I'm just not getting back valid data.)
from SPARQLWrapper import SPARQLWrapper, JSON
from geopy.distance import great_circle
def parseCoords(gps):
base = gps[6:-1]
coords=base.split()
return (float(coords[1]), float(coords[0]))
sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""SELECT DISTINCT ?cityLabel ?population ?gps WHERE {
?city (wdt:P31/wdt:P279*) wd:Q515.
?city wdt:P1082 ?population.
?city wdt:P625 ?gps.
FILTER (?population >= 500000) .
SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
ORDER BY DESC(?population)""")
queryResults = sparql.query().convert()
cities = [(city["cityLabel"]["value"], int(city["population"]["value"]), parseCoords(city["gps"]["value"])) for city in queryResults["results"]["bindings"]]
print (cities)
Answer: The population of Seattle is simply not in this database.
If you execute:
#Largest cities of the world
#defaultView:BubbleChart
SELECT * WHERE {
wd:Q5083 wdt:P1082 ?population.
SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
You get zero results. Although the instance `wd:Q5083` (Seattle) exists, it
does not have a predicate `wdt:P1082` (population).
|
Way to mask functions on Python Object
Question: I have a class that inherits from OrderedDict, but I don't know if this is the
right way to accomplish what I need.
I would like the class to have the dual behaviour of the JavaScript '.' notation
like `obj.<property>`, and I would also like users to be able to access the
class properties like `obj['myproperty']`, but I want to hide all the keys() and
get() functions. The inheritance model is providing good functionality, but it is
cluttering up the object with additional methods that are not really needed.
Is it possible to get the dictionary behavior without all the other functions
coming along?
For this discussion, let's assume my class is this:
from collections import OrderedDict
from six.moves.urllib import request
import json
class MyClass(OrderedDict):
def __init__(self, url):
super(MyClass, self).__init__(url=url)
self._url = url
self.init()
def init(self):
# call the url and load the json
req = request.Request(self._url)
res = json.loads(request.urlopen(req).read())
for k,v in res.items():
setattr(self, k, v)
self.update(res)
self.__dict__.update(res)
if __name__ == "__main__":
url = "https://sampleserver5.arcgisonline.com/ArcGIS/rest/services?f=json"
props = MyClass(url=url)
props.currentVersion
Is there another way to approach this dilemma?
Thanks
Answer: If all you want is `x['a']` to work the same way as `x.a` without any other
functionality of dictionaries, then don't inherit from `dict` or
`OrderedDict`, instead just forward key/indice operations ([`__getitem__`,
`__setitem__` and
`__delitem__`](https://docs.python.org/3/reference/datamodel.html#object.__getitem__))
to [attribute
operations](https://docs.python.org/3/library/functions.html#getattr):
class MyClass(object):
def __getitem__(self,key):
try: #change the error to a KeyError if the attribute doesn't exist
return getattr(self,key)
except AttributeError:
pass
raise KeyError(key)
def __setitem__(self,key,value):
setattr(self,key,value)
def __delitem__(self,key):
delattr(self,key)
As an added bonus, because these special methods don't check the instance
variables for the method name it doesn't break if you use the same names:
x = MyClass()
x['__getitem__'] = 1
print(x.__getitem__) #still works
print(x["__getattr__"]) #still works
The only time it will break is when trying to use `__dict__` since that is
where the instance variables are actually stored:
>>> x = MyClass()
>>> x.a = 4
>>> x.__dict__ = 1 #stops you right away
Traceback (most recent call last):
File "<pyshell#36>", line 1, in <module>
x.__dict__ = 1
TypeError: __dict__ must be set to a dictionary, not a 'int'
>>> x.__dict__ = {} #this is legal but removes all the previously stored values!
>>> x.a
Traceback (most recent call last):
File "<pyshell#38>", line 1, in <module>
x.a
AttributeError: 'MyClass' object has no attribute 'a'
In addition you can still use the normal dictionary methods by using `vars()`:
x = MyClass()
x.a = 4
x['b'] = 6
for k,v in vars(x).items():
print((k,v))
#output
('b', 6)
('a', 4)
>>> vars(x)
{'b': 6, 'a': 4}
|
Buildozer Android Kivy Fail on importing requirement
Question: I have tried to import a Python module in a Kivy app for Android, but
whenever I add a requirement in buildozer.spec the build fails, outputting
this:
# Run 'pip install --download-cache=/home/arctia/.buildozer/cache --target=/home/arctia/Scrivania/Kivy/Programs/Domo_Skills/.buildozer/applibs openpyxl'
no such option: --download-cache
# Command failed: pip install --download-cache=/home/arctia/.buildozer/cache --target=/home/arctia/Scrivania/Kivy/Programs/Domo_Skills/.buildozer/applibs openpyxl
# Buildozer failed to execute the last command
Answer: Make sure your buildozer and python-for-android are up to date, this is fixed
in the master branches. For buildozer, you should be able to get the latest
stable release. For python-for-android, delete the .buildozer directory in
your build dir and run buildozer again, it should fetch the newest version.
|
Blender to Three.js (.JSON) and OBJ to .JS texture mapping issues
Question: I'm stumped. I've looked at all related threads I could find, none quite
describe my issue.
I'm trying to get either a .JSON or .JS version of a model to export and
texture map the same as an .OBJ version when rendered via WebGL. Below is an
image illustrative of what's happening:

**Setup:** Win7, Blender 2.77, Three.js (downloaded 5/22/16), Three.js (.json)
Blender Exporter, Python OBJ > .JS converter script (convert_obj_three.py)
python convert_obj_three.py -i xxx.obj -o xxx.js
**Steps:**
* WoW model exported from WoW ModelViewer as .OBJ
* WoW model imported to Blender
* Blender Edit mode to "Remove Doubles"
* Blender Export to .OBJ
* Three.js THREE.OBJLoader() script to test on Web
Results in "**GOOD** " example (left case in image)
* Blender Export to .JSON (trying every config imaginable and then some)
* Three.js THREE.JSONLoader() script to test on Web
Results in "**BAD** " example (middle case in image)
* Blender Export to .OBJ
* CMD line Python script to do conversion from OBJ to .JS
* Three.js THREE.JSONLoader() script to test on Web
Results in "**WORSE** " example (right case in image). For some reason in
this last instance not just the Hair texture, but portions of skin of the
model's face get wonky as well.
You'll likely notice the light rendering varies a bit in each case as well.
I look at the .JSON and .JS files and have no idea how to manually edit if
that's ultimately what's needed.
Perhaps certain models simply won't texture properly?
Answer: Your UV's for the head of your model are outside the range [ 0, 1 ].
When you load your JSON model, set
texture.wrapS = texture.wrapT = THREE.RepeatWrapping;
three.js r.77
|
Python Regex for finding the name and the following numbers multiple times
Question: I need to parse an output text file that has a lot of information,
specifically about weighing and calibrating masses. There is a data table in
this text file that has the name of the mass being tested, its nominal weight,
density, and other properties of the mass.
[Here's](http://i.stack.imgur.com/r7nxF.png) a picture of what this part of
the text file looks like. I want to have five capture groups, for each column.
Right now, I have
tablePattern = r'\[mg\]\s*(.{4,15})\s+(\d*)\s*(\d*)\s*(\d*)\s*(\d*)'
tableMatches = re.findall(tablePattern, text)
However, this gives me matches I don't want, and it doesn't return all the
capture groups I want. Any help would be appreciated!
Answer: You will need to loop through your file and process each line of input,
but this should work. Let me know if it does not, along with some real data.txt.
You can add more groups to this and make them optional by placing a `?` after
the group, to handle additional columns of data.
import re
p = re.compile('^(.*)\s+(-\d+.\d+|\d+.\d+)\s+(-\d+.\d+|\d+.\d+)\s+(-\d+.\d+|\d+.\d+)\s+(-\d+.\d+|\d+.\d+)$')
m = p.match( 'b 100g 1dot 100.0000 5.63334 0.0000002 -339.3333' )
if m:
print('Weight Being Tested: ', m.group(1))
print('Nominal Value: ', m.group(2))
print('Density: ', m.group(3))
print('Expansion: ', m.group(4))
print('Correction: ', m.group(5))
else:
print('No match')
# Weight Being Tested: b 100g 1dot
# Nominal Value: 100.0000
# Density: 5.63334
# Expansion: 0.0000002
# Correction: -339.3333
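A slightly tightened variant of the same pattern, with escaped decimal points and named groups (the column names here are assumptions based on the prints above):
    import re
    p = re.compile(r'^(?P<name>.+?)\s+(?P<nominal>-?\d+\.\d+)\s+(?P<density>-?\d+\.\d+)\s+(?P<expansion>-?\d+\.\d+)\s+(?P<correction>-?\d+\.\d+)$')
    m = p.match('b 100g 1dot 100.0000 5.63334 0.0000002 -339.3333')
    if m:
        print('Correction: ', m.group('correction'))  # -339.3333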
|
Interpreter written in Python not working properly
Question: No, I'm not done with my interpreter questions yet. Sorry.
So I've written another interpreter for a programming language I made up in
Python. The programming language has an interesting concept: you only get some
basic commands. To "get" more complex commands, you have to write functions
and combine the simple commands given to you.
Anyways, I've written a simple interpreter for the language.
The problem is: the simple commands work, but defining functions doesn't.
Here's the interpreter (I've removed code that isn't useful for solving the
problem).
class Interpreter:
acc = 0
defined = {}
def read(self, cmd):
tokens = cmd.replace("\n", ";").split(";")
for token in tokens:
self.parse(token)
def parse(self, cmd):
if cmd.startswith("def(") and cmd.endswith(")"):
cmd = cmd[2:]
cmd = cmd[:-1]
cmd = cmd.split("|")
self.defined[cmd[0]] = cmd[1]
elif cmd in self.defined:
self.read(self.defined[cmd])
elif cmd == "1":
self.acc += 1
elif cmd == "2":
print(self.acc)
elif cmd == "i":
self.acc = int(input(">> "))
i = Interpreter()
while 1:
i.read(input("> "))
You can define functions using the syntax `def(name|code)`. For example,
`def(cat|i;2)`.
Now, onto the problem I'm having. It's impossible to define functions. They
just don't work. It doesn't throw an error or anything. It does nothing.
Here's the code I tried to use:
def(c|i;2)
c
It should get input and display it, but instead, it doesn't do anything.
This works, though:
i;2
It seems to me like the problem is somewhere in the `if
cmd.startswith("def(")` if statement, because everything except the functions
work.
Answer: When solving these kinds of problems it is imperative to be able to see what
is going on while the program is running. You can e.g. use a debugger or you
can go with the age old debug print method (like I've done below).
I've extended the interpreter with a `p` command that prints the `acc`, and
made it accept any integer, otherwise it is the same.
The problem you're having is that you destroy the input before you can store
it in `defined`. I solve it by only using `\n` to split outer commands and `;`
to split commands inside a `def`.
import textwrap
class Interpreter:
acc = 0
defined = {}
def read(self, cmd):
cmd = textwrap.dedent(cmd).strip()
lines = cmd.split("\n")
for line in lines:
print '==> parsing:', line
self.parse(line)
def parse(self, cmd):
if cmd.startswith("def(") and cmd.endswith(")"):
print '::found def',
name, code = cmd[4:-1].split('|') # def( is 4 characters..
self.defined[name] = code.replace(';', '\n') # read() expects commands divided by \n, so replace ; before storing in self.defined
print self.defined
elif cmd in self.defined:
print '::found defined name', cmd, '=>', `self.defined[cmd]`
self.read(self.defined[cmd])
elif cmd == "i":
self.acc = int(input(">> "))
elif cmd == "p":
print(self.acc)
else:
self.acc += int(cmd)
intp = Interpreter()
intp.read("""
def(c|i;2)
c
p
""")
the output from a run:
(dev) go|c:\srv\tmp> python pars.py
==> parsing: def(c|i;2)
::found def {'c': 'i\n2'}
==> parsing: c
::found defined name c => 'i\n2'
==> parsing: i
>> 5
==> parsing: 2
==> parsing: p
7
writing an interpreter that recursively calls itself in this way has some
major limitations since every function call in the compiled language requires
a function call in the host language (Python). A better way is to transform
the program into a stack of commands, then pop a command from the stack and
execute it. You're done when the stack is empty. Function calls will then just
involve pushing the value of the defined symbol onto the stack. I've extended
your interpreter to do this below. I've added a command `x0` which will exit a
function call if `acc` is zero (and I push a `$marker` onto the stack before
calling a function so I know where the function call started):
def debug(*args):
pass
# print '[dbg]', ' '.join(str(a) for a in args)
class Interpreter:
acc = 0
defined = {}
commands = [] # the stack
def compile(self, program):
program = textwrap.dedent(program).strip()
lines = program.split("\n")
lines.reverse()
self.commands += lines
while self.commands:
command = self.commands.pop()
debug('==> running:', command, 'stack:', self.commands)
self.run_command(command)
def run_command(self, cmd):
if cmd.startswith("def(") and cmd.endswith(")"):
name, code = cmd[4:-1].split('|')
self.defined[name] = code.split(';')
debug('::found def', self.defined)
elif cmd in self.defined:
debug('::found defined name', cmd, '=>', `self.defined[cmd]`)
# mark command stack before executing function
self.commands += ['$marker']
self.commands += list(reversed(self.defined[cmd]))
elif cmd == '$marker':
pass # do nothing (we get here if a def doesn't have an x0 when the acc is zero)
elif cmd == 'x0':
# exit function call if acc is zero
if self.acc == 0:
while self.commands: # pop the stack until we get to the $marker
tmp = self.commands.pop()
if tmp == '$marker':
break
elif cmd == "i":
self.acc = int(input(">> "))
elif cmd == "p":
print(self.acc)
else:
self.acc += int(cmd)
we can now write recursive functions:
intp = Interpreter()
intp.compile("""
4
def(c|-1;x0;p;c)
c
p
""")
which outputs:
(dev) go|c:\srv\tmp> python pars.py
3
2
1
0
instead of an accumulator (`acc`) it is probably more expressive to use the
stack for values too, so e.g. `5;p` would push `5` on the stack, then `p`
would print the top element on the stack. Then you could implement addition
like `5;2;+` meaning `push 5`, `push 2`, and let `+` mean `add top two items
on stack and push the result`... I'll leave that as an exercise ;-)
|
Python BadYieldError when yielding Future
Question: I'm fairly new to programming using coroutines and I'm trying to build a
database interface for a custom database for a Tornado web server using Python
2.7.10. However, I keep getting a `BadYieldError`. I feel that this is
probably me not understanding how to fully use `tornado.gen.coroutine` with
Python's `yield`. I think there may be something wrong with how I'm yielding a
Future.
This is my code that keeps failing, where the `testGet` function at line 15 is
a simulation of a foreign database access.
from tornado import gen
import tornado.ioloop
import tornado.web
class MainHandler(tornado.web.RequestHandler):
@gen.coroutine
def get(self):
response = "Hello, world"
response = yield db_interface("default", "user_query")
self.write(str(response))
# meant to demonstrate
def testGet(query):
gen.sleep(2)
response = query
return response
@gen.coroutine
def db_interface(db, key):
print str(db)
d = yield testGet(key)
raise gen.Return(d)
def make_app():
return tornado.web.Application([
(r"/", MainHandler),
])
if __name__ == "__main__":
app = make_app()
app.listen(8889)
tornado.ioloop.IOLoop.current().start()
Output:
default
ERROR:tornado.application:Uncaught exception GET / (127.0.0.1)
HTTPServerRequest(protocol='http', host='127.0.0.1:8889', method='GET', uri='/', version='HTTP/1.1', remote_ip='127.0.0.1', headers={'Accept-Language': 'en-CA,en;q=0.8,fr;q=0.6', 'Accept-Encoding': 'gzip, deflate, sdch', 'Host': '127.0.0.1:8889', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8', 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.94 Safari/537.36', 'Dnt': '1', 'Connection': 'keep-alive', 'Cookie': 'ui-auth-127.0.0.1%3A8091=4f1301ef98f1b6fbd80ad059cd5aa2dc', 'Cache-Control': 'max-age=0', 'Upgrade-Insecure-Requests': '1'})
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/tornado/web.py", line 1415, in _execute
result = yield result
File "/usr/local/lib/python2.7/site-packages/tornado/gen.py", line 870, in run
value = future.result()
File "/usr/local/lib/python2.7/site-packages/tornado/concurrent.py", line 215, in result
raise_exc_info(self._exc_info)
File "/usr/local/lib/python2.7/site-packages/tornado/gen.py", line 876, in run
yielded = self.gen.throw(*exc_info)
File "test.py", line 12, in get
response = yield db_interface("default", "user_query")
File "/usr/local/lib/python2.7/site-packages/tornado/gen.py", line 870, in run
value = future.result()
File "/usr/local/lib/python2.7/site-packages/tornado/concurrent.py", line 215, in result
raise_exc_info(self._exc_info)
File "/usr/local/lib/python2.7/site-packages/tornado/gen.py", line 876, in run
yielded = self.gen.throw(*exc_info)
File "test.py", line 26, in db_interface
d = yield testGet(key)
File "/usr/local/lib/python2.7/site-packages/tornado/gen.py", line 870, in run
value = future.result()
File "/usr/local/lib/python2.7/site-packages/tornado/concurrent.py", line 215, in result
raise_exc_info(self._exc_info)
File "/usr/local/lib/python2.7/site-packages/tornado/gen.py", line 956, in handle_yield
self.future = convert_yielded(yielded)
File "/usr/local/lib/python2.7/site-packages/tornado/gen.py", line 1026, in convert_yielded
raise BadYieldError("yielded unknown object %r" % (yielded,))
BadYieldError: yielded unknown object 'user_query'
ERROR:tornado.access:500 GET / (127.0.0.1) 2011.67ms
I've been trying to look for how to do this properly on [Tornado's official
documentation](http://www.tornadoweb.org/en/stable/guide/coroutines.html) and
on [this blog](https://emptysqua.re/blog/refactoring-tornado-coroutines/), but
I don't think I've been able to fully grasp what they're saying about how to
use coroutines.
Answer: Either turn `testGet` into a coroutine like this so that it returns a future (note that `gen.sleep` itself returns a future, so it must be yielded as well):
@gen.coroutine
def testGet(query):
    yield gen.sleep(2)
    raise gen.Return(query)
Or don't use `yield` on it: `d = testGet(key)`.
If you do either, the code works. You should be using `yield` on functions
that return `Future`s.
|
HttpResponseRedirect' object has no attribute 'client'
Question: Django 1.9.6
I'd like to write some unit test for checking redirection.
Could you help me understand what am I doing wrongly here.
Thank you in advance.
The test:
from django.test import TestCase
from django.core.urlresolvers import reverse
from django.http.request import HttpRequest
from django.contrib.auth.models import User
class GeneralTest(TestCase):
def test_anonymous_user_redirected_to_login_page(self):
user = User(username='anonymous', email='[email protected]', password='ttrrttrr')
user.is_active = False
request = HttpRequest()
request.user = user
hpv = HomePageView()
response = hpv.get(request)
self.assertRedirects(response, reverse("auth_login"))
The result:
## ERROR: test_anonymous_user_redirected_to_login_page (general.tests.GeneralTest)
Traceback (most recent call last):
  File "/home/michael/workspace/photoarchive/photoarchive/general/tests.py", line 44, in test_anonymous_user_redirected_to_login_page
    self.assertRedirects(response, reverse("auth_login"))
  File "/home/michael/workspace/venvs/photoarchive/lib/python3.5/site-packages/django/test/testcases.py", line 326, in assertRedirects
    redirect_response = response.client.get(path, QueryDict(query),
AttributeError: 'HttpResponseRedirect' object has no attribute 'client'
Ran 3 tests in 0.953s
What pdb says:
-> self.assertRedirects(response, reverse("auth_login"))
(Pdb) response
<HttpResponseRedirect status_code=302, "text/html; charset=utf-8", url="/accounts/login/">
Answer: You need to add a client to the response object. See the updated code below.
from django.test import TestCase, Client
from django.core.urlresolvers import reverse
from django.http.request import HttpRequest
from django.contrib.auth.models import User
class GeneralTest(TestCase):
def test_anonymous_user_redirected_to_login_page(self):
user = User(username='anonymous', email='[email protected]', password='ttrrttrr')
user.is_active = False
request = HttpRequest()
request.user = user
hpv = HomePageView()
response = hpv.get(request)
response.client = Client()
self.assertRedirects(response, reverse("auth_login"))
|
Python OpenCV import error with python 3.5
Question: I am having some difficulties installing opencv with python 3.5.
I have linked the cv files, but upon `import cv2` I get an error saying
`ImportError:
dlopen(/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-
packages/cv2.so, 2): Symbol not found: _PyCObject_Type` or more specifically:
> /Library/Frameworks/Python.framework/Versions/3.5/bin/python3.5 /Users/Jamie/Desktop/tester/test.py
> Traceback (most recent call last):
>   File "/Users/Jamie/Desktop/tester/test.py", line 2, in <module>
>     import cv
>   File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/cv.py", line 1, in <module>
>     from cv2.cv import *
> ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/cv2.so, 2): Symbol not found: _PyCObject_Type
>   Referenced from: /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/cv2.so
>   Expected in: flat namespace in /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/cv2.so
I have linked cv.py and cv2.so from location
`/usr/local/Cellar/opencv/2.4.12_2/lib/python2.7/site-packages` correctly into
`/Library/Frameworks/Python.framework/Versions/3.5/bin`
Would anybody be able to help please?
Thanks very much
Answer: Found an answer - follow the instructions on [this
website](http://www.pyimagesearch.com/2015/06/29/install-opencv-3-0-and-python-3-4-on-osx/)
BUT you have to change the commands to the version of Python you are using.
Also, I didn't bother with the virtual environments.
And lastly cv2.so is actually called cv2.cpython-35m-darwin.so in the
build/lib folder that you make.
Then it works.
|
Generating a tree that may have branches that terminate [Python]
Question: I'm trying to create a script that will generate a tree in the following way:
if the number is even, split into 3 numbers: number // 2, number * 2, number + 15.
If number is odd: split into 2 numbers: number +1, number * 4. These branches
continue until a branch is either: greater than 100, equal to 6, a perfect
square (square root of number is an integer). I'm having a problem with having
either 2 or 3 branches for different conditions. Here is my code:
import numpy as np
class Node(object):
def __init__(self,number,parent):
self._parent = parent
self._number = number
self._satisfied = number > 100 or number == 6 or np.sqrt(number) % 1 == 0
self._branch1 = None
self._branch2 = None
self._branch3 = None
self._depth = parent.depth + 1 if parent != None else 1
@property
def parent(self):
return self._parent
@property
def number(self):
return self._number
@property
def satisfied(self):
return self._satisfied
@property
def depth(self):
return self._depth
@property
def branch1(self):
return self._branch1
@branch1.setter
def branch1(self,value):
self._branch1 = value
@property
def branch2(self):
return self._branch2
@branch2.setter
def branch2(self,value):
self._branch2 = value
@property
def branch3(self):
return self._branch3
@branch3.setter
def branch3(self,value):
self._branch3 = value
def print_all_chains(node,chain=[]):
if node.branch1 is None:
chain.append(node.number)
print '{0}: {1}'.format(node.satisfied, chain)
else:
print_all_chains(node.branch1, chain[:] + [node.number])
print_all_chains(node.branch2, chain[:] + [node.number])
print_all_chains(node.branch3, chain[:] + [node.number])
def make_daughters(number):
if number % 2 == 0: #even
daughters = [number // 2, number * 2, number + 15]
else:
daughters = [number + 1, number * 4, None]
return daughters
def build_tree(node, maxDepth):
if not node.satisfied and node.depth<maxDepth:
daughters = make_daughters(node.number)
node.branch1 = Node(daughters[0], node)
build_tree(node.branch1,maxDepth)
node.branch2 = Node(daughters[1], node)
build_tree(node.branch2,maxDepth)
node.branch3 = Node(daughters[2], node)
build_tree(node.branch3, maxDepth)
def find_decay(number):
root = Node(number,None)
build_tree(root,maxDepth=3)
print_all_chains(root)
if __name__ == '__main__':
find_decay(int(raw_input('Number: ')))
Answer: Why not use a list to keep the branch information? Then it can be rewritten
this way, without adding a **None** element in the two-branch
situation:
daughters = make_daughters(node.number)
node.branch = [Node(d, node) for d in daughters]
This avoids any worries about **None** elements.
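For example, a sketch of the rest of the code adapted to a branch list (it assumes `Node.__init__` sets `self.branch = []` in place of the three `_branchN` attributes):
    def make_daughters(number):
        if number % 2 == 0:  # even: three daughters
            return [number // 2, number * 2, number + 15]
        return [number + 1, number * 4]  # odd: two daughters, no None padding
    def build_tree(node, maxDepth):
        if not node.satisfied and node.depth < maxDepth:
            node.branch = [Node(d, node) for d in make_daughters(node.number)]
            for child in node.branch:
                build_tree(child, maxDepth)
    def print_all_chains(node, chain=[]):
        if not node.branch:  # leaf node
            print '{0}: {1}'.format(node.satisfied, chain + [node.number])
        else:
            for child in node.branch:
                print_all_chains(child, chain + [node.number])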
|
Getting the rotation of a face normal from the Axis
Question: I have been trying to get the Euler angles to each axis of a face normal
vector but I haven't been successful so far. I tried using the direction
cosines for each axis, but I haven't been successful as the returned value was
completely off. Here's my code so far, I'm pretty sure there's a problem with
my logic, especially in my math, but I can't really figure it out. I'm using
Python and Maya for this. I'm modifying the manipulator pivot to check the
result. Thanks in advance! Here's my code:
import maya.cmds as cmds
import math as math
def Difference(a,b):
return list(set(a)- set(b))
def VertPosition(vert, axis):
return cmds.xform(vert,q = 1,ws = 1, t = 1)[axis]
def GetFaceNormal(face):
cmds.select(face,r=1)
faceNormal = cmds.polyInfo(fn=True)
return faceNormal
def GetNumericNormalValue(normalInfoString):
faceNormalDirection = [0,0,0]
normalInfoString = str(normalInfoString).split(' ')
faceNormalDirection[0] = float(normalInfoString[-3])
faceNormalDirection[1] = float(normalInfoString[-2])
faceNormalDirection[2] = float(normalInfoString[-1][:-5])
return faceNormalDirection
def NormalizeVector(vector):
vLength = VLength(vector)
for axis in range(0,3):
vector[axis] = vector[axis]/vLength
return vector
def VLength(vector):
vLength = 0
for axis in range(0,3):
vLength += pow(vector[axis],2)
vLength = math.sqrt(vLength)
return vLength
def GetAngleToAxis(vector):
vLength = VLength(vector)
angleToAxis = [0,0,0]
for axis in range(0,3):
angleToAxis[axis] += math.degrees(math.acos(vector[axis]/vLength))
return angleToAxis
faceSelection = cmds.filterExpand(sm=34)
normal = GetNumericNormalValue(GetFaceNormal(faceSelection))
normal = NormalizeVector(normal)
normalAngles=GetAngleToAxis(normal)
cmds.manipPivot(o=normalAngles)
Answer: There are a few ways to interpret the data but if given a vector you want the
respective rotations per major axis you take the inverse tangents of the
components of the vector. The choice of components determines the axis of
rotation:
inVec = /(input vector)/
rotZ = atan(inVec.y/inVec.x)
rotY = atan(inVec.x/inVec.z)
rotX = atan(inVec.y/inVec.z)
results are in radians.
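In Python this might look like the following, using `math.atan2` to avoid division by zero and to keep the correct quadrant (a sketch):
    import math
    def anglesFromNormal(vector):
        x, y, z = vector
        rotX = math.degrees(math.atan2(y, z))
        rotY = math.degrees(math.atan2(x, z))
        rotZ = math.degrees(math.atan2(y, x))
        return [rotX, rotY, rotZ]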
|
How do you run , in Windows , this python script meant for Linux?
Question: I am using Python 3.5 on a Windows 7 computer.
I am trying to run a python script designed to run in Linux found in this book
Learning Python Network Programming by Dr. M. O. Faruque Sarker and Sam
Washington Jun 17, 2015 Chapter 6. IP and DNS GeoIP look-ups
..........The code is saved in my C drive as C:\Python35\geoiplookup.py I had
renamed it from "6_3_geoip_lookup.py" to "geoiplookup.py" ..........Here is
the code and also shown in a more user friendly image at
<http://imgur.com/PM196AV> ..........
import socket
from geoip import geolite2
import argparse
if __name__ == '__main__':
# Setup commandline arguments
parser = argparse.ArgumentParser(description='Get IP Geolocation info')
parser.add_argument('--hostname', action="store", dest="hostname", required=True)
# Parse arguments
given_args = parser.parse_args()
hostname = given_args.hostname
ip_address = socket.gethostbyname(hostname)
print("IP address: {0}".format(ip_address))
match = geolite2.lookup(ip_address)
if match is not None:
print('Country: ',match.country)
print('Continent: ',match.continent)
print('Time zone: ', match.timezone)
This script will show an output similar to the following (in Linux):
$ python 6_3_geoip_lookup.py --hostname=amazon.co.uk
IP address: 178.236.6.251
Country: IE
Continent: EU
Time zone: Europe/Dublin
I tried running the .py file in the Python GUI shell in Windows using
the following commands, without success. Please see the picture at
<http://imgur.com/wZ4m1S5>:
import geoiplookup
--hostname=amazon.co.uk
SyntaxError: can't assign to operator
--hostname='amazon.co.uk'
SyntaxError: can't assign to operator
'--hostname'='amazon.co.uk'
SyntaxError: can't assign to literal
'--hostname'=amazon.co.uk
SyntaxError: can't assign to literal
Before you accuse me of not trying this on the command line:
C:\Python35>python geoiplookup.py --hostname=amazon.co.uk
I have, and I got an error message:
IP address: 178.236.6.251
Traceback (most recent call last):
File "geoiplookup.py", line 17, in <module>
    match = geolite2.lookup(ip_address)
File "c:\Python35\lib\site-packages\geoip.py", line 382, in _read_mmdb_metadata
TypeError: a bytes-like object is required, not 'str'
c:\Python35>
Please advise. Thanks a million.
Thank you very much.
Answer: You aren't running the Python script, you're in the Python interpreter and
attempting to run each line of code.
Look at your Linux output:
$ python 6_3_geoip_lookup.py --hostname=amazon.co.uk
IP address: 178.236.6.251
Country: IE
Continent: EU
Time zone: Europe/Dublin
That first line is important. _That_ is what you should be running on your
Windows command line:
C:\Python35>python geoiplookup.py --hostname=amazon.co.uk
|
Getting correct values from variables in functions in Tkinter
Question: This is likely a simple problem related to how variables stored in a
dictionary are given values within a function.
I'm trying to get the value of a Boolean variable I'm storing in a dictionary
`(self.controller.Page1_data["Step1Complete"])` that I set to `"True"` in the
code below.
#!/usr/local/bin/env python3
import tkinter as tk # python3
from tkinter import BooleanVar
TITLE_FONT = ("Helvetica", 18, "bold")
class SampleApp(tk.Tk):
def __init__(self, *args, **kwargs):
tk.Tk.__init__(self, *args, **kwargs)
container = tk.Frame(self)
container.pack(side="top", fill="both", expand=True)
container.grid_rowconfigure(0, weight=1)
container.grid_columnconfigure(0, weight=1)
self.Page1_data={"Step1Complete": BooleanVar()}
self.frames = {}
for F in (StartPage, PageFifteen):
page_name = F.__name__
frame = F(container, self)
self.frames[page_name] = frame
frame.grid(row=0, column=0, sticky="nsew")
self.show_frame("StartPage")
def show_frame(self, page_name):
frame = self.frames[page_name]
frame.tkraise()
class StartPage(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
self.controller = controller
controller.title("Part B Data Collection")
controller.geometry("600x500")
label = tk.Label(self, text="Welcome to the Part B Test!", font=TITLE_FONT)
label.pack(side="top", fill="x", pady=10)
button1 = tk.Button(self, text="Complete Step 1", command=self.MakeStep1Complete)
button1.place(relx=0.385, rely=0.65)
def MakeStep1Complete(self):
Step1Complete=True
self.controller.Page1_data["Step1Complete"]=Step1Complete
self.controller.show_frame("PageFifteen")
class PageFifteen(tk.Frame):
def StatusCheck(self):
Step1Complete=self.controller.Page1_data["Step1Complete"]
print("True or false: at Step 15, Step 1 completed -a ")
print(Step1Complete)
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
self.controller = controller
label = tk.Label(self, text="Check the data", font=TITLE_FONT)
label.pack(side="top", fill="x", pady=10)
self.StatusCheck()
tk.Label(self, text="Click on each of the buttons below to review the data you have inputted").place(relx=0.15, rely=0.12)
Step1Complete=self.controller.Page1_data["Step1Complete"].get()
print("True or false: at Step 15, Step 1 completed - b")
print(Step1Complete)
button17=tk.Button(self, text="Check if we did everything",
command=self.StatusCheck)
button17.place(relx=0.7, rely=0.75)
if __name__ == "__main__":
app = SampleApp()
app.mainloop()
When I execute the above (now edited per Florent's suggestions) code, I get
the following results:
True or false: at Step 15, Step 1 completed -a
False
True or false: at Step 15, Step 1 completed - b
False
True or false: at Step 15, Step 1 completed -a
True
The first result is from the first execution of the `StatusCheck` function,
where I simply call the function. The second result is from when I am
explicitly executing the command
`(Step1Complete=self.controller.Page1_data["Step1Complete"].get())` that ought
to (but doesn't) get the correct Boolean data. The third result, and the only
result that gets the correct Boolean data, is generated when the user clicks
the `"Check if we did everything"`, which calls the `StatusCheck` function
from the `tk.Button` command config option.
Why might this happen? Why would the `StatusCheck` function not work the first
time I execute it, but then work the second time I execute it? Ideally, I'd
like the code to immediately find that the `"Step1Complete"` variable has been
set to `"True"` without having the user click a button.
I have reason to believe that the problem has something to do with which
functions write values to variables, since I'm able to make the program work
correctly when I set `"Step1Complete"` to `"True"` within the `__init__`
function in the `StartPage` class.
Being a Python N00b, I know I must be missing something pretty obvious, but I
really appreciate any help you could provide.
Answer: The short version of the answer is this: When you are using tkinter variables
you need to use set and get functions to access the value of the variables.
Also, as an unrelated issue: in a class you don't need to define a function
before calling it, and it is better to have __init__ at the top of your class
(cf. class PageFifteen).
Now, specifically for your code and why you get those prints: currently, you
call 2 prints when creating the PageFifteen Frame:
* in the first one, you print a tk.BooleanVar, which gives you its handle and not its value (use var.get() to print the value)
* in the second one, you correctly use get() and print the current value of a your var (which is the default of a BooleanVar: False)
Then you show StartPage, click the button, and overwrite your BooleanVar
with the bool True; that's why your last StatusCheck works without a get.
In your code you need to replace:
def StatusCheck(self):
Step1Complete=self.controller.Page1_data["Step1Complete"]
print("True or false: at Step 15, Step 1 completed -a ")
print(Step1Complete)
by:
def StatusCheck(self):
Step1Complete=self.controller.Page1_data["Step1Complete"].get()
print("True or false: at Step 15, Step 1 completed -a ")
print(Step1Complete)
and
def MakeStep1Complete(self):
Step1Complete=True
self.controller.Page1_data["Step1Complete"]=Step1Complete
self.controller.show_frame("PageFifteen")
by:
def MakeStep1Complete(self):
Step1Complete=True
self.controller.Page1_data["Step1Complete"].set(Step1Complete)
self.controller.show_frame("PageFifteen")
|
Matching two string lists that partially match into another list
Question: I am trying to match a List containing strings (50 strings) with a list
containing strings that are part of some of the strings of the previous list
(5 strings). I will post the complete code in order to give context below but
I also want to give a short example:
List1 = ['abcd12', 'efgh34', 'ijkl56', 'mnop78']
List2 = ['abc', 'ijk']
I want to return a list of the strings from `List1` that have matches in
`List2`. I have tried to do something with `set.intersection` but it seems you
can't do partial matches with it (or at least I can't with my limited abilities). I
also tried `any()` but I had no success making it work with my lists. In my
book it says I should use a nested loop but I don't know which function I
should use and how regarding lists.
Here is the complete code as reference:
#!/usr/bin/env python3.4
# -*- coding: utf-8 -*-
import random
def generateSequences (n):
L = []
dna = ["A","G","C","T"]
for i in range(int(n)):
random_sequence=''
for i in range(50):
random_sequence+=random.choice(dna)
L.append(random_sequence)
print(L)
return L
def generatePrefixes (p, L):
S = [x[:20] for x in L]
D = []
for i in range(p):
randomPrefix = random.choice(S)
D.append(randomPrefix)
return S, D
if __name__ == "__main__":
L = generateSequences(15)
print (L)
S, D = generatePrefixes(5, L)
print (S)
print (D)
Edit: As this was flagged as a possible duplicate, I want to note that this
post uses Python while the other is for R. I don't know R or whether there are
any similarities, but it doesn't look like it at first glance. Sorry for the
inconvenience.
Answer: Using a nested for loop:
def intersect(List1, List2):
# empty list for values that match
ret = []
for i in List2:
for j in List1:
if i in j:
ret.append(j)
return ret
List1 = ['abcd12', 'efgh34', 'ijkl56', 'mnop78']
List2 = ['abc', 'ijk']
print(intersect(List1, List2))
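Since you mentioned `any()`, the same result can also be written as a single comprehension (equivalent, just more compact):
    matches = [s for s in List1 if any(key in s for key in List2)]
    print(matches)  # ['abcd12', 'ijkl56']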
|
custom authentication in backend - Creating user at runtime without saving
Question: While searching online on how to accomplish custom authentication in Django I
came across [this](http://django.zone/blog/posts/custom-authentication-
backends-django/) and [this](https://www.djangorocks.com/tutorials/creating-a-
custom-authentication-backend/creating-a-simple-authentication-backend.html)
article. Both of these articles specified the same instructions. Currently i
have something like this. (Taken from first article)
class Client(models.Model):
email = models.EmailField(unique=True, max_length=100)
password = models.CharField(max_length=128)
Then in another python file I have this
from .models import Client
class ClientAuthBackend(object):
def authenticate(self, username=None, password=None):
try:
user = Client.objects.get(email=username)
return user
if password == 'master':
# Authentication success by returning the user
return user
else:
# Authentication fails if None is returned
return None
except Client.DoesNotExist:
return None
def get_user(self, user_id):
try:
return Client.objects.get(pk=user_id)
except Client.DoesNotExist:
return None
I just started using Django and have kind of skipped the model section for db
interaction since in my current project I am using RAW and custom SQL due to
certain reasons. My question is where does the
user = Client.objects.get(email=username)
get its user from. Do I have to make an entry into a database? What I want
to do is to create a user during runtime and not save it to the database. I tried
doing this:
#The following creates and saves a user in the db
u =User.objects.create_user('john', '[email protected]', 'johnpassword',cust_address="my_custom_address",cust_msg="Users custom message")
The above returns a `Client.DoesNotExist` exception.
Answer: > My question is where does the
>
>
> user = Client.objects.get(email=username)
>
>
> get its user from
Apparently, `Client` is a `models.Model`. This means that it represents a single
record of the relevant table in your database, wherever that is, depending on
the relevant `settings.py` setting.
Therefore, the table representing that model can be created with the next
Django migration and lots of other useful things Django allows for.
So essentialy the above statement instructs the [Django
ORM](https://docs.djangoproject.com/ja/1.9/topics/db/) to fetch all `Client`
records from that particular table with that specific email. If no such
entries exist, none will be returned.
> I tried doing this
>
>
> u =User.objects.create_user('john', '[email protected]',
> 'johnpassword',cust_address="my_custom_address",cust_msg="Users custom
> message")
>
This is where you complicate things a bit. The `create_user` method is not
part of Django ORM, but part of the [Django default auth model
manager](https://docs.djangoproject.com/es/1.9/ref/contrib/auth/#django.contrib.auth.models.UserManager.create_user)
`django.contrib.auth.models.User`. You should either provide such a method
yourself, or easier to just use the standard
[`create`](https://docs.djangoproject.com/ja/1.9/topics/db/queries/#creating-
objects) method provided with the default Django model manager.
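A minimal sketch of that, assuming the `Client` model from the question (the
field values are placeholders):

    client = Client.objects.create(email='john@example.com', password='johnpassword')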
Not saving the user model, even in some cache, does not make sense at all, as
it implies that the user would have to register each time he or she wishes to
log in.
Having said all those, I would strongly recommend you to read the official
Django documentation. All the above are covered, the documentation is very
comprehensive and not that long. You can then read and understand tutorials on
the wild which may or may not be correct or up to date.
Take a good read specifically on the [Customizing authentication
topic](https://docs.djangoproject.com/ja/1.9/topics/auth/customizing/), as it
provides additional methods far easier for the beginner.
|
Getting list of keywords from JSON
Question: I have encountered a problem and I don't understand why it printed out this
way.
Below is my code, please forgive me for the bad formatting as I am new to
programming, this is to open a text file which has a bunch of keywords
import urllib2
import json
f1 = open('CatList.text')
lines = f1.readlines()
for line in lines:
url ='https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle='+line+'&cmlimit=100'
print(url)
json_obj = urllib2.urlopen(url)
data = json.load(json_obj)
#to write the result
f2 = open('SubList.text', 'w')
f2.write(url)
for item in data['query']:
for i in data['query']['categorymembers']:
f2.write((i['title']).encode('utf8')+"\n")
I get the error:
Traceback (most recent call last):
File "Test2.py", line 16, in <module>
json_obj = urllib2.urlopen(url)
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 402, in open
req = meth(req)
File "/usr/lib/python2.7/urllib2.py", line 1113, in do_request_
raise URLError('no host given')
urllib2.URLError: <urlopen error no host given>
I am not sure what this error means but I tried this to print the url.
import urllib2
import json
f1 = open('CatList.text')
f2 = open('SubList.text', 'w')
lines = f1.readlines()
for line in lines:
url ='https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle='+line+'&cmlimit=100'
print(url)
f2.write(url+'\n')
The results I have gotten were weird (below is part of the result):
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Branches of geography
&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geography by place
&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geography awards and competitions
&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geography conferences
&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geography education
&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Environmental studies
&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Exploration
&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geocodes
&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geographers
&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geographical zones
&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geopolitical corridors
&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:History of geography
&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Land systems
&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Landscape
&cmlimit=100
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geography-related lists
&cmlimit=100
Notice that the URL is broken up into 2 parts
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geography-related lists
&cmlimit=100
instead of
https://en.wikipedia.org/w/api.php?action=query&format=json&list=categorymembers&cmtitle=Category:Geography-related lists&cmlimit=100
**My first question is how can I fix this?**
**Secondly, is this what that is giving me the error?**
My CatList.text is as follows:
Category:Branches of geography
Category:Geography by place
Category:Geography awards and competitions
Category:Geography conferences
Category:Geography education
Category:Environmental studies
Category:Exploration
Category:Geocodes
Category:Geographers
Category:Geographical zones
Category:Geopolitical corridors
Category:History of geography
Category:Land systems
Category:Landscape
Category:Geography-related lists
Category:Lists of countries by geography
Category:Navigation
Category:Geography organizations
Category:Places
Category:Geographical regions
Category:Surveying
Category:Geographical technology
Category:Geography terminology
Category:Works about geography
Category:Geographic images
Category:Geography stubs
Sorry for the long post. I really appreciate your help. Thank you.
Answer: Generally `'\n'` is used for a new line. In the same sense, in a file there is
a hidden `'\n'` character at the end of each line.
So **lines = f1.readlines()** includes the `'\n'` at the end of every line,
which then gets pasted into the middle of your URL. This is the problem.
To avoid this, you should read the file as **f1.read().splitlines()** instead.
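A sketch of the fixed loop; the category names also contain spaces, so
percent-encoding them with `urllib.quote` (an extra step beyond the newline
fix) keeps the URL in one piece:

    import urllib
    import urllib2
    import json

    f1 = open('CatList.text')
    lines = f1.read().splitlines()  # no trailing '\n' on each line

    for line in lines:
        # quote() percent-encodes spaces and other unsafe characters
        url = ('https://en.wikipedia.org/w/api.php?action=query&format=json'
               '&list=categorymembers&cmtitle=' + urllib.quote(line) + '&cmlimit=100')
        data = json.load(urllib2.urlopen(url))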
|
MD5 hash With ftplib
Question:
>>>import ftplib
>>>import hashlib
>>>ftp = ftplib.FTP('localhost','username','password')
>>>ftp.cwd('Server')
'250 Directory successfully changed.'
>>>m=hashlib.md5()
>>> file = open('Desktop/space.png','rb')
>>> m.update("space.png")
>>> dd = m.hexdigest()
>>> dd
'f646cdee237bd84e33485eb03c9228ac'
>>> ftp.storbinary('STOR '+dd, file)
'226 Transfer complete.'
>>> file.close()
>>> ftp.quit()
'221 Goodbye.'
>>>
How do I store each file under a different hexadecimal name? This Python code
keeps storing files under the same hexadecimal name. Anyone got any idea?
Answer: MD5 of `"space.png"` will always be `f646cdee237bd84e33485eb03c9228ac`. You
don't explain what you're trying to do exactly. If you're trying to use unique
filenames which are MD5 hashes, you can try something like this.
import uuid
import hashlib

m = hashlib.md5()                   # fresh hash object
m.update(str(uuid.uuid4()))         # feed it a random UUID string
dd = m.hexdigest()                  # unique hex digest per upload
ftp.storbinary('STOR ' + dd, file)
`uuid.uuid4()` will generate a universally unique identifier whose MD5 hash
will practically be unique.
|
Python argparse option concatenation
Question: Normally you can concatenate options like '-abbb', which will expand to '-a -b
-b -b'. Counts would be 1 for a, and 3 for b.
However when mixing prefix_chars I see something different ...
import argparse
parser = argparse.ArgumentParser( prefix_chars='-+' )
parser.add_argument( '-x', action='count', dest='counter1' )
parser.add_argument( '+x', action='count', dest='counter2' )
args = parser.parse_args( '-xxx +xxx -xxx'.split() )
print( 'counter1 = ' + str(args.counter1) )
print( 'counter2 = ' + str(args.counter2) )
Running this results in:
counter1 = 8
counter2 = 1
Apparently '+xxx' doesn't expand to '+x +x +x', but to '+x -x -x'.
Changing the prefix_chars to '+-' results in:
counter1 = 2
counter2 = 7
Now '-xxx' expands to '-x +x +x'.
Is this defined behaviour, or am I missing something?
Answer: This was patched in late 2010, in early 2.7
<http://bugs.python.org/issue9352>
================
I'm not aware of bug/issues or code changes that would affect this, but I
could dig into it.
For a start, strings of single prefix options are handled rather deeply in the
parsing. In the current `argparse.py` the relevant code is:
class ArgumentParser
def _parse_known_args
# function to convert arg_strings into an optional action
def consume_optional(start_index):
match_argument = self._match_argument
action_tuples = []
while True:
...
chars = self.prefix_chars # e.g. the `-+` parameter
if arg_count == 0 and option_string[1] not in chars:
action_tuples.append((action, [], option_string))
char = option_string[0]
option_string = char + explicit_arg[0]
new_explicit_arg = explicit_arg[1:] or None
optionals_map = self._option_string_actions
if option_string in optionals_map:
action = optionals_map[option_string]
explicit_arg = new_explicit_arg
else:
msg = _('ignored explicit argument %r')
raise ArgumentError(action, msg % explicit_arg)
It's the pair of lines:
char = option_string[0]
option_string = char + explicit_arg[0]
that preserves the initial `-/+` when handling the repeated characters (in the
unparsed `explicit_arg` string).
I can imagine the case where the code split `+xyz` into `+x`,`-y`,`-z`, and
was corrected to use `+x`,`+y`,`+z`. But it will require some digging into
bug/issues and/or the Python repository to find out if and when that change
was made.
What does your problem `argparse.py` have at this point?
|
Python multiline regex delimiter
Question: Having this multiline variable:
raw = '''
CONTENT = ALL
TABLES = TEST.RAW_1
, TEST.RAW_2
, TEST.RAW_3
, TEST.RAW_4
PARALLEL = 4
'''
The structure is always `TAG = CONTENT`, both strings are NOT fixed and
`CONTENT` could contain new lines.
I need a **`regex`** to get:
[('CONTENT', 'ALL'), ('TABLES', 'TEST.RAW_1\n , TEST.RAW_2\n , TEST.RAW_3\n , TEST.RAW_4\n'), ('PARALLEL', '4')]
Tried multiple combinations but I'm not able to _stop_ the `regex` _engine_ at
the right point for `TABLES` tag as its content is a _multiline string_
delimited by the next tag.
Some _attempts_ from the interpreter:
>>> re.findall(r'(\w+?)\s=\s(.+?)', raw, re.DOTALL)
[('CONTENT', 'A'), ('TABLES', 'T'), ('PARALLEL', '4')]
>>> re.findall(r'^(\w+)\s=\s(.+)?', raw, re.M)
[('CONTENT', 'ALL'), ('TABLES', 'TEST.RAW_1'), ('PARALLEL', '4')]
>>> re.findall(r'(\w+)\s=\s(.+)?', raw, re.DOTALL)
[('CONTENT', 'ALL\nTABLES = TEST.RAW_1\n , TEST.RAW_2\n , TEST.RAW_3\n , TEST.RAW_4\nPARALLEL = 4\n')]
Thanks!
Answer: You can use a positive lookahead to make sure you lazily match the value
correctly:
(\w+)\s=\s(.+?)(?=$|\n[A-Z])
^^^^^^^^^^^^
To be used with a DOTALL modifier so that a `.` could match a newline symbol.
The `(?=$|\n[A-Z])` lookahead will require `.+?` to match up to the end of
string, or up to the newline followed with an uppercase letter.
See the [regex demo](https://regex101.com/r/dP0wM5/2).
And alternative, faster regex (as it is an unrolled version of the expression
above) - but DOTALL modifier should NOT be used with it:
(\w+)\s*=\s*(.*(?:\n(?![A-Z]).*)*)
See [another regex demo](https://regex101.com/r/dP0wM5/3)
**Explanation** :
* `(\w+)` \- Group 1 capturing 1+ word chars
* `\s*=\s*` \- a `=` symbol wrapped with optional (0+) whitespaces
* `(.*(?:\n(?![A-Z]).*)*)` \- Group 2 capturing 0+ sequences of:
* `.*` \- any 0+ characters other than a newline
* `(?:\n(?![A-Z]).*)*` \- 0+ sequences of:
* `\n(?![A-Z])` \- a newline symbol not followed with an uppercase ASCII letter
* `.*` \- any 0+ characters other than a newline
[Python demo](https://ideone.com/PMdsd1):
import re
p = re.compile(r'(\w+)\s=\s(.+?)(?=$|\n[A-Z])', re.DOTALL)
raw = '''
CONTENT = ALL
TABLES = TEST.RAW_1
, TEST.RAW_2
, TEST.RAW_3
, TEST.RAW_4
PARALLEL = 4
'''
print(p.findall(raw))
|
pymssql package does not work with lambda in aws
Question: How do we create a pymssql package for AWS Lambda? I tried creating it using
`pip install pymssql -t .`. When I run my Lambda function it complains, saying
Unable to import module 'lambda_function': No module named lambda_function
I followed the steps at this link
<http://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-
deployment-package.html>
I have a windows machine
Answer: Finally I could do it. It didn't work with Windows packages, so I used Ubuntu
to package the freetds.so file and it worked.
|
Animate GMapPlot w/ Python/Bokeh
Question: I'm a very nooby programmer and this is my first Stack Overflow question. :)
So I'm trying to animate a car's trip on google maps using Python. I used
matplotlib at first and could get a dot animated over the path line... then I
tried using bokeh and successfully got the path to overlay on google maps...
My problem is that I haven't found a good way to do both (animate plot over
google maps).
My data is in the form of Lat/Long coordinates.
Any advice? Thanks in advance!
EDIT: Here's my code that does the gmapplot... I'd rather have this and no
animation than animation with no GMAP. My goal is to animate that "car" dot.
import numpy as np
from bokeh.io import output_file, show, vform
from bokeh.models.widgets import Dropdown
from bokeh.models import (GMapPlot, GMapOptions, ColumnDataSource, Line, Circle,
DataRange1d, PanTool, WheelZoomTool, BoxSelectTool, HoverTool)
data = np.genfromtxt('Desktop\Temp Data for Python\test data 3.csv', delimiter=',',
names=True)
map_options = GMapOptions(lat=np.average(data['Latitude']),
lng=np.average(data['Longitude']), map_type="roadmap", zoom=13)
plot = GMapPlot(x_range=DataRange1d(), y_range=DataRange1d(), map_options=map_options,
title="My Drive")
source = ColumnDataSource(data=dict(lat=data['Latitude'], lon=data['Longitude'],
speed=data['GpsSpeed'],))
path = Line(x="lon", y="lat", line_width = 2, line_color='blue')
car = Circle(x=data['Longitude'][0], y=data['Latitude'][0], size=5, fill_color='red')
plot.add_glyph(source, path)
plot.add_glyph(source, car)
plot.add_tools(PanTool(), WheelZoomTool(), BoxSelectTool(),
HoverTool(tooltips=[("Speed", "@speed"),]))
output_file("gmap_plot.html")
show(plot)
Answer: This may not be exactly what you are looking for, but you could have a slider
widget that controls the position of your car dot. The slider example found in
the bokeh docs (or github repository, I can't remember) helped me when I
started using sliders.
Just so you are aware, I was having problems with latlng points showing up in
the correct locations. There is about a 10px offset. This is an open issue
(github issue 2964).
The following code currently is just producing a generic bokeh Figure, but in
theory, if you change it from a `Figure` to a `GMapPlot` it _should_ work. I
wasn't able to get this working with GMapPlots directly. I think this may be
because of github issue 3737. I can't even run the Austin example from the
bokeh docs.
Hopefully this is what you had in mind
from bokeh.plotting import Figure, ColumnDataSource, show, vplot
from bokeh.io import output_file
from bokeh.models import (Slider, CustomJS, GMapPlot,
GMapOptions, DataRange1d, Circle, Line)
import numpy as np
output_file("path.html")
# Create path around roundabout
r = 0.000192
x1 = np.linspace(-1,1,100)*r
x2 = np.linspace(1,-1,100)*r
x = np.hstack((x1,x2))
f = lambda x : np.sqrt(r**2 - x**2)
y1 = f(x1)
y2 = -f(x2)
y = np.hstack((y1,y2))
init_x = 40.233688
init_y = -111.646784
lon = init_x + x
lat = init_y + y
# Initialize data sources.
location = ColumnDataSource(data=dict(x=[lon[0]], y=[lat[0]]))
path = ColumnDataSource(data=dict(x=lon, y=lat))
# Initialize figure, path, and point
"""I haven't been able to verify that the GMapPlot code below works, but
this should be the right thing to do. The zoom may be totally wrong,
but my latlng points should be a path around a roundabout.
"""
##options = GMapOptions(lat=40.233681, lon=-111.646595, map_type="roadmap", zoom=15)
##fig = GMapPlot(x_range=DataRange1d(), y_range=DataRange1d(), map_options=options)
fig = Figure(plot_height=600, plot_width=600)
c = Circle(x='x', y='y', size=10)
p = Line(x='x', y='y')
fig.add_glyph(location, c)
fig.add_glyph(path, p)
# Slider callback
callback = CustomJS(args=dict(location=location, path=path), code="""
var loc = location.get('data');
var p = path.get('data');
t = cb_obj.get('value');
/* set the point location to the path location that
corresponds to the slider position */
loc['x'][0] = p['x'][t];
loc['y'][0] = p['y'][t];
location.trigger('change');
""")
# The way I have written this, 'start' has to be 0 and
# 'end' has to be the length of the array of path points.
slider = Slider(start=0, end=200, step=1, callback=callback)
show(vplot(fig, slider))
|
What Arguments to use while doing a KS test in python with student's t distribution?
Question: I have data regarding metallicity in stars, I want to compare it with a
student's t distribution. To do this I am running a Kolmogorov-Smirnov test
using scipy.stats.kstest on python `KSstudentst =
scipy.stats.kstest(data,"t",args=(a,b))` But I am unable to find what the
arguments are supposed to be. I know the student's t requires a degree of
freedom (df) parameter but what is the other parameter. Also which one of the
two is the df parameter. In the documentation for scipy.stats.t.cdf the inputs
are the position at which value is to be calculated and df, but in the KS test
there is no sense in providing the position.
Answer: Those seem like the arguments for
[`scipy.stats.t.cdf`](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.t.html):
`(df, loc=0, scale=1)` for standard t. Since they have default values, you
need to pass a tuple but it can be a singleton (`args = (2, )` for df=2)
import scipy.stats as ss
import numpy as np
np.random.seed(0)
data = np.random.randn(100)
ss.kstest(data, "t", args = (2, ))
Out[45]: KstestResult(statistic=0.093219139130061066, pvalue=0.33069879934011182)
Or passing loc=0 and scale=1, the same results:
ss.kstest(data, "t", args = (2, 0, 1))
Out[46]: KstestResult(statistic=0.093219139130061066, pvalue=0.33069879934011182)
|
How to run Python Autobahn applications in production?
Question: I'm using Autobahn on a project and while reading the [running autobahn
components section](http://autobahn.ws/python/wamp/programming.html#running-
components) of the docs I've come across the fact that you can run Application
components using the built in ApplicationRunner. E.g.,
from autobahn.asyncio.wamp import ApplicationRunner
runner = ApplicationRunner(url=u"ws://localhost:8080/ws", realm=u"realm1")
runner.run(MyComponent)
The
[docs](http://autobahn.ws/python/reference/autobahn.asyncio.html#autobahn.asyncio.wamp.ApplicationRunner)
specifically state the application runner is for development,
> This class is a convenience tool mainly for development and quick hosting of
> WAMP application components.
I want to know how WAMP application components (created via Autobahn or
otherwise) are supposed to hosted in production? As in, are there production
runners? I.e. what's the gunicorn of this area?
Answer: If Crossbar is running on the same machine as your application, then you can
configure Crossbar to start your component automatically.
Otherwise, `ApplicationRunner` is a convenience API, you can always do what it
does (start event loop, init the connections, etc) manually if you want. You
can check
[here](http://autobahn.ws/python/_modules/autobahn/twisted/wamp.html#ApplicationRunner)
for the source code to see exactly what all it does.
|
How do I add category names to my seaborn boxplot when my data is from a python dictionary?
Question: I have some data that is sitting in a python dictionary of lists.
How can I use the keys from the dictionary as category labels for this
boxplot? Here is a sample of the dictionary, plot_data:
plot_data = {
'Group1': [0.02339976, 0.03235323, 0.12835462, 0.10238375, 0.04223188],
'Group2': [0.02339976, 0.03235323, 0.12835462, 0.10238375, 0.04223188]
}
This code is probably a mess, but here it is:
data = plot_data.values()
#Get data in proper format
fixed_data = list(sorted(data))
#Set up the graph parameters
sns.set(context='notebook', style='whitegrid')
sns.axlabel(xlabel="Groups", ylabel="Y-Axis", fontsize=16)
#Plot the graph
sns.boxplot(data=fixed_data, whis=np.inf, width=.18)
sns.swarmplot(data=fixed_data, size=6, edgecolor="black", linewidth=.9)
[](http://i.stack.imgur.com/RzLiE.png)
Answer: Here how to add category labels "manually":
import seaborn as sns, matplotlib.pyplot as plt, operator as op
plot_data = {
'Group1': range(10,16),
'Group2': range(5,15),
'Group3': range(1,5)
}
# sort keys and values together
sorted_keys, sorted_vals = zip(*sorted(plot_data.items(), key=op.itemgetter(1)))
# almost verbatim from question
sns.set(context='notebook', style='whitegrid')
sns.axlabel(xlabel="Groups", ylabel="Y-Axis", fontsize=16)
sns.boxplot(data=sorted_vals, width=.18)
sns.swarmplot(data=sorted_vals, size=6, edgecolor="black", linewidth=.9)
# category labels
plt.xticks(plt.xticks()[0], sorted_keys)
plt.show()
And here the output:
[](http://i.stack.imgur.com/xN3Vz.png)
|
RoboBrowser BadRequestKeyError(key)
Question: I am trying to sign in to a website using RoboBrowser and I am stuck with a
error message.
My code:
from robobrowser import RoboBrowser
browser = RoboBrowser()
def login():
browser.open('https://www.kijiji.ca/t-login.html')
form = browser.get_form(id="login-form")
form.fields["LoginEmailOrNickname"].value = "an_email_address"
form.fields["login-password"].value = "a_password"
form.fields["login-rememberMe"].value = "true"
browser.submit_form(form)
login()
The error message:
Traceback (most recent call last):
File "/home/rojaslee/Desktop/kijiji_poster/kijiji_poster.py", line 16, in <module>
login()
File "/home/rojaslee/Desktop/kijiji_poster/kijiji_poster.py", line 11, in login
form.fields["LoginEmailOrNickname"].value = ["an_email_address"]
File "/usr/local/lib/python3.4/dist-packages/werkzeug/datastructures.py", line 744, in __getitem__
raise exceptions.BadRequestKeyError(key)
werkzeug.exceptions.BadRequestKeyError: 400: Bad Request
Answer: The HTML code from the web site you want to log in is as follows:
<section>
<label for="LoginEmailOrNickname">Email Address or Nickname:</label>
<input id="LoginEmailOrNickname" name="emailOrNickname" req="req" type="text" value="" maxlength="128"><span class="field-message" data-for="LoginEmailOrNickname"></span>
</section>
<section>
<label for="login-password">Password:</label>
<input id="login-password" name="password" req="req" type="password" value="" maxlength="64"><span class="field-message" data-for="login-password"></span>
<a id="LoginForgottenPassword" href="/t-forgot-password.html">Forgot your password?</a>
</section>
To put a value on the form fields you have to get the **name** attribute, not
the id.
This code should work:
form.fields["emailOrNickname"].value = "an_email_address"
form.fields["password"].value = "a_password"
form.fields["rememberMe"].value = "true"
If you need to inspect the fields of the form, you can print them (the
question's traceback shows Python 3, so use the function form):
    print(form.fields)
|
Cannot install bob.measure python pacakge
Question: I have installed all the dependencies of bob.measure according to the graph
presented in <https://github.com/idiap/bob/wiki/Dependencies> and
<https://github.com/idiap/bob/wiki/Installation>:
However, I cannot install the package this is the traceback:
omar@ubuntuv2:~/bob.measure$ sudo python setup.py
Traceback (most recent call last):
File "setup.py", line 50, in <module>
boost_modules = boost_modules,
File "/usr/local/lib/python2.7/dist-packages/bob/blitz/extension.py", line 52, in __init__
BobExtension.__init__(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/bob/extension/__init__.py", line 294, in __init__
bob_includes, bob_libraries, bob_library_dirs, bob_macros = get_bob_libraries(self.bob_packages)
File "/usr/local/lib/python2.7/dist-packages/bob/extension/__init__.py", line 186, in get_bob_libraries
pkg = importlib.import_module(package)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/usr/local/lib/python2.7/dist-packages/bob/math/__init__.py", line 6, in <module>
bob.extension.load_bob_library('bob.math', __file__)
File "/usr/local/lib/python2.7/dist-packages/bob/extension/__init__.py", line 237, in load_bob_library
ctypes.cdll.LoadLibrary(full_libname)
File "/usr/lib/python2.7/ctypes/__init__.py", line 443, in LoadLibrary
return self._dlltype(name)
File "/usr/lib/python2.7/ctypes/__init__.py", line 365, in __init__
self._handle = _dlopen(self._name, mode)
OSError: /usr/local/lib/python2.7/dist-packages/bob/math/libbob_math.so: undefined symbol: dsyevd_
All the bob.measure dependencies are up to date. I have an Ubuntu 14.04
system.
Answer: Install dependencies on Ubuntu:
sudo apt-get install libboost-all-dev
sudo apt-get install libblitz0-dev
sudo apt-get install cmake
sudo apt-get install libhdf5-serial-dev
sudo apt-get install libtiff5
sudo apt-get install libtiff5-dev
sudo apt-get install libtiff-tools
sudo apt-get install giflib-dbg
Try installing using pip
$ pip install numpy
$ pip install bob.extension
$ pip install bob.blitz
$ pip install bob.core
$ pip install bob.io.base
$ pip install bob.io.image
Install bob.measure
Install the following dependencies if they are not present
sudo apt-get install libopenblas-dev
sudo apt-get install libcppnetlib-dev
sudo apt-get install python-netlib
apt-get install libfreetype6-dev
Use pip3
sudo pip3 install bob.measure
|
How do I open all files in a directory in python?
Question: I have the code
import os
def Load():
for filename in os.listdir("directoryPath"):
content = open(filename, "r")
Load()
And I would like to know how to load the files that `filename` returns, at the
moment I just get an error saying `FileNotFoundError: [Errno 2] No such file
or directory: 'Adjectives.txt'`
Answer: [`os.listdir()`](https://docs.python.org/2/library/os.html#os.listdir) returns
only the filename, not the full path. You need to pass the whole path to
`open`. You can use
[`os.path.join`](https://docs.python.org/2/library/os.path.html#os.path.join)
to combine directory and filename:
content = open(os.path.join('directoryPath', filename), 'r')
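A fuller sketch of the function along those lines, using `with` so each file
is closed after reading (returning the contents is an assumption — the
question never says what `Load` should do with them):

    import os

    def Load(directory):
        contents = []
        for filename in os.listdir(directory):
            path = os.path.join(directory, filename)  # full path, not just the name
            with open(path, 'r') as f:
                contents.append(f.read())
        return contents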
|
Trim an ordered dict to the last x items
Question: I'm trying to trim an ordered dict to the last x items.
I have the following code, which works but doesn't seem very pythonic.
Is there a better way of doing this?
import collections
d = collections.OrderedDict()
# SNIP: POPULATE DICT HERE!
d = collections.OrderedDict(d.items()[-3:])
Answer: If you wish to trim the dictionary in place, then you can pop the offending
items:
for k in d.keys()[:-3]:
d.pop(k)
(On python 3, you'll need to convert `.keys()` to a list).
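For instance, a sketch of the Python 3 form:

    # Python 3: keys() returns a view, so materialize it before mutating the dict
    for k in list(d.keys())[:-3]:
        d.pop(k)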
If you're wishing to create a new `OrderedDict`, then it's not quite clear
what is "unpythonic" about your current approach.
|
Unable to locate the installed graph-tool package in Python
Question: I spent 1.5 hours installing the graph-tool package. The installation declared
that it was successful. But when I try to import it, it says "no module by
name graph_tool...". I guess I am missing the path or link to this module.
How do I link or import it?
Also, when I run the command "pip freeze" it does not show the graph_tool
package as installed. Please help me resolve these problems. Thanks.
Answer: You are probably using the system's Python, whereas graph-tool was installed
for MacPorts' Python. You should call the interpreter corresponding to the
MacPorts version, usually `/opt/local/bin/python`.
|
How Can I Write Scripy To See Url Website?
Question: As the title says, I want to see the part of a URL after the question mark
(?). For example, the site WWW.name.com has many names after the `?`, like
`php?id=`, `php?cat=`, `php?page=`. How do I search a URL and show everything
after the question mark, in Python 3 please?
Answer: Here, try this code (in Python 3 the module is `urllib.parse`):
    from urllib.parse import urlparse

    rurl = urlparse('http://name.com/data.php?id=5')
    query = rurl.query
    print(query)
The code above will print the query string, which is "id=5".
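If you need the individual parameters rather than the raw string, `parse_qs`
from the same module splits them into a dict (a small sketch with made-up
values):

    from urllib.parse import urlparse, parse_qs

    rurl = urlparse('http://name.com/data.php?id=5&cat=books')
    print(parse_qs(rurl.query))  # {'id': ['5'], 'cat': ['books']}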
|
Struggling with Python Regex for a very specific array
Question: I'm trying to make a regex method (if you can find an easier method, please
tell)
For example: I need the lines that are marked with "!" at the end
[ExpertSingle]
{
192 = N 0 0
384 = N 0 0
576 = N 0 0
768 = N 0 0
960 = N 0 0
}
**Edit** : replaced with actual data
Answer: You could just find the lines that with one or more numbers
import re
inputStr = """[ExpertSingle]
{
192 = N 0 0
270 = N 1 0
270 = N 2 0
360 = N 0 0
}"""
goodLines = re.findall(r"\d+.+", inputStr)
print(goodLines)
This outputs: `['192 = N 0 0', '270 = N 1 0', '270 = N 2 0', '360 = N 0 0']`
If you wanted to be ultra strict and only find words in the format of some
digits, a space, an equals, a space, a letter, a space, a digit, a space, then
another digit you could use
goodLines = re.findall(r"\d+\s=\s\w\s\d\s\d", inputStr)
|
to trim a any substring inside bracket in the string
Question: I have a string in Python:
    line=r"X:\folder\Code\Mod\ACCSC1C1.c 351: Error -> Warning 550 Symbol XXX (line 34) not accessed"
and I want to trim this line to remove "(line 34)". The line number varies
from case to case, so the line may look like:
    X:\ACCSC1C1.c 333: Error -> Warning 4' (line 536) not accessed
    X:\ACCSddC1.c 633: Error -> Warning 8' (line 111) not accessed
and my output should come out like:
    X:\ACCSC1C1.c 333: Error -> Warning 4' not accessed
    X:\ACCSddC1.c 633: Error -> Warning 8' not accessed
I used the wildcard '*' but it is not working; even eliminating the brackets
() shows some errors. I usually use the re module.
Thanks
Answer: Try this:
import re

line = r"X:\folder\Code\Mod\ACCSC1C1.c 351: Error -> Warning 550 Symbol XXX (line 34) not accessed"
re.sub(r"\s*\(line \d+\)", '', line)  # \s* also eats the space before the bracket
> > 'X:\folder\Code\Mod\ACCSC1C1.c 351: Error -> Warning 550 Symbol XXX not
> accessed'
From the documentation for `sub`:
> re.sub(pattern, repl, string, count=0, flags=0) Return the string obtained
> by replacing the leftmost non-overlapping occurrences of pattern in string
> by the replacement repl. If the pattern isn’t found, string is returned
> unchanged.
|
Plotting 3D Polygons in Python 3
Question: In my quest to somehow get 3D polygons to actually plot, I came across the
following script (EDIT: modified slightly): [Plotting 3D Polygons in python-
matplotlib](http://stackoverflow.com/questions/4622057/plotting-3d-polygons-
in-python-matplotlib)
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.collections import Poly3DCollection
import matplotlib.pyplot as plt
fig = plt.figure()
ax = Axes3D(fig)
x = [0,1,1,0]
y = [0,0,1,1]
z = [0,1,0,1]
verts = [zip(x, y,z)]
ax.add_collection3d(Poly3DCollection(verts),zs=z)
plt.show()
But when I run that, I get the following error message:
TypeError: object of type 'zip' has no len()
It seems that this may be a Python 2 vs. 3 thing, as I am running in Python 3,
and that post is five years old. So I changed the third-to-last line to:
verts = list(zip(x, y, z))
Now verts shows up in the variable list, but I still get an error:
TypeError: zip argument #1 must support iteration
What? How do I fix this?
Answer: I've had a similar problem with the zipping. I support the thesis it is a
python 2.x vs 3.x thing.
However, I've found somewhere that apparently works:
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
import matplotlib.pyplot as plt
fig = plt.figure()
ax = Axes3D(fig)
x = [0, 1, 1, 0]
y = [0, 0, 1, 1]
z = [0, 1, 0, 1]
verts = [list(zip(x, y, z))]
print(verts)
ax.add_collection3d(Poly3DCollection(verts), zs='z')
plt.show()
I've thus made two changes:
1. replaced the line `from matplotlib.collections import Poly3DCollection` with `from mpl_toolkits.mplot3d.art3d import Poly3DCollection`.
I don't know where your import statement originates from, but it didn't seem
to work for me.
2. changed the line: `verts = list(zip(x,y,z))` to `verts = [list(zip(x, y, z))]`
Somehow, the latter seems to work. Having just started with Python myself, I
cannot offer an iron-clad explanation. However, here goes nothing: the class
Poly3DCollection requires as its first input parameter a "collection" of
polygons, hence a list of lists of vertices. In this case only a single list
was given, so one level of nesting was missing. Adding another level (via the
[...]) made it work.
I've got no idea if this explanation makes sense, but it fits intuitively to
me ;)
These modifications seem to work, as this code creates the desired 3D polygon
(I've noticed that since this is my first post, I'm not allowed to post a
proof-of-the-pudding figure.... )
hope this was useful,
kind regards
|
Differences in image dimensions from cv2 (python) and torch/image (libpng)
Question: I use cv2.imread and cv2.imdecode depending on if I am loading an image from
disk or from url. Comparatively, I use image.load to load from disk, which
utilizes libpng. When using cv2, my image.shape outputs with (height, width,
channels). However when using torch, the shape is (channels, height, width).
I am curious as to why this is and how I can get the two to equate. My goal is
to combine many images, downloaded with cv2, into a torch tensor utilizing the
(channels, height, width) dimensions. I have tried to reshape the numpy arrays
when downloaded with cv2 but the tensors do not match those downloaded with
torch.
Answer: Different libraries may store the image data in different memory formats -
this is completely up to the library and its purpose (speed of traversing the
image data, memory efficieny, etc...).
A possible solution (without further 3rd-party tools) for your problem can be
the use of `transpose`. A simple example:
import numpy as np
x = np.random.random((3, 15, 17))
print(x.shape)
# transpose axes with this order
y = x.transpose((1,2,0))
print(y.shape)
# for the sake of testing the euqality of the respective slides:
print(np.linalg.norm(x[0,:,:] - y[:,:,0]))
Sample Output:
(3, 15, 17)
(15, 17, 3)
0.0
|
what is the purpose of db=0 when connecting to redis server in python?
Question: In the redis python client documentation most examples have db=0 passed in
constructor parameters. <https://pypi.python.org/pypi/redis>
What is the use of db=0 and in what case should i use other values instead of
0?
import redis
redis_client = redis.Redis(host='localhost', port=6379, db=0)
Answer: There are 16 databases that a redis server starts up with. They are labeled
0-15. If you chose to and organized your data accordingly, you could
manipulate data in each of these. When connecting, you need to choose one. The
default choice is `db=0`. However, it could have been any number in the range
0-15.
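A small illustrative sketch (key names are placeholders):

    import redis

    r0 = redis.Redis(host='localhost', port=6379, db=0)  # default database
    r1 = redis.Redis(host='localhost', port=6379, db=1)  # a separate keyspace

    r0.set('key', 'value-in-db0')
    r1.set('key', 'value-in-db1')  # same key name, independent value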
|
python-fabric-manually set host parameter
Question: I want to set my env.hosts manually. I have a file holding several different
env.hosts lists, something like this:
My host.py:
@task
def v2bg():
env.hosts = ["12.12.11.132","13.10.18.22"]
@task
def api():
env.hosts = ["4.3.81.27:2201", "4.3.1.7:2202", "20.3.18.07:2203", "60.83.581.27:2204"]
@task
def emailapi():
env.hosts = ["30.40.50.56:2201", "40.40.36.56:2202"]
@task
def v2emailapi():
env.hosts = ["20.18.148.30:2201", "20.18.48.3:2202"]
My emailapi.py
import host
from fabric.api import *
from fabric.context_managers import *
from fabric.contrib import *
env.user = "ubuntu"
env.key_filename = "~/Documents/Pem/sec"
env.parallel = True
env.warn_only = True
I want my emailapi.py to pick up these IPs by importing the host.py file. How
can I do that? I am still confused. Please help me out.
"networks": {
"v4": [
{
"ip_address": "104.131.186.241",
"netmask": "255.255.240.0",
"gateway": "104.131.176.1",
"type": "public"
}
],
One more thing: if I want to retrieve the ip_address values above and set
env.hosts automatically, how can I achieve that?
Answer: I think you want to do this. Let me know if it works.
my_roles.json:
    {
        "ip": ["[email protected]", "[email protected]"],
        "ip1": ["[email protected]", "[email protected]"]
    }
fabfile.py:
from fabric.api import env, run, task
import json
def load_roles():
with open('my_roles.json') as f:
env.roledefs = json.load(f)
load_roles()
@task
def my_task():
run("hostname")
CLI
`fab -R ip my_task`
The output from running "my_task" (here just `hostname`) is printed for each
host in the role.
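As for the second question, a sketch under the assumption that the "networks"
dict shown in the question has already been parsed from JSON:

    from fabric.api import env

    networks = {
        "v4": [
            {"ip_address": "104.131.186.241", "netmask": "255.255.240.0",
             "gateway": "104.131.176.1", "type": "public"}
        ]
    }
    # collect the public addresses and hand them to Fabric
    env.hosts = [n["ip_address"] for n in networks["v4"] if n["type"] == "public"]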
|
Get instance variables in order in Python
Question: Let's say I have the following class:
class Foo(object):
def __init__(self, value):
self.d = value
self.a = value
self.s = value
self.k = value
I want to [retrieve the instance
variables](http://stackoverflow.com/questions/109087/how-to-get-instance-
variables-in-python) in the order of declaration.
I tried with `vars()` without any success:
list(vars(Foo('value')).keys())
# ['a', 'k', 's', 'd']
What I would like:
list(magic_method(Foo('value')).keys())
# ['d', 'a', 's', 'k']
**Edit** :
Obviously, there would be a different value for each field.
My goal is to generate XML thanks to the object variables. To be valid, the
XML tags has to be in the correct order.
This, combined with `__iter__` override would allow me to only have to manage
dictionaries of object to generate my XML.
Let's take a library as an example. Imagine you have the classes `Book`,
`Date`, `Person`, `Author` and `Borrower`:
    class Book(object):
        def __init__(self):
            self._borrower = Borrower()
            self._author = Author()

    class Date(object):
        def __init__(self, date):
            self._date = date

    class Person(object):
        def __init__(self, name):
            self._name = name

    class Author(object):
        def __init__(self):
            self._person = Person("Daniel")

    class Borrower(object):
        def __init__(self):
            self._person = Person("Jack")
            self._date = Date("2016-06-02")
I would like to create the following XML:
<Book>
<Borrower>
<Person>Jack</Person>
<Date>2016-06-02</Date>
</Borrower>
<Author>
<Person>Daniel</Person>
</Author>
</Book>
I know the classes might look weird (like Date here), but I wanted to make the
problem as simple as possible (and there are fields that make perfect sense).
In practice, I would query a database and probably pass an record identifier
in initializers. The point is that there are some data that respects the same
syntax (i.e. Person here).
To summarize, I would like to create such an XML using Python objects. Order
_matters_. That's why I wanted to retrieve variables in order for that
purpose: I could then extract the class and generate the XML tag.
Answer: If you want ordering of object variables you can use something like that:
from collections import OrderedDict
class FooModel(object):
def __new__(cls, *args, **kwargs):
instance = object.__new__(cls)
instance.__odict__ = OrderedDict()
return instance
def __setattr__(self, key, value):
if key != '__odict__':
self.__odict__[key] = value
object.__setattr__(self, key, value)
def keys(self):
return self.__odict__.keys()
def iteritems(self):
return self.__odict__.iteritems()
class Foo(FooModel):
def __init__(self, value):
self.d = value
self.a = value
self.s = value
self.k = value
Output:
>>> f = Foo('value')
>>> f.x = 5
>>> f.y = 10
>>> f.a = 15
>>> f2 = Foo('value')
>>> print "f.keys()", f.keys()
f.keys() ['d', 'a', 's', 'k', 'x', 'y']
>>> print "f2.keys()", f2.keys()
f2.keys() ['d', 'a', 's', 'k']
print list(f.iteritems())
[('d', 'value'), ('a', 15), ('s', 'value'), ('k', 'value'), ('x', 5), ('y', 10)]
|
Stdin Stdout python
Question: For my work I'm used to working with MATLAB. Now I'm trying to learn the basic
skills of Python as well. Currently I'm working on the following exercise:
> You are interested in extracting all of the occurrences that look like this
>
> `<Aug22-2008> <15:37:37> Bond Energy LDA -17.23014168 eV`
>
> In particular, you want to gather the numerical values (eg, `-17.23014168`),
> and print them out. Write a script that reads the output file from standard
> input, and uses regular expressions to locate the values you want to
> extract. Have your script print out all the values to standard output.
This is the code I use:
import os,re
from string import rjust
dataEx=re.compile(r'''
^\s*
<Aug22-2008>
\s+
<\d{2}:\d{2}:\d{2}>
\s+
Bond
\s
Energy
\s
LDA
\s+
((\+|-)?(\d*)\.?\d*)
''',re.VERBOSE)
f=open('Datafile_Q2.txt','r')
line = f.readline()
while line != '':
line = f.readline() # Get next line
m = dataEx.match(line)
if m:
# print line
print m.group(1)
With this code I'm able to find all the values in the datafile they ask for.
However, I do have a few questions. Firstly, they ask specifically about stdin
and stdout. Now I'm wondering: do I use the right code to read the output file
from standard input, and do I really print all the values to standard output
this way? Furthermore, I'm wondering whether there is a better or easier way
to find the required values?
Answer: To find the numbers your looking for I would use a positive lookbehind and
lookahead function in your regular expression.
(?<=Bond Energy LDA ).*(?= eV)
This checks to see if the thing you are looking at is proceeded by 'Bond
Energy LDA' and followed by 'eV' but does not include them in the string you
extract. So assuming that the numbers you are looking for are always proceeded
and followed by those two things you can find them like that.
A nice way to read from stdin is to use the sys python module.
import sys
Then you can read lines straight from stdin:
import sys
import re

for line in sys.stdin:
    matchObj = re.search(r'(?<=Bond Energy LDA ).*(?= eV)', line, re.I)
    if matchObj:
        print(matchObj.group())
If the regular expression is not found on the line, then matchObj will be
None, skipping the if statement. If it is found, the search returns a match
object containing groups. You can then print the group to stdout, as print
will by default print to stdout if no file is given.
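To actually read from standard input, run the script with the data piped or
redirected into it, e.g. `python extract.py < Datafile_Q2.txt` (the script
name here is hypothetical).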
|
passing textures to GLSL
Question: I'm trying to read a texture in a shader. And I only get a uniform screen
(only one value seems to be read). If I comment the shader lines declaration
(64 to 74), my texture is displayed correctly. So I guess it is correctly
declared.
here is my code:
#!/usr/bin/python
import sys
import numpy as np
try:
from OpenGL.GL import *
from OpenGL.GL import shaders
from OpenGL.GLU import *
from OpenGL.GLUT import *
except:
print '''ERROR: PyOpenGL not installed properly.'''
################################################################################
# GLOBALS
screen_w = 800
screen_h = 600
################################################################################
# SHADERS
f_shader = """
#version 120
uniform sampler2D texture_w;
void main() {
vec2 c = vec2(int(gl_FragCoord[0]),int(gl_FragCoord[1]));
vec4 t = texture2D(texture_w, c);
gl_FragColor = t;
}
"""
v_shader = """
#version 120
void main() {
gl_FrontColor = gl_Color;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
"""
################################################################################
# FUNCS
def setup():
glEnable(GL_DEPTH_TEST)
FRAGMENT_SHADER = shaders.compileShader(f_shader, GL_FRAGMENT_SHADER)
VERTEX_SHADER = shaders.compileShader(v_shader, GL_VERTEX_SHADER)
# Weight texture declaration
img_data = np.random.rand(screen_w, screen_h, 3)
img_data2 = img_data*255.0
texture = glGenTextures(1)
glPixelStorei(GL_UNPACK_ALIGNMENT,1)
glBindTexture(GL_TEXTURE_2D, texture)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, screen_w, screen_h, 0, GL_RGB, GL_UNSIGNED_BYTE, img_data2)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
shader = shaders.compileProgram(VERTEX_SHADER, FRAGMENT_SHADER)
shaders.glUseProgram(shader)
u_loc = glGetUniformLocation(shader, "texture_w")
glActiveTexture(GL_TEXTURE0)
glBindTexture(GL_TEXTURE_2D, texture)
glUniform1i(u_loc, 0)
def display():
"Global Display function"
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
glEnable(GL_TEXTURE_2D)
glColor(1,1,1)
glBegin(GL_TRIANGLES)
glTexCoord2f(0, 0)
glVertex(-1,-screen_w/float(screen_h),-2)
glTexCoord2f(1, 1)
glVertex(1,screen_w/float(screen_h),-2)
glTexCoord2f(0, 1)
glVertex(-1,screen_w/float(screen_h),-2)
glTexCoord2f(0, 0)
glVertex(-1,-screen_w/float(screen_h),-2)
glTexCoord2f(1, 0)
glVertex(1,-screen_w/float(screen_h),-2)
glTexCoord2f(1, 1)
glVertex(1,screen_w/float(screen_h),-2)
glEnd()
glutSwapBuffers()
def reshape (w, h):
global screen_w, screen_h
glViewport (0, 0, w, h)
screen_w, screen_h = w, h
glMatrixMode (GL_PROJECTION)
glLoadIdentity ()
glOrtho(-1, 1, -float(w)/float(h), float(w)/float(h), 1, 10)
glMatrixMode (GL_MODELVIEW)
def keyboard(key, x, y):
global mode
if key == chr(27):
sys.exit(0)
elif key == 'f':
glutFullScreen()
else:
print key
################################################################################
# MAIN
glutInit(sys.argv)
glutInitDisplayString("double rgba depth samples=4")
glutInitWindowSize (screen_w, screen_h)
glutCreateWindow ('Weights')
setup()
glutDisplayFunc(display)
glutReshapeFunc(reshape)
glutKeyboardFunc(keyboard)
glutMainLoop()
Does someone has any idea ?
Answer: Sorry guys, I just found the answer. Here are the shaders lines I'll use:
f_shader = """
#version 120
uniform sampler2D texture_w;
void main() {
gl_FragColor = texture2D(texture_w, gl_TexCoord[0].xy);
}
"""
v_shader = """
#version 120
void main() {
gl_FrontColor = gl_Color;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
gl_TexCoord[0] = gl_MultiTexCoord0;
}
"""
Thanks anyway :),
mike
|
Error in appending matrices in python
Question: I have a set of features and labels for 6 different weeks, stored in the
variables `FEATURES_DATA` and `TARGET` respectively.
What I want to do is to train a decision tree on growing features and labels.
So, training on first week of data and testing on second week, then, training
on first two weeks and testing on third week and so on...
To give an idea about my dataset:
print np.asarray(FEATURES_DATA).shape
print np.asarray(FEATURES_DATA[0][0]).shape
print ""
print FEATURES_DATA[0]
outputs:
(6L, 1L)
(463511L, 40L)
[ array([[3, 3, 3, ..., 7, 7, 7],
[3, 3, 3, ..., 7, 7, 7],
[3, 3, 3, ..., 7, 7, 7],
...,
[2, 2, 2, ..., 6, 6, 6],
[2, 2, 2, ..., 6, 6, 6],
[2, 2, 2, ..., 6, 6, 6]], dtype=uint8)]
Here is the main code:
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
features = np.asarray(FEATURES_DATA)
labels = np.asarray(TARGET)
for i in xrange(5):
Xtrain = np.concatenate(features[:i][0])
print Xtrain.shape
Ytrain = np.concatenate(labels[:i][0])
Xtest = FEATURES_DATA[i+1][0]
Ytest = TARGET[i+1][0]
clf_DT = DecisionTreeClassifier(criterion='gini', splitter='best', max_depth=None, min_samples_split=5000)
clf_DT.fit(Xtrain, Ytrain)
I get the following error on `Xtrain` concatenation line:
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-5-5d87466a6a03> in <module>()
6
7 for i in xrange(5):
----> 8 Xtrain = np.concatenate(features[:i][0])
9 print Xtrain.shape
10 Ytrain = np.concatenate(labels[:i][0])
IndexError: index 0 is out of bounds for axis 0 with size 0
Any help? Thanks
Answer: I got the solution to my problem. The error came from `features[:0]` being
empty on the first iteration, so indexing it with `[0]` failed. Initializing
an empty matrix and concatenating into it solves the problem.
Xtrain=np.empty(shape=[0, 40])
for i in xrange(5):
Xtrain=np.concatenate((Xtrain,FEATURES_DATA[i][0]))
print Xtrain.shape
which gives the output
(463511L, 40L)
(955280L, 40L)
(1502984L, 40L)
(1969719L, 40L)
(2569141L, 40L)
|
Aeroo Reports: Error while generating the report. ascii
Question: I used Aeroo Reports for generating an xls report in OpenERP 7. I created an
ods file containing the data as the input file, and xls as the output: [](http://i.stack.imgur.com/Os6pB.png)
I used a Python function that returns an Arabic string:
def _test(self, data):
res=[]
for item in data :
month_move=datetime.strptime(str(item.date), '%Y-%m-%d %H:%M:%S').year
if month_move == date.today().year:
res.append(item)
return res,'الاتبللا'
Some of the data contains Arabic letters; when I try to print the report, I
get this error message:
Aeroo Reports: Error while generating the report. ascii
[](http://i.stack.imgur.com/Nuj8Z.png)
I get that error if I use the function, but if I don't use it, the report is
generated correctly.
I am using Genshi version 6.0, LibreOffice version 5.1, OpenERP version 7,
and Python version 2.7.
Forgive me for my bad English, and thanks in advance. [](http://i.stack.imgur.com/EURPO.png)
Answer: It looks like there is an encoding problem with the second return argument:
> 'الاتبللا'
You did not prefix the string with `u`, which is needed in Python 2. Even
better, [import
unicode_literals](http://stackoverflow.com/questions/809796/any-gotchas-using-
unicode-literals-in-python-2-6).
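A minimal sketch of the fix, using the function from the question:

    # -*- coding: utf-8 -*-
    def _test(self, data):
        res = []
        # ... same filtering logic as in the question ...
        return res, u'الاتبللا'  # the u prefix makes this a unicode literal in Python 2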
|
Creating file in python indicating if integer in in another list
Question: I have a list of integers, say `[2, 4, 9]`, then I need to create a csv file
like this:
1,0
2,1
3,0
4,1
5,0
6,0
7,0
8,0
9,1
Basically, for each integer, I need to check if the integer is in the list, if
it is I write `1`, otherwise `0`.
I could create a double list, do a for loop, check if the integer is in the
list, and then append the `(i,0)` or `(i,1)` depending on whether `i` is in
the list, but I am sure there is a far more efficient way to do this. These
lists are big, a billion integers or so. I looked if there was something in
numpy to help me, but could not find anything.
Answer: Use [list
comprehensions](https://docs.python.org/3/tutorial/datastructures.html#list-
comprehensions) to create your data and use the module
[csv](https://docs.python.org/2/library/csv.html) to write to a csv file.
s = set([2, 4, 9])
lists = [(i, 1) if i in s else (i, 0) for i in range(1, 10)]
# write to a csv file
import csv
with open('test.csv', 'w') as f:
writer = csv.writer(f)
writer.writerows(lists)
* * *
Since your data size is quite big, it is better to write to a file while
creating, as shown below.
import csv
s = set([2, 4, 9])
with open('test.csv', 'w') as f:
writer = csv.writer(f)
for i in range(1, 10000000):
t = (i, 1) if i in s else (i, 0)
writer.writerow(t)
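For a range in the billions, a vectorized sketch with numpy may be
considerably faster (the upper bound `n` is an assumed placeholder, and the
whole range must fit in memory):

    import numpy as np

    members = np.array([2, 4, 9])
    n = 10000000  # hypothetical upper bound of the integer range

    flags = np.zeros(n, dtype=np.uint8)
    flags[members - 1] = 1  # mark members (the integers are 1-based)

    ids = np.arange(1, n + 1)
    np.savetxt('test.csv', np.column_stack((ids, flags)), fmt='%d', delimiter=',')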
|
python collecting data from dataframe at specific row location
Question: I am importing a data set with about 200 columns with unique column names into
pandas dataframe using read_csv.
Data.columns
Index([u'SAVERECORDER', u'SAVECHANNEL', u'STARTTIME', u'INT001', u'INT002',
u'INT003', u'INT004', u'INT005', u'INT006', u'INT007',
...
u'INT092', u'INT093', u'INT094', u'INT095', u'INT096', u'INT097',
u'INT098', u'INT099', u'INT100', u'LSTIME'],
dtype='object', length=104)
I want to extract the 100th row from each of the columns INT001 through
INT099. Had it been a few columns I would have done it manually, but I don't
have a clue how to extract a certain row number from all of them and store it
in a numpy array. All columns INT001 through INT099 contain temperature
values as doubles.
Answer: I would use a list comprehension to build the names of all the columns you
want, like
    cols = ['INT{:03d}'.format(i) for i in range(1, 100)]
Then you can use pandas .iloc to index by integer location as read in
[here](http://pandas.pydata.org/pandas-
docs/version/0.17.0/generated/pandas.DataFrame.iloc.html)
After that you can call .sum() on the array.
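Putting it together, a sketch assuming `Data` is the DataFrame shown above:

    cols = ['INT{:03d}'.format(i) for i in range(1, 100)]
    # row position 99 is the 100th row; .values yields a numpy array
    row100 = Data[cols].iloc[99].values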
|
anacondo env couldn't import any of the packages
Question: **pip list inside conda env:**
    matplotlib (1.4.0)
    nose (1.3.7)
    numpy (1.9.1)
    pandas (0.15.2)
    pip (8.1.2)
    pyparsing (2.0.1)
    python-dateutil (2.4.1)
    pytz (2016.4)
    scikit-learn (0.15.2)
    scipy (0.14.0)
    setuptools (21.2.1)
    six (1.10.0)
    wheel (0.29.0)
**which python:**
    /Users/xxx/anaconda/envs/pythonenvname/bin/python
**which pip:**
    (pythonenvname)pc-xx-xx:oo xxx$ which pip
    /Users/xxx/anaconda/envs/pythonenvname/bin/pip
**Starting python:**
    Python 3.4.4 |Anaconda custom (x86_64)| (default, Jan  9 2016, 17:30:09)
    [GCC 4.2.1 (Apple Inc. build 5577)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import pandas as pd
**error:**
    sh: sysctl: command not found
Answer: Finally, I figured out the answer. It is all about the PATH variable. It was
pointing to the OS Python rather than the Anaconda Python. Thanks all for
your time.
|
Install pip<v8 in python3.2
Question: I am failing to install pip in Python 3.2. The newest version of pip (v8.x)
seems to no longer support Python 3.2.
So I tried `sudo python3 get-pip.py 'pip<8'`, but it still seems to install
v8.x.
**Output:**
UserWarning: Support for Python 3.0-3.2 has been dropped. Future versions will fail here.
Traceback (most recent call last):
File "get-pip.py", line 19178, in <module>
main()
File "get-pip.py", line 195, in main
bootstrap(tmpdir=tmpdir)
File "get-pip.py", line 82, in bootstrap
import pip
File "/tmp/tmpec9tur/pip.zip/pip/__init__.py", line 16, in <module>
File "/tmp/tmpec9tur/pip.zip/pip/vcs/mercurial.py", line 9, in <module>
File "/tmp/tmpec9tur/pip.zip/pip/download.py", line 36, in <module>
File "/tmp/tmpec9tur/pip.zip/pip/utils/ui.py", line 15, in <module>
File "/tmp/tmpec9tur/pip.zip/pip/_vendor/progress/bar.py", line 48
empty_fill = u'∙'
^
SyntaxError: invalid syntax
Any ideas?
Answer: I found the solution:
I used the wrong get-pip.py. As per the get-pip documentation, for Python 3.2
use <https://bootstrap.pypa.io/3.2/get-pip.py>.
|
gspread.exceptions.SpreadsheetNotFound
Question: I am writing a Python (ver 3) script to access a Google doc using gspread.
    import gspread
    from oauth2client.service_account import ServiceAccountCredentials

    scope = ['https://spreadsheets.google.com/feeds']
    credentials = ServiceAccountCredentials.from_json_keyfile_name(r'/path/to/json/file/xxxxxx.json', scope)
    gc = gspread.authorize(credentials)
    wks = gc.open("test").sheet1
**test** is a **Google Sheet** which seems to be opened and read fine, but if
I try to read from an Office Excel file it gives me an error. Here is what
they look like: [](http://i.stack.imgur.com/fQ5P7.png)
The folder that test and mtg are under is shared with the email I got in the
json file. Both files were also shared with that email.
Tried:
wks = gc.open("mtg.xls").sheet1
and
wks = gc.open("mtg.xls").<NameOfFirstSheet>
and
wks = gc.open("mtg").<NameOfFirstSheet>
error:
> /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-
> packages/gspread/client.py", line 152, in open raise SpreadsheetNotFound
> gspread.exceptions.SpreadsheetNotFound
Answer: There is no `.xls` to be added at the end of the file name, the data is saved
in a different format (and can later be exported as `.xls`).
Try to break your code into:
ss = gc.open("MTG_Collection_5_14_16")
ws = ss.worksheet("<NameOfFirstSheet>")
and post the error message if any.
`Spreadsheet` instances have an attribute `sheet1` because it is the default
name for the first worksheet. `ss.sheet1` actually returns the worksheet with
index 0, no matter what its name is.
If you want to access another worksheet, you need to use one of
`ss.worsheet("<title>")` or `ss.get_worksheet(<index>)`.
`ss.<NameOfFirstSheet>` will not work.
|
New to Python, can't find bug
Question: I am new to Python (3rd day) and I was just trying to create a basic Rock,
Paper, Scissors game. I am seeing a bug that I can't locate in the code and
was hoping somebody could help. Here is the output below, with the code
following:
Welcome to Rock, Paper, Scissors!
Player 1 name?b
Player 2 name?s
3
2
1
GO!
Rock, Paper or Scissors?rock
b threw rock
s threw rock
Draw! Get Ready!.
3
2
1
GO!
Rock, Paper or Scissors?rock
b threw rock
s threw scissors
b win.
Rematch?no
Goodbye.
b win.
Rematch?
# After entering "no" for the rematch, it prints "b win." again and also asks
"Rematch?" again. Here is the code below:
import time
import random
picks=["rock","paper","scissors"]
answers=["yes","no"]
rock="rock"
paper="paper"
scissors="scissors"
yes="yes"
no="no"
invalid=""
#############################################Defining Functions#########################################################
def rename1():
global name1
while True:
if name1 is invalid:
name1 = input("Player 1 name?")
if name1 is not invalid:
break
def rename2():
global name2
while True:
if name2 is invalid:
name2 = input("Player 2 name?")
if name2 is not invalid:
break
def rematchinvalid():
global rematch
while True:
if rematch not in answers:
print("Invalid, try again..")
rematch = input("Rematch?")
if rematch in answers:
break
def Rethrow1():
global P1
while True:
if P1 not in picks:
print("Invalid, try again..")
P1 = input("Rock, Paper, or Scissors?")
if P1 in picks:
break
def start():
print("3")
time.sleep(1)
print("2")
time.sleep(1)
print("1")
time.sleep(1)
print("GO!")
def RPS():
global P1
global P2
global rematch
P1 = input("Rock, Paper or Scissors?")
P2 = random.choice(picks)
if P1 not in picks:
Rethrow1()
battle()
winner()
def battle():
print(name1," threw ",P1)
print(name2," threw ",P2)
def winner():
global rematch
if P1 == P2:
print("Draw! Get Ready!.")
start()
RPS()
if P1 == rock and P2 == scissors:
print(name1," win.")
if P1 == rock and P2 == paper:
print(name2," win.")
if P1 == scissors and P2 == rock:
print(name2," win.")
if P1 == scissors and P2 == paper:
print(name1," win.")
if P1 == paper and P2 == rock:
print(name1," win.")
if P1 == paper and P2 == scissors:
print(name2," win.")
rematch = input("Rematch?")
if rematch not in answers:
rematchinvalid()
replay()
def replay():
if rematch == yes:
start()
RPS()
if rematch == no:
print("Goodbye.")
################################################Game Start##############################################################
print("Welcome to Rock, Paper, Scissors!")
name1 = input("Player 1 name?")
if name1 is invalid:
rename1()
name2 = input("Player 2 name?")
if name2 is invalid:
rename2()
start()
RPS()
Also, if you have any recommendations on how to clean up the code, that would
be appreciated!
Thanks
Answer: Take a look at your function `winner`:
def winner():
global rematch
if P1 == P2:
print("Draw! Get Ready!.")
start()
RPS()
if P1 == rock and P2 == scissors:
print(name1," win.")
if P1 == rock and P2 == paper:
print(name2," win.")
if P1 == scissors and P2 == rock:
print(name2," win.")
if P1 == scissors and P2 == paper:
print(name1," win.")
if P1 == paper and P2 == rock:
print(name1," win.")
if P1 == paper and P2 == scissors:
print(name2," win.")
rematch = input("Rematch?")
if rematch not in answers:
rematchinvalid()
replay()
When there is a tie, you print _Draw! Get Ready!._ , call `start()`, call
`RPS()`, and then what? Instead of exiting the function, you let control
flow right into the code below, which displays the winner's name one more
time and asks for a rematch one more time before exiting the function. I
leave it up to you to fix it.
As a recommendation: please do not use global variables.
# Update
Here is a suggestion for eliminating global variables: pass information into
functions and return information from functions. For example, here is a way to
eliminate the global variable `rematch`. This variable is first used in
`winner()` and then passed to `replay()`. Also, `rematchinvalid()` gets the
user's input into this variable and passes it back to `winner`, so the
information flow for rematch is:
rematchinvalid <==> winner ==> replay
With that in mind, we can fix `rematchinvalid()` as such:
def rematchinvalid(rematch):
# Remove the global statement here
while True:
if rematch not in answers:
print("Invalid, try again..")
rematch = input("Rematch?")
if rematch in answers:
break
return rematch # Return rematch to the caller
As for `winner()`, we will receive information from `rematchinvalid()` and
pass it on to `replay()`:
def winner():
# Remove global statement
if P1 == P2:
print("Draw! Get Ready!.")
start()
RPS()
return # Fix for your problem
if P1 == rock and P2 == scissors:
print(name1," win.")
if P1 == rock and P2 == paper:
print(name2," win.")
if P1 == scissors and P2 == rock:
print(name2," win.")
if P1 == scissors and P2 == paper:
print(name1," win.")
if P1 == paper and P2 == rock:
print(name1," win.")
if P1 == paper and P2 == scissors:
print(name2," win.")
rematch = input("Rematch?")
if rematch not in answers:
rematch = rematchinvalid(rematch) # Get the valid rematch
replay(rematch) # Pass rematch to replay
Finally, for replay, we can accept `rematch` as a parameter:
def replay(rematch):
if rematch == yes:
start()
RPS()
if rematch == no:
print("Goodbye.")
That should take care of `rematch`. You can apply this method to eliminate
other variables as well.
|
Import config from sphinx subfolder
Question: I've created a sphinx project in a subfolder ("docs") inside my project root.
When I run the make file autodoc it cannot find the config file and raises
errors.
Is there a way to resolve this?
[project
structure](https://www.syshell.net/extra_upload/37608030/project_structure.txt)
[conf.py](https://www.syshell.net/extra_upload/37608030/conf.py)
[Makefile](https://www.syshell.net/extra_upload/37608030/Makefile)
[myreplicator.py](https://www.syshell.net/extra_upload/37608030/myreplicator.py)
[myreplicator.rst](https://www.syshell.net/extra_upload/37608030/myreplicator.rst)
This is the error:
/vagrant/python/mymongo/doc/myreplicator.rst:4: WARNING: autodoc: failed to import module 'myreplicator'; the following exception was raised:
Traceback (most recent call last):
File "/var/venv/mymongo/lib/python3.5/site-packages/sphinx /ext/autodoc.py", line 518, in import_object
__import__(self.modname)
File "/vagrant/python/mymongo/myreplicator.py", line 20, in <module>
logging.config.fileConfig('conf/logging.conf')
File "/opt/py35/lib/python3.5/logging/config.py", line 76, in fileConfig
formatters = _create_formatters(cp)
File "/opt/py35/lib/python3.5/logging/config.py", line 109, in _create_formatters
flist = cp["formatters"]["keys"]
File "/opt/py35/lib/python3.5/configparser.py", line 956, in __getitem__
raise KeyError(key)
KeyError: 'formatters'
Thanks, Giovanni
Answer: I have to say, your folder structure diagram looks very messy.
Well, so from what I can see, there are two possible reasons.
### 1\. System PATH
You didn't put the directory of `myreplicator.py` into `sys.path`, so
`sphinx/ext/autodoc.py` cannot locate your module (it only searches for
modules on the system path, which is why you need to insert your project
folder into `sys.path` first). This can be done by including the following in
`conf.py`:
import os
import sys
sys.path.insert(0, os.path.abspath('../')) # based on what I see in your structure, the myrepicator.py is in the parent directory of conf.py
### 2\. Third-party Package
Inside `myreplicator.py`, there are some third-party modules that cannot be
loaded, most likely some C-type libs (like `.pyd`). In this case, you can mock
those third-party modules that cannot be imported using the code below:
from mock import MagicMock, Mock # if you are using python2
# from unittest.mock import MagicMock, Mock # if you are using python3
class Mock(MagicMock):
@classmethod
def __getattr__(cls, name):
return Mock()
MOCK_MODULES = ['mymongolib ', 'numpy', 'pandas'] # packages you want to mock
sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)
However, judging from the exception you posted, this cause is unlikely.
Overall, I think the error is most likely due to the first reason.
|
Python -- Optimize system of inequalities
Question: I am working on a program in Python in which a small part involves optimizing
a system of equations / inequalities. Ideally, I would have wanted to do as
can be done in Modelica, write out the equations and let the solver take care
of it.
The operation of solvers and linear programming is a bit out of my comfort
zone, but I decided to try anyway. The problem is that the general design of
the program is object-oriented, and there are many different possibilities of
combinations to form up the equations, as well as some non-linearities, so I
have not been able to translate this into a linear programming problem (but I
might be wrong).
After some research I found that the [Z3](https://github.com/Z3Prover/z3)
solver seemed to do what I wanted. I came up with this (this looks like a
typical case of what I would like to optimize):
from z3 import *
a = Real('a')
b = Real('b')
c = Real('c')
d = Real('d')
e = Real('e')
g = Real('g')
f = Real('f')
cost = Real('cost')
opt = Optimize()
opt.add(a + b - 350 == 0)
opt.add(a - g == 0)
opt.add(c - 400 == 0)
opt.add(b - d * 0.45 == 0)
opt.add(c - f - e - d == 0)
opt.add(d <= 250)
opt.add(e <= 250)
opt.add(cost == If(f > 0, f * 50, f * 0.4) + e * 40 + d * 20 +
If(g > 0, g * 50, g * 0.54))
h = opt.minimize(cost)
opt.check()
opt.lower(h)
opt.model()
Now this works, and gives me the result I want, despite it not being extremely
fast (I need to solve such systems several thousands of times). But I am not
sure I am using the right tool for the job (Z3 is a "theorem prover").
The API is basically exactly what I need, but I would be curious if other
packages allow a similar syntax. Or should I try to formulate the problem in a
different way to allow a standard LP approach? (although I have no idea how)
Answer: Z3 is the most powerful solver I have found for such flexible systems of
equations. Z3 is an excellent choice now that it is released under the MIT
license.
There are a lot of different types of tools with overlapping use cases. You
mentioned linear programming -- there are also theorem provers, SMT solvers,
and many other types of tools. Despite marketing itself as a theorem prover,
Z3 is often marketed as an SMT solver. At the moment, SMT solvers are leading
the pack for the flexible and automated solution of coupled algebraic
equations and inequalities over the booleans, reals, and integers, and in the
world of SMT solvers, Z3 is king. Take a look at [the results of the last SMT
comp if you want evidence of
this.](http://smtcomp.sourceforge.net/2015/results-summary.shtml?v=1446209369)
That being said, if your equations are all linear, then you might also find
better performance with CVC4. It doesn't hurt to shop around.
If your equations have a very controlled form (for example, minimize some
function subject to some constraints) then you might be able to get better
performance using a numerical library such as GSL or NAG. However, if you
really need the flexibility, then I doubt you are going to find a better tool
than Z3.
|
python3: UTF-8 encoding in http.server
Question: I have encoding problems when serving a simple web page in python3, using
BaseHTTPRequestHandler.
Here is a working example:
#!/usr/bin/python3
# -*- coding: utf-8 -*
from http.server import BaseHTTPRequestHandler, HTTPServer
from os import curdir, sep, remove
import cgi
HTML_FILE_NAME = 'test.html'
PORT_NUMBER = 8080
# This class will handles any incoming request from the browser
class myHandler(BaseHTTPRequestHandler):
# Handler for the GET requests
def do_GET(self):
self.path = HTML_FILE_NAME
try:
with open(curdir + sep + self.path, 'r') as f:
self.send_response(200)
self.send_header('Content-type', 'text/html')
self.end_headers()
self.wfile.write(bytes(f.read(), 'UTF-8'))
return
except IOError:
self.send_error(404, 'File Not Found: %s' % self.path)
try:
# Create a web server and define the handler to manage the incoming request
with open(HTML_FILE_NAME, 'w') as f:
f.write('<!DOCTYPE html><html><body> <p> My name is Jérôme </p> </body></html>')
print('Started httpserver on port %i.' % PORT_NUMBER)
#Wait forever for incoming http requests
HTTPServer(('', PORT_NUMBER), myHandler).serve_forever()
except KeyboardInterrupt:
print('Interrupted by the user - shutting down the web server.')
server.socket.close()
remove(HTML_FILE_NAME)
The expected result is to serve a web page displaying _My name is Jérôme_.
Instead, I have: _My name is Jérôme_
As you can see, the html page is correctly encoded, with
`self.wfile.write(bytes(f.read(), 'UTF-8'))`, so I think the problem comes
from the web server.
How to tell the web server to serve the page in UTF-8?
Answer: Your web server is already sending the text encoded to UTF-8 but you need to
tell your browser the encoding of the bytes it receives. The HTTP spec
declares ISO-8859-1 as the default.
The standard HTTP way of doing this is to tag the `Content-type` header
value with a `charset` sub-key.
Therefore, you should change your code to read:
self.send_header('Content-type', 'text/html; charset=utf-8')
Also, watch out for the encoding of your HTML file. Without an encoding given
to `open()`, it'll be guessed based on your locale. This won't break anything,
unless you end up running this script where the locale is `C`, `POSIX` or non-
latin Windows.
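For example, a minimal tweak to the handler above that makes the file read
independent of the locale (only the `open()` call changes):

    with open(curdir + sep + self.path, 'r', encoding='utf-8') as f: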
|
Python loop with default value otherwise infinite
Question: I want to create a loop which runs n-times where:
1. 'n' can be changed freely
2. 'n' should have a default value when no value is passed
3. 'n' should run forever when wanted
My current code looks like this:
n = raw_input('number of runs: ') # Get user input for 'n'
if n == '': # Empty input -> run the loop 1000 times (condition 2)
a = 1000 # Default value
elif n == 'oo': # Run the loop forever
a = 9000 # This value isn't important could be 1 aswell see while statement
else:
a = n # Run the loop for the value provided in raw_input
i = 1
while i <= int(a):
if n == 'oo':
a += 1 # If n=∞, increment 'a' after each iteration so the loop never stops
print i # Print 'i' to see if everything works like expected
i += 1
How can I make this code more elegant (I feel like there is an easier way)?
How can I make my loop (for n='oo') stop whenever I want without using a
keyboard interrupt?
Answer: Use a `for`-loop, and different iterables:
from itertools import count
n = raw_input('number of runs: ')
if n == '': #empty input-> run the loop 1000 times
counter = xrange(1, 1001)
elif n == 'oo': #run the loop forever
counter = count(1)
else:
counter = xrange(1, int(n)+1) #otherwise run the loop for the value provided above
for i in counter:
print i #print i to see if everything works like expected
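If you also want the infinite run (`n == 'oo'`) to be stoppable without a
keyboard interrupt, you can `break` out of the same loop on a condition of your
own; a minimal sketch, where `should_stop()` is a hypothetical predicate you
would have to define yourself:

    for i in counter:
        if should_stop(): # hypothetical: e.g. test a flag, a file, or user input
            break
        print i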
|
Why a "stack smashing detected" once ZeroMQ C++ client runs against a python Server?
Question: I'm trying to learn how to use the **`ZeroMQ`** lib to exchange data (a simple
'hello') between a client and a server.
* * *
## python ``client`` \- python ``server`` implementation works
With success I have created a client and a server in python and I could
exchange data.
* * *
## `C++` ``client`` \- python ``server`` implementation does not work
The next step was to create a client in **`C++`** and a server in python. I've
done like this:
`C++` client, **`client_cpp.cpp`** :
#include <zmq.hpp>
#include <string>
#include <iostream>
int main ()
{
zmq::context_t context (1);
zmq::socket_t socket (context, ZMQ_REQ);
std::cout << "Connecting to hello world server…" << std::endl;
socket.connect ("tcp://localhost:5555");
std::string message = "hello";
zmq::message_t request (message.size());
memcpy (request.data (), (message.c_str()), (message.size()));
socket.send (request);
return 0;
}
I've compiled it using `CMake` with the following **`CMakeLists.txt`** :
cmake_minimum_required(VERSION 2.8)
project(ZmqProject)
# This will find the libzmq.so file in /usr/local/lib
FIND_FILE(ZMQLIB libzmq.so /usr/local/lib)
IF(NOT ZMQLIB)
MESSAGE(SEND_ERROR "Ah.. Cannot find library libzmq.so.")
ENDIF(NOT ZMQLIB)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
set(SOURCE_FILES client_cpp.cpp)
add_executable(ZmqProject ${SOURCE_FILES})
# The following line will link with libzmq.so
TARGET_LINK_LIBRARIES( ZmqProject ${ZMQLIB})
Then, the `python` server, **`server_python.py`** , is the following:
# -*- coding: utf-8 -*-
import time
import zmq
context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")
message = socket.recv()
print("Received data: %s" % message)
Executing those programs, this kind of error occurs in the `C++` client:
*** stack smashing detected ***: ./ZmqProject terminated
Aborted (core dumped)
and I can not receive nothing in the `python` server.
~~**`Q1:`** Why, what is wrong?~~
* * *
**`EDIT 1`** \- running the client through `valgrind`, this is the output;
maybe it can be useful:
valgrind --leak-check=full -v ./ZmqProject
==12291== Memcheck, a memory error detector
==12291== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==12291== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
==12291== Command: ./ZmqProject
==12291==
--12291-- Valgrind options:
--12291-- --leak-check=full
--12291-- -v
--12291-- Contents of /proc/version:
--12291-- Linux version 3.19.0-59-generic (buildd@lgw01-39) (gcc version 4.8.2 (Ubuntu 4.8.2-19ubuntu1) ) #66~14.04.1-Ubuntu SMP Fri May 13 17:27:10 UTC 2016
--12291-- Arch and hwcaps: AMD64, LittleEndian, amd64-cx16-rdtscp-sse3-avx
--12291-- Page sizes: currently 4096, max supported 4096
--12291-- Valgrind library directory: /usr/lib/valgrind
--12291-- Reading syms from /home/fds/Scrivania/ClientProgram/build /ZmqProject
--12291-- Reading syms from /lib/x86_64-linux-gnu/ld-2.19.so
--12291-- Considering /lib/x86_64-linux-gnu/ld-2.19.so ..
--12291-- .. CRC mismatch (computed 46abf574 wanted 3ca2d3ca)
--12291-- Considering /usr/lib/debug/lib/x86_64-linux-gnu/ld-2.19.so ..
--12291-- .. CRC is valid
--12291-- Reading syms from /usr/lib/valgrind/memcheck-amd64-linux
--12291-- Considering /usr/lib/valgrind/memcheck-amd64-linux ..
--12291-- .. CRC mismatch (computed 4f1eed43 wanted a323a3ab)
--12291-- object doesn't have a symbol table
--12291-- object doesn't have a dynamic symbol table
--12291-- Scheduler: using generic scheduler lock implementation.
--12291-- Reading suppressions file: /usr/lib/valgrind/default.supp
==12291== embedded gdbserver: reading from /tmp/vgdb-pipe-from-vgdb-to- 12291-by-fds-on-???
==12291== embedded gdbserver: writing to /tmp/vgdb-pipe-to-vgdb-from- 12291-by-fds-on-???
==12291== embedded gdbserver: shared mem /tmp/vgdb-pipe-shared-mem-vgdb- 12291-by-fds-on-???
==12291==
==12291== TO CONTROL THIS PROCESS USING vgdb (which you probably
==12291== don't want to do, unless you know exactly what you're doing,
==12291== or are doing some strange experiment):
==12291== /usr/lib/valgrind/../../bin/vgdb --pid=12291 ...command...
==12291==
==12291== TO DEBUG THIS PROCESS USING GDB: start GDB like this
==12291== /path/to/gdb ./ZmqProject
==12291== and then give GDB the following command
==12291== target remote | /usr/lib/valgrind/../../bin/vgdb --pid=12291
==12291== --pid is optional if only one valgrind process is running
==12291==
--12291-- REDIR: 0x4019c50 (ld-linux-x86-64.so.2:strlen) redirected to 0x380764b1 (???)
--12291-- Reading syms from /usr/lib/valgrind/vgpreload_core-amd64- linux.so
--12291-- Considering /usr/lib/valgrind/vgpreload_core-amd64-linux.so ..
--12291-- .. CRC mismatch (computed fc68135e wanted 45f5e986)
--12291-- object doesn't have a symbol table
--12291-- Reading syms from /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so
--12291-- Considering /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so ..
--12291-- .. CRC mismatch (computed ae683f7e wanted 08c06df2)
--12291-- object doesn't have a symbol table
==12291== WARNING: new redirection conflicts with existing -- ignoring it
--12291-- old: 0x04019c50 (strlen ) R-> (0000.0) 0x380764b1 ???
--12291-- new: 0x04019c50 (strlen ) R-> (2007.0) 0x04c2e1a0 strlen
--12291-- REDIR: 0x4019a00 (ld-linux-x86-64.so.2:index) redirected to 0x4c2dd50 (index)
--12291-- REDIR: 0x4019c20 (ld-linux-x86-64.so.2:strcmp) redirected to 0x4c2f2f0 (strcmp)
--12291-- REDIR: 0x401a970 (ld-linux-x86-64.so.2:mempcpy) redirected to 0x4c31da0 (mempcpy)
--12291-- Reading syms from /usr/local/lib/libzmq.so.4.2.0
--12291-- Reading syms from /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19
--12291-- Considering /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19 ..
--12291-- .. CRC mismatch (computed dc37bb90 wanted ea8c3b40)
--12291-- object doesn't have a symbol table
--12291-- Reading syms from /lib/x86_64-linux-gnu/libgcc_s.so.1
--12291-- Considering /lib/x86_64-linux-gnu/libgcc_s.so.1 ..
--12291-- .. CRC mismatch (computed 6116126e wanted 54e3f1f2)
--12291-- object doesn't have a symbol table
--12291-- Reading syms from /lib/x86_64-linux-gnu/libc-2.19.so
--12291-- Considering /lib/x86_64-linux-gnu/libc-2.19.so ..
--12291-- .. CRC mismatch (computed ac9b5ddb wanted a10d05bf)
--12291-- Considering /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.19.so ..
--12291-- .. CRC is valid
--12291-- Reading syms from /lib/x86_64-linux-gnu/libpthread-2.19.so
--12291-- Considering /lib/x86_64-linux-gnu/libpthread-2.19.so ..
--12291-- .. CRC mismatch (computed 88040ace wanted 71b58165)
--12291-- Considering /usr/lib/debug/lib/x86_64-linux-gnu/libpthread-2.19.so ..
--12291-- .. CRC is valid
--12291-- Reading syms from /lib/x86_64-linux-gnu/librt-2.19.so
--12291-- Considering /lib/x86_64-linux-gnu/librt-2.19.so ..
--12291-- .. CRC mismatch (computed 9efd3dd5 wanted ae0f290f)
--12291-- Considering /usr/lib/debug/lib/x86_64-linux-gnu/librt-2.19.so ..
--12291-- .. CRC is valid
--12291-- Reading syms from /lib/x86_64-linux-gnu/libm-2.19.so
--12291-- Considering /lib/x86_64-linux-gnu/libm-2.19.so ..
--12291-- .. CRC mismatch (computed 08659659 wanted 7ce1b39a)
--12291-- Considering /usr/lib/debug/lib/x86_64-linux-gnu/libm-2.19.so ..
--12291-- .. CRC is valid
--12291-- REDIR: 0x5696c50 (libc.so.6:strcasecmp) redirected to 0x4a25720 (_vgnU_ifunc_wrapper)
--12291-- REDIR: 0x5698f40 (libc.so.6:strncasecmp) redirected to 0x4a25720 (_vgnU_ifunc_wrapper)
--12291-- REDIR: 0x5696420 (libc.so.6:memcpy@GLIBC_2.2.5) redirected to 0x4a25720 (_vgnU_ifunc_wrapper)
--12291-- REDIR: 0x56946b0 (libc.so.6:rindex) redirected to 0x4c2da30 (rindex)
--12291-- REDIR: 0x56929b0 (libc.so.6:strlen) redirected to 0x4c2e0e0 (strlen)
--12291-- REDIR: 0x5695e90 (libc.so.6:__GI_memcmp) redirected to 0x4c30b80 (__GI_memcmp)
--12291-- REDIR: 0x5690f60 (libc.so.6:strcmp) redirected to 0x4a25720 (_vgnU_ifunc_wrapper)
--12291-- REDIR: 0x5749e00 (libc.so.6:__strcmp_ssse3) redirected to 0x4c2f1b0 (strcmp)
--12291-- REDIR: 0x514ee20 (libstdc++.so.6:operator new(unsigned long, std::nothrow_t const&)) redirected to 0x4c2b340 (operator new(unsigned long, std::nothrow_t const&))
--12291-- REDIR: 0x568c660 (libc.so.6:malloc) redirected to 0x4c2ab10 (malloc)
--12291-- REDIR: 0x568d130 (libc.so.6:calloc) redirected to 0x4c2cbf0 (calloc)
--12291-- REDIR: 0x514ed90 (libstdc++.so.6:operator new(unsigned long)) redirected to 0x4c2b070 (operator new(unsigned long))
--12291-- REDIR: 0x5759c90 (libc.so.6:__memmove_ssse3_back) redirected to 0x4c2f450 (memcpy@GLIBC_2.2.5)
--12291-- REDIR: 0x514d0f0 (libstdc++.so.6:operator delete(void*)) redirected to 0x4c2c250 (operator delete(void*))
--12291-- REDIR: 0x569b620 (libc.so.6:memcpy@@GLIBC_2.14) redirected to 0x4a25720 (_vgnU_ifunc_wrapper)
--12291-- REDIR: 0x56a1ec0 (libc.so.6:__memcpy_sse2_unaligned) redirected to 0x4c2f6b0 (memcpy@@GLIBC_2.14)
--12291-- REDIR: 0x5695e50 (libc.so.6:bcmp) redirected to 0x4a25720 (_vgnU_ifunc_wrapper)
--12291-- REDIR: 0x5769c60 (libc.so.6:__memcmp_sse4_1) redirected to 0x4c30c00 (__memcmp_sse4_1)
Connecting to hello world server…
--12291-- REDIR: 0x5690d40 (libc.so.6:__GI_strchr) redirected to 0x4c2db90 (__GI_strchr)
--12291-- REDIR: 0x5692e20 (libc.so.6:__GI_strncmp) redirected to 0x4c2e930 (__GI_strncmp)
--12291-- REDIR: 0x5695300 (libc.so.6:__GI_strstr) redirected to 0x4c32030 (__strstr_sse2)
--12291-- REDIR: 0x5695b00 (libc.so.6:memchr) redirected to 0x4c2f390 (memchr)
--12291-- REDIR: 0x569b670 (libc.so.6:__GI_memcpy) redirected to 0x4c2fc90 (__GI_memcpy)
--12291-- REDIR: 0x568cd00 (libc.so.6:free) redirected to 0x4c2bd80 (free)
--12291-- REDIR: 0x569d9b0 (libc.so.6:strchrnul) redirected to 0x4c319b0 (strchrnul)
--12291-- REDIR: 0x5748450 (libc.so.6:__strncasecmp_avx) redirected to 0x4c2eb60 (strncasecmp)
--12291-- REDIR: 0x5690fa0 (libc.so.6:__GI_strcmp) redirected to 0x4c2f200 (__GI_strcmp)
--12291-- REDIR: 0x5696ae0 (libc.so.6:__GI_stpcpy) redirected to 0x4c30da0 (__GI_stpcpy)
--12291-- Reading syms from /lib/x86_64-linux-gnu/libnss_files-2.19.so
--12291-- Considering /lib/x86_64-linux-gnu/libnss_files-2.19.so ..
--12291-- .. CRC mismatch (computed 69b3fb24 wanted 71fe8a31)
--12291-- Considering /usr/lib/debug/lib/x86_64-linux-gnu/libnss_files-2.19.so ..
--12291-- .. CRC is valid
--12291-- REDIR: 0x5692430 (libc.so.6:__GI_strcpy) redirected to 0x4c2e2a0 (__GI_strcpy)
--12291-- REDIR: 0x569d7a0 (libc.so.6:rawmemchr) redirected to 0x4c319f0 (rawmemchr)
--12291-- REDIR: 0x5746de0 (libc.so.6:__strcasecmp_avx) redirected to 0x4c2ea80 (strcasecmp)
*** stack smashing detected ***: ./ZmqProject terminated
==12291==
==12291== Process terminating with default action of signal 6 (SIGABRT)
==12291== at 0x5640C37: raise (raise.c:56)
==12291== by 0x5644027: abort (abort.c:89)
==12291== by 0x567D2A3: __libc_message (libc_fatal.c:175)
==12291== by 0x5714BBB: __fortify_fail (fortify_fail.c:38)
==12291== by 0x5714B5F: __stack_chk_fail (stack_chk_fail.c:28)
==12291== by 0x401D32: main (in /home/fds/Scrivania/ClientProgram/build/ZmqProject)
--12291-- Discarding syms at 0x74ff2a0-0x7504eb3 in /lib/x86_64-linux-gnu/libnss_files-2.19.so due to munmap()
==12291==
==12291== HEAP SUMMARY:
==12291== in use at exit: 0 bytes in 0 blocks
==12291== total heap usage: 767 allocs, 767 frees, 176,711 bytes allocated
==12291==
==12291== All heap blocks were freed -- no leaks are possible
==12291==
==12291== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
==12291== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
Aborted (core dumped)
* * *
**`EDIT 2`** \- Today I tried to compile my C++ code using `g++` instead of `CMake`
as before.
g++ -o client_cpp client_cpp.cpp -lzmq
Now c++ client - python server **works**!
**`Q2:`** How can I change `CMakeLists.txt` to be sure that my code can work
with it?
I need to use **`CMAKE`**
**`EDIT 3`** How can I "translate" the following command line into
`CMakeLists.txt`?
g++ -o client_cpp client_cpp.cpp -lzmq
Answer: ## `Step 0:` isolate the sweet-spot
As proposed in comments above, kindly update, **run** & **post** output from a
trivial self-diagnosing code modification:
int main ()
{
    // ( add #include <cerrno> for errno )
    zmq::context_t context (1);
    // std::cout << "[1] errno = " << errno << std::endl; // -----------[1]
    zmq::socket_t socket (context, ZMQ_REQ);
    // std::cout << "[2] errno = " << errno << std::endl; // -----------[2]
    std::cout << "Connecting to hello world server…" << std::endl;
    socket.connect ("tcp://localhost:5555");
    // std::cout << "[3] errno = " << errno << std::endl; // -----------[3]
    std::string message = "hello";
    zmq::message_t request (message.size());
    // std::cout << "[4] errno = " << errno << std::endl; // -----------[4]
    memcpy (request.data (), (message.c_str()), (message.size()));
    // std::cout << "[5] errno = " << errno << std::endl; // -----------[5]
    socket.send (request);
    // std::cout << "[6] errno = " << errno << std::endl; // -----------[6]
    return 0;
}
## `Step 1:` [re-read the `ZeroMQ API` details](http://api.zeromq.org/) for
interpreting the _errno_
The `ZeroMQ` team has put remarkable effort into the API documentation, and you
can only benefit from reading the full details + debugging tips contained there.
As you will learn there, **_errno_** is typically associated with **_rc_**
return-codes, so as to enable context-specific state-resolution strategies
when API-service calls run into (non)-standard situations.
## Example: the ØMQ team is particularly smart & pedantic on this since early
releases - kudos
> The **`zmq_send()`** function shall return number of bytes in the message if
> successful. Otherwise it shall return **`-1`** and set **_errno_** to one of
> the values defined below.
>
> **Errors**
>
> **`EAGAIN`** Non-blocking mode was requested and the message cannot be sent
> at the moment.
>
> **`ENOTSUP`** The `zmq_send()` operation is not supported by this socket
> type.
>
> **`EFSM`** The `zmq_send()` operation cannot be performed on this socket at
> the moment due to the socket not being in the appropriate state. This error
> may occur with socket types that switch between several states, such as
> `ZMQ_REP`. See the messaging patterns section of `zmq_socket(3)` for more
> information.
>
> **`ETERM`** The ØMQ context associated with the specified socket was
> terminated.
>
> **`ENOTSOCK`** The provided socket was invalid.
>
> **`EINTR`** The operation was interrupted by delivery of a signal before the
> message was sent.
>
> **`EHOSTUNREACH`** The message cannot be routed.
/* Send a multi-part
message consisting
of three parts to socket */
rc = zmq_send ( socket, "ABC", 3, ZMQ_SNDMORE ); assert (rc == 3);
rc = zmq_send ( socket, "DEFGH", 5, ZMQ_SNDMORE ); assert (rc == 5);
/* Final part;
no more parts to follow */
rc = zmq_send ( socket, "JK", 2, 0 ); assert (rc == 2);
## `Step 2:` review the root cause of the failed call
Without the output from **`Step 0`** posted, there is no way to proceed to
learn **_Why_** , so please start with that; with reasonable certainty it
will isolate the exact location **_Where_** the panic-state originated.
This helps everybody ( incl. here ) to move on to re-interpreting
**`Step 1`** for answering **_Why_**.
## `Always:` take due care for resources release & termination
Distributed systems' architectures are not as trivial as demo examples make
them look. A unifying design practice is that you - the programme designer -
are responsible end-to-end for all primary functions, plus for all blocking /
panic states into which the distributed system, or any smaller part of it,
can get, including the final release of each resource that was (dynamically)
allocated anywhere in the distributed system. Complex? Sure. Demanding? Sure.
Self-discipline? Always.
This goes a lot beyond the O/P scope, but while python hides many resource-
management duties from user-programming, other language bindings / wrappers
need not be so generous.
**Always** prepare, including via handled exceptions, ZeroMQ-socket resources
to `.close()` in a non-blocking mode ( `LINGER = 0` ).
**Always** explicitly, including via handled exceptions, `.close()` ZeroMQ-
sockets before ending the code.
**Always** explicitly, including via handled exceptions, release all hidden
IO-threads factory by **`.term()`** the ZeroMQ **`Context`** instance.
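A minimal sketch of that discipline for the `C++` client above; the
`LINGER = 0` plus explicit `close()` pattern is generic ZeroMQ practice, not a
fix specific to this bug, and the final `zmq_ctx_term()` happens inside
`zmq::context_t`'s destructor at scope exit:

    int linger = 0; // do not block on undelivered messages at close
    socket.setsockopt( ZMQ_LINGER, &linger, sizeof( linger ) );
    socket.close(); // release the socket explicitly, before the context goes away
    // zmq::context_t's destructor then terminates the context ( zmq_ctx_term )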
|
Python logging exceptions with traceback, but without displaying messages twice
Question: If I run the following code:
import logging
logger = logging.getLogger('creator')
try:
# some stuff
except Exception as exception:
logger.exception(exception)
I get the following output on the screen:
creator : ERROR division by zero
Traceback (most recent call last):
File "/graph_creator.py", line 21, in run
1/0
ZeroDivisionError: division by zero
**Are there ways to get such a output?**
creator : ERROR ZeroDivisionError: division by zero
Traceback (most recent call last):
File "/graph_creator.py", line 21, in run
1/0
Of course, I can get this (but I don't like it):
creator : ERROR Сaught exception (and etc...)
Traceback (most recent call last):
File "/graph_creator.py", line 21, in run
1/0
ZeroDivisionError: division by zero
Answer: If you called `exception` like this:
logger.exception('%s: %s', exception.__class__.__name__, exception)
then you could get the exception class name in the initial line.
If you need more precise changes, you can use a custom `Formatter` subclass
which formats things exactly as you like. This would need to override
`formatException` to change the formatting of the traceback.
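For illustration, a minimal sketch of such a subclass; it drops the trailing
`ExceptionType: message` line from the traceback so the exception type appears
only in your own first line (the class name is hypothetical):

    import logging
    import traceback

    class TruncatedTracebackFormatter(logging.Formatter):
        def formatException(self, exc_info):
            # format_exception returns the traceback as a list of lines;
            # the last entry is "ExceptionType: message", which we drop here
            lines = traceback.format_exception(*exc_info)
            return ''.join(lines[:-1]).rstrip()

Attach it to your handler with `setFormatter()` as usual.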
|
Python Webdriver Multithread
Question: I'm trying to spawn multiple webdriver instances with the code from:
<http://www.ibm.com/developerworks/aix/library/au-threadingpython/>
import time
import Queue
import urllib2
import threading
from selenium import webdriver
from BeautifulSoup import BeautifulSoup
hosts = ["http://yahoo.com", "http://google.com", "http://amazon.com",
"http://ibm.com", "http://apple.com"]
queue = Queue.Queue
out_queue = Queue.Queue
class Login_Driver(threading.Thread):
def __init__(self, queue, out_queue, driver):
threading.Thread.__init__(self)
self.queue = queue
self.out_queue = out_queue
self.driver = driver
print driver.title
def run(self):
while True:
#grabs host from queue
host = self.queue.get()
#grabs urls of hosts and then grabs chunk of webpage
driver.get(host)
chunk = driver.page_source()
#place chunk into out queue
self.out_queue.put(chunk)
#signals to queue job is done
self.queue.task_done()
class Poster(threading.Thread):
def __init__(self, driver, out_queue):
self.out_queue = out_queue
self.driver = driver
print driver.name
def run(self):
while True:
#grabs host from queue
chunk = self.out_queue.get()
#parse the chunk
soup = BeautifulSoup(chunk)
print soup.findAll(['title'])
#signals to queue job is done
self.out_queue.task_done()
start = time.time()
def main():
#spawn a pool of threads, and pass them queue instance
for i in range(5):
driver = webdriver.Firefox()
t = Login_Driver(queue, out_queue, driver)
t.setDaemon(True)
t.start()
time.sleep(20)
#populate queue with data
for host in hosts:
queue.put(host)
for i in range(5):
dt = Poster(out_queue)
dt.setDaemon(True)
dt.start()
#wait on the queue until everything has been processed
queue.join()
out_queue.join()
main()
print "Elapsed Time: %s" % (time.time() - start)
It errors with: TypeError: unbound method get() must be called with Queue instance
as first argument (got nothing instead)
I'm a newbie with threads, classes, and processes. Can you please tell me which
is better to use, threads or processes? If you could give me an example, that
would be great. Thank you guys.
**UPDATE**
Working code:
import time
import Queue
import urllib2
import threading
from selenium import webdriver
from BeautifulSoup import BeautifulSoup
hosts = ["http://yahoo.com", "http://google.com", "http://amazon.com",
"http://ibm.com", "http://apple.com"]
queue = Queue.Queue()
out_queue = Queue.Queue()
class Login_Driver(threading.Thread):
#def __init__(self, driver):
def __init__(self, queue, out_queue, driver):
threading.Thread.__init__(self)
self.queue = queue
self.out_queue = out_queue
self.driver = driver
print "In init first class.."
def run(self):
while True:
#grabs host from queue
host = self.queue.get()
#grabs urls of hosts and then grabs chunk of webpage
self.driver.get(host)
chunk = self.driver.page_source
#place chunk into out queue
self.out_queue.put(chunk)
#signals to queue job is done
print self.driver.title
self.queue.task_done()
class Poster(threading.Thread):
def __init__(self, out_queue, driver):
threading.Thread.__init__(self)
self.out_queue = out_queue
self.driver = driver
print "In init a second class.."
def run(self):
while True:
#grabs host from queue
chunk = self.out_queue.get()
#parse the chunk
soup = BeautifulSoup(chunk)
print soup.findAll(['title'])
#signals to queue job is done
print self.driver.name
self.out_queue.task_done()
start = time.time()
def main():
#spawn a pool of threads, and pass them queue instance
for i in range(5):
driver = webdriver.Firefox()
t = Login_Driver(queue, out_queue, driver)
t.setDaemon(True)
t.start()
print "Started webdriver: --- "+str(i)+" --- from main"
print "All started"
time.sleep(3)
#populate queue with data
for host in hosts:
queue.put(host)
print "Opening website: "+host
print "All sites passed for opening.."
time.sleep(3)
for i in range(5):
dt = Poster(out_queue, driver)
dt.setDaemon(True)
dt.start()
print "Starting second class/title and name beautifull soup and webdriver: --- "+str(i)+" --- from main"
print "Started secound class.."
time.sleep(3)
#wait on the queue until everything has been processed
queue.join()
out_queue.join()
print "out_queue.join()"
main()
print "Elapsed Time: %s" % (time.time() - start)
Answer: You are not instantiating the Queue correctly. Without the parentheses
you assign the `Queue.Queue` class itself rather than an instance, so
`self.queue.get()` becomes an unbound method call with no instance, which is
exactly what the error says. Instead of,
queue = Queue.Queue
out_queue = Queue.Queue
it should be
queue = Queue.Queue()
out_queue = Queue.Queue()
|
math.floor(N) vs N // 1
Question: I am wondering if anyone can give me any insight into how the following may be
the same / different in **Python3** :
N // 1
and
from math import floor
floor(N)
I tried the following, which seems to indicate that they are equivalent:
import math
import random
for _ in range(0, 99999):
f = random.random()
n = random.randint(-9999, 9999)
N = f * n
n_div = N // 1; n_mth = math.floor(N)
if n_div != n_mth:
print("N // 1: {} | math.floor(N): {}".format(n_div, n_mth))
else: # yes, I realize this will always run
print("Seem the same to me")
Thanks for the comments below. I updated the test to the following, which clearly
shows that `N // 1` returns a `float` (when `N` is a `float`), while
`math.floor(N)` returns an `int` **in python3**. As I understand it, this
behavior is different in **python2** , where `math.ceil` and `math.floor`
return `float`s.
Also note how unusual/silly it would be to use `math.ceil` or `math.floor` on
an `int` instead of a `float`: either function operating on an `int` simply
returns that `int`.
import math
import random
for _ in range(0, 99):
N = random.uniform(-9999, 9999)
n_div = N // 1; n_mth = math.floor(N)
if n_div != n_mth:
print("N: {} ... N // 1: {} | math.floor(N): {}".format(N, n_div, n_mth))
elif type(n_div) != type(n_mth):
print("N: {} ... N // 1: {} ({}) | math.floor(N): {} ({})".format(N, n_div, type(n_div), n_mth, type(n_mth)))
else:
print("Seem the same to me")
Answer: You will spot a difference when using floats:
>>> 1000.5//1
1000.0
>>> floor(1000.5)
1000
`floor` returns an integer. For most cases `1000` and `1000.0` are equivalent,
but not always.
|
Python - Obtain a list of parenthetical tuples from string
Question: I have a string such as: `(1,2,3,'4.1),(4.2)',5,6,7),(8,9,10)`. The output I
need to obtain is the list: `[ (1,2,3,'4.1),(4.2)',5,6,7), (8,9,10) ]`. I
believe I need a regex in order to perform this task. How can I do so?
Thank you.
Answer: You might be able to evaluate the string directly (after putting it in a
list).
from ast import literal_eval
string = "(1,2,3,'4.1),(4.2)',5,6,7),(8,9,10)"
literal_eval('[{}]'.format(string))
# [(1, 2, 3, '4.1),(4.2)', 5, 6, 7), (8, 9, 10)]
|
import tensor with mat format to tensorflow
Question: I am new to tensorflow and Python. I have an image data set in Matlab, in a
tensor of size 96*96*5000 (image size, number of images), and I need to
import it into Tensorflow. I looked at the Tensorflow tutorial, which says I
should use TFRecords or CSV formats, but I think this would mean saving each
image separately in csv format. Is there any way to directly import my tensor
into tensorflow?
Answer: As Olivier said in [his
comment](http://stackoverflow.com/questions/37620330/import-tensor-with-mat-
format-to-tensorflow#comment62727615_37620330), the easiest solution is to
convert the data into a Numpy array, and use TensorFlow's [feeding
mechanism](https://www.tensorflow.org/versions/r0.8/how_tos/reading_data/index.html#feeding)
to pass the data into your TensorFlow model.
The
[`scipy.io.loadmat()`](http://docs.scipy.org/doc/scipy/reference/tutorial/io.html)
function in SciPy can be used to load a Matlab `.mat` file into Python, as a
dictionary mapping Matlab matrix names (as strings) to Numpy arrays.
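A minimal sketch of that route, assuming the Matlab variable holding the
tensor is named `images` (substitute whatever name your `.mat` file actually
uses):

    import numpy as np
    import scipy.io

    mat = scipy.io.loadmat('images.mat') # dict: variable name -> ndarray
    images = mat['images'] # shape (96, 96, 5000) as described
    batch = np.transpose(images, (2, 0, 1)) # (5000, 96, 96): one image per row

You can then feed `batch` (or slices of it) into a placeholder via `feed_dict`.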
|
Getting "not supported calendar message.ics" attachment with outlook email invite from Python
Question: Trying to write a python script to send an outlook email invite. Used
instructions from here: <http://www.baryudin.com/blog/sending-outlook-
appointments-python.html> However, I keep getting a "not supported calendar
message.ics" attachemnt with my email. Code is below:
import random
import smtplib
import email.MIMEText
import email.MIMEBase
from datetime import datetime
from email.MIMEMultipart import MIMEMultipart
import icalendar
from pytz import timezone
LOCAL_TZ = timezone("US/Eastern")
def send_appointment(subject, description):
start_dt = LOCAL_TZ.localize(datetime(2016, 6, 3, 17, 0, 0))
end_dt = LOCAL_TZ.localize(datetime(2016, 6, 3, 18, 0, 0))
cal = icalendar.Calendar()
cal.add("prodid", "-//My oncall calendar application//test.com")
cal.add("version", "2.0")
cal.add("method", "REQUEST")
event = icalendar.Event()
event.add("attendee", MY_EMAIL)
event.add("organizer", MY_EMAIL)
event.add("status", "confirmed")
event.add("summary", subject)
event.add("description", description)
event.add("location", "my desk")
event.add("dtstart", start_dt)
event.add("dtend", end_dt)
event.add("dtstamp", start_dt)
event["uid"] = random.random()
event.add("priority", 5)
event.add("sequence", 1)
event.add("created", LOCAL_TZ.localize(datetime.now()))
alarm = icalendar.Alarm()
alarm.add("action", "DISPLAY")
alarm.add("description", "Reminder")
alarm.add("TRIGGER;RELATED=START", "-PT{0}H".format(1))
event.add_component(alarm)
cal.add_component(event)
msg = MIMEMultipart("alternative")
msg["Subject"] = subject
msg["From"] = MY_EMAIL
msg["To"] = MY_EMAIL
msg["Content-class"] = "urn:content-classes:calendarmessage"
msg.attach(email.MIMEText.MIMEText(description))
filename = "invite.ics"
part = email.MIMEBase.MIMEBase("text", "calendar", method="REQUEST", name=filename)
part.set_payload(cal.to_ical())
email.Encoders.encode_base64(part)
part.add_header("Content-Description", filename)
part.add_header("Content-class", "urn:content-classes:calendarmessage")
part.add_header("Filename", filename)
part.add_header("Path", filename)
msg.attach(part)
s = smtplib.SMTP("localhost")
s.sendmail(MY_EMAIL, MY_EMAIL, msg.as_string())
s.quit()
def main():
send_appointment("test", "desc")
if __name__=="__main__":
main()
NOTE: I defined MY_EMAIL; I'm just not including it here.
The .ics file generated is below.
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//My oncall calendar application//test.com
METHOD:REQUEST
BEGIN:VEVENT
SUMMARY:test
DTSTART;TZID=US/Eastern;VALUE=DATE-TIME:20160603T170000
DTEND;TZID=US/Eastern;VALUE=DATE-TIME:20160603T180000
DTSTAMP;VALUE=DATE-TIME:20160603T210000Z
UID:0.669475599056
SEQUENCE:1
ATTENDEE:[email protected]
CREATED;VALUE=DATE-TIME:20160603T204723Z
DESCRIPTION:desc
LOCATION:my desk
ORGANIZER:[email protected]
PRIORITY:5
STATUS:confirmed
BEGIN:VALARM
ACTION:DISPLAY
DESCRIPTION:Reminder
TRIGGER;RELATED=START:-PT1H
END:VALARM
END:VEVENT
END:VCALENDAR
EDIT: OK, I think I figured out the problem: it only understands the UTC timezone.
If I change the start and end datetimes to the following, then I get a proper
calendar invite.
start_dt = datetime(2016, 6, 3, 17, 0, 0, tzinfo=pytz.utc)
end_dt = datetime(2016, 6, 3, 18, 0, 0, tzinfo=pytz.utc)
Not entirely sure why it cannot understand other time zones. Looking at the
.ics file, the date time fields look like the following.
DTSTART;VALUE=DATE-TIME:20160603T170000Z
DTEND;VALUE=DATE-TIME:20160603T180000Z
as opposed to
DTSTART;TZID=US/Eastern;VALUE=DATE-TIME:20160603T170000
DTEND;TZID=US/Eastern;VALUE=DATE-TIME:20160603T180000
The question now is why it cannot handle other timezones but I guess that
should be another post.
Answer: OK, I think I figured out the problem: it only understands the UTC timezone.
If I change the start and end datetimes to the following, then I get a proper
calendar invite.
start_dt = datetime(2016, 6, 3, 17, 0, 0, tzinfo=pytz.utc)
end_dt = datetime(2016, 6, 3, 18, 0, 0, tzinfo=pytz.utc)
Not entirely sure why it cannot understand other time zones. Looking at the
.ics file, the date time fields look like the following.
DTSTART;VALUE=DATE-TIME:20160603T170000Z
DTEND;VALUE=DATE-TIME:20160603T180000Z
as opposed to
DTSTART;TZID=US/Eastern;VALUE=DATE-TIME:20160603T170000
DTEND;TZID=US/Eastern;VALUE=DATE-TIME:20160603T180000
|
How to capture prompts in stdout/stderr?
Question: I'm trying to record a user's terminal session in a log file; fairly simply, I
made a Python wrapper for `ghci` (interactive Haskell) that looks like:
#!/usr/bin/env python
import os
cmd = 'ghci 2>&1 | tee hs.log'
os.system(cmd)
However, this only captures what is printed _back_ to the user, and not the
prompts/what the user has typed in. So if the session looks like:
$ ghci 2>&1 | tee hs.log
GHCi, version 7.10.3: http://www.haskell.org/ghc/ :? for help
Prelude> 1+2
3
Prelude> 3+4
7
Prelude>
`hs.log` only has:
$ cat hs.log
GHCi, version 7.10.3: http://www.haskell.org/ghc/ :? for help
3
7
How do you capture both the output and the input during an interactive
terminal session?
Answer: You can use the [`script`](http://man7.org/linux/man-pages/man1/script.1.html)
command to capture both input and output.
cmd = 'script hs.log ghci'
Note that this captures all the raw input and output from the terminal. You'll
see all the user's editing, and if the program is full-screen you'll see all
its escape sequences to move the cursor around. See the linked documentation
for full details.
|
ipython fails to start
Question: I had previously installed ipython against python2.7. After I installed
`python3-pip` using
$sudo apt-get install python3-pip
and ran the following command
$sudo pip3 install ipython
, I'm unable to start `ipython`. I get the following errors:
kv@kv:~$ ipython
Traceback (most recent call last):
File "/usr/local/bin/ipython", line 7, in <module>
from IPython import start_ipython
File "/usr/lib/python2.7/dist-packages/IPython/__init__.py", line 43, in <module>
from .config.loader import Config
File "/usr/lib/python2.7/dist-packages/IPython/config/loader.py", line 165
except KeyError, e:
^
SyntaxError: invalid syntax
Error in sys.excepthook:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/apport_python_hook.py", line 63, in apport_excepthook
from apport.fileutils import likely_packaged, get_recent_crashes
File "/usr/lib/python3/dist-packages/apport/__init__.py", line 5, in <module>
from apport.report import Report
File "/usr/lib/python3/dist-packages/apport/report.py", line 30, in <module>
import apport.fileutils
File "/usr/lib/python3/dist-packages/apport/fileutils.py", line 23, in <module>
from apport.packaging_impl import impl as packaging
File "/usr/lib/python3/dist-packages/apport/packaging_impl.py", line 20, in <module>
import apt
File "/usr/lib/python2.7/dist-packages/apt/__init__.py", line 23, in <module>
import apt_pkg
ImportError: /usr/lib/python2.7/dist-packages/apt_pkg.so: undefined symbol: _Py_ZeroStruct
Original exception was:
Traceback (most recent call last):
File "/usr/local/bin/ipython", line 7, in <module>
from IPython import start_ipython
File "/usr/lib/python2.7/dist-packages/IPython/__init__.py", line 43, in <module>
from .config.loader import Config
File "/usr/lib/python2.7/dist-packages/IPython/config/loader.py", line 165
except KeyError, e:
^
SyntaxError: invalid syntax
How can I make `ipython` work under both `python2.7` and `python3.4`?
Updated:
kv@kv:~$ ipython2
No command 'ipython2' found, did you mean:
Command 'python2' from package 'python-minimal' (main)
Command 'ipython' from package 'ipython' (universe)
Command 'ipython3' from package 'ipython3' (universe)
ipython2: command not found
and for `kv@kv:~$ ipython3` I get the error mentioned earlier
Answer: For me, the commands below work to switch between Python versions.
To start ipython for Python 2
$ ipython2
To start ipython for Python 3
$ ipython3
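If `ipython2` is reported as not found, as in your update, installing IPython
with each interpreter's own pip normally creates the matching versioned entry
point; a sketch, assuming both interpreters are on your PATH:

    $ python2 -m pip install ipython # provides ipython / ipython2
    $ python3 -m pip install ipython # provides ipython3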
|
How to write a binary search that looks for words in a list
Question: I was wondering how to write a binary search that looks for the most popular
baby names from 2009 - 2014. The names are stored in a list; when I receive a
prompt asking what I would like to search for, I type in the name and it should
report how many iterations it took to find that name. The names are in JSON
format and alphabetical, so I can compare characters. I'm planning to write
this in Python 3. Any help would be greatly appreciated.
termToFind = input("What would you like to find? ")
tempMeds = []
for me in allMeds:
if len(me) >= len(termToFind):
tempMeds.append(me)
findLength = len(termToFind)
currentPos = len(tempMeds) // 2
stop = False
iterations = 0
amountFound = 0
prevVal = ""
Answer: Why do you need to use a binary search? What if you used
[`collections.Counter`](https://docs.python.org/3/library/collections.html#collections.Counter)?
Just pass your list of baby names (it doesn't matter whether it's sorted) to
`Counter` and then look up whatever name you're interested in:
import json
from collections import Counter
json_baby_names = '["aardvark", "apple", "apple", ...., "zeus"]'
baby_name_counts = Counter(json.loads(json_baby_names))
>>> baby_name_counts['apple']
2
If you want the most popular name, you can use `Counter.most_common()`:
>>> baby_name_counts.most_common(1)
[('apple', 2)]
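That said, if the assignment really requires a binary search that reports its
iteration count, here is a minimal sketch over an alphabetically sorted list
(Python 3):

    def binary_search(names, target):
        lo, hi, iterations = 0, len(names) - 1, 0
        while lo <= hi:
            iterations += 1
            mid = (lo + hi) // 2
            if names[mid] == target:
                return mid, iterations # index of the name, steps taken
            elif names[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1, iterations # not found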
|
Insert pandas dataframe to mysql using sqlalchemy
Question: I am simply trying to write a pandas dataframe to a local MySQL database on Ubuntu.
from sqlalchemy import create_engine
import tushare as ts
df = ts.get_tick_data('600848', date='2014-12-22')
engine = create_engine('mysql://user:[email protected]/db_name?charset=utf8')
df.to_sql('tick_data',engine, flavor = 'mysql', if_exists= 'append')
and it raises this error:
biggreyhairboy@ubuntu:~/git/python/fjb$ python tushareDB.py
Error on sql SHOW TABLES LIKE 'tick_data'
Traceback (most recent call last):
File "tushareDB.py", line 13, in <module>
df.to_sql('tick_data', con = engine,flavor ='mysql', if_exists= 'append')
File "/usr/lib/python2.7/dist-packages/pandas/core/frame.py", line 1261, in to_sql
self, name, con, flavor=flavor, if_exists=if_exists, **kwargs)
File "/usr/lib/python2.7/dist-packages/pandas/io/sql.py", line 207, in write_frame
exists = table_exists(name, con, flavor)
File "/usr/lib/python2.7/dist-packages/pandas/io/sql.py", line 275, in table_exists
return len(tquery(query, con)) > 0
File "/usr/lib/python2.7/dist-packages/pandas/io/sql.py", line 90, in tquery
cur = execute(sql, con, cur=cur)
File "/usr/lib/python2.7/dist-packages/pandas/io/sql.py", line 53, in execute
con.rollback()
AttributeError: 'Engine' object has no attribute 'rollback'
The dataframe is not empty and the database is ready (with no tables). I have
tried other methods to create tables in Python with MySQLdb, and they work fine.
a related question: [Writing to MySQL database with pandas using SQLAlchemy,
to_sql](http://stackoverflow.com/questions/30631325/writing-to-mysql-database-
with-pandas-using-sqlalchemy-to-sql) but no actual reason was explained
Answer: You appear to be using an older version of pandas. I did a quick git bisect to
find the version of pandas where line 53 contains `con.rollback()`, and found
pandas at v0.12, which is before SQLAlchemy support was added to the `execute`
function.
If you're stuck on this version of pandas, you'll need to use a raw DBAPI
connection:
df.to_sql('tick_data', engine.raw_connection(), flavor='mysql', if_exists='append')
Otherwise, update pandas and use the engine as you intend to. Note that you
don't need to use the `flavor` parameter when using SQLAlchemy:
df.to_sql('tick_data', engine, if_exists='append')
|
Passing a numpy array to C++
Question: I have some code written in Python for which the output is a numpy array, and
now I want to send that output to `C++` code, where the heavy part of the
calculations will be performed.
I have tried using Cython's `public cdef`, but I am running into some issues. I
would appreciate your help! Here goes my code:
`pymodule.pyx`:
from pythonmodule import result # result is my numpy array
import numpy as np
cimport numpy as np
cimport cython
@cython.boundscheck(False)
@cython.wraparound(False)
cdef public void cfunc():
print 'I am in here!!!'
cdef np.ndarray[np.float64_t, ndim=2, mode='c'] res = result
print res
Once this is cythonized, I call:
`pymain.c`:
#include <Python.h>
#include <numpy/arrayobject.h>
#include "pymodule.h"
int main() {
Py_Initialize();
initpymodule();
test(2);
Py_Finalize();
}
int test(int a)
{
Py_Initialize();
initpymodule();
cfunc();
return 0;
}
I am getting a `NameError` for the `result` variable in `C++`. I have tried
defining it with pointers and calling it indirectly from other functions, but
the array remains invisible. I am pretty sure the answer is quite simple, but
I just do not get it. Thanks for your help!
Answer: ### Short Answer
The NameError was cause by the fact that Python couldn't find the module, the
working directory isn't automatically added to your
[**`PYTHONPATH`**](https://docs.python.org/2/using/cmdline.html#envvar-
PYTHONPATH). Using [**`setenv`**](http://man7.org/linux/man-
pages/man3/setenv.3.html) with `setenv("PYTHONPATH", ".", 1);` in your `C/C++`
code fixes this.
### Longer Answer
There's an easy way to do this, apparently. With a python module
`pythonmodule.py` containing an already created array:
import numpy as np
result = np.arange(20, dtype=np.float).reshape((2, 10))
You can structure your `pymodule.pyx` to export that array by using the
**[`public`](http://docs.cython.org/src/userguide/external_C_code.html#using-
cython-declarations-from-c)** keyword. By adding some auxiliary functions,
you generally won't need to touch either the Python or the Numpy
`C-API`:
from pythonmodule import result
from libc.stdlib cimport malloc
import numpy as np
cimport numpy as np
cdef public np.ndarray getNPArray():
""" Return array from pythonmodule. """
return <np.ndarray>result
cdef public int getShape(np.ndarray arr, int shape):
""" Return Shape of the Array based on shape par value. """
return <int>arr.shape[1] if shape else <int>arr.shape[0]
cdef public void copyData(float *** dst, np.ndarray src):
""" Copy data from src numpy array to dst. """
cdef float **tmp
cdef int i, j, m = src.shape[0], n=src.shape[1];
# Allocate initial pointer
tmp = <float **>malloc(m * sizeof(float *))
if not tmp:
raise MemoryError()
# Allocate rows
for j in range(m):
tmp[j] = <float *>malloc(n * sizeof(float))
if not tmp[j]:
raise MemoryError()
# Copy numpy Array
for i in range(m):
for j in range(n):
tmp[i][j] = src[i, j]
# Assign pointer to dst
dst[0] = tmp
The functions `getNPArray` and `getShape` return the array and its shape,
respectively. `copyData` was added in order to just extract the
[**`ndarray.data`**](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.ndarray.data.html#numpy-
ndarray-data) and copy it so you can then finalize Python and work without
having the interpreter initialized.
A sample program (in `C`, `C++` should look identical) would look like this:
#include <Python.h>
#include "numpy/arrayobject.h"
#include "pyxmod.h"
#include <stdio.h>
void printArray(float **arr, int m, int n);
void getArray(float ***arr, int * m, int * n);
int main(int argc, char **argv){
// Holds data and shapes.
float **data = NULL;
int m, n;
// Gets array and then prints it.
getArray(&data, &m, &n);
printArray(data, m, n);
return 0;
}
void getArray(float ***data, int * m, int * n){
// setenv is important, makes python find
// modules in working directory
setenv("PYTHONPATH", ".", 1);
// Initialize interpreter and module
Py_Initialize();
initpyxmod();
// Use Cython functions.
PyArrayObject *arr = getNPArray();
*m = getShape(arr, 0);
*n = getShape(arr, 1);
copyData(data, arr);
if (data == NULL){ //really redundant.
fprintf(stderr, "Data is NULL\n");
return ;
}
Py_DECREF(arr);
Py_Finalize();
}
void printArray(float **arr, int m, int n){
int i, j;
for(i=0; i < m; i++){
for(j=0; j < n; j++)
printf("%f ", arr[i][j]);
printf("\n");
}
}
Always remember to set:
setenv("PYTHONPATH", ".", 1);
**before** you call `Py_Initialize` so Python can find modules in the working
directory.
The rest is pretty straight-forward. It might need some additional error-
checking and **definitely** needs a function to free the allocated memory.
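For completeness, a sketch of such a free routine, mirroring the allocation
pattern used in `copyData` above (rows first, then the block of row pointers;
requires `<stdlib.h>` for `free`):

    void freeArray(float **arr, int m){
        int i;
        for(i = 0; i < m; i++)
            free(arr[i]); /* free each row allocated in copyData */
        free(arr); /* free the array of row pointers */
    }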
### Alternate Way w/o Cython:
Doing it the way you are attempting is more hassle than it's worth; you would
probably be better off using
[`numpy.save`](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.save.html#numpy.save)
to save your array in a `npy` binary file and then use some [_C++ library that
reads that file for you_](https://github.com/rogersce/cnpy).
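A sketch of that alternative, assuming you saved the array on the Python side
with `np.save("result.npy", result)` and that your cnpy version exposes
`npy_load` with the typed `data<T>()` accessor shown in its README (check the
version you have):

    #include "cnpy.h"
    #include <cstdio>

    int main(){
        cnpy::NpyArray arr = cnpy::npy_load("result.npy");
        double *data = arr.data<double>(); // raw row-major buffer
        size_t m = arr.shape[0], n = arr.shape[1];
        printf("loaded %zu x %zu array, first element: %f\n", m, n, data[0]);
        return 0;
    }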
|
Python pip install errors (feather-format)
Question: I've been trying to install the Feather file format for Python
(<https://pypi.python.org/pypi/feather-format>) for the last few days with no
luck. I'm using Anaconda2 on Windows 10. I get the following errors when I try
to pip install feather-format:
Collecting feather-format
Using cached feather-format-0.2.0.tar.gz
Requirement already satisfied (use --upgrade to upgrade): cython>=0.21 in c:\users\pete\anaconda2\lib\site-packages (from feather-format)
Building wheels for collected packages: feather-format
Running setup.py bdist_wheel for feather-format ... error
Complete output from command C:\Users\pete\Anaconda2\python.exe -u -c "import setuptools, tokenize;__file__='c:\\users\\pete\\appdata\\local\\temp\\pip-build-g4egjo\\feather-format\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" bdist_wheel -d c:\users\pete\appdata\local\temp\tmp1gwqlopip-wheel- --python-tag cp27:
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-2.7
creating build\lib.win-amd64-2.7\feather
copying feather\api.py -> build\lib.win-amd64-2.7\feather
copying feather\compat.py -> build\lib.win-amd64-2.7\feather
copying feather\version.py -> build\lib.win-amd64-2.7\feather
copying feather\__init__.py -> build\lib.win-amd64-2.7\feather
creating build\lib.win-amd64-2.7\feather\tests
copying feather\tests\test_reader.py -> build\lib.win-amd64-2.7\feather\tests
copying feather\tests\__init__.py -> build\lib.win-amd64-2.7\feather\tests
copying feather\libfeather.pxd -> build\lib.win-amd64-2.7\feather
copying feather\ext.pyx -> build\lib.win-amd64-2.7\feather
running build_ext
building 'feather.ext' extension
creating build\temp.win-amd64-2.7
creating build\temp.win-amd64-2.7\Release
creating build\temp.win-amd64-2.7\Release\feather
creating build\temp.win-amd64-2.7\Release\src
creating build\temp.win-amd64-2.7\Release\src\feather
C:\Users\pete\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\amd64\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -Ifeather -IC:\Users\pete\Anaconda2\lib\site-packages\numpy\core\include -Ic:\users\pete\appdata\local\temp\pip-build-g4egjo\feather-format\src -IC:\Users\pete\Anaconda2\include -IC:\Users\pete\Anaconda2\PC /Tpfeather/ext.cpp /Fobuild\temp.win-amd64-2.7\Release\feather/ext.obj -std=c++11 -O3
cl : Command line warning D9002 : ignoring unknown option '-std=c++11'
cl : Command line warning D9002 : ignoring unknown option '-O3'
ext.cpp
C:\Users\pete\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Include\xlocale(342) : warning C4530: C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc
feather/ext.cpp(279) : fatal error C1083: Cannot open include file: 'stdint.h': No such file or directory
error: command 'C:\\Users\\pete\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\amd64\\cl.exe' failed with exit status 2
----------------------------------------
Failed building wheel for feather-format
Running setup.py clean for feather-format
Failed to build feather-format
Installing collected packages: feather-format
Running setup.py install for feather-format ... error
Complete output from command C:\Users\pete\Anaconda2\python.exe -u -c "import setuptools, tokenize;__file__='c:\\users\\pete\\appdata\\local\\temp\\pip-build-g4egjo\\feather-format\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\pete\appdata\local\temp\pip-itehjl-record\install-record.txt --single-version-externally-managed --compile:
Compiling feather/ext.pyx because it changed.
[1/1] Cythonizing feather/ext.pyx
running install
running build
running build_py
creating build
creating build\lib.win-amd64-2.7
creating build\lib.win-amd64-2.7\feather
copying feather\api.py -> build\lib.win-amd64-2.7\feather
copying feather\compat.py -> build\lib.win-amd64-2.7\feather
copying feather\version.py -> build\lib.win-amd64-2.7\feather
copying feather\__init__.py -> build\lib.win-amd64-2.7\feather
creating build\lib.win-amd64-2.7\feather\tests
copying feather\tests\test_reader.py -> build\lib.win-amd64-2.7\feather\tests
copying feather\tests\__init__.py -> build\lib.win-amd64-2.7\feather\tests
copying feather\libfeather.pxd -> build\lib.win-amd64-2.7\feather
copying feather\ext.pyx -> build\lib.win-amd64-2.7\feather
running build_ext
building 'feather.ext' extension
creating build\temp.win-amd64-2.7
creating build\temp.win-amd64-2.7\Release
creating build\temp.win-amd64-2.7\Release\feather
creating build\temp.win-amd64-2.7\Release\src
creating build\temp.win-amd64-2.7\Release\src\feather
C:\Users\pete\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\amd64\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -Ifeather -IC:\Users\pete\Anaconda2\lib\site-packages\numpy\core\include -Ic:\users\pete\appdata\local\temp\pip-build-g4egjo\feather-format\src -IC:\Users\pete\Anaconda2\include -IC:\Users\pete\Anaconda2\PC /Tpfeather/ext.cpp /Fobuild\temp.win-amd64-2.7\Release\feather/ext.obj -std=c++11 -O3
cl : Command line warning D9002 : ignoring unknown option '-std=c++11'
cl : Command line warning D9002 : ignoring unknown option '-O3'
ext.cpp
C:\Users\pete\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Include\xlocale(342) : warning C4530: C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc
feather/ext.cpp(303) : fatal error C1083: Cannot open include file: 'stdint.h': No such file or directory
error: command 'C:\\Users\\pete\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\amd64\\cl.exe' failed with exit status 2
----------------------------------------
Command "C:\Users\pete\Anaconda2\python.exe -u -c "import setuptools, tokenize;__file__='c:\\users\\pete\\appdata\\local\\temp\\pip-build-g4egjo\\feather-format\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\pete\appdata\local\temp\pip-itehjl-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in c:\users\pete\appdata\local\temp\pip-build-g4egjo\feather-format\
You are using pip version 8.1.1, however version 8.1.2 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.
It all seems to have to do with c++. I've installed (and reinstalled) the
Microsoft Visual C++ Compiler for Python (<https://www.microsoft.com/en-
us/download/details.aspx?id=44266>). I also installed Visual Studio 2015.
I **am** able to install feather-format using
conda install feather-format
but I get the following errors when I include feather in a script:
Unhandled exception in thread started by <function wrapper at 0x000000001967B208>
Traceback (most recent call last):
File "C:\Users\pete\Anaconda2\lib\site-packages\django\utils\autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "C:\Users\pete\Anaconda2\lib\site-packages\django\core\management\commands\runserver.py", line 109, in inner_run
autoreload.raise_last_exception()
File "C:\Users\pete\Anaconda2\lib\site-packages\django\utils\autoreload.py", line 249, in raise_last_exception
six.reraise(*_exception)
File "C:\Users\pete\Anaconda2\lib\site-packages\django\utils\autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "C:\Users\pete\Anaconda2\lib\site-packages\django\__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "C:\Users\pete\Anaconda2\lib\site-packages\django\apps\registry.py", line 108, in populate
app_config.import_models(all_models)
File "C:\Users\pete\Anaconda2\lib\site-packages\django\apps\config.py", line 202, in import_models
self.models_module = import_module(models_module_name)
File "C:\Users\pete\Anaconda2\lib\importlib\__init__.py", line 37, in import_module
__import__(name)
File "C:\Users\pete\Projects\datm2\core\models.py", line 22, in <module>
import feather
File "C:\Users\pete\Anaconda2\lib\site-packages\feather\__init__.py", line 17, in <module>
from feather.api import read_dataframe, write_dataframe
File "C:\Users\pete\Anaconda2\lib\site-packages\feather\api.py", line 21, in <module>
import feather.ext as ext
ImportError: No module named ext
It can't seem to compile (?) the file "ext.pyx" from the source directory. Any
help would be appreciated, and thanks very much.
Answer: I have faced the same problem when installing the Feather format for Python.
You can try this on your command line:

    conda install feather-format -c conda-forge

The cause of the error is the C++ compiler: the Microsoft Visual C++ Compiler
for Python is based on Visual Studio 2008, which predates C++11 (hence the
ignored `-std=c++11` flag and the missing `stdint.h` in your log). Installing
from the conda-forge channel sidesteps local compilation entirely, because it
ships pre-built binary packages, and conda is already integrated into Anaconda.
|
How can I decode python-requests URL?
Question: I know if I want to send an HTTP request, I can form it as a dictionary before
sending it:
payload = dict(
username="something",
password="something_else"
)
r = requests.get('http://example.com', params=payload)
Does it do the opposite as well? If my request happens to redirect me to
another URL, I can see where I was redirected with:
print r.url
But if that URL happens to be something like:
http://example.com/somepage.htm?varone=test&vartwo=3451&varthree=something
and I happen to want to pull out the vartwo from that URL (the string 3451),
does the requests module provide any easy way to do this just by specifying
that I want the value of "vartwo"? Another dictionary, or mapping object of
some sort perhaps? Or is my only option to use urllib.unquote to decode the
url, and parse it as a string?
Answer: Use [urlparse module](https://docs.python.org/2/library/urlparse.html) (Python
2.x; in Python 3.x, it was renamed to
[urllib.parse](https://docs.python.org/3/library/urllib.parse.html)).
from urlparse import urlparse
url = 'http://www.gurlge.com:80/path/file.html;params?a=1#fragment'
o = urlparse(url)
print o.params
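If you specifically want `vartwo` out of the query string, `parse_qs` from the same module gives you a dict of the query variables:

    from urlparse import urlparse, parse_qs

    url = 'http://example.com/somepage.htm?varone=test&vartwo=3451&varthree=something'
    query = parse_qs(urlparse(url).query)
    print query['vartwo'][0]   # '3451' (values come back as lists)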
|
Recover the last state of a program
Question: I would like to develop a program, which can recover the last loop in a
program that is using a sqlite3 database.
I have a Raspberry Pi running where the source is Python, the system can have
a power failure and restart for a while.
The program can be initiated from boot but it can be difficult to get back to
the last loop of the program. Let's consider the print function as a Python
syntax with a delay of 5 seconds each, which means there are 4 different
programs running synchronized.
The below program doesn't work as expected, could someone please help me to
solve the puzzle?
import thread
import time
import sqlite3
conn = sqlite3.connect('testdatabase.db')
conn.isolation_level = None
c = conn.cursor()
c.execute("SELECT ID from LAST_STATE")
fetch=c.fetchone()
def morning_u():
conn = sqlite3.connect('testdatabase.db')
c = conn.cursor()
c.execute("UPDATE LAST_STATE SET ID=1")
conn.commit()
c.close()
conn.close()
def noon_u():
conn = sqlite3.connect('testdatabase.db')
c = conn.cursor()
c.execute("UPDATE LAST_STATE SET ID=2")
conn.commit()
c.close()
conn.close()
def afternoon_u():
conn = sqlite3.connect('testdatabase.db')
c = conn.cursor()
c.execute("UPDATE LAST_STATE SET ID=3")
conn.commit()
c.close()
conn.close()
def evening_u():
conn = sqlite3.connect('testdatabase.db')
c = conn.cursor()
c.execute("UPDATE LAST_STATE SET ID=4")
conn.commit()
c.close()
conn.close()
def morning():
print ("morning")
time.sleep(5)
return
def noon():
print ("noon")
time.sleep(5)
return
def afternoon():
print ("afternoon")
time.sleep(5)
def evening():
print ("evening")
time.sleep(5)
morning_u()
while True:
if fetch[0] is 1:
morning()
noon_u()
if fetch[0] is 2:
noon()
afternoon_u()
if fetch[0] is 3:
afternoon()
evening_u()
if fetch[0] is 4:
evening()
morning_u()
Database information
conn = sqlite3.connect('testdatabase.db')
conn.execute('''CREATE TABLE LAST_STATE
(ID INT PRIMARY KEY NOT NULL);''')
conn.execute("INSERT INTO LAST_STATE (ID) \
VALUES (1)");
Edited program as per comment, please suggest for improvement
import datetime
import time
import logging
import sqlite3
conn = sqlite3.connect('testdatabase.db')
conn.isolation_level = "IMMEDIATE"
c = conn.cursor()
c.execute("SELECT ID from LAST_STATE")
state=c.fetchone()
def morning_u():
c.execute("UPDATE LAST_STATE SET ID=1")
conn.commit()
def noon_u():
c.execute("UPDATE LAST_STATE SET ID=2")
conn.commit()
def afternoon_u():
c.execute("UPDATE LAST_STATE SET ID=3")
conn.commit()
def evening_u():
c.execute("UPDATE LAST_STATE SET ID=4")
conn.commit()
def morning():
print ("morning")
time.sleep(5)
def noon():
print ("noon")
time.sleep(5)
def afternoon():
print ("afternoon")
time.sleep(5)
def evening():
print ("evening")
time.sleep(5)
while True:
if state[0] is 1:
morning()
try:
noon_u()
except:
print ("error1")
if state[0] is 2:
noon()
try:
afternoon_u()
except:
print ("error2")
if state[0] is 3:
afternoon()
try:
evening_u()
except:
print ("error3")
if state[0] is 4:
evening()
try:
morning_u()
except:
print ("error4")
Answer: I see two immediate problems that could cause your bug.
First, `evening` calls `morning_u` but no other function does that and it's
redundant with the loop.
Second, you're never updating `fetch` (which is a poor name, consider
something more like `state` or `time_of_day`) in the loop. The loop will keep
doing the same thing over and over again.
Also you're disconnecting and reconnecting to the database with every function
call. This is inefficient, and it could cause concurrency issues. The
functions would be better written to accept an existing database connection.
And, finally, you have `conn.isolation_level = None` which auto-commits, but
you're calling `commit`. The commit is a no-op, but it can fool the reader
into thinking you're using transactions. In general, avoid auto-commit, and
use transactions. Transactions are very important for concurrency. [SQLite
defaults to odd implicit
transactions](https://docs.python.org/2/library/sqlite3.html#sqlite3-controlling-
transactions) which might not do what you want and are hard to puzzle out. I'd
recommend using explicit transactions. Probably
[immediate](https://www.sqlite.org/lang_transaction.html) for the most
predictable behavior, but not exclusive. Set `conn.isolation_level =
"IMMEDIATE"`.
* * *
Design-wise, you say you want a crash-recovery system, but you're trying to
use it as a concurrent system. These are two mostly exclusive things.
Crash-recovery assumes there is only one process which owns the state at any
time. It loads up, it gets the old state, and then it merrily chugs along
assuming it owns the state and changing it as it likes. _Nobody else is
changing the state_. It might even guarantee this with an exclusive lock on
the table.
Concurrency assumes many processes are all updating the state. Since you're
building a state-machine, each change to the state depends on the existing
state. All changes to the state have to be atomic: start a transaction (ie.
get a lock), read the state, update the state, and commit. All uses of the
state also have to be atomic: start a transaction (ie. get a read-lock to
prevent the state from changing), read the state, use the state, end the
transaction.
For crash-recovery you only need to read the state once, at the start of the
program. You only have to write it when you change it.
For concurrency, you have to read the state at the start of the program, and
any time you want to use it. While you're using it, you have to get a shared
lock to make sure it doesn't change in the meantime.
You have the code necessary for crash recovery, all you're missing is making
sure the state (ie. `fetch[0]`) is updated by the state change functions. You
have none of the code for concurrency, but you don't need it.
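A minimal sketch of that fix, reusing the question's table: one helper both writes and returns the new state, so the loop variable stays in sync (the `with conn:` block is an explicit transaction that commits on success and rolls back on error). Note it also compares with `==` rather than `is`, which only happens to work for small integers in CPython:

    def set_state(conn, new_id):
        with conn:
            conn.execute("UPDATE LAST_STATE SET ID=?", (new_id,))
        return new_id

    state = conn.execute("SELECT ID FROM LAST_STATE").fetchone()[0]
    while True:
        if state == 1:
            morning()
            state = set_state(conn, 2)
        elif state == 2:
            noon()
            state = set_state(conn, 3)
        elif state == 3:
            afternoon()
            state = set_state(conn, 4)
        elif state == 4:
            evening()
            state = set_state(conn, 1)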
|
Parse json from mysql in flask to get a field from record
Question: Mind that I am new to Flask, and Python for that matter; I appreciate any help
that anyone gives. I'm looking to access one of the fields of my JSON
response (just the field, not the entire response). How should I go about
parsing the response? An image of the response is attached below, thanks.
# This is my main thread
from flask import Flask,render_template,request
from Qhandler import Qhandler
from MakePlayer import MakePlayer
app = Flask(__name__)
@app.route('/createplayer',methods=['GET','POST'] )
def showCreatePlayer():
if request.method == 'POST':
MakePlayer(request.form['playername'],request.form['playerteam'],request.form['playerrole'], request.form['playerpos'])
return "created player: <br>"+request.form['playername']+" "+request.form['playerteam']+" "+request.form['playerrole']+" "+request.form['playerpos']
return render_template("createPlayer.html")
@app.route('/sucess')
def success():
return "success"
@app.route('/showplayers')
def showPlayers():
Q = Qhandler()
return Q.displayQuery(""" select * from Player""")
if __name__ == '__main__':
app.run(debug=True)
### This is my query handler
from flask import Flask, jsonify, json
from flaskext.mysql import MySQL
class Qhandler(object):
#global mysql
global cursor
global connection
global mysql
# database connection
app = Flask(__name__)
mysql = MySQL()
app.config['MYSQL_DATABASE_USER'] = 'root'
app.config['MYSQL_DATABASE_PASSWORD'] = 'root'
app.config['MYSQL_DATABASE_DB'] = 'Optimizer'
app.config['MYSQL_DATABASE_HOST'] = 'localhost'
mysql.init_app(app)
def ins(self,query):
try:
connection=mysql.connect()
cursor = connection.cursor()
cursor.execute(query)
connection.commit()
except:
print "error running query"
finally:
#cursor.close()
connection.close()
def displayQuery(self,query):
try:
connection = mysql.connect()
cursor = connection.cursor()
cursor.execute(query)
fetchedData = cursor.fetchall()
fetchedData = jsonify(fetchedData)
#fetchedData = json.dumps(fetchedData)
#record = json.loads(fetchedData)
#print "the resonse is here:"
return fetchedData
except:
print "error running query"
finally:
#cursor.close()
connection.close()
### current response is
[screenshot of results](http://i.stack.imgur.com/GYAJz.png)
Answer: Use "fetchedData = json.dumps(fetchedData)" instead of "fetchedData =
jsonify(fetchedData)" then create a json decoder and parse the response, refer
to below :
def displayQuery(self,query):
try:
connection = mysql.connect()
cursor = connection.cursor()
cursor.execute(query)
fetchedData = cursor.fetchall()
fetchedData = json.dumps(fetchedData)
#create a json decoder
d = json.JSONDecoder()
fieldPlayerName = d.decode(fetchedData)
#parse the json that is returned ( fieldPlayerName[0][1])
print "should print the field with the player name",fieldPlayerName[0][1]
            return fieldPlayerName[0][1]
        except:
            print "error running query"
        finally:
            connection.close()
|
Minify all CSS and Javascript in a directory
Question: I am trying to make a python script that will package the contents of a
directory and minify all of the JavaScript and CSS scripts.
If I used the code below (bottom of the post), and the directory structure
inside of `theme_files` was such:
\
|-assets\
| |-css\
| | |-theme.css
| | |-stylesheet.css
| |
| |-js\
| | |-theme.js
| | |-page.js
|
|-index.html
_(Is there a better way to do that?)_
It would output the whole directory structure into the generated `.pak` file
properly. However, the minified css and javascript files have no content
inside of them other than their own file name.
**Example:** the content of the file (supposedly minified) `theme.css` would
be "theme.css"
That's it. Nothing else. One line.
Any idea what I'm doing wrong?
* * *
import io
import os
import zipfile
import rcssmin
import rjsmin
pakName = input("Theme Name: ").replace(" ", "_").lower()
themePak = zipfile.ZipFile(pakName +".tpk", "w")
for dirname, subdirs, files in os.walk("theme_files"):
themePak.write(dirname)
for filename in files:
if not filename.endswith((".css", ".js")):
themePak.write(os.path.join(dirname, filename))
if filename.endswith(".css"):
cssMinified = io.StringIO()
cssMinified.write(rcssmin.cssmin(filename, keep_bang_comments=True))
themePak.writestr(os.path.join(dirname, filename), cssMinified.getvalue())
if filename.endswith(".js"):
jsMinified = io.StringIO()
jsMinified.write(rjsmin.jsmin(filename, keep_bang_comments=True))
themePak.writestr(os.path.join(dirname, filename), jsMinified.getvalue())
themePak.close()
Answer: As stated by @Squall,
> rcssmin.cssmin() and rjsmin.jsmin() expect the first element to be the CSS
> respectively JS code to minify as string. You have to open and read the CSS
> and JS files by yourself.
* * *
if filename.endswith(".css"):
with open(os.path.join(dirname, filename), "r") as assetfile:
assetdata = assetfile.read().replace("\n", "")
cssMinified = io.StringIO()
cssMinified.write(rcssmin.cssmin(assetdata, keep_bang_comments=True))
themePak.writestr(os.path.join(dirname, filename), cssMinified.getvalue())
if filename.endswith(".js"):
with open(os.path.join(dirname, filename), "r") as assetfile:
assetdata = assetfile.read().replace("\n", "")
jsMinified = io.StringIO()
jsMinified.write(rjsmin.jsmin(assetdata, keep_bang_comments=True))
themePak.writestr(os.path.join(dirname, filename), jsMinified.getvalue())
The changes in my `if` statements in the above code open the asset files as
strings, then pass them along for minification.
I learned the hard way that you have to be sure to `os.path.join()` the
filenames and the directories.
with open(os.path.join(dirname, filename), "r") as assetfile:
assetdata = assetfile.read().replace("\n", "")
Then minify `assetdata` and write to file. (In this case, memory object.)
|
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() python dbscan 3 dimensions point
Question: I want to do clustering using DBSCAN algorithm with a dataset that contains 3
points. This is the dataset :
1 5 7
12 8 9
2 4 10
6 3 21
11 13 0
6 3 21
11 13 0
3 7 1
1 9 2
1 5 7
I do clustering with this code :
from math import sqrt, pow
def __init__(eps=0.1, min_points=2):
eps = 10
min_points = 2
visited = []
noise = []
clusters = []
dp = []
def cluster(data_points):
visited = []
dp = data_points
c = 0
for point in data_points:
if point not in visited:
visited.append(point)
print point
neighbours = region_query(point)
#print neighbours
if len(neighbours) < min_points:
noise.append(point)
else:
c += 1
expand_cluster(c, neighbours)
#cluster(data_points)
def expand_cluster(cluster_number, p_neighbours):
cluster = ("Cluster: %d" % cluster_number, [])
clusters.append(cluster)
new_points = p_neighbours
while new_points:
new_points = pool(cluster, new_points)
def region_query(p):
result = []
for d in dp:
distance = (((d[0] - p[0])**2 + (d[1] - p[1])**2 + (d[2] - p[2])**2)**0.5)
print distance
if distance <= eps:
result.append(d)
return result
#p_neighbours = region_query(p=pcsv)
def pool(cluster, p_neighbours):
new_neighbours = []
for n in p_neighbours:
if n not in visited:
visited.append(n)
n_neighbours = region_query(n)
if len(n_neighbours) >= min_points:
new_neighbours = unexplored(p_neighbours, n_neighbours)
for c in clusters:
if n not in c[1] and n not in cluster[1]:
cluster[1].append(n)
return new_neighbours
@staticmethod
def unexplored(x, y):
z = []
for p in y:
if p not in x:
z.append(p)
return z
In this code the `point` and `n` variables hold items from `data_points`,
which contains the dataset. From reading the manual I guess this code should
work, but when I run the `cluster()` function there is an error.
Traceback (most recent call last):
File "<ipython-input-39-77eb6be20d82>", line 2, in <module>
if n not in visited:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
I don't know why this code still gets that error, even when I change the `n`
or `point` variable to index the data. Do you have any idea what's wrong with
this code? How can I make it work?
Thank you for your help.
Answer: The error emerges from these lines:
if point not in visited:
visited.append(point)
The `in` operator calls `list.__contains__`, which iterates over the items in
the `visited` list to see if any of them are equal to `point`. However,
equality tests between numpy arrays do not yield a single Boolean value, but
rather an array of bools representing the element-wise comparisons of the
items in the arrays. For instance, the result of `array([1, 2]) == array([1,
3])` is `array([True, False])`, not just `False`.
That's OK so far. Comparisons in Python are allowed to return whatever kind of
object they want. However, when equality is being tested by `in`, it needs a
Boolean result in the end, so `bool` is called on the result of the
comparison. The exception you received comes from `bool(array([...]))`, which
as the message says, is ambiguous. Should `bool(array([True, False]))` be
`True` or `False`? The library refuses to guess for you.
Unfortunately, I don't think there is a really good way to work around this.
Perhaps you could convert your points to tuples before saving them in
`visited`? As a nice side effect, this would let you use a `set` rather than a
list (since tuples are hashable).
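For example, a sketch of that change against the question's `cluster` function:

    visited = set()

    def cluster(data_points):
        for point in data_points:
            key = tuple(point)       # tuples are hashable; numpy arrays are not
            if key not in visited:   # unambiguous membership test
                visited.add(key)
                neighbours = region_query(point)
                # ... rest unchanged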
Another issue you may have is that equality testing between floats is
inherently prone to inaccuracy. Two numbers that _should_ be equal, may not in
fact be equal when compared using floats derived by different calculations.
For instance, `0.1 + 0.2 == 0.3` is `False` because the rounding doesn't work
out the same way on both sides of the equals sign. So, even if you have two
points that _should_ be equal, you may not be able to detect them in your data
using only equality tests. You'd need to compute their difference and compare
it to some small `epsilon` value, estimating the maximum error that could have
grown out of your computations.
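If you need that, a tolerance-based comparison is a couple of lines (`eps` is whatever tolerance suits your data):

    import numpy as np

    eps = 1e-9
    same = np.all(np.abs(p1 - p2) < eps)   # absolute tolerance
    same = np.allclose(p1, p2)             # or let numpy handle rel/abs tolerances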
|
Image won't load on python-based webserver
Question: I've built a simple web server that gets a request and send a response. So
when the server gets an invalid request, like "localhost/not-a-page", the
server will send a response with the content of the HTML file "404.html" the
webpage should display an image. So far, so good.
But when the 404 page loads up, the page can't find the image. The HTML part
is correct and works offline. I've tried to move the image to serval
locations, relative to the Python script, relative to the HTML. But it just
can't find it. Hi I'm trying to make the server as low-level as I can, I want
to learn how servers work. So I'm not using any server-related libraries. I'm
using only the socket library of Python.
I'll appreciate any help to resolve this problem without using other
libraries,
**EDIT** Here is the relevant Python part :
import socket
import threading
import os
default_error_page = """\
<!DOCTYPE HTML>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8">
<title>Error response</title>
</head>
<body>
<center>
<h1>Response</h1>
<p>Error code: %(code)d</p>
<p>Message: %(status)s.</p>
</center>
</body>
</html>
"""
default_header_status = "HTTP/1.1 %(code)d %(status)s\r\n"
default_header_content_type = "Content-Type: text/html; charset=utf-8\r\n\r\n"
buffer_size = 1024
def get_page(code):
page = default_error_page
if code == 200:
pass
else:
file = open(os.path.dirname(__file__) + "/www/not-found.html", 'r')
page = file.read()
return page
class BaseServer:
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_name = ""
host_name = ""
host_port = 8000 # default port
is_shutdown = False
def __init__(self, name):
self.server_name = name
def start_server(self):
thread = threading.Thread(target=self.run_server(), name=self.server_name)
thread.start()
def run_server(self):
self.server_socket.bind((self.host_name, self.host_port)) # bind to host
self.server_socket.listen()
while not self.is_shutdown:
conn, addr = self.server_socket.accept()
self.handle_request(conn, addr)
def handle_request(self, connection, address):
print(str(address[0]) + " Connected! (port " + str(address[1]) + ")")
result = self.parse_request(connection.recv(buffer_size))
if result == 0:
page = self.parse_response(200)
else:
page = self.parse_response(404)
connection.sendall(bytes(page))
def parse_request(self, data):
if len(data) == 0:
return
strings = str(bytes(data).decode('utf-8')).split('\r\n')
command, path, version = strings[0].split()
print("command - " + command)
print("path - " + path)
print("version - " + version)
status = 1
if path == "/":
status = 0
return status
def parse_response(self, code):
status = "ERROR"
if code == 200:
status = "OK"
elif code == 404:
status = "NOT FOUND"
base_header = (default_header_status % {'code': code, 'status': status})
base_content_type = default_header_content_type
# page = (default_error_page % {'code': code, 'status': status})
page = str(get_page(code))
return_string = str(base_header + base_content_type + page).encode('utf-8')
print(return_string)
return return_string
def main():
server = BaseServer("Home Server")
server.start_server()
if __name__ == "__main__":
main()
And this is the HTML:
<html>
<head>
<link rel="stylesheet" type="text/css" href="/style/main.css"/>
<style>
*{
padding:0;
margin:0;
}
body{
background-color:#ffe6b3;
}
h1{
margin-top:30px;
background-color:#ffcc66;
font-size:3em;
display:inline-block;
color:#3a0000;
}
p{
margin-top:80px;
font-size:2em;
color:#3a0000;
}
#img404{
background-image:url(../images/404.gif);
width:100%;
height:50%;
background-repeat:no-repeat;
background-position:center 20%;
}
</style>
</head>
<body>
<center>
<div class=top>
<h1>ERROR 404</h1>
</div>
<p>
Sorry, we could not find the page :(
</p>
<div id="img404">
</div>
</center>
</body>
</html>
Sorry if it's not very readable, but I'm on the phone.
Dima.
Answer: Don't use relative paths like `../images/img.gif` for the image. Rather, use a
full URL or a URL relative to the root:
* <http://localhost/images/img.gif> \- full url
* /images/img.gif - path relative to root url
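Note, though, that with a hand-rolled server like this one, a root-relative URL only helps if the server actually answers a request for that path: the posted `parse_request` treats everything except `/` as a 404 and never sends the image bytes. A minimal sketch of serving a static file (the paths and content type are assumptions based on the question's layout):

    import os

    def serve_static(connection, path):
        # map the URL path onto the www directory next to the script
        file_path = os.path.join(os.path.dirname(__file__), "www", path.lstrip("/"))
        with open(file_path, "rb") as f:
            body = f.read()
        content_type = "image/gif" if file_path.endswith(".gif") else "text/html; charset=utf-8"
        header = ("HTTP/1.1 200 OK\r\n"
                  "Content-Type: %s\r\n"
                  "Content-Length: %d\r\n\r\n" % (content_type, len(body))).encode("utf-8")
        connection.sendall(header + body)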
|
Is there a way to add sections to QListView in PySide or PyQt?
Question: This question is an exact duplicate of
[this](http://stackoverflow.com/questions/25560547/is-there-a-way-to-add-
sections-to-qlistview) unanswered question, except that I'm using Python.
I've got this.
[](http://i.stack.imgur.com/5qOhq.png)
And am looking for this.
[](http://i.stack.imgur.com/sCX8o.png)
I'm looking for hints as to how to approach this. Here's what I've considered
so far.
1. Add "virtual items" to the model itself. I'd rather not do this, in order to keep the model free of view related data. I intend to add additional views onto this model.
2. Add a proxy model per view. The proxy could add additional items and sort them appropriately. Albeit cleaner than (1), I'm not entirely convinced for the same reasons.
3. Subclass QListView, but I'm struggling to understand what to override.
4. Just write my own view; with a for-loop and QLabels, and synchronise with the model best I can. Gotta do what you gotta do.
Help!
Source.
import sys
from PySide import QtCore, QtGui
Label = QtCore.Qt.DisplayRole
Section = QtCore.Qt.UserRole + 1
class Model(QtCore.QAbstractListModel):
def __init__(self, parent=None):
super(Model, self).__init__(parent)
self.items = list()
def data(self, index, role):
item = self.items[index.row()]
if role == Label:
return item["label"]
if role == Section:
return item["section"]
def append(self, item):
"""Append item to end of model"""
self.beginInsertRows(QtCore.QModelIndex(),
self.rowCount(),
self.rowCount())
self.items.append(item)
self.endInsertRows()
def rowCount(self, parent=None):
return len(self.items)
app = QtGui.QApplication(sys.argv)
model = Model()
for item in ({"label": "Ben", "section": "Human"},
{"label": "Steve", "section": "Human"},
{"label": "Alpha12", "section": "Robot"},
{"label": "Mike", "section": "Toaster"}):
model.append(item)
view = QtGui.QListView()
view.setWindowTitle("My View")
view.setModel(model)
view.show()
app.exec_()
### Update 1 - Additional Information
For clarity, this question is about QListView rather than about alternatives
to it. The reason being that the rest of the application is developing in an
MVC-like fashion, where one or more views are drawing the unique set of data
present in the one model.
This particular view includes sections, the other views, which aren't
necessarily QListView's, shouldn't know anything about sections. For example,
one view may be a counter, listing the number of items available. Another
might be a pie, showing the ratio between items starting with the letter 'A'.
For further reference, what I'm looking for is exactly what ListView does in
QML.
* <http://doc.qt.io/qt-5/qml-qtquick-listview.html#section-prop>
That is, a single model with an additional delegate for optional sections. In
this case, the view doesn't require the model to contain these added members,
but rather draws them based on the existing data.
### Example
* [Source](https://github.com/pyblish/pyblish-qml/blob/3c420ac35ce286420475eeff73181f1b5738af26/pyblish_qml/qml/Overview.qml#L85)

### Update 2 - Work in progress
Ok, so I've got the extra items added to the bottom of the view using a
QSortFilterProxyModel, but I'm struggling to understand:
1. How do I assign them their corresponding data?
2. How do I sort them into place, above their "child" items?
[](http://i.stack.imgur.com/y7inR.png)
import sys
from PySide import QtCore, QtGui
Label = QtCore.Qt.DisplayRole
Section = QtCore.Qt.UserRole + 1
IsSection = QtCore.Qt.UserRole + 2
class Item(object):
@classmethod
def paint(cls, painter, option, index):
rect = QtCore.QRectF(option.rect)
painter.save()
if option.state & QtGui.QStyle.State_MouseOver:
painter.fillRect(rect, QtGui.QColor("#DEE"))
if option.state & QtGui.QStyle.State_Selected:
painter.fillRect(rect, QtGui.QColor("#CDD"))
painter.drawText(rect.adjusted(20, 0, 0, 0),
index.data(Label))
painter.restore()
@classmethod
def sizeHint(cls, option, index):
return QtCore.QSize(option.rect.width(), 20)
class Section(object):
@classmethod
def paint(self, painter, option, index):
painter.save()
painter.setPen(QtGui.QPen(QtGui.QColor("#666")))
painter.drawText(QtCore.QRectF(option.rect), index.data(Label))
painter.restore()
@classmethod
def sizeHint(self, option, index):
return QtCore.QSize(option.rect.width(), 20)
class Delegate(QtGui.QStyledItemDelegate):
def paint(self, painter, option, index):
if index.data(IsSection):
return Section.paint(painter, option, index)
else:
return Item.paint(painter, option, index)
def sizeHint(self, option, index):
if index.data(IsSection):
return Section.sizeHint(option, index)
else:
return Item.sizeHint(option, index)
class Model(QtCore.QAbstractListModel):
def __init__(self, parent=None):
super(Model, self).__init__(parent)
self.items = list()
def data(self, index, role):
item = self.items[index.row()]
return {
Label: item["label"],
Section: item["section"],
IsSection: False
}.get(role)
def append(self, item):
self.beginInsertRows(QtCore.QModelIndex(),
self.rowCount(),
self.rowCount())
self.items.append(item)
self.endInsertRows()
def rowCount(self, parent=None):
return len(self.items)
class Proxy(QtGui.QSortFilterProxyModel):
def data(self, index, role):
if index.row() >= self.sourceModel().rowCount():
return {
Label: "Virtual Label",
Section: "Virtual Section",
IsSection: True
}.get(role)
return self.sourceModel().data(index, role)
def rowCount(self, parent):
sections = 0
prev = None
for item in self.sourceModel().items:
cur = item["section"]
if cur != prev:
sections += 1
prev = cur
# Note: This includes 1 additional, duplicate, section
# for the bottom item. Ordering of items in model is important.
return self.sourceModel().rowCount() + sections
def index(self, row, column, parent):
return self.createIndex(row, column, parent)
def mapToSource(self, index):
if not index.isValid():
return QtCore.QModelIndex()
return self.sourceModel().createIndex(index.row(),
index.column(),
QtCore.QModelIndex())
def parent(self, index):
return QtCore.QModelIndex()
app = QtGui.QApplication(sys.argv)
model = Model()
for item in ({"label": "Ben", "section": "Human"},
{"label": "Steve", "section": "Human"},
{"label": "Alpha12", "section": "Robot"},
{"label": "Mike", "section": "Toaster"},
{"label": "Steve", "section": "Human"},
):
model.append(item)
proxy = Proxy()
proxy.setSourceModel(model)
delegate = Delegate()
view = QtGui.QListView()
view.setWindowTitle("My View")
view.setModel(proxy)
view.setItemDelegate(delegate)
view.show()
app.exec_()
Answer: What you want is a `QTreeWidget` (or `QTreeView` if you want separate
model/views, but you'll have to create your own model for that to work).
tree = QtGui.QTreeWidget()
tree.setHeaderLabels(['Name'])
data = ({"label": "Ben", "section": "Human"},
{"label": "Steve", "section": "Human"},
{"label": "Alpha12", "section": "Robot"},
{"label": "Mike", "section": "Toaster"})
sections = {}
for d in data:
sections.setdefault(d['section'], []).append(d['label'])
for section, labels in sections.items():
section_item = QtGui.QTreeWidgetItem(tree, [section])
for label in labels:
QtGui.QTreeWidgetItem(section_item, [label])
The only other option would be to use a `QListWidget/QListView` and use a
`QItemDelegate` to draw your _section_ items differently than your _label_
items.
|
Biopython: How to get the compound name of the pdb file of a protein?
Question: I've been trying to solve it with the following expression:
structure.header['compound']
But all I get is the molecule's id instead of its name!
Answer: In order to get the name of crystal structure, i.e. the name which is shown at
the PDB site, you can use:
print(structure.header['name'])
e.g. (assuming you have `1iah.pdb` in your current working directory)
from Bio.PDB import *
parser = PDBParser()
structure = parser.get_structure('1IAH', '1iah.pdb')
print(structure.header['name'])
will give you
> ' crystal structure of the atypical protein kinase domain of a trp ca-
> channel, chak (adp-mg complex)'
which is identical to the name shown here:
<http://www.rcsb.org/pdb/explore/explore.do?structureId=1IAH>
* * *
**Update in response to the comments**
In order to get the name of the compound one can use:
print(structure.header['compound']['1']['molecule'])
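The header is a plain dict, so if you are unsure which keys the parser extracted you can inspect it directly:

    print(structure.header.keys())
    print(structure.header['compound'])   # dict of molecule ids mapped to their details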
|
Soup.find_all is only returning Some of the results in Python 3.5.1
Question: I'm trying to get all of the urls for thumbnails from my webpage that have the
class = "thumb", but soup.find_all is only printing the most recent 22 or so.
Here is the Code:
import requests
from bs4 import BeautifulSoup
r = requests.get("http://rayleighev.deviantart.com/gallery/44021661/Reddit")
soup = BeautifulSoup(r.content, "html.parser")
links = soup.find_all("a", {'class' : "thumb"})
for link in links:
print(link.get("href"))
Answer: I think you meant to ask about _following the pagination_ and grabbing all the
links in a list. Here is an implementation of that idea: use the `offset`
parameter and grab links until there are no more links present, incrementing
the `offset` by 24 (the number of links per page):
import requests
from bs4 import BeautifulSoup
offset = 0
links = []
with requests.Session() as session:
while True:
r = session.get("http://rayleighev.deviantart.com/gallery/44021661/Reddit?offset=%d" % offset)
soup = BeautifulSoup(r.content, "html.parser")
new_links = [link["href"] for link in soup.find_all("a", {'class': "thumb"})]
# no more links - break the loop
if not new_links:
break
links.extend(new_links)
print(len(links))
offset += 24
print(links)
|
Iterate python dictionary to assign value to table's fields
Question: Using an API I get a json dictionary as response in the form:
{"transacDet":[{"field1":6933434802,"field2":323499903,"field3":293483842},
{"field1":693433448,"field2":333400300,"field3":298334300}],"responseStatus"
:{"statusMessage":"success","statusCode":0}}
I am looking to create records in a table having the same corresponding fields
as in the json response: field1, field2, field3.
From the above example I would have 2 records:
field1 = 6933434802
field2 = 323499903
field3 = 293483842
and then:
field1 = 693433448
field2 = 333400300
field3 = 298334300
I have to iterate the response but not clear how.
I try using:
for k in resp_json.items()[0][1][0]:
print k
but I'm stuck there. TIA
Answer:
a= {"transacDet":[{"field1":6933434802,"field2":323499903,"field3":293483842},
{"field1":693433448,"field2":333400300,"field3":298334300}],"responseStatus"
:{"statusMessage":"success","statusCode":0}}
import pandas as pd
df =pd.DataFrame(a['transacDet'])
df
Out[11]:
field1 field2 field3
0 6933434802 323499903 293483842
1 693433448 333400300 298334300
iterate the dataframe:
for i in df.iterrows(): print i[1], '\n'
field1 6933434802
field2 323499903
field3 293483842
Name: 0, dtype: int64
field1 693433448
field2 333400300
field3 298334300
Name: 1, dtype: int64
If you don't want to use `pandas`, just use a list comprehension;
this is based on @Dilettant's solution:
[(k, d[k]) for d in resp_json['transacDet'] for k in sorted(d.keys())]
Out[24]:
[(u'field1', 6933434802L),
(u'field2', 323499903),
(u'field3', 293483842),
(u'field1', 693433448),
(u'field2', 333400300),
(u'field3', 298334300)]
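If you would rather end up with one dict per record, closer to the table rows you describe, a plain loop over `transacDet` also works without pandas:

    records = [
        {k: det[k] for k in ('field1', 'field2', 'field3')}
        for det in resp_json['transacDet']
    ]
    # records[0]['field2'] -> 323499903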
|
Python download nested div
Question: I'm trying to find data in a div tag and download it; however, BeautifulSoup
returns 'None'. As far as I can tell, not all divs are being downloaded,
which is why BeautifulSoup cannot find the one I want.
import requests
from bs4 import BeautifulSoup
#Get login information
URL = raw_input("Enter the URL: ")
username = raw_input("Enter the username: ")
password = raw_input("Enter the password: ")
#Start a session
session = requests.Session()
# Send login data
login_data = {
'user': username,
'password': password,
}
# Authenticate
r = session.post(URL, data = login_data)
#Get page
r = session.get(URL)
html = BeautifulSoup(r.text)
entry = html.find('div',{'class':'x-grid-empty'})
#open file in binary mode
file = open("Website.txt","wb")
file.writelines(str(entry))
file.close()
Answer: If you're sure the problem lies with BeautifulSoup, then give `lxml` a try:
<http://lxml.de/>
In my experience lxml has been far more reliable.
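For example, with `lxml.html` the same lookup from your script becomes (reusing the question's `r`):

    import lxml.html

    tree = lxml.html.fromstring(r.text)
    entries = tree.find_class('x-grid-empty')   # list of matching elements
    entry = entries[0] if entries else None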
If `requests` is the culprit, however, then either upgrade it or use another
tool; for one, you can use the built-in
[urllib2](https://docs.python.org/2/library/urllib2.html).
|
How can I make sure users are turning in correct input, for Guess game in python 3.5.1?
Question: This is my code:
from random import randint
print("What is your name?")
name=input()
tries=0
number= randint(1,100)
print("Hello, %s guess a number from 0 to 100" % name)
while tries < 8:
print("Take a Guess")
guess=input()
guess=int(guess)
if guess < number:
print("Too Low")
tries += 1
if guess > number:
print("Too High")
tries += 1
if guess == number:
break
if guess == number:
print("You got it, in %s tries" % tries)
if guess != number:
print("Sorry try again!")
But it gives an error:
Traceback (most recent call last):
File "/Users/Zuazua/PycharmProjects/untitled/GuessMyNumber2.py", line 10, in <module>
guess=int(guess)
ValueError: invalid literal for int() with base 10: '23!'
Answer: Change the line `guess=int(guess)` into this (you will also need `import re`
at the top of the script):

    guess = int(re.search(r'\d+', guess).group())

This grabs the first run of digits and ignores trailing characters like
`23!`, but note it will still raise an error if the input contains no digits
at all.
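A more robust alternative, if you want to validate rather than salvage the input, is to loop until `int()` succeeds; this handles any non-numeric input, not just trailing characters:

    while True:
        try:
            guess = int(input("Take a Guess: "))
            break
        except ValueError:
            print("Please enter a whole number.")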
|
Using python, I need to average value based on two keys in two columns from a CSV file
Question: I have a csv file with 3 columns
TMC, EPOCH, Time
11C12, 1, 24
11C12, 1, 34
11C12, 2, 56
11C12, 2, 78
11C13, 1, 56
11C13, 2, 45
11C13, 2, 64
11C13, 3, 32
11C13, 3, 28
Now I want to have average.py file which calculates average of time for each
combination of TMC, EPOCH and write that to a txt or csv file
The desired output is:
TMC, EPOCH, Average Time
11C12, 1, average value
11C12, 2, average value
11C13, 1, average value
11C13, 2, average value
11C13, 3, average value
Answer: Use a `defaultdict` to group the elements using the forst two columns as the
key and appending the times then average and write to the new csv:
import csv
from collections import defaultdict
with open("in.csv") as f, open("average.csv", "w") as out:
wr = csv.writer(out)
d = defaultdict(list)
head = next(f)
out.write(head)
for row in csv.reader(f):
d[tuple(row[:2])].append(int(row[-1]))
for k, v in d.items():
out.write("{},{},{}\n".format(k[0], k[1], sum(v, 0.0) / len(v)))
Output:
TMC,EPOCH,Time
11C12,1,29.0
11C12,2,67.0
11C13,1,56.0
11C13,2,54.5
11C13,3,30.0
If you want to keep the order the elements are first seen you can use an
`OrderedDict`:
import csv
from collections import OrderedDict
with open("in.csv") as f, open("average.csv", "w") as out:
wr = csv.writer(out)
d = OrderedDict()
head = next(f)
out.write(head)
for row in csv.reader(f):
d.setdefault(tuple(row[:2]), []).append(int(row[-1]))
for k, v in d.items():
out.write("{},{},{}\n".format(k[0], k[1], sum(v, 0.0) / len(v)))
|
Python Maze Generation
Question: I am trying to make a python maze generator but I keep getting an IndexError:
list index out of range. Any ideas? I'm kinda new to this stuff so I was using
the code from [rosetta
code](http://rosettacode.org/wiki/Maze_generation#Python) on maze generation.
I am not painting the maze right now; I just want the algorithm to work.
from random import shuffle, randrange
maze_rows = 10
maze_columns = 10
maze = [[0 for rows in range(maze_rows)] for columns in range(maze_columns)]
def maze_generation(x,y):
maze[y][x] = 1
walk = [(x - 1, y), (x, y + 1), (x + 1, y), (x, y - 1)]
shuffle(walk)
for (xx, yy) in walk:
if maze[yy][xx]: continue
if xx == x:
maze[max(y,yy)][x] = 1
if yy == y:
maze[y][max(x,xx)] = 1
maze_generation(xx,yy)
maze_generation(randrange(maze_rows),randrange(maze_columns))
Answer: Your code has only _some_ resemblance to the Rosetta Code version. In
particular, you dropped the border of sentinel wall cells, so neighbour
coordinates such as `x + 1 == maze_rows` run past the end of your lists (that
is your `IndexError`), while `x - 1 == -1` silently wraps around to the other
side. Here is my attempt, closer to the original; I haven't tested it.
from random import shuffle, randrange
def make_maze(w = 16, h = 8):
vis = [[0] * w + [1] for _ in range(h)] + [[1] * (w + 1)]
def walk(x, y):
vis[y][x] = 1
d = [(x - 1, y), (x, y + 1), (x + 1, y), (x, y - 1)]
shuffle(d)
for (xx, yy) in d:
if vis[yy][xx]: continue
walk(xx, yy)
walk(randrange(w), randrange(h))
return vis
print(make_maze())
|
Tensorflow: restoring a graph and model then running evaluation on a single image
Question: I think it would be immensely helpful to the Tensorflow community if there was
a well-documented solution to the crucial task of testing a single new image
against the model created by the [convnet in the CIFAR-10
tutorial](https://www.tensorflow.org/versions/r0.8/tutorials/deep_cnn/index.html).
I may be wrong, but this critical step that makes the trained model usable in
practice seems to be lacking. There is a "missing link" in that tutorial—a
script that would directly load a single image (as array or binary), compare
it against the trained model, and return a classification.
Prior answers give partial solutions that explain the overall approach, but
none of which I've been able to implement successfully. Other bits and pieces
can be found here and there, but unfortunately haven't added up to a working
solution. Kindly consider the research I've done, before tagging this as
duplicate or already answered.
[Tensorflow: How to restore a previously saved model
(python)](http://stackoverflow.com/questions/33759623/tensorflow-how-to-
restore-a-previously-saved-model-python)
[Restoring TensorFlow
model](http://stackoverflow.com/questions/34982492/restoring-tensorflow-model)
[Unable to restore models in tensorflow
v0.8](http://stackoverflow.com/questions/37187597/unable-to-restore-models-in-
tensorflow-v0-8)
<https://gist.github.com/nikitakit/6ef3b72be67b86cb7868>
The most popular answer is the first, in which @RyanSepassi and
@YaroslavBulatov describe the problem and an approach: one needs to "manually
construct a graph with identical node names, and use Saver to load the weights
into it". Although both answers are helpful, it is not apparent how one would
go about plugging this into the CIFAR-10 project.
A fully functional solution would be highly desirable so we could port it to
other single image classification problems. There are several questions on SO
in this regard that ask for this, but still no full answer (for example [Load
checkpoint and evaluate single image with tensorflow
DNN](http://stackoverflow.com/questions/37058236/load-checkpoint-and-evaluate-
single-image-with-tensorflow-dnn)).
I hope we can converge on a working script that everyone could use.
The below script is not yet functional, and I'd be happy to hear from you on
how this can be improved to provide a solution for single-image classification
using the CIFAR-10 TF tutorial trained model.
Assume all variables, file names etc. are untouched from the original
tutorial.
New file: **cifar10_eval_single.py**
import cv2
import tensorflow as tf
FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_string('eval_dir', './input/eval',
"""Directory where to write event logs.""")
tf.app.flags.DEFINE_string('checkpoint_dir', './input/train',
"""Directory where to read model checkpoints.""")
def get_single_img():
file_path = './input/data/single/test_image.tif'
pixels = cv2.imread(file_path, 0)
return pixels
def eval_single_img():
# below code adapted from @RyanSepassi, however not functional
# among other errors, saver throws an error that there are no
# variables to save
with tf.Graph().as_default():
# Get image.
image = get_single_img()
# Build a Graph.
# TODO
# Create dummy variables.
x = tf.placeholder(tf.float32)
w = tf.Variable(tf.zeros([1, 1], dtype=tf.float32))
b = tf.Variable(tf.ones([1, 1], dtype=tf.float32))
y_hat = tf.add(b, tf.matmul(x, w))
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
if ckpt and ckpt.model_checkpoint_path:
saver.restore(sess, ckpt.model_checkpoint_path)
print('Checkpoint found')
else:
print('No checkpoint found')
# Run the model to get predictions
predictions = sess.run(y_hat, feed_dict={x: image})
print(predictions)
def main(argv=None):
if tf.gfile.Exists(FLAGS.eval_dir):
tf.gfile.DeleteRecursively(FLAGS.eval_dir)
tf.gfile.MakeDirs(FLAGS.eval_dir)
eval_single_img()
if __name__ == '__main__':
tf.app.run()
Answer: Here's how I ran a single image at a time. I'll admit it seems a bit hacky
with the reuse of getting the scope.
This is a helper function
def restore_vars(saver, sess, chkpt_dir):
""" Restore saved net, global score and step, and epsilons OR
create checkpoint directory for later storage. """
sess.run(tf.initialize_all_variables())
checkpoint_dir = chkpt_dir
if not os.path.exists(checkpoint_dir):
try:
os.makedirs(checkpoint_dir)
except OSError:
pass
path = tf.train.get_checkpoint_state(checkpoint_dir)
#print("path1 = ",path)
#path = tf.train.latest_checkpoint(checkpoint_dir)
print(checkpoint_dir,"path = ",path)
if path is None:
return False
else:
saver.restore(sess, path.model_checkpoint_path)
return True
Here is the main part of the code that runs a single image at a time within
the for loop.
to_restore = True
with tf.Session() as sess:
for i in range(test_img_set):
# Gets the image
images = get_image(i)
images = np.asarray(images,dtype=np.float32)
images = tf.convert_to_tensor(images/255.0)
# resize image to whatever you're model takes in
images = tf.image.resize_images(images,256,256)
images = tf.reshape(images,(1,256,256,3))
images = tf.cast(images, tf.float32)
saver = tf.train.Saver(max_to_keep=5, keep_checkpoint_every_n_hours=1)
#print("infer")
with tf.variable_scope(tf.get_variable_scope()) as scope:
if to_restore:
logits = inference(images)
else:
scope.reuse_variables()
logits = inference(images)
if to_restore:
restored = restore_vars(saver, sess,FLAGS.train_dir)
print("restored ",restored)
to_restore = False
logit_val = sess.run(logits)
print(logit_val)
Here is an alternative implementation of the above using placeholders; it's a
bit cleaner in my opinion, but I'll leave the example above for historical
reasons.
    imgs_place = tf.placeholder(tf.float32, shape=[my_img_shape_put_here])
    images = tf.reshape(imgs_place, (1, 256, 256, 3))
    saver = tf.train.Saver(max_to_keep=5, keep_checkpoint_every_n_hours=1)
    logits = inference(images)
    with tf.Session() as sess:
        restored = restore_vars(saver, sess, FLAGS.train_dir)
        print("restored ", restored)
        for i in range(test_img_set):
            img = get_image(i)  # same helper as in the first example
            logit_val = sess.run(logits, feed_dict={imgs_place: img})
            print(logit_val)
|
Extracting repeated patterns from a text file in python
Question: I am looking to extract all the text between a repeating pattern in a text
file. My text file XYZ.txt looks something like this:
Start
This is a great day
End
Start
This is another great day
End
Start
This is 3rd great day
End
I am looking to extract all the text between every Start and End; my output
should be like:
This is a great day
This is another great day
This is 3rd great day
and I am looking to save all the outputs as separate HTML file. The code that
I am using is as follows:
import re
with open('XYZ.txt') as myfile:
content = myfile.read()
text = re.search(r'Start\n.*?End', content, re.DOTALL).group()
print(text)
But the code above only prints the first match. I am not sure how I can print
all the values between the pattern and save them as separate HTML files. I
would really appreciate any directions.
Thank You
Answer: You need to use `re.findall` to find all occurrences of regex.
>>> lines
'Start\n\nThis is a great day\n\nEnd\n\nStart\nThis is another great day\n\nEnd\n\nStart\nThis is 3rd great day\nEnd\n'
>>>
>>> re.findall('This is.*day', lines)
['This is a great day', 'This is another great day', 'This is 3rd great day']
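That said, the hard-coded `'This is.*day'` pattern only matches this particular sample. To keep your `Start`/`End` delimiters and also write each block to its own HTML file, a sketch (the output file naming is an assumption):

    import re

    with open('XYZ.txt') as myfile:
        content = myfile.read()

    blocks = re.findall(r'Start\n(.*?)\nEnd', content, re.DOTALL)
    for i, block in enumerate(blocks):
        with open('output_%d.html' % i, 'w') as out:
            out.write('<html><body><p>%s</p></body></html>' % block.strip())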
|
How do I import Tkinter?
Question: When I try to `import tkinter`, this is the message I get:
Traceback (most recent call last):
File "Num_inc_dec.py", line 1, in <module>
from Tkinter import *
ImportError: No module named Tkinter
I've searched for my error and have found nothing. I've tried just typing
`import Tkinter`. I'm using python 2.7, but just in case I have also tried
using a lower case `t`.
Additional:
I'm using cygwin to launch my `.py` files, as I'm new to programming and that
was how the tutorial I followed set up a working environment. I suspect this
is part of, if not the whole, problem, as I can find nothing online fixing
it.
Edit: I fixed it. When installing cygwin I had to enable something to get it
to work. I searched tkinter and enabled it from the thing that came up.
Answer: "The Python Windows Installers include Tcl/Tk as well as Tkinter. These are
essentially a one-click install of everything needed."
<http://tkinter.unpythonic.net/wiki/How_to_install_Tkinter>
I would check that you are actually running the correct version of Python.
Perhaps try explicitly stating you would like to run 2.7 with
$python2.7
>>> import Tkinter
As
import Tkinter
will return an `ImportError` in **python 3.5**. But will execute correctly in
2.7.
Then if the problem has not been solved, just reinstall **python 2** with the
windows installer. Tk and Tkinter are packaged with the python installer.
|
Using Scrapy Itemloader in a loop
Question: I want to use Scrapy on the Dmoz website they use in their tutorials, but
instead of just reading the books in the books URL
(<http://www.dmoz.org/Computers/Programming/Languages/Python/Books/>) by using
the Item/Field pairs, I want to create an Itemloader that will read in the
desired values (name, title, description).
This is my items.py file:
from scrapy.item import Item, Field
from scrapy.contrib.loader import ItemLoader
from scrapy.contrib.loader.processor import Identity
class DmozItem(Item):
title = Field(
output_processor=Identity()
)
link = Field(
output_processor=Identity()
)
desc = Field(
output_processor=Identity()
)
class MainItemLoader(ItemLoader):
default_item_class = DmozItem
default_output_processor = Identity()
And my spider file:
import scrapy
from scrapy.spiders import Spider
from scrapy.loader import ItemLoader
from tutorial.items import MainItemLoader, DmozItem
from scrapy.selector import Selector
class DmozSpider(Spider):
name = 'dmoz'
allowed_domains = ["dmoz.org"]
start_urls = [
"http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"
]
def parse(self, response):
for sel in response.xpath('//div[@class="site-item "]/div[@class="title-and-desc"]'):
l = MainItemLoader(response=response)
l.add_xpath('title', '/a/div[@class="site-title"]/text()')
l.add_xpath('link', '/a/@href')
l.add_xpath('desc', '/div[@class="site-descr "]/text()')
yield l.load_item()
I have tried a number of different alternatives. I suspect the main issue is
in the "response=response" part of the itemloader declaration, but I can't
make heads or tails of the scrapy documentation regarding this. Could using
the selector="blah" syntax be where I should look?
If I run this, I get a list of 22 empty brackets (the correct number of
books). If I change the first slash in each add_xpath line to be a double
slash, I get 22 identical lists containing ALL the data (unsurprisingly).
**How can I write this so the itemloader will make a new list containing the
desired fields for each different book?**
Thank you!
Answer: You need to let your `ItemLoader` work inside a specific _selector_ , not
`response`:
l = MainItemLoader(selector=sel)
l.add_xpath('title', './a/div[@class="site-title"]/text()')
l.add_xpath('link', './a/@href')
l.add_xpath('desc', './div[@class="site-descr "]/text()')
yield l.load_item()
Also note the dots at the beginning of XPath expressions.
|
how to know exception point
Question: I have a source code of 500Mb having more than 5K of files written in Python.
Sometimes I get exception messages but no idea about the line number and file
name of exception. Even sometimes exceptions are not seen on terminal unlit I
specifically use pdb.
Is there any convenient way to get to know about the exception location ?
Thanks.
Answer: Try the traceback module: <https://docs.python.org/3/library/traceback.html>
The traceback object's `tb_lineno` attribute gives the line number of the
exception, and the module's helpers (`print_exc`, `format_exception`,
`extract_tb`) also report the file name of every frame.
EDIT:
import sys, traceback
def lumberjack():
bright_side_of_death()
def bright_side_of_death():
return tuple()[0]
try:
lumberjack()
except IndexError:
exc_type, exc_value, exc_traceback = sys.exc_info()
print("*** print_tb:")
traceback.print_tb(exc_traceback, limit=1, file=sys.stdout)
print("*** print_exception:")
traceback.print_exception(exc_type, exc_value, exc_traceback,
limit=2, file=sys.stdout)
print("*** print_exc:")
traceback.print_exc()
print("*** format_exc, first and last line:")
formatted_lines = traceback.format_exc().splitlines()
print(formatted_lines[0])
print(formatted_lines[-1])
print("*** format_exception:")
print(repr(traceback.format_exception(exc_type, exc_value,
exc_traceback)))
print("*** extract_tb:")
print(repr(traceback.extract_tb(exc_traceback)))
print("*** format_tb:")
print(repr(traceback.format_tb(exc_traceback)))
print("*** tb_lineno:", exc_traceback.tb_lineno)
|
Using Python to communicate with web socket using JSON
Question: In order to better understand how websockets are used beyond the basic hello-
world, I set myself the task of getting some data from a page using websockets
and JSON (because the source code of gitxiv is readily available, I chose to
look at <http://gitxiv.com/day/2015/12/31>).
Connecting to this websocket via Python seems to be straightforward
from websocket import create_connection
import websocket
import pprint
websocket.enableTrace(True)
ws=create_connection("ws://gitxiv.com/sockjs/212/2aczpiim/websocket")
result = ws.recv()
print "Received '%s'" % result
result = ws.recv()
print "Received '%s'" % result
I'm not entirely clear about the variables in the ws:// url, like '212'.
Running this code seems to reliably connect (although it is always possible
that failing to have the right variables in there causes the server to refuse
to cooperate later?)
Now if I watch the communication between Firefox and the gitxiv page, I see
that following connection of the websocket the server sends
o
a["{\"server_id\":\"0\"}"]
The above script gets the same response, so it seems that the connection is
successfully made.
However, this is where I stumble. The next step in the communication is that
my browser sends quite a lot of information to the web service, such as the
line:
"["{\"msg\":\"connect\",\"version\":\"1\",\"support\":[\"1\",\"pre2\",\"pre1\"]}"]"
Sending these lines directly using ws.send() results in 'broken framing'.
Sending just:
controlstr='{"msg":"connect","version":"1","support":["1","pre2","pre1"]}';
ws.send(controlstr)
results in something being sent that looks like:
send: '\x81\xbd\xef\x17F8\x945+K\x885|\x1a\x8cx(V\x8at2\x1a\xc350]\x9dd/W\x815|\x1a\xde5j\x1a\x9cb6H\x80e2\x1a\xd5Ld\t\xcd;dH\x9drt\x1a\xc356J\x8a&de\x92'
I get a different error:
'a["{\\"msg\\":\\"error\\",\\"reason\\":\\"Bad request\\"}"]'
It seems, therefore, that there is something wrong in the way that I am
sending this JSON message to the websocket. Does anybody know what format it
expects, and how to achieve it using websocket/websocket-client? Any
clarification/suggestions would be most welcome.
The JSON messages I am looking to send are those that Firefox's Websocket
developer tool reports: here is a screenshot:
[](http://i.stack.imgur.com/mTG8X.png)
Answer: If you look closely at what's sent through the browser, notice that it's:
["{\"msg\":\"connect\"}"]
This looks an awful lot like an array of JSON strings. Indeed, if you try to
replicate it:
ws.send(json.dumps([json.dumps({'msg': 'connect', 'version': '1', 'support': ['1', 'pre2', 'pre1']})]))
You'll see that you get connected. Here's my entire code:
import json
import pprint
import websocket
from websocket import create_connection
websocket.enableTrace(True)
ws = create_connection('ws://gitxiv.com/sockjs/212/2aczpiim/websocket')
result = ws.recv()
print('Result: {}'.format(result))
result = ws.recv()
print('Result: {}'.format(result))
ws.send(json.dumps([json.dumps({'msg': 'connect', 'version': '1', 'support': ['1', 'pre2', 'pre1']})]))
result = ws.recv()
print('Result: {}'.format(result))
|
interpolate missing values 2d python
Question: I have a 2d array (or matrix if you prefer) with some missing values
represented as `NaN`. The missing values are typically in a strip along one
axis, eg:
1 2 3 NaN 5
2 3 4 Nan 6
3 4 Nan Nan 7
4 5 Nan Nan 8
5 6 7 8 9
where I would like to replace the `NaN`s with somewhat sensible numbers.
I looked into Delaunay triangulation, but found very little documentation.
I tried using [`astropy`'s
convolve](http://docs.astropy.org/en/stable/convolution/index.html) as it
supports use of 2d arrays, and is quite straightforward. The problem with this
is that convolution is not interpolation, it moves all values towards the
average (which could be mitigated by using a narrow kernel).
This question should be the natural 2-dimensional extension to [this
post](http://stackoverflow.com/questions/6518811/interpolate-nan-values-in-a-
numpy-array). Is there a way to interpolate over `NaN`/missing values in a
2d-array?
Answer: Yes, you can use
[`scipy.interpolate.griddata`](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.interpolate.griddata.html)
together with a masked array, and you can choose the type of interpolation you
prefer with the `method` argument; usually `'cubic'` does an excellent job:
import numpy as np
from scipy import interpolate
#Let's create some random data
array = np.random.random_integers(0,10,(10,10)).astype(float)
    #values greater than 7 go to np.nan
array[array>7] = np.nan
That looks something like this using
`plt.imshow(array,interpolation='nearest')` :
[](http://i.stack.imgur.com/yrxg8.png)
x = np.arange(0, array.shape[1])
y = np.arange(0, array.shape[0])
#mask invalid values
array = np.ma.masked_invalid(array)
xx, yy = np.meshgrid(x, y)
#get only the valid values
x1 = xx[~array.mask]
y1 = yy[~array.mask]
newarr = array[~array.mask]
GD1 = interpolate.griddata((x1, y1), newarr.ravel(),
(xx, yy),
method='cubic')
This is the final result:
[](http://i.stack.imgur.com/Zmti2.png)
Note that if the nan values are at the edges and are surrounded by other nan
values, they can't be interpolated and are kept `nan`. You can change that with
the `fill_value` argument.
> How would this work if there is a 3x3 region of NaN-values, would you get
> sensible data for the middle point?
It depends on your kind of data; you have to perform some tests. You could, for
instance, mask some good data on purpose, try different kinds of interpolation
(e.g. cubic, linear, etc.) on the array with the masked values, then calculate
the difference between the interpolated values and the original values you had
masked, and see which method returns the smallest difference.
You can use something like this:
    reference = array[3:6,3:6].copy()
    array[3:6,3:6] = np.nan
    # re-mask and re-collect the valid points, since more values are nan now
    array = np.ma.masked_invalid(array)
    x1 = xx[~array.mask]
    y1 = yy[~array.mask]
    newarr = array[~array.mask]
    method = ['linear', 'nearest', 'cubic']
    for i in method:
        GD1 = interpolate.griddata((x1, y1), newarr.ravel(),
                                   (xx, yy),
                                   method=i)
        meandifference = np.mean(np.abs(reference - GD1[3:6,3:6]))
        print ' %s interpolation difference: %s' % (i, meandifference)
That gives something like this:
linear interpolation difference: 4.88888888889
nearest interpolation difference: 4.11111111111
cubic interpolation difference: 5.99400137377
Of course this is for random numbers, so it's normal that the results vary a
lot. The best thing to do is to test on an on-purpose-masked piece of your
dataset and see what happens.
|
How do I identify the user type of an IP address? Using python and requests/bs4/flask
Question: I'm using python with requests/bs4/flask and I'd like to identify the user
type of an incoming IP request to my flask app.
<http://ipleak.net> identifies the type as residential, college, cafe,
corporate, etc., but they only check your IP.
[GeoIP2](https://www.maxmind.com/en/geoip2-precision-insights) is the API that
drives ipleak.net for this and the returned parameter is called User Type.
How do I identify an IP type without using their API? I'm okay with less
accuracy/classification. Is there a public API for this? Or can I scrape it
from a whois DB? Or can I identify it in another way?
Answer: I monitored the network traffic and found the call to check the type of IP. I
wrote this function to check any IP that I pass to the function.
import requests
from bs4 import BeautifulSoup
# check type of IP address
def ip_check(ip):
result = requests.get('https://ipleak.net/?mode=ajax&ip={}'.format(ip))
soup = BeautifulSoup(result.text, 'html.parser')
parse = soup.get_text()
final = parse.replace('\xa0', '').replace(ip, '')
return final
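Calling it then just returns the type string that the site reports. Note that
this relies on ipleak.net's undocumented ajax endpoint, so it may break if the
site changes; the IP below is only a placeholder:
    # prints whatever user-type string ipleak.net reports for this IP
    print(ip_check('8.8.8.8'))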
|
websocket.recv() never returns inside another event loop
Question: I am currently developing a **server** program in Python that uses the
websockets and asyncio packages.
I got a basic script handling websockets working (Exhibit A). This script
locks when waiting for input, which is not what I want.
The solution for this that I imagine is I can start two asynchronous tasks -
one that handles inputs and one that handles outputs - and start them in a
secondary event loop. I had to do some research about coroutines, and I came
up with Exhibit B as a proof of concept for running two things simultaneously
in an event loop.
Now what I'm stuck on is Exhibit C. When I attempted to use this in a
practical scenario with the websockets package, I found that websocket.recv()
never finishes (or the coroutine never un-pauses - I'm not sure what's going
on exactly). In exhibit A it works fine, and I've determined that the
coroutine definitely runs at least up until that point.
Any ideas?
Exhibit A:
#!/usr/bin/python3
import asyncio
import websockets
import time
# This works great!
async def hello(websocket, path):
while True:
# This line waits for input from socket
name = await websocket.recv()
print("< {}".format(name))
# "echo... echo... echo... echo... echo..."
greeting = ''.join(name + "... " for x in range(5))
await websocket.send(greeting)
print("> {}".format(greeting))
time.sleep(0.1);
start_server = websockets.serve(hello, '', 26231)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
Exhibit B:
#!/usr/bin/python3
import asyncio
import time
class Thing:
def __init__(self):
pass
async def test(self):
for x in range(10):
print("Hello"+str(x))
await asyncio.sleep(0)
def run(self):
# Add the task to the event loop twice
asyncio.ensure_future(self.test())
asyncio.ensure_future(self.test())
t = Thing()
t.run()
loop = asyncio.get_event_loop();
loop.run_forever()
Exhibit C:
#!/usr/bin/python3
import asyncio
import websockets
import time
class WebsocketRequest:
def __init__(self, websocket):
self.websocket = websocket
# Works great
async def handle_oputs(self):
# This works fine - sends a message
# every 10 seconds to the client
while True:
print("sending...")
await self.websocket.send("Hello")
print("> {}".format("Hello"))
time.sleep(10)
# Doesn't work
async def handle_iputs(self):
# This stops at the await and never receives
# any input from the client :/
while True:
try:
print("receiving...")
# This is the line that causes sadness
data = await self.websocket.recv()
print("< {}".format(data))
except:
# This doesn't happen either
print("Listener is dead")
async def run(self):
# This is the part where the coroutine for
# a client get split off into two of them
# to handle input and output separately.
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
asyncio.ensure_future(self.handle_iputs())
asyncio.ensure_future(self.handle_oputs())
loop.run_forever()
class WebsocketServer:
def __init__(self, address):
self.ip = address[0]
self.port = address[1]
async def hello(self, websocket, path):
req = WebsocketRequest(websocket)
await req.run()
def run(self):
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
start_server = websockets.serve(self.hello, self.ip, self.port)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
Answer: Maybe the module websocket (unlike websockets) can help you.
The use of WebSocketApp is very simple (the callbacks are plain functions,
defined before the class so the names exist when `__init__` runs):
    import websocket
    def on_message(ws, msg):
        print "Message Arrived:" + msg
    def on_error(ws, error):
        print error
    def on_close(ws):
        print "Connection Closed"
    def on_open(ws):
        ws.send("Hello!")
    class ExampleClass(object):
        def __init__(self):
            websocket.enableTrace(True)
            self.ws = websocket.WebSocketApp("ws://echo.websocket.org",
                                             on_open=on_open,
                                             on_message=on_message,
                                             on_error=on_error,
                                             on_close=on_close)
To download this module: <https://pypi.python.org/pypi/websocket-client>
|
How to input both wind speed and direction data and pcolormesh plot it in python, with both on the same plot?
Question: I have some wind speed and direction data that I need to plot in
Python. I got the data from a data file and did some calculations to get the
wind speed and direction; here is what I got:
wind_speed = np.sqrt(u**2+ v**2)
wind_speed
wind_dir_trig_to = (np.arctan2((u/wind_speed), (v/wind_speed)))
wind_dir_trig_to_degrees = (wind_dir_trig_to * (180/np.pi))
wind_dir_trig_from_degrees = wind_dir_trig_to_degrees + 180
wind_dir_trig_from_degrees
wind_dir_cardinal = 90 - wind_dir_trig_from_degrees
wind_dir_cardinal
This is the output of the code shown above:
array([[ 25.05589294, 26.44908142, 25.87358856, ..., 26.09784698,
24.73834229, 23.79068756],
[ 25.00778198, 26.04024506, 25.52288055, ..., 24.56259918,
22.53238678, 21.8249054 ],
[ 23.54372406, 23.90814972, 24.1379776 , ..., 24.2286377 ,
22.26264191, 21.62586975],
...,
[ 26.37328339, 27.58541107, 27.02276611, ..., 22.10659027,
22.27283478, 23.06639862],
[ 26.5234375 , 26.64894867, 25.70041656, ..., 24.78749084,
24.94545746, 25.05831909],
[ 25.57256317, 24.58295441, 23.94006348, ..., 27.29759979,
27.22042084, 26.55405426]], dtype=float32)
How do I use this data to plot both wind speed and direction on the same
pcolormesh graph in Python?
Answer: How about:
import matplotlib.pyplot as pl
pl.pcolormesh(wind_dir_cardinal)
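That only shows the direction field, though. A common way to get both
quantities on one plot is to use `pcolormesh` for the speed and overlay
`quiver` arrows for the direction; a sketch, assuming `u` and `v` are the 2-D
component arrays from the question:
    import numpy as np
    import matplotlib.pyplot as plt
    fig, ax = plt.subplots()
    # color field: wind speed
    mesh = ax.pcolormesh(wind_speed)
    fig.colorbar(mesh, ax=ax, label='wind speed')
    # arrows: wind direction, subsampled so the plot stays readable
    step = 10
    yy, xx = np.mgrid[0:u.shape[0]:step, 0:u.shape[1]:step]
    ax.quiver(xx, yy, u[::step, ::step], v[::step, ::step])
    plt.show()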
|
Obtaining HTML source code with Python cookie
Question:
import urllib
#my url here stored as url
htmlfile = urllib.urlopen(url)
htmltext = htmlfile.read()
print(htmltext)
I'm trying to get the source code from a URL.
I get source code, but it is from a different page saying two things: please
enable cookies, and this domain has banned your access based on your browser's
signature.
Is there any way that anyone knows of to get the source code when the server
knows you're not actually browsing the page?
Answer: You may have to set up a URL opener:
def createOpener(self):
handlers = []
cj = MyCookieJar();
cj.set_policy(cookielib.DefaultCookiePolicy(rfc2965=True))
cjhdr = urllib2.HTTPCookieProcessor(cj)
handlers.append(cjhdr)
opener = urllib2.build_opener(*handlers)
opener.addheaders = [('User-Agent', self.getUserAgent()),
('Host', 'google.com')]
return opener
where the cookie jar is
class MyCookieJar(cookielib.CookieJar):
def _cookie_from_cookie_tuple(self, tup, request):
name, value, standard, rest = tup
version = standard.get('version', None)
if version is not None:
version = version.replace('"', '')
standard["version"] = version
return cookielib.CookieJar._cookie_from_cookie_tuple(self, tup, request)
At this point you create the opener and fetch the data by reading the URL
handle, like:
def fetchURL(self, url, data=None, headers={}):
request = urllib2.Request(url, data, headers)
self.opener = self.createOpener()
urlHandle = self.opener.open(request)
return urlHandle.read()
It's a good idea to have a `User-Agent` list and read from it:
with open(ffpath) as f:
USER_AGENTS_LIST = f.read().splitlines()
and get a random one from it
index = random.randint(0,len(USER_AGENTS_LIST)-1)
uA=USER_AGENTS_LIST[index]
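Equivalently, `uA = random.choice(USER_AGENTS_LIST)` does the same pick in one
call.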
To get a list of user agents, take a look
[here](http://stackoverflow.com/questions/36510058/javascript-user-agents-api-
to-fetch-and-filter-by-browser-class-type).
This is just to give an idea of how to do this without any external framework.
|
Python: concurrent file seek
Question: I am looking for a way to allow concurrent file object seeking.
As a test case of file seeking going awry:
#!/usr/bin/env python2
import time, random, os
s = 'The quick brown fox jumps over the lazy dog'
# create some file, just for testing
f = open('file.txt', 'w')
f.write(s)
f.close()
# the actual code...
f = open('file.txt', 'rb')
def fn():
out = ''
for i in xrange(10):
k = random.randint(0, len(s)-1)
f.seek(k)
time.sleep(random.randint(1, 4)/10.)
out += s[k] + ' ' + f.read(1) + '\n'
return out
import multiprocessing
p = multiprocessing.Pool()
n = 3
res = [p.apply_async(fn) for _ in xrange(n)]
for r in res:
print r.get()
f.close()
I have worker processes, which do random seeking within the file, then
`sleep`, then `read`. I compare what they `read` to the actual string
character. I do not print right away to avoid concurrency issues with
printing.
You can see that when `n=1`, it all goes well, but everything goes astray when
`n>1` due to concurrency in the file descriptor.
I have tried to duplicate the file descriptor within `fn()`:
def fn():
        fd = os.dup(f.fileno())  # duplicate the underlying descriptor
f2 = os.fdopen(fd)
And then I use `f2`. But it does not seem to help.
How can I do seeking concurrently, i.e. from multiple processes? (In this
case, I could just `open` the file within `fn()`, but this is an MWE. In my
actual case, it is harder to do that.)
Answer: You cannot - Python I/O builds on C's I/O, and there is only one "current file
position" per open file in C. That's inherently shared.
What you can do is perform your seek+read under protection of an interprocess
lock.
Like define:
def process_init(lock):
global seek_lock
seek_lock = lock
and in the main process add this to the `Pool` constructor:
initializer=process_init, initargs=(multiprocessing.Lock(),)
Then whenever you want to seek and read, do it under the protection of that
lock:
with seek_lock:
f.seek(k)
char = f.read(1)
As with any lock, you want to do as little as logically necessary while it's
held. It won't allow concurrent seeking, but it will prevent seeks in one
process from interfering with the seeks in other processes.
It would, of course, be better to open the file in each process, so that each
process has its own notion of file position - but you already said you can't.
Rethink that ;-)
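Putting the pieces together with the question's MWE (same imports and globals
`s` and `f` as in the question; `process_init` as defined above), the worker
becomes a sketch like:
    def fn():
        out = ''
        for i in xrange(10):
            k = random.randint(0, len(s) - 1)
            with seek_lock:
                # hold the lock only for the seek+read pair
                f.seek(k)
                ch = f.read(1)
            time.sleep(random.randint(1, 4) / 10.)
            out += s[k] + ' ' + ch + '\n'
        return out
    p = multiprocessing.Pool(initializer=process_init,
                             initargs=(multiprocessing.Lock(),))
Each worker should now read the character it actually sought to, even with
`n > 1`.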
|
How to join() - chapter 8 Automate the Boring Stuff
Question: From Automate The Boring Stuff With Python book: "Create a Mad Libs program
that reads in text files and lets the user add their own text anywhere the
word ADJECTIVE, NOUN, ADVERB, or VERB appears in the text file. For example, a
text file may look like this:
The ADJECTIVE panda walked to the NOUN and then VERB. A nearby NOUN was
unaffected by these events.
The program would find these occurrences and prompt the user to replace them."
I'm almost done, but I can't seem to figure out how to `.join` the last list in
the file. I've looked online and tried the usual methods. Instead of getting a
string out of `join`, the whitespace in `' '.join(mod4)` ends up separating the
letters within each list string value: `[' A n ', ' o n e ', ' t w o ',
' a n d ', ' t h r e e ', ' f o u r ']`. Everything else does the job.
#! python3
import re
madText = open('madText.txt', 'w')
text = 'An ADJECTIVE, a NOUN, an ADVERB and a VERB.'
madText.write(text)
madText.close()
content = re.split('\W+', text)
for i in content:
if i == "ADJECTIVE":
replaceRegex = re.compile(r'(ADJECTIVE)')
print('Enter an ADJECTIVE:')
ADJECTIVE = input()
output = replaceRegex.sub(ADJECTIVE, str(content))
elif i == "NOUN":
replaceRegex = re.compile(r'(NOUN)')
print('Enter a NOUN:')
NOUN = input()
output = replaceRegex.sub(NOUN, str(output))
elif i == "ADVERB":
replaceRegex = re.compile(r'(ADVERB)')
print('Enter an ADVERB:')
ADVERB = input()
output = replaceRegex.sub(ADVERB, str(output))
elif i == "VERB":
replaceRegex = re.compile(r'(VERB)')
print('Enter a VERB:')
VERB = input()
output = replaceRegex.sub(VERB, str(output))
content = re.split('\W+', output)
#content = list(output.split(' '))
content = ' '.join(content)
print(content)
madLibs = open('madText2.txt', 'w')
madLibs.write(content)
madLibs.close()
Answer: A basic assumption is preventing you from completing this: your `mod4`
is built up from the previous assignments, so it depends on their order.
Instead what you should be doing is initializing an `output` variable to `[]`
and appending the words to it as you loop through `content`. You would add
your ad-lib words along with the real words.
Once you have built your output list, _then_ you use `join` to turn that
`output` into a string.
Also, using `regex` is overkill. Let's assume you've made an `output = []`
before the loop.
if i == 'NOUN':
print('Enter a NOUN:')
noun = input() # raw_input() on Python 2
        output.append(noun)  # append the whole word; += would splice in single characters
[...]
Now, as you hit each ad-lib token, you replace it with the input text and build
your output list, as shown in the sketch below.
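A minimal sketch of the whole loop, using the four keywords from the exercise
(variable names are illustrative):
    output = []
    for i in content:
        if i in ('ADJECTIVE', 'NOUN', 'ADVERB', 'VERB'):
            print('Enter a(n) %s:' % i)
            output.append(input())  # raw_input() on Python 2
        else:
            output.append(i)
    content = ' '.join(output)
    print(content)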
|
How to install Scrapy on Ubuntu 16.04?
Question: I followed [the official
guide](http://doc.scrapy.org/en/master/topics/ubuntu.html#topics-ubuntu), but
got this error message:
The following packages have unmet dependencies:
scrapy : Depends: python-support (>= 0.90.0) but it is not installable
Recommends: python-setuptools but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
I then tried `sudo apt-get install python-support`, but found that Ubuntu 16.04
removed `python-support`.
Lastly, I tried to install `python-setuptools`, but it seems it would only
install the Python 2 version instead.
The following additional packages will be installed:
libpython-stdlib libpython2.7-minimal libpython2.7-stdlib python
python-minimal python-pkg-resources python2.7 python2.7-minimal
Suggested packages:
python-doc python-tk python-setuptools-doc python2.7-doc binutils
binfmt-support
The following NEW packages will be installed:
libpython-stdlib libpython2.7-minimal libpython2.7-stdlib python
python-minimal python-pkg-resources python-setuptools python2.7
python2.7-minimal
What should I do to use `Scrapy` in the Python 3 environment on Ubuntu 16.04?
Thanks.
Answer: You should be good with:
apt-get install -y \
python3 \
python-dev \
python3-dev
# for cryptography
apt-get install -y \
build-essential \
libssl-dev \
libffi-dev
# for lxml
apt-get install -y \
libxml2-dev \
libxslt-dev
# install pip
apt-get install -y python-pip
This is an example Dockerfile to test installing scrapy on Python 3, on Ubuntu
16.04/Xenial:
$ cat Dockerfile
FROM ubuntu:xenial
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
# Install Python3 and dev headers
RUN apt-get install -y \
python3 \
python-dev \
python3-dev
# Install cryptography
RUN apt-get install -y \
build-essential \
libssl-dev \
libffi-dev
# install lxml
RUN apt-get install -y \
libxml2-dev \
libxslt-dev
# install pip
RUN apt-get install -y python-pip
RUN useradd --create-home --shell /bin/bash scrapyuser
USER scrapyuser
WORKDIR /home/scrapyuser
Then, after building the Docker image and running a container for it with:
$ sudo docker build -t redapple/scrapy-ubuntu-xenial .
$ sudo docker run -t -i redapple/scrapy-ubuntu-xenial
you can run `pip install scrapy`
Below I'm using `virtualenvwrapper` to create a Python 3 virtualenv:
scrapyuser@88cc645ac499:~$ pip install --user virtualenvwrapper
Collecting virtualenvwrapper
Downloading virtualenvwrapper-4.7.1-py2.py3-none-any.whl
Collecting virtualenv-clone (from virtualenvwrapper)
Downloading virtualenv-clone-0.2.6.tar.gz
Collecting stevedore (from virtualenvwrapper)
Downloading stevedore-1.14.0-py2.py3-none-any.whl
Collecting virtualenv (from virtualenvwrapper)
Downloading virtualenv-15.0.2-py2.py3-none-any.whl (1.8MB)
100% |################################| 1.8MB 320kB/s
Collecting pbr>=1.6 (from stevedore->virtualenvwrapper)
Downloading pbr-1.10.0-py2.py3-none-any.whl (96kB)
100% |################################| 102kB 1.5MB/s
Collecting six>=1.9.0 (from stevedore->virtualenvwrapper)
Downloading six-1.10.0-py2.py3-none-any.whl
Building wheels for collected packages: virtualenv-clone
Running setup.py bdist_wheel for virtualenv-clone ... done
Stored in directory: /home/scrapyuser/.cache/pip/wheels/24/51/ef/93120d304d240b4b6c2066454250a1626e04f73d34417b956d
Successfully built virtualenv-clone
Installing collected packages: virtualenv-clone, pbr, six, stevedore, virtualenv, virtualenvwrapper
Successfully installed pbr six stevedore virtualenv virtualenv-clone virtualenvwrapper
You are using pip version 8.1.1, however version 8.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
scrapyuser@88cc645ac499:~$ source ~/.local/bin/virtualenvwrapper.sh
virtualenvwrapper.user_scripts creating /home/scrapyuser/.virtualenvs/premkproject
virtualenvwrapper.user_scripts creating /home/scrapyuser/.virtualenvs/postmkproject
virtualenvwrapper.user_scripts creating /home/scrapyuser/.virtualenvs/initialize
virtualenvwrapper.user_scripts creating /home/scrapyuser/.virtualenvs/premkvirtualenv
virtualenvwrapper.user_scripts creating /home/scrapyuser/.virtualenvs/postmkvirtualenv
virtualenvwrapper.user_scripts creating /home/scrapyuser/.virtualenvs/prermvirtualenv
virtualenvwrapper.user_scripts creating /home/scrapyuser/.virtualenvs/postrmvirtualenv
virtualenvwrapper.user_scripts creating /home/scrapyuser/.virtualenvs/predeactivate
virtualenvwrapper.user_scripts creating /home/scrapyuser/.virtualenvs/postdeactivate
virtualenvwrapper.user_scripts creating /home/scrapyuser/.virtualenvs/preactivate
virtualenvwrapper.user_scripts creating /home/scrapyuser/.virtualenvs/postactivate
virtualenvwrapper.user_scripts creating /home/scrapyuser/.virtualenvs/get_env_details
scrapyuser@88cc645ac499:~$ export PATH=$PATH:/home/scrapyuser/.local/bin
scrapyuser@88cc645ac499:~$ mkvirtualenv --python=/usr/bin/python3 scrapy11.py3
Running virtualenv with interpreter /usr/bin/python3
Using base prefix '/usr'
New python executable in /home/scrapyuser/.virtualenvs/scrapy11.py3/bin/python3
Also creating executable in /home/scrapyuser/.virtualenvs/scrapy11.py3/bin/python
Installing setuptools, pip, wheel...done.
virtualenvwrapper.user_scripts creating /home/scrapyuser/.virtualenvs/scrapy11.py3/bin/predeactivate
virtualenvwrapper.user_scripts creating /home/scrapyuser/.virtualenvs/scrapy11.py3/bin/postdeactivate
virtualenvwrapper.user_scripts creating /home/scrapyuser/.virtualenvs/scrapy11.py3/bin/preactivate
virtualenvwrapper.user_scripts creating /home/scrapyuser/.virtualenvs/scrapy11.py3/bin/postactivate
virtualenvwrapper.user_scripts creating /home/scrapyuser/.virtualenvs/scrapy11.py3/bin/get_env_details
And installing scrapy 1.1 is a matter of `pip install scrapy`
(scrapy11.py3) scrapyuser@88cc645ac499:~$ pip install scrapy
Collecting scrapy
Downloading Scrapy-1.1.0-py2.py3-none-any.whl (294kB)
100% |################################| 296kB 1.0MB/s
Collecting PyDispatcher>=2.0.5 (from scrapy)
Downloading PyDispatcher-2.0.5.tar.gz
Collecting pyOpenSSL (from scrapy)
Downloading pyOpenSSL-16.0.0-py2.py3-none-any.whl (45kB)
100% |################################| 51kB 1.8MB/s
Collecting lxml (from scrapy)
Downloading lxml-3.6.0.tar.gz (3.7MB)
100% |################################| 3.7MB 312kB/s
Collecting parsel>=0.9.3 (from scrapy)
Downloading parsel-1.0.2-py2.py3-none-any.whl
Collecting six>=1.5.2 (from scrapy)
Using cached six-1.10.0-py2.py3-none-any.whl
Collecting Twisted>=10.0.0 (from scrapy)
Downloading Twisted-16.2.0.tar.bz2 (2.9MB)
100% |################################| 2.9MB 307kB/s
Collecting queuelib (from scrapy)
Downloading queuelib-1.4.2-py2.py3-none-any.whl
Collecting cssselect>=0.9 (from scrapy)
Downloading cssselect-0.9.1.tar.gz
Collecting w3lib>=1.14.2 (from scrapy)
Downloading w3lib-1.14.2-py2.py3-none-any.whl
Collecting service-identity (from scrapy)
Downloading service_identity-16.0.0-py2.py3-none-any.whl
Collecting cryptography>=1.3 (from pyOpenSSL->scrapy)
Downloading cryptography-1.4.tar.gz (399kB)
100% |################################| 409kB 1.1MB/s
Collecting zope.interface>=4.0.2 (from Twisted>=10.0.0->scrapy)
Downloading zope.interface-4.1.3.tar.gz (141kB)
100% |################################| 143kB 1.3MB/s
Collecting attrs (from service-identity->scrapy)
Downloading attrs-16.0.0-py2.py3-none-any.whl
Collecting pyasn1 (from service-identity->scrapy)
Downloading pyasn1-0.1.9-py2.py3-none-any.whl
Collecting pyasn1-modules (from service-identity->scrapy)
Downloading pyasn1_modules-0.0.8-py2.py3-none-any.whl
Collecting idna>=2.0 (from cryptography>=1.3->pyOpenSSL->scrapy)
Downloading idna-2.1-py2.py3-none-any.whl (54kB)
100% |################################| 61kB 2.0MB/s
Requirement already satisfied (use --upgrade to upgrade): setuptools>=11.3 in ./.virtualenvs/scrapy11.py3/lib/python3.5/site-packages (from cryptography>=1.3->pyOpenSSL->scrapy)
Collecting cffi>=1.4.1 (from cryptography>=1.3->pyOpenSSL->scrapy)
Downloading cffi-1.6.0.tar.gz (397kB)
100% |################################| 399kB 1.1MB/s
Collecting pycparser (from cffi>=1.4.1->cryptography>=1.3->pyOpenSSL->scrapy)
Downloading pycparser-2.14.tar.gz (223kB)
100% |################################| 225kB 1.2MB/s
Building wheels for collected packages: PyDispatcher, lxml, Twisted, cssselect, cryptography, zope.interface, cffi, pycparser
Running setup.py bdist_wheel for PyDispatcher ... done
Stored in directory: /home/scrapyuser/.cache/pip/wheels/86/02/a1/5857c77600a28813aaf0f66d4e4568f50c9f133277a4122411
Running setup.py bdist_wheel for lxml ... done
Stored in directory: /home/scrapyuser/.cache/pip/wheels/6c/eb/a1/e4ff54c99630e3cc6ec659287c4fd88345cd78199923544412
Running setup.py bdist_wheel for Twisted ... done
Stored in directory: /home/scrapyuser/.cache/pip/wheels/fe/9d/3f/9f7b1c768889796c01929abb7cdfa2a9cdd32bae64eb7aa239
Running setup.py bdist_wheel for cssselect ... done
Stored in directory: /home/scrapyuser/.cache/pip/wheels/1b/41/70/480fa9516ccc4853a474faf7a9fb3638338fc99a9255456dd0
Running setup.py bdist_wheel for cryptography ... done
Stored in directory: /home/scrapyuser/.cache/pip/wheels/f6/6c/21/11ec069285a52d7fa8c735be5fc2edfb8b24012c0f78f93d20
Running setup.py bdist_wheel for zope.interface ... done
Stored in directory: /home/scrapyuser/.cache/pip/wheels/52/04/ad/12c971c57ca6ee5e6d77019c7a1b93105b1460d8c2db6e4ef1
Running setup.py bdist_wheel for cffi ... done
Stored in directory: /home/scrapyuser/.cache/pip/wheels/8f/00/29/553c1b1db38bbeec3fec428ae4e400cd8349ecd99fe86edea1
Running setup.py bdist_wheel for pycparser ... done
Stored in directory: /home/scrapyuser/.cache/pip/wheels/9b/f4/2e/d03e949a551719a1ffcb659f2c63d8444f4df12e994ce52112
Successfully built PyDispatcher lxml Twisted cssselect cryptography zope.interface cffi pycparser
Installing collected packages: PyDispatcher, idna, pyasn1, six, pycparser, cffi, cryptography, pyOpenSSL, lxml, w3lib, cssselect, parsel, zope.interface, Twisted, queuelib, attrs, pyasn1-modules, service-identity, scrapy
Successfully installed PyDispatcher-2.0.5 Twisted-16.2.0 attrs-16.0.0 cffi-1.6.0 cryptography-1.4 cssselect-0.9.1 idna-2.1 lxml-3.6.0 parsel-1.0.2 pyOpenSSL-16.0.0 pyasn1-0.1.9 pyasn1-modules-0.0.8 pycparser-2.14 queuelib-1.4.2 scrapy-1.1.0 service-identity-16.0.0 six-1.10.0 w3lib-1.14.2 zope.interface-4.1.3
Finally testing the example project:
(scrapy11.py3) scrapyuser@88cc645ac499:~$ scrapy startproject tutorial
New Scrapy project 'tutorial', using template directory '/home/scrapyuser/.virtualenvs/scrapy11.py3/lib/python3.5/site-packages/scrapy/templates/project', created in:
/home/scrapyuser/tutorial
You can start your first spider with:
cd tutorial
scrapy genspider example example.com
(scrapy11.py3) scrapyuser@88cc645ac499:~$ cd tutorial
(scrapy11.py3) scrapyuser@88cc645ac499:~/tutorial$ scrapy genspider example example.com
Created spider 'example' using template 'basic' in module:
tutorial.spiders.example
(scrapy11.py3) scrapyuser@88cc645ac499:~/tutorial$ cat tutorial/spiders/example.py
# -*- coding: utf-8 -*-
import scrapy
class ExampleSpider(scrapy.Spider):
name = "example"
allowed_domains = ["example.com"]
start_urls = (
'http://www.example.com/',
)
def parse(self, response):
pass
(scrapy11.py3) scrapyuser@88cc645ac499:~/tutorial$ scrapy crawl example
2016-06-07 11:08:27 [scrapy] INFO: Scrapy 1.1.0 started (bot: tutorial)
2016-06-07 11:08:27 [scrapy] INFO: Overridden settings: {'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial', 'ROBOTSTXT_OBEY': True, 'NEWSPIDER_MODULE': 'tutorial.spiders'}
2016-06-07 11:08:27 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats', 'scrapy.extensions.corestats.CoreStats']
2016-06-07 11:08:27 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-06-07 11:08:27 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-06-07 11:08:27 [scrapy] INFO: Enabled item pipelines:
[]
2016-06-07 11:08:27 [scrapy] INFO: Spider opened
2016-06-07 11:08:28 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-06-07 11:08:28 [scrapy] DEBUG: Crawled (404) <GET http://www.example.com/robots.txt> (referer: None)
2016-06-07 11:08:28 [scrapy] DEBUG: Crawled (200) <GET http://www.example.com/> (referer: None)
2016-06-07 11:08:28 [scrapy] INFO: Closing spider (finished)
2016-06-07 11:08:28 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 436,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 1921,
'downloader/response_count': 2,
'downloader/response_status_count/200': 1,
'downloader/response_status_count/404': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 6, 7, 11, 8, 28, 614605),
'log_count/DEBUG': 2,
'log_count/INFO': 7,
'response_received_count': 2,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2016, 6, 7, 11, 8, 28, 24624)}
2016-06-07 11:08:28 [scrapy] INFO: Spider closed (finished)
(scrapy11.py3) scrapyuser@88cc645ac499:~/tutorial$
|
Streaming wrapper around program that writes to multiple output files
Question: There is a program (which I cannot modify) that creates two output files. I am
trying to write a Python wrapper that invokes this program, reads both output
streams simultaneously, combines the output, and prints to stdout (to
facilitate streaming). How can I do this without deadlocking? The proof of
concept below works fine, but when I apply this approach to the actual program
it deadlocks.
* * *
**Proof of concept** : this is a dummy program, `bogus.py`, that creates two
output files like the program I'm trying to wrap.
#!/usr/bin/env python
from __future__ import print_function
import sys
with open(sys.argv[1], 'w') as f1, open(sys.argv[2], 'w') as f2:
for i in range(1000):
if i % 2 == 0:
print(i, file=f1)
else:
print(i, file=f2)
And here is the Python wrapper that invokes the program and combines its two
outputs (interleaving 4 lines from each at a time).
#!/usr/bin/env python
from __future__ import print_function
from contextlib import contextmanager
import os
import shutil
import subprocess
import tempfile
@contextmanager
def named_pipe():
"""
Create a temporary named pipe.
Stolen shamelessly from StackOverflow:
http://stackoverflow.com/a/28840955/459780
"""
dirname = tempfile.mkdtemp()
try:
path = os.path.join(dirname, 'named_pipe')
os.mkfifo(path)
yield path
finally:
shutil.rmtree(dirname)
with named_pipe() as f1, named_pipe() as f2:
cmd = ['./bogus.py', f1, f2]
child = subprocess.Popen(cmd)
with open(f1, 'r') as in1, open(f2, 'r') as in2:
buff = list()
for i, lines in enumerate(zip(in1, in2)):
line1 = lines[0].strip()
line2 = lines[1].strip()
print(line1)
buff.append(line2)
                if len(buff) == 4:
                    for line in buff:
                        print(line)
                    buff = list()  # reset so the next four lines collect afresh
Answer: > I'm seeing big chunks of one file and then big chunks of the other file,
> regardless of whether I write to stdout, stderr, or tty.
If you can't make the child use line-buffering for its files, then a simple
solution _to read complete interleaved lines from the output files, as soon as
the output becomes available, while the process is still running_ is to use
threads:
#!/usr/bin/env python2
from subprocess import Popen
from threading import Thread
from Queue import Queue
def readlines(path, queue):
try:
with open(path) as file:
for line in file:
queue.put(line)
finally:
queue.put(None)
with named_pipes(n=2) as paths:
child = Popen(['python', 'child.py'] + paths)
queue = Queue()
for path in paths:
Thread(target=readlines, args=[path, queue]).start()
for _ in paths:
for line in iter(queue.get, None):
print line.rstrip('\n')
where [`named_pipes(n)` is defined
here](http://stackoverflow.com/a/28840955/4279).
`pipe.readline()` is broken for a non-blocking pipe on Python 2; that is why
threads are used here.
* * *
To print a line from one file followed by a line from another:
with named_pipes(n=2) as paths:
child = Popen(['python', 'child.py'] + paths)
queues = [Queue() for _ in paths]
for path, queue in zip(paths, queues):
Thread(target=readlines, args=[path, queue]).start()
while queues:
for q in queues:
line = q.get()
if line is None: # EOF
queues.remove(q)
else:
print line.rstrip('\n')
If `child.py` writes more lines to one file than the other, the difference is
kept in memory, so the individual queues in `queues` may grow without limit
until they fill all the memory. You can set a maximum number of items in a
queue, but then you have to pass a timeout to `q.get()`, otherwise the code may
deadlock.
* * *
If you need to print exactly 4 lines from one output file, then exactly 4 lines
from the other, and so on, you could slightly modify the given code example:
while queues:
# print 4 lines from one queue followed by 4 lines from another queue
for q in queues:
for _ in range(4):
line = q.get()
if line is None: # EOF
queues.remove(q)
break
else:
print line.rstrip('\n')
It won't deadlock, but it may eat all memory if your child process writes too
much data into one file without writing enough into the other (only the
difference is kept in memory, so as long as the files stay roughly equal in
length, the program supports arbitrarily large output files).
|