Passing parameters from JavaScript to a Flask Python script
Question: I am new to Flask, so bear with me.
Currently I have an AngularJS file which uses $http.get to call my Python
Flask script.
In this Flask script I want to then call another Python script (which is
running pySolr); however, the $http.get call contains a parameter I wish to
pass to this pySolr script.
Is there any documentation on this / can it actually be done?
$http.get('http://localhost:5000/python/solr', "$scope.tag");
console.log($scope.tag);
$scope.tag is the variable I need to get
My flask file is as follows:
from flask import Flask
app = Flask(__name__)

@app.route('/python/solr')
def solr():
    "MY CODE TO CALL SOLR SCRIPT GOES HERE"

if __name__ == "__main__":
    app.run()
any help would be greatly appreciated!
Answer: You should be able to do this using query parameters:
$http.get('http://localhost:5000/python/solr?tag=' + $scope.tag);
console.log($scope.tag);
In Flask:
from flask import request

@app.route('/python/solr')
def solr():
    print request.args  # the tag value arrives here, e.g. request.args.get('tag')
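For completeness, a minimal runnable sketch of the whole round trip; the `query_solr` helper is hypothetical and stands in for the pySolr call:

from flask import Flask, request, jsonify

app = Flask(__name__)

def query_solr(tag):
    # hypothetical stand-in for the pySolr call
    return {'tag': tag, 'results': []}

@app.route('/python/solr')
def solr():
    # value sent by the client as ?tag=...
    tag = request.args.get('tag')
    return jsonify(query_solr(tag))

if __name__ == "__main__":
    app.run()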
|
MySQLdb and _mysql versions incompatible: how to upgrade _mysql
Question: I'm running MySQLdb v1.2.3 and getting the following error:
LookupError: unknown encoding: utf8mb4
[This answer](http://stackoverflow.com/questions/21517358/django-mysql-
unknown-encoding-utf8mb4) suggests updating MySQLdb to version 1.2.5. I
updated and am now getting this error:
ImportError: this is MySQLdb version (1, 2, 5, 'final', 1), but _mysql is version (1, 2, 3, 'final', 0)
I'm not sure how to go about updating `_mysql` or how this will change my
setup. Is this just a python module or is it connected in some way to my MySQL
server?
**EDIT:** I've tried running the following three methods:
sudo pip uninstall mysql-python
sudo pip install mysql-python

sudo pip uninstall mysql-python
sudo pip install mysql-python==1.2.5

sudo pip install mysql-python --upgrade
When uninstalling I get
/usr/local/lib/python2.7/dist-packages/_mysql.so
/usr/local/lib/python2.7/dist-packages/_mysql_exceptions.py
/usr/local/lib/python2.7/dist-packages/_mysql_exceptions.pyc
Proceed (y/n)? y
Successfully uninstalled MySQL-python-1.2.3
After that I am unable to import either `MySQLdb` or `_mysql` but reinstalling
always gives me `_mysql` version 1.2.3.
**SECOND EDIT / SOLUTION:** Turns out `_mysql` was installed in two different
places on the server. Uninstalling/installing, as above, upgraded `_mysql` to
v1.2.5 but whenever I then imported `MySQLdb` precedence was given to the
other version of `_mysql` which was not being touched by pip.
Answer: According to the [user manual](http://mysql-
python.sourceforge.net/MySQLdb.html#id3):
> If you want to write applications which are portable across databases, use
> MySQLdb, and avoid using this module directly. _mysql provides an interface
> which mostly implements the MySQL C API. For more information, see the MySQL
> documentation. The documentation for this module is intentionally weak
> because you probably should use the higher-level MySQLdb module.
Basically, _mysql is an object-oriented wrapper for the MySQL C API.
[This post](http://stackoverflow.com/questions/4536103/how-can-i-upgrade-
specific-packages-using-pip-and-a-requirements-file) explains how to use pip
to upgrade one module, a module with all its dependencies, or any combination
thereof. I think that, given the statement above, MySQLdb does not declare a
dependency on _mysql, so the two were not upgraded together. Please see the
linked post.
**EDIT:** After some digging, I found that Ubuntu does not support MySQL
nicely, and pip alone doesn't work.
So I went to this [link](http://stackoverflow.com/a/29150749/4900327) and did:
`apt-get install python-dev libmysqlclient-dev`
before doing
`sudo pip install MySQL-python`
This worked nicely for me. For you, I think you may need to upgrade or even
apt-get remove and then reinstall the above two Ubuntu modules `python-dev`
and `libmysqlclient-dev`.
For me, it's working now when installing for the first time; go to a terminal
and enter the Python interpreter, then type:
import MySQLdb
MySQLdb.__version__  # I got '1.2.5'
import _mysql
_mysql.__version__  # Again, I got '1.2.5'
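A quick way to diagnose the duplicate-install situation described in the question is to check which files actually get imported; a small sketch (Python 2, matching the setup above):

import MySQLdb
import _mysql
# the paths reveal which of the duplicate installs Python is actually using
print MySQLdb.__file__
print _mysql.__file__
print MySQLdb.__version__, _mysql.__version__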
|
How to use relative import without doing python -m?
Question: I have a folder like this
/test_mod
__init__.py
A.py
test1.py
/sub_mod
__init__.py
B.py
test2.py
And I want to use relatives imports in `test1` and `test2` like this
#test1.py
from . import A
from .sub_mod import B
...
#test2.py
from .. import A
from . import B
...
While I develop `test1` or `test2` I want those imports to work while I am in
IDLE; that is, if I press `F5` while working on `test2`, everything should
work fine, because I don't want to run `python -m test_mod.sub_mod.test2`, for
instance.
I already check this [python-relative-imports-for-the-billionth-
time](http://stackoverflow.com/questions/14132789/python-relative-imports-for-
the-billionth-time)
Looking at that, I tried this:
if __name__ == "__main__" and not __package__:
    __package__ = "test_mod.sub_mod"

from .. import A
from . import B
But that didn't work, it gave this error:
SystemError: Parent module 'test_mod.sub_mod' not loaded, cannot perform relative import
Answer: In the end I found this solution:
#relative_import_helper.py
import sys, os, importlib

def relative_import_helper(path, nivel=1, verbose=False):
    # walk `nivel` directory levels up from the file to find the package root
    namepack = os.path.dirname(path)
    packs = []
    for _ in range(nivel):
        temp = os.path.basename(namepack)
        if temp:
            packs.append(temp)
            namepack = os.path.dirname(namepack)
        else:
            break
    # build the dotted package name, put its parent on sys.path and import it
    pack = ".".join(reversed(packs))
    sys.path.append(namepack)
    importlib.import_module(pack)
    return pack
and I use it as:
#test2.py
if __name__ == "__main__" and not __package__:
    print("idle trick")
    from relative_import_helper import relative_import_helper
    __package__ = relative_import_helper(__file__, 2)

from .. import A
...
Then I can use relative imports while working in IDLE.
|
Scrapy returning 403 error (Forbidden)
Question: I'm very new to Scrapy as well as to using Python. In the past, I have managed
to get a minimal example of Scrapy working but haven't used it since.
Meanwhile, a new version is out (I think the one I used last time was `0.24`)
and I can't, for the life of me, figure out why I'm getting a 403 error no
matter what website I attempt to crawl.
Granted, I have yet to delve into Middlewares and/or Pipelines but I was
hoping to be able to get a minimal example running before exploring any
further. That being said, here's my current code:
# items.py
import scrapy

class StackItem(scrapy.Item):
    title = scrapy.Field()
    url = scrapy.Field()

# stack_spider.py
# derived from https://realpython.com/blog/python/web-scraping-with-scrapy-and-mongodb/
from scrapy import Spider
from scrapy.selector import Selector
from stack.items import StackItem

class StackSpider(Spider):
    handle_httpstatus_list = [403, 404]  # kind of out of desperation. Is it serving any purpose?
    name = "stack"
    allowed_domains = ["stackoverflow.com"]
    start_urls = [
        "http://stackoverflow.com/questions?pagesize=50&sort=newest",
    ]

    def parse(self, response):
        questions = Selector(response).xpath('//div[@class="summary"]/h3')
        for question in questions:
            self.log(question)
            item = StackItem()
            item['title'] = question.xpath('a[@class="question-hyperlink"]/text()').extract()[0]
            item['url'] = question.xpath('a[@class="question-hyperlink"]/@href').extract()[0]
            yield item
# Output
(pyplayground) 22:39 ~/stack $ scrapy crawl stack
2016-03-07 22:39:38 [scrapy] INFO: Scrapy 1.0.5 started (bot: stack)
2016-03-07 22:39:38 [scrapy] INFO: Optional features available: ssl, http11
2016-03-07 22:39:38 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'stack.spiders', 'SPIDER_MODULES': ['stack.spiders'], 'RETRY_TIMES': 5, 'BOT_NAME': 'stack', 'RETRY_HTTP_CODES': [500, 502, 503, 504, 400, 403, 404, 408], 'DOWNLOAD_DELAY': 3}
2016-03-07 22:39:39 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2016-03-07 22:39:39 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, HttpProxyMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-03-07 22:39:39 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-03-07 22:39:39 [scrapy] INFO: Enabled item pipelines:
2016-03-07 22:39:39 [scrapy] INFO: Spider opened
2016-03-07 22:39:39 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-03-07 22:39:39 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-03-07 22:39:39 [scrapy] DEBUG: Retrying <GET http://stackoverflow.com/questions?pagesize=50&sort=newest> (failed 1 times): 403 Forbidden
2016-03-07 22:39:42 [scrapy] DEBUG: Retrying <GET http://stackoverflow.com/questions?pagesize=50&sort=newest> (failed 2 times): 403 Forbidden
2016-03-07 22:39:47 [scrapy] DEBUG: Retrying <GET http://stackoverflow.com/questions?pagesize=50&sort=newest> (failed 3 times): 403 Forbidden
2016-03-07 22:39:51 [scrapy] DEBUG: Retrying <GET http://stackoverflow.com/questions?pagesize=50&sort=newest> (failed 4 times): 403 Forbidden
2016-03-07 22:39:55 [scrapy] DEBUG: Retrying <GET http://stackoverflow.com/questions?pagesize=50&sort=newest> (failed 5 times): 403 Forbidden
2016-03-07 22:39:58 [scrapy] DEBUG: Gave up retrying <GET http://stackoverflow.com/questions?pagesize=50&sort=newest> (failed 6 times): 403 Forbidden
2016-03-07 22:39:58 [scrapy] DEBUG: Crawled (403) <GET http://stackoverflow.com/questions?pagesize=50&sort=newest> (referer: None)
2016-03-07 22:39:58 [scrapy] INFO: Closing spider (finished)
2016-03-07 22:39:58 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 1488,
'downloader/request_count': 6,
'downloader/request_method_count/GET': 6,
'downloader/response_bytes': 6624,
'downloader/response_count': 6,
'downloader/response_status_count/403': 6,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 3, 7, 22, 39, 58, 458578),
'log_count/DEBUG': 8,
'log_count/INFO': 7,
'response_received_count': 1,
'scheduler/dequeued': 6,
'scheduler/dequeued/memory': 6,
'scheduler/enqueued': 6,
'scheduler/enqueued/memory': 6,
'start_time': datetime.datetime(2016, 3, 7, 22, 39, 39, 607472)}
2016-03-07 22:39:58 [scrapy] INFO: Spider closed (finished)
Answer: Most definitely you are behind a proxy. Check and set your
`http_proxy` and `https_proxy` environment variables appropriately.
Cross-check with `curl` whether you can fetch that URL from the terminal.
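For example, a quick sketch to inspect the proxy environment the crawl inherits:

import os

# print whatever proxy settings the spider would pick up from the environment
for var in ('http_proxy', 'https_proxy', 'HTTP_PROXY', 'HTTPS_PROXY'):
    print('%s=%s' % (var, os.environ.get(var)))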
|
Finding roots of function in Python
Question: I'm trying to calculate the roots for a function using the scipy function
`fsolve`, but an error keeps flagging:
TypeError: 'numpy.array' object is not callable
I assume it's probably easier to define the equation as a function but I've
tried that a few times to no avail.
Code:
import scipy
import numpy as np
import matplotlib.pyplot as plt
from scipy import optimize
# Constants
wavelength = 0.6328
ncore = 1.462420
nclad = 1.457420
a = 8.335
# Mode Order
l = 0
# Mode parameters
V = (2 * np.pi * a / wavelength) * np.sqrt(ncore**2 - nclad**2)
U = np.arange(0, V, 0.01)
W = np.sqrt(V**2-U**2)
func = U * scipy.special.jv(l+1, U) / scipy.special.jv(l, U) - W * scipy.special.kv(l+1, W) / scipy.special.kv(l, W)
from scipy.optimize import fsolve
x = fsolve(func,0)
print x
StackTrace:
Traceback (most recent call last):
File "<ipython-input-52-081a9cc9c0ea>", line 1, in <module>
runfile('/home/luke/Documents/PythonPrograms/ModeSolver_StepIndex/ModeSolver_StepIndex.py', wdir='/home/luke/Documents/PythonPrograms/ModeSolver_StepIndex')
File "/usr/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 580, in runfile
execfile(filename, namespace)
File "/home/luke/Documents/PythonPrograms/ModeSolver_StepIndex/ModeSolver_StepIndex.py", line 52, in <module>
x = fsolve(func,0)
File "/usr/lib64/python2.7/site-packages/scipy/optimize/minpack.py", line 140, in fsolve
res = _root_hybr(func, x0, args, jac=fprime, **options)
File "/usr/lib64/python2.7/site-packages/scipy/optimize/minpack.py", line 197, in _root_hybr
shape, dtype = _check_func('fsolve', 'func', func, x0, args, n, (n,))
File "/usr/lib64/python2.7/site-packages/scipy/optimize/minpack.py", line 20, in _check_func
res = atleast_1d(thefunc(*((x0[:numinputs],) + args)))
TypeError: 'numpy.ndarray' object is not callable
Answer: That is because `fsolve` takes a function as its argument. Try this.
Note you will still encounter some runtime errors; you will have to check that
the return value from `func` is properly constructed. I will leave that for
you to figure out.
import scipy
import numpy as np
import matplotlib.pyplot as plt
from scipy import optimize
# Constants
wavelength = 0.6328
ncore = 1.462420
nclad = 1.457420
a = 8.335
# Mode Order
# l = 0
# Mode parameters
V = (2 * np.pi * a / wavelength) * np.sqrt(ncore**2 - nclad**2)
U = np.arange(0, V, 0.01)
W = np.sqrt(V**2-U**2)
def func(l):
    return U * scipy.special.jv(l+1, U) / scipy.special.jv(l, U) - W * scipy.special.kv(l+1, W) / scipy.special.kv(l, W)
from scipy.optimize import fsolve
x = fsolve(func,0)
print x
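To make the shape requirement concrete: `fsolve` needs a callable that maps a guess to a residual of the same shape. A minimal sketch that solves for the scalar `U` at fixed mode order `l` (the starting guess of 2.0 and the fixed `V` are assumptions for illustration; compute `V` from the fibre parameters as above):

import numpy as np
import scipy.special
from scipy.optimize import fsolve

l = 0
V = 15.0  # illustrative value

def func(U):
    # residual of the dispersion relation; same shape as the input guess
    W = np.sqrt(V**2 - U**2)
    return (U * scipy.special.jv(l+1, U) / scipy.special.jv(l, U)
            - W * scipy.special.kv(l+1, W) / scipy.special.kv(l, W))

root = fsolve(func, 2.0)
print root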
|
How to preserve newlines in argparse version output while letting argparse auto-format/wrap the remaining help message?
Question: I wrote the following code.
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('-v', '--version', action='version',
version='%(prog)s 1.0\nCopyright (c) 2016 Lone Learner')
parser.parse_args()
This produces the following output.
$ python foo.py --version
foo.py 1.0 Copyright (c) 2016 Lone Learner
You can see that the newline is lost. I wanted the copyright notice to appear
on the next line.
How can I preserve the new lines in the version output message?
I still want argparse to compute how the output of `python foo.py -h` should
be laid out with all the auto-wrapping it does. But I want the version output
to be a multiline output with the newlines intact.
Answer: `RawTextHelpFormatter` will turn off the automatic wrapping, allowing your
explicit `\n` to appear. But it will affect all the `help` lines. There's no
way of picking and choosing. Either accept the default wrapping, or put
explicit newlines in all of your help lines.
You are getting to a level of pickiness about the help format that you need to
study the `HelpFormatter` code for yourself.
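If the all-or-nothing trade-off is acceptable, a minimal sketch of the formatter swap, based on the question's own snippet:

import argparse

# RawTextHelpFormatter leaves explicit newlines in help and version text alone
parser = argparse.ArgumentParser(
    formatter_class=argparse.RawTextHelpFormatter)
parser.add_argument('-v', '--version', action='version',
                    version='%(prog)s 1.0\nCopyright (c) 2016 Lone Learner')
parser.parse_args()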
|
parse a string using regex
Question: I have a string
txt = 'text1 & ("text2" | "text3" | "text4") & "text5" ! (text6 | text7 | text8)'
Let's say I want to parse it so I end up with the elements that are between
parentheses. My pattern looks like
pattern = '\(([^\)(]+)\)'
Using Python I end up with two groups:
>>> print re.findall(pattern, txt)
['"text2" | "text3" | "text4"', 'text6 | text7 | text8']
Let's say we want to find something like
>>> print re.findall(magic_pattern, txt)
['& ("text2" | "text3" | "text4")', '! (text6 | text7 | text8)']
Any guesses on what that `magic_pattern` would be? I can work my way to the
desired output using string operations.
>>> print [txt[str.find(txt, a)-3: 1+len(a)+str.find(txt, a)] for a in re.findall(pattern, txt)]
['& ("text2" | "text3" | "text4")', '! (text6 | text7 | text8)']
But this feels a bit clunky and fails if the parenthesis group is at the
beginning. I can add a check for that but, like I said, it feels a bit clunky.
Any takers?
Answer: You can use the `(?:\B\W\s*)?` optional group at the beginning of the pattern:
import re
p = re.compile(r'(?:\B\W\s*)?\([^()]+\)')
test_str = "(text9 & text10) & text1 & (\"text2\" | \"text3\" | \"text4\") & \"text5\" ! (text6 | text7 | text8)"
print(p.findall(test_str))
Result of the [sample demo](https://ideone.com/0kR8rV): `['(text9 & text10)', '& ("text2" | "text3" | "text4")', '! (text6 | text7 | text8)']`
The `(?:\B\W\s*)?` is a non-capturing group (so that the value is not output
in the result) that can be repeated one or zero times (due to the last `?`),
and it matches a non-word character (`\W`) only if it is preceded with a non-
word character or start of string (`\B`) and followed with 0+ whitespace.
[Here is the regex demo](https://regex101.com/r/gO2iU7/3)
|
Reading csv and importing into datatable (Python)
Question: New to Python, so I'm reading an Excel CSV and I want to import the
results into a datatable for use in later code, i.e. I want to reference
column names in logic to generate results.
Below is my code thus far. The Excel-reading bit is fine, I left that there to
work on, but I can't get datatables to work; I think I'm missing something
simple.
I have easy_install in and have setuptools on; I have the webpaste kit
installed but I can't reference it. I also just ran `datatables-0.4.9/setup.py`
as well, but am not sure how I need to reference this in my script to begin
working on it.
import csv
import datatables

with open('Data/ShowroomData.csv', 'rt') as Data:
    SR = csv.reader(Data, delimiter=' ', quotechar='|')
    next(Data)
    for row in SR:
        print (row)

table = DataTable('Data/ShowroomData.csv', 'rt')
for row in table:
    print (row['SiteName'], row['BGPAS'])
Answer: If you want to be able to access the attributes of each row by the column
name, I don't think you need
[`datatables`](https://pypi.python.org/pypi/datatables/0.4.9). All you need is
a [`DictReader` from the `csv` standard library
module](https://docs.python.org/2/library/csv.html#csv.DictReader).
The code would look something like this:
import csv

with open('Data/ShowroomData.csv', 'rt') as Data:
    SR = csv.DictReader(Data, delimiter=' ', quotechar='|')
    for row in SR:
        # if you were to print just `row`, you would get a dictionary
        # like {'SiteName': 'foo', 'BGPAS': 'bar'}
        print (row['SiteName'], row['BGPAS'])
|
Python plotly: remove empty spaces from bar chart
Question: I'm plotting a bar chart with the python library `plotly`, but there's
whitespace between the bars which I don't want.
import plotly
from plotly import graph_objs as go
xvals = [u'12.09', u'12.10', u'12.11', u'12.12', u'12.13', u'13.01', u'13.02']
yvals = [115, 69, 165, 98, 157, 126, 60]
data = [go.Bar(x=xvals,y=yvals)]
plotly.offline.plot(data)
produces this:
[](http://i.stack.imgur.com/F0Eq2.png)
How can I bunch the bars together and get rid of the white space?
Answer: Try making the graph logarithmic by setting the xaxis type in the
layout to 'log':
xaxis=dict(
    type="log"
)
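A sketch of how that layout option might be wired into the original snippet:

import plotly
from plotly import graph_objs as go

xvals = [u'12.09', u'12.10', u'12.11', u'12.12', u'12.13', u'13.01', u'13.02']
yvals = [115, 69, 165, 98, 157, 126, 60]

data = [go.Bar(x=xvals, y=yvals)]
layout = go.Layout(xaxis=dict(type='log'))  # logarithmic x axis
fig = go.Figure(data=data, layout=layout)
plotly.offline.plot(fig)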
|
Python multiprocessing and shared numpy array
Question: I have a problem, which is similar to this:
import numpy as np

C = np.zeros((100, 10))
for i in range(10):
    C_sub = get_sub_matrix_C(i, other_args)  # shape 10x10
    C[i*10:(i+1)*10, :10] = C_sub
So, apparently there is no need to run this as a serial calculation, since
each submatrix can be calculated independently. I would like to use the
multiprocessing module and create up to 4 processes for the for loop. I read
some tutorials about multiprocessing, but wasn't able to figure out how to use
this to solve my problem.
Thanks for your help
Answer: A simple way to parallelize that code would be to use a
[`Pool`](https://docs.python.org/3.5/library/multiprocessing.html#multiprocessing.pool.Pool)
of processes:
pool = multiprocessing.Pool()
results = pool.starmap(get_sub_matrix_C, ((i, other_args) for i in range(10)))
for i, res in enumerate(results):
    C[i*10:(i+1)*10, :10] = res
I've used
[`starmap`](https://docs.python.org/3.5/library/multiprocessing.html#multiprocessing.pool.Pool.starmap)
since the `get_sub_matrix_C` function has more than one argument (`starmap(f,
[(x1, ..., xN)])` calls `f(x1, ..., xN)`).
Note however that serialization/deserialization may take significant time
_and space_, so you may have to use a more low-level solution to avoid that
overhead.
* * *
It looks like you are running an outdated version of python. You can replace
`starmap` with plain `map` but then you have to provide a function that takes
a single parameter:
def f(args):
    return get_sub_matrix_C(*args)

pool = multiprocessing.Pool()
results = pool.map(f, ((i, other_args) for i in range(10)))
for i, res in enumerate(results):
    C[i*10:(i+1)*10, :10] = res
|
Creating a diverging color palette with a "midrange" instead of a "midpoint"
Question: I am using python seaborn package to generate a diverging color palette
(seaborn.diverging_palette).
I can choose my two extremity colors, and define whether the center is light
-> white or dark -> black (the `center` parameter). But what I would like is
to extend this center color (white in my case) over a given range of values.
For example, my values are from 0 to 20. So, my midpoint is 10. Hence, only 10
is in white, and then it becomes more green/more blue when going to 0/20. I
would like to keep the color white from 7 to 13 (3 before/after the midpont),
and then start to move to green/blue.
I found the `sep` parameter, which extends or reduces this center white part.
But I can't find any explanation of what its value means, in order to find
which value of `sep` would correspond to 3 on each side of the midpoint, for
example.
Does anybody know the relationship between sep and the value scale ? Or if
another parameter could do the expected behaviour ?
Answer: It seems the `sep` parameter can take any integer between `1` and `254`. The
fraction of the colourmap that will be covered by the midpoint colour will be
equal to `sep/256`.
Perhaps an easy way to visualise this is to use the `seaborn.palplot`, with
`n=256` to split the palette up into 256 colours.
Here is a palette with `sep = 1`:
sns.palplot(sns.diverging_palette(0, 255, sep=1, n=256))
[](http://i.stack.imgur.com/zodUO.png)
And here is a palette with `sep = 8`
sns.palplot(sns.diverging_palette(0, 255, sep=8, n=256))
[](http://i.stack.imgur.com/suF9M.png)
Here is `sep = 64` (i.e. one quarter of the palette is the midpoint colour)
sns.palplot(sns.diverging_palette(0, 255, sep=64, n=256))
[](http://i.stack.imgur.com/Ld2U8.png)
Here is `sep = 128` (i.e. one half is the midpoint colour)
sns.palplot(sns.diverging_palette(0, 255, sep=128, n=256))
[](http://i.stack.imgur.com/wR7if.png)
And here is `sep = 254` (i.e. all but the colours on the very edge of the
palette are the midpoint colour)
sns.palplot(sns.diverging_palette(0, 255, sep=254, n=256))
[](http://i.stack.imgur.com/3XPIg.png)
## Your specific palette
So, for your case where you have a range of `0 - 20`, but a midpoint range of
`7 - 13`, you would want the fraction of the palette to be the midpoint to be
`6/20`. To convert that to `sep`, we need to multiply by 256, so we get `sep =
256 * 6 / 20 = 76.8`. However, `sep` must be an integer, so lets use `77`.
Here is a script to make a diverging palette, and plot a colorbar to show that
using `sep = 77` leaves the correct midpoint colour between 7 and 13:
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
# Create your palette
cmap = sns.diverging_palette(0,255,sep=77, as_cmap=True)
# Some data with a range of 0 to 20
x = np.linspace(0,20,20).reshape(4,5)
# Plot a heatmap (I turned off the cbar here, so I can create it later with ticks spaced every integer)
ax = sns.heatmap(x, cmap=cmap, vmin=0, vmax=20, cbar = False)
# Grab the heatmap from the axes
hmap = ax.collections[0]
# make a colorbar with ticks spaced every integer
cmap = plt.gcf().colorbar(hmap)
cmap.set_ticks(range(21))
plt.show()
[](http://i.stack.imgur.com/PzwkH.png)
|
Where to put logging setup code in a flask app?
Question: I'm writing my first Flask application. The application itself runs fine. I
just have a newbie question about logging in production mode.
The basic structure:
app/
app/templates/
app/static
config.py
flask/... <- virtual env with flask + extensions
run.py
The application is started by `run.py` script:
#!flask/bin/python
import os.path
import sys

appdir = os.path.dirname(os.path.abspath(__file__))
if appdir not in sys.path:
    sys.path.insert(1, appdir)

from app import app as application

if __name__ == '__main__':
    application.run(debug=True)
and is started either directly or from an Apache 2.4 web server. I have these
lines in the apache config:
WSGIPythonHome /usr/local/opt/app1/flask
WSGIScriptAlias /app1 /usr/local/opt/app1/run.py
In the former case, the `debug=True` is all I need for the development.
I'd like to have some logging also for the latter case, i.e. when running
under Apache on a production server. Following is a recommendation from the
Flask docs:
if not app.debug:
    import logging
    from themodule import TheHandlerYouWant
    file_handler = TheHandlerYouWant(...)
    file_handler.setLevel(logging.WARNING)
    app.logger.addHandler(file_handler)
It needs some customization, but that's what I want - instructions for the
case when `app.debug` flag is not set. Similar recommendation was given also
here: [How do I write Flask's excellent debug log message to a file in
production?](http://stackoverflow.com/questions/14037975/how-do-i-write-
flasks-excellent-debug-log-message-to-a-file-in-production)
Please help: where do I have to put this code?
* * *
UPDATE: based on the comments by davidism and the first answer I've got I
think the app in the current simple form is not suitable for what I was asking
for. I will modify it to use different sets of configuration data as
recommended here: <http://flask.pocoo.org/docs/0.10/config/#development-
production> . If my application were larger, I would follow the pech0rin's
answer.
UPDATE2: I think the key here is that the environment variables should control
how the application is to be configured.
Answer: I have had a lot of success with setting up my logging configurations inside a
`create_app` function. This uses the [application factory
pattern](http://flask.pocoo.org/docs/0.10/patterns/appfactories/). This allows
you to pass in some arguments or a configuration class. The application is
then specifically created using your parameters.
This allows you initialize the application, setup logging, and do whatever
else you want to do, before the application is sent back to be run.
For example:
def create_app(dev=False):
    app = Flask(__name__)
    if dev:
        app.config['DEBUG'] = True
    else:
        ...
        app.logger.addHandler(file_handler)
    return app
This has worked very well for me in production environments. YMMV
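For instance, the `run.py` from the question could then be reduced to something like this (a sketch, assuming `create_app` is defined in the `app` package):

#!flask/bin/python
from app import create_app

# build the app with production settings; pass dev=True while developing
application = create_app(dev=False)

if __name__ == '__main__':
    application.run()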
|
Numpy loadtxt: ValueError: Wrong number of columns
Question: Having the file TEST.txt structured as follows:
a 45
b 45 55
c 66
When I try to open it:
import numpy as np
a= np.loadtxt(r'TEST.txt',delimiter='\t',dtype=str)
I have got the following error:
> ValueError: Wrong number of columns at line 2
It's clearly due to the fact that the second line has three columns instead of
two, but I can't find an answer to my problem using the documentation.
Is there anyway I can fix it keeping all the data into an array?
In Matlab I can do something like:
a=textscan(fopen('TEST.txt'),'%s%s%s');
Something similar in Python would be appreciated.
Answer: Try `np.genfromtxt`. It handles missing values; `loadtxt` does not. Compare
their docs.
Missing values can be tricky when the delimiter is white space, but with tabs
it should be ok. If there still are problems, test it with a `,` delimiter.
oops - you still need the extra delimiter
eg.
a, 34,
b, 43, 34
c, 34
Both `loadtxt` and `genfromtxt` accept any iterable that delivers the txt line
by line. So a simple thing is to `readlines`, tweak the lines that have
missing values and delimiters, and pass that list of lines to the loader. Or
you can write this a 'filter' or generator. This approach has been described
in a number of previous SO questions.
In [36]: txt=b"""a\t45\t\nb\t45\t55\nc\t66\t""".splitlines()
In [37]: txt
Out[37]: [b'a\t45\t', b'b\t45\t55', b'c\t66\t']
In [38]: np.genfromtxt(txt,delimiter='\t',dtype=str)
Out[38]:
array([['a', '45', ''],
['b', '45', '55'],
['c', '66', '']],
dtype='<U2')
I'm using Python 3, so the byte strings are marked with a 'b'.
For strings, this is overkill; but `genfromtxt` makes it easy to construct a
structured array with different dtypes for each column. Note that such array
is 1d, with named fields - not numbered columns.
In [50]: np.genfromtxt(txt,delimiter='\t',dtype=None)
Out[50]:
array([(b'a', 45, -1), (b'b', 45, 55), (b'c', 66, -1)],
dtype=[('f0', 'S1'), ('f1', '<i4'), ('f2', '<i4')])
to pad the lines I could define a function like:
def foo(astr, delimiter=b',', cnt=3, fill=b' '):
    c = astr.strip().split(delimiter)
    c.extend([fill]*cnt)
    return delimiter.join(c[:cnt])
and use it as:
In [85]: txt=b"""a\t45\nb\t45\t55\nc\t66""".splitlines()
In [87]: txt1=[foo(txt[0],b'\t',3,b'0') for t in txt]
In [88]: txt1
Out[88]: [b'a\t45\t0', b'a\t45\t0', b'a\t45\t0']
In [89]: np.genfromtxt(txt1,delimiter='\t',dtype=None)
Out[89]:
array([(b'a', 45, 0), (b'a', 45, 0), (b'a', 45, 0)],
dtype=[('f0', 'S1'), ('f1', '<i4'), ('f2', '<i4')])
|
apscheduler Lost connection to MySQL server during query
Question: I use apscheduler to execute a regular job, and I got an error on it:
"Lost connection to MySQL server during query"
To find the answer, I ran some tests and found that this error occurs if my
database's (MySQL) "wait_timeout" is less than the schedule interval time.
(Sorry, I made a mistake here earlier... it is "less than".)
in the test:
* my job setting
scheduler.add_job(period_job, 'interval', minutes=5, id='my_job_id')
* my database setting
wait_timeout = 60
* my test code
from apscheduler.schedulers.background import BackgroundScheduler
from flask import Flask

app = Flask(__name__)

scheduler = BackgroundScheduler({
    'apscheduler.jobstores.default': {
        'type': 'sqlalchemy',
        'url': 'mysql+pymysql://user:pass@url:3306/test_apscheduler?charset=utf8'
    },
    'apscheduler.executors.default': {
        'class': 'apscheduler.executors.pool:ThreadPoolExecutor',
        'max_workers': '20'
    },
    'apscheduler.executors.processpool': {
        'type': 'processpool',
        'max_workers': '5'
    },
    'apscheduler.job_defaults.coalesce': 'false',
    'apscheduler.job_defaults.max_instances': '3',
    'apscheduler.timezone': 'UTC',
})
scheduler.start()

@app.route('/')
def hello_world():
    scheduler.add_job(period_job, 'interval', minutes=5, id='my_job_id')
    return 'Hello World!'

def period_job():
    print("hihi")

if __name__ == '__main__':
    app.run()
total error message:
Exception in thread APScheduler:
Traceback (most recent call last):
File "C:\Users\skuo\apshcduler\lib\site-packages\sqlalchemy\engine\base.py", line 1139, in _execute_context context)
File "C:\Users\skuo\apshcduler\lib\site-packages\sqlalchemy\engine\default.py", line 450, in do_execute cursor.execute(statement, parameters)
File "C:\Users\skuo\apshcduler\lib\site-packages\pymysql\cursors.py", line 158, in execute result = self._query(query)
File "C:\Users\skuo\apshcduler\lib\site-packages\pymysql\cursors.py", line 308, in _query conn.query(q)
File "C:\Users\skuo\apshcduler\lib\site-packages\pymysql\connections.py", line 820, in query self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File "C:\Users\skuo\apshcduler\lib\site-packages\pymysql\connections.py", line 1002, in _read_query_result result.read()
File "C:\Users\skuo\apshcduler\lib\site-packages\pymysql\connections.py", line 1285, in read first_packet = self.connection._read_packet()
File "C:\Users\skuo\apshcduler\lib\site-packages\pymysql\connections.py", line 946, in _read_packet packet_header = self._read_bytes(4)
File "C:\Users\skuo\apshcduler\lib\site-packages\pymysql\connections.py", line 982, in _read_bytes 2013, "Lost connection to MySQL server during query")
pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Python34\lib\threading.py", line 921, in _bootstrap_inner self.run()
File "C:\Python34\lib\threading.py", line 869, in run self._target(*self._args, **self._kwargs)
File "C:\Users\skuo\apshcduler\lib\site-packages\apscheduler\schedulers\blocking.py", line 27, in _main_loop wait_seconds = self._process_jobs()
File "C:\Users\skuo\apshcduler\lib\site-packages\apscheduler\schedulers\base.py", line 801, in _process_jobs for job in jobstore.get_due_jobs(now):
File "C:\Users\skuo\apshcduler\lib\site-packages\apscheduler\jobstores\sqlalchemy.py", line 69, in get_due_jobs return self._get_jobs(self.jobs_t.c.next_run_time <= timestamp)
File "C:\Users\skuo\apshcduler\lib\site-packages\apscheduler\jobstores\sqlalchemy.py", line 131, in _get_jobs for row in self.engine.execute(selectable):
File "C:\Users\skuo\apshcduler\lib\site-packages\sqlalchemy\engine\base.py", line 1991, in execute return connection.execute(statement, *multiparams, **params)
File "C:\Users\skuo\apshcduler\lib\site-packages\sqlalchemy\engine\base.py", line 914, in execute return meth(self, multiparams, params)
File "C:\Users\skuo\apshcduler\lib\site-packages\sqlalchemy\sql\elements.py", line 323, in _execute_on_connection return connection._execute_clauseelement(self, multiparams, params)
File "C:\Users\skuo\apshcduler\lib\site-packages\sqlalchemy\engine\base.py", line 1010, in _execute_clauseelement compiled_sql, distilled_params
File "C:\Users\skuo\apshcduler\lib\site-packages\sqlalchemy\engine\base.py", line 1146, in _execute_context context)
File "C:\Users\skuo\apshcduler\lib\site-packages\sqlalchemy\engine\base.py", line 1341, in _handle_dbapi_exception exc_info
File "C:\Users\skuo\apshcduler\lib\site-packages\sqlalchemy\util\compat.py", line 200, in raise_from_cause reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "C:\Users\skuo\apshcduler\lib\site-packages\sqlalchemy\util\compat.py", line 183, in reraise raise value.with_traceback(tb)
File "C:\Users\skuo\apshcduler\lib\site-packages\sqlalchemy\engine\base.py", line 1139, in _execute_context context)
File "C:\Users\skuo\apshcduler\lib\site-packages\sqlalchemy\engine\default.py", line 450, in do_execute cursor.execute(statement, parameters)
File "C:\Users\skuo\apshcduler\lib\site-packages\pymysql\cursors.py", line 158, in execute result = self._query(query)
File "C:\Users\skuo\apshcduler\lib\site-packages\pymysql\cursors.py", line 308, in _query conn.query(q)
File "C:\Users\skuo\apshcduler\lib\site-packages\pymysql\connections.py", line 820, in query self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File "C:\Users\skuo\apshcduler\lib\site-packages\pymysql\connections.py", line 1002, in _read_query_result result.read()
File "C:\Users\skuo\apshcduler\lib\site-packages\pymysql\connections.py", line 1285, in read first_packet = self.connection._read_packet()
File "C:\Users\skuo\apshcduler\lib\site-packages\pymysql\connections.py", line 946, in _read_packet packet_header = self._read_bytes(4)
File "C:\Users\skuo\apshcduler\lib\site-packages\pymysql\connections.py", line 982, in _read_bytes 2013, "Lost connection to MySQL server during query")
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query') [SQL: 'SELECT apscheduler_jobs.id, apscheduler_jobs.job_state \nFROM apscheduler_jobs \nWHERE apscheduler_jobs.next_run_time <= %(next_run_time_1)s ORDER BY apscheduler_jobs.next_run_time'] [parameters: {'next_run_time_1': 1457445220.361246}]
Does anyone know what happened here, and how to fix it?
Answer: What is the setting of `interactive_timeout`?
**wait_timeout:**
Description: Time in seconds that the server waits for a connection to become
active before closing it. The session value is initialized when a thread
starts up from either the global value, if the connection is **non-
interactive** , or from the interactive_timeout value, if the connection is
interactive.
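To compare the two values on your server, you can query them directly; a quick sketch using the same pymysql driver as the job store (connection parameters are placeholders):

import pymysql

conn = pymysql.connect(host='url', user='user', password='pass',
                       db='test_apscheduler')
with conn.cursor() as cur:
    # interactive_timeout seeds the session value for interactive connections
    cur.execute("SHOW VARIABLES WHERE Variable_name IN "
                "('wait_timeout', 'interactive_timeout')")
    print(cur.fetchall())
conn.close()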
|
Formatting Errors in Python
Question: I've never used Python and have copied some script (with permission) from
someone online, so I'm not sure why the code is dropping. I'm hoping someone
can understand it and put it right for me!
from os import walk
from os.path import join

#First some options here.
!RootDir = "C:\\Users\\***\\Documents\\GoGames"
!OutputFile = "C:\\Users\\***\\Documents\\GoGames\\protable.csv"
Properties = !!['pb', 'pw', 'br', 'wr', 'dt', 'ev', 're']

print """
SGF Database Maker
==================
Use this program to create a CSV file with sgf info.
"""

def getInfo(filename):
    """Read out file info here and return a dictionary with all the
    properties needed."""
    result = !![]
    file = open(filename, 'r')
    data = file.read(1024) read at most 1kb since we assume all relevant info is in the beginning
    file.close()
    for prop in Properties:
        try:
            i = data.lower().index(prop)
        except !ValueError:
            result.append((prop, ''))
            continue
        try:
            value = data![data.index('![', i)+1 : data.index(']', i)]
        except !ValueError:
            value = ''
        result.append((prop, value))
    return dict(result)

!ProgressCounter = 0
file = open(!OutputFile, "w")
file.write('^Filename^;^PB^;^BR^;^PW^;^WR^;^RE^;^EV^;^DT^\n')
for root, dirs, files in walk(!RootDir):
    for name in files:
        if name![-3:].lower() != "sgf":
            continue
        info = getInfo(join(root, name))
        file.write('^'+join(root, name)+'^;^'+info!['pb']+'^;^'+info!['br']+'^;^'+info!['pw']+'^;^'+info!['wr']+'^;^'+info!['re']+'^;^'+info!['ev']+'^;^'+info!['dt']+'^\n')
        !ProgressCounter += 1
        if (!ProgressCounter) % 100 == 0:
            print str(!ProgressCounter) + " games processed."
file.close()
print "A total of " + str(!ProgressCounter) + " have been processed."
Using Netbeans IDE I get the following error:
!RootDir = "C:\\Users\\***\\Documents\\GoGames"
^
SyntaxError: mismatched input '' expecting EOF
I have previously been able to step through the code as far as file.close(),
where I got an error "does not match outer indentation level".
Is anyone able to put the syntax of this code right for me?
Answer: Remove the exclamation marks in front of variable names, list
declarations (`!![]`) and `except` clauses (`except !ValueError`); this is not
valid Python syntax.
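For illustration, the opening assignments with the stray marks removed would read as follows (note also that the trailing remark on the `file.read(1024)` line in `getInfo` needs a leading `#` to be a valid comment):

RootDir = "C:\\Users\\***\\Documents\\GoGames"
OutputFile = "C:\\Users\\***\\Documents\\GoGames\\protable.csv"
Properties = ['pb', 'pw', 'br', 'wr', 'dt', 'ev', 're']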
|
How to search for a child key in an xml and pass it to another parent with Python/bs4?
Question: Here is a sample of the xml I am working with:
<bare>
<key name="plus.root" value="/apps/mobile/plus"/>
<key name="local.root" value="/apps/net/plus"/>
<key name="slack.messaging.root" value="/apps/root/docs"/>
</bare>
<app name="social">
<key name="social.password" value="secret">
<key name="user" value = "secret">
</app>
<app name="plus">
<key name="user" value = "secret">
</app>
I am trying to look through each key under "bare" and if the first word
matches an app name, move the key/value under the app key (as child). So for
example, plus.root would be removed from the bare section and added under the
"app name=plus section". If the app name does not exist, the key should be
left alone under the bare section.
Currently my code looks like this, but I'm having trouble figuring out how to
do this properly.
from bs4 import BeautifulSoup, Tag

soup = BeautifulSoup(data, "xml")

apps = soup.find("app")
bare = soup.root.bare

# loop over all the "key"s under "bare"
for key in bare.find_all("key"):
    app_name = key["name"].split(".")[0]
    # find apps that match name of the bare key
    app = apps.find("app", {"name": app_name})
    # if we find any, ???append the key to the app???? then remove the key from the bare section
    if app:
        key = key.extract()
        app.append(key)

# remove "bare"
bare.extract()
print(soup.prettify())
Is there a better way to do this?
Answer: Here is a working sample to get you started. Here we are moving all of the
keys under the "bare" to under a separate "apps" tag, grouping by app name:
from bs4 import BeautifulSoup, Tag
data = """
<root>
<bare>
<key name="plus.root" value="/apps/mobile/plus"/>
<key name="local.root" value="/apps/net/plus"/>
<key name="slack.messaging.root" value="/apps/root/docs"/>
</bare>
<apps/>
</root>
"""
soup = BeautifulSoup(data, "xml")

apps = soup.find("apps")
bare = soup.root.bare

# loop over all the "key"s under "bare"
for key in bare.find_all("key"):
    app_name = key["name"].split(".")[0]
    # find app and create if not exists
    app = apps.find("app", {"name": app_name})
    if not app:
        app = soup.new_tag("app")
        app.attrs["name"] = app_name
        apps.append(app)
    # remove the key from "bare" and append to a specific app
    key = key.extract()
    app.append(key)

# remove "bare"
bare.extract()
print(soup.prettify())
Here is the result:
<?xml version="1.0" encoding="utf-8"?>
<root>
 <apps>
  <app name="plus">
   <key name="plus.root" value="/apps/mobile/plus"/>
  </app>
  <app name="local">
   <key name="local.root" value="/apps/net/plus"/>
  </app>
  <app name="slack">
   <key name="slack.messaging.root" value="/apps/root/docs"/>
  </app>
 </apps>
</root>
|
Python Guessing game
Question: I am a beginner in Python using 2.7.11 and I have made a guessing
game. Here is my code so far:
def game():
    import random
    random_number = random.randint(1, 100)
    tries = 0
    low = 0
    high = 100
    while tries < 8:
        if (tries == 0):
            guess = input("Guess a random number between {} and {}.".format(low, high))
        tries += 1
        try:
            guess_num = int(guess)
        except:
            print("That's not a whole number!")
            break
        if guess_num < low or guess_num > high:
            print("That number is not between {} and {}.".format(low, high))
            break
        elif guess_num == random_number:
            print("Congratulations! You are correct!")
            print("It took you {} tries.".format(tries))
            playAagain = raw_input("Excellent! You guessed the number! Would you like to play again (y or n)? ")
            if playAagain == "y" or "Y":
                game()
        elif guess_num > random_number:
            print("Sorry that number is too high.")
            high = guess_num
            guess = input("Guess a number between {} and {} .".format(low, high))
        elif guess_num < random_number:
            print("Sorry that number is too low.")
            low = guess_num
            guess = input("Guess a number between {} and {} .".format(low, high))
    else:
        print("Sorry, but my number was {}".format(random_number))
        print("You are out of tries. Better luck next time.")

game()
1. How would I incorporate a system so that each time the user guesses the correct number, it gives feedback with the fewest number of guesses it has ever taken to correctly guess the number? Like a high score of how many guesses it took them, changed only when it is beaten.
Answer: You can create a static variable like this: `game.highscore = 10`
* and update it each time the user wins the game (check whether `tries` is less than the high score)
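A minimal sketch of that idea (Python 2, simplified to the guessing core; the attribute on the function object survives across rounds):

import random

def game():
    number = random.randint(1, 100)
    tries = 0
    while True:
        guess = int(raw_input("Guess a number between 1 and 100: "))
        tries += 1
        if guess == number:
            print("Correct! It took you {} tries.".format(tries))
            if tries < game.highscore:
                game.highscore = tries
                print("New record! Fewest guesses so far: {}".format(tries))
            break
        elif guess > number:
            print("Sorry, that number is too high.")
        else:
            print("Sorry, that number is too low.")

game.highscore = 101  # worse than any possible first win
game()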
|
python requests POST basic authentication returning 200 with empty body
Question:
import requests
s = requests.Session()
r = requests.Request('POST', 'https://'+url+'?name=value')
prep = r.prepare()
prep.headers = {'User-Agent': 'curl/7.38.0', 'Accept': '*/*', 'Authorization': 'Basic <load of hex>==', 'Content-Type': 'application/json'}
response = s.send(prep)
output:
DEBUG:requests.packages.urllib3.connectionpool:"POST /url?name=value HTTP/1.1" 200 None
Why am I getting 200 indicating authentication success yet no json returned
giving me the necessary credentials? (If I tamper with the Authorization
header it returns 403 as expected).
I've taken the request headers directly from a successful curl request to the
same service. Why is requests not returning anything?
successful cURL log:
$ curl -v https://url?name=value -X POST -H "Content-Type: application/json" -u <user:secret>
* Hostname was NOT found in DNS cache
* Trying <ip>...
* Connected to url port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* Server certificate:
* <cert detail>
* SSL certificate verify ok.
* Server auth using Basic with user '<user>'
> POST url?name=value HTTP/1.1
> Authorization: Basic <hex encoded string>==
> User-Agent: curl/7.38.0
> Host: <host>
> Accept: */*
> Content-Type: application/json
>
< HTTP/1.1 200 OK
* Server openresty/1.7.4.1 is not blacklisted
< Server: openresty/1.7.4.1
< Date: Tue, 08 Mar 2016 15:47:08 GMT
< Content-Type: application/json; charset=utf-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< Correlation-Id: <id>
< Strict-Transport-Security: max-age=31536000
< Cache-Control: private, no-cache, no-store, no-transform, max-age=0
< X-XSS-Protection: 1; mode=block
< X-XSS-Protection: 1; mode=block
< X-Content-Type-Options: nosniff
< X-Content-Type-Options: nosniff
< X-Frame-Options: deny
<
{"access_token": "<hex value>", "token_type": "bearer", "expires_in": "3599", "scope": "<bunch of authorisations>", "jti": "<string>"}
* Connection #0 to host <host> left intact
Answer: When I was using requests the server was returning an empty response (even
though authentication is a success and http code is 200).
I'm not sure how to avoid this using requests so I did the following in plain
old urllib:
import urllib.request
headers = {'User-Agent': 'blah', 'Authorization': 'Basic <hex>==', 'Accept': '*/*', 'Content-Type': 'application/json'}
req = urllib.request.Request(url, headers=headers, method='POST')
print(urllib.request.urlopen(req).readlines())
The response now returns with correct data. I'm assuming there is some caching
issue with the server as now when I use all the extra headers requests adds in
via urllib I get a similar successful response.
|
Python - AttributeError
Question: So for a line class I'm doing, I keep getting an error that says
AttributeError: Line instance has no attribute 'point0'
I'm declaring the line like this:
def __init__(self, point0, point1):
    self.x = point0
    self.y = point1

def __str__(self):
    return '%d %d' % (int(round(self.point0)), int(round(self.point1)))
And I get the x and y from my point class which should already be float values
so I don't need to check for an error in my init method however I do check to
see if point0 and point1 are floats in my rotate method:
def rotate(self, a):
    if not isinstance(a, float) or not isinstance(self.point0, float) or not isinstance(self.point1, float):
        raise Error("Parameter \"a\" illegal.")
    self.point0 = math.cos(a) * self.point0 - math.sin(a) * self.point1
    self.point1 = math.sin(a) * self.point0 + math.cos(a) * self.point1
So why does Python keep saying that it has no attribute point0? I also tried
changing my __init__ method to look like this:
def __init__(self, point0, point1):
    self.point0 = point0
    self.point1 = point1
But when I do that the error says point0 has no attribute float. So why do I
keep getting this error? Here's the code I'm using to test:
p0 = Point(0.0, 1.0)
p1 = Point(2.0, 3.0)
line = Line(p0,p1)
print line
Answer: I'm curious... how much do you know about scope in Python?
In your class, you have a member variable named x and another named y. Your
`__init__` function accepts an argument called point0 and another called point1.
It saves point0 in the x member variable, and point1 in y. Then, in your
rotate function, you attempt to access a variable called point0. Do you see
the problem?
An important thing to understand when programming (and this is true in most
programming languages, if not all of them) is that the name of an argument
doesn't affect the name of that data elsewhere. I can pass a variable called
foo into a function that takes an argument called bar. In that function, I
have to refer to the data as bar because that's the name of the variable.
Later, after I've called that function, the name of the variable is still foo,
because only the variable inside the function is called bar. Does that make
sense?
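In other words, pick one set of names and use it everywhere. A minimal sketch of a consistent version (assuming, as the rotate check does, that the coordinates are stored as floats):

import math

class Line:
    def __init__(self, point0, point1):
        # store under the names the rest of the class uses
        self.point0 = point0
        self.point1 = point1

    def __str__(self):
        return '%d %d' % (int(round(self.point0)), int(round(self.point1)))

    def rotate(self, a):
        # read both originals into temporaries so the second line
        # does not use the already-rotated value
        x, y = self.point0, self.point1
        self.point0 = math.cos(a) * x - math.sin(a) * y
        self.point1 = math.sin(a) * x + math.cos(a) * y

line = Line(0.0, 1.0)
print line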
|
pySerial very strange behaviour ... Code works when executed in shell but not in a script
Question: I'm struggling with pySerial. To be brief ... The code below works great when
executed in the Python Shell ...
>>> import serial
>>> s=serial.Serial("COM5", 9600)
>>> while(1):
        s.write("#")
        s.readline()
Produces the output below in the shell:
1L
'56.73\r\n'
1L
'56.73\r\n'
When the same code is written in a script say "readSerial.py" the script will
either not transmit the hashtag that triggers the serial device to transmit
the data, or will not receive the replied data.
I'm using pySerial 3, but have noticed the same behavior with 2.7. Using
Python 2.7.10 64 bit on Win10. But also noticed this behavior on Raspberry Pi
with /dev/ttyACM0. I would really like to have this solved. I'm not that
experienced in Python so this might be an oversight.
Hardware is checked and double checked.
Thanks,
KK
* * *
Thanks, but I really know how to print data from Python. The problem is really
with pySerial. Here is the complete code, please don't discus errors in
commented out code. These are of no concern here.
#from numpy import array
#import matplotlib.animation as animation
import time
import serial as s

#data = array([])
Arduino = s.Serial("COM5", 9600)
i = 0
while (1):
    try:
        Arduino.write("#")
        time.sleep(.1)
        inString = Arduino.readline()
        data = float(inString)
        print i, ":", data
        i += 1
        time.sleep(1)
    except KeyboardInterrupt:
        break

Arduino.close()
But like I said, this doesn't work. As far as I can tell the readline()
function does not return. And ... there's really no point in making it return
by setting the tx timeout. To add to the mystery: when the code is debugged
(i.e. stepped through) it does work.
Thanks in advance,
KK
Answer: From the [FAQ ](https://pythonhosted.org/pyserial/appendix.html#how-to):
> **Example works in serial.tools.miniterm but not in script.**
>
> **The RTS and DTR lines are switched when the port is opened.** This may
> cause some processing or reset on the connected device. In such a cases an
> immediately following call to write() may not be received by the device.
>
> A **delay after opening the port, before the first write()** , is
> recommended in this situation. E.g. a time.sleep(1)
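Applied to the script above, that means sleeping once right after opening the port, before the first write; a minimal sketch:

import time
import serial as s

Arduino = s.Serial("COM5", 9600)
time.sleep(1)  # let the device settle after the DTR/RTS toggle on open
Arduino.write("#")
print Arduino.readline()
Arduino.close()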
|
Using TensorFlow with Sage
Question: I've written something in TensorFlow that makes use of some nice group theory
functions that work very easily in Sage (and seem prohibitively difficult to
code from scratch). The Sage part works on its own, and the TensorFlow part
works on its own, but I can't figure out how to get them working together.
Specifically: I can make a file test.py using Sage functions and run it from
the command line using:
sage --python test.py
with no problem. But calling a function defined in test.py from a .py file
using TensorFlow fails ("Import error, no module named Sage"), presumably
because Sage (6.x) uses Python 2.6.x, while TensorFlow uses Python 2.7 or
3.3+.
Is there a way around this?
Thanks!
EDIT: I'm not sure if this is relevant, but if I fire up normal Python (the
kind TensorFlow uses), I get this:
from sage.env import SAGE_LOCAL
SAGE_LOCAL
which outputs `'$SAGE_ROOT/local'`.
However if I fire up Sage first I get this:
sage
SAGE_LOCAL
which outputs '`'/usr/lib/sagemath/local'`.
I just upgraded to Sage 7.0 if that matters (this didn't work in 6.10 either,
though).
Answer: Here's something **NOT** to do (yet); don't just take whatever Sage install
you happen to have and do:
$ sage -pip install https://storage.googleapis.com/tensorflow/mac/tensorflow-0.7.1-cp27-none-any.whl
Even though this "works", it also had several worrying messages about
upgrading numpy and six, which completely broke the numpy part of my Sage
installation. This was with Sage-6.9.
Which means you have to make sure you have a Sage that has the right versions
of Numpy and six. With the latest development version, we do, apparently:
$ cd /path/to/my/bleeding/edge/sage/directory
$ ./sage -pip install https://storage.googleapis.com/tensorflow/mac/tensorflow-0.7.1-cp27-none-any.whl
Collecting tensorflow==0.7.1 from https://storage.googleapis.com/tensorflow/mac/tensorflow-0.7.1-cp27-none-any.whl
Using cached https://storage.googleapis.com/tensorflow/mac/tensorflow-0.7.1-cp27-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): six>=1.10.0 in ./local/lib/python2.7/site-packages/six-1.10.0-py2.7.egg (from tensorflow==0.7.1)
Collecting protobuf==3.0.0b2 (from tensorflow==0.7.1)
Using cached protobuf-3.0.0b2-py2.py3-none-any.whl
Collecting wheel (from tensorflow==0.7.1)
Using cached wheel-0.29.0-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): numpy>=1.10.1 in ./local/lib/python2.7/site-packages (from tensorflow==0.7.1)
Requirement already satisfied (use --upgrade to upgrade): setuptools in ./local/lib/python2.7/site-packages/setuptools-20.1.1-py2.7.egg (from protobuf==3.0.0b2->tensorflow==0.7.1)
Installing collected packages: protobuf, wheel, tensorflow
Successfully installed protobuf-3.0.0b2 tensorflow-0.7.1 wheel-0.29.0
You are using pip version 8.0.2, however version 8.1.0 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
And then I don't get any failures.
So one has to be careful, but apparently it's possible. However, you
definitely have to use it from "within" Sage; Sage-as-distribution wouldn't
allow you to use your other tensorflow with it.
|
How to plot multiple regression 3D plot in python
Question: I am not a scientist, so please assume that I do not know the jargon of
experienced programmers, or the intricacies of scientific plotting techniques.
Python is the only language I know (beginner+, maybe intermediate).
**Task**: Plot the results of a multiple regression (z = f(x, y)) as a
two-dimensional plane on a 3D graph (as I can using OSX's graphing utility,
example, or as implemented here [Plot Regression
Surface](http://stackoverflow.com/questions/7863906/plot-regression-surface)
with R).
After a week searching **Stackoverflow** and reading various documentations of
**matplotlib** , **seaborn** and **mayavi** I finally found [Python the
simplest way to plot 3d
surface](http://stackoverflow.com/questions/12423601/python-the-simplest-way-
to-plot-3d-surface/25586869#25586869) which sounded promising. So here is my
data and code:
**_First try with matplotlib:_**
shape: (80, 3)
type: <type 'numpy.ndarray'>
zmul:
[[ 0.00000000e+00 0.00000000e+00 5.52720000e+00]
[ 5.00000000e+02 5.00000000e-01 5.59220000e+00]
[ 1.00000000e+03 1.00000000e+00 5.65720000e+00]
[ 1.50000000e+03 1.50000000e+00 5.72220000e+00]
[ 2.00000000e+03 2.00000000e+00 5.78720000e+00]
[ 2.50000000e+03 2.50000000e+00 5.85220000e+00]
……]
import matplotlib
from matplotlib.ticker import MaxNLocator
from matplotlib import cm
from numpy.random import randn
from scipy import array, newaxis
Xs = zmul[:,0]
Ys = zmul[:,1]
Zs = zmul[:,2]
surf = ax.plot_trisurf(Xs, Ys, Zs, cmap=cm.jet, linewidth=0)
fig.colorbar(surf)
ax.xaxis.set_major_locator(MaxNLocator(5))
ax.yaxis.set_major_locator(MaxNLocator(6))
ax.zaxis.set_major_locator(MaxNLocator(5))
fig.tight_layout()
plt.show()
All I get is an empty 3D coordinate frame with the following error message:
RuntimeError: Error in qhull Delaunay triangulation calculation: singular
input data (exitcode=2); use python verbose option (-v) to see original qhull
error.
I tried to see if I could play around with the plotting parameters and checked
this site <http://www.qhull.org/html/qh-impre.htm#delaunay>, but I really
cannot make sense of what I am supposed to do.
**_Second try with mayavi:_**
Same data, divided into 3 numpy arrays:
type: <type 'numpy.ndarray'>
X: [ 0 500 1000 1500 2000 2500 3000 ….]
type: <type 'numpy.ndarray'>
Y: [ 0. 0.5 1. 1.5 2. 2.5 3. ….]
type: <type 'numpy.ndarray'>
Z: [ 5.5272 5.5922 5.6572 5.7222 5.7872 5.8522 5.9172 ….]
Code:
from mayavi import mlab

def multiple3_triple(tpl_lst):
    X = xs
    Y = ys
    Z = zs
    # Define the points in 3D space
    # including color code based on Z coordinate.
    pts = mlab.points3d(X, Y, Z, Z)
    # Triangulate based on X, Y with Delaunay 2D algorithm.
    # Save resulting triangulation.
    mesh = mlab.pipeline.delaunay2d(pts)
    # Remove the point representation from the plot
    pts.remove()
    # Draw a surface based on the triangulation
    surf = mlab.pipeline.surface(mesh)
    # Simple plot.
    mlab.xlabel("x")
    mlab.ylabel("y")
    mlab.zlabel("z")
    mlab.show()
All I get is this:
[](http://i.stack.imgur.com/7MNcs.png)
If this matters, I am using the 64 bit version of Enthought's Canopy on OSX
10.9.3
Will be grateful for any input on what I am doing wrong.
EDIT: Posting the final code that worked, in case it helps someone.
'''After the usual imports'''
def multiple3(tpl_lst):
    mul = []
    for tpl in tpl_lst:
        calc = (.0001*tpl[0]) + (.017*tpl[1]) + 6.166
        mul.append(calc)
    return mul

fig = plt.figure()
ax = fig.gca(projection='3d')
'''some skipped code for the scatterplot'''
X = np.arange(0, 40000, 500)
Y = np.arange(0, 40, .5)
X, Y = np.meshgrid(X, Y)
Z = multiple3(zip(X, Y))
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.autumn,
                       linewidth=0, antialiased=False, alpha=.1)
ax.set_zlim(1.01, 11.01)
ax.set_xlabel(' x = IPP')
ax.set_ylabel('y = UNRP20')
ax.set_zlabel('z = DI')
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
[](http://i.stack.imgur.com/ABq81.png)
Answer: for matplotlib, you can base off the [surface
example](http://matplotlib.org/examples/mplot3d/surface3d_demo.html) (you're
missing the `np.meshgrid` step):
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.gca(projection='3d')
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(X, Y)
R = np.sqrt(X**2 + Y**2)
Z = np.sin(R)
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
ax.set_zlim(-1.01, 1.01)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
|
Forever repeat of an eventloop after each loop completion, in python 3.4
Question: I am using python 3.4. I'm trying to have a loop of several tasks running
asynchronously and when one circle of loop was complete then again loop starts
from beginning. It is necessary to start the loop again only after the last
round was complete. I thought this code might be what I'm looking for but it
only runs once.
import asyncio
@asyncio.coroutine
def some_task(name, number):
print('task ', name, ' started')
yield from asyncio.sleep(number)
print('task ', name, ' finished')
@asyncio.coroutine
def loop_executer(loop, tasks):
if not loop.is_running():
loop.run_until_complete(asyncio.wait(tasks))
tasks = [
asyncio.ensure_future(some_task("A", 2)),
asyncio.ensure_future(some_task("B", 5)),
asyncio.ensure_future(some_task("C", 4))]
ev_loop = asyncio.get_event_loop()
ev_loop.create_task(loop_executer(ev_loop, tasks))
ev_loop.run_forever()
Answer: There is no mechanism to repeat tasks - wrap them in a `while` loop.
import asyncio
@asyncio.coroutine
def some_task(name, number):
print('task ', name, ' started')
yield from asyncio.sleep(number)
print('task ', name, ' finished')
@asyncio.coroutine
def loop_executer(loop):
# you could use even while True here
while loop.is_running():
tasks = [
some_task("A", 2),
some_task("B", 5),
some_task("C", 4)
]
yield from asyncio.wait(tasks)
ev_loop = asyncio.get_event_loop()
ev_loop.create_task(loop_executer(ev_loop))
ev_loop.run_forever()
You don't have to use `ensure_future` on coroutines.
|
Sort and add elements of a list based on a common attribute
Question: I have a csv file that has the data:
California C1 A 1
. . . .
. . . .
. . . .
. . . .
so it looks like this when viewed in python:
['California','C1','A',1]
['Hawaii','H1','B',2]
['California','C1','A',3]
['California','C2','A',4]
['Hawaii','H1','A',5]
['Hawaii','H1','A',6]
['California','C1','B',7]
['Hawaii','H2','B',8]
['California','C1','B',9]
['Hawaii','H2','A',10]
I wanted to have the output be the top 1 of each group, as follows:
['California','C1','B',16]
['California','C2','A',4]
['Hawaii','H1','A',11]
['Hawaii','H2','A',10]
Basically, I want to sum the last element of each row based on the first 3
attributes, then return the top 1 given those three attributes. My
code is as follows:
import collections
import csv

def top_1(lst):
    ranking = collections.Counter(lst)
    return [elem for elem, _ in sorted(ranking.most_common(), key=lambda x: (-x[1], x[0]))[:1]]

csvReader = csv.reader(open('data.csv','rb'), delimiter=',', quotechar='"')
data = []
for line in csvReader:
    for i in range(int(line[3])):
        data.append((line[0], line[1], line[2]))
print top_1(data)
but it does not give me the output that I am expecting.
Answer: The following approach should give you the desired output:
from collections import Counter
from itertools import groupby, islice
import csv
counts = Counter()
with open('data.csv', 'rb') as f_input:
csv_input = csv.reader(f_input)
for row in csv_input:
counts.update({tuple(row[:3]) : int(row[3])})
output = []
for k, g in groupby(sorted(counts.iteritems(), key=lambda x:(x[0][0], -x[1])), lambda x:x[0][0]):
output.extend([list(e[0]) + [e[1]] for e in islice(g, 0, 2)])
print output
This will display:
[['California', 'C1', 'B', 16], ['California', 'C2', 'A', 4], ['Hawaii', 'H1', 'A', 11], ['Hawaii', 'H2', 'A', 10]]
|
Python: how to compute a fast measure of robustness on a network?
Question: I am working with a regular `NxN` network, and I need to determine a measure
of its robustness (namely, the ability to withstand failures). To do this, I
am using the [average node
connectivity](https://en.wikipedia.org/wiki/Connectivity_\(graph_theory\)),
which is described by [this
function](https://networkx.github.io/documentation/latest/reference/generated/networkx.algorithms.connectivity.connectivity.average_node_connectivity.html#networkx.algorithms.connectivity.connectivity.average_node_connectivity).
However, this calculation is proving extremely slow and computationally
demanding, as you can see below. I am supposed to run the script below
`60,000` times, so time is a very crucial factor. For this reason I am willing
to reduce the size of the network, but I want to find the best compromise
between network size and computational demand.
**My question:**
**Is there a faster way to come up with the same result? Or is there another
measure you suggest in order to avoid long computations?**
The script and the timings:
'''
Timing the average node connectivity function
'''
from __future__ import division
import networkx as nx
import time
#Lattice network
N=10 #This can be 10, 20, 30, ...
G=nx.grid_2d_graph(N,N)
pos = dict( (n, n) for n in G.nodes() )
labels = dict( ((i, j), i + (N-1-j) * N ) for i, j in G.nodes() )
nx.relabel_nodes(G,labels,False)
inds=labels.keys()
vals=labels.values()
inds.sort()
vals.sort()
pos2=dict(zip(vals,inds))
start_time = time.clock()
conn=nx.average_node_connectivity(G)
print('N: '+str(N))
print('Avg node conn: '+str(round(conn, 3)))
print("--- %s seconds ---" % (time.clock() - start_time))
The first two timings:
N: 10
Avg node conn: 3.328
--- 6.80954619325 seconds --- #This must be multiplied by 60,000
N: 20
Avg node conn: 3.636
--- 531.969059161 seconds --- #This must be multiplied by 60,000
Answer: The average node connectivity calculated here is the average of local node
connectivity over all **pairs** of nodes of G. So this function will go over
all possible pairs which makes it so slow. One suggestion would be to leave
the size of your network as you want it but then randomly sample from all
possible pairs of nodes and compute a connectivity estimate based on that
sample.
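For example, a rough sketch of that sampling idea (the helper name and the
sample size `k` are arbitrary choices of mine):

    import random
    import networkx as nx
    from networkx.algorithms.connectivity import local_node_connectivity

    def approx_avg_node_connectivity(G, k=200):
        # estimate average node connectivity from k random node pairs
        nodes = list(G.nodes())
        total = 0
        for _ in range(k):
            u, v = random.sample(nodes, 2)  # two distinct nodes
            total += local_node_connectivity(G, u, v)
        return float(total) / k

    G = nx.grid_2d_graph(10, 10)
    print(approx_avg_node_connectivity(G))

The estimate converges on the exact average as `k` grows, so you can trade
accuracy for speed directly.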
|
not able to run threads simultaneosly in python
Question:
from threading import Thread
import time
def Function1():
print "11"
print "12"
time.sleep(5)
print "13"
print "14"
def Function2():
print "21"
print "22"
time.sleep(10)
print "23"
print "24"
for i in range(3):
t1= Thread(target=Function1())
t2= Thread(target=Function2())
t1.start()
t2.start()
Above program runs sequentially...
11
12
13
14
21
22
23
24
11
12
13
14
21
22
23
24
11
12
13
14
21
22
23
24
How do I run two functions (threads) simultaneously? I don't want to use
multiprocessing. I need to write a Python script for performance testing, and
for that I need the threads to run simultaneously. Is there any way to solve
this problem?
Answer: > how to run two functions(threads) simultaneously? I don't want to use
> multiprocessing..
Unfortunately, you can't really have these two simultaneously (or at least,
you can't run things truly concurrently using `threading`). This is an
[inherent limitation of the CPython interpreter's
GIL](http://jessenoller.com/blog/2009/02/01/python-threads-and-the-global-
interpreter-lock).
The only thing that `threading` gives is single-core context switching, where
the interpreter will run a function on a single core, then swap it out
temporarily and run a different function on a single core, etc. This is
possibly useful for applications that do something in the background while,
e.g., monitoring user input, but that's about it.
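As a sketch of that context switching (note that `target=Function1()` in the
question calls the function immediately and hands `Thread` its return value,
`None`; passing the function object itself lets the sleeps overlap):

    from threading import Thread
    import time

    def task(name, delay):
        print(name + " started")
        time.sleep(delay)  # sleep releases the GIL, so the threads interleave
        print(name + " finished")

    # target=task (no parentheses) passes the function itself
    t1 = Thread(target=task, args=("A", 2))
    t2 = Thread(target=task, args=("B", 2))
    t1.start()
    t2.start()
    t1.join()
    t2.join()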
|
need help on configuring fortigate firewall CLI code for bulk ipmacbinding by importing a ip/mac entries list file
Question: I am facing a problem with IP/MAC binding on a FortiGate 200D: I have
a list of 3000 entries of IP/MAC addresses, which I have kept in a CSV file.
This is what I am looking for:
1. I want to write code which can import that file.
2. I want to execute this code snippet inside a loop until all entries are
updated.
config firewall ipmacbinding table
edit <index_int>
set ip <address_ipv4>
set mac <address_hex>
set name <name_str>
set status {enable}
end
3. With the above code snippet I would have to manually enter the IP, MAC and name values 3000 times; instead I just want to import a file and have the values added from it.
4. In a few places I learned that this can be achieved with the help of a Perl/Python script, but I am not familiar with either.
I googled but didn't find any information about this, so I hope I can get help
to get this task done.
Thanks.
Format of CSV File is
Index IP Mac name
1 10.10.17.1 aa:bb:cc:00:11:22 first
2 10.10.17.2 cc:dd:ee:ff:22:33 second
3 10.10.17.3 33:44:11:3f:00:88 third
[Formal of CSV File](http://i.stack.imgur.com/tj4VP.jpg)
Answer: I have never used the FortiGate CLI, so I will assume you know how it works and
what to do with it. Below is a small attempt that, if it doesn't work exactly,
will hopefully put you on the correct trail. I have assumed that when you run
the config command, the terminal normally waits for user input, so in this
case the Perl script will pipe in that input.
use strict;
use warnings;
my $csv_file = shift;
open (my $cfh, '<', $csv_file) or die "Unable to open $csv_file: $!";
my @headers = split (' ', <$cfh>);
while(<$cfh>){
my %config;
my @data = split(' ');
@config{@headers}=@data;
open(my $firewall, '|-', 'config firewall ipmacbinding table') or die "Unable to open 'config firewall ipmacbinding table': $!";
print $firewall "edit ",$config{'Index'},"\n";
print $firewall "set ip ",$config{'IP'},"\n";
print $firewall "set mac ",$config{'Mac'},"\n";
print $firewall "set name ",$config{'name'},"\n";
print $firewall "set status {enable}\n";
print $firewall "end\n";
close $firewall;
}
The above is written as an attempt to help you get started on how to make this
work. As I said, I have no experience with FortiGate, so you may need to tweak
this a bit.
If I choose to print this just to my terminal screen as output, like this:
use strict;
use warnings;
my $csv_file = shift;
open (my $cfh, '<', $csv_file) or die "Unable to open $csv_file: $!";
my @headers = split (' ', <$cfh>);
while(<$cfh>){
my %config;
my @data = split(' ');
@config{@headers}=@data;
#open(my $firewall, '|-', 'config firewall ipmacbinding table') or die "Unable to open 'config firewall ipmacbinding table': $!";
print "config firewall ipmacbinding table\n";
print "\tedit ",$config{'Index'},"\n";
print "\tset ip ",$config{'IP'},"\n";
print "\tset mac ",$config{'Mac'},"\n";
print "\tset name ",$config{'name'},"\n";
print "\tset status {enable}\n";
print "end\n";
#close $firewall;
}
it produces the following
config firewall ipmacbinding table
edit 1
set ip 10.10.17.1
set mac aa:bb:cc:00:11:22
set name first
set status {enable}
end
config firewall ipmacbinding table
edit 2
set ip 10.10.17.2
set mac cc:dd:ee:ff:22:33
set name second
set status {enable}
end
config firewall ipmacbinding table
edit 3
set ip 10.10.17.3
set mac 33:44:11:3f:00:88
set name third
set status {enable}
end
hopefully this is enough for you to get started.
|
Django: How to return to previous URL
Question: Novice here who learned to develop a web app with python using Flask. Now I'm
trying to learn django 1.9 by redoing the same app with django. Right now I am
stuck at trying to get the current URL and pass it as an argument so that the
user can come back once the action on the next page is completed.
In Flask, to return to a previous URL, I would use the 'next' parameter and
the request.url to get the current url before changing page.
In the template you would find something like this:
<a href="{{ url_for('.add_punchcard', id=user.id, next=request.url) }}">Buy punchcard :</a>
and in the view:
redirect(request.args.get("next"))
I thought it would be about the same with django, but I cannot make it work. I
did find some suggestions, but they are for older django versions (older than
1.5) and do not work anymore (and they are pretty convoluted as solutions go!)
Right now, in my view I am using
return redirect(next)
Note: The use of return redirect in django seems very recent itself if I judge
by solutions on the web that always seem to use return HttpResponse(..., so I
take it a lot of changes happened lately in how to do things.
and in the template I have
<a href="{% url 'main:buy_punchcard' member.id next={{ request.path }} %}">Buy punchcard</a>
but this actually return an error
> Could not parse the remainder: '{{' from '{{'
I did add the context_processors in settings.py
TEMPLATE_CONTEXT_PROCESSORS = (
'django.core.context_processors.request',
)
But this is only the last error in a very long streak of errors. Bottom line
is, I can't make it work.
As such, anyone could point me in the right direction as to what is the way to
do this in django 1.9? It look like a pretty basic function so I thought it
would be easier somehow.
Answer: If you want `next` to be included in the query string, then move it outside of
the `url` tag:
<a href="{% url 'main:buy_punchcard' member.id %}?next={{ request.path }}">Buy punchcard</a>
In your view, you can fetch `next` from `request.GET`, and return the redirect
response using either `HttpResponseRedirect` or the `redirect` shortcut.
from django.utils.http import is_safe_url
next = request.GET.get('next', '/default/url/')
# check that next is safe
if not is_safe_url(next):
next = '/default/url/'
return redirect(next)
Note that **it might not be safe to redirect to a url fetched from the query
string**. For example, it could link to a different domain. Django has a
method
[`is_safe_url`](https://github.com/django/django/blob/2bdc9616f469c6b303bdc2711305ce9a1abbdcb6/django/contrib/auth/views.py)
that it uses to check next urls when logging in or out.
|
Python: Multiprocessing - Suds - _pickle.PicklingError: Can't pickle <class> attribute lookup failed
Question: I'm making a SOAP WSDL connection and then I want to run a function in another
process (threading is not suitable).
from suds.client import Client
class dTest:
def setup(self, client):
ws = Client("http://localhost?wsdl")
# then I use some SOAP API methods and return dictionary with results
return result_dict
def test(self, name):
# in this function I use another SOAP API methods
return
def main(self, client):
result_dict = self.setup(client)
for name, num in result_dict.items():
p = multiprocessing.Process(target=self.test, args=[name])
p.start() # <- on this line I have an error
p.join(timeout)
if p.is_alive():
p.terminate()
Then I have an error
File "D:\IPWStest\test\dTest.py", line 318, in main
p.start()
File "C:\Python34\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Python34\lib\multiprocessing\context.py", line 212, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Python34\lib\multiprocessing\context.py", line 313, in _Popen
return Popen(process_obj)
File "C:\Python34\lib\multiprocessing\popen_spawn_win32.py", line 66, in __init__
reduction.dump(process_obj, to_child)
File "C:\Python34\lib\multiprocessing\reduction.py", line 59, in dump
ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class 'suds.sudsobject.UserCredentialsType'>: attribute lookup UserCredentialsType on suds.sudsobject failed
I can't find any solution for this issue. How can I pickle this line?
> result_dict = self.setup(client)
Does another solution perhaps exist? Can anyone help me? What does this error
mean?
Answer: The problem was in the `setup` function. The web-service connection `ws` created
in `setup` is used in the `test` function; that's why the object can't be
pickled. If you are using `multiprocessing` with `suds` or any similar library,
a new connection should be created in every new process.
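A minimal sketch of that fix, reusing the placeholder WSDL URL from the
question - the client is built inside the child process instead of being
passed to it:

    import multiprocessing
    from suds.client import Client

    def test(name):
        ws = Client("http://localhost?wsdl")  # fresh connection per process
        # ... call SOAP API methods with ws here ...

    if __name__ == '__main__':
        for name in ['A', 'B']:
            p = multiprocessing.Process(target=test, args=[name])
            p.start()
            p.join()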
|
Regular expressions: replace comma in string, Python
Question: Somehow puzzled by the way regular expressions work in python, I am looking to
replace all commas inside strings that are preceded by a letter and followed
either by a letter or a whitespace. For example:
2015,1674,240/09,PEOPLE V. MICHAEL JORDAN,15,15
2015,2135,602832/09,DOYLE V ICON, LLC,15,15
The first line has effectively 6 columns, while the second line has 7 columns.
Thus I am trying to replace the comma between (N, L) in the second line by a
whitespace (N L) as so:
2015,2135,602832/09,DOYLE V ICON LLC,15,15
This is what I have tried so far, without success however:
new_text = re.sub(r'([\w],[\s\w|\w])', "", text)
Any ideas where I am wrong?
Help would be much appreciated!
Answer: The pattern you use, `([\w],[\s\w|\w])`, is _consuming_ a word char (= an
alphanumeric or an underscore, `[\w]`) before a `,`, then matches the comma,
and then matches (and again, consumes) 1 character - a whitespace, a word
character, or a literal `|` (as inside the character class, the pipe character
is considered a literal pipe symbol, not alternation operator).
So, the main problem is that `\w` matches both letters and digits.
You can actually leverage lookarounds:
(?<=[a-zA-Z]),(?=[a-zA-Z\s])
See the [regex demo](https://regex101.com/r/bV0rN1/1)
The `(?<=[a-zA-Z])` is a positive lookbehind that requires a letter to be
right before the `,` and `(?=[a-zA-Z\s])` is a positive lookahead that
requires a letter or whitespace to be present right after the comma.
Here is a [Python demo](https://ideone.com/Ur8YfF):
import re
p = re.compile(r'(?<=[a-zA-Z]),(?=[a-zA-Z\s])')
test_str = "2015,1674,240/09,PEOPLE V. MICHAEL JORDAN,15,15\n2015,2135,602832/09,DOYLE V ICON, LLC,15,15"
result = p.sub("", test_str)
print(result)
If you still want to use `\w`, you can exclude digits and underscore from it
using an opposite class `\W` inside a negated character class:
(?<=[^\W\d_]),(?=[^\W\d_]|\s)
See [another regex demo](https://regex101.com/r/bV0rN1/2)
|
Variable apparently not defined when it is
Question: Having an issue with a function I created in a separate file.
Here's my root program:
#Import TKINTER toolset:
from tkinter import *
from mousexy import *
#Starting variables:
#Defining mouse x and y coordinates
global mouse_x
global mouse_y
mouse_x = 0
mouse_y = 0
#Main window:
window = Tk()
window.title = ("Solomon's animation tool")
#Workspace and Canvas:
global wrkspace
wrkspace = Frame(window, bg="red",width=640,height=480)
global canvas
canvas = Canvas(wrkspace,bg="white",width=640,height=480)
#Keyframe editor: (DO LATER)
#Test for finding mouse xy
canvas.bind("<Button-1>",find_mouse_xy)
wrkspace.pack()
canvas.pack()
#Runs window:
window.mainloop()
and here's my function in a separate file (mousexy.py)
def find_mouse_xy(event):
mouse_x = canvas.winfo_pointerx()
mouse_y = canvas.winfo_pointery()
print ("x: " + str(mouse_x))
print ("y: " + str(mouse_y))
when I run my root program and click, the console tells me that `canvas` is
not defined when it clearly is, what am I doing wrong?
mouse_x = canvas.winfo_pointerx()
NameError: name 'canvas' is not defined
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Python34\lib\tkinter\__init__.py", line 1487, in __call__
return self.func(*args)
File "C:\Users\SOLLUU\Documents\Python\Animation software\mousexy.py", line 2, in find_mouse_xy
mouse_x = canvas.winfo_pointerx()
NameError: name 'canvas' is not defined
>>>
Answer: `find_mouse_xy` is looking for `mousexy.canvas`. You defined
`__main__.canvas`. They are two entirely separate variables.
What you probably want is
def find_mouse_xy(event):
# Coordinate of the mouse when the event occurred.
mouse_x = event.x
mouse_y = event.y
# What object was clicked? This handler could
# be attached to many different widgets in your program.
where = event.widget
# ...
|
Python countdown clock with GUI
Question: I'm having problems with a countdown clock that I was making in Python for a
Raspberry Pi. I need to have a countdown clock that counts down from 60
minutes. When time runs out it should display a red text "GAME OVER".
I've already made one using TKinter and a `for` loop for the actual timer but
I couldn't find any way to stop the `for` loop. I gave up on it.
Is there anyone nice enough to maybe write the actual timer and timer stopping
part? I'm good enough at python and TKinter to do everything else that I need.
Answer: I'd recommend using [generators](https://wiki.python.org/moin/Generators) to
handle your for loop and will provide a minimal **example** but on
StackOverflow no one is going to "write the **actual** timer and timer
stopping part" (see [What topics can I ask
here](http://stackoverflow.com/help/on-topic))
Note this is an example I had **before this question was posted** and thought
it would be helpful to you.
import tkinter as tk
def run_timer():
for time in range(60):
label["text"] = time #update GUI here
yield #wait until next() is called on generator
root = tk.Tk()
label = tk.Label()
label.grid()
gen = run_timer() #start generator
def update_timer():
try:
next(gen)
except StopIteration:
pass #don't call root.after since the generator is finished
else:
root.after(1000,update_timer) #1000 ms, 1 second so it actually does a minute instead of an hour
update_timer() #update first time
root.mainloop()
you will still need to figure out for yourself how to implement
`after_cancel()` to stop it and the red "GAME OVER" text.
|
Does there exist a surefire, cross platform way to reproduce a SIGBUS?
Question: This question is out of pure curiosity; personally I have seen this signal
being raised, but only rarely so.
I asked on [the C chatroom](http://chat.stackoverflow.com/rooms/54304/c)
whether there was a reliable way to reproduce it. And on this very room, [user
@Antti Haapala](http://chat.stackoverflow.com/users/918959) found one. At
least on Linux x86_64 systems... And after some fiddling around, the same
pattern was reproducible with three languages -- however, only on x86_64 Linux
based systems since these were the only systems this could be tested on...
Here's how:
## C
$ cat t.c
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
int main () {
int fd = open ("empty", O_RDONLY);
char *p = mmap (0, 40960, PROT_READ, MAP_SHARED, fd, 0);
printf("%c\n", p[4096]);
}
$ :>empty
$ gcc t.c
$ ./a.out
Bus error (core dumped)
## Python
$ cat t.py
import mmap
import re
import os
with open('empty', 'wb') as f:
f.write(b'a' * 4096)
with open('empty', 'rb') as f:
# memory-map the file, size 0 means whole file
mm = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)
os.system('truncate --size 0 empty')
b'123' in mm
$ python t.py
Bus error (core dumped)
## Java
$ cat Test.java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.Random;
public final class Test
{
private static final int SIZE = 4096;
private static final Path VICTIM = Paths.get("/tmp/somefile");
public static void main(final String... args)
throws IOException
{
// Create our victim; delete it first if it already exists
Files.deleteIfExists(VICTIM);
Files.createFile(VICTIM);
final Random rnd = new Random();
final byte[] contents = new byte[SIZE];
rnd.nextBytes(contents);
Files.write(VICTIM, contents);
try (
final FileChannel channel = FileChannel.open(VICTIM,
StandardOpenOption.READ, StandardOpenOption.WRITE);
) {
final MappedByteBuffer buffer
= channel.map(FileChannel.MapMode.READ_ONLY, 0L, SIZE);
channel.truncate(0L);
buffer.get(rnd.nextInt(SIZE));
}
}
}
$ javac Test.java
$ strace -ff -o TRACE java Test
Exception in thread "main" java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
at Test.main(Test.java:35)
fge@erwin:~/tmp$ grep -w SIGBUS TRACE.*
TRACE.15850:rt_sigaction(SIGBUS, NULL, {SIG_DFL, [], 0}, 8) = 0
TRACE.15850:rt_sigaction(SIGBUS, {0x7fe3db71b480, ~[RTMIN RT_1], SA_RESTORER|SA_RESTART|SA_SIGINFO, 0x7fe3dc5d7d10}, {SIG_DFL, [], 0}, 8) = 0
TRACE.15850:--- SIGBUS {si_signo=SIGBUS, si_code=BUS_ADRERR, si_addr=0x7fe3dc9fb5aa} ---
Again: all the examples above are only on Linux x86_64 systems; I have nothing
else at my disposal.
Would there be a way to reproduce this on other systems?
Side questions: if the examples above were reproducible on systems not having
`SIGBUS`, what would happen?
Answer: Perhaps not what you were looking for but gets the job done.
$ cat t2.c
#include <signal.h>
int main(){raise(SIGBUS);}
|
PyCharm module import error
Question: I am new to PyCharm and am having difficulty importing modules that I have
written into the Python console. If I try to import a module that is native to
Python I can import that module without difficulty but if I try to import a
module that I have written I get an ImportError: No module named
'ModuleITriedToImportName'. For instance here is a simple self written module
to pickle files called "filepickle":
import pickle
def saveDbase(filename, object):
file = open(filename, 'wb')
#pickle.dump(object, file) # pickle to file
#pickle.dump(object, open(filename, 'wb'))
pickle.dump(object, file)
file.close() # any file-like object will do
def loadDbase(filename):
file = open(filename, 'rb')
object = pickle.load(file) # unpickle from file
file.close() # recreates object in memory
return object
If I try to "import pickle" at the PyCharm Python Console then the import
works without any error. If I try to "import filepickle" I receive the error
message:
ImportError: No module named 'filepickle'
The module filepickle works just fine if I run filepickle within PyCharm but I
am unable to import filepickle in the Python console. If anybody knows how to
get PyCharm to allow me to import modules that I have written into the PyCharm
Python console I would appreciate the help.
Answer: I couldn't reproduce your error (PyCharm 5.0.4, OS X 10.10.5, Python
3.4.3/2.7.6). You could try running this code in a console to find out the
current working directory; if it's not the same as filepickle's directory,
that is most likely the problem.
import os
os.getcwd()
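If the directories differ, one workaround (the path below is hypothetical -
use wherever `filepickle.py` actually lives) is to put that directory on
`sys.path` before importing:

    import sys
    sys.path.append('/path/to/folder/containing/filepickle')  # hypothetical path
    import filepickle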
|
Write Geopy location to CSV file
Question: I'm trying to write the output of this `geopy` object to a `csv` file, but it
puts each letter in a different column and prints the latitude and longitude
on a different line. How can I fix that?
I would like to be able to run this function at different times and write the
new address to the next line, saving the data instead of overwriting it. Can
this be done with `csv` writing in python?
from geopy.geocoders import Nominatim
import csv
def loc_find(file):
'''
This function take in a user given location and gives back the address with
city, zip code, state, county and country.
It also provides latitude and longitude.
'''
geolocator = Nominatim()
loc_input = raw_input("Add the location you would like data back for: ")
location = geolocator.geocode(loc_input)
print(location.address)
print((location.latitude, location.longitude))
with open(r"", 'w') as fp:
a = csv.writer(fp)
data = location
a.writerows(data)
Answer: Well, you are passing a single
[`Location`](https://geopy.readthedocs.org/en/1.10.0/index.html?highlight=location#geopy.location.Location)
object as
[`rows`](https://docs.python.org/2/library/csv.html#csv.csvwriter.writerows)
(a list of row objects), you should pass it as a single row.
Replace:
    a.writerows(data)
With:
    a.writerow(data)
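For the other part of the question - keeping earlier rows instead of
overwriting them - opening the file in append mode is one way to do it. A
minimal sketch (the filename is a placeholder):

    import csv
    from geopy.geocoders import Nominatim

    geolocator = Nominatim()
    location = geolocator.geocode("Berlin")

    # 'ab' appends new rows instead of overwriting (Python 2 csv convention)
    with open("locations.csv", "ab") as fp:
        writer = csv.writer(fp)
        writer.writerow([location.address, location.latitude, location.longitude])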
|
scrapy general parse workflow
Question: I'm new to python and scrapy and wish to understand the methodology. I have
tried the official tutorial on scrapy and followed it but it is only a basic
example. My requirement described below is different and only a little more
complex.
There is a site which displays Items from a db.
For each Item, I need to take attributes from each individual Item page and
the search results (listings) page. The search results page URL is in the
format:
http://example.com/search?&start_index=0
Changing _start_index_ will change where the results start from. Only 10
records are displayed per results page.
Results are displayed in table cells in the format:
link | Desc. | Status
I need to retrieve Desc. and Status attributes, then follow the link to a page
containing more details, which I will also retrieve for Item.
I wish to retrieve a given number of records from any starting index. My
current method using scrapy is shown below (edited for brevity):
import scrapy
from scrapy.exceptions import CloseSpider
from cbury_scrapy.items import MyItem
class ExampleSpider(scrapy.Spider):
name = "example"
allowed_domains = ["example.com"]
start_urls = [
"http://example.com/cgi/search?&start_index=",
]
url_index = 0
URLS_PER_PAGE = 10
records_remaining = 16
crawl_done = False
da = MyItem()
def parse(self, response):
while self.crawl_done != True:
url = "http://example.com/cgi/search?&start_index=" + str(self.url_index)
yield scrapy.Request(url, callback=self.parse_results)
self.url_index += self.URLS_PER_PAGE
def parse_results(self, response):
# Retrieve all table rows from results page
for row in response.xpath('//table/tr[@class="datrack_resultrow_odd" or @class="datrack_resultrow_even"]'):
# extract the Description and Status fields
# extract the link to Item page
url = row.xpath('//td[@class="datrack_danumber_cell"]//@href').extract_first()
yield scrapy.Request(url, callback=self.parse_item)
if self.records_remaining == 0:
self.crawl_done = True
raise CloseSpider('Finished scrape of requested number of records.')
self.records_remaining -= 1
def parse_item(self, response):
# get fields from item page
# ...
yield self.item
The code currently does not stop when _records_remaining_ reaches 0, even
after throwing the _CloseSpider_ exception, so that is a bug.
I feel this stems from being wrong in how the parsing methods are arranged.
What would be the correct way to structure this in the "scrapy" way? Any help
is appreciated.
Answer:
def parse(self, response):
    list_of_indexes = response.xpath('place xpath here that leads to a list of urls for indexes')
    for indexes in list_of_indexes:
        # maybe the urls are only tags ie. ['/extension/for/index1', '/extension/for/index2', etc...]
        index_urls = ['http://domain.com' + index for index in indexes]
        for index_url in index_urls:
            yield scrapy.Request(index_url, callback=self.parse_index)

def parse_index(self, response):
    da = MyItem()
    da['record_date'] = response.xpath('xpath_here')
    da['record_summary'] = response.xpath('xpath_here')
    da['additional_record_info'] = response.xpath('xpath_here')
    yield da
This example is over-simplified but I hope it helps.
You want to instantiate your item `da = MyItem()` within the parse itself.
To answer the larger question about parse flow I would start with URLs. Once
you find the XPaths for the indexes from the start_url you'll use
scrapy.Request(url=index_url, callback=self.parse_index)
This will direct your spider to the next parse method, parse_index.
index_url will be drawn from an iteration through the necessary xpaths.
parse_index will be just like parse but will then draw out the info from
the next index url.
If this answer is going in the right direction I can post an example later.
|
Python subprocess piping to stdin
Question: I'm trying to use python Popen to achieve what looks like this using the
command line.
echo "hello" | docker exec -i $3 sh -c 'cat >/text.txt'
The goal is to pipe the "hello" text into the `docker exec` command and have
it written to the docker container.
I've tried this but can't seem to get it to work.
import subprocess
from subprocess import Popen, PIPE, STDOUT
p = Popen(('docker', 'exec', '-i', 'nginx-ssl', 'sh', '-c', 'cat >/text.txt'), stdin=subprocess.PIPE)
p.stdin.write('Hello')
p.stdin.close()
Answer: You need to give `stdin` the new line also:
p.stdin.write('Hello\n')
That is the same thing even with `sys.stdout`. You don't need to give `print`
a new line because it does that for you, but for any writing to a file that
you do manually, you need to include it yourself. You should use
`p.communicate('Hello\n')` instead, though. It's made for that.
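A sketch of the `communicate` variant, which writes to stdin, closes it, and
waits for the process in one call:

    from subprocess import Popen, PIPE

    p = Popen(('docker', 'exec', '-i', 'nginx-ssl', 'sh', '-c', 'cat >/text.txt'),
              stdin=PIPE)
    p.communicate('Hello\n')  # feed stdin, close it, wait for exit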
|
TypeError: findall() takes at least 2 arguments (1 given)
Question: posted a much worse version of this question before. I've calmed down, refined
my searches and I've almost figured out what I need. I'm trying to extract all
the words ending in "ing" from a decently sized text file. Also, I'm supposed
to be using regex but that has me incredibly confused, so at this point I'm
just trying to get the results I need. here's my code:
import re
file = open('ing words.txt', 'r')
pattern = re.compile("\w+ing")
print re.findall(r'>(\w+ing<')
here's what I get:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-32-861693e3a217> in <module>()
3 pattern = re.compile("\w+ing")
4
----> 5 print re.findall(r'>(\w+ing<')
TypeError: findall() takes at least 2 arguments (1 given)
I'm still very new at this, and I don't know exactly why the second argument
is needed (I know that the short answer is "because", but I'd like to know the
theory if someone could take the time to explain it), but more-so how to add a
second argument that won't break my code even further. I'm confident (but
probably wrong) that after " print re.findall(r'>(\w+ing<') " I need some way
of re-telling my terminal that it needs to search within that ing words.txt.
Am I even close?
Answer: [`re.findall()`](https://docs.python.org/2/library/re.html#re.findall)
requires at least 2 arguments to be provided - the pattern itself and the string
to search in. You, though, meant to call `findall` on your compiled pattern,
passing it the text to search - for example, the contents of your file:

    print pattern.findall(file.read())
|
Python Syntax Error (Unicode error)
Question: [Click here to see screenshot](http://i.stack.imgur.com/Bg04E.jpg)
I am trying to convert a CSV to XLS using Python 3.5.1 I have attached a
picture to show the issue
import csv, xlwt
files = ["C:\Users\Office\Documents"]
for i in files:
f=open(i, 'rb')
g = csv.reader ((f), delimiter=";")
wbk= xlwt.Workbook()
sheet = wbk.add_sheet("Sheet 1")
for rowi, row in enumerate(g):
for coli, value in enumerate(row):
sheet.write(rowi,coli,value)
wbk.save(i + '.xls')
Answer: Following [@KoebmandSTO's
advice](https://stackoverflow.com/questions/35906530/python-syntax-error-
unicode-error#comment59473494_35906530) you may want to [try
this](https://www.google.com/#q=\(unicode+error\)+%27unicodeescape%27).
you are using backslashes in the string that are normally used to escape
special characters like `\n`, to prevent this behaviour use `r"..."`:
files = [r"C:\Users\Office\Documents"]
see [this answer](http://stackoverflow.com/questions/2081640/what-exactly-do-
u-and-r-string-flags-do-in-python-and-what-are-raw-string-l) for better
explanation on what the `r` does.
or backslash escape the backslash with `\\`:
files = ["C:\\Users\\Office\\Documents"]
since the `\` is a special character that needs to be escaped.
|
Why do I get "table not found" even though I can see it created?
Question: I first created the User model before I read the Django documentation about
authentication so I put all attributes in the same model. So, later I tried to
split it into User and User profile. But when I run the the population script,
it says User profile table is not found even though I saw the SQL that created
it.
These are two classes connected to the User model that I import.
from django.contrib.auth.models import User
class UserProfile(models.Model):
user = models.OneToOneField(User)
profilepic = models.ImageField(blank=True)
city = models.ForeignKey(City)
slug = models.SlugField(unique=True)
def save(self, *args, **kwargs):
@property
def avg_rating(self):
return self.userrating_set.all().aggregate(Avg('rating'))['rating__avg']
class UserRating(models.Model):
user = models.ForeignKey(User)
comment = models.CharField(max_length=500)
for_username = models.CharField(max_length=128)
rating = models.IntegerField(default=5)
def __unicode__(self):
return unicode(self.rating)
And this is the portion of the population script where the problem is:
new_user = User.objects.get_or_create(username=username, email=email)[0]
#new_user.profilepic = profile_picture
new_user.firstname = first_name
new_user.secondname = last_name
new_user.save()
new_user_profile = UserProfile.objects.get_or_create(user=new_user, city=created_city)
new_user_profile.slug = username
new_user_profile.save()
And this is the error I get when running the script:
Traceback (most recent call last):
File "C:\Users\bnbih\excurj\populationScript.py", line 108, in <module>
populate()
File "C:\Users\bnbih\excurj\populationScript.py", line 101, in populate
new_user_profile = UserProfile.objects.get_or_create()
File "C:\python27\lib\site-packages\django\db\models\manager.py", line 92, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "C:\python27\lib\site-packages\django\db\models\query.py", line 422, in get_or_create
return self.get(**lookup), False
File "C:\python27\lib\site-packages\django\db\models\query.py", line 351, in get
num = len(clone)
File "C:\python27\lib\site-packages\django\db\models\query.py", line 122, in __len__
self._fetch_all()
File "C:\python27\lib\site-packages\django\db\models\query.py", line 966, in _fetch_all
self._result_cache = list(self.iterator())
File "C:\python27\lib\site-packages\django\db\models\query.py", line 265, in iterator
for row in compiler.results_iter():
File "C:\python27\lib\site-packages\django\db\models\sql\compiler.py", line 700, in results_iter
for rows in self.execute_sql(MULTI):
File "C:\python27\lib\site-packages\django\db\models\sql\compiler.py", line 786, in execute_sql
[Finished in 0.8s with exit code 1]cursor.execute(sql, params)
File "C:\python27\lib\site-packages\django\db\backends\utils.py", line 81, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "C:\python27\lib\site-packages\django\db\backends\utils.py", line 65, in execute
return self.cursor.execute(sql, params)
File "C:\python27\lib\site-packages\django\db\utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "C:\python27\lib\site-packages\django\db\backends\utils.py", line 65, in execute
return self.cursor.execute(sql, params)
File "C:\python27\lib\site-packages\django\db\backends\sqlite3\base.py", line 485, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.OperationalError: no such table: mainapp_userprofile
Answer: Django's [`sqlmigrate`](https://docs.djangoproject.com/en/1.9/ref/django-
admin/#sqlmigrate) just shows you _what_ will get run, it doesn't apply any
changes, you need to run
[`migrate`](https://docs.djangoproject.com/en/1.9/ref/django-admin/#migrate)
> Prints the SQL for the named migration. This requires an active database
> connection, which it will use to resolve constraint names; this means you
> must generate the SQL against a copy of the database you wish to later apply
> it on.
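To actually apply the pending migrations (and create the missing
`mainapp_userprofile` table - assuming your app is named `mainapp`), run:

    python manage.py makemigrations mainapp
    python manage.py migrate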
|
Clicking Specific button with Selenium
Question: I am trying to click a particular button with Selenium in Python, but am
having trouble identifying that particular button. For example, if I was on
the google page of [this](https://www.google.com/webhp?sourceid=chrome-
instant&ion=1&espv=2&ie=UTF-8#q=ignominious), and I wanted to have the
translation bar drop down, how would I go about referencing that specific
element. Inspecting it in my browser I see some of what I assume to be its
data as:
<div style="clear: both;" aria-controls="uid_0" aria-expanded="false"
class="_LJ _qxg xpdarr _WGh vk_arc" data-fbevent="fastbutton" jsaction="kx.t;
fastbutton: kx.t" role="button" tabindex="0" data-ved="0ahUKEwiwn-6K17XLAhVLWD4KHTk9CTkQmDMILzAA">
However, from this point I'm not sure how I would use the find element by
functions to reference what I need to in order to call it properly.
driver.find_element_by_*("?").click()
import unittest
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
#comment
print ("Let's talk about Python.")
driver = webdriver.Firefox()
driver.get("http://www.google.com")
assert "Google" in driver.title
elem = driver.find_element_by_name("q")
elem.send_keys("ignominious")
elem.send_keys(Keys.RETURN)
driver.find_element_by_*("?").click()
assert "No results found." not in driver.page_source
driver.close()
Answer: You can use `css_selector` with the class attribute
driver.find_element_by_css_selector("._LJ._qxg.xpdarr._WGh.vk_arc").click()
Or `class_name` with any one of the classes
driver.find_element_by_class_name("_LJ").click()
# or
driver.find_element_by_class_name("_qxg").click()
# or
driver.find_element_by_class_name("xpdarr").click()
# or
driver.find_element_by_class_name("_WGh").click()
# or
driver.find_element_by_class_name("vk_arc").click()
Sending click to the element child will also work
driver.find_element_by_class_name("vk_ard").click()
|
how to call function from DLL in C#/Python
Question: I have the following C++ code for creating a DLL file
// MathFuncsDll.h
#ifdef MATHFUNCSDLL_EXPORTS
#define MATHFUNCSDLL_API __declspec(dllexport)
#else
#define MATHFUNCSDLL_API __declspec(dllimport)
#endif
namespace MathFuncs
{
// This class is exported from the MathFuncsDll.dll
class MyMathFuncs
{
public:
// Returns a + b
static MATHFUNCSDLL_API double Add(double a, double b);
// Returns a - b
static MATHFUNCSDLL_API double Subtract(double a, double b);
// Returns a * b
static MATHFUNCSDLL_API double Multiply(double a, double b);
// Returns a / b
// Throws const std::invalid_argument& if b is 0
static MATHFUNCSDLL_API double Divide(double a, double b);
};
}
// MathFuncsDll.cpp : Defines the exported functions for the DLL application.
//
#include "stdafx.h"
#include "MathFuncsDll.h"
#include <stdexcept>
using namespace std;
namespace MathFuncs
{
double MyMathFuncs::Add(double a, double b)
{
return a + b;
}
double MyMathFuncs::Subtract(double a, double b)
{
return a - b;
}
double MyMathFuncs::Multiply(double a, double b)
{
return a * b;
}
double MyMathFuncs::Divide(double a, double b)
{
return a / b;
}
}
After compiling I have a DLL file and I want to call, for example, the Add function
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Runtime.InteropServices;
namespace call_func
{
class Program
{
[DllImport("MathFuncsDll.dll", CallingConvention = CallingConvention.Cdecl)]
public static extern double MyMathFuncs::Add(double a, double b);
static void Main(string[] args)
{
Console.Write(Add(1, 2));
}
}
}
but I got this message: [error img](http://i.stack.imgur.com/9PJcM.png)
Or in Python:
Traceback (most recent call last):
File "C:/Users/PycharmProjects/RFC/testDLL.py", line 6, in <module>
result1 = mydll.Add(10, 1)
File "C:\Python27\lib\ctypes\__init__.py", line 378, in __getattr__
func = self.__getitem__(name)
File "C:\Python27\lib\ctypes\__init__.py", line 383, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: function 'Add' not found
Please help me fix this code so that I can call, for example, the Add function.
Thank you
Answer: Since it is C++ you are compiling, the exported symbol name will be
[_mangled_](https://en.m.wikipedia.org/wiki/Name_mangling).
You can confirm this by looking at your DLL's exports list, using a tool like
[DLL export viewer](http://www.nirsoft.net/utils/dll_export_viewer.html).
It's best to provide a plain C export from DLLs when you intend to call them
via an [FFI](https://en.m.wikipedia.org/wiki/Foreign_function_interface). You
can do this using [`extern
"C"`](http://stackoverflow.com/questions/1041866/in-c-source-what-is-the-
effect-of-extern-c) to write a wrapper around your C++ methods.
See also:
* [Developing C wrapper API for Object-Oriented C++ code](http://stackoverflow.com/questions/2045774/developing-c-wrapper-api-for-object-oriented-c-code)
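For the Python side, a sketch assuming the DLL has been rebuilt to export an
unmangled function named `Add` through an `extern "C"` wrapper (the DLL name
comes from the question; the unmangled export name is that assumption):

    import ctypes

    mydll = ctypes.CDLL("MathFuncsDll.dll")
    # declare the C signature so ctypes converts the doubles correctly
    mydll.Add.restype = ctypes.c_double
    mydll.Add.argtypes = [ctypes.c_double, ctypes.c_double]

    print(mydll.Add(10.0, 1.0))  # 11.0 once the extern "C" export exists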
|
How does one bypass SyntaxError when parsing code?
Question: I am using openpyxl to read an excel file that will have changing values over
time. The following function will take string inputs from the excel sheets to
make frames for Tkinter.
def make_new_frame(strng, frame_location, frame_name, frame_list):
if not(frame_name in frame_list):
frame_list.append(frame_name)
exec("global %s" %(frame_name)) in globals()
exec("%s = Frame(%s)"%(frame_name, frame_location))
.... etc. The code itself is quite long but I think this is enough of a
snapshot to address my problem.
Now this results in the following error while parsing:
> SyntaxError: function 'make_new_frame' uses import * and bare exec, which
> are illegal because it is a nested function
Everything in the code I included parsed and executed just fine several times,
but after I added a few more lines in later versions in this function, it
keeps spitting out the above error before executing the code. The error
references the third line in the function, (which, I repeat, has been cleared
in the past).
I added "in globals()" as recommended in another SO post, so that solution is
not working. There is a solution online
[here](http://python.6.x6.nabble.com/Tutor-executing-dynamic-code-with-exec-
td1681904.html) that uses setattr, which I have no idea how to use to create a
widget without eventually using exec. I would really appreciate if someone
could tell me how to bypass the error while parsing or provide an alternative
means for a dynamically changing set of frame names.
Quick Note:
* I am aware that setting a variable as global in python is generally warned against, but I am quite certain that it will serve useful for my code
Edit 1: I have no idea why this was downvoted. If I have done something
incorrectly, please let me know what it is so I can avoid doing so in the
future.
Answer: I think this is an [X/Y
problem](http://meta.stackexchange.com/questions/66377/what-is-the-xy-
problem). You are asking for help with solution Y instead of asking for help
on problem X.
If your goal is to create an unknown number of Frame objects based on external
data, you can store references to the frame in a list or dictionary without
having to resort to using `exec` and dynamically created variable names.
`exec` is a perfectly fine function, but is one of those things that you
should never use until you fully understand why you should never use it.
Here's how to solve your actual problem without using exec:
frames = {}
def make_new_frame(strng, frame_location, frame_name, frames):
if not(frame_name in frames):
frames[frame_name] = Frame(frame_location)
return frames[frame_name]
With that, you now have a dictionary (`frames`) that includes a reference for
every new frame by name. If you had a frame named `"foo"`, for example, you
could configure and pack it like this:
frames["foo"].configure(background="red", ...)
frames["foo"].pack(...)
If preserving the order of the frames is important you can use an
[OrderedDict](https://docs.python.org/2/library/collections.html#collections.OrderedDict).
|
Python tkinter Label
Question: Hi I have a little question about Label in tkinter.
When you use Label outside classes, you do something like
import tkinter as tk
root = tk.Tk()
label = tk.Label(root, text = "something", background = "something")
label.pack()
However, when it's inside a class and the code goes something like
import tkinter as tk
class Example(tk.Frame):
COLOURS = [ "#f45", "#ee5", "#aa4", "#a1e433", "#e34412", "#116611",
"#111eeefff", "#3aa922191", "#abbabbaaa" ]
def __init__(self, parent):
tk.Frame.__init__(self, parent)
self.parent = parent
col = 1
for colour in Example.COLOURS:
#
label = tk.Label(self, text=colour, background=colour)
#
label.grid(row=1, column=col)
col += 1
def main():
root = tk.Tk()
ex = Example(root)
root.geometry("+300+300")
root.mainloop()
if __name__ == '__main__':
main()
but shouldn't it be rather like
label = tk.Label(self.parent, text=colour, background=colour)
since self.parent would correspond to root? When I try to do that, I get an
error, and only when I have the label.grid(...) line under it (I tried pack
and it worked fine).
So I thought this code
import tkinter as tk
root = tk.Tk()
label = tk.Label(root)
label.grid(row=0, column=0)
root.mainloop()
wouldn't work either, but it actually worked fine. So I'm confused. Can anyone
explain?
Answer: No, it should not be `self.parent`.
In the class example you give, the class is itself a frame. It is designed
this way to make the example self-contained. By inheriting from `Frame` you
can take all of the code in that class and put it anywhere in the GUI. You can
think of the class and everything in it as a single custom widget. You could
have multiple of these classes, and each one can be treated as a single GUI
object.
To make that work, the class only ever puts widgets inside itself, not in its
parent.
The entire purpose of using a sublcass of `Frame` is to act as a container for
other widgets. If you don't plan on using it as a container for other widgets,
there's no point in inheriting from `Frame`.
It is the equivalent of this, without classes:
import tkinter as tk
root = tk.Tk()
frame = tk.Frame(root)
frame.pack(...)
label = tk.Label(frame, text = "something", background = "something")
label.pack(...)
If you wanted the class to put widgets in the parent, you would define the
class like the following. Notice that it inherits from `object` rather than
`Frame`:
class Example(object):
def __init__(self, parent):
self.parent = parent
...
label = tk.Label(parent, ...)
|
(Tkinter) Image won't show up in new window
Question: I just started using python tkinter and I have a button that opens a new
window. On the new window there is an image, but the image won't show up. Can
you please help me solve my problem?
from tkinter import *
def nwindow():
nwin = Toplevel()
nwin.title("New Window")
btn.config(state = 'disable')
photo2 = PhotoImage(file = 'funny.gif')
lbl2 = Label(nwin, image = photo2)
lbl2.pack()
def quit():
nwin.destroy()
btn.config(state = 'normal')
qbtn = Button(nwin, text = 'Quit', command = quit)
qbtn.pack()
main = Tk()
main.title("Main Window")
main.geometry("750x750")
photo = PhotoImage(file = 'funny.gif')
lbl = Label(main, image = photo)
lbl.pack()
btn = Button(main, text = "New Winodw", command = nwindow)
btn.pack()
main.mainloop()
Answer: Your code doesn't work as posted, but adding .mainloop() should fix your issue:
def nwindow():
nwin = Toplevel()
nwin.title("New Window")
btn.config(state = 'disable')
photo2 = PhotoImage(file = 'funny.gif')
lbl2 = Label(nwin, image = photo2)
lbl2.pack()
nwin.mainloop()
|
Wikipedia JSON parser in python
Question: I want to print the extract of Wikipedia pages, but for each search the page
number changes, so how do I print the extract with a wildcard for the page
number? I tried the following code:
import urllib2
import json
response = urllib2.urlopen('https://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts&exintro=&explaintext=&titles=Stack%20Overflow')
data = json.load(response)
print data["query"]["pages"][0]["extract"]
but it gives error
Traceback (most recent call last):
File "C:/Users/GM/Desktop/pytest/pytest.py", line 6, in <module>
print data["query"]["pages"][0]["extract"]
KeyError: 0
please help
Answer: Try this:
print data["query"]["pages"].values()[0]["extract"]
This creates a list of all of the values in the "pages" dictionary. In your
example, there is only one value, so `[0]` gets it.
If there is more than one value, one of them will be returned. It is
unpredictable which one.
|
Python Iterate through list of list to make a new list in index sequence
Question: How would you iterate through a list of lists, such as:
[[1,2,3,4], [5,6], [7,8,9]]
and construct a new list by grabbing the first item of each list, then the
second, etc. So the above becomes this:
[1, 5, 7, 2, 6, 8, 3, 9, 4]
Answer: You can use a list comprehension along with
[`itertools.izip_longest`](https://docs.python.org/2/library/itertools.html#itertools.izip_longest)
(or `zip_longest` in Python 3)
from itertools import izip_longest
a = [[1,2,3,4], [5,6], [7,8,9]]
[i for sublist in izip_longest(*a) for i in sublist if i is not None]
# [1, 5, 7, 2, 6, 8, 3, 9, 4]
|
Add multiplication signs (*) between coefficients
Question: I have a program in which a user inputs a function, such as `sin(x)+1`. I'm
using `ast` to try to determine if the string is 'safe' by whitelisting
components as shown in [this
answer](http://stackoverflow.com/a/11952618/4414003). Now I'd like to parse
the string to add multiplication (`*`) signs between coefficients without
them.
For example:
* `3x`-> `3*x`
* `4(x+5)` -> `4*(x+5)`
* `sin(3x)(4)` -> `sin(3x)*(4)` (`sin` is already in globals, otherwise this would be `s*i*n*(3x)*(4)`)
Are there any efficient algorithms to accomplish this? I'd prefer a pythonic
solution (i.e. not complex regexes - not because regexes are unpythonic, but
just because I don't understand them as well and want a solution I can
understand. Simple regexes are ok.)
I'm very open to using `sympy` (which looks really easy for this sort of
thing) under one condition: safety. Apparently `sympy` uses `eval` under the
hood. I've got pretty good safety with my current (partial) solution. If
anyone has a way to make `sympy` safer with untrusted input, I'd welcome this
too.
Answer: A regex is easily the quickest and cleanest way to get the job done in vanilla
python, and I'll even explain the regex for you, because regexes are such a
powerful tool it's nice to understand.
To accomplish your goal, use the following statement:
import re
# <code goes here, set 'thefunction' variable to be the string you're parsing>
re.sub(r"((?:\d+)|(?:[a-zA-Z]\w*\(\w+\)))((?:[a-zA-Z]\w*)|\()", r"\1*\2", thefunction)
I know it's a bit long and complicated, but a different, simpler solution
doesn't make itself immediately obvious without even more hacky stuff than
what's gone into the regex here. But, this has been tested against all three
of your test cases and works out precisely as you want.
As a brief explanation of what's going on here: The first parameter to
`re.sub` is the regular expression, which matches a certain pattern. The
second is the thing we're replacing it with, and the third is the actual
string to replace things in. Every time our regex sees a match, it removes it
and plugs in the substitution, with some special behind-the-scenes tricks.
A more in-depth analysis of the regex follows:
* `((?:\d+)|(?:[a-zA-Z]\w*\(\w+\)))((?:[a-zA-Z]\w*)|\()` : Matches a number or a function call, followed by a variable or parentheses.
* `((?:\d+)|(?:[a-zA-Z]\w*\(\w+\)))` : **Group 1**. Note: Parentheses delimit a Group, which is sort of a sub-regex. Capturing groups are indexed for future reference; groups can also be repeated with modifiers (described later). This group matches a number or a function call.
* `(?:\d+)` : Non-capturing group. Any group with `?:` immediately after the opening parenthesis will not assign an index to itself, but still act as a "section" of the pattern. Ex. `A(?:bc)+` will match "Abcbcbcbc..." and so on, but you cannot access the "bcbcbcbc" match with an index. However, without this group, writing "Abc+" would match "Abcccccccc..."
* `\d` : Matches any numerical digit once. A regex of `\d` all its own will match, separately, `"1"`, `"2"`, and `"3"` of `"123"`.
* `+` : Matches the previous element _one or more_ times. In this case, the previous element is `\d`, any number. In the previous example, `\d+` on "123" will successfully match "123" as a single element. This is vital to our regex, to make sure that multi-digit numbers are properly registered.
* `|` : Pipe character, and in a regex, it effectively says `or`: `"a|b"` will match `"a"` OR `"b"`. In this case, it separates "a number" and "a function call"; match a number OR a function call.
* `(?:[a-zA-Z]\w*\(\w+\))` : Matches a function call. Also a non-capturing group, like `(?:\d+)`.
* `[a-zA-Z]` : Matches the first letter of the function call. There is no modifier on this because we only need to ensure the _first_ character is a letter; `A123` is technically a valid function name.
* `\w` : Matches any alphanumeric character or an underscore. After the first letter is ensured, the following characters could be letters, numbers, or underscores and still be valid as a function name.
* `*` : Matches the previous element _0 or more_ times. While initially seeming unnecessary, the star character effectively makes an element _optional_. In this case, our modified element is `\w`, but a function doesn't technically need any more than one character; `A()` is a valid function name. `A` would be matched by `[a-zA-Z]`, making `\w` unnecessary. On the other end of the spectrum, there could be any number of characters _following_ the first letter, which is why we need this modifier.
* `\(` : This is important to understand: _this is not another group_. The backslash here acts much like an escape character would in a normal string. In a regex, any time you preface a special character, such as parentheses, `+`, or `*` with a backslash, it uses it like a normal character. `\(` matches **an opening parenthesis** , for the actual function call part of the function.
* `\w+` : Matches a number, letter or underscore one or more times. This ensures the function actually has a parameter going into it.
* `\)` : Like `\(`, but matches a **closing** parenthesis
* `((?:[a-zA-Z]\w*)|\()` : **Group 2**. Matches a variable, or an opening parenthesis.
* `(?:[a-zA-Z]\w*)` : Matches a variable. This is the exact same as our function name matcher. However, note that this is in a non-capturing group: this is important, because of the way the OR checks. The OR immediately following this looks at this group as a whole. If this was not grouped, the "last object matched" would be `\w*`, which would not be sufficient for what we want. It would say: "match one letter followed by more letters OR one letter followed by a parenthesis". Putting this element in a non-capturing group allows us to control what the OR registers.
* `|` : Or character. Matches `(?:[a-zA-Z]\w*)` or `\(`.
* `\(` : Matches an opening parenthesis. Once we have checked if there is an opening parenthesis, we don't need to check anything beyond it for the purposes of our regex.
Now, remember our two groups, group one and group two? These are used in the
substitution string, `"\1*\2"`. The substitution string is not a true regex,
but it still has certain special characters. In this case, `\<number>` will
insert the group of that number. So our substitution string is saying: "Put
group 1 in (which is either our function call or our number), then put in an
asterisk (*), then put in our second group (either a variable or a
parenthesis)"
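For instance, a quick sanity check of the whole thing (these are hypothetical inputs, not your original test cases):
    import re
    pattern = r"((?:\d+)|(?:[a-zA-Z]\w*\(\w+\)))((?:[a-zA-Z]\w*)|\()"
    for s in ["3x", "2(x+1)", "sin(x)cos(x)"]:
        print re.sub(pattern, r"\1*\2", s)
    # 3*x
    # 2*(x+1)
    # sin(x)*cos(x)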
I think that about sums it up!
|
python - convert encoded json into utf-8
Question: I have several json files that need to be handled in a python script, although
they seem NOT to be in a valid json format:
{
'data': [
{
'ad_id': u'6038487',
'adset_id': u'6038483800',
'campaign_id': u'603763200',
'created_time': u'2015-12-17T15:26:04+0000',
'field_data': [
{u'values': [u'Fahrrad'], u'name': u'what is your vehicle?'},
{u'values': [u'Coco'], u'name': u'first_name'},
{u'values': [u'Homer'], u'name': u'last_name'},
{u'values': [u'[email protected]'], u'name': u'email'},
{u'values': [u'+490999999'], u'name': u'phone_number'}
], 'id': u'5655545710'
},
{
'ad_id': u'39392400',
'adset_id': u'39366200',
'campaign_id': u'39363200',
'created_time': u'2014-12-16T13:01:52+0000',
'field_data': [
{u'values': [u'Frankfurt'], u'name': u'in_welcher_stadt_m\xf6chtest_du_arbeiten?'},
{u'values': [u'Auto'], u'name': u'what is your vehicle?'},
{u'values': [u'Homer'], u'name': u'first_name'},
{u'values': [u'abc'], u'name': u'last_name'},
{u'values': [u'[email protected]'], u'name': u'email'},
{u'values': [u'0555555555'], u'name': u'phone_number'}
],
'id': u'149809770'
}
]
}
1. it has single-quotes instead of double-quotes
2. is encoded (see the `u` prefixes)
3. some letters are encoded e.g. `\xf6` that represents `ö`
Ideally, it should be possible to read the json with this snippet:
import json
from pprint import pprint
with open('leads.json') as data_file:
data = json.load(data_file)
pprint(data)
**How can I convert the input json into a valid json in utf-8 format?**
Answer: As I said, that's not JSON, it's a printed representation of a Python object
(which happens to look similar to JSON). To safely import it, you can use
`ast.literal_eval`:
from pprint import pprint
import ast
with open('leads.json') as data_file:
data = ast.literal_eval(data_file.read())
pprint(data)
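If you then need an actual UTF-8 JSON file on disk, you can write the parsed data back out with the `json` module. A sketch (the output filename is just an example):
    import ast
    import json
    import codecs

    with open('leads.json') as data_file:
        data = ast.literal_eval(data_file.read())

    # write the data back out as real JSON; ensure_ascii=False keeps
    # characters like 'ö' as UTF-8 instead of \u escapes
    with codecs.open('leads_valid.json', 'w', encoding='utf-8') as out:
        json.dump(data, out, ensure_ascii=False, indent=4)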
|
Flask: cannot import name 'app'
Question: Trying to run my python file `updater.py` to SSH to a server and run some
commands every few set intervals or so. I'm using APScheduler to run the
function `update_printer()` from `__init__.py`. Initially I got a `working
outside of application context error` but someone suggested that I just import
app from `__init__`.py. However it isn't working out so well. I keep getting a
`cannot import name 'app'` error.
**app.py**
from queue_app import app
if __name__ == '__main__':
app.run(debug=True)
**__init__.py**
from flask import Flask, render_template
from apscheduler.schedulers.background import BackgroundScheduler
from queue_app.updater import update_printer
app = Flask(__name__)
app.config.from_object('config')
@app.before_first_request
def init():
sched = BackgroundScheduler()
sched.start()
sched.add_job(update_printer, 'interval', seconds=10)
@app.route('/')
def index():
return render_template('index.html')
**updater.py**
import paramiko
import json
from queue_app import app
def update_printer():
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(app.config['SSH_SERVER'], username = app.config['SSH_USERNAME'], password = app.config['SSH_PASSWORD'])
...
**File Structure**
queue/
app.py
config.py
queue_app/
__init__.py
updater.py
**Error**
Traceback (most recent call last):
File "app.py", line 1, in <module>
from queue_app import app
File "/Users/name/queue/queue_app/__init__.py", line 3, in <module>
from queue_app.updater import update_printer
File "/Users/name/queue/queue_app/updater.py", line 3, in <module>
from queue_app import app
ImportError: cannot import name 'app'
What do I need to do be able to get to the app.config from updater.py and
avoid a "working outside of application context error" if ran from
APScheduler?
Answer: It's a circular dependency: `__init__.py` imports `updater`, and `updater` in
turn tries to import `app` back from `queue_app` while `__init__.py` is still
executing, so the name `app` does not exist yet. The simplest fix is to defer
the import until the moment it is actually needed.
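A sketch of `updater.py` with the deferred import (everything else stays as you have it):
    import paramiko
    import json

    def update_printer():
        # Imported here, not at module level: by the time this runs,
        # queue_app is fully initialized and 'app' exists
        from queue_app import app
        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ssh.connect(app.config['SSH_SERVER'],
                    username=app.config['SSH_USERNAME'],
                    password=app.config['SSH_PASSWORD'])
        ...
Alternatively, restructure so that `app` lives in a module that imports nothing from your own package, and have both `__init__.py` and `updater.py` import it from there.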
|
Python find second lowest value in list
Question: I have a list of lists, such that:
a = [[1,0.8,0.4,0.1,0.3,0.5,1],
[1,0.8,0.5,0.0,0.3,0.5,1]],
........................]
As can be seen in `a[1]` there is a zero value in the array (it was originally a negative value; see below). At some point
later on in my code, I subtract the lowest value away from a constant (in this
case it is 1) within a loop, such that:
b = []
for i in range(len(a)):
b.append(1-min(a[i]))
However this presents a problem as in `a[1]` I want 1-0.1 and not 1-0.0. The
value of 0.0 was originally a negative value (its a noisy data point) and so I
used:
a[a<0]=0.0
I cannot remove the value entirely using `a=a[a>0.0]` as it is important that
I keep all of the data points (these are y values that have corresponding x
values). I would ideally like to ignore it rather then remove it.
Is there a way I could achieve something like:
b = []
for i in range(len(a)):
b.append(1-min(a[i]) where min(a[i]) is greater than 0) # i.e. the lowest value that isn't 0
Answer: Here is one solution.
b = []
for i in range(len(a)):
b.append(1 - min(filter(lambda x: x>0, a[i])))
No need to remove anything from the source list, just do a temporary filter, or even just:
b = map(lambda x : 1 - min(filter(lambda y: y>0, x)), a)
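Equivalently, with a list comprehension and a generator expression, which also behaves the same on Python 3 (where `filter` and `map` return iterators):
    b = [1 - min(x for x in row if x > 0) for row in a]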
|
local variable context_dict referenced before assignment
Question: I am making a django app. As far as I can tell everything needed is in views.py, but when I
run the server it generates the error `local variable 'context_dict' referenced
before assignment`.
I have defined the context_dict variable in the view, but it still
generates the error.
views.py
from django.shortcuts import render
from .models import States,Colleges
def index(request):
all_states = States.objects.all()
context_dict = {'all_states':all_states}
return render(request,'practise_app/index.html',context_dict)
def college(request,state_slug):
try:
state = States.objects.get(slug = state_slug)
colleges = Colleges.objects.filter(state = state)
context_dict = {'state':state,'colleges':colleges}
except States.DoesNotExist:
pass
return render(request,'practise_app/colleges.html',context_dict)
TRACEBACK:
Environment:
Request Method: GET
Request URL: http://127.0.0.1:8000/madhya-pradesh/
Django Version: 1.8
Python Version: 3.5.1
Installed Applications:
('django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'practise_app')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware')
Traceback:
File "C:\Users\sahib navlani\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\core\handlers\base.py" in get_response
132. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "D:\practise_project\practise_app\views.py" in college
19. return render(request,'practise_app/colleges.html',context_dict)
Exception Type: UnboundLocalError at /madhya-pradesh/
Exception Value: local variable 'context_dict' referenced before assignment
Answer: That's pretty obvious, your code went to the `except` block, but
`context_dict` is only defined in `try` block, so when you use it in your
`render` function, it's not defined. The quickest fix is to define
`context_dict` as empty dict at the beginning of the function so that it's
always there when you return it.
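A sketch of the fixed view:
    def college(request, state_slug):
        context_dict = {}  # always defined, even if the lookup below fails
        try:
            state = States.objects.get(slug=state_slug)
            colleges = Colleges.objects.filter(state=state)
            context_dict = {'state': state, 'colleges': colleges}
        except States.DoesNotExist:
            pass
        return render(request, 'practise_app/colleges.html', context_dict)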
|
Instagram API get location data on geotagged photo
Question: Is it possible to retrieve location data about a business or a venue a photo
was tagged at on `Instagram` using `python` and the `Instagram API`?
I have been using `access token` for my own account and have `Python
Instagram` installed.
Answer: You should query the image using the
[`get_media`](https://www.instagram.com/developer/endpoints/media/#get_media)
request to get its location details.
If the image was geo-tagged, the response will include the location details
like this:
"location": {
"latitude": 40.417044464,
"name": "Puerta del Sol Madrid",
"longitude": -3.703540741,
"id": 3002373
}
To get this data with `python` use the [`Python-
Instagram`](https://github.com/Instagram/python-instagram) package and call
`api.media`:
import instagram
# Fill in access token here and image id
access_token = ''
media_id = ''
api = instagram.client.InstagramAPI(access_token=access_token)
res = api.media(media_id)
print res.location
# You should get a Location object with a latitude and a longitude point
>> Location: 3002373 (Point: (40.417044464, -3.703540741))
|
Python: how to get the weekday from a CSV?
Question: I have a sample CSV which contains various columns, and I have to extract
the weekday from the given `date` column. The sample is shown below:
Name,Birthdate,Age,Address
ABC,3-10-2016 11:00:00AM,21,XYZ Street 21 zone
BCD,3-11-2016 15:54:00PM,22,WXY Street 21/A, S zone
CDW,4-11-2015 21:09:00PM,22,ZYX Street 21Avenue, North Zone
I want to read the CSV and extract the date to determine the weekday of the
given date column.
So far I have created code to read the CSV and get the weekday, but I am
unable to make it work on any other CSV.
The code is given below:
import csv
from datetime import datetime as dt
with open('date.csv', 'r') as f:
f.readline()
for line in f:
date = dt.strptime(line.strip(), '%m-%d-%Y %H:%M:%S').strftime('%A')
print date
please help me here as this is a part of my academic research.
NOTE: In case if the question is not clear then please let me know. :)
Answer: Use the [`csv` module](https://docs.python.org/2/library/csv.html) to read CSV
files, then parse the one column. Since you have a file with headers, it'd be
easiest to use the `DictReader()` approach here:
import csv
from datetime import datetime
with open(filename, 'rb') as infile:
reader = csv.DictReader(infile)
for row in reader:
birthdate = row['Birthdate'] # keys are named in the first row of your CSV
            # note the trailing %p, which consumes the AM/PM suffix in your sample data
            birthdate = datetime.strptime(birthdate, '%m-%d-%Y %H:%M:%S%p')
            print birthdate.strftime('%A')
|
adding data to database with columns continuously added
Question: Hello, I am trying to add data to a database with sqlite3 in Python. However, I
am not sure how to write the SQL code to add data to a database that
continuously gets more columns. How would I write it?
Thank you for your time.
Answer: To insert data you can use the cursor to execute the query. See the example
from this Python tutorial:
<http://www.bogotobogo.com/python/python_sqlite_connect_create_drop_table.php>
import sqlite3
db = sqlite3.connect('data/test.db')
cursor = db.cursor()
cursor.execute('''CREATE TABLE books(id INTEGER PRIMARY KEY,
... title TEXT, author TEXT, price TEXT, year TEXT)
... ''')
db.commit()
import sqlite3
db = sqlite3.connect('data/test.db')
cursor = db.cursor()
title1 = 'Learning Python'
author1 = 'Mark Lutz'
price1 = '$36.19'
year1 ='Jul 6, 2013'
title2 = 'Two Scoops of Django: Best Practices For Django 1.6'
author2 = 'Daniel Greenfeld'
price2 = '$34.68'
year2 = 'Feb 1, 2014'
cursor.execute('''INSERT INTO books(title, author, price, year)
... VALUES(?,?,?,?)''', (title1, author1, price1, year1))
cursor.execute('''INSERT INTO books(title, author, price, year)
... VALUES(?,?,?,?)''', (title2, author2, price2, year2))
db.commit()
Maybe this would help.
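Since the question is about tables that keep gaining columns: you can extend an existing table with `ALTER TABLE ... ADD COLUMN`, and as long as your `INSERT` statements name their columns explicitly, old rows and old code keep working. A sketch, reusing the `books` table above (the `publisher` column and the book values are just made-up examples):
    # add a new column to the existing table as the schema grows
    cursor.execute("ALTER TABLE books ADD COLUMN publisher TEXT")

    # name the columns explicitly so existing code without the new column still works
    cursor.execute('''INSERT INTO books(title, author, price, year, publisher)
                      VALUES(?,?,?,?,?)''',
                   ('Fluent Python', 'Luciano Ramalho', '$39.99', 'Aug 2015', "O'Reilly"))
    db.commit()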
|
UDF's in redshift : Possible to reference a udf within another
Question: Is it possible to nest UDFs within each other?
The following is code for computing confidence intervals in A/B tests. Of
course, I could write one huge function that does it all, but I'm wondering
about a better way to achieve this goal.
set search_path to public;
create function cumnormdist(x float)
returns float
IMMUTABLE AS $$
import math
b1 = 0.319381530
b2 = -0.356563782
b3 = 1.781477937
b4 = -1.821255978
b5 = 1.330274429
p = 0.2316419
c = 0.39894228
h=math.exp(-x * x / 2.0)
if(x >= 0.0) :
t = 1.0 / ( 1.0 + p * x )
return (1.0 - c * h * t *( t *( t * ( t * ( t * b5 + b4 ) + b3 ) + b2 ) + b1 ))
else :
t = 1.0 / ( 1.0 - p * x );
return ( c * h * t *( t *( t * ( t * ( t * b5 + b4 ) + b3 ) + b2 ) + b1 ))
$$ language plpythonu;
set search_path to public;
create or replace function conversion(experience_total float,experience_conversions float)
returns float
IMMUTABLE AS $$
return experience_conversions*1.0/experience_total
$$ language plpythonu;
create or replace function zscore(total_c float,conversions_c float,total_t float,conversions_t float )
returns float
IMMUTABLE AS $$
import math
z = conversion(total_t,conversions_t )-conversion(total_c,conversions_c) # Difference in means
s =(conversion(total_t,conversions_t)*(1-conversion(total_t,conversions_t)))/total_t+(conversion(total_c,conversions_c)*(1-conversion(total_c,conversions_c)))/total_c
return float(z)/float(math.sqrt(s))
$$ language plpythonu;
create or replace function confidence(total_c float,conversions_c float,total_t float,conversions_t float )
returns float
IMMUTABLE AS $$
import math
return (1 - round(cumnormdist(zscore(total_c, conversions_c, total_t, conversions_t)), 4)) * 100.00
$$ language plpythonu;
The individual calls work fine, eg : `select cumnormdist (-3.1641397476);`
**If I call them from inside another function definition, they don't**; for
example, `zscore`, which calls the `conversion` function:
ERROR: NameError: global name 'zscore' is not defined. Please look at svl_udf_log for more information
DETAIL:
-----------------------------------------------
error: NameError: global name 'zscore' is not defined. Please look at svl_udf_log for more information
code: 10000
context: UDF
query: 0
location: udf_client.cpp:298
process: padbmaster [pid=3585]
-----------------------------------------------
**If I could nest functions inside each other (instead of having UDFs as
above that are finally nested), that would be a reasonable status quo.**
End goal : Publish these computations in Tableau.
Answer: Here's how I solved it. UDF's cannot cross-reference the contents of another
UDF, so you can create a custom library, upload it to AWS using CREATE
library.
[More here](http://docs.aws.amazon.com/redshift/latest/dg/udf-python-language-
support.html)
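A sketch of what that looks like (the bucket path, credentials, and the `ab_stats` module name are placeholders): package `cumnormdist`, `conversion`, and `zscore` into a plain Python module, zip it, upload it to S3, and register it, then import it from the UDF body:
    CREATE OR REPLACE LIBRARY ab_stats LANGUAGE plpythonu
    FROM 's3://my-bucket/ab_stats.zip'
    CREDENTIALS 'aws_access_key_id=...;aws_secret_access_key=...';

    create or replace function confidence(total_c float, conversions_c float,
                                          total_t float, conversions_t float)
    returns float
    IMMUTABLE AS $$
    # the installed library is importable from any UDF body
    from ab_stats import cumnormdist, zscore
    return (1 - cumnormdist(zscore(total_c, conversions_c, total_t, conversions_t))) * 100.0
    $$ language plpythonu;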
|
Connecting Python with Teradata using Teradata module
Question: I have installed Python 2.7.0 and the Teradata module on Windows 7. I am not able
to connect to and query TD from Python.
    pip install teradata
Now I want to import the teradata module in Python and perform operations like:
1. firing queries to Teradata and getting the result set
2. checking whether the connection to Teradata was made
Please help me write code for this, as I am new to Python and have no
information on how to connect to Teradata.
Answer: Download the Teradata Python module and python pyodbc.pyd from internet.
Install using cmd install setup.py.
Here is the sample script for connecting to teradata and extracting data:
import teradata
import pyodbc
import sys

udaExec = teradata.UdaExec(appName="HelloWorld", version="1.0",
                           logConsole=False)
session = udaExec.connect(method="odbc", dsn="prod32",
                          username="PRODRUN", password="PRODRUN")

REJECTED = 'R'
f = file("output.txt", "w")
sys.stdout = f
cursor = session.cursor()
# execute the query once and reuse the fetched rows
rows = cursor.execute("SELECT SEQ_NO,FRQFBKDC,PNR_RELOC FROM ttemp.ffremaining ORDER BY 1,2,3").fetchall()
for row in rows:
    ff_remaining = cursor.execute("select count(*) as coun from ttemp.ffretroq_paxoff where seq_no=? and status <> ?",
                                  (row.seq_no, REJECTED)).fetchall()
    print ff_remaining[0].coun, row.seq_no, REJECTED
|
What happened to my code? --too many arguments Python error--
Question: So I have the following code, which worked until some time ago:
import sys
from PyQt4 import QtCore, QtGui
from SerialMonitor import Ui_SerialMonitor
class StartQT4(QtGui.QMainWindow):
def __init__(self, parent=None):
QtGui.QWidget.__init__(self,parent)
self.ui = Ui_SerialMonitor()
self.ui.setupUi(self)
QtCore.QObject.connect(self.ui.readButton,QtCore.SIGNAL("clicked()"),self.startReading)
QtCore.QObject.connect(self.ui.stopButton, QtCore.SIGNAL("clicked()"),self.stopReading)
def startReading(self):
print("1")
self.ui.stopButton.isEnabled(False)
def stopReading(self):
print("2")
self.ui.readButton.isEnabled(True)
if __name__ == "__main__":
app = QtGui.QApplication(sys.argv)
myapp = StartQT4()
myapp.show()
sys.exit(app.exec_())
After a couple of tries this code somewhat died and now it returns:
> line 13, in **init** self.ui.button_save.isEnabled(True) TypeError:
> QWidget.isEnabled(): too many arguments
Can't actually figure out what happened. The funny thing is that other similar
codes, which worked normally before, now stopped working with the same error.
Answer: Use `setEnabled(True)` instead of `isEnabled(True)`. `isEnabled()` is the
getter: it takes no arguments and just returns the current state, while
`setEnabled(bool)` is the setter that changes it.
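Applied to the code above, the two slots become (a sketch):
    def startReading(self):
        print("1")
        self.ui.stopButton.setEnabled(False)

    def stopReading(self):
        print("2")
        self.ui.readButton.setEnabled(True)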
|
Python client server dilemma
Question: I'm trying to implement a project, where the python code will be written on
web-browser and then executed in a remote server. The arch is Javascript -->
Java --> python
The python code will be sent to java using web sockets, which is connected to
a python server using a TCP/IP socket. The script needs to be read line by
line from the socket using readLine and executed. It would be great if someone
can tell me how to run python commands within a python script. Is there a
better way to do it, like for example, save it as a file and run the the
entire script and send the output back to Java?
For example, I want to execute the following from the socket as I read it
using readLine...
import pylab as pl
import numpy as np
y = np.random.randn(100)
pl.plot(y)
pl.savefig('foo.png', bbox_inches='tight')
I have written the TCP/IP socket which gets the data from the java client
Any help here would be appreciated.
Answer: just put it in a variable as a string and use exec:
tmp_str = '''
print "Hello World!"
print "another hello world!"
'''
exec tmp_str
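If you also need to capture what the script prints so you can send it back to Java over the socket, you can temporarily redirect `sys.stdout` while executing. A minimal sketch (Python 2, matching the `exec` statement above):
    import sys
    from StringIO import StringIO

    def run_script(code):
        old_stdout = sys.stdout
        sys.stdout = buf = StringIO()
        try:
            exec code in {}   # an empty dict keeps the script's names isolated
        finally:
            sys.stdout = old_stdout
        return buf.getvalue()  # everything the script printed

    print run_script('print "Hello World!"')
Note that `exec` runs arbitrary code with the full privileges of your server process, so only do this if you trust the clients.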
|
Python: AttributeError: 'str' object has no attribute 'datetime'
Question: I am using this code:
def calcDateDifferenceInMinutes(end_date,start_date):
fmt = '%Y-%m-%d %H:%M:%S'
start_date_dt = datetime.strptime(start_date, fmt)
end_date_dt = datetime.strptime(end_date, fmt)
# convert to unix timestamp
start_date_ts = time.mktime(start_date_dt.timetuple())
end_date_ts = time.mktime(end_date_dt.timetuple())
# they are now in seconds, subtract and then divide by 60 to get minutes.
return (int(end_date_ts-start_date_ts) / 60)
from this question: [stackoverflow
question](http://stackoverflow.com/questions/2788871/python-date-difference-
in-minutes/6879077#6879077)
But I'm getting this message:
> AttributeError: 'str' object has no attribute 'datetime'
I've reviewed similar questions but don't see any alternatives other than to
do something like:
start_date_dt = datetime.datetime.strptime(start_date, fmt)
Here's the full trace:
> Traceback (most recent call last): File "tabbed_all_cols.py", line
> 156, in <module>
> trip_calculated_duration = calcDateDifferenceInMinutes (end_datetime,start_datetime) File "tabbed_all_cols.py", line 41, in
> calcDateDifferenceInMinutes
> start_date_dt = datetime.datetime.strptime(start_date, fmt) AttributeError: 'str' object has no attribute 'datetime'
And line 41 is:
>
> start_date_dt = datetime.datetime.strptime(start_date, fmt)
>
Can someone shed light on what I'm missing?
**New Update** : I'm still trying to figure this out. I see that version is
important. I am using version 2.7 and am importing datetime.
I don't think I am setting the string date back to a string, which is what I
think people are suggesting below.
Thanks
Answer: When you get an error like `<str> object has no attribute X`, that means that
somewhere you are doing something like `some_object.X`. It also means that
`some_object` is a string. Since it doesn't have the attribute, it typically
means you are assuming that `some_object` is something else.
The full error message will tell you what line is causing the problem. In your
case, it is this:
start_date_dt = datetime.datetime.strptime(start_date, fmt)
AttributeError: 'str' object has no attribute 'datetime'
The only object here that is accessing `datetime` is the first `datetime`.
That means that the first `datetime` is a string, and you're assuming it
represents a module.
If you were to print out `datetime` (eg: `print("datetime is:", datetime)`)
I'm sure you would see a string.
That means that somewhere else in your code you are overwriting `datetime` by
setting it to a string (eg: `datetime = "some string"`)
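You can reproduce the error in a couple of lines (a contrived example):
    import datetime

    datetime = "2016-03-10"  # oops: the name 'datetime' is now a string,
                             # shadowing the module

    datetime.datetime.strptime("2016-03-10 12:00:00", "%Y-%m-%d %H:%M:%S")
    # AttributeError: 'str' object has no attribute 'datetime'
Search your script for an assignment like that (or a loop variable, or a `from ... import` that rebinds the name) and rename it.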
|
How to structure the Logging in this case (python)
Question: I was wondering what would be the best way for me to structure my logs in a
special situation.
I have a series of python services that use the same python files for
communicating (ex. com.py) with the HW. I have logging implemented in this
modules and i would like for it to be dependent(associated) with the main
service that is calling the modules.
How should i structure the logger logic so that if i have:
* main_service_1->module_for_comunication
The logging goes to file **main_serv_1.log**
* main_service_2->module_for_comunication
The logging goes to file **main_serv_2.log**
What would be the best practice in this case without harcoding anything?
Is there a way to know which file is importing com.py, so that inside com.py
I can use this information to adapt the logging to the caller?
Answer: In my experience, for a situation like this, the cleanest and easiest to
implement strategy is to **pass the logger to the code that does the
logging**.
So, create a logger for each service you want to have log to a different file,
and pass that logger in to the code from your communications module. You can
use `__name__` to get the name of the current module (the actual module name,
without the `.py` extension).
In the example below I implemented a fallback for the case when no logger is
passed in as well.
**`com.py`**
from log import setup_logger
class Communicator(object):
def __init__(self, logger=None):
if logger is None:
logger = setup_logger(__name__)
self.log = logger
def send(self, data):
self.log.info('Sending %s bytes of data' % len(data))
* * *
**`svc_foo.py`**
from com import Communicator
from log import setup_logger
logger = setup_logger(__name__)
def foo():
c = Communicator(logger)
c.send('foo')
* * *
**`svc_bar.py`**
from com import Communicator
from log import setup_logger
logger = setup_logger(__name__)
def bar():
c = Communicator(logger)
c.send('bar')
* * *
**`log.py`**
from logging import FileHandler
import logging
def setup_logger(name):
logger = logging.getLogger(name)
handler = FileHandler('%s.log' % name)
logger.addHandler(handler)
return logger
* * *
**`main.py`**
from svc_bar import bar
from svc_foo import foo
import logging
# Add a StreamHandler for the root logger, so we get some console output in
# addition to file logging (for ease of testing). Also set the root
# logger's level to INFO so our messages don't get filtered.
logging.basicConfig(level=logging.INFO)
foo()
bar()
* * *
So, when you execute `python main.py`, this is what you'll get:
On the console:
INFO:svc_foo:Sending 3 bytes of data
INFO:svc_bar:Sending 3 bytes of data
And `svc_foo.log` and `svc_bar.log` each will have one line
Sending 3 bytes of data
If a client of the `Communicator` class uses it without passing in a logger,
the log output will end up in `com.log` (fallback).
|
Cross Validation for Logistic Regression
Question: I am wondering how to use cross validation in python to improve the accuracy
of my logistic regression model. The dataset being used is called 'iris'. I
have already successfully used cross validation for a SVM model but I am
struggling to adjust my code to do the same for the logistic regression model.
Here's my code so far:
from sklearn import cross_validation
from sklearn import datasets, linear_model, svm
import numpy as np
iris = datasets.load_iris()
x_iris = iris.data
y_iris = iris.target
svc = svm.SVC(C=1, kernel='linear')
k_fold = cross_validation.StratifiedKFold(y_iris, n_folds=10)
# labels, the number of folders
#for train, test in k_fold:
# print train, test
scores = cross_validation.cross_val_score(svc, x_iris, y_iris, cv=k_fold, scoring='accuracy')
# clf.fit() is repeatedly called inside the cross_validation.cross_val_score()
print scores
print 'average score = ', np.mean(scores)
print 'std of scores = ', np.std(scores)
What adjustments must I make to the code to achieve successful cross
validation for my logistic regression model?
Thanks for any help.
Answer:
from sklearn.linear_model import LogisticRegression

lg = LogisticRegression()
scores = cross_validation.cross_val_score(lg, x_iris, y_iris, cv=k_fold,scoring='accuracy')
print scores
print 'average score = ', np.mean(scores)
print 'std of scores = ', np.std(scores)
Creating the `LogisticRegression` with default values classifier works fine
for me. The output is slightly lower than the `SVM` machine approach,
`0.953333333333` vs. `0.973333333333`.
But for **parameter adjustment** you can always use `GridSearchCV` which
automatically performs a cross-validation of `cv` folds (in the next example
I'll use `10` as you did before) trying all possible combinations of
parameters. Example:
from sklearn import grid_search
parameters = {
'penalty':['l2'],
'C':[1,10,100],
'solver': ['newton-cg', 'lbfgs', 'liblinear', 'sag'],
}
GS = grid_search.GridSearchCV(lg, parameters,cv=10,verbose=10)
GS.fit(x_iris,y_iris)
print GS.best_params_ # output: {'penalty': 'l2', 'C': 100, 'solver': 'liblinear'}
print GS.best_score_ # output: 0.98
By doing this, creating your classifier with best params
`LogisticRegression(penalty='l2',C=100,solver='liblinear')` will give you a
`0.98` accuracy.
> **Gentle warning** : when performing cross validation you'd better save a
> portion of your data for testing purposes that has not been included in the
> learning process. Otherwise, one way or another your learning algorithm has
> seen all data and you could easily fall into overfitting.
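Following up on that warning, here is a minimal sketch of holding out a test set first, using the same `cross_validation` module already imported above:
    x_train, x_test, y_train, y_test = cross_validation.train_test_split(
        x_iris, y_iris, test_size=0.2, random_state=0)

    # fit/tune (cross_val_score, GridSearchCV, ...) on x_train/y_train only,
    # then report the final score on the untouched hold-out set:
    GS.fit(x_train, y_train)
    print GS.score(x_test, y_test)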
|
How to import python file located in same subdirectory in a pycharm project
Question: I have an input error in pycharm when debugging and running.
My project structure is rooted properly, `etc./HW3/.` so that `HW3` is the
root directory.
I have a subfolder in HW3, `util`, and a file, `util/util.py`. I have another
file in `util` called `run_tests.py`.
In `run_tests.py`, I have the following import structure,
from util.util import my_functions, etc.
This yields an input error, `from util.util import
load_dataset,proportionate_sample ImportError: No module named 'util.util';
'util' is not a package`
* * *
However, in the exact same project, in another directory (same level as
`util`) called `data`, I have a file `data/data_prep.py`, which also imports
functions from `util/util.py` using a similar import statement...and it runs
without any problems.
* * *
Obviously, I am doing this as a homework, so I'm not that experienced...but
I've done this exact configuration for the last 3 homeworks and ran into zero
problems, so I have no idea how to even troubleshoot this problem--especially
when the other file works.
* * *
The problem goes away when I move the file to another directory. So I guess
this question is **How do I import a python file located in the same directory
in a pycharm project?** Because pycharm raises an error if I just do `import
util` and prompts me to use the full name from the root.
Answer: If you don't have an `__init__.py` inside the `util` folder, create one. The
`__init__.py` tells Python that it should treat that folder as a package, so
that `from util.util import my_functions` can resolve. It can also be used to
run initialization code or re-export names; for example, adding
    from util.util import my_function
to `util/__init__.py` lets you write `from util import my_function` directly.
In most cases, though, the `__init__.py` is empty.
Quoting the docs
> The `__init__.py` files are required to make Python treat the directories as
> containing packages; this is done to prevent directories with a common name,
> such as string, from unintentionally hiding valid modules that occur later
> on the module search path. In the simplest case, `__init__.py` can just be an
> empty file, but it can also execute initialization code for the package or
> set the `__all__` variable, described later.
|
Error running basic tensorflow example
Question: I have just reinstalled latest tensorflow on ubuntu:
$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.7.1-cp27-none-linux_x86_64.whl
[sudo] password for ubuntu:
The directory '/home/ubuntu/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/ubuntu/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting tensorflow==0.7.1 from https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.7.1-cp27-none-linux_x86_64.whl
Downloading https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.7.1-cp27-none-linux_x86_64.whl (13.8MB)
100% |████████████████████████████████| 13.8MB 32kB/s
Requirement already up-to-date: six>=1.10.0 in /usr/local/lib/python2.7/dist-packages (from tensorflow==0.7.1)
Requirement already up-to-date: protobuf==3.0.0b2 in /usr/local/lib/python2.7/dist-packages (from tensorflow==0.7.1)
Requirement already up-to-date: wheel in /usr/local/lib/python2.7/dist-packages (from tensorflow==0.7.1)
Requirement already up-to-date: numpy>=1.8.2 in /usr/local/lib/python2.7/dist-packages (from tensorflow==0.7.1)
Requirement already up-to-date: setuptools in /usr/local/lib/python2.7/dist-packages (from protobuf==3.0.0b2->tensorflow==0.7.1)
Installing collected packages: tensorflow
Found existing installation: tensorflow 0.7.1
Uninstalling tensorflow-0.7.1:
Successfully uninstalled tensorflow-0.7.1
Successfully installed tensorflow-0.7.1
When following the directions to test it fails with **cannot import name
pywrap_tensorflow** :
$ ipython
/git/tensorflow/tensorflow/__init__.py in <module>()
21 from __future__ import print_function
22
---> 23 from tensorflow.python import *
/git/tensorflow/tensorflow/python/__init__.py in <module>()
43 _default_dlopen_flags = sys.getdlopenflags()
44 sys.setdlopenflags(_default_dlopen_flags | ctypes.RTLD_GLOBAL)
---> 45 from tensorflow.python import pywrap_tensorflow
46 sys.setdlopenflags(_default_dlopen_flags)
47
ImportError: cannot import name pywrap_tensorflow
Is there an additional change needed to my python or ubuntu/bash environment?
Answer: From the path in your stack trace (`/git/tensorflow/tensorflow/…`), it looks
like your Python path may be loading the tensorflow libraries from the source
directory, rather than the version that you have installed. As a result, it is
unable to find the (compiled) `pywrap_tensorflow` library, which is installed
in a different directory.
A common solution is to `cd` out of the `/git/tensorflow` directory before
starting `python` or `ipython`.
|
Rotating an image using cImage
Question: I'm trying to rotate an image in Python by 90 degrees using cImage (and only
cImage) and here's what I've got so far.
def rotateImage90CW(imageFile):
myWin = ImageWin("90D", 350, 630)
oldIm = FileImage(imageFile)
newIm = EmptyImage(350, 630)
bigList = []
for row in range(oldIm.getHeight()):
bigList.append([]);
for column in range(oldIm.getWidth()):
x = oldIm.getPixel(column, row)
bigList[row].append(x)
bigList.reverse()
for row in range(newIm.getHeight()):
for column in range(newIm.getWidth()):
newIm.setPixel(column, row, bigList[column][row])
newIm.draw(myWin)
Unfortunately, it only gives me a fully black image when I load the function.
Can someone tell me what's wrong with my code? :[ Thanks.
Answer: I used `map` and `zip` to basically transpose your list:
from cImage import ImageWin, FileImage, EmptyImage
def rotate90D(img):
    '''
    Rotate +90deg: transpose, then reverse the row order
    '''
    assert img != None, 'Image is empty!'
    return list(map(list, zip(*img)))[::-1]
def rotate90D_(img):
    '''
    Rotate -90deg: reverse the row order, then transpose
    '''
    assert img != None, 'Image is empty!'
    return list(map(list, zip(*img[::-1])))
def unshared_copy(inList):
'''
Create a copy of a lists of lists
'''
# I use this because of map(list, zip(*img))
if isinstance(inList, list):
return list( map(unshared_copy, inList) )
return inList
# Load image
oldIm = FileImage('/home/USER/Pictures/google doodles/google-doodle-90th-anniversary-of-the-first-demonstration-of-television-6281357497991168.2-hp.jpg')
# Create two empty image (one for +90deg and another for -90deg
newIm90D = EmptyImage(oldIm.height, oldIm.width)
newIm90D_ = EmptyImage(oldIm.height, oldIm.width)
# Create windows for displaying all the images
myWin0D = ImageWin('0Deg', oldIm.width, oldIm.height)
myWin90D = ImageWin('+90Deg', newIm90D.width, newIm90D.height)
myWin90D_ = ImageWin('-90Deg', newIm90D_.width, newIm90D_.height)
# Generate a list of lists from the loaded image
img_to_matrix = []
for row in range(oldIm.getHeight()):
t = [];
for column in range(oldIm.getWidth()):
x = oldIm.getPixel(column, row)
t.append(x)
img_to_matrix.append(t)
# Create a copy of the list of lists so that we can demonstrate rotation in both directions
img_to_matrix2 = unshared_copy(img_to_matrix)
# Rotate +90deg (the functions return the rotated matrix, so assign the result)
img_to_matrix = rotate90D(img_to_matrix)
# Rotate -90deg
img_to_matrix2 = rotate90D_(img_to_matrix2)
# Load the pixel data in the respective images
for row in range(newIm90D.getHeight()):
    for col in range(newIm90D.getWidth()):
        newIm90D.setPixel(col, row, img_to_matrix[row][col])
for row in range(newIm90D_.getHeight()):
    for col in range(newIm90D_.getWidth()):
        newIm90D_.setPixel(col, row, img_to_matrix2[row][col])
# Display the images
oldIm.draw(myWin0D)
newIm90D.draw(myWin90D)
newIm90D_.draw(myWin90D_)
And here is what I get:
[](http://i.stack.imgur.com/BjcYX.png)
Since you named your list of lists `bigList`, I assume you may want to load a
huge image, so you should consider modifying my sample code, since (as
mentioned in the code comments) I do some copying.
**PS:** Frankly, I personally wouldn't bother using an image "library" that
can't do basic transformations. I looked inside `cImage.py` and it seems that
it uses `PIL`'s `Image` as a basis. `PIL` itself does offer rotation (and
other such basic things) out of the box if I recall correctly (I'm more of an
OpenCV guy) though I don't know if the `cImage` custom image format is
compatible with `PIL`'s image processing tools.
|
How to find every walk in a numpy array
Question: I'm trying to find every single "walk" of length n through an array. A walk in
this case is defined as a sequence of length n of adjacent elements
(horizontal, diagonal, or vertical) in the array such that no point is repeated.
For example, a 2x2 matrix
[1 2]
[4 8]
would have walks of length 2: (1, 2), (1, 4), (1, 8), (2, 1), (2, 4), (2, 8)
...
walks of length 3: (1, 2, 4), (1, 2, 8), (1, 4, 2), (1, 4, 8) ... and so on
How could I implement a fast implementation of such an algorithm for small
(5x5) matrices in python/numpy, possibly using some aspect of maths that I
don't know currently?
Current slow implementation:
from copy import deepcopy
def get_walks(arr, n):
n = n-1
dim_y = len(arr)
dim_x = len(arr[0])
# Begin with every possibly starting location
walks = [[(y, x)] for y in range(dim_y) for x in range(dim_x)]
# Every possible direction to go in
directions = [(0,1), (1,1), (1,0), (1, -1), (0, -1), (-1,-1), (-1, 0), (-1, 1)]
temp_walks = []
for i in range(n):
# Go through every single current walk and add every
# possible next move to it, making sure to not repeat any points
#
# Do this n times
for direction in directions:
for walk in walks:
y, x = walk[-1]
y, x = y+direction[0], x+direction[1]
if -1 < y < dim_y and -1 < x < dim_x and (y, x) not in walk:
temp_walks.append(walk + [(y, x)])
# Overwrite current main walks list with the temporary one and start anew
walks = deepcopy(temp_walks)
temp_walks = []
return walks
Answer: I've come up with a recursive solution. Since you want to treat only small
problems, this approach can be feasible. I don't have numpy installed for
python 3, so this is only guaranteed to work for python 2 as-is (but it should
be fairly compatible). Also, I'm pretty sure my implementation is far from
optimal.
When checking my output against yours, it occurred to me that I get 200 paths
for a 3x3 case, while you get 160. Looking at the paths, I think your code has
a bug, and you are the one missing paths (not me having additional
ones). Here's my version:
import numpy as np
import timeit
def get_walks_rec(shape,inpath,ij,n):
# add n more steps to mypath, with dimensions shape
# procedure: call shorter walks for allowed neighbouring sites
mypath = inpath[:]
mypath.append(ij)
# return if this is the last point
if n==0:
return mypath
i0 = ij[0]
j0 = ij[1]
neighbs = [(i,j) for i in (i0-1,i0,i0+1) for j in (j0-1,j0,j0+1) if 0<=i<shape[0] and 0<=j<shape[1] and (i,j)!=(i0,j0)]
subpaths = [get_walks_rec(shape,mypath,neighb,n-1) for neighb in neighbs]
# flatten out the sublists for higher levels
if n>1:
flatpaths = []
map(flatpaths.extend,subpaths)
else:
flatpaths = subpaths
return flatpaths
# front-end for recursive function, called only once
def get_walks_rec_caller(mat,n):
# collect all the paths starting from each point of the matrix
sh = mat.shape
imat,jmat = np.meshgrid(np.arange(sh[0]),np.arange(sh[1]))
tmppaths = [get_walks_rec(sh,[],ij,n-1) for ij in zip(imat.ravel(),jmat.ravel())]
# flatten the list of lists of paths to a single list of paths
allpaths = []
map(allpaths.extend,tmppaths)
return allpaths
# input
mat = np.random.rand(3,3)
nmax = 3
# original:
walks_old = get_walks(mat,nmax)
# new recursive:
walks_new = get_walks_rec_caller(mat,nmax)
# timing:
number = 1000
print(timeit.timeit('get_walks(mat,nmax)','from __main__ import get_walks,mat,nmax',number=number))
print(timeit.timeit('get_walks_rec_caller(mat,nmax)','from __main__ import get_walks_rec_caller,mat,nmax',number=number))
For this 3x3 case with a max path length of 3, 1000 runs with `timeit` gives
me 1.81 seconds with yours vs 0.53 seconds with mine (and you're missing 20%
of your paths). For a 4x4 case with max length of 4, 100 runs give 2.1 seconds
(yours) vs 0.67 seconds (mine).
An example path, which is present in mine but seems to be missing from yours:
[(0, 0), (0, 1), (0, 0)]
|
Attribute Error: at /auth/login/facebook/ Exception Value: operators
Question: Am Using the following configuration with django
**cassandra-driver (3.1.0)**
**Django (1.9.4)**
**django-cassandra-engine (0.7.0)**
**django-oauth-toolkit (0.10.0)**
**django-rest-framework-social-oauth2 (1.0.4)**
**djangorestframework (3.3.2)**
**oauthlib (1.0.3)**
**python-social-auth (0.2.14)**
**Python 2.7.9**
**My site settings.py**
INSTALLED_APPS = [
'django_cassandra_engine',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'oauth2_provider',
'userlogin',
'social.apps.django_app.default',
'rest_framework_social_oauth2'
]
DATABASES = {
'default': {
'ENGINE': 'django_cassandra_engine',
'NAME': 'sample',
'TEST_NAME' : 'test_sample',
'HOST': 'localhost'
}
}
AUTHENTICATION_BACKENDS = (
'social.backends.facebook.FacebookOAuth2',
'social.backends.facebook.FacebookOAuth2',
'social.backends.google.GoogleOAuth2',
'social.backends.twitter.TwitterOAuth',
'django.contrib.auth.backends.ModelBackend',
)
LOGIN_REDIRECT_URL = '/'
**Home.html**
{% extends 'base.html' %} {% block main %}
<div>
<h1>Third-party authentication demo</h1>
<p>
<ul>
{% if user and not user.is_anonymous %}
<li>
<a>Hello {{ user.get_full_name|default:user.username }}!</a>
</li>
<li>
<a href="{% url 'auth:logout' %}?next={{ request.path }}">Logout</a>
</li>
{% else %}
<li>
<a href="{% url 'social:begin' 'facebook' %}?next={{ request.path }}">Login with Facebook</a>
</li>
<li>
<a href="{% url 'social:begin' 'google-oauth2' %}?next={{ request.path }}">Login with Google</a>
</li>
<li>
<a href="{% url 'social:begin' 'twitter' %}?next={{ request.path }}">Login with Twitter</a>
</li>
{% endif %}
</ul>
</p>
</div>
{% endblock %}
**Views.py**
from django.shortcuts import render
from django.shortcuts import render_to_response
from django.template.context import RequestContext
def home(request):
context = RequestContext(request,
{'request': request,
'user': request.user})
return render_to_response('home.html',
context_instance=context)
**URLS.py**
urlpatterns = patterns('',
url(r'^$', 'userlogin.views.home', name='home'),
url(r'^admin/', include(admin.site.urls)),
url(r'^auth/', include('rest_framework_social_oauth2.urls')),
url('', include('social.apps.django_app.urls', namespace='social')),
url('', include('django.contrib.auth.urls', namespace='auth')),
)
When I access the facebook authetication, i received the following error.
Traceback:
File "/usr/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
235. response = middleware_method(request, response)
File "/usr/local/lib/python2.7/site-packages/django/contrib/sessions/middleware.py" in process_response
50. request.session.save()
File "/usr/local/lib/python2.7/site-packages/django/contrib/sessions/backends/db.py" in save
80. return self.create()
File "/usr/local/lib/python2.7/site-packages/django/contrib/sessions/backends/db.py" in create
49. self._session_key = self._get_new_session_key()
File "/usr/local/lib/python2.7/site-packages/django/contrib/sessions/backends/base.py" in _get_new_session_key
155. if not self.exists(session_key):
File "/usr/local/lib/python2.7/site-packages/django/contrib/sessions/backends/db.py" in exists
45. return self.model.objects.filter(session_key=session_key).exists()
File "/usr/local/lib/python2.7/site-packages/django/db/models/query.py" in exists
651. return self.query.has_results(using=self.db)
File "/usr/local/lib/python2.7/site-packages/django/db/models/sql/query.py" in has_results
501. return compiler.has_results()
File "/usr/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py" in has_results
819. return bool(self.execute_sql(SINGLE))
File "/usr/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py" in execute_sql
837. sql, params = self.as_sql()
File "/usr/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py" in as_sql
389. where, w_params = self.compile(self.where) if self.where is not None else ("", [])
File "/usr/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py" in compile
366. sql, params = node.as_sql(self, self.connection)
File "/usr/local/lib/python2.7/site-packages/django/db/models/sql/where.py" in as_sql
79. sql, params = compiler.compile(child)
File "/usr/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py" in compile
366. sql, params = node.as_sql(self, self.connection)
File "/usr/local/lib/python2.7/site-packages/django/db/models/lookups.py" in as_sql
160. rhs_sql = self.get_rhs_op(connection, rhs_sql)
File "/usr/local/lib/python2.7/site-packages/django/db/models/lookups.py" in get_rhs_op
164. return connection.operators[self.lookup_name] % rhs
File "/usr/local/lib/python2.7/site-packages/django_cassandra_engine/base/__init__.py" in __getattr__
103. raise AttributeError(attr)
Exception Type: AttributeError at /auth/login/facebook/
Exception Value: operators
Can somebody please tell me what I am doing wrong here?
Answer: You need to set `django_cassandra_engine` as secondary database backend:
from cassandra import ConsistencyLevel
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
},
'cassandra': {
'ENGINE': 'django_cassandra_engine',
'NAME': 'db',
'USER': 'user',
'PASSWORD': 'pass',
'TEST_NAME': 'test_db',
'HOST': '127.0.0.1',
'OPTIONS': {
'replication': {
'strategy_class': 'SimpleStrategy',
'replication_factor': 1
},
'connection': {
'consistency': ConsistencyLevel.LOCAL_ONE,
'retry_connect': True
},
'session': {
'default_timeout': 10,
'default_fetch_size': 10000
# + All options for cassandra.cluster.Session()
}
}
}
}
If you have plans for using f.ex. `django.contrib.auth` or
`django.contrib.admin`, then `django_cassandra_engine` has to be your
**secondary** database backend (not _default_ one).
Further instructions: <https://r4fek.github.io/django-cassandra-
engine/guide/advanced_usage/#cassandra-as-secondary-database>
|
Python function to read JSON file and retrieve the correct value
Question: I'm reading a JSON file to retrieve some values with my extract_json function
and calling it by `time_minutes_coords = extract_json("boxes", "time_minutes",
"coord")` which gives me the right path to my coord value.
def extract_json(one,two,three):
with open('document.json') as data_file:
data = json.load(data_file)
return data[one][two][three]
But it just works for 3 arguments. What if I would like to use this function
for any number of arguments passed? I would like to have something like:
def extract_json(*args):
with open('document.json') as data_file:
data = json.load(data_file)
return data[args]
but all the args are displayed in this way:
> (args1, args2, args3, args4)
and `data(args1, args2, args3, args4)` returns nothing. How can I have
something like:
> data[args1][args2][args3][args4]
for moving to the correct value in the json file?
Answer: You can solve it with _JSONPath_ via the [`jsonpath-rw`
module](https://pypi.python.org/pypi/jsonpath-rw). Working sample:
from jsonpath_rw import parse
obj = {
"glossary": {
"title": "example glossary",
"GlossDiv": {
"title": "S",
"GlossList": {
"GlossEntry": {
"ID": "SGML",
"SortAs": "SGML",
"GlossTerm": "Standard Generalized Markup Language",
"Acronym": "SGML",
"Abbrev": "ISO 8879:1986",
"GlossDef": {
"para": "A meta-markup language, used to create markup languages such as DocBook.",
"GlossSeeAlso": ["GML", "XML"]
},
"GlossSee": "markup"
}
}
}
}
}
keys = ["glossary", "GlossDiv", "GlossList", "GlossEntry", "GlossDef", "para"]
jsonpath_expr = parse(".".join(keys))
print(jsonpath_expr.find(obj)[0].value)
Prints:
A meta-markup language, used to create markup languages such as DocBook.
Here the keys are coming in a form of a list (in your case it is `args`).
Then, the keys are joined with a `dot` to construct a path to the desired
node.
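For completeness, if you'd rather not add a dependency, the same traversal can be done with a plain loop over the keys, which directly produces the `data[args1][args2]...` behavior you asked for:
    import json

    def extract_json(*args):
        with open('document.json') as data_file:
            data = json.load(data_file)
        for key in args:
            data = data[key]  # descend one level per argument
        return data

    # usage
    time_minutes_coords = extract_json("boxes", "time_minutes", "coord")
One advantage of this version is that it also works when a key contains a dot, which would break the `".".join(keys)` path construction above.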
|
Unable to import python library urllib
Question:
sudo apt-get install python-urllib
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package python-urllib
Why does this happen? Please help out.
Answer: Consult [this page](http://askubuntu.com/questions/378558/unable-to-locate-
package-while-trying-to-install-packages-by-apt)
The package either does not exist or is distributed through a different software
source from the one you are using with `apt-get`. In this case it simply does not
exist as an apt package: `urllib` is part of the Python standard library, so there
is nothing to install; just `import urllib` in your code.
|
Python: infos about the implementation of a Python function
Question: I'm discovering the CPython implementation, the structure of Python objects
and the Python bytecodes.
Playing with functions, I've found out that empty functions have a stack size
of 1.
Why? **What var is declared to occupy the stack space?**
**Empty function:**
def empty():
pass
**Function infos:**
>>> dis.show_code(empty)
Name: empty
Filename: <pyshell#27>
Argument count: 0
Kw-only arguments: 0
Stack size: 1
Number of locals: 0
Variable names:
Constants:
0: None
Names:
Flags: OPTIMIZED, NEWLOCALS, NOFREE
First line number: 1
Free variables:
Cell variables:
**Function with locals:**
def withlocals():
first = 0
second = [1, 2, 3]
**Function infos:**
>>> dis.show_code(withlocals)
Name: withlocals
Filename: <pyshell#27>
Argument count: 0
Kw-only arguments: 0
Stack size: 3
Number of locals: 2
Variable names:
0: first
1: second
Constants:
0: None
1: 0
2: 1
3: 2
4: 3
Names:
Flags: OPTIMIZED, NEWLOCALS, NOFREE
First line number: 1
Free variables:
Cell variables:
Answer: The `stack_size` is the _upper bound_ of the stack usage by the interpreter
opcodes. However, the analysis has some
[bugs](https://bugs.python.org/issue26204) and another, larger one at the end
of this post, so the bound is not tight.
>>> def empty():
... pass
...
>>> import dis
>>> dis.dis(empty)
2 0 LOAD_CONST 0 (None)
3 RETURN_VALUE
An empty function returns `None`. It requires 1 item of stack to load the
reference to `None` on top of stack; `RETURN_VALUE` returns the value that is
stored on top of stack.
The local variables themselves are not included in this count, which is very
evident from
>>> def many_vars():
... a = 1
... b = 2
... c = 3
... d = 4
... e = 5
... f = 6
... g = 7
...
>>> many_vars.__code__.co_stacksize
1
* * *
In the case of
def withlocals():
first = 0
second = [1, 2, 3]
the stack must be large enough to build the list of 3. If you add elements to
the list, the stack grows by that amount. I've added the size of the stack at
each point to the dump:
>>> dis.dis(withlocals)
2 0 LOAD_CONST 1 (0) 1
3 STORE_FAST 0 (first) 0
3 6 LOAD_CONST 2 (1) 1
9 LOAD_CONST 3 (2) 2
12 LOAD_CONST 4 (3) 3
15 BUILD_LIST 3 1
18 STORE_FAST 1 (second) 0
21 LOAD_CONST 0 (None) 1
24 RETURN_VALUE 0
However the analysis seems to have bugs when it comes to tuple constants:
>>> def a_long_tuple():
... first = (0, 0, 0, 0, 0, 0, 0)
...
...
>>> dis.dis(a_long_tuple)
2 0 LOAD_CONST 2 ((0, 0, 0, 0, 0, 0, 0))
3 STORE_FAST 0 (first)
6 LOAD_CONST 0 (None)
9 RETURN_VALUE
>>> dis.show_code(a_long_tuple)
Name: a_long_tuple
Filename: <stdin>
Argument count: 0
Kw-only arguments: 0
Number of locals: 1
Stack size: 7
Flags: OPTIMIZED, NEWLOCALS, NOFREE
Constants:
0: None
1: 0
2: (0, 0, 0, 0, 0, 0, 0)
Variable names:
0: first
The code only has one tuple, that is a constant, yet the analysis claims it
requires stack space of 7, in both Python 2 and 3!
The reason for that is that the assembled code for building a constant tuple
is initially identical to building a list, except with `BUILD_TUPLE` opcode at
the end; but the [peephole optimizer
optimizes](https://github.com/python/cpython/blob/c448fb502e87bfa73c1bf0c2bd78f5176feaa52d/Python/compile.c#L4732)
that into `LOAD_CONST` from partial assembler output. However the
[`co_stacksize`](https://github.com/python/cpython/blob/c448fb502e87bfa73c1bf0c2bd78f5176feaa52d/Python/compile.c#L4745)
is calculated based on the original assembled code!
|
TypeError: 'list' object is not callable: why?
Question:
from difflib import get_close_matches
order_output = {
"initiate ion drive" : "Initiating the ion drive.",
"run" : "Running",
"eat" : "Eating",
"enter coords to" : "Settings coords to:"
}
orders_list = ["initiate ion drive", "eat", "run", "enter coords to"]
def ord_input():
order = input()
order1 = get_close_matches(order, orders_list)
order2 = ''.join(map(order1, order_output))  # <-- the problem line
if order:
print(order_output[order1])
else:
print("Don't know this order.")
ord_input()
This is the error:
Traceback (most recent call last):
File "C:/Python34/order.py", line 16, in <module>
ord_input()
File "C:/Python34/order.py", line 11, in ord_input
order2 = ''.join(map(order1, order_output))
TypeError: 'list' object is not callable
I want to convert that specific list string to a regular string.
How can I do that?
Answer: `difflib.get_close_matches` returns a list. The `map` builtin takes a
_function_ as its first argument, and applies it to each element of the
iterable that is the second argument. So you're trying to call a list as a
function, which is a `TypeError`. A list is not callable.
If the goal is to index the `order_output` dictionary, pick the best match out
of the returned list first (checking that it isn't empty), then use that string
as the key:
    matches = get_close_matches(order, orders_list)
    if matches:
        print(order_output[matches[0]])
    else:
        print("Don't know this order.")
|
Is array preferred over set or map?
Question: I recently interviewed with a company in the bay area (CA,USA). One of the
questions was to simply find if a string has repeated characters(I have
simplified a lengthy question).
>
> eg:
> input : "qwerrty"
> output : True
>
I used python to code this.
I gave a solution that uses a set to track the elements encountered during the
iteration.
However the interviewer wanted me to use an array[255] that tracks the
characters encountered.
Although I was quite comfortable using either of them, my opinion was to use a
set simply because we are wasting 255 character space when we use an array.
This is because (as we all know) initially we create an arr[255] = 0 all
elements being zero and then increment the ASCII equivalent index value by 1.
A set on the other hand would spend memory only on the elements visited.
Since he (kind of) argued to use an array over a set I am curious to know if
he was technically correct. Is array preferred over a set/map in this case? If
so, why?
Answer: One thing to notice about this question is that if there are only C possible
distinct characters that can be in the string, then for any string you get of
length C+1 or greater you can automatically return that a duplicate exists
without even looking at the string because there are too many characters for
them to all be unique (this is the pigeonhole principle at work). This is
important for thinking about the structure of this particular problem.
Next, notice that you don't even need a bunch of counters. You can just get
away with one bit per character, since you just need to know whether you've
never seen a character (0) or seen it before (1) when you are iterating across
the array. That means that you need one bit per character. If your word size
is W, this means you need roughly C / W total machine words of storage space
for the array-based solution.
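As an illustration, here is a minimal sketch of the one-bit-per-character idea in Python, using a single arbitrary-precision integer as the bit array:
    def has_duplicate(s):
        seen = 0  # bit i is set once we've seen the character with code i
        for ch in s:
            bit = 1 << ord(ch)
            if seen & bit:
                return True
            seen |= bit
        return False

    print(has_duplicate("qwerrty"))  # True
    print(has_duplicate("qwerty"))   # False
(In a language with fixed-size words you'd use an array of roughly C / W words instead of one big integer.)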
Let's imagine that you're working with C = 256 (say, for example, each
character is a one-byte value) on a machine with a 32-bit word size (W = 32).
This means that you need eight machine words to store the bit array, which is
a negligible amount of storage space and can easily be initialized to 0. Now,
think about your set implementation. If you use a hash table, there will be
some sort of internal array used to store everything. You also need space to
store information about the hash function, and usually you'd cache the size of
the set somewhere. That's going to eat up something like three machine words
just for the size and hash function info, which leaves you five words of
space. If the hash table is implemented generically and each entry uses up one
machine word, then your approach only saves space if you have a hash table of
four entries or less, which is unlikely to happen. If your hash table is
optimized and stores char values directly, then you can store up to five
words' worth of chars (20 chars) without any collisions, but if you tried to
keep the load factor low you'd probably resize the table after you saw 10 or
so chars. So in short, unless you have a _very_ short string, the hash table
approach probably will use _more_ memory, and the overhead of the hashing will
be high. The array approach is likely faster.
On the other hand, imagine that you're storing arbitrary Unicode characters in
the string. Now, C = 1,114,112 (thanks, Wikipedia), and even with a 64-bit
word size you're talking about needing an array of 17,408 machine words to
store one bit per possible character. That's a _lot_ of storage space and it's
going to take a while to initialize it. Now, if the strings you're getting as
input are "reasonable" and not pathologically-constructed, chances are you're
going to find a duplicate element pretty early on in the string (if the string
is totally random, then by the birthday paradox you'll only need √(2C)
characters before you'll get a duplicate, on average), so building a hash
table will likely require a lot less space. If the strings are pathologically
constructed so that every character is unique, though, the constant factor
overhead from the hash functions being computed, the hash table resizing, etc.
will likely mean that your approach will be slower than the array-based one,
but that's an unusual use case.
To summarize:
* If the number of possible characters is small (think ASCII), the array-based approach is likely going to be a lot faster and more memory-efficient.
* If the number of possible characters is large (think Unicode), the array-based approach is likely going to be slower and less memory-efficient on reasonable inputs, but for pathologically-chosen inputs may potentially be faster than the hash-based approach.
Now, that said, you could argue that unless the code is run in a tight loop,
anything other than "just use a set" makes the code hard to read for a minimal
benefit to the overall program efficiency. For that reason, a reasonable
answer would be "use the set unless there's a reason not to, and then switch
to the array-based one only if the data supports it."
|
multi thread issue in Python
Question: I am new to Python multi-threading and wrote this simple program. Here are my
code and the error message. Any ideas what is wrong? Thanks.
Using Python 2.7.
import time
import thread
def uploader(threadName):
while True:
time.sleep(5)
print threadName
if __name__ == "__main__":
numOfThreads = 5
try:
i = 0
while i < numOfThreads:
thread.start_new_thread(uploader, ('thread'+str(i)))
i += 1
print 'press any key to exit test'
n=raw_input()
except:
print "Error: unable to start thread"
Unhandled exception in thread started by <pydev_monkey._NewThreadStartupWithTrace instance at 0x10e12c830>
Traceback (most recent call last):
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydev_monkey.py", line 521, in __call__
return self.original_func(*self.args, **self.kwargs)
TypeError: uploader() takes exactly 1 argument (7 given)
thanks in advance, Lin
Answer: The args of `thread.start_new_thread` [need to be a
tuple](https://docs.python.org/2.7/library/thread.html#thread.start_new_thread).
Instead of this:
('thread' + str(i)) # results in a string
Try this for the args:
('thread' + str(i),) # a tuple with a single element
Incidentally, you should check out the [`threading`
module](https://docs.python.org/2.7/library/threading.html), which is a
higher-level interface than `thread`.
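For illustration, here is a minimal sketch of the same program using `threading`
(same Python 2 print syntax as in the question):

    import time
    import threading

    def uploader(threadName):
        while True:
            time.sleep(5)
            print threadName

    if __name__ == "__main__":
        threads = [threading.Thread(target=uploader, args=('thread' + str(i),))
                   for i in range(5)]
        for t in threads:
            t.daemon = True  # don't keep the process alive after the main thread exits
            t.start()
        raw_input('press any key to exit test\n')

Note that `args` must be a tuple here as well, hence the trailing comma.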
|
is it possible to use the discovery module from the Google apiclient in Cloud Datalab?
Question: I have a simple python script that does something like this:
from apiclient import discovery
from oauth2client.client import GoogleCredentials
ggSvc = discovery.build ( 'genomics', 'v1', credentials=credentials )
body = { "readGroupSetIds": [readGroupSetId],
"referenceName": args.chr,
"start": args.pos-2,
"end": args.pos+2,
"pageSize": 256 }
r = ggSvc.reads().search ( body=body ).execute()
is it possible to do this from Datalab or is my best option to use the
requests module and then construct and post the http request that way?
Answer: The following command will install the google api python client
`!pip install google-api-python-client`
You can also run commands using the `%%bash` cell magic option.
For example,
%%bash
pip install google-api-python-client
|
Invalid default value for user_id_id in django
Question: I am new to Django and I am creating my models, but I am having trouble
trying to add a foreign key to another model. Here are my models:
from django.db import models
class User(models.Model):
user_id = models.CharField(max_length=10, primary_key=True)
name = models.CharField(max_length=30)
surname = models.CharField(max_length=30)
role = models.CharField(max_length=10)
address = models.CharField(max_length=50)
email = models.EmailField(max_length=30)
password = models.CharField(max_length=20)
phone = models.IntegerField()
GPA = models.FloatField(max_length=5)
Gender = models.CharField(max_length=1)
def __str__(self):
return self.user_id
class Login(models.Model):
user_id = models.ForeignKey(User, on_delete=models.CASCADE, default='00000')
email = models.EmailField(max_length=30)
password = models.CharField(max_length=20)
def __str__(self):
return self.email
When I type makemigrations I get this:
You are trying to change the nullable field 'user_id' on login to non-nullable without a default; we can't do that (the database needs something to populate existing rows).
Please select a fix:
1) Provide a one-off default now (will be set on all existing rows)
2) Ignore for now, and let me handle existing rows with NULL myself (e.g. because you added a RunPython or RunSQL operation to handle NULL values in a previous data migration)
3) Quit, and let me add a default in models.py
Select an option: 3
So I added a default value, but I got the error below when I tried to
migrate. I tried changing user_id on User to an AutoField so I would not have to
add any default value, but it still gives me this error. Plus, I don't know
why it says user_id_id at the end. Can anyone help me out with this?
Running migrations:
Rendering model states... DONE
Applying login.0003_login_user_id...Traceback (most recent call last):
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\db\backends\utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\db\backends\mysql\base.py", line 112, in execute
return self.cursor.execute(query, args)
File "C:\User\AppData\Local\Programs\Python\Python35-32\lib\site-packages\MySQLdb\cursors.py", line 226, in execute
self.errorhandler(self, exc, value)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\MySQLdb\connections.py", line 42, in defaulterrorhandler
raise errorvalue
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\MySQLdb\cursors.py", line 223, in execute
res = self._query(query)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\MySQLdb\cursors.py", line 379, in _query
rowcount = self._do_query(q)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\MySQLdb\cursors.py", line 342, in _do_query
db.query(q)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\MySQLdb\connections.py", line 286, in query
_mysql.connection.query(self, query)
_mysql_exceptions.OperationalError: (1067, "Invalid default value for 'user_id_id'")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Program Files (x86)\JetBrains\PyCharm 5.0.4\helpers\pycharm\django_manage.py", line 41, in <module>
run_module(manage_file, None, '__main__', True)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\runpy.py", line 182, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:/Users/Desktop/Project/TSL/mysite\manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\core\management\__init__.py", line 353, in execute_from_command_line
utility.execute()
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\core\management\__init__.py", line 345, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\core\management\base.py", line 348, in run_from_argv
self.execute(*args, **cmd_options)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\core\management\base.py", line 399, in execute
output = self.handle(*args, **options)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\core\management\commands\migrate.py", line 200, in handle
executor.migrate(targets, plan, fake=fake, fake_initial=fake_initial)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\db\migrations\executor.py", line 92, in migrate
self._migrate_all_forwards(plan, full_plan, fake=fake, fake_initial=fake_initial)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\db\migrations\executor.py", line 121, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\db\migrations\executor.py", line 198, in apply_migration
state = migration.apply(state, schema_editor)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\db\migrations\migration.py", line 123, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\db\migrations\operations\fields.py", line 62, in database_forwards
field,
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\db\backends\mysql\schema.py", line 50, in add_field
super(DatabaseSchemaEditor, self).add_field(model, field)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\db\backends\base\schema.py", line 396, in add_field
self.execute(sql, params)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\db\backends\base\schema.py", line 110, in execute
cursor.execute(sql, params)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\db\backends\utils.py", line 79, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\db\backends\utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\db\utils.py", line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\utils\six.py", line 685, in reraise
raise value.with_traceback(tb)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\db\backends\utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\db\backends\mysql\base.py", line 112, in execute
return self.cursor.execute(query, args)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\MySQLdb\cursors.py", line 226, in execute
self.errorhandler(self, exc, value)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\MySQLdb\connections.py", line 42, in defaulterrorhandler
raise errorvalue
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\MySQLdb\cursors.py", line 223, in execute
res = self._query(query)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\MySQLdb\cursors.py", line 379, in _query
rowcount = self._do_query(q)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\MySQLdb\cursors.py", line 342, in _do_query
db.query(q)
File "C:\Users\AppData\Local\Programs\Python\Python35-32\lib\site-packages\MySQLdb\connections.py", line 286, in query
_mysql.connection.query(self, query)
django.db.utils.OperationalError: (1067, "Invalid default value for 'user_id_id'")
Process finished with exit code 1
Answer: First of all, as mentioned in the comments, do NOT name the `ForeignKey` field
`user_id`, but `user`. This will create a column called `user_id` in the db
table and your model instance's `user` attribute will return a `User` instance
while its auto-generated attribute `user_id` will return that user's id.
As for specifying a default value for a `ForeignKey`. If you do that on the
model, make sure you provide an **existing** User. If you choose to provide a
one-off default during `makemigrations` (if you select option 1), make sure to
provide the primary key of an **existing** User. Alternatively, make sure
there are no existing `Login` records in the db.
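Putting both points together, a sketch of the corrected model might look like this
(the default of '00000' assumes a User with that primary key actually exists):

    class Login(models.Model):
        user = models.ForeignKey(User, on_delete=models.CASCADE, default='00000')
        email = models.EmailField(max_length=30)
        password = models.CharField(max_length=20)

        def __str__(self):
            return self.email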
|
How to install GDAL/scipy using cmd in window?
Question: I downloaded `scipy-0.17.0-cp27-none-win_amd64.whl` and
`GDAL-1.11.4-cp27-none-win_amd64.whl` from
[gohlke](http://www.lfd.uci.edu/~gohlke/pythonlibs/#scipy) in
**C:\Python27\Scripts**
To install I used

    pip install scipy-0.17.0-cp27-none-win_amd64.whl
    pip install GDAL-1.11.4-cp27-none-win_amd64.whl
It says the installation is complete but when I import the libraries as
import scipy
import gdal
it shows error as
> No module named gdal
> No module named gdal
However, I installed `matplotlib` and `numpy` in the very same way and they
work absolutely fine.
Answer: I eventually solved this problem when I found the mistake in my approach. This
problem can occur for anyone using ArcGIS on their system.
ArcGIS comes with its own default Python package, and if one installs Python
separately, new libraries get installed in the newer Python
installation, not in ArcGIS's.
Therefore, the Python IDLE one uses needs to belong to the right Python
installation. In my case, ArcGIS has Python 2.6 and I had made a
separate installation of Python 2.7.11.
All the libraries were getting installed in the right place, but I was opening the
wrong IDLE to write scripts, hence the error.
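A quick way to check which installation a given IDLE or interpreter belongs to
is to inspect `sys.executable` and the package search path:

    import sys
    print(sys.executable)  # path of the interpreter that is actually running
    print(sys.path)        # where it looks for installed packages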
|
Sorting list of values in the returned dictionary in ascending order-Python
Question: The following functions return a reverse of the input dictionary, where the values
of the original dictionary are used as keys for the returned dictionary and
the keys of the original dictionary are used as values for the returned
dictionary:
def lower(d):
return dict((k.lower(), [item.lower() for item in v]) for k, v in d.iteritems())
def reverse_dictionary(input_dict):
D=lower(input_dict)
reverse_dict = {}
for key, value in D.iteritems():
if not isinstance(value, (list, tuple)):
value = [value]
for val in value:
reverse_dict[val] = reverse_dict.get(val, [])
reverse_dict[val].append(key)
for key, value in reverse_dict.iteritems():
if len(value) == 1:
reverse_dict[key] = value[0]
return reverse_dict
input_dict= {'astute': ['Smart', 'clever', 'talented'],
'Accurate': ['exact', 'precise'],
'exact': ['precise'], 'talented': ['smart', 'keen', 'Bright'],
'smart': ['clever', 'bright', 'talented']}
print(reverse_dictionary(input_dict))
But the lists of values in the returned dictionary are not sorted in ascending
order.
This function returns:
{'precise': ['accurate', 'exact'], 'clever': ['astute', 'smart'], 'talented': ['astute', 'smart'], 'keen': 'talented', 'bright': ['talented', 'smart'], 'exact': 'accurate', 'smart': ['astute', 'talented']}
The correct output is:
{'precise': ['accurate', 'exact'], 'clever': ['astute', 'smart'], 'talented': ['astute', 'smart'], 'keen': ['talented'], 'bright': ['smart', 'talented'], 'exact': ['accurate'], 'smart': ['astute', 'talented']}
Any help will be much appreciated.
Answer: `reverse_dict` is just a plain old dictionary, which does **not** retain the
order you add elements to it, rendering the entire method somewhat pointless.
Instead, if you wish to retain the order of insertion, you should use
`OrderedDict` from the `collections` module (`from collections import
OrderedDict`) when initializing this variable:

    reverse_dict = OrderedDict()
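Note that insertion order alone will not produce the sorted value lists shown in
the expected output. If the goal is sorted lists (kept as lists even when they
have a single element), a minimal sketch, assuming the rest of the function is
unchanged, is to sort each list instead of collapsing single-element ones:

    for key, value in reverse_dict.iteritems():
        reverse_dict[key] = sorted(value)
    return reverse_dict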
|
Bi-grams in python with lots of txt files
Question: I have a corpus which includes 70,429 files (296.5 MB). I am trying to find bi-grams
using the whole corpus. I have written the following code;
allFiles = ""
for dirName in os.listdir(rootDirectory):
for subDir in os.listdir(dirName):
for fileN in os.listdir(subDir):
FText = codecs.open(fileN, encoding="'iso8859-9'")
PText = FText.read()
allFiles += PText
tokens = allFiles.split()
finder = BigramCollocationFinder.from_words(tokens, window_size = 3)
finder.apply_freq_filter(2)
bigram_measures = nltk.collocations.BigramAssocMeasures()
for k,v in finder.ngram_fd.most_common(100):
print(k,v)
There is a root directory, and the root directory includes subdirectories,
each of which includes numerous files. What I have done is:
I read all of the files one by one and add the content to the string called
`allFiles`. Eventually, I split the string into tokens and call the relevant
bi-gram functions. The problem is:
I ran the program for a day and couldn't get any results. Is there a more
efficient way to find bigrams within a corpus which includes lots of files?
Any advice and suggestions will be greatly appreciated. Thanks in advance.
Answer: By trying to read a huge corpus into memory at once, you're blowing out your
memory, forcing a lot of swap use, and slowing everything down.
The NLTK provides various "corpus readers" that can return your words one by
one, so that the complete corpus is never stored in memory at the same time.
This might work if I understand your corpus layout right:
from nltk.corpus.reader import PlaintextCorpusReader
reader = PlaintextCorpusReader(rootDirectory, "*/*/*", encoding="iso8859-9")
finder = BigramCollocationFinder.from_words(reader.words(), window_size = 3)
finder.apply_freq_filter(2) # Continue processing as before
...
**Addendum:** Your approach has a bug: You're taking trigrams that span from
the end of one document to the beginning of the next... that's nonsense you
want to get rid of. I recommend the following variant, which collects trigrams
from each document separately.
document_streams = (reader.words(fname) for fname in reader.fileids())
BigramCollocationFinder.default_ws = 3
finder = BigramCollocationFinder.from_documents(document_streams)
|
Python converting lists into 2D numpy array
Question: I have some lists that I want to convert into a 2D numpy array.
list1 = [ 2, 7 , 8 , 5]
list2 = [18 ,29, 44,33]
list3 = [2.3, 4.6, 8.9, 7.7]
The numpy array I want is:
[[ 2. 18. 2.3]
[ 7. 29. 4.6]
[ 8. 44. 8.9]
[ 5. 33. 7.7]]
which I can get by typing the individual items from the list directly into the
numpy array expression as `np.array(([2,18,2.3], [7,29, 4.6], [8,44,8.9],
[5,33,7.7]), dtype=float)`.
But I want to be able to convert the lists into the desired numpy array.
Answer: One way to do it would be to create your `numpy` array and then use the
transpose function to convert it to your desired output:
import numpy as np
list1 = [ 2, 7 , 8 , 5]
list2 = [18 ,29, 44,33]
list3 = [2.3, 4.6, 8.9, 7.7]
arr = np.array([list1, list2, list3])
arr = arr.T
print(arr)
**Output**
[[ 2. 18. 2.3]
[ 7. 29. 4.6]
[ 8. 44. 8.9]
[ 5. 33. 7.7]]
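Alternatively, `np.column_stack` builds the same array directly by treating each
input list as a column:

    arr = np.column_stack((list1, list2, list3))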
|
Can't get value from multidimensional array
Question: I've been doing a lot more Python than PHP recently so perhaps I've forgotten
something important, but as far as I can see, this looks like it should work.
What's actually wrong with it?
$form_settings = array('typeofzone' >= array('tm', 'tp', 'tc'),
'chargenum' >= array('pcn'));
$form_id = 'typeofzone';
// This echoes absolutely nothing
echo $form_settings[$form_id][0];
echo $form_settings[$form_id][1];
echo $form_settings[$form_id][2];
Answer: Your syntax is wrong: you should use `=>` instead of `>=` when defining the array keys.
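With `>=`, each entry is parsed as a comparison expression rather than a key =>
value pair. The corrected definition:

    $form_settings = array('typeofzone' => array('tm', 'tp', 'tc'),
                           'chargenum' => array('pcn'));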
|
Kivy Images Not Showing Without Added Time
Question: # Summary of this app:
* Raspberry Pi running Raspbian Jessie and output to TV
* Python 3.4
* Kivy 1.9.1
* pet information is pulled through a SOAP request
* the information is parsed
* a kivy window is created and the pet information is displayed for a set interval before the next pet's information is displayed
# Issue:
* each pet has a single image (typically 20-60kB each) which many times is not being displayed
* initially I was using asynchronous loading to pull the image and display it direct from it's web address
* now I am downloading every image to a USB drive prior to starting the display sequence, but having the same issues
* the pre-loaded images open fine outside of the app
* when the images were pulled direct from the web, it took about a second for the image to display (or not display)
* now that the images are downloaded first, the images appear almost instantly (or not appear)
* the only way that I can guarantee that every image will display is to set the time interval between pets to 20 or more seconds (whether direct from the web or pulled from the USB stick)
* I tried using and not using asynchronous loading with images stored on the USB stick without success either way
* I've watched the folder on the USB stick and can see that the images load at a rate of about 1-3 images per second (total of about 110 images)
* I've tried adding a delay between the downloading/saving of each image with no luck
* the first 7 images always succeed, independent of whatever list of pets are loaded
* after the first 7 images, it is random with the success rate dependent on the time interval between pets being displayed
* I can't figure out why the added time is necessary for all images to show when the images are rather small and the ones that do show appear almost instantly
## Python
import kivy
import sys
import os
import time
import requests
kivy.require('1.9.1')
import xml.etree.ElementTree as ET
from datetime import datetime
from kivy.app import App
from kivy.core.window import Window
from kivy.uix.floatlayout import FloatLayout
from kivy.clock import Clock
# from kivy.loader import Loader
# image = Loader.image('nophoto.png')
# Loader.error_image = 'nophoto.png'
# SET ADDRESS FOR SOAP
from suds.client import Client
url = 'http://qag.petpoint.com/webservices/AdoptableSearch.asmx?WSDL'
client = Client(url)
# DELETES PRELOADED IMAGES TO START WITH AN EMPTY USB FOLDER
for ea_file in os.listdir('/media/pi/PRELOAD'):
thedress = '/media/pi/PRELOAD' + '/' + ea_file
os.remove(thedress)
# PUSHES DYNAMIC INFO TO SCROLLER.KV
class TheBox(FloatLayout):
def update(self, *args):
global date_now, which_petL, which_petR, total_count, Lname, Lsex, Lbreed, Lage, Lphoto, Rname, Rsex, Rbreed, Rage, Rphoto
quantity = len(ans_lists[0]) - 1
ans_particular = feeder()
Lname = ans_particular[0]
Lsex = ans_particular[1]
Lbreed = ans_particular[2]
Lage = ans_particular[3]
Lphoto = ans_particular[4]
Rname = ans_particular[5]
Rsex = ans_particular[6]
Rbreed = ans_particular[7]
Rage = ans_particular[8]
Rphoto = ans_particular[9]
self.ids.Start_Time.text = '%02d %02d %05d' % (date_now.day, date_now.hour, total_count)
if (total_count % 2) == 0:
if which_petL < quantity:
which_petL += 1
else:
which_petL = 0
self.ids.PetL_name.text = str.upper(Lname)
self.ids.PetL_sex.text = str(Lsex)
self.ids.PetL_breed.text = str(Lbreed)
self.ids.PetL_age.text = str(Lage)
self.ids.PetL_photo.source = str(Lphoto)
else:
if which_petR < quantity:
which_petR += 1
else:
which_petR = 0
self.ids.PetR_name.text = str.upper(Rname)
self.ids.PetR_sex.text = str(Rsex)
self.ids.PetR_breed.text = str(Rbreed)
self.ids.PetR_age.text = str(Rage)
self.ids.PetR_photo.source = str(Rphoto)
# SOAP RESPONSE IS CONVERTED TO XML FORMAT
def reformat_soap():
result = client.service.adoptableSearch('0', 'A', 'All', 'not4u')
..
root = ET.fromstring(closeit)
return root
# ITERATES THE SOAP RESPONSE TO ASSIGN DATA TO LISTS
def pull_data(ans_root):
lpetid = []
lname = []
lsex = []
lbreed = []
lage = []
lphoto = []
for child in ans_root.iter('pet_id'):
..
iphoto = child.find('pet_photo').text
# WEB ADDRESSES FOR IMAGES ARE USED TO CREATE LOCAL ADDRESSES
local_name = iphoto.replace('http://sms.petpoint.com/sms/photos/615/','/media/pi/PRELOAD/')
ghost_pet = local_name.replace('http://sms.petpoint.com/sms3/emails/images/','/media/pi/PRELOAD/')
lphoto.extend([ghost_pet])
# IMAGES ARE DOWNLOADED FROM THE WEB AND SAVED LOCALLY
photo_cache = open(ghost_pet, 'wb')
photo_cache.write(requests.get(iphoto).content)
# time.sleep(2)
photo_cache.close()
return(lname, lsex, lbreed, lage, lphoto)
# ASSEMBLES PET DATA PRIOR TO PUSH
def feeder():
global which_petL, which_petR, cname, csex, cbreed, cage, cphoto
pname = cname[which_petL]
psex = csex[which_petL]
pbreed = cbreed[which_petL]
page = cage[which_petL]
pphoto = cphoto[which_petL]
qname = cname[which_petR]
qsex = csex[which_petR]
qbreed = cbreed[which_petR]
qage = cage[which_petR]
qphoto = cphoto[which_petR]
return(pname, psex, pbreed, page, pphoto, qname, qsex, qbreed, qage, qphoto)
ans_root = reformat_soap()
ans_lists = pull_data(ans_root)
which_petR = int(len(ans_lists[0]) / 2)
cname = ans_lists[0]
csex = ans_lists[1]
cbreed = ans_lists[2]
cage = ans_lists[3]
cphoto = ans_lists[4]
# DEFINES THE KIVY APP, INTERVAL BETWEEN PET DISPLAYS, AND TIES TO SCROLLER.KV
class ScrollerApp(App):
def build(self):
self.load_kv('Scroller.kv')
x = TheBox()
x.update()
Clock.schedule_interval(x.update, 10)
return(x)
# KIVY WINDOW CREATION
if __name__ == '__main__':
ScrollerApp().run()
## Kivy Language
#:kivy 1.9.1
<TheBox>:
FloatLayout:
FloatLayout:
size: 810, 1080
pos_hint: {'center_x': .21}
Image:
canvas.before:
Color:
rgb: (0, 0, 0)
Rectangle:
pos: self.pos
size: self.size
id: PetL_photo
size_hint: None, None
size: 790, 770
pos_hint: {'center_x': .5, 'center_y': .64}
allow_stretch: True
keep_ratio: True
source:
Label:
..
Label:
..
Label:
..
Label:
..
FloatLayout:
..
FloatLayout:
size: 810, 1080
pos_hint: {'center_x': .79}
Image:
canvas.before:
Color:
rgb: (0, 0, 0)
Rectangle:
pos: self.pos
size: self.size
id: PetR_photo
size_hint: None, None
size: 790, 770
pos_hint: {'center_x': .5, 'center_y': .64}
allow_stretch: True
keep_ratio: True
source:
Label:
..
Label:
..
Label:
..
Label:
..
Answer: I am not positive about this correlation, but it seems to make sense. I
increased the GPU memory on the Raspberry Pi from 64 to 128 MB, and now the
application is able to successfully display the images with a much smaller
interval between image changes.
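For reference, on Raspbian the GPU memory split can be changed by editing
`/boot/config.txt` (or via `raspi-config`) and rebooting:

    # /boot/config.txt
    gpu_mem=128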
|
Sum up values in dictionary in Python 2.7
Question: I have a dictionary with data for every 0.04 s, looking like this:

    {1: 0, 2: 4.22109297745, 3: 0.324239117507, 4: 3.99972239616 ...}

Keys represent time and values are the data I received. I need to compute the
arithmetic mean for every second, so first I have to sum up every 25
values. And here I'm stuck... I would appreciate some help a lot.
Answer: As I understand it, your dictionary contains keys 1 to 25 for the 1st second,
26 to 50 for the 2nd, etc., inserted sequentially. Assuming that, you
could do the following. First, make an ordered dictionary:

    import collections
    od = collections.OrderedDict(sorted(d.items()))

and then:
avg=[]
sum = 0
for k,v in od.iteritems():
sum += v
if k%25 == 0:
avg.append(sum/25)
sum = 0
By the end of the loop, avg[0] will contain the average of the 1st second's
values, avg[1] the average of the 2nd, etc. However, this is going to work
only if every second has exactly 25 values.
|
Retrieve specific rows of CSV file containing a particular keyword in Python
Question: I am currently trying to extract particular rows that contain certain
keyword(s)(e.g. 'battery' etc.) from a large csv file.
I have the following code written but it seems not to work for the filter
part.
keywords={'battery'}
import csv
import sys
csv.field_size_limit(sys.maxsize)
invalids=0
valids=0
path=r'/Users/hung/Desktop/test.csv'
with open (path,'r')as f:
reader = csv.reader(f,delimiter=';')
for row in reader:
try:
print(row[2])
valids+=1
except IndexError:
invalids+=1
for field in row:
if field in keywords:
print(row)
break
print(('parsed {0} records. ignored {1}').format(valids,invalids))
I am getting an error saying 'SyntaxError: invalid syntax' for 'print' in the
last line. Is there anything missing that causes the error? Or is my code not
going to work?
Thanks.
Answer: Replace:
print(('parsed {0} records. ignored {1}').format(valids,invalids))
With
print('parsed {0} records. ignored {1}'.format(valids,invalids))
See documentation for
[string.format](https://docs.python.org/3.5/library/string.html#format-string-
syntax)
|
python why data type changed by def function?
Question: Why are num_r1(x) and num_r2(x) of type numpy.ndarray, but num_r(t) of type float?
How can I keep num_r(t) as an array?
def num_r(t):
for x in t:
if x>tx:
return num_r2(x)
else:
return num_r1(x)
Thank you!
The complete example is below
# -*- coding: utf-8 -*-
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import math
from pylab import *
#### physical parameters
c = 2.998*10**10
hp = 6.626*10**-27
hb = 1.055*10**-27
kb = 1.381*10**-16
g = 6.673*10**-8
me = 9.109*10**-28
mp = 1.673*10**-24
q = 4.803*10**-10 #gausi
sigT = 6.652*10**-25
# The evolution of the characteristic frequencies
p = 2.5
E52 = 1
epsB_r = 1
epse_r = 1
D28 = 1
n1 = 1.0
nu15 = 1*10**(-5)
r014 = 1
g42 = 1
delt12 =1
g4 = g42*10**2.5
E0 = E52*10**52
eta = g4
N0 = E0/(g4*mp*c**2)
p_tx = 3**(1./3)*2**(4./3)*mp**(-1./3)*c**(-5./3)
tx = p_tx*n1**(-1./3)*eta**(-8./3)
p_num_r1 = 2**(11./2)*7**(-2)*mp**(5./2)*me**(-3)*pi**(-1./2)*q*p_tx**(-6)*2**30*3**18*10**12
p_nuc_r1 = 2**(-33./2)*3**(-4)*10**(-4)*me*mp**(-3./2)*c**(-2)*sigT**(-2)*pi**(-1./2)*q
p_Fmax_r1 = 2**(15./2)*3**(9./2)*10**30*p_tx**(-3./2)*10**(-56)*me*mp**(1./2)*c**3*sigT*q**(-1)*2**(1./2)*3**(-1)
p_num_r2 = 2**(11./2)*7**(-2)*mp**(5./2)*me**(-3)*pi**(-1./2)*q*p_tx**(54./35)*(2**5*3**3*10**2)**(-54./35)
p_nuc_r2 = 2**(-13./2)*3**2*pi**(-1./2)*me*mp**(-3./2)*c**(-2)*sigT**(-2)*q*p_tx**(-74./35)*(2**5*3**3*10**2)**(4./35)
p_Fmax_r2 = 2**(1./2)*3**(-1)*pi**(-1./2)*me*mp**(1./2)*c**3*sigT*q**(-1)*10**(-56)
num_r1 = lambda t : p_num_r1*eta**18*((p-2)/(p-1))**2*epse_r**2*epsB_r**(1./2)*n1**(5./2)*t**6*E52**(-2)
nuc_r1 = lambda t : p_nuc_r1*eta**(-4)*epsB**(-3./2)*n1**(-3./2)*t**(-2)
Fmax_r1 = lambda t : p_Fmax_r1*N0**t**(3./2)*n1*eta**6*E52**(-1./2)*D28**(-2)*epsB_r**(1./2)
num_r2 = lambda t : p_num_r2*((p-2)/(p-1))**2*n1**(-74./35)*n1**(74./105)*eta**(592./105)*E52**(-74./105)
nuc_r2 = lambda t : p_nuc_r2*eta**(172./105)*t**(4./35)*n1**(-167./210)*epsB_r**(-3./2)
Fmax_r2 = lambda t : N0*eta**(62./105)*n1**(37./210)*epsB_r**(1./2)*t**(-34./35)*D28**(-2)
def fspe(t,u):
if num_r(t)<nuc_r(t):
return np.where(u<num_r(t),(u/num_r(t))**(1./3)*Fmax_r(t),np.where(u<nuc_r(t),(u/num_r(t))**(-(p-1.)/2)*Fmax_r(t),(u/nuc_r(t))**(-p/2)*(nuc_r(t)/num_r(t))**(-(p-1.)/2)*Fmax_r(t)))
else:
return np.where(u<nuc_r(t),(u/nuc_r(t))**(1./3)*Fmax_r(t),np.where(u<num_r(t),(u/nuc_r(t))**(-1./2)*Fmax_r(t),(u/num_r(t))**(-p/2)*(num_r(t)/nuc_r(t))**(-1.2)*Fmax_r(t)))
def num_r(t):
for x in t:
if x>tx:
return num_r2(x)
else:
return num_r1(x)
def nuc_r(t):
for x in t:
if t>tx:
return nuc_r2(x)
else:
return nuc_r1(x)
def Fmax_r(t):
for x in t:
if t>tx:
return Fmax_r2(x)
else:
return Fmax_r1(x)
i= np.arange(-4,6,0.1)
t = 10**i
dnum = [math.log10(mmm) for mmm in num_r(t)]
dnuc = [math.log10(j) for j in nuc_r(t)]
nu_obs = [math.log(2.4*10**17,10) for a in i]
plt.figure('God Bless: Observable Limit')
plt.title(r'$\nu_{obs}$ and $\nu_c$ and $\nu_m$''\nComparation')
plt.xlabel('Time: log t')
plt.ylabel(r'log $\nu$')
plt.axvline(math.log10(tx))
plt.plot(i,nu_obs,'.',label=r'$\nu_{obs}$')
plt.plot(i,dnum,'D',label=r'$\nu_m$')
plt.plot(i,dnuc,'s',label=r'$\nu_c$')
plt.legend()
plt.grid(True)
plt.savefig("nu_obs.eps", dpi=120,bbox_inches='tight')
plt.show()
But there's an error:
TypeError Traceback (most recent call last)
<ipython-input-250-c008d4ed7571> in <module>()
95 i= np.arange(-4,6,0.1)
96 t = 10**i
---> 97 dnum = [math.log10(mmm) for mmm in num_r(t)]
> TypeError: 'float' object is not iterable
Answer: You should write your function as:
def num_r_(x):
if x > tx:
return num_r2(x)
else:
return num_r1(x)
And then pass it through `np.vectorize` to lift it from `float` to `float` to
`np.array` to `np.array`
num_r = np.vectorize(num_r_)
From [Efficient evaluation of a function at every cell of a NumPy
array](http://stackoverflow.com/questions/7701429/efficient-evaluation-of-a-
function-at-every-cell-of-a-numpy-array)
And then when you use it in:
dnum = [math.log10(mmm) for mmm in num_r(t)]
You should rather do:
dnum = np.log10(num_r(t))
That is to say don't use the functions from the `math` module. Use those from
the `np` module as they can take `np.array` as well as float.
As:
i = np.arange(-4,6,0.1)
t = 10**i
results in `t` being a `np.array`
|
Import of excel in odoo on Ubuntu 14.04 - not working
Question: When I tried to import an Excel file into Odoo from Windows, it worked perfectly.
But when I tried this from an Ubuntu machine it didn't work. It showed me this
error:
**"import preview failed due to: Unable to load "xlsx" file requires Python
module "xlrd >= 0.8"**.
Here's the screen shot
[](http://i.stack.imgur.com/Tnm7L.png)
Answer: The error is saying that you need the Python library "xlrd" in order to load
this xlsx file. So make sure that you have installed
[this](http://pypi.python.org/pypi/xlrd/0.9.2) Python library on your OpenERP
Ubuntu machine.
There's another way, mentioned on the Odoo forum, of how you can install it. Here it
is:
First you have to download the package from:
[pypi.python.org/pypi/xlrd/0.9.2](https://pypi.python.org/pypi/xlrd/0.9.2)
Find the folder "xlrd" inside the download, copy it to "OpenERP\Server\"
Restart your server.
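Alternatively, assuming pip is available on the Ubuntu machine, the library can
be installed directly, followed by a server restart:

    sudo pip install xlrd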
|
Return 'similar score' based on two dictionaries' similarity in Python?
Question: I know it's possible to return how similar two strings are by using the
following function:
from difflib import SequenceMatcher
def similar(a, b):
output=SequenceMatcher(None, a, b).ratio()
return output
In [37]: similar("Hey, this is a test!","Hey, man, this is a test, man.")
Out[37]: 0.76
In [38]: similar("This should be one.","This should be one.")
Out[38]: 1.0
But is it possible to score two _dictionaries_ based on the similarity of keys
and their corresponding values? Not a number of in common keys, or what _is_
in common, but a score from 0 to 1, like the example above with strings.
I'm trying to find the similarity score between ratings['Shane'] and
ratings['Joe'] in this dictionary:
`ratings={'Shane': {'127 Hours': 3.0, 'Avatar': 4.0, 'Nonstop': 5.0}, 'Joe':
{'127 Hours': 5.0, 'Taken 3': 4.0, 'Avatar': 5.0, 'Nonstop': 3.0}}`
I am using Python 2.7.10
Answer:
import math
ratings={'Shane': {'127 Hours': 3.0, 'Avatar': 4.0, 'Nonstop': 5.0}, 'Joe': {'127 Hours': 5.0, 'Taken 3': 4.0, 'Avatar': 5.0, 'Nonstop': 3.0}}
def cosine_similarity(vec1,vec2):
sum11, sum12, sum22 = 0, 0, 0
for i in range(len(vec1)):
x = vec1[i]; y = vec2[i]
sum11 += x*x
sum22 += y*y
sum12 += x*y
return sum12/math.sqrt(sum11*sum22)
list1 = list(ratings['Shane'].values())
list2 = list(ratings['Joe'].values())
sim = cosine_similarity(list1,list2)
print(sim)
output
o/p : 0.9205746178983233
**Updated:** When I use:

    ratings={'Shane': {'127 Hours': 5.0, 'Avatar': 4.0, 'Nonstop': 5.0},
             'Joe': {'127 Hours': 5.0, 'Taken 3': 4.0, 'Avatar': 5.0, 'Nonstop': 3.0}}

the output is: `0.9574271077563381`
**Update2: Normalized length and considered keys**
from math import *
ratings={'Shane': {'127 Hours': 5.0, 'Avatar': 4.0, 'Nonstop': 5.0},
'Joe': {'127 Hours': 5.0, 'Taken 3': 4.0, 'Avatar': 5.0, 'Nonstop': 3.0},
'Bob': {'Panic Room':5.0,'Nonstop':5.0}}
def square_rooted(x):
return round(sqrt(sum([a*a for a in x])),3)
    def cosine_similarity(x,y):
        # Use the longer dict as the reference and align the other one to its keys.
        if len(x) > len(y):
            input1, input2 = x, y
        else:
            input1, input2 = y, x
        vector1 = list(input1.values())
        vector2 = []
        for k in input1.keys():  # keys missing from input2 count as 0
            if k in input2:
                vector2.append(float(input2[k]))
            else:
                vector2.append(float(0))
        numerator = sum(a*b for a,b in zip(vector2,vector1))
        denominator = square_rooted(vector1)*square_rooted(vector2)
        return round(numerator/float(denominator),3)
print("Similarity between Shane and Joe")
print (cosine_similarity(ratings['Shane'],ratings['Joe']))
print("Similarity between Joe and Bob")
print (cosine_similarity(ratings['Joe'],ratings['Bob']))
print("Similarity between Shane and Bob")
print (cosine_similarity(ratings['Shane'],ratings['Bob']))
output:
Similarity between Shane and Joe
0.853
Similarity between Joe and Bob
0.346
Similarity between Shane and Bob
0.615
**A nice explanation of the difference between Jaccard and cosine similarity:**
<http://datascience.stackexchange.com/questions/5121/applications-and-
differences-for-jaccard-similarity-and-cosine-similarity>
I am using Python 3.4.
**NOTE**: I have assigned 0 to missing values, but you can assign more appropriate
values too. Refer: <http://www.analyticsvidhya.com/blog/2015/02/7-steps-data-
exploration-preparation-building-model-part-2/>
|
Integrating Python into Java - can we call .py files directly?
Question: I am trying to understand Jython. I have some algorithms written in Python
that I want to integrate in Java. The Jython docs are very complex for me to
understand. All I could get from them is that I can run individual Python
statements from Java by embedding them like this:
interp = new PythonInterpreter();
interp.exec("import sys");
interp.exec("print sys");
But I can't embed my giant algorithms like that. I need to run the py scripts.
Is there any way to do that? Can I get a hello world example where the
`print("hello")` statement is written in a py script file and the output is
shown on a Java console?
Answer: Jython is the better option.
Otherwise, you can run the Python program from Java using the command prompt and
collect the output back in Java,
e.g.:
    try {
        ProcessBuilder builder = new ProcessBuilder(
            "cmd.exe", "/c", "C:\\Python27\\python.exe C:\\Users\\Bens\\Desktop\\test.py");
        builder.redirectErrorStream(true);
        Process p = builder.start();
        BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
        String line;
        while (true) {
            line = r.readLine();
            if (line == null) {
                break;
            }
            System.out.println(line);
        }
    } catch (IOException ex) {
        Logger.getLogger(TCPServer.class.getName()).log(Level.SEVERE, null, ex);
    }
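To answer the Jython part directly: `PythonInterpreter` can also run a whole
script file via `execfile`, so a big algorithm does not need to be embedded
statement by statement. A sketch, assuming a `hello.py` containing
`print("hello")` in the working directory:

    PythonInterpreter interp = new PythonInterpreter();
    interp.execfile("hello.py");  // prints "hello" on the Java console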
|
Splitting the dictionary into multiple copies in python
Question: I have a python dictionary
d = {
'facets':{'style':"collared",'pocket':"yes"},
'vars':[ {'facets':{'color':"blue", 'size':"XL"}},
{'facets':{'color':"blue", 'size':"L"}} ]
}
Since there are 2 dictionaries in the 'vars' key, I want to have 3 different
dictionaries, as given below. Please make it dynamic, as
'vars' can contain any number of facets.
d1 = {
'facets':{'style':"collared",'pocket':"yes"}
}
d2 = {
'facets':{'color':"blue", 'size':"XL"}
}
d3 = {
'facets':{'color':"blue", 'size':"L"}
}
Answer: Don't create separate variables. If you have 3 additional facet dictionaries
in the `vars` key, you have to figure out how to create `d4` as well, etc.
Later on you suddenly have to now guess at how many `d*` variables exist.
Create a list instead:
facets = [{'facets': d['facets']}] + [facet for facet in d['vars']]
With a list, you can now simply loop over all the `facets` entries to
manipulate or display them.
Demo:
>>> d = {
... 'facets':{'style':"collared",'pocket':"yes"},
... 'vars':[ {'facets':{'color':"blue", 'size':"XL"}},
... {'facets':{'color':"blue", 'size':"L"}} ]
... }
>>> [{'facets': d['facets']}] + [facet for facet in d['vars']]
[{'facets': {'pocket': 'yes', 'style': 'collared'}}, {'facets': {'color': 'blue', 'size': 'XL'}}, {'facets': {'color': 'blue', 'size': 'L'}}]
>>> from pprint import pprint
>>> pprint(_)
[{'facets': {'pocket': 'yes', 'style': 'collared'}},
{'facets': {'color': 'blue', 'size': 'XL'}},
{'facets': {'color': 'blue', 'size': 'L'}}]
|
PyUsb does not recognize my USB device while my PC does
Question: I'm trying to communicate between my PC and a PIC18F4550, but the program does
not detect it, whereas the computer shows it in [Device
Manager](http://en.wikipedia.org/wiki/Device_Manager).
import usb.core
dev = usb.core.find(idVendor = 0x04D8, idProduct = 0xFEAA)
The function for checking USB devices:
def find(find_all = False, backend = None, custom_match = None, **args):
def device_iter(k, v):
for dev in backend.enumerate_devices():
d = Device(dev, backend)
if _interop._reduce(lambda a, b: a and b,map(operator.eq,v,map(lambda i:getattr(d,i),k)),True)and (custom_match is None or custom_match(d)):
yield d
if backend is None:
import usb.backend.libusb1 as libusb1
import usb.backend.libusb0 as libusb0
import usb.backend.openusb as openusb
for m in (libusb1, openusb, libusb0):
backend = m.get_backend()
if backend is not None:
_logger.info('find(): using backend "%s"', m.__name__)
break
else:
raise ValueError('No backend available')
k, v = args.keys(), args.values()
if find_all:
return device_iter(k, v)
else:
try:
return _interop._next(device_iter(k, v))
except StopIteration:
return None
The error I'm getting while running the code:
Traceback (most recent call last):
File "C:\modules\motor.py", line 29, in <module>
dev = usb.core.find(idVendor=0x04D8,idProduct=0xFEAA)
File "C:\Python27\lib\site-packages\usb\core.py", line 1199, in find
raise ValueError('No backend available')
ValueError: No backend available
It used to execute properly before, but for the past few days it has been showing
this error. I don't understand what happened all of a sudden. Is there a
problem with the PyUSB modules?
I have seen others hit the same problem while using USB
communication.
* * *
I've sorted out the problem. The PyUSB module searches for the
libusb0.dll and libusb-1.0.dll files, which are the backends it uses to communicate with
USB devices, so their location needs to be included in the PATH environment variable.
Answer: Whenever we use PyUSB for USB communication with the PC, the PyUSB module
checks for the libusb0.dll and libusb-1.0.dll files (which act as backends) in
the **`PATH environment variable`** and in **`C:\windows\System32`**,
and then establishes communication with USB devices. Since I'm using libusb-
win32-wizard to create device drivers, it uses libusb0.dll. The process of
execution can be traced using the following debug program:
import os
os.environ['PYUSB_DEBUG'] = 'debug'
import usb.core
print list(usb.core.find(find_all=True))
When I execute the above program in the **shell**, the output I get is:
2016-03-26 11:41:44,280 ERROR:usb.libloader:'Libusb 1' could not be found
2016-03-26 11:41:44,280 ERROR:usb.backend.libusb1:Error loading libusb 1.0 backend
2016-03-26 11:41:44,280 ERROR:usb.libloader:'OpenUSB library' could not be found
2016-03-26 11:41:44,280 ERROR:usb.backend.openusb:Error loading OpenUSB backend
2016-03-26 11:41:44,280 INFO:usb.core:find(): using backend "usb.backend.libusb0"
2016-03-26 11:41:44,280 DEBUG:usb.backend.libusb0:_LibUSB.enumerate_devices()
2016-03-26 11:41:44,296 DEBUG:usb.backend.libusb0:_LibUSB.get_device_descriptor(<usb.backend.libusb0._usb_device object at 0x0200E530>)
2016-03-26 11:41:44,296 DEBUG:usb.backend.libusb0:_LibUSB.get_device_descriptor(<usb.backend.libusb0._usb_device object at 0x0200E5D0>)
2016-03-26 11:41:44,296 DEBUG:usb.backend.libusb0:_LibUSB.get_device_descriptor(<usb.backend.libusb0._usb_device object at 0x0200E6C0>)
2016-03-26 11:41:44,296 DEBUG:usb.backend.libusb0:_LibUSB.get_device_descriptor(<usb.backend.libusb0._usb_device object at 0x0200E7B0>)
2016-03-26 11:41:44,296 DEBUG:usb.backend.libusb0:_LibUSB.get_device_descriptor(<usb.backend.libusb0._usb_device object at 0x0200E8A0>)
2016-03-26 11:41:44,296 DEBUG:usb.backend.libusb0:_LibUSB.get_device_descriptor(<usb.backend.libusb0._usb_device object at 0x0200E990>)
2016-03-26 11:41:44,296 DEBUG:usb.backend.libusb0:_LibUSB.get_device_descriptor(<usb.backend.libusb0._usb_device object at 0x0200EA80>)
2016-03-26 11:41:44,296 DEBUG:usb.backend.libusb0:_LibUSB.get_device_descriptor(<usb.backend.libusb0._usb_device object at 0x0200EB70>)
[<DEVICE ID 046d:c05a on Bus 000 Address 001>, <DEVICE ID 046d:c31d on Bus 000 Address 002>, <DEVICE ID 046d:c31d on Bus 000 Address 003>, <DEVICE ID 046d:c31d on Bus 000 Address 004>, <DEVICE ID 04d8:feaa on Bus 000 Address 005>, <DEVICE ID 046d:082b on Bus 000 Address 006>, <DEVICE ID 046d:082b on Bus 000 Address 007>, <DEVICE ID 046d:082b on Bus 000 Address 008>]
Since I gave the argument `find_all=True` to the `usb.core.find()`
function, it returns the ID of every device connected to the PC. The first four
lines show errors because libusb-win32-wizard uses libusb0.dll,
hence the fifth line reports `INFO:usb.core:find(): using backend
"usb.backend.libusb0"`, which means it is using libusb0.dll for communicating
with USB devices.
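For a quick test without changing system settings, the DLL directory can also be
prepended to PATH from within the script before importing PyUSB (the directory
shown is hypothetical):

    import os
    os.environ['PATH'] = r'C:\libusb\bin' + os.pathsep + os.environ['PATH']  # hypothetical DLL location
    import usb.core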
|
Is it bad design for a parent class's method in Python to produce instances of its children?
Question: I am doing some work with the Wikipedia category graph (using Python 3.5), and
have run into a design problem.
I have a base class Page, which defines some methods common to both articles
and categories, as well as classes for Article and Category, which inherit
from Page.
The problem is that each of these classes are quite large, so ideally I'd like
them in separate modules within a package. However, since any page on
Wikipedia (i.e. both articles and categories) can itself have categories, the
method to return the categories of a page is defined in the base Page class.
This means that the Page class depends on Category. However, since Category
depends on Page, this is a circular dependency so the only way it will work
without scoped imports is by defining both Category and Page in the same
module.
This really comes down to the fact that a method of the base class produces
instances of a specific child thereof, which is not a pattern I've had cause
to use before. (As opposed to a base class generically producing instances of
whichever child is calling the method). Is there a design pattern that will
deal with this situation, or is this perhaps one of the rare cases that calls
for a scoped import?
Snippet below for a vague illustration:
class Page(object):
def categories(self):
return [Category(title) for title in self._category_titles()]
class Category(Page):
...
Answer: The circular dependency hints that the classes are actually
tightly coupled, so it is better to keep them in the same module.
In my opinion you are trying to solve the real problem, which is "these classes
are quite large", in the wrong way. Python modules are meant to group related
classes and functions together, which is exactly your case.
The problem with [large
classes](https://sourcemaking.com/refactoring/smells/large-class) should instead be
solved by refactoring them into smaller classes and functions.
|
Python: Initialising a new derived class in a method of a base class
Question: My situation is that I have access to two classes that work nicely together.
Modifying their code extensively is probably not possible but maybe small
changes could be implemented. However, there are some small extensions to both
classes that I would like to make. This seems like a job for subclassing,
deriving from each class and adding functionality. But I've run into a problem
because one base class calls the other, not my derived one.
Let's say I have two classes A and B in a module 'base_classes.py'. Class B
has a method that creates an instance of class A and uses it e.g.
'base_classes.py'
class A():
def __init__(self):
print('Class A being called')
class B():
def __init__(self):
self.do_thing()
def do_thing(self):
self.myA = A() # Here class A is explicitly named
So I would like to subclass these two and extend them. I do that in a separate
module:
'extensions.py'
import base_classes
class DerivedA(base_classes.A):
def __init__(self):
super().__init__()
print('Class DerivedA being called')
class DerivedB(base_classes.B):
pass
db = DerivedB()
As expected the output is simply
Class A being called
But how can I prevent _my subclass of B_ from making the instance of the base
class of A in the method `do_thing(self)` and instead make an instance of
DerivedA?
The simple way would be to override `do_thing(self)` in DerivedB so that it
explicitly calls DerivedA e.g.
class DerivedB(base_classes.B):
def do_thing(self):
self.myA = DerivedA() # Here class DerivedA is explicitly named
This is fine for this small example, but what if `do_thing(self)` was a
hundred lines long and contained many objects of type A? What if most methods
in B contained some A objects. You'd have to override basically every method
with an almost exact replica, making it pointless to derive from B in the
first place. That's pretty much the problem in my case and I think there must
be a clever pythonic way to solve this. Ideally without completely rewriting
the original classes.
Any ideas?
Answer: You can change the `do_thing` method on class B to accept a class as a
parameter:
class B():
def __init__(self):
self.do_thing()
def do_thing(self, klass=A):
self.myA = klass()
Then DerivedB's `do_thing` method can call it with DerivedA:
class DerivedB(base_classes.B):
def do_thing(self, klass=DerivedA):
return super(DerivedB, self).do_thing(klass)
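With this change, instantiating `DerivedB` now creates a `DerivedA`, so both
constructors run:

    db = DerivedB()
    # Class A being called
    # Class DerivedA being called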
|
Troubles of making Django Models work on MS-SQL on MS Azure?
Question: I want to create a Django web app on Azure.
I spent all day trying to connect my model classes to an existing MSSQL
database on Azure.
It still did not work, and I am worn out.
I list my steps below and hope someone can help. Thanks a lot!
Step1. Install dependency libraries
sudo pip install --upgrade pip
sudo pip install django-pyodbc
sudo pip install django-sqlserver
sudo pip install django-mssql
sudo pip install django-pyodbc-azure
brew install freetds
brew install freetds --with-unixodbc
Step2. Configuration writing
~/.bash_profile
#ODBC
export ODBCSYSINI=/usr/local/opt/unixodbc/etc
export ODBCINI=/usr/local/opt/unixodbc/etc/odbc.ini
/etc/odbcinst.ini
[FreeTDS]
Driver=/usr/local/lib/libtdsodbc.so
Setup=/usr/local/lib/libtdsodbc.so
Server={host}
UsageCount=1
Port=1433
Database={db name}
User={user name}
Password={password}
TDS_Version=7.2
client_charset=utf-8
/etc/odbc.ini
[FreeTDS]
Driver = FreeTDS
ServerName = {hostname}
Database = {db name}
UserName = {user name}
Password = {password}
Port = 1433
Protocol = 7.2
TDS_Version = 8.0
Step3. Try to connect DB.
tsql -S FreeTDS -p 1433 -U {user name} -P {password}
It’s ok to connect to ‘INFORMATION_SCHEMA’ DB。
But when I try:
tsql -S FreeTDS -p 1433 -U {user name} -P {password} -D {database name}
I had problems:
Msg 4075 (severity 16, state 1) from {hostname} Line 1:
"The USE database statement failed because the database collation Chinese_Traditional_Stroke_Order_100_CS_AS_WS is not recognized by older client drivers. Try upgrading the client operating system or applying a service update to the database client software, or use a different collation. See SQL Server Books Online for more information on changing collations."
Msg 18456 (severity 14, state 1) from {hostname} Line 1:
"Login failed for user ‘{user name}’.”
Error 20002 (severity 9):
Adaptive Server connection failed
There was a problem connecting to the server
If I try `tsql -S FreeTDS -p 1433 -U {user name} -P {password}` and then run:
1> USE somedb
2> go
Msg 40508 (severity 16, state 1) from {hostname} Line 1:
"USE statement is not supported to switch between databases. Use a new connection to connect to a different database."
Problems again. (I found some pages that say USE does not work on an MSSQL DB on Azure.)
Step4. Try: python manage.py inspectdb > models.py
django.db.utils.Error: ('08001', '[08001] [unixODBC][FreeTDS][SQL Server]Unable to connect to data source (0) (SQLDriverConnect)')
Step5. Try models
settings.py
DATABASES = {
'default': {
'ENGINE': 'sql_server.pyodbc',
'NAME': '{database name}',
'USER': '{user name}',
'PASSWORD': '{password}',
'HOST': '{hostname}',
'PORT': '1433',
'OPTIONS': {
'driver': 'FreeTDS',
},
}
}
and
$python manage.py shell
>>> from app.models import {ModelName}
>>> {ModelName}.objects.all()
Error: ('08001', '[08001] [unixODBC][FreeTDS][SQL Server]Unable to connect to data source (0) (SQLDriverConnect)')
(again, no surprise.)
Environments:
1. Python: 2.7.10
2. Django: 1.9.4 (final)
3. MS-SQL Server Version: V12
4. OS: Mac OSX 10.11.3
5. FreeTDS: 0.95.80
6. Others
pyobjc-core (2.5.1)
pyobjc-framework-Accounts (2.5.1)
pyobjc-framework-AddressBook (2.5.1)
pyobjc-framework-AppleScriptKit (2.5.1)
pyobjc-framework-AppleScriptObjC (2.5.1)
pyobjc-framework-Automator (2.5.1)
pyobjc-framework-CFNetwork (2.5.1)
pyobjc-framework-Cocoa (2.5.1)
pyobjc-framework-Collaboration (2.5.1)
pyobjc-framework-CoreData (2.5.1)
pyobjc-framework-CoreLocation (2.5.1)
pyobjc-framework-CoreText (2.5.1)
pyobjc-framework-DictionaryServices (2.5.1)
pyobjc-framework-EventKit (2.5.1)
pyobjc-framework-ExceptionHandling (2.5.1)
pyobjc-framework-FSEvents (2.5.1)
pyobjc-framework-InputMethodKit (2.5.1)
pyobjc-framework-InstallerPlugins (2.5.1)
pyobjc-framework-InstantMessage (2.5.1)
pyobjc-framework-LatentSemanticMapping (2.5.1)
pyobjc-framework-LaunchServices (2.5.1)
pyobjc-framework-Message (2.5.1)
pyobjc-framework-OpenDirectory (2.5.1)
pyobjc-framework-PreferencePanes (2.5.1)
pyobjc-framework-PubSub (2.5.1)
pyobjc-framework-QTKit (2.5.1)
pyobjc-framework-Quartz (2.5.1)
pyobjc-framework-ScreenSaver (2.5.1)
pyobjc-framework-ScriptingBridge (2.5.1)
pyobjc-framework-SearchKit (2.5.1)
pyobjc-framework-ServiceManagement (2.5.1)
pyobjc-framework-Social (2.5.1)
pyobjc-framework-SyncServices (2.5.1)
pyobjc-framework-SystemConfiguration (2.5.1)
pyobjc-framework-WebKit (2.5.1)
pyodbc (3.0.10)
django-mssql (1.6.2)
django-pyodbc-azure (1.9.3.0)
django-sqlserver (1.7)
Answer: In my experience, it seems that you need to configure the FreeTDS driver
`/usr/local/lib/libtdsodbc.so` in the `odbc.ini` file on your Mac
OS.
There is [a similar answered
thread](http://stackoverflow.com/questions/29571568/how-to-connect-to-azure-
sql-database-from-django-app-on-linux-vm) you can refer to that suggests using the
`pymssql` package to connect to Azure SQL Database. Also, see the Azure
official doc [Connect to SQL Database by using Python on Ubuntu
Linux](https://azure.microsoft.com/en-us/documentation/articles/sql-database-
develop-python-simple-ubuntu-linux/).
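For reference, a minimal `pymssql` connection sketch; the server, user, password
and database values are placeholders to fill in with your Azure SQL details
(note that Azure expects the user in `user@server` form):

    import pymssql
    conn = pymssql.connect(server='{hostname}', user='{user name}@{server}',
                           password='{password}', database='{db name}')
    cursor = conn.cursor()
    cursor.execute('SELECT 1')  # simple connectivity check
    print(cursor.fetchone())
    conn.close()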
|
Capture any occurrence of word in a text; RegEx; Python
Question: I have a list of words that I want to cross-reference with a bunch of texts,
and if a word from the search string is present in the text, I want to retain
the text.
search_string = ['Good', 'Bad', 'Ugly']
My code so far is:
retained_texts = []
for text in full_text:
if set(text) & search_string:
retained_texts.append(' '.join(text))
Here, `full_text` is a list of lists and `text` is a list of words.
This method has very low accuracy, because it retains only texts
where `Good`, `Bad`, and `Ugly` appear as separate words, and it
rejects instances where they are embedded in other words.
E.g.,
Instances like `Goodwill`, `Ugly-duckling`, `BadBoy`, `Good-Bad-Ugly` etc. are
all rejected, while I definitely need them to be retained.
I would assume this could be solved with regex, but I frankly don't know
how.
Answer: You can do this with the following regular expression (using `re.search` so the
pattern can match anywhere in the text):

    re.search('(Good|Bad|Ugly)', text)

So your full code would look something like this (the words of each text are
joined back into a string before searching):

    import re
    search_string = ['Good', 'Bad', 'Ugly']
    pattern = '({0})'.format('|'.join(map(re.escape, search_string)))
    retained_texts = []
    for text in full_text:
        if re.search(pattern, ' '.join(text)):
            retained_texts.append(' '.join(text))
**UPDATE:** As comments point out there is a problem if `search_string`
contains dots, parenthesis or any other characters that need to be escaped
within regular expressions. This can be fixed by calling `re.escape` when
pattern is being constructed, I've edited the example above accordingly.
|
Redmine: remove archived projects
Question: I would like to remove all archived projects on my redmine installation. Doing
so from my browser works, but I have 400 of them...
I had the idea to script it, but the [redmine REST
api](http://www.redmine.org/projects/redmine/wiki/Rest_api_with_python)
doesn't seem to expose the deletion of archived projects... And there is no
"Remove all" feature from the administration panel.
Have any of you already had to deal with this kind of thing?
Answer: Try the python-redmine library:
from redmine import Redmine
redmine = Redmine('http:///', username='', password='')
projects = redmine.project.all()
for project in projects:
print project.status, project.id
for project in projects:
if project.status == 5:
redmine.project.delete(project.id)
<https://media.readthedocs.org/pdf/python-redmine/latest/python-redmine.pdf>
|
How to delete columns in xlwings?
Question: I'm using `xlwings` on Windows (Excel 2007 with Python 2.7) and would like to
delete either ranges or columns with `xlwings`. As far as I could see,
deletion of a range or a column is a missing feature, so I tried to follow the
instructions given
[here](http://docs.xlwings.org/en/stable/missing_features.html) and tried to
access the `.Delete` method of the Range object in VBA. Do you have any
suggestions on what is causing the error and how to delete a range or a whole
column in `xlwings`?
The code I was trying to run in command line is below (for deleting the whole
column in the active workbook):
import xlwings as xw
wb = xw.Workbook.active()
xw.Range('C1:C3').xl_range.EntireColumn.Delete
I received the following output instead of a deletion:

<bound method CDispatch.Delete of <COMObject <unknown>>>
`Xlwings` does offer the possibility to clear values from a range (via
`Range('C1:C3').clear()`), but that would leave an empty range or column in
the sheet.
Answer: Your snippet references the `Delete` method without calling it, which is why
you get the bound-method repr back instead of a deletion: the parentheses are
missing. Access the entire column and call `.xl_range.Delete()` instead:

xw.Range('C:C').xl_range.Delete()
|
Encoding error when combining text files
Question: I'm trying to run this code:
import glob
import io

read_files = filter(lambda f: f != 'final.txt' and f != 'result.txt', glob.glob('*.txt'))

with io.open("REGEXES.rx.txt", "w", encoding='UTF-32') as outfile:
    for f in read_files:
        with open(f, "r") as infile:
            outfile.write(infile.read())
        outfile.write('|')
To combine some text files and I get this error:
Traceback (most recent call last):
  File "/Users/kosay.jabre/Desktop/Password Assessor/RegexesNEW/CombineFilesCopy.py", line 10, in <module>
    outfile.write(infile.read())
  File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xa3 in position 2189: ordinal not in range(128)
I've tried UTF-8, UTF-16, UTF-32 and latin-1 encodings. Any ideas?
Answer: You're getting the error from `infile.read()`. The file was opened in text
mode without an encoding specified. Python will try to guess your default file
encoding but may default to ASCII. Any byte larger than `\x7f` / 127 is not
ASCII, so it will throw an error.
You need to know the encoding of your files before you proceed, otherwise you
will get errors if Python tries to read one encoding and gets another, or you
will simply get mojibake.
**Assuming** that `infile` will be utf-8 encoded, change:
with open(f, "r") as infile:
to:
with open(f, "r", encoding="utf-8") as infile:
You may also want to change `outfile`'s encoding to UTF-8 to avoid wasted
storage, since UTF-32 spends four bytes on every character. Because the input
is decoded to plain Unicode, the encodings of `infile` and `outfile` don't
need to match.
|
Python CSV - Combine, clean and output emails in correct format
Question: I am trying to take more than one file containing information such as names,
email addresses, and other fields, take these files in CSV format and remove
everything except the emails, then output a new file with the emails separated
by semicolons, all on the same line. The final format should look like:
[email protected]; [email protected]; [email protected]
I must check that the emails are in the correct format of
`[email protected]`. I must remove all duplicates, and I must compare this
list to another and remove the emails from one list that occur in the others.
The final format will be such that someone can copy and paste the recipient
email addresses into Outlook.
I have looked at some videos, and also searched here. I found: [python csv copy
column](http://stackoverflow.com/questions/19324968/python-csv-copy-column).
But I get an error when trying to write the new file. I have imported `csv`
and `re`.
Here is my code below:
def final_emails(email_list):
    with open(email_list) as csv_file:
        read_csv = csv.reader(csv_file, delimiter=',')
        write_csv = csv.writer(out_emails, delimiter=";")

        for row in read_csv:
            email = row[2]  # only take the emails (from column 3)
            if email != '':  # remove empties
                # remove the header, or anything that doesn't have a '@'
                # convert to lowercase and append to list
                emails.append(re.findall('\w*@\w*.\w{3}', email.lower()))
                write_csv.write([email])
    return emails

final_emails(list1)
final_emails(list2)

print(emails)
I have the print at the bottom to check the result. I added the write to
create a new file, but got this error: `TypeError: argument 1 must have a
"write" method`. I'm still learning, and several things here are new to me,
like csv and regular expressions. Any assistance is appreciated. Thank you.
Answer: You need to define `out_emails` as a file handle opened for writing before you
can pass it to `csv.writer`.
`csv.writer` needs an object with a `.write` method, such as a file handle, to
be able to write to it. In your code `out_emails` is never defined as an open
file, so it has no `.write` method, hence the `TypeError`.
|
Splitting my code into multiple files in Python 3
Question: I wish to split my code into multiple files in Python 3.
I have the following files:
/hello
    __init__.py
    first.py
    second.py
Where the contents of the above files are:
**`first.py`**
from hello.second import say_hello
say_hello()
**`second.py`**
def say_hello():
    print("Hello World!")
But when I run:
python3 first.py
while in the `hello` directory I get the following error:
Traceback (most recent call last):
  File "first.py", line 1, in <module>
    from hello.second import say_hello
ImportError: No module named 'hello'
Answer: Swap out
from hello.second import say_hello
for
from second import say_hello
Python automatically puts the script's directory on the module search path, so
importing straight from `second` will work. You don't even need the
`__init__.py` file for this. You _do_ , however, need the `__init__.py` file
if you wish to import from outside of the package:
$ python3
>>> from hello.second import say_hello
>>> # Works ok!
|
Unable to solve "ImportError: No module named '_tkinter'"
Question: When I try to run my program I keep getting the `ImportError: No module named
'_tkinter'` error. I tried two things which I found could solve this problem:
sudo apt-get install python3-tk
sudo apt-get install tk-dev
They both say that they are already up to date, but I still get the `No module
named '_tkinter'` error.
Edit:
The error points to this line `from tkinter import *`
This is how I run the program that produces the error:
python3 myprog.py
Answer: Run this code and see what it says
import sys

if sys.version_info[0] < 3:
    import Tkinter as tk  # Python 2.x
    print("Python 2.X")
else:
    import tkinter as tk  # Python 3.x
    print("Python 3.X")

print("version", tk.TclVersion)
|
How do I specify which python and which modules are being used in my jupyter notebook?
Question: When I do
import sys
sys.executable
I get `'/usr/local/opt/python/bin/python2.7'` in my ordinary python shell and
`'/usr/bin/python'` in IPython or my jupyter notebook. I would like to force
my jupyter notebook to use this same python that the shell is using. I have
installed many modules and would like to be able to use the same ones in
jupyter than I am using already in the shell. How can I do this?
Answer: The simplest way is to install IPython and Jupyter with the Python you want
them to use. You can do this using pip:
path/to/python -m pip install jupyter
You could alternatively set up the IPython kernel to run with your desired
Python without reinstalling the notebook. See [the docs on installing
kernels](http://ipython.readthedocs.org/en/stable/install/kernel_install.html).
This is more complicated than just installing everything again, though.
|
Subsets and Splits