PyCharm Django test runner can't see django.sites (Runtime Error)
Question: When I run `manage.py test` everything works normally, but if I run the tests
with PyCharm's Django Tests configuration it gives me the following error:
Error
Traceback (most recent call last):
File "/usr/lib/python3.4/unittest/case.py", line 58, in testPartExecutor
yield
File "/usr/lib/python3.4/unittest/case.py", line 577, in run
testMethod()
File "/usr/lib/python3.4/unittest/loader.py", line 32, in testFailure
raise exception
ImportError: Failed to import test module: order_form.tests
Traceback (most recent call last):
File "/usr/lib/python3.4/unittest/loader.py", line 312, in _find_tests
module = self._get_module_from_name(name)
File "/usr/lib/python3.4/unittest/loader.py", line 290, in _get_module_from_name
__import__(name)
File "/home/vagrant/project/order_form/tests.py", line 2, in <module>
from .models import Order
File "/home/vagrant/project/order_form/models.py", line 3, in <module>
from cms.models.pluginmodel import CMSPlugin
File "/home/vagrant/.virtualenvs/env/lib/python3.4/site-packages/cms/models/__init__.py", line 3, in <module>
from .pagemodel import * # nopyflakes
File "/home/vagrant/.virtualenvs/env/lib/python3.4/site-packages/cms/models/pagemodel.py", line 6, in <module>
from django.contrib.sites.models import Site
File "/home/vagrant/.virtualenvs/env/lib/python3.4/site-packages/django/contrib/sites/models.py", line 83, in <module>
class Site(models.Model):
File "/home/vagrant/.virtualenvs/env/lib/python3.4/site-packages/django/db/models/base.py", line 102, in __new__
"INSTALLED_APPS." % (module, name)
RuntimeError: Model class django.contrib.sites.models.Site doesn't declare an explicit app_label and isn't in an application in INSTALLED_APPS.
**Yes, I've enabled** `'django.contrib.sites'` **in INSTALLED_APPS and SITE_ID
is set.**
[](http://i.stack.imgur.com/H3N1s.png)
Any ideas?
Answer: I created a local virtualenv and installed the same requirements — the tests work
fine with it.
Thanks, everybody, for the comments.
|
Syntax errors with interactive Sage cell
Question: I am trying to create a webpage which uses an interactive Sage cell to
implement the Vigenere Cipher on user-inputted strings. Code runs perfectly
when I run it outside of the interactive cell. See below:
message = 'Beware the Jabberwock, my son!'
key = 'VIGENERECIPHER'
from itertools import starmap, cycle
def encrypt(message, key):
    message = filter(lambda _: _.isalpha(), message.upper())
    def enc(c,k): return chr(((ord(k) + ord(c)) % 26) + ord('A'))
    return "".join(starmap(enc, zip(message, cycle(key))))
encr = encrypt(message, key)
print encr
But when I try to implement it within the interactive cell, I get syntax
errors.
@interact
def f(message = input_box('Beware the Jabberwock, my son!', label ="Plain text"), key = input_box('VIGENERECIPHER', label = "Key word")):
    from itertools import starmap, cycle
    def encrypt(message, key):
        message = filter(lambda _: _.isalpha(), message.upper())
        def enc(c,k): return chr(((ord(k) + ord(c)) % 26) + ord('A'))
        return "".join(starmap(enc, zip(message, cycle(key))))
    encr = encrypt(message, key)
    print encr
The following error is printed:
AttributeError: 'exceptions.SyntaxError' object has no attribute 'upper'
I am new to python/sage... I'm guessing this is some sort of error with
class/type? I've tried googling, but I can't find anything related to this
problem specifically. Thanks
Answer: I don't see this `AttributeError`, but another error instead. Maybe it's a
symptom of the same thing. In any case, the problem is that
`message=input_box(...)` expects a Python expression in the box. You should
add a `type` keyword:
message=input_box('Beware the Jabberwock, my son!', label ="Plain text", type=str)
(Alternatively, you can enter all of your strings in the input box with
explicit quotes.)
|
python selenium detect alert error if active
Question: On this [page](https://secure.ingdirect.it/login.aspx), when you enter wrong data
an error alert opens. I need to check whether this alert is open and, if it is, close it.
I wrote the code below but it is not working.
Example alert image: <http://snag.gy/8WM0q.jpg>
My current code:
driver = webdriver.Firefox()
driver.get("https://secure.ingdirect.it/login.aspx")
driver.find_element_by_id("ctl00_cphContenuto_LoginContainerUC1_LoginStepCifUC1_txtCodiceCliente").clear()
driver.find_element_by_id("ctl00_cphContenuto_LoginContainerUC1_LoginStepCifUC1_txtCodiceCliente").send_keys('1234567')
driver.find_element_by_id("ctl00_cphContenuto_LoginContainerUC1_LoginStepCifUC1_txtgg").clear()
driver.find_element_by_id("ctl00_cphContenuto_LoginContainerUC1_LoginStepCifUC1_txtgg").send_keys("01")
driver.find_element_by_id("ctl00_cphContenuto_LoginContainerUC1_LoginStepCifUC1_txtmm").clear()
driver.find_element_by_id("ctl00_cphContenuto_LoginContainerUC1_LoginStepCifUC1_txtmm").send_keys("01")
driver.find_element_by_id("ctl00_cphContenuto_LoginContainerUC1_LoginStepCifUC1_txtaaaa").clear()
driver.find_element_by_id("ctl00_cphContenuto_LoginContainerUC1_LoginStepCifUC1_txtaaaa").send_keys("1999")
driver.find_element_by_id("ctl00_cphContenuto_LoginContainerUC1_LoginStepCifUC1_lnbvanti").click()
if self.is_element_present(By.LINK_TEXT, "chiudi"):
    driver.find_element_by_link_text("chiudi").click()
    return
How can I check whether this alert exists, and close it?
Answer: You can simply use the following code to check whether the pop-up appears or not
(close the pop-up if it is open, or do nothing if there is no pop-up):
from selenium.common.exceptions import NoSuchElementException
try:
    driver.find_element_by_xpath('//a[@class="popuptipo1chiudi close"]').click()
except NoSuchElementException:
    pass
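If the pop-up is rendered a moment after the click, a short explicit wait is more robust than an immediate lookup. A minimal sketch, assuming the same "chiudi" close link (the 5-second timeout is an arbitrary choice):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
try:
    # wait up to 5 seconds for the close link to become clickable, then close the pop-up
    close_link = WebDriverWait(driver, 5).until(
        EC.element_to_be_clickable((By.LINK_TEXT, "chiudi")))
    close_link.click()
except TimeoutException:
    pass  # no pop-up appeared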
|
Sending a JSON POST using urllib2 results in HTTP 422
Question: I have been having a problem sending a JSON request to an API. I've added the
application-type headers, but the API still responds with HTTP 422. The JSON
data is valid; I checked it via jsonlint.
post_config = urllib2.Request(config_url)
post_config.add_header('AUTHORIZATION', 'Token token=hash')
post_config.add_header('Content-Type', 'application/json')
post_data = json.dumps(post_data)
print post_data
>>{"type": "numeric", "instance_id": "e0140", "name": "name0140", "uid": "970ebb1b2549b4dd5254", "instance_type": "Recommended", "power": "high"}
send = urllib2.urlopen(post_config, post_data)
Results in:
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 558, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 422: Unprocessable Entity
Answer: Try using the requests module instead and see if you're still getting the same
response.
# Snippet example on how to POST the payload to a REST endpoint
import requests
import json
url = 'http://example.com/endpoint'
headers = {'Content-type': 'application/json'}
data = json.dumps(post_data)
print(data)
r = requests.post(url, data=data, headers=headers)
print(r.status_code)
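Note that the original request also sent an Authorization header; assuming the same token scheme, you would include it in the headers dict as well:
headers = {
    'Content-type': 'application/json',
    'AUTHORIZATION': 'Token token=hash',  # same token header as in the urllib2 version
}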
|
How to parse a key value nested in another key whose name is not known, in Python?
Question: I'd like to access the value of the `extract` key that is nested in the `pages` key:
{
    "batchcomplete": "",
    "query": {
        "normalized": [
            {
                "from": "sample",
                "to": "Sample"
            }
        ],
        "pages": {
            "23895873": {
                "pageid": 23895873,
                "ns": 0,
                "title": "Sample",
                "extract": "<p><b>Sample</b> or <b>samples</b> may refer to:</p>\n<p></p>\n"
            }
        }
    }
}
I am creating a Wikipedia bot that will print the summary (the value of the key
`"extract"`). But the problem is that the `"pageid"` value keeps changing
with the search result. How can I do this?
I tried using json:
import json
import requests
wikiReq = requests.get("https://en.wikipedia.org/w/api.php?action=query&prop=extracts&exintro=&titles=sample&format=json")
jsonResult = wikiReq.json()
result = jsonResult["query"]["pages"][""]["extract"]
print(json.dumps(result , indent = 4))
Answer: You can do
for i in jsonResult["query"]["pages"]:
    result = jsonResult["query"]["pages"][i]["extract"]
Assuming there is just one item in there, it will always work.
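As a variant of the same idea, you can pull the single page entry out without keeping a loop variable around; a minimal sketch, assuming the response always contains exactly one page:
page = next(iter(jsonResult["query"]["pages"].values()))
print(page["extract"])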
|
How to use sort_index() on a dataframe?
Question: I loaded a JSON file into a dataframe using the Spark SQLContext. It stores tweets
from different users and looks like the output below. I am using the pandas library in Python
to explore the data in this dataframe.
import pandas as pd
tweets = pd.read_json('/filepath')
sqlcontext = SQLContext(sc)
tweet_sdf = sqlcontext.createDataFrame(tweets)
tweet_sdf.show(10)
+-------------+------------------+-------------+--------------------+-------------------+
| country| id| place| text| user|
+-------------+------------------+-------------+--------------------+-------------------+
| India|572692378957430784| Orissa|@always_nidhi @Yo...| Srkian_nishu :)|
|United States|572575240615796736| Manhattan|@OnlyDancers Bell...| TagineDiningGlobal|
|United States|572575243883036672| Claremont|1/ "Without the a...| Daniel Beer|
|United States|572575252020109312| Vienna|idk why people ha...| someone actually|
|United States|572575274539356160| Boston|Taste of Iceland!...| BostonAttitude|
|United States|572647819401670656| Suwanee|Know what you don...|Collin A. Zimmerman|
| Indonesia|572647831053312000| Mario Riawa|Serasi ade haha @...| Rinie Syamsuddin|
| Indonesia|572647839521767424|Bogor Selatan|Akhirnya bisa jug...| Vinny Sylvia|
|United States|572647841220337664| Norwalk|@BeezyDH_ it's li...| Cas|
|United States|572647842277396480| Santee| obsessed with music| kimo|
+-------------+------------------+-------------+--------------------+-------------------+
only showing top 10 rows
tweet_sdf.printSchema()
root
|-- country: string (nullable = true)
|-- id: long (nullable = true)
|-- place: string (nullable = true)
|-- text: string (nullable = true)
|-- user: string (nullable = true)
I am trying to sort the dataframe on the 'id' column using the line below.
tweet_sdf.sort_index(by='id', ascending=False, inplace=True)
But I receive the attribute error shown below: AttributeError:
'DataFrame' object has no attribute 'sort_index'
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-106-6cd99444a12a> in <module>()
----> 1 tweet_sdf.sort_index(by='id', ascending=False, inplace=True)
/home/notebook/spark-1.6.0-bin-hadoop2.6/python/pyspark/sql/dataframe.pyc in __getattr__(self, name)
837 if name not in self.columns:
838 raise AttributeError(
--> 839 "'%s' object has no attribute '%s'" % (self.__class__.__name__, name))
840 jc = self._jdf.apply(name)
841 return Column(jc)
AttributeError: 'DataFrame' object has no attribute 'sort_index'
The pandas version is 0.18.0 and the Python version is 2.7.11. Can someone help me
understand why this is behaving this way?
Answer: I think you can use [`sort_values`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html), because you need to
sort by the column `id`.
print tweet_sdf
country id place text \
0 India 572692378957430784 Orissa @always_nidhi@Yo
1 United States 572575240615796736 Manhattan @OnlyDancers Bell
2 United States 572575243883036672 Claremont 1/ "Without the a
3 United States 572575252020109312 Vienna idk why people ha
4 United States 572575274539356160 Boston Taste of Iceland!
5 United States 572647819401670656 Suwanee Know what you don
6 Indonesia 572647831053312000 Mario Riawa Serasi ade haha @
7 Indonesia 572647839521767424 Bogor Selatan Akhirnya bisa jug
8 United States 572647841220337664 Norwalk @BeezyDH_ it's li
9 United States 572647842277396480 Santee obsessed with music
user
0 Srkian_nishu :)
1 TagineDiningGlobal
2 Daniel Beer
3 someone actually
4 BostonAttitude
5 Collin A Zimmerman
6 Rinie Syamsuddin
7 Vinny Sylvia
8 Cas
9 kimo
tweet_sdf.sort_values(by='id', ascending=False, inplace=True)
print tweet_sdf
country id place text \
0 India 572692378957430784 Orissa @always_nidhi@Yo
9 United States 572647842277396480 Santee obsessed with music
8 United States 572647841220337664 Norwalk @BeezyDH_ it's li
7 Indonesia 572647839521767424 Bogor Selatan Akhirnya bisa jug
6 Indonesia 572647831053312000 Mario Riawa Serasi ade haha @
5 United States 572647819401670656 Suwanee Know what you don
4 United States 572575274539356160 Boston Taste of Iceland!
3 United States 572575252020109312 Vienna idk why people ha
2 United States 572575243883036672 Claremont 1/ "Without the a
1 United States 572575240615796736 Manhattan @OnlyDancers Bell
user
0 Srkian_nishu :)
9 kimo
8 Cas
7 Vinny Sylvia
6 Rinie Syamsuddin
5 Collin A Zimmerman
4 BostonAttitude
3 someone actually
2 Daniel Beer
1 TagineDiningGlobal
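Also note that `tweet_sdf` in the question is a Spark DataFrame, not a pandas one, which is why it has no `sort_index`/`sort_values` at all. If you want to sort the Spark DataFrame itself rather than the pandas `tweets` frame, a sketch using `orderBy` (assuming the same column name) would be:
from pyspark.sql import functions as F
# sort the Spark DataFrame by id, descending, and show the first 10 rows
tweet_sdf.orderBy(F.col('id').desc()).show(10)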
|
Python: Getting specific list elements
Question: So I made a list of elements from an HTML page and counted the frequency of
these elements. But I just need some specific elements like "bb" and "nw", and
I don't know what position they'll have in the list, so I'm not sure how to
separate them from the other elements.
This is my code so far:
from bs4 import BeautifulSoup
import urllib2
import re
import operator
from collections import Counter
from string import punctuation
source_code = urllib2.urlopen('https://de.wikipedia.org/wiki/Liste_von_Angriffen_auf_Fl%C3%BCchtlinge_und_Fl%C3%BCchtlingsunterk%C3%BCnfte_in_Deutschland/bis_2014')
html = source_code.read()
soup = BeautifulSoup(html, "html.parser")
text = (''.join(s.findAll(text=True))for s in soup.findAll('a'))
c = Counter((x.rstrip(punctuation).lower() for y in text for x in y.split()))
bb,nw=operator.itemgetter(1,2)(c.most_common())
print(bb,nw)
Thank you for your help and any hints.
Answer: You could use a filter:
relevant_items = ('bb', 'nw')
items = filter(lambda x: x[0] in relevant_items, c.most_common())
Alternatively, you can already filter in the comprehension:
c = Counter((x.rstrip(punctuation).lower() for y in text for x in y.split() if x.rstrip(punctuation).lower() in relevant_items))
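Since a `Counter` is a dict, you can also read the two counts you care about directly from the unfiltered counter `c` built in the question (missing keys simply count as 0):
bb, nw = c['bb'], c['nw']
print(bb, nw)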
|
variable in python dictionary for http header construction
Question: I'm new to Python, and I'm trying to use a variable within a dictionary that is
used to construct an HTTP header.
This is what I have:
import requests
url = "https://sample.com"
auth = "sampleauthtoken"
headers = {
'authorization': "Bearer "<VARIABLE auth HERE>,
'cache-control': "no-cache"
}
response = requests.request("GET", url, headers=headers)
print(response.text)
I have tried a few different combinations with no luck
Answer: If I understand you correctly, you just want to concatenate the strings using
the `+` operator:
import requests
url = "https://sample.com"
auth = "sampleauthtoken"
headers = {
'authorization': "Bearer " + auth, # -> "Bearer sampleauthtoken"
'cache-control': "no-cache"
}
response = requests.request("GET", url, headers=headers)
print(response.text)
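If you prefer not to concatenate, string formatting works just as well for building the header value; for example:
headers = {
    'authorization': 'Bearer {}'.format(auth),
    'cache-control': 'no-cache',
}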
|
Retrieving structs from .so files in Python
Question: I am attempting to write a .so library wrapper for an existing C source code
project, and then call the functions in the .so library from Python. I have
been able to call functions with primitive arguments and return types with no
problem, so I am now working on interfacing with more complex functions that
have arguments that are pointers to structures.
My problem is in creating the structures on the Python side so that I can call
the C-library functions. Some of the structs in the .so library have hundreds
of fields, so I was hoping there was an easier alternative to spelling out all
the fields and types in a Python ctypes `Structure` object.
I would like to be able to write something like this in Python:
from ctypes import *
lib = cdll.LoadLibrary("./libexample.so")
class Input(Structure):
    _fields_ = lib.example_struct._fields ## where `example_struct` is defined in the .so library
    ## I have no idea if you can actually get the fields of the struct!!
my_input = Input(a,b,c,...) ## pseudo-code
my_ptr = pointer(my_input) ## wrap the input with a pointer
result = lib.my_lib_func(my_ptr) ## call .so function with struct
This would allow me to easily replicate at least the structure definitions of
the large C structs without having to create and maintain lengthy Python
versions of the struct definitions. Is this possible? Or is there another way
to achieve the same effect?
EDIT: The C source code is third party, so for now, I am looking for an
approach where I don't have to modify the C source.
Answer: The Cython approach is to read and interpret the .h header file, though I would not
say it is easy.
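To illustrate what that Cython route looks like: you declare the struct once against the header in a `.pyx` file and Cython generates the glue. This is only a sketch under the assumption of a hypothetical `example.h` with a struct named `example_struct`; the real field list still has to be transcribed (or trimmed to the fields you actually use):
# wrapper.pyx -- Cython sketch; compile with cythonize, linking against libexample.so
cdef extern from "example.h":
    ctypedef struct example_struct:
        int a
        double b
        # ... remaining fields, or only the ones you need ...
    int my_lib_func(example_struct *inp)

def call_my_lib_func(int a, double b):
    cdef example_struct s
    s.a = a
    s.b = b
    return my_lib_func(&s)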
|
Selenium webdriver with python to scrape dynamic page cannot find element
Question: So there are a lot of questions that have been asked around dynamic content
scraping on stackoverflow, and I went through all of these, but all the
solutions suggested did not work for the following problem:
## Context:
* Using Selenium webdriver with python
* I mostly used this resource: <http://selenium-python.readthedocs.org/page-objects.html> regarding the Python.org example.
* **Page to scrape:** <http://propertymap.sfplanning.org/>
## Issue:
I have not been able to access any of the DOM elements on this page. Note if I
could get some hints on how to access the search bar, and the search button,
that would be a great start. [See page to
scrape](http://i.stack.imgur.com/WOipY.png) What I want in the end, is to go
through a list of addresses, launch the search, and copy the information
displayed on the right hand side of the screen.
I have tried the following:
* Changed the browser for webdriver (from Chrome to Firefox)
* Added waiting time for the page to load
try:
WebDriverWait(self.driver, 10).until(EC.presence_of_element_located((By.ID, "addressInput")))
except:
print "address input not found"
* Tried to access the item by ID, XPATH, NAME, TAG NAME, etc., nothing worked.
**Questions**
* What else could I try that I have not so far (using Selenium webdriver)?
* Are some websites really impossible to scrape? (I don't think the city used an algorithm to generate a random DOM every time I reload the page.)
Answer: You can use this url `http://50.17.237.182/PIM/` to get the source:
In [73]: from selenium import webdriver
In [74]: dr = webdriver.PhantomJS()
In [75]: dr.get("http://50.17.237.182/PIM/")
In [76]: print(dr.find_element_by_id("addressInput"))
<selenium.webdriver.remote.webelement.WebElement object at 0x7f4d21c80950>
If you look at the source returned, there is a frame attribute with that src
url:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>San Francisco Property Information Map </title>
<META name="description" content="Public access to useful property information and resources at the click of a mouse"><META name="keywords" content="san francisco, property, information, map, public, zoning, preservation, projects, permits, complaints, appeals">
</head>
<frameset rows="100%,*" border="0">
<frame src="http://50.17.237.182/PIM" frameborder="0" />
<frame frameborder="0" noresize />
</frameset>
<!-- pageok -->
<!-- 02 -->
<!-- -->
</html>
Thanks to @Alecxe, the simplest method is to use `dr.switch_to.frame(0)`:
In [77]: dr = webdriver.PhantomJS()
In [78]: dr.get("http://propertymap.sfplanning.org/")
In [79]: dr.switch_to.frame(0)
In [80]: print(dr.find_element_by_id("addressInput"))
<selenium.webdriver.remote.webelement.WebElement object at 0x7f4d21c80190>
If you visit `http://50.17.237.182/PIM/` in your browser, you will see exactly
the same as `propertymap.sfplanning.org/`, the only difference is you have
full access to the elements using the former.
If you want to input a value and click the search box, it is something like:
from selenium import webdriver
dr = webdriver.PhantomJS()
dr.get("http://propertymap.sfplanning.org/")
dr.switch_to.frame(0)
dr.find_element_by_id("addressInput").send_keys("whatever")
dr.find_element_by_xpath("//input[@title='Search button']").click()
But if you want to pull data, you may find querying using the url an easier
option, you will get some json back from the query.
[](http://i.stack.imgur.com/phbsp.png)
|
Error setting up Django registration
Question: So, I'm trying to set up `registration` and I keep getting the error
> Unhandled exception in thread started by
>
> Traceback (most recent call last):
>
> File "//anaconda/envs/hellovenv/lib/python2.7/site-
> packages/django/utils/autoreload.py", line 226, in wrapper fn(*args,
> **kwargs)
>
> File "//anaconda/envs/hellovenv/lib/python2.7/site-
> packages/django/core/management/commands/runserver.py", line 109, in
> inner_run autoreload.raise_last_exception()
>
> File "//anaconda/envs/hellovenv/lib/python2.7/site-
> packages/django/utils/autoreload.py", line 249, in raise_last_exception
> six.reraise(*_exception)
>
> File "//anaconda/envs/hellovenv/lib/python2.7/site-
> packages/django/utils/autoreload.py", line 226, in wrapper fn(*args,
> **kwargs)
>
> File "//anaconda/envs/hellovenv/lib/python2.7/site-
> packages/django/__init__.py", line 18, in setup
> apps.populate(settings.INSTALLED_APPS)
>
> File "//anaconda/envs/hellovenv/lib/python2.7/site-
> packages/django/apps/registry.py", line 115, in populate app_config.ready()
>
> File "//anaconda/envs/hellovenv/lib/python2.7/site-
> packages/django/contrib/admin/apps.py", line 22, in ready
> self.module.autodiscover()
>
> File "//anaconda/envs/hellovenv/lib/python2.7/site-
> packages/django/contrib/admin/__init__.py", line 26, in autodiscover
> autodiscover_modules('admin', register_to=site)
>
> File "//anaconda/envs/hellovenv/lib/python2.7/site-
> packages/django/utils/module_loading.py", line 50, in autodiscover_modules
> import_module('%s.%s' % (app_config.name, module_to_search))
>
> File "//anaconda/envs/hellovenv/lib/python2.7/importlib/**init**.py", line
> 37, in import_module **import**(name)
>
> File "//anaconda/envs/hellovenv/lib/python2.7/site-
> packages/registration/admin.py", line 2, in from django.contrib.sites.models
> import RequestSite
>
> File "//anaconda/envs/hellovenv/lib/python2.7/site-
> packages/django/contrib/sites/models.py", line 83, in class
> Site(models.Model):
>
> File "//anaconda/envs/hellovenv/lib/python2.7/site-
> packages/django/db/models/base.py", line 102, in __new__ "INSTALLED_APPS." %
> (module, name)
>
> RuntimeError: Model class django.contrib.sites.models.Site doesn't declare
> an explicit app_label and isn't in an application in INSTALLED_APPS.
after running
pip install django-registration-redux==1.1
and my `INSTALLED_APPS` are
INSTALLED_APPS = [
'collection', # this is the app we added
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.humanize',
'registration',
]
Not being super familiar with Python tracebacks, I'm not sure which files to
modify to fix this.
Thanks.
Answer: from the [docs](https://django-registration-redux.readthedocs.org/en/latest/quickstart.html#settings),
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.sites',
'registration',
# ...other installed applications...
)
`django.contrib.sites` seems to be omitted in your `INSTALLED_APPS`.
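After adding `'django.contrib.sites'`, you typically also need a `SITE_ID` setting and a migration for the newly added app; a minimal sketch (1 is the usual default site):
# settings.py
SITE_ID = 1
and then run the migrations:
python manage.py migrate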
|
pip installing to wrong folder even though `which pip` is correct
Question: I'm using Mac OS X 10.10. I want to use pip to install packages for my
homebrew installed version of python (located in `/usr/local/bin/python`,
which is an alias that points to
`/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/bin`).
It appears that site-packages for this version are here:
`/usr/local/lib/python2.7/site-packages/`.
`which python` returns `/usr/local/bin/python`
`which pip` returns `/usr/local/bin/pip`
These seem correct to me.
Trying something like `pip install pylzma` returns:
Collecting pylzma
Installing collected packages: pylzma
Successfully installed pylzma
You are using pip version 8.0.2, however version 8.1.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
But then `pip list` does not show `pylzma` to be installed. It looks like pip
installs the packages to
`/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-
packages` (the python that ships with Mac OS X).
How can I get pip to install to my homebrewed python?
I've tried a number of suggestions from similar questions:
1. I've tried `export PATH=/usr/local/bin/python:${PATH}`.
2. I've tried `pip install --install-option="--prefix=/usr/local/lib/python2.7" pylzma`.
3. I've tried changing the first line of the pip executable script to `#!/usr/local/bin/python`
4. I've tried `/usr/local/bin/python -m pip install pylzma`.
But none of these work. I also tried upgrading pip to 8.1.1, but that made pip
break entirely. People recommend using `virtualenv`, but as far as I know, I
can't install that without pip.
When I type `python -m pip`, it says:
Usage:
/usr/local/opt/python/bin/python2.7 -m pip <command> [options]
Could that be a problem?
Answer: My issue was that my `/Users/<username>/.pydistutils.cfg` contained the
following:
[easy_install]
# set the default location to install packages
install_dir = /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages
[install]
install_lib = /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages
install_scripts = ~/bin
I changed this to:
[easy_install]
# set the default location to install packages
install_dir = /usr/local/lib/python2.7/site-packages
[install]
install_lib = /usr/local/lib/python2.7/site-packages
install_scripts = ~/bin
That seemed to have worked. `pip install` now installs packages to the desired
location `/usr/local/lib/python2.7/site-packages`.
However, I am having ongoing path issues.
`import pylzma` still gives me `ImportError: No module named pylzma`.
and running `jupyter notebook` in terminal gives `-bash: jupyter: command not
found`. `/Users/<username>/bin/jupyter notebook` does execute, but I get
`ImportError: No module named markupsafe` despite the fact that
`/usr/local/lib/python2.7/site-packages/MarkupSafe-0.23.dist-info` exists.
EDIT: I got jupyter notebook working eventually. I had to install several
packages from the source tarballs directly, including MarkupSafe, functools32,
and jsonschema. Maybe Python is not looking in the correct folder or
something.
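For the remaining import problems, a quick way to see which interpreter is actually running and whether the homebrew site-packages directory is on its module search path is:
import sys
print(sys.executable)          # which python binary is running
print('\n'.join(sys.path))     # where it will look for modules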
|
Kivy Android Scikitlearn
Question: I'm trying to build a test Android application based on Kivy and Scikitlearn.
To build the apk I use buildozer. The code of the app is the basic Hello world
example. In the buildozer.spec file I add scikit-learn as an external library:
[app]
title = MyTest
package.name = kivycrash2
package.domain = org.test
source.dir = .
source.include_exts = py,png,jpg,kv,atlas
version = 0.1
requirements = kivy, numpy, scikit-learn
orientation = landscape
When I try to build the apk, I get the following error:
ImportError: Numerical Python (NumPy) is not installed.
scikit-learn requires NumPy >= 1.6.1.
I have also tried to put sklearn instead of scikit-learn with no success. Is
it possible to satisfy the Numpy required version?
Thank you.
Answer: The problem here is that scikit-learn is searching for a numpy installation
during its own setup, but it doesn't know to look in the android build
directory where numpy has been built for the android architecture. I'd try
creating a python-for-android recipe for scikit-learn (rather than your
current method which tries to use pip) that either patches it to override the
numpy check, or possibly tries to let it detect numpy correctly by adding the
right directory to the PYTHONPATH - but I'm not sure if that would work, it
depends on what it tries to import during installation.
There are also other possibilities, e.g. the new python-for-android toolchain
possibly has a way to let setup.py find numpy automatically, but this doubles
the numpy build time and still requires that you make a recipe.
|
Python: how to make python calculate a sum to make sure an input is correct?
Question: I want Python to ask 10 questions and have the user input their answers, which
works. However, I also want Python to say whether the answer is correct or not using the
code below, but that does not work and it only moves on to the next question. Could anybody
tell me why, or what I need to change? Also, how do I make it ask exactly 10 questions
using the variables I have and a while loop?
import time
import random
question = 0
score = 0
name = input("What is your full name?")
print ("Hello " + name, "welcome to The Arithmetic Quiz")
time.sleep(2)
operand1 = list(range(2, 12))
operators = ["+"]
operand2 = list(range(2, 12))
while question < 10:
    user_answer=int(input(str(random.choice(operand1)) + random.choice(operators) + str(random.choice(operand2))))
    if operators=='+':
        expected_answer==operand1 + operand2
    if user_answer==expected_answer:
        print('This is correct!')
        score = score + 1
        question = question + 1
        time.sleep(2)
    else:
        print('This is incorrect!')
        question = question + 1
        time.sleep(2)
Answer: All of your comparisons in your `while` statement are being done against
`list`s instead of the randomly chosen element.
You likely want to do something like this:
operands1 = list(range(2, 12))
operators = ["+"]
operands2 = list(range(2, 12))
while question < 10:
    operand1 = random.choice(operands1)
    operand2 = random.choice(operands2)
    operator = random.choice(operators)
    user_answer = int(input('{} {} {} '.format(operand1, operator, operand2)))
    if operator == '+':
        expected_answer = operand1 + operand2
    if user_answer == expected_answer:
        print('This is correct!')
        score = score + 1
        question = question + 1
        time.sleep(2)
    else:
        print('This is incorrect!')
        question = question + 1
        time.sleep(2)
There are many other ways to improve the structure of the code, which might
make it look like this:
import operator as ops
import time
import random
NUM_QUESTIONS = 10
OPERANDS = list(range(2, 12))
OPERATORS = {'+': ops.add, '-': ops.sub, '*': ops.mul}
def getInteger(prompt, errormsg='Please input an integer'):
    while True:
        try:
            return int(input(prompt))
        except ValueError:
            print(errormsg)
def main():
    question = score = 0
    name = input('What is your full name? ')
    print('Hello {}, welcome to The Arithmetic Quiz'.format(name))
    time.sleep(2)
    for _ in range(NUM_QUESTIONS):
        operand1 = random.choice(OPERANDS)
        operand2 = random.choice(OPERANDS)
        operator = random.choice(list(OPERATORS))
        user_answer = getInteger('{} {} {} '.format(operand1, operator, operand2))
        expected_answer = OPERATORS[operator](operand1, operand2)
        if user_answer == expected_answer:
            print('This is correct!')
            score += 1
        else:
            print('This is incorrect!')
        time.sleep(2)
if __name__ == '__main__':
    main()
This uses a dedicated `getInteger` function to handle invalid input, uses a
dictionary and functions being first-class objects to choose which "actual"
operator function to use, uses `+=`, uses `range` and `for`, instead of a
`while` loop, uses sane constants...the list of possible improvements is
large.
|
Dryscrape/webkit_server memory leak
Question: I'm using dryscrape/webkit_server for scraping javascript enabled websites.
The memory usage of the process webkit_server seems to increase with each call
to session.visit(). It happens to me using the following script:
import dryscrape
for url in urls:
    session = dryscrape.Session()
    session.set_timeout(10)
    session.set_attribute('auto_load_images', False)
    session.visit(url)
    response = session.body()
I'm iterating over approx. 300 urls and after 70-80 urls webkit_server takes
up about 3GB of memory. However it is not really the memory that is the
problem for me, but it seems that dryscrape/webkit_server is getting slower
with each iteration. After the said 70-80 iterations dryscrape is so slow that
it raises a timeout error (set timeout = 10 sec) and I need to abort the
python script. Restarting the webkit_server (e.g. after every 30 iterations)
might help and would empty the memory, however I'm unsure if the 'memory
leaks' are really responsible for dry scrape getting slower and slower.
Does anyone know how to restart the webkit_server so I could test that?
I have not found an acceptable workaround for this issue, however I also don't
want to switch to another solution (selenium/phantomjs, ghost.py) as I simply
love dryscrape for its simplicity. Dryscrape is working great btw. if one is
not iterating over too many urls in one session.
This issue is also discussed here
<https://github.com/niklasb/dryscrape/issues/41>
and here
[Webkit_server (called from python's dryscrape) uses more and more memory with
each page visited. How do I reduce the memory
used?](http://stackoverflow.com/questions/32211733/webkit-server-called-from-
pythons-dryscrape-uses-more-and-more-memory-with-ea/32289828#32289828)
Answer: Hi,
Sorry for digging up this old post, but what I did to solve the issue (after
googling and only finding this post) was to run dryscrape in a separate
process and then kill Xvfb after each run.
So my dryscrape script is:
dryscrape.start_xvfb()
session = dryscrape.Session()
session.set_attribute('auto_load_images', False)
session.visit(sys.argv[1])
print session.body().encode('utf-8')
And to run it:
p = subprocess.Popen(["python", "dryscrape.py", url],
stdout=subprocess.PIPE)
result = p.stdout.read()
print "Killing all Xvfb"
os.system("sudo killall Xvfb")
I know it's not the best way, and the memory leak should be fixed, but this
works.
|
Calculate distance between two coordinates on a globe
Question: I get two coordinate pairs in the form `90°0′0″N 0°0′0″E` as strings and want
to calculate the distance between those points on a sphere with radius
R=6371km.
I found two formulas on the internet [here](http://www.movable-type.co.uk/scripts/latlong.html), the "haversine" and the "spherical law of
cosines", but they don't seem to work. For a 90° angle, which should return
`2*pi*R / 4`, the haversine is correct but the law of cosines fails and returns
0. A different point with more random coordinates returns wrong values with
both algorithms: the haversine is too high and the law of cosines too low.
Is my implementation wrong, or did I choose an incorrect algorithm?
How should I make these calculations (coordinate pairs to distance on globe
surface) instead?
(And yes, I know that I'm not checking for N/S and E/W yet, but the tested
coordinates are all in the north-eastern hemisphere.)
Here's my Python 3 code:
import math, re
R = 6371
PAT = r'(\d+)°(\d+)′(\d+)″([NSEW])'
def distance(first, second):
    def coords_to_rads(s):
        return [math.radians(int(d) +int(m)/60 +int(s)/3600) \
                for d, m, s, nswe in re.findall(PAT, s)]
    y1, x1 = coords_to_rads(first)
    y2, x2 = coords_to_rads(second)
    dx = x1 - x2
    dy = y1 - y2
    print("coord string:", first, "|", second)
    print("coord radians:", y1, x1, "|", y2, x2)
    print("x/y-distances:", dy, dx)
    a = math.sin(dx/2)**2 + math.cos(x1) * math.cos(x2) * math.sin(dy/2)**2
    c = 2 * math.atan2(math.sqrt(a), math.sqrt(1-a))
    haversine = R * c
    law_of_cosines = math.acos( math.sin(x1) * math.sin(x2) + \
                                math.cos(x1) * math.cos(x2) ) * R
    print("HS:", round(haversine, 2), "LOC:", round(law_of_cosines, 2))
    return haversine
    #return law_of_cosines
if __name__ == '__main__':
    def test(result, correct):
        print("result: ", result)
        print("correct:", correct)
    test(distance("90°0′0″N 0°0′0″E", "0°0′0″N, 0°0′0″E"), 10007.5)
    test(distance("51°28′48″N 0°0′0″E", "46°12′0″N, 6°9′0″E"), 739.2)
    test(distance("90°0′0″N 0°0′0″E", "90°0′0″S, 0°0′0″W"), 20015.1)
    test(distance("33°51′31″S, 151°12′51″E", "40°46′22″N 73°59′3″W"), 15990.2)
Here is some output:
coord string: 90°0′0″N 0°0′0″E | 0°0′0″N, 0°0′0″E
coord radians: 1.5707963267948966 0.0 | 0.0 0.0
x/y-distances: 1.5707963267948966 0.0
HS: 10007.54 LOC: 0.0
result: 10007.543398010286
correct: 10007.5
coord string: 51°28′48″N 0°0′0″E | 46°12′0″N, 6°9′0″E
coord radians: 0.8984954989266809 0.0 | 0.8063421144213803 0.10733774899765128
x/y-distances: 0.09215338450530064 -0.10733774899765128
HS: 900.57 LOC: 683.85
result: 900.5669567853056
correct: 739.2
Answer: It looks like you mixed up `x` and `y` in your calculation of `a`. You're
supposed to take the cosine of latitude (`y`), not longitude (`x`).
I discovered this by changing your `distance` to `angular_distance` (i.e.
don't multiply by `R`) and adding some additional tests:
test(angular_distance("90°0′0″N 0°0′0″E", "89°0′0″N, 0°0′0″E"), math.radians(1))
test(angular_distance("90°0′0″N 0°0′0″E", "80°0′0″N, 0°0′0″E"), math.radians(10))
test(angular_distance("90°0′0″N 0°0′0″E", "50°0′0″N, 0°0′0″E"), math.radians(40))
test(angular_distance("90°0′0″N 0°0′0″E", "50°0′0″N, 20°0′0″E"), math.radians(40))
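Applied to the code in the question, the fix amounts to swapping the roles of x and y in the haversine term (the latitudes y1/y2 go under the cosines, and the longitude difference dx goes into the last sine); a sketch of the corrected line:
a = math.sin(dy/2)**2 + math.cos(y1) * math.cos(y2) * math.sin(dx/2)**2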
|
Erasing screen with button
Question:
#Imported Pygame
import pygame
#The Colors
BLACK = ( 0, 0, 0)
GREEN = ( 0, 255, 0)
WHITE = ( 255, 255, 255)
RED = ( 255, 0, 0)
ORANGE = ( 255, 115, 0)
YELLOW = ( 242, 255, 0)
BROWN = ( 115, 87, 39)
PURPLE = ( 298, 0, 247)
GRAY = ( 168, 168, 168)
PINK = ( 255, 0, 234)
pygame.init()
#The Screen
screen = pygame.display.set_mode([1000,500])
#Name of the window
pygame.display.set_caption("My first game")
clock = pygame.time.Clock()
#The sounds
# Positions of graphics
background_position = [0,0]
singleplayer_position = [350, 200]
tutorial_position = [350,300]
sorry_position = [0,0]
developer_position = [0,450]
rules_position = [0,0]
#The graphics
background_image = pygame.image.load("Castle.png").convert()
singleplayer_image = pygame.image.load("SinglePlayer.png").convert()
singleplayer_image.set_colorkey(WHITE)
tutorial_button = pygame.image.load("Tutorial_button.png").convert()
sorry_message = pygame.image.load("Sorry.png").convert()
sorry_message.set_colorkey(WHITE)
developer_message = pygame.image.load("Developer.png").convert()
developer_message.set_colorkey(WHITE)
Rules_image = pygame.image.load("Rules.png").convert()
#Main Loop __________________________
done = False
while not done:
for event in pygame.event.get():
if event.type == pygame.QUIT:
done = True
# Copy of background or main menu
screen.blit(background_image, background_position)
#Copy of other images
mouse_pos = pygame.mouse.get_pos()
my_rect = pygame.Rect(350,200,393,75)
tutorial_rect = pygame.Rect(350,300,393,75)
screen.blit(singleplayer_image, singleplayer_position)
screen.blit(tutorial_button, tutorial_position)
screen.blit(developer_message, developer_position)
if pygame.mouse.get_pressed()[0] and my_rect.collidepoint(mouse_pos):
screen.blit(sorry_message, sorry_position)
correct = False
if pygame.mouse.get_pressed()[0] and my_rect.collidepoint(mouse_pos):
#Here I make the screen fill white
if python.mouse.get_pressed()[0]tutorial_rect.collidepoint(mouse.pos):
correct = True
if correct == True:
screen.blit(Rules_image, rules_position)
pygame.display.flip()
clock.tick(60)
#To quit game
pygame.quit()
This is basically my code... When I hit the single player button, I have it
make the area white, but it doesn't stay that way. When I click and hold the
singleplayer button it stays white, but when I release the mouse the screen goes
back to what it was. Is there any way I can just erase everything I did before
and start a new screen when I hit the Singleplayer button?
Ok, back to the answer you gave me: I structured my code like you said.
if pygame.mouse.get_pressed()[0] and my_rect.collidepoint(mouse_pos):
color_white = True
if color_white = True
screen.fill(WHITE)
This isn't working because it still doesn't make the screen stay white. I also
tried this:
if pygame.mouse.get_pressed()[0] and my_rect.collidepoint(mouse_pos):
color_white = True
if color_white = True
screen.fill(WHITE)
This also doesn't seem to work because it keeps saying color_white is
undefined.
Answer: Your confusion results from the while loop and how it behaves, so I'll explain
that to answer your question.
Quick note: if you are not using a pygame clock object with tick at the end of your
code, comment and I'll explain that at the end; it's important you do
this (<http://www.pygame.org/docs/ref/time.html>).
Okay, the problem: your picture is not remaining white after you click it. It
stays white if you hold the mouse down, but it goes away once you lift up. I
assume you want it to remain white even once you lift the mouse click.
Currently, your code colors the picture white inside of an if statement.
if pygame.mouse.get_pressed()[0] and my_rect.collidepoint(mouse_pos):
Review the docs on what .get_pressed() does. It returns True if the mouse
button is pressed. So, when you click it, it says True, if you are holding it
down, it says True. If you are not clicking or holding, its False. So it only
colors it white when the mouse is clicked or held down, since thats when its
told to do so. What makes it turn back to normal are your blits earlier in the
loop. Each loop, pygame makes the image normal (via blit) and colors the
picture white if your statement evaluates to True. This means whenever your if
statement is False, the picture remains normal.
To make it remain painted white, use a boolean.
if pygame.mouse.get_pressed()[0] and my_rect.collidepoint(mouse_pos):
    color_white = True
And then instead of putting the code to color the white inside the if
statement that now sets the boolean to true, make a new if statement before
your loop ends.
if color_white:
    # Code to make the screen white.
This way, it can remain white even while not holding it down. If you want to
make it back to normal with another click. You can expand your first if
statement.
if pygame.mouse.get_pressed()[0] and my_rect.collidepoint(mouse_pos):
    if color_white is True:
        color_white = False
    else:
        color_white = True
Which can be coded in a shorter fashion...
color_white = False if color_white == True else True
Edit: I wrote the previous code considering events. This code would work if
you were using the MOUSEBUTTONDOWN event to change the color. However, if you
want to use get_pressed(), you'll have to use a different mouse button. If you
only use left click, how should the program know whether to turn it off or on
with so many loops going by?
I'll rewrite the code with get_pressed in mind.
if pygame.mouse.get_pressed()[0] and my_rect.collidepoint(mouse_pos):
    color_white = True
if pygame.mouse.get_pressed()[1] and my_rect.collidepoint(mouse_pos): # Not sure if it should be 1 or 2 in get_pressed, but I'm assuming they represent the right click and middle mouse button. So you can use these to turn the screen back to normal.
    color_white = False
Edit2: Your color_white is undefined, because it doesn't get defined until
after the if statements in your code. So before you get a chance to click (and
define it), a loop runs and gets to
if color_white:
But color_white doesn't exist to the computer yet. To solve, define
color_white before your while loop.
color_white = False # Default to not color the screen white.
|
Python: logical comparing with columns in panda's dataframe
Question: I have a dataframe where I want to determine when the `ser_no` and `CTRY_NM`
are the same and differ. However, I want to be mindful of the `ser_no` changes
and not make a false and false return true or a false/true return false.
Consider the following dataframe:
import pandas as pd
df = pd.DataFrame({'ser_no': [1, 1, 1, 2, 2, 2, 2, 3, 3, 3],
'CTRY_NM': ['a', 'a', 'b', 'e', 'e', 'a', 'b', 'b', 'b', 'd']})
def check(key):
    return df[key] == df[key].shift(1)
match = check('ser_no') == check('CTRY_NM')
This returns:
[](http://i.stack.imgur.com/ykWJB.png)
However, at indices 4 and 8 we have serial number changes. Since each serial
number is a different machine, it doesn't make sense to have a logical
comparison at these locations. When `ser_no` changes, how can I insert `NaN`
instead of doing a logical comparison?
Answer: Is this what you want?
import numpy as np
def check(data, key):
    mask = data[key].shift(1) == data[key]
    mask.iloc[0] = np.nan
    return mask
df.groupby(by=['ser_no']).apply(lambda x: check(x, 'CTRY_NM'))
result
ser_no
1 0 NaN
1 1
2 0
2 3 NaN
4 1
5 0
6 0
3 7 NaN
8 1
9 0
Name: CTRY_NM, dtype: float64
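An alternative sketch without groupby, using `where` to blank out the comparison wherever the serial number changes (the first row also comes out as NaN):
same_ser = df['ser_no'] == df['ser_no'].shift(1)
match = (df['CTRY_NM'] == df['CTRY_NM'].shift(1)).where(same_ser)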
|
Getting last ten codepad.org posts with Easy Html Parser (EHP) python
Question: I have found a Python HTML parser that builds a DOM-like structure for HTML
sources; it seems easy to use and very fast. I'm trying to write a scraper for
codepad.org that retrieves the last ten posts from <http://codepad.org/recent>.
The EHP lib is at <https://github.com/iogf/ehp>. I have the code below, which is
working.
import requests
from ehp import Html
def catch_refs(data):
    html = Html()
    dom = html.feed(data)
    return [ind.attr['href']
            for ind in dom.find('a')
            if 'view' in ind.text()]
def retrieve_source(refs, dir):
    """
    Get the source code of the posts then save in a dir.
    """
    pass
if __name__ == '__main__':
    req = requests.get('http://codepad.org/recent')
    refs = catch_refs(req.text)
    retrieve_source(refs, '/tmp/')
    print refs
it outputs:
[u'http://codepad.org/aQGNiQ6t',
u'http://codepad.org/HMrG1q7t',
u'http://codepad.org/zGBMaKoZ', ...]
as expected, but I can't figure out how to download the source code of the
files.
Answer: Actually, your `retrieve_source(refs, dir)` doesn't do anything,
so you are not getting any result.
Update according to your comment:
import os
def get_code_snippet(page):
    dom = Html().feed(page)
    # getting all <div class=='highlight'>
    elements = [el for el in dom.find('div')
                if el.attr['class'] == 'highlight']
    return elements[1].text()
def retrieve_source(refs, dir):
    for i, ref in enumerate(refs):
        with open(os.path.join(dir, str(i) + '.html'), 'w') as r:
            r.write(get_code_snippet(requests.get(ref).content))
|
AttributeError: 'module' object has no attribute 'Screen'
Question: I'm trying to make a square with python. Here's the code:
import turtle
def draw_square():
    window = turtle.Screen()
    window.bgcolor("red")
    brad = turtle.Turtle()
    brad.shape("turtle")
    brad.color("yellow")
    brad.speed(2)
    brad.forward(100)
    brad.right(90)
    brad.forward(100)
    brad.right(90)
    brad.forward(100)
    brad.right(90)
    brad.forward(100)
    brad.right(90)
    window.exitonclick()
draw_square()
But I get this error:
File "C:/Python27\turtle.py", line 4, in draw_square
window = turtle.Screen()
AttributeError: 'module' object has no attribute 'Screen'
Answer: You called your file `turtle.py`, so you end up importing your own file instead
of the module. Rename it and remove the `.pyc` files (possibly in a
`__pycache__` folder) and you should be good to go.
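A quick way to confirm this kind of shadowing is to check which file actually got imported:
import turtle
print(turtle.__file__)  # should point into the standard library, not your own script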
|
Error using FigureCanvasQTAgg in MatplotlibWidget pyqt5
Question: I would like to plot on my GUI with PyQt5 using matplotlib. I have created a
class called MatplotlibWidget which creates the figure and canvas of my plot,
but I have a problem generating my canvas with FigureCanvasQTAgg
(which comes from matplotlib).
Here is the part of my code which is failing:
import matplotlib.pyplot as plt
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg
from matplotlib.figure import Figure
#Some more code...not relevant
class MatplotlibWidget(QWidget):
    def __init__(self):
        QWidget.__init__(self)
        self.fig = Figure()
        self.canvas = FigureCanvasQTAgg(self.fig) #line 86
        self.axis = self.fig.add_subplot(111)
        self.layoutVerticalTest = QVBoxLayout(self)
        self.layoutVerticalTest.addWidget(self.canvas)
I have this error:
File "/Users/AlexisTuil/Desktop/projet inno/sc_analysis/visualisation.py", line 86, in __init__
self.canvas = FigureCanvasQTAgg(self.fig)
File "/usr/local/lib/python3.5/site-packages/matplotlib/backends/backend_qt4agg.py", line 76, in __init__
FigureCanvasQT.__init__(self, figure)
File "/usr/local/lib/python3.5/site-packages/matplotlib/backends/backend_qt4.py", line 71, in __init__
QtWidgets.QWidget.__init__(self)
TypeError: __init__() missing 1 required positional argument: 'figure'
Abort trap: 6
I've searched many forums but I couldn't find a solution to my problem.
I don't get why there is a missing "positional argument". Please help me!
I am on MacOS El Capitan with Python 3.5 64-bit. I installed matplotlib with pip,
if that helps.
Thanks guys :)
Answer: I finally found out: when using PyQt5 I have to import the canvas from the Qt5
backend instead:
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg
How to customize button ok in Gtk.MessageDialog with CSS?
Question: My question today is about applying CSS to the default button in a Gtk.MessageDialog. I
tried a lot of things without result.
The goal is to find the right selector, such as buttons, GtkButton, or
GtkMessageDialog.Button, ...
#!/usr/bin/env python
# -*- coding: ISO-8859-1 -*-
#demo_messagedialog_css.py
from gi.repository import Gtk,Gdk
class show_message_dlg:
def __init__(self, message, type_message=Gtk.MessageType.INFO,stock_message=Gtk.STOCK_DIALOG_INFO, decorate=True):
"""
This Function is used to show an message
error dialog when an error occurs.
error_string - The error string that will be displayed on the dialog.
==>type_message=gtk.MESSAGE_ERROR for error message
==>type_message=gtk.MESSAGE_INFO for information message
==>type_message=gtk.MESSAGE_WARNING for warning message
GTK_WIN_POS_NONE
GTK_WIN_POS_CENTER equivalent in python to Gtk.WindowPosition.CENTER
GTK_WIN_POS_MOUSE equivalent in python to Gtk.WindowPosition.MOUSE
GTK_WIN_POS_CENTER_ALWAYS equivalent in python to Gtk.WindowPosition.CENTER_ALWAYS
GTK_WIN_POS_CENTER_ON_PARENT equivalent in python to Gtk.WindowPosition.CENTER_ON_PARENT
"""
self.message = message
self.message_dlg = Gtk.MessageDialog(type = type_message
, buttons = Gtk.ButtonsType.OK)
self.message_dlg.set_decorated(decorate)
self.message_dlg.set_markup(self.message)
self.message_dlg.set_position(Gtk.WindowPosition.CENTER_ON_PARENT )
style_provider = Gtk.CssProvider()
css = """
GtkMessageDialog
{ background:linear-gradient(to bottom, green, rgba(0,255,0,0));}
#Buttons{ background-color: yellow}
"""
style_provider.load_from_data(css)
Gtk.StyleContext.add_provider_for_screen(Gdk.Screen.get_default(),
style_provider,
Gtk.STYLE_PROVIDER_PRIORITY_APPLICATION)
def run(self):
reponse = self.message_dlg.run()
self.message_dlg.destroy()
return reponse
if __name__ == "__main__":
exemple = show_message_dlg(u"message in the box dialog ")
exemple.run()
Gtk.main()
After a few white nights and 150 black coffees I found a small part of the answer:
#!/usr/bin/env python
# -*- coding: ISO-8859-1 -*-
#demo_messagedialog_css1.py
from gi.repository import Gtk,Gdk
class MyButtonClass(Gtk.Button):
__gtype_name__ = 'MyButton'
def __init__(self, label):
Gtk.Button.__init__(self, label)
self.connect("clicked", self._clicked1)
def _clicked1(self, button):
print "button ok clicked"
class show_message_dlg:
def __init__(self, message, type_message=Gtk.MessageType.INFO,stock_message=Gtk.STOCK_DIALOG_INFO, decorate=True):
self.message = message
self.message_dlg = Gtk.MessageDialog(type = type_message)
self.message_dlg.set_decorated(decorate)
self.message_dlg.set_markup(self.message)
self.message_dlg.set_position(Gtk.WindowPosition.CENTER_ALWAYS )
button_v = MyButtonClass('Ok button')
self.message_dlg.add_action_widget(button_v, Gtk.ResponseType.OK)
self.message_dlg.set_default_response(Gtk.ResponseType.OK)
# line below necessary if button not defined like Gtk.MessageDialog(type = type_message, buttons = Gtk.ButtonsType.OK)
self.message_dlg.show_all()
style_provider = Gtk.CssProvider()
css = """
GtkMessageDialog
{ background:linear-gradient(to bottom, yellow, rgba(0,255,0,0));}
MyButton {
color: darkgrey;
font: Comic Sans 20;} /* run OK for label font */
MyButton GtkLabel{ background-color: blue} /* run OK for button background label*/
/*MyButton GtkLabel{ background:linear-gradient(to right, yellow, blue,yellow,green,red,orange);}*/ /* run OK for button background label*/
MyButton:active GtkLabel{ background-color: red;} /* do not run if state change */
"""
style_provider.load_from_data(css)
Gtk.StyleContext.add_provider_for_screen(Gdk.Screen.get_default(),
style_provider,
Gtk.STYLE_PROVIDER_PRIORITY_APPLICATION)
def run(self):
reponse = self.message_dlg.run()
self.message_dlg.destroy()
return reponse
if __name__ == "__main__":
exemple = show_message_dlg(u"message in the dialog box ")
response = exemple.run()
if response == Gtk.ResponseType.OK:
print("OK button clicked and end")
else:
print("destroyed")
Gtk.main()
Please note the special button class with the class attribute `__gtype_name__ =
'MyButton'`, and the CSS rules addressed by `MyButton {}`.
Now I don't understand why the part below does not work when I click on the button:
MyButton:active GtkLabel{ background-color: red;}
Answer: OK, I found a solution for the specific button problem:
MyButton:active GtkLabel{ background: red;}
My example:
#!/usr/bin/env python
# -*- coding: ISO-8859-1 -*-
#demo_messagedialog_css2.py
from gi.repository import Gtk,Gdk
class MyButtonClass(Gtk.Button):
__gtype_name__ = 'MyButton'
def __init__(self, label):
Gtk.Button.__init__(self, label)
self.connect("clicked", self._clicked1)
def _clicked1(self, button):
print "button ok clicked"
class show_message_dlg:
def __init__(self, message, type_message=Gtk.MessageType.INFO,stock_message=Gtk.STOCK_DIALOG_INFO, decorate=True):
self.message = message
self.message_dlg = Gtk.MessageDialog(type = type_message)
self.message_dlg.set_decorated(decorate)
self.message_dlg.set_markup(self.message)
self.message_dlg.set_position(Gtk.WindowPosition.CENTER_ALWAYS )
button_v = MyButtonClass('Ok button')
self.message_dlg.add_action_widget(button_v, Gtk.ResponseType.OK)
self.message_dlg.set_default_response(Gtk.ResponseType.OK)
# line below necessary if button not defined like Gtk.MessageDialog(type = type_message, buttons = Gtk.ButtonsType.OK)
self.message_dlg.show_all()
style_provider = Gtk.CssProvider()
css = """
GtkMessageDialog
{ background:linear-gradient(to bottom, yellow, rgba(0,255,0,0));}
MyButton {
color: darkgrey;
font: Comic Sans 20;}
MyButton:active { background: red;}
MyButton GtkLabel{ background:linear-gradient(to right, yellow, blue,yellow,green,red,orange);}
"""
style_provider.load_from_data(css)
Gtk.StyleContext.add_provider_for_screen(Gdk.Screen.get_default(),
style_provider,
Gtk.STYLE_PROVIDER_PRIORITY_APPLICATION)
def run(self):
reponse = self.message_dlg.run()
self.message_dlg.destroy()
return reponse
if __name__ == "__main__":
exemple = show_message_dlg(u"message in the dialog box ")
response = exemple.run()
if response == Gtk.ResponseType.OK:
print("OK button clicked and end")
else:
print("destroyed")
Gtk.main()
|
Opening a new window when we click on a button in wx python
Question: I am new to Python. I want to open a new window when I click on the OK button. I
have the following code but I am getting an error. I googled it and got a few
answers, but I didn't understand how to make it work.
import wx
class MyFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, -1, "My Frame", size=(3000, 3000))
        panel = wx.Panel(self,-1)
        #panel.Bind(wx.EVT_MOTION, self.OnMove)
        wx.StaticText(panel, -1, "What are the values of X", pos=(10, 12))
        #self.posCtrl = wx.TextCtrl(panel, -1, "", pos=(100, 10))
        wx.CheckBox(panel, -1, "Apples", (20,100), (160,-1))
        wx.CheckBox(panel, -1, "Mango", (20,150), (160,-1))
        wx.CheckBox(panel, -1, "Banana", (20,200), (160,-1))
        wx.CheckBox(panel, -1, "Orange", (20,250), (160,-1))
        button=wx.Button(panel,label="OK",pos=(800, 400), size = (50,50))
        self.Bind(wx.EVT_BUTTON, self.newwindow, button)
    # def OnMove(self, event):
    #     pos = event.GetPosition()
    #     self.posCtrl.SetValue("%s, %s" % (pos.x, pos.y))
    def newwindow(self, event):
        secondWindow = window2()
        secondWindow.Show()
class window2(wx.Frame):
    title = "new Window"
    def __init__(self,parent,id):
        wx.Frame.__init__(self, id,'Window2', size=(1000,700))
        panel=wx.Panel(self, -1)
        self.SetBackgroundColour(wx.Colour(100,100,100))
        self.Centre()
        self.Show()
app = wx.App(False)
frame = MyFrame()
frame.Show(True)
app.MainLoop()
The error that I am getting when I click on the OK button:
Traceback (most recent call last):
File "gui_quiz.txt", line 36, in newwindow
secondWindow = window2()
TypeError: __init__() takes exactly 3 arguments (1 given)
Answer: The answer is in the traceback. Your `window2` class's `__init__` function
requires `(self, parent, id)`. `self` is provided already (behind the scenes), and
that's the `1 given`.
So you will have to provide the other two parameters (`parent` and `id`). `parent`
here can just be `self` (recommended, if you want to close this frame from the
main frame, etc.) or `None` otherwise, and `id` can just be `-1` to leave it for
wx to set for you. If you plan to have many frames, then setting the id will
help you keep track of them.
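A minimal sketch of how the two pieces might look after that change (keeping the rest of the code as it is; note the frame constructor takes the parent first, then the id):
def newwindow(self, event):
    secondWindow = window2(self, -1)
    secondWindow.Show()

class window2(wx.Frame):
    def __init__(self, parent, id):
        wx.Frame.__init__(self, parent, id, 'Window2', size=(1000, 700))
        panel = wx.Panel(self, -1)
        self.SetBackgroundColour(wx.Colour(100, 100, 100))
        self.Centre()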
|
Python/PyQT: How can I truncate a text in QLineEdit
Question: I am looking for a solution to my problem. What did I do? I wrote a subclass
named _ExtendedTruncateTextLineEdit_ that inherits from QLineEdit. What do I
want? I want to **truncate the text** in a widget named **QLineEdit**
when you resize the window and the QLineEdit becomes smaller than its content.
The following code does work, but the QLineEdit widget looks like a QLabel.
What do I have to do so that the following code also draws my QLineEdit properly?
import sys
from PyQt4.QtCore import Qt
from PyQt4.QtGui import QApplication,\
QLineEdit,\
QLabel,\
QFontMetrics,\
QHBoxLayout,\
QVBoxLayout,\
QWidget,\
QIcon,\
QPushButton,\
QToolTip,\
QBrush,\
QColor,\
QFont,\
QPalette,\
QPainter
qt_app = QApplication(sys.argv)
class Example(QWidget):
def __init__(self):
QWidget.__init__(self)
self.setMinimumWidth(100)
self.init_ui()
def init_ui(self):
v_layout = QVBoxLayout()
v_layout.addStretch(1)
lbl = ExtendedTruncateTextLabel("This is a really, long and poorly formatted runon sentence used to illustrate a point", self)
#lbl.setText("This is a really, long and poorly formatted runon sentence used to illustrate a point")
lbl_1 = ExtendedTruncateTextLabel(self)
lbl_1.setText("Dies ist ein normaler Text")
l_text = ExtendedTruncateTextLineEdit()
l_text.setText("In the widget namend QLineEdit is also a very long text")
v_layout.addWidget(lbl)
v_layout.addWidget(lbl_1)
v_layout.addWidget(l_text)
self.setLayout(v_layout)
def run(self):
self.show()
qt_app.exec_()
class ExtendedTruncateTextLineEdit(QLineEdit):
def __init(self, parent):
QLineEdit.__init__(self, parent)
def paintEvent(self, event):
""" Handle the paint event for the title bar.
This paint handler draws the title bar text and title buttons.
"""
super(ExtendedTruncateTextLineEdit, self).paintEvent(event)
painter = QPainter(self)
metrics = QFontMetrics(self.font())
elided = metrics.elidedText(self.text(), Qt.ElideMiddle, self.width())
painter.drawText(self.rect(), self.alignment(), elided)
if __name__ == '__main__':
app = Example()
app.run()
Answer: Here is a crude version of what you probably want. I cooked this up in around
10 mins, so it is very crude. Note that this is in **no way anywhere near
complete**. It still has several things left to be done. In this case, the
text is elided when the `QLineEdit` loses focus. The text has to be restored
when it gains focus; that part is not yet implemented. Also, a change of fonts will
result in erroneous eliding, since the `QFontMetrics` object will not get
updated, etc., etc...
class ElidingLineEdit( QLineEdit ) :
"""Eliding text lineedit
"""
def __init__( self, text = QString(), parent = None ) :
"""Class initialiser
"""
QLineEdit.__init__( self, parent )
self.mText = text;
self.fm = QFontMetrics( self.font() )
self.textEdited[ QString ].connect( self.saveText )
self.editingFinished.connect( self.shortenText )
def setText( self, txt ) :
"""setText( QString ) -> None
Override the QLineEdit::setText to display the shortened text
@return None
"""
QLineEdit.setText( self, self.fm.elidedText( self.mText, Qt.ElideRight, self.width() ) )
def resizeEvent( self, rEvent ) :
"""resizeEvent( QResizeEvent ) -> None
Override the resizeevent to shorten the text
@return None
"""
QLineEdit.setText( self, self.fm.elidedText( self.mText, Qt.ElideRight, rEvent.size().width() ) )
rEvent.accept()
def saveText( self, newText ) :
"""saveText() -> None
Save the text as it is changing
@return None
"""
self.mText = newText
def shortenText( self ) :
"""saveText() -> None
Save the text as it is changing
@return None
"""
QLineEdit.setText( self, self.fm.elidedText( self.mText, Qt.ElideRight, self.width() ) )
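One of the missing pieces mentioned above, restoring the full text when the widget regains focus, could be sketched roughly like this (an untested addition to the class above):
    def focusInEvent( self, fEvent ) :
        """Restore the full, un-elided text so the user edits the real value."""
        QLineEdit.setText( self, self.mText )
        QLineEdit.focusInEvent( self, fEvent )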
|
Import package from PYPI
Question: I use the "pip install xxx" from PYPI (<https://pypi.python.org/pypi>). Then I
type "import xxx", it can import package without any problem.
However, when I uploaded my package to PYPI, Then I type "import xxx", it
cannot import package. It said "ImportError, no module named xxx".
I think it is because the package is not my current directory? If yes, how
should I do to avoid this problem when I uploaded my package to PYPI? Thanks.
Answer: Your package does not contain any (valid) Python packages. A Python package by
definition has to have an `__init__.py`. Just put an empty `__init__.py` inside
the `mypackagemx3292016` folder.
I would, however, suggest not using a package but rather just a single module.
A package works well when you need to group multiple modules together. A
[simple example from distutils
docs](https://docs.python.org/2/distutils/introduction.html#distutils-simple-
example) shows how to list individual modules.
In terms of installation you need to do exactly the same as with any other
package:
pip install mypackagemx3292016
If you want to avoid the hassle of having to do this every time you upload a
new version to PyPI, you can install your local copy in editable (development) mode:
pip install -e /path/to/mypackagemx3292016
If that does not work, there is probably a problem with your `setup.py`.
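For reference, a minimal single-module `setup.py` along the lines of the distutils example might look like this (a sketch; the module name `mymodule` is a placeholder):
    # setup.py -- minimal sketch for shipping a single module instead of a package
    from distutils.core import setup

    setup(
        name='mypackagemx3292016',
        version='0.1',
        py_modules=['mymodule'],  # mymodule.py sits next to setup.py
    )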
|
python script slow read and write gz files
Question: I have an xxx.wig.gz file that has 3,000,000,000 lines in the following format:
fixedStep chrom=chr1 start=1 step=1
0
0
0
0
0
1
2
3
4
5
6
7
8
9
10
...
fixedStep chrom=chr2 start=1 step=1
0
0
0
0
0
11
12
13
14
15
16
17
18
19
20
...
and I want to
1. break it down by "chrom". So every time I read a line that starts with "fixedStep", I create a new file and close the old one.
2. produce 0/1 output by comparing each value to a "threshold": pass=1, otherwise 0.
Below is my Python script, which runs super slow (I am projecting it to finish
in ~10 hours; so far 2 chromosomes are done after ~1 hour).
Can someone help me improve it?
#!/bin/env python
import gzip
import re
import os
import sys
fn = sys.argv[1]
f = gzip.open(fn)
fo_base = os.path.basename(fn).rstrip('.wig').rstrip('.wig.gz')
fo_ext = '.bt.gz'
thres = 100
fo = None
for l in f:
if l.startswith("fixedStep"):
if fo is not None:
fo.flush()
fo.close()
fon = re.search(r'chrom=(\w*)', l).group(0).split('=')[-1]
fo = gzip.open(fo_base + "_" + fon + fo_ext,'wb')
else:
if int(l.strip())>= thres:
fo.write("1\n")
else:
fo.write("0\n")
if fo is not None:
fo.flush()
fo.close()
f.close()
PS. I assume awk can do it much faster but I am not great with awk
Answer: Thanks Summer for editing the text.
I added buffered read/write to the script and now it is several times faster
(still relatively slow though):
import io
f = io.BufferedReader( gzip.open(fn) )
fo = io.BufferedWriter( gzip.open(fo_base + "." + fon + fo_ext,'wb') )
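Beyond wrapping the handles in buffered streams, another easy win is to avoid one `fo.write()` call per output line and instead batch the 0/1 results before writing. A rough sketch of that idea applied to the loop above (the batch size is an arbitrary choice):
    CHUNK = 100000          # arbitrary batch size
    chunk = []
    for l in f:
        if l.startswith("fixedStep"):
            if fo is not None:
                fo.write("".join(chunk))   # flush what belongs to the old chromosome
                chunk = []
                fo.close()
            fon = re.search(r'chrom=(\w*)', l).group(1)
            fo = io.BufferedWriter(gzip.open(fo_base + "_" + fon + fo_ext, 'wb'))
        else:
            chunk.append("1\n" if int(l) >= thres else "0\n")
            if len(chunk) >= CHUNK:
                fo.write("".join(chunk))
                chunk = []
    if fo is not None:
        fo.write("".join(chunk))
        fo.close()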
|
How to pass images to the model for classification in Tensorflow
Question: I have created a model using the code below:
# Deep Learning
# In[25]:
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
# In[37]:
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
print(test_labels)
# Reformat into a TensorFlow-friendly shape:
# - convolutions need the image data formatted as a cube (width by height by #channels)
# - labels as float 1-hot encodings.
# In[38]:
image_size = 28
num_labels = 10
num_channels = 1 # grayscale
import numpy as np
def reformat(dataset, labels):
dataset = dataset.reshape(
(-1, image_size, image_size, num_channels)).astype(np.float32)
#print(np.arange(num_labels))
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
#print(labels[0,:])
print(labels[0])
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
#print(labels[0])
# In[39]:
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
# Let's build a small network with two convolutional layers, followed by one fully connected layer. Convolutional networks are more expensive computationally, so we'll limit its depth and number of fully connected nodes.
# In[47]:
batch_size = 16
patch_size = 5
depth = 16
num_hidden = 64
graph = tf.Graph()
with graph.as_default():
# Input data.
tf_train_dataset = tf.placeholder(
tf.float32, shape=(batch_size, image_size, image_size, num_channels))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
layer1_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, num_channels, depth], stddev=0.1),name="layer1_weights")
layer1_biases = tf.Variable(tf.zeros([depth]),name = "layer1_biases")
layer2_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, depth, depth], stddev=0.1),name = "layer2_weights")
layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]),name ="layer2_biases")
layer3_weights = tf.Variable(tf.truncated_normal(
[image_size // 4 * image_size // 4 * depth, num_hidden], stddev=0.1),name="layer3_biases")
layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]),name = "layer3_biases")
layer4_weights = tf.Variable(tf.truncated_normal(
[num_hidden, num_labels], stddev=0.1),name = "layer4_weights")
layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]),name = "layer4_biases")
# Model.
def model(data):
conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME')
hidden = tf.nn.relu(conv + layer1_biases)
conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME')
hidden = tf.nn.relu(conv + layer2_biases)
shape = hidden.get_shape().as_list()
reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]])
hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)
return tf.matmul(hidden, layer4_weights) + layer4_biases
# Training computation.
logits = model(tf_train_dataset)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
test_prediction = tf.nn.softmax(model(tf_test_dataset))
# In[48]:
num_steps = 1001
#saver = tf.train.Saver()
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print('Initialized')
for step in range(num_steps):
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
batch_data = train_dataset[offset:(offset + batch_size), :, :, :]
batch_labels = train_labels[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 50 == 0):
print('Minibatch loss at step %d: %f' % (step, l))
print('Minibatch accuracy: %.1f%%' % accuracy(predictions, batch_labels))
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
save_path = tf.train.Saver().save(session, "/tmp/model.ckpt")
print("Model saved in file: %s" % save_path)
I have saved the model and wrote another Python program where I am trying to
restore the model and use it for classification of my images, but I am not
able to create the 4D tensor of the image that I have to pass as input
to the model.
The code of the Python file is as follows:
# In[8]:
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
from scipy import ndimage
# In[9]:
image_size = 28
num_labels = 10
num_channels = 1 # grayscale
import numpy as np
# In[10]:
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
# In[15]:
batch_size = 16
patch_size = 5
depth = 16
num_hidden = 64
pixel_depth =255
graph = tf.Graph()
with graph.as_default():
'''# Input data.
tf_train_dataset = tf.placeholder(
tf.float32, shape=(batch_size, image_size, image_size, num_channels))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
#tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)'''
tf_train_dataset = tf.placeholder(
tf.float32, shape=(batch_size, image_size, image_size, num_channels))
# Variables.
layer1_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, num_channels, depth], stddev=0.1),name="layer1_weights")
layer1_biases = tf.Variable(tf.zeros([depth]),name = "layer1_biases")
layer2_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, depth, depth], stddev=0.1),name = "layer2_weights")
layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]),name ="layer2_biases")
layer3_weights = tf.Variable(tf.truncated_normal(
[image_size // 4 * image_size // 4 * depth, num_hidden], stddev=0.1),name="layer3_biases")
layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]),name = "layer3_biases")
layer4_weights = tf.Variable(tf.truncated_normal(
[num_hidden, num_labels], stddev=0.1),name = "layer4_weights")
layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]),name = "layer4_biases")
saver = tf.train.Saver()
tf_
# Model.
def model(data):
conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME')
hidden = tf.nn.relu(conv + layer1_biases)
conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME')
hidden = tf.nn.relu(conv + layer2_biases)
shape = hidden.get_shape().as_list()
reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]])
hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)
return tf.matmul(hidden, layer4_weights) + layer4_biases
valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
#test_prediction = tf.nn.softmax(model(tf_test_dataset))
# In[19]:
with tf.Session(graph=graph) as sess:
# Restore variables from disk.
saver.restore(sess, "/tmp/model.ckpt")
print("Model restored.")
image_data = (ndimage.imread('notMNIST_small/A/QXJyaWJhQXJyaWJhU3RkLm90Zg==.png').astype(float) -
pixel_depth / 2) / pixel_depth
data = [0:,image_data:,]
sess.run(valid_prediction,feed_dict={tf_valid_dataset:data})
# Do some work with the model
As you can see in In[19], I have restored my model and want to pass an image to
the model by creating a 4D tensor. I am reading the image and then trying to
convert it to a 4D tensor, but the syntax for creating it is wrong in my code,
so I need help correcting it.
Answer: Assuming that `image_data` is a _grayscale_ image, it should be a 2-D NumPy
array. You can convert it to a 4-D array with the following:
data = image_data[np.newaxis, ..., np.newaxis]
The
[`np.newaxis`](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#numpy.newaxis)
adds a new dimension of size 1 in the first (batch size) and last (channels)
dimensions. It is equivalent to the following, using
[`np.expand_dims()`](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.expand_dims.html):
data = np.expand_dims(np.expand_dims(image_data, 0), -1)
On the other hand, if you are working with RGB data, you will need to convert
it to fit the model. You could for example define a placeholder for the image
input:
input_placeholder = tf.placeholder(tf.float32, shape=[None, image_size, image_size, 3])
input_grayscale = tf.image.rgb_to_grayscale(input_placeholder)
prediction = tf.nn.softmax(model(input_grayscale))
image_data = ... # Load from file
data = image_data[np.newaxis, ...] # Only add a batch dimension.
prediction_val = sess.run(prediction, feed_dict={input_placeholder: data})
|
Using Sacred Module with Python
Question: I am trying to set up `sacred` for Python and I am going through the tutorial
here - <http://sacred.readthedocs.org/en/latest/quickstart.html>. I was able
to set up sacred using `pip install sacred` with no issues. I am having
trouble running the basic code:
from sacred import Experiment
ex = Experiment("hello_world")
Running this code returns a ValueError:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-25-66f549cfb192> in <module>()
1 from sacred import Experiment
2
----> 3 ex = Experiment("hello_world")
/Users/ryandevera/anaconda/lib/python2.7/site-packages/sacred/experiment.pyc in __init__(self, name, ingredients)
42 super(Experiment, self).__init__(path=name,
43 ingredients=ingredients,
---> 44 _caller_globals=caller_globals)
45 self.default_command = ""
46 self.command(print_config, unobserved=True)
/Users/ryandevera/anaconda/lib/python2.7/site-packages/sacred/ingredient.pyc in __init__(self, path, ingredients, _caller_globals)
48 self.doc = _caller_globals.get('__doc__', "")
49 self.sources, self.dependencies = \
---> 50 gather_sources_and_dependencies(_caller_globals)
51
52 # =========================== Decorators ==================================
/Users/ryandevera/anaconda/lib/python2.7/site-packages/sacred/dependencies.pyc in gather_sources_and_dependencies(globs)
204 def gather_sources_and_dependencies(globs):
205 dependencies = set()
--> 206 main = Source.create(globs.get('__file__'))
207 sources = {main}
208 experiment_path = os.path.dirname(main.filename)
/Users/ryandevera/anaconda/lib/python2.7/site-packages/sacred/dependencies.pyc in create(filename)
61 if not filename or not os.path.exists(filename):
62 raise ValueError('invalid filename or file not found "{}"'
---> 63 .format(filename))
64
65 mainfile = get_py_file_if_possible(os.path.abspath(filename))
ValueError: invalid filename or file not found "None"
I am not sure why this error is returning. The documentation does not say
anything about setting up an Experiment file prior to running the code. Any
help would be greatly appreciated!
Answer: The traceback given indicates that the constructor for `Experiment` searches
its namespace to find the file in which it's defined.
Thus, to make the example work, place the example code into a file and run
that file directly.
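For example, saving the quickstart code into a script of its own (the filename here is arbitrary) and running it with `python hello.py` avoids the error:
    # hello.py
    from sacred import Experiment

    ex = Experiment("hello_world")

    @ex.automain
    def my_main():
        print("Hello world!")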
If you are using `ipython`, then you could always try using the `%%python`
command, which will effectively capture the code you give it into a file
before running it (in a separate python process).
|
Restarting Heroku Python script periodically
Question: I made a **Flask-Python** web application that scrapes the fixture of Real
Madrid and displays it in a neat countdown page. I am trying to host it via
**Heroku**. I moved the scraping part to the main Python script and passed the
scraped variables via **render_template**.
My question is: how does the Python script run on the Heroku servers? Is it
called when someone opens the webpage, or does it run only once and then serve
requests? If it is the latter, is there a way to restart the servers or rerun
the Python script periodically so that changes in the fixtures are reflected in the
webpage?
Here is my app.py
import requests
import datetime
from bs4 import BeautifulSoup as bs
from lxml import html
url = 'http://www.realmadrid.com/en/football/schedule'
response = requests.get(url)
html = response.content
soup = bs(html)
loc = soup.find('p', {'class': 'm_highlighted_next_game_location'}).contents
loc1 = loc[0]
if "Santiago" in loc1:
opp = soup.find('div',{'class':'m_highlighted_next_game_team m_highlighted_next_game_second_team'}).strong.contents
else:
opp = soup.find('div', {'class': 'm_highlighted_next_game_team'}).strong.contents
opp1=opp[0]
time = soup.find('div', {'class': 'm_highlighted_next_game_info_wrapper'}).time.contents
time1 = time[0]
date = soup.find('header', {'class': 'm_highlighted_next_game_header'}).time.contents
date1 = date[0]
times = time1.split(":")
dates = date1.split("/")
hour = times[0]
mintemp = times[1]
minutes = mintemp[:-2]
year = dates[0]
month = dates[1]
day = dates[2]
from flask import Flask, render_template
app = Flask(__name__)
@app.route('/')
def index():
return render_template('index.html',hour=hour,minutes=minutes,year=year,month=month,day=day,loc=loc1,opp=opp1)
if __name__ == '__main__':
app.run(debug=True)
P.S : I'm using Heroku for the first time. Please excuse if something sounds
stupid.
Answer: Heroku is considered 'lazy' and stops dynos if they have been idle for more
than 30 minutes (also to save power). However, as soon as a request reaches your app, it
will wake right up (it might take a couple of seconds). In your
case, the module-level Python code (including the scraping) is rerun whenever the
dyno starts up again to serve a request.
If you want to update fixtures periodically without making a request from your
website, check out [Heroku
Scheduler](https://devcenter.heroku.com/articles/scheduler), which lets you
schedule heroku tasks periodically. Keep in mind that you need to let the
heroku server sleep for at least 6 hours/day for the free version.
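Alternatively, if you want the fixture to be re-scraped on every page load rather than only when the dyno restarts, a simple option (a sketch based on the code above) is to move the scraping into a function and call it from the route:
    def scrape_fixture():
        # same scraping logic as in the question, just wrapped in a function
        # so it runs on every request instead of once at import time
        response = requests.get(url)
        soup = bs(response.content)
        # ... parse loc1, opp1, hour, minutes, year, month, day as in the question ...
        return hour, minutes, year, month, day, loc1, opp1

    @app.route('/')
    def index():
        hour, minutes, year, month, day, loc1, opp1 = scrape_fixture()
        return render_template('index.html', hour=hour, minutes=minutes, year=year,
                               month=month, day=day, loc=loc1, opp=opp1)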
Hope it helps!
|
Py 3: tkinter frames, coding inside or outside of frame classes
Question: So I'm creating a GUI mathematics program for school using Python 3 and
tkinter, where we have to ask the user which operation they want to use (add, subtract,
multiply, divide), then ask 10 questions, show a results page once all 10
questions have been answered, and then start over again in a loop.
I want to create 4 different difficulties:
Easy = range (1,9), Medium = range (10,24), Hard = range
(25,50), Insane = range (51,100).
I've created the GUI so far, as shown below, but I want to know whether I should
organize the working out of the equations inside the page classes or somehow
outside of the classes. I'm pretty new to Python/tkinter; I've never used it
before and have just learnt what I have so far from hours and hours of watching
YouTube.
All I want to know is where to go from here, i.e. where to place the equation
code/formula. Once I know where to put it, I'll be away.
(All the code below will work instantly if you copy and paste it into Python 3 and
save it. Any problems will be indentation issues from the copy and paste, due to how I
pasted it all into this post.)
Any issues you see below, please feel free to point out. As I said, I'm pretty
new, and Google didn't really help as I didn't quite know what to type in to find
it.
Kind regards
import tkinter as tk
from tkinter import *
from tkinter import ttk #css kind of thing for tkinter
import random
difficulty = []
LARGE_FONT = ("Times New Roman", 25)
MEDIUM_FONT = ("Times New Roman", 15)
SMALL_FONT = ("Times New Roman", 10)
###Base Code For Pages
class ourprogramclass(tk.Tk):
def __init__ (self, *args, **kwargs):
tk.Tk.__init__(self, *args, **kwargs)
tk.Tk.iconbitmap(self, default="mathsicon.ico")
tk.Tk.wm_title(self, "Mathematic Equation program")
container = tk.Frame(self)
container.pack(side="top", fill="both", expand=True)
container.grid_rowconfigure(0, weight=1)
container.grid_columnconfigure(0, weight=1)
self.frames = {}
for F in (StartPage, AdditionPage, SubtractionPage, MultiplicationPage, DivisionPage ):
frame = F(container, self)
self.frames[F] = frame
frame.grid(row=0, column=0, sticky="nsew")
self.show_frame(StartPage)
def show_frame(self, cont):
frame = self.frames[cont]
frame.tkraise()
###Page Classes front page
class StartPage(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self,parent)
label1 = tk.Label(self, text = "Mathmatics Problems Quiz", font=LARGE_FONT)
label2 = tk.Label(self, text = "Mathematic Equation program", font=MEDIUM_FONT)
label3 = tk.Label(self, text = "Select Your Operation and Difficulty Level", font=SMALL_FONT)
label1.pack(pady=10,padx=10)
label2.pack(pady=10,padx=10)
label3.pack(pady=10,padx=10)
button1 = tk.Button(self, text = "Addition Equations", font=MEDIUM_FONT,command=lambda: controller.show_frame(AdditionPage)).pack(fill=X)
label = Label(self,text="").pack()
button2 = tk.Button(self, text = "Subtraction Equations", font=MEDIUM_FONT,command=lambda: controller.show_frame(SubtractionPage)).pack(fill=X)
label = Label(self,text="").pack()
button3 = tk.Button(self, text = "Multiplication Equations", font=MEDIUM_FONT,command=lambda: controller.show_frame(MultiplicationPage)).pack(fill=X)
label = Label(self,text="").pack()
button4 = tk.Button(self, text = "Division Equations", font=MEDIUM_FONT,command=lambda: controller.show_frame(DivisionPage)).pack(fill=X)
label = Label(self,text="").pack()
label4 = tk.Label(self, text = "Select Difficulty", font=LARGE_FONT).pack()
def checkbutton_value1():
if (var1.get()):
var2.set(0)
var3.set(0)
var4.set(0)
del difficulty[:]
difficulty.append(1)
print (difficulty[0])
def checkbutton_value2():
if(var2.get()):
var1.set(0)
var3.set(0)
var4.set(0)
del difficulty[:]
difficulty.append(2)
print (difficulty[0])
def checkbutton_value3():
if(var3.get()):
var1.set(0)
var2.set(0)
var4.set(0)
del difficulty[:]
difficulty.append(3)
print (difficulty[0])
def checkbutton_value4():
if(var4.get()):
var1.set(0)
var2.set(0)
var3.set(0)
del difficulty[:]
difficulty.append(4)
print (difficulty[0])
var1 = IntVar()
dif_button1 = tk.Checkbutton(self, text="Easy", variable=var1, command=checkbutton_value1).pack()
var2 = IntVar()
dif_button2 = tk.Checkbutton(self, text="Medium", variable=var2, command=checkbutton_value2).pack()
var3 = IntVar()
dif_button3 = tk.Checkbutton(self, text="Hard ", variable=var3, command=checkbutton_value3).pack()
var4 = IntVar()
dif_button4 = tk.Checkbutton(self, text="Insane", variable=var4, command=checkbutton_value4).pack()
quit_button = tk.Button(self, text='Quit', command=quit, font=MEDIUM_FONT).pack(fill=X, side = BOTTOM)
###Addition Page
class AdditionPage(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self,parent)
label1 = tk.Label(self, text = "Mathmatics Problems Quiz", font=LARGE_FONT).pack(pady=10,padx=10)
label2 = tk.Label(self, text = "Mathematic Equation program", font=MEDIUM_FONT).pack(pady=10,padx=10)
label3 = tk.Label(self, text = "You have Selected Addition as The Unit", font=SMALL_FONT).pack(pady=10,padx=10)
button1 = tk.Button(self, text = "Reselect Unit", font=MEDIUM_FONT,command=lambda: controller.show_frame(StartPage)).pack(fill=X)
label = Label(self,text="").pack()
label = Label(self,text="").pack()
#-----THIS IS WHERE I WANT THE EQUATION TO SHOW IN THE LABEL BELOW-----#
question_label = Label(self, text="Enter Your Answer", font=MEDIUM_FONT ).pack()
label = Label(self,text="").pack()
self.entrytext = StringVar()
Entry(self, textvariable=self.entrytext, font=MEDIUM_FONT,).pack(fill=X)
label = Label(self,text="").pack()
label = Label(self,text="").pack()
submit_button = tk.Button(self, text = "Submit Answer", font=MEDIUM_FONT).pack(fill=X)
quit_button = tk.Button(self, text='Quit', command=quit, font=MEDIUM_FONT,).pack(fill=X, side = BOTTOM)
####Subtraction Page
class SubtractionPage(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self,parent)
label1 = tk.Label(self, text = "Mathmatics Problems Quiz", font=LARGE_FONT).pack(pady=10,padx=10)
label2 = tk.Label(self, text = "Mathematic Equation program", font=MEDIUM_FONT).pack(pady=10,padx=10)
label3 = tk.Label(self, text = "You have Selected Subtraction as The Unit", font=SMALL_FONT).pack(pady=10,padx=10)
button1 = tk.Button(self, text = "Reselect Unit", font=MEDIUM_FONT,command=lambda: controller.show_frame(StartPage)).pack(fill=X)
label = Label(self,text="").pack()
label = Label(self,text="").pack()
#-----THIS IS WHERE I WANT THE EQUATION TO SHOW IN THE LABEL BELOW-----#
question_label = Label(self, text="Enter Your Answer", font=MEDIUM_FONT ).pack()
label = Label(self,text="").pack()
self.entrytext = StringVar()
Entry(self, textvariable=self.entrytext, font=MEDIUM_FONT,).pack(fill=X)
label = Label(self,text="").pack()
label = Label(self,text="").pack()
submit_button = tk.Button(self, text = "Submit Answer", font=MEDIUM_FONT).pack(fill=X)
quit_button = tk.Button(self, text='Quit', command=quit, font=MEDIUM_FONT,).pack(fill=X, side = BOTTOM)
###Multiply Page
class MultiplicationPage(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self,parent)
label1 = tk.Label(self, text = "Mathmatics Problems Quiz", font=LARGE_FONT).pack(pady=10,padx=10)
label2 = tk.Label(self, text = "Mathematic Equation program", font=MEDIUM_FONT).pack(pady=10,padx=10)
label3 = tk.Label(self, text = "You have Selected Multiplication as The Unit", font=SMALL_FONT).pack(pady=10,padx=10)
button1 = tk.Button(self, text = "Reselect Unit", font=MEDIUM_FONT,command=lambda: controller.show_frame(StartPage)).pack(fill=X)
label = Label(self,text="").pack()
label = Label(self,text="").pack()
#-----THIS IS WHERE I WANT THE EQUATION TO SHOW IN THE LABEL BELOW-----#
question_label = Label(self, text="Enter Your Answer", font=MEDIUM_FONT ).pack()
label = Label(self,text="").pack()
self.entrytext = StringVar()
Entry(self, textvariable=self.entrytext, font=MEDIUM_FONT,).pack(fill=X)
label = Label(self,text="").pack()
label = Label(self,text="").pack()
submit_button = tk.Button(self, text = "Submit Answer", font=MEDIUM_FONT).pack(fill=X)
quit_button = tk.Button(self, text='Quit', command=quit, font=MEDIUM_FONT,).pack(fill=X, side = BOTTOM)
###Division Page
class DivisionPage(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self,parent)
label1 = tk.Label(self, text = "Mathmatics Problems Quiz", font=LARGE_FONT).pack(pady=10,padx=10)
label2 = tk.Label(self, text = "Mathematic Equation program", font=MEDIUM_FONT).pack(pady=10,padx=10)
label3 = tk.Label(self, text = "You have Selected Division as The Unit", font=SMALL_FONT).pack(pady=10,padx=10)
button1 = tk.Button(self, text = "Reselect Unit", font=MEDIUM_FONT,command=lambda: controller.show_frame(StartPage)).pack(fill=X)
label = Label(self,text="").pack()
label = Label(self,text="").pack()
#-----THIS IS WHERE I WANT THE EQUATION TO SHOW IN THE LABEL BELOW-----#
question_label = Label(self, text="Enter Your Answer", font=MEDIUM_FONT ).pack()
label = Label(self,text="").pack()
self.entrytext = StringVar()
Entry(self, textvariable=self.entrytext, font=MEDIUM_FONT,).pack(fill=X)
label = Label(self,text="").pack()
label = Label(self,text="").pack()
submit_button = tk.Button(self, text = "Submit Answer", font=MEDIUM_FONT).pack(fill=X)
quit_button = tk.Button(self, text='Quit', command=quit, font=MEDIUM_FONT,).pack(fill=X, side = BOTTOM)
app = ourprogramclass()
app.mainloop()
Answer: First, the logic carried by those checkbuttons for difficulty selection can be
replaced by radio buttons. That will remove a lot of code.
Second, you can extract the logic shared by the four equation pages into a
base class, and inherit from that base class to create the four equation pages.
As for your question, I think the equation logic can be put in a method of the base class;
this is the code I modified.
The book [Programming Python] covers tkinter very well, and this website
<http://www.tcl.tk/man/tcl8.6/TkCmd/contents.htm> provides details for tk.
import tkinter as tk
from tkinter import *
from tkinter import ttk #css kind of thing for tkinter
import random
dif_enum = ['Easy', 'Medium', 'Hard', 'Insane']
current_difficulty = dif_enum[0]
NUM_QUESTIONS = 10
LARGE_FONT = ("Times New Roman", 25)
MEDIUM_FONT = ("Times New Roman", 15)
SMALL_FONT = ("Times New Roman", 10)
###Base Code For Pages
class ourprogramclass(tk.Tk):
def __init__ (self, *args, **kwargs):
tk.Tk.__init__(self, *args, **kwargs)
# tk.Tk.iconbitmap(self, default="mathsicon.ico")
tk.Tk.wm_title(self, "Mathematic Equation program")
container = tk.Frame(self)
container.pack(side="top", fill="both", expand=True)
container.grid_rowconfigure(0, weight=1)
container.grid_columnconfigure(0, weight=1)
self.frames = {}
for F in (StartPage, AdditionPage, SubtractionPage, MultiplicationPage, DivisionPage ):
frame = F(container, self)
self.frames[F] = frame
frame.grid(row=0, column=0, sticky="nsew")
self.show_frame(StartPage)
def show_frame(self, cont):
frame = self.frames[cont]
frame.tkraise()
###Page Classes front page
class StartPage(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self,parent)
label1 = tk.Label(self, text = "Mathmatics Problems Quiz", font=LARGE_FONT)
label2 = tk.Label(self, text = "Mathematic Equation program", font=MEDIUM_FONT)
label3 = tk.Label(self, text = "Select Your Operation and Difficulty Level", font=SMALL_FONT)
label1.pack(pady=10,padx=10)
label2.pack(pady=10,padx=10)
label3.pack(pady=10,padx=10)
button1 = tk.Button(self, text = "Addition Equations", font=MEDIUM_FONT,command=lambda: controller.show_frame(AdditionPage)).pack(fill=X)
label = Label(self,text="").pack()
button2 = tk.Button(self, text = "Subtraction Equations", font=MEDIUM_FONT,command=lambda: controller.show_frame(SubtractionPage)).pack(fill=X)
label = Label(self,text="").pack()
button3 = tk.Button(self, text = "Multiplication Equations", font=MEDIUM_FONT,command=lambda: controller.show_frame(MultiplicationPage)).pack(fill=X)
label = Label(self,text="").pack()
button4 = tk.Button(self, text = "Division Equations", font=MEDIUM_FONT,command=lambda: controller.show_frame(DivisionPage)).pack(fill=X)
label = Label(self,text="").pack()
label4 = tk.Label(self, text = "Select Difficulty", font=LARGE_FONT).pack()
# @etuc_comment:Use `Radiobutton` for the "select one from many" logic. It takes only a few lines.
difficulty = tk.StringVar(value=dif_enum[0])
def show_and_set_difficulty():
global current_difficulty
current_difficulty = difficulty.get()
print(1 + dif_enum.index(difficulty.get()), difficulty.get())
for dif in dif_enum:
tk.Radiobutton(self, text=dif, value=dif, variable=difficulty, command=show_and_set_difficulty).pack()
quit_button = tk.Button(self, text='Quit', command=quit, font=MEDIUM_FONT).pack(fill=X, side = BOTTOM)
class EquationPageBase(tk.Frame):
def __init__(self, parent, controller, equal_type):
tk.Frame.__init__(self,parent)
tk.Label(self, text="Mathmatics Problems Quiz", font=LARGE_FONT).pack(pady=10,padx=10)
tk.Label(self, text="Mathematic Equation program", font=MEDIUM_FONT).pack(pady=10,padx=10)
# @etuc_comment: Use the third parameter `equal_type` for displaying current equation type.
tk.Label(self, text="You have Selected {} as The Unit".format(equal_type), font=SMALL_FONT).pack(pady=10,padx=10)
tk.Button(self, text="Reselect Unit", font=MEDIUM_FONT,command=lambda: controller.show_frame(StartPage)).pack(fill=X)
Label(self,text="").pack()
Label(self,text="").pack()
#-----THIS IS WHERE I WANT THE EQUATION TO SHOW IN THE LABEL BELOW-----#
self.submit_counter = 0
self.correct_counter = 0
self.equation_strvar = tk.StringVar(value='')
Label(self, textvariable=self.equation_strvar, font=MEDIUM_FONT).pack()
Label(self,text="").pack()
self.answer_strvar = StringVar()
Entry(self, textvariable=self.answer_strvar, font=MEDIUM_FONT, ).pack(fill=X)
Label(self,text="").pack()
Label(self,text="").pack()
tk.Button(self, text="Submit Answer", font=MEDIUM_FONT, command=self.submit).pack(fill=X)
tk.Button(self, text='Quit', command=quit, font=MEDIUM_FONT,).pack(fill=X, side = BOTTOM)
self.update_equation()
def update_equation(self):
self.equation = self.gen_equal()
self.equation_strvar.set('Enter Your Answer For: {}'.format(self.equation))
self.answer_strvar.set('')
def gen_equal(self):
"""
Subclasses will overload this method to generate the proper equation according to difficulty and
equation type setting. See an example in the `AdditionPage`
:return: Should return the equation as string. e.g. "1 + 2"
"""
pass
def submit(self):
try:
user_answer = int(self.answer_strvar.get())
except:
return
if eval(self.equation) == user_answer:
print('Correct')
self.correct_counter += 1
else:
print('Error: {} != {}, answer is {}'.format(self.equation, user_answer, eval(self.equation)))
self.submit_counter += 1
if self.submit_counter < NUM_QUESTIONS:
self.update_equation()
else:
self.show_result()
self.submit_counter = 0
self.correct_counter = 0
def show_result(self):
print('{} questions, {} correct.'.format(NUM_QUESTIONS, self.correct_counter))
# Addition Page
class AdditionPage(EquationPageBase):
def __init__(self, parent, controller):
EquationPageBase.__init__(self, parent, controller, equal_type='Addition')
def gen_equal(self):
"""
This is an example for addition.
"""
if current_difficulty == 'Easy':
return '{} + {}'.format(random.randint(1, 10), random.randint(1, 10))
elif current_difficulty == 'Medium':
return '{} + {}'.format(random.randint(10, 25), random.randint(10, 25))
elif current_difficulty == 'Hard':
return '{} + {}'.format(random.randint(25, 50), random.randint(25, 50))
elif current_difficulty == 'Insane':
return '{} + {}'.format(random.randint(50, 100), random.randint(50, 100))
else:
raise ValueError('Wrong difficulty')
# Subtraction Page
class SubtractionPage(EquationPageBase):
def __init__(self, parent, controller):
EquationPageBase.__init__(self, parent, controller, equal_type='Subtraction')
def gen_equal(self):
pass
# Multiply Page
class MultiplicationPage(EquationPageBase):
def __init__(self, parent, controller):
EquationPageBase.__init__(self, parent, controller, equal_type='Multiplication')
def gen_equal(self):
pass
# Division Page
class DivisionPage(EquationPageBase):
def __init__(self, parent, controller):
EquationPageBase.__init__(self, parent, controller, equal_type='Division')
def gen_equal(self):
pass
app = ourprogramclass()
app.mainloop()
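The remaining `gen_equal` overrides follow the same pattern as the addition example. For instance, a sketch for the subtraction page (using the difficulty ranges from the question) might be:
    class SubtractionPage(EquationPageBase):
        def __init__(self, parent, controller):
            EquationPageBase.__init__(self, parent, controller, equal_type='Subtraction')

        def gen_equal(self):
            ranges = {'Easy': (1, 9), 'Medium': (10, 24), 'Hard': (25, 50), 'Insane': (51, 100)}
            lo, hi = ranges[current_difficulty]
            a, b = random.randint(lo, hi), random.randint(lo, hi)
            # keep the result non-negative by putting the larger number first
            return '{} - {}'.format(max(a, b), min(a, b))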
|
Can't Import a file in different directory using Python
Question: I have a python module at
`/home/amit/folder1/folder2/folder3/folder4/folder5/nbsearch` named
**neighbourhoodsearch.py**. I import a class from this file in another
module at `/home/amit/folder1/folder2/folder3/folder4/controller` named
**nbsearch_controller.py**.
from neighbourhoodsearch import NeighbourhoodSearch
I have added the path
`/home/amit/folder1/folder2/folder3/folder4/folder5/nbsearch` to PYTHONPATH
using `sys.path.append` and added a file **__init__.py** in all the folders in
the path. But still when I run the file nbsearch_controller.py, it says
ImportError: No module named neighbourhoodsearch
Answer: You can try multiple things:
* triple-check for typos in the path you give
* print `sys.path` before your problematic `import` and make sure that your path is really there (see the snippet after this list)
* remove `.pyc` files in your directory structure if you have some
* check the result of `os.path.isfile("/home/amit/folder1/folder2/folder3/folder4/folder5/nbsearch/neighbourhoodsearch.py")` (it should return `True`)
* check that you have "read" rights on your Python files
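For the `sys.path` check in particular, a minimal snippet (using the paths from the question) might look like this:
    import sys
    sys.path.append("/home/amit/folder1/folder2/folder3/folder4/folder5/nbsearch")
    print(sys.path)  # the appended directory should appear here

    from neighbourhoodsearch import NeighbourhoodSearch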
|
Negative pattern matching Reg ex In Python
Question: Trying to use a negative lookahead to replace every string that does not match a
pattern:
regexPattern = '((?!*' + 'word1|word2|word3' + ').)*$'
mytext= 'jsdjsqd word1dsqsqsword2fjsdjswrod3sqdq'
return re.sub(regexPattern, "P", mytext)
#Expected Correct Output: 'PPPPPPword1PPPPPPword2PPPPPword3PPP'
#BAD Output: 'jsdjsqd word1dsqsqsword2fjsdjswrod3sqdq'
I tried this but it does not work (the string remains the same). How can I modify it?
(I think this is a pretty difficult regex.)
Answer: You can use
import re
regex = re.compile(r'(word1|word2|word3)|.', re.S)
mytext = 'jsdjsqd word1dsqsqsword2fjsdjsword3sqdq'
print(regex.sub(lambda m: m.group(1) if m.group(1) else "P", mytext))
// => PPPPPPPPword1PPPPPPword2PPPPPPword3PPPP
See the [IDEONE demo](https://ideone.com/H0BK39)
The regex is `(word1|word2|word3)|.`:
* `(word1|word2|word3)` \- either `word1`, or `word2`, or `word3` character sequences
* `|` \- or...
* `.` \- any character (incl. a newline as `re.S` DOTALL mode is on)
See the [regex demo](https://regex101.com/r/eF3kJ2/1)
|
Issue in storing regexp of multiple lines in python
Question: I am having the following pattern in text file
Question1: Quesion XYZ
a. option1
b. option2
c. option3
d. option4
Question2 : Question next
a. option1
b. option2
c. option3
d. option4
and so on for multiple questions.
This pattern is in the text file. I am trying to read the text file and
segregate each question with its respective choices. I want the output as
Block1 :
Question1: Quesion XYZ
a. option1
b. option2
c. option3
d. option4
Block 2 :
Question2 : Question next
a. option1
b. option2
c. option3
d. option4
The motive is to retrieve each question and its respective options. I was trying for a
`list` to which I would append each question and its respective options. I would create
multiple lists, one storing each question.
My code so far looks like this:
import re
txt = open("questions.txt")
i = 0 ;
for line in txt:
key = 'list' + str(i)
key = []
# if line == "\n" : continue
if re.search("^\n", line): continue
searchObj = re.search("([\d]+)\.\s*(.*)", line)
if searchObj :
i = searchObj.group(1)
key = searchObj.group(2)
searchObj2 = re.search("[\w]+\.\s*(.*)", line)
if searchObj2 :
key.append = (searchObj2.group(1))
txt.close()
print alist1
It is giving an error. Moreover, it is not solving the above purpose. I heard that
we can also use a dict for that. I need to know how, as there can be any number `n` of
questions in the file.
Answer: Here is a generic method to solve this case. I say generic because it needs
to be adapted to the content of your text file and how it is formatted,
which means tweaking the `re` expressions. Anyway, what I would do is
create a list of dictionaries, with each dictionary holding the data of one
question, as follows:
>>> import re
>>>
>>> patQNbBody = re.compile('(?P<QNb>Question\\d+)\\s*:\\s*(?P<QBody>[\\w\\s]+)$', re.IGNORECASE|re.MULTILINE|re.DOTALL)
>>>
>>> patQopt = re.compile('^[a-z]\\.\\s*(?P<Opt>\\w+)$', re.IGNORECASE|re.MULTILINE|re.DOTALL)
>>>
>>> my_questions = []
>>>
>>> with open('questions.txt', 'r') as f:
i = -1
for line in f:
m1 = patQNbBody.search(line)
if m1:
i += 1
p = m1.group(1)
my_questions.append({'Number':m1.group(1), 'Body':m1.group(2), 'options':[]})
else:
m2 = patQopt.search(line)
if m2:
my_questions[i]['options'].append(m2.group(1))
>>> my_questions
[{'options': ['option1', 'option2', 'option3', 'option4'], 'Number': 'Question1', 'Body': 'Quesion XYZ\n'}, {'options': ['option1', 'option2', 'option3', 'option4'], 'Number': 'Question2', 'Body': 'Question next\n'}]
If order matters to you, then you can get the advantage of
[`OrderedDict`](https://docs.python.org/3/library/collections.html#collections.OrderedDict)
to your best:
>>> import re
>>> from collections import OrderedDict
>>>
>>> patQNbBody = re.compile('(?P<QNb>Question\\d+)\\s*:\\s*(?P<QBody>[\\w\\s]+)$', re.IGNORECASE|re.MULTILINE|re.DOTALL)
>>>
>>> patQopt = re.compile('^[a-z]\\.\\s*(?P<Opt>\\w+)$', re.IGNORECASE|re.MULTILINE|re.DOTALL)
>>>
>>> my_questions = []
>>> with open('test1.txt', 'r') as f:
i = -1
for line in f:
d = OrderedDict()
m1 = patQNbBody.search(line)
if m1:
i += 1
for k,v in zip(('Number','Body','Options'),(m1.group(1), m1.group(2), [])):
d[k] = v
my_questions.append(d)
else:
m2 = patQopt.search(line)
if m2:
my_questions[i]['Options'].append(m2.group(1))
>>> my_questions
[OrderedDict([('Number', 'Question1'), ('Body', 'Quesion XYZ\n'), ('Options', ['option1', 'option2', 'option3', 'option4'])]), OrderedDict([('Number', 'Question2'), ('Body', 'Question next\n'), ('Options', ['option1', 'option2', 'option3', 'option4'])])]
>>>
>>>
>>> my_questions[0]['Number']
'Question1'
>>> my_questions[1]['Body']
'Question next\n'
>>>
>>> my_questions[1]['Options']
['option1', 'option2', 'option3', 'option4']
>>>
|
creating a list from json csv file using python
Question: I am sorry for asking this question, but I already looked through and could not
find the answer. I am honestly a newbie. I am trying to generate a list of the individual
words from a JSON CSV file. I already created a list of lines, but then I
cannot use split() to generate a new list containing the separate words (later I need
to count word occurrences). My input file contains Twitter information:
[twitter data](http://i.stack.imgur.com/hV182.jpg) I tried to write simple
code:
myfile=open('fileName','r')
words=[]
for line in myfile:
words.append(line.split())
# len(words) == 82
I also tried reader=csv.reader(myFile) and reader=csv.DictReader(myFile),
and in all cases I can get each line, but how do I further split the string/line into
independent words? Sorry, and thank you in advance.
My data (I changed to a different example, as maybe the last one was badly formatted):
id,flags,expiration,cas,value
493926581610364928,0,0,2635740904247446,"{""contributors"":null,""truncated"":false,""text"":""@xaaronh @blueredandgold If Namco Bandai's One Piece Unlimited World is anything to go by, no local retail release means no eShop either =\\"",""in_reply_to_status_id"":493925918998425600,""id"":493926581610364928,""favorite_count"":0,""source"":""<a href=\""hp://twitter.com\"" rel=\""nofollow\"">Twitter Web Client</a>"",""retweeted"":false,""coordinates"":null,""entities"":{""symbols"":[],""user_mentions"":[{""id"":139852376,""indices"":[0,8],""id_str"":""139852376"",""screen_name"":""xaaronh"",""name"":""Aaron""},{""id"":74393990,""indices"":[9,24],""id_str"":""74393990"",""screen_name"":""blueredandgold"",""name"":""Leigh""}],""hashtags"":[],""urls"":[]},""in_reply_to_screen_name"":""xaaronh"",""in_reply_to_user_id"":139852376,""retweet_count"":0,""id_str"":""493926581610364928"",""favorited"":false,""user"":{""follow_request_sent"":false,""profile_use_background_image"":true,""default_profile_image"":false,""id"":42302246,""profile_background_image_url_hp"":""hp://pbs.twimg.com/profile_background_images/464279459932020736/v1xnMcrV.jpeg"",""verified"":false,""profile_text_color"":""333333"",""profile_image_url_https"":""hp://pbs.twimg.com/profile_images/490791031487463424/udSldTQ3_normal.png"",""profile_sidebar_fill_color"":""DDEEF6"",""entities"":{""description"":{""urls"":[{""url"":""hp:tttt"",""indices"":[67,89],""expanded_url"":""hp://infernalmonkey.com"",""display_url"":""infernalmonkey.com""}]}},""followers_count"":506,""profile_sidebar_border_color"":""000000"",""id_str"":""42302246"",""profile_background_color"":""1A1B1F"",""listed_count"":22,""is_translation_enabled"":false,""utc_offset"":36000,""statuses_count"":8676,""description"":""I probably tweet about video games and onaholes. Let's be friends! (NSFW)"",""friends_count"":261,""location"":""Sydney, Australia"",""profile_link_color"":""2FC2EF"",""profile_image_url"":""hp://pbs.twimg.com/profile_images/490791031487463424/udSldTQ3_normal.png"",""following"":false,""geo_enabled"":false,""profile_banner_url"":""hp://pbs.twimg.com/profile_banners/42302246/1406105444"",""profile_background_image_url"":""hp://pbs.twimg.com/profile_background_images/464279459932020736/v1xnMcrV.jpeg"",""screen_name"":""infernal_monkey"",""lang"":""en"",""profile_background_tile"":false,""favourites_count"":2018,""name"":""Lance McGill"",""notifications"":false,""url"":null,""created_at"":""Sun May 24 23:20:25 +0000 2009"",""contributors_enabled"":false,""time_zone"":""Sydney"",""protected"":false,""default_profile"":false,""is_translator"":false},""geo"":null,""in_reply_to_user_id_str"":""139852376"",""lang"":""en"",""_id"":""493926581610364928"",""created_at"":""Tue Jul 29 01:10:48 +0000 2014"",""in_reply_to_status_id_str"":""493925918998425600"",""place"":null,""metadata"":{""iso_language_code"":""en"",""result_type"":""recent""}}"
Answer: This is not the best solution, just an effort from a noob (me); it definitely
needs further editing for better output. I am using the Windows OS.
import csv
import json
abc=[]
myList=[]
myDict={}
myFile=open('fileName.csv','r',encoding='utf-8')
myReader=csv.reader(myFile)
header=next(myReader)
for line in myReader:
abc=json.loads(line[4])
myDict=abc
myList.append(myDict['text'])
dct={}
for eachLine in myList:
item=eachLine.split()
for one in item:
if one in dct:
dct[one]+=1
else:
dct[one]=1
finalList=list(dct.items())
finalList.sort()
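As a side note, the manual counting dict above can be replaced with `collections.Counter`, which also makes it easy to get the most frequent words. A small sketch of that alternative:
    from collections import Counter

    word_counts = Counter()
    for eachLine in myList:
        word_counts.update(eachLine.split())

    print(word_counts.most_common(10))   # the ten most frequent words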
|
GetLastInputInfo and GetTickCount are not consistent with each other
Question: I am trying to work out how long, approximately, the current user has been
idle (e.g. like [this question](http://stackoverflow.com/questions/203384/how-
to-tell-when-windows-is-inactive)), from Python on a Windows machine.
To do that, I figure I need to compare the result of
[GetLastInputInfo](https://msdn.microsoft.com/en-
us/library/ms646302\(VS.85\).aspx) with the result of
[GetTickCount](https://msdn.microsoft.com/en-
us/library/windows/desktop/ms724408\(v=vs.85\).aspx). The results should be in
milliseconds.
(I am expecting roll-over problems every 49.7 days, but I will solve that
later.)
My code is straightforward:
import win32api
last_active = win32api.GetLastInputInfo()
now = win32api.GetTickCount()
elapsed_milliseconds = (now - last_active)
print(last_active, now, elapsed_milliseconds)
I expect to get two similar large numbers, and a difference of a few hundred
milliseconds.
Instead, I get results like:
3978299058 -316668238 -4294967296
and
3978316717 -316650501 -4294967218
Between runs, they are both changing by roughly the same amount, but there is
a large constant offset between them that I am not expecting.
What am I missing?
Answer: Looking at the numbers more closely, this is a signed/unsigned mismatch.
3978299058 = 0xED2006B2
-316668238 (in two's complement) = 0xED2006B2
3978316717 = 0xED204BAD
-316650501 (in two's complement) = 0xED204BFB
So the times are consistent, it's just that `win32.GetTickCount` is
interpreting the tick count as a signed 32-bit integer whereas
`win32.GetLastInputInfo` is interpreting it as unsigned.
(Specifically, `GetLastInputInfo` is using `PyLong_FromUnsignedLong` whereas
`GetTickCount` casts the `DWORD` to a `long` and then calls `Py_BuildValue`.
You might want to consider filing a bug, since the tick count _should_ be an
unsigned value.)
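In the meantime, a small workaround (a sketch) is to mask both values to unsigned 32-bit before subtracting, which also handles the eventual rollover:
    import win32api

    def idle_milliseconds():
        # Interpret both tick counts as unsigned 32-bit values so the
        # signed/unsigned mismatch (and the 49.7-day wraparound) cancels out.
        last_active = win32api.GetLastInputInfo() & 0xFFFFFFFF
        now = win32api.GetTickCount() & 0xFFFFFFFF
        return (now - last_active) % (2 ** 32)

    print(idle_milliseconds())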
|
Python: how to compute the Euclidean distance distribution of a regular network?
Question: I have an `NxN` regular network, each node of which has an `(X,Y)` set of
coordinates. The nodes are separated by the _unit_. The network looks like
this:
(0,0) (1,0) (2,0)
(0,1) (1,1) (2,1)
(0,2) (1,2) (2,2)
I want to be able to compute the _Euclidean distance_ from each node to all
the others. Example:
#Euclidean distances from node (0,0):
0 sqrt(1) sqrt(4)
sqrt(1) sqrt(2) sqrt(5)
sqrt(4) sqrt(5) sqrt(8)
Then, I want to draw the distance distribution, which tells me how
frequently each distance value occurs. I then want to turn the graph
into a log-log plot.
This is my attempt:
import networkx as nx
from networkx import *
import matplotlib.pyplot as plt
#Creating the regular network
N=10 #This can vary
G=nx.grid_2d_graph(N,N)
pos = dict( (n, n) for n in G.nodes() )
labels = dict( ((i, j), i + (N-1-j) * N ) for i, j in G.nodes() )
nx.relabel_nodes(G,labels,False)
inds=labels.keys()
vals=labels.values()
inds.sort()
vals.sort()
pos2=dict(zip(vals,inds)) #Dict storing the node coordinates
nx.draw_networkx(G, pos=pos2, with_labels=False, node_size = 15)
#Computing the edge length distribution
def plot_edge_length_distribution(): #Euclidean distances from all nodes
lengths={}
for k, item in pos2:
for t, elements in pos2:
if k==t:
lengths[k]=0
else:
lengths[k]=((pos2[t][2]-pos2[k][2])**2)+((pos2[t][1]-pos2[k][1])**2) #The square distance (it's ok to leave it like this)
items=sorted(lengths.items())
fig=plt.figure()
ax=fig.add_subplot(111)
ax.plot([k for (k,v) in items],[v for (k,v) in items],'ks-')
ax.set_xscale("log")
ax.set_yscale("log")
title_string=('Edge Length Distribution')
subtitle_string=('Lattice Network | '+str(N)+'x'+str(N)+' nodes')
plt.suptitle(title_string, y=0.99, fontsize=17)
plt.title(subtitle_string, fontsize=9)
plt.xlabel('Log l')
plt.ylabel('Log p(l)')
ax.grid(True,which="both")
plt.show()
plot_edge_length_distribution()
**EDIT**
When running, this script throws out the error: `TypeError: 'int' object is
not iterable`, pointing at the line where I wrote `for k, item in pos2:`.
**Where am I going wrong?**
Answer: The function
[`scipy.spatial.distance.pdist`](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.spatial.distance.pdist.html)
does this about as efficiently as can be.
Consider the following:
from scipy.spatial import distance
import numpy as np
coords = [np.array(list(c)) for c in [(0,0),(1,0), (2,0)]]
>>> distance.pdist(coords)
array([ 1., 2., 1.])
The function returns the upper-right part of the distance matrix - the
diagonals are 0, and the lower-left part can be obtained from the transpose.
E.g., the above corresponds to
0 1 2
1 0 1
2 1 0
with
* the 0 diagonal and everything to its lower-left removed.
* the upper-right "flattened" to [1, 2, 1].
It is not difficult to reconstruct the distances from the flattened result.
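To go from the flattened distances to the distance distribution and log-log plot asked about above, one possible sketch (building the grid coordinates directly instead of going through networkx) is:
    import numpy as np
    from scipy.spatial import distance
    import matplotlib.pyplot as plt

    N = 10
    coords = np.array([(x, y) for x in range(N) for y in range(N)], dtype=float)
    dists = distance.pdist(coords)                     # all pairwise Euclidean distances
    values, counts = np.unique(np.round(dists, 6), return_counts=True)

    plt.loglog(values, counts / float(counts.sum()), 'ks-')
    plt.xlabel('Log l')
    plt.ylabel('Log p(l)')
    plt.show()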
|
Docker image builds on laptop, not on Digital Ocean - my understanding of Docker is shattered
Question: I have a non-trivial `Docker` environment for a Python app I'm building (see
below for full `Dockerfile`). On my MacBook (with `Docker version 1.10.3,
build 20f81dd`) I am able to build the `Docker` image, run the container and
the app works fine.
I now want to test the app on Digital Ocean. I have only ever used Docker on
my laptop up to this point. I created a droplet using the `Ubuntu Docker
1.10.3 on 14.04` image. I SSH'd in, cloned my git repo, executed the `docker
build` command, but I got an error during the build (see bottom for full stack
trace).
Exception: Cython-generated file 'pandas/index.c' not found.
Cython is required to compile pandas from a development branch.
Please install Cython or download a release package of pandas.
This is a valid exception, but my question is: **_Why would the
same `Dockerfile` and `docker build` command successfully build on one machine,
but raise an exception on another?_** My understanding of Docker was that it
prevented this sort of thing from happening by building the environment from
scratch using just the Dockerfile...I just can't wrap my head around what is
causing this exception on one machine and not the other.
* * *
**`Dockerfile`**
FROM python:2.7
ENV HOME /root
# Install dependencies
RUN apt-get update \
&& apt-get upgrade -y
RUN apt-get install -y apt-utils
RUN apt-get install -y gcc
RUN apt-get install -y build-essential
RUN apt-get install -y zlib1g-dev
RUN apt-get install -y wget
RUN apt-get install -y unzip
RUN apt-get install -y cmake
RUN apt-get install -y gfortran
RUN apt-get install -y libatlas-base-dev
RUN apt-get install -y python-pip
RUN apt-get install -y python-dev
RUN apt-get install -y subversion
RUN apt-get install -y supervisor
RUN apt-get install -y nginx
RUN apt-get clean
# Install Python packages
RUN pip install --upgrade pip
RUN pip install numpy
RUN pip install pandas
RUN pip install bottleneck
RUN pip install boto3
RUN pip install scipy
RUN pip install Flask
RUN pip install uwsgi
# Build OpenCV and dependencies
RUN cd && wget https://github.com/Itseez/opencv/archive/3.1.0.zip \
&& git clone https://github.com/Itseez/opencv_contrib.git \
&& unzip 3.1.0.zip \
&& cd opencv-3.1.0 && mkdir build && cd build \
&& cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_C_EXAMPLES=OFF \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
-D BUILD_EXAMPLES=ON .. \
&& make -j2 && make install \
&& cd && rm -rf opencv-3.1.0 && rm 3.1.0.zip
# Build HDF5
RUN cd ; wget https://www.hdfgroup.org/ftp/HDF5/current/src/hdf5-1.8.16.tar.gz
RUN cd ; tar zxf hdf5-1.8.16.tar.gz
RUN cd ; mv hdf5-1.8.16 hdf5-setup
RUN cd ; cd hdf5-setup ; ./configure --prefix=/usr/local/
RUN cd ; cd hdf5-setup ; make && make install
# Cleanup
RUN cd ; rm -rf hdf5-setup
RUN apt-get -yq autoremove
RUN apt-get clean
RUN rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install Python packages with dependencies on HDF5
RUN pip install tables
RUN pip install h5py
RUN pip install -U scikit-image
RUN rm -fr /root/.cache
# Update environment and working directories
ENV PYTHONUNBUFFERED 1
WORKDIR /app
ADD . /app
RUN mv config ../config
# Setup config
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
RUN rm /etc/nginx/sites-enabled/default
RUN ln -s /config/nginx.conf /etc/nginx/sites-enabled/
RUN ln -s /config/supervisor.conf /etc/supervisor/conf.d/
EXPOSE 80
CMD ["python", "app.py"]
**`Stack Trace`**
creating build/lib.linux-x86_64-2.7/pandas/tseries/tests/data
copying pandas/tseries/tests/data/series_daterange0.pickle -> build/lib.linux-x86_64-2.7/pandas/tseries/tests/data
copying pandas/tseries/tests/data/frame.pickle -> build/lib.linux-x86_64-2.7/pandas/tseries/tests/data
copying pandas/tseries/tests/data/dateoffset_0_15_2.pickle -> build/lib.linux-x86_64-2.7/pandas/tseries/tests/data
copying pandas/tseries/tests/data/daterange_073.pickle -> build/lib.linux-x86_64-2.7/pandas/tseries/tests/data
copying pandas/tseries/tests/data/series.pickle -> build/lib.linux-x86_64-2.7/pandas/tseries/tests/data
copying pandas/tseries/tests/data/cday-0.14.1.pickle -> build/lib.linux-x86_64-2.7/pandas/tseries/tests/data
UPDATING build/lib.linux-x86_64-2.7/pandas/_version.py
set build/lib.linux-x86_64-2.7/pandas/_version.py to '0.18.0'
running build_ext
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-OD55P2/pandas/setup.py", line 604, in <module>
**setuptools_kwargs)
File "/usr/local/lib/python2.7/distutils/core.py", line 151, in setup
dist.run_commands()
File "/usr/local/lib/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/usr/local/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/usr/local/lib/python2.7/site-packages/setuptools/command/install.py", line 61, in run
return orig.install.run(self)
File "/usr/local/lib/python2.7/distutils/command/install.py", line 563, in run
self.run_command('build')
File "/usr/local/lib/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/usr/local/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/usr/local/lib/python2.7/distutils/command/build.py", line 127, in run
self.run_command(cmd_name)
File "/usr/local/lib/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/usr/local/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/usr/local/lib/python2.7/distutils/command/build_ext.py", line 339, in run
self.build_extensions()
File "/tmp/pip-build-OD55P2/pandas/setup.py", line 316, in build_extensions
self.check_cython_extensions(self.extensions)
File "/tmp/pip-build-OD55P2/pandas/setup.py", line 313, in check_cython_extensions
""" % src)
Exception: Cython-generated file 'pandas/index.c' not found.
Cython is required to compile pandas from a development branch.
Please install Cython or download a release package of pandas.
----------------------------------------
Command "/usr/local/bin/python2 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-OD55P2/pandas/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-JQaDVa-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-OD55P2/pandas/
The command '/bin/sh -c pip install pandas' returned a non-zero code: 1
Answer: I don't know about your specific problem, but to answer your general question:
> Why would the same Dockerfile and docker build command successfully build on
> one machine, but raise an exception on another?
It's likely that the machines are using different `python:2.7` images. Many
images (especially official ones) are rebuilt often, and [the `python` tags
page](https://hub.docker.com/r/library/python/tags/) says `2.7` was last built
6 days ago. If you just created the DigitalOcean instance, it would be using
the latest `python:2.7`, but if you pulled that image more than six days ago,
you would be using an out of date image. If you run `docker pull python:2.7`
on your local machine and try to rebuild, you should get the same error that
you're seeing on DigitalOcean.
An alternative, yet related, possible cause could be [build
caching](https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-
practices/#build-cache): If one of the many packages your Dockerfile installs
has changed recently, but you haven't edited that line of the Dockerfile or
any line above it recently, the Docker instance on your local machine would
continue using the old version when building. You can turn off the use of the
build cache on your local machine by passing the `--no-cache` option to
`docker build`.
Other possible reasons for a Docker build to succeed on one machine and fail
on another include using different versions of Docker or using an HTTP proxy
when downloading packages on one machine and a different proxy (or no proxy)
on the other.
|
padding numpy array/matrix with string
Question: I am trying to read a multi-band image in a python code. My requirement is to
form a neighborhood matrix. So I need to pad the matrix with some number
so as to be able to form a neighborhood for each element. Ex. a is a matrix,
padding with 0
a= |1 2 3 |
|4 5 6 |
|7 8 9 |
neighborhood matrix = |0 0 0|
|0 1 2|
|0 4 5|
I am using `numpy.pad` (below) for this and it works perfectly with a single
band. But for multi-band, it converts noDataValue to its equivalent in 0-255
and pads with it, which I do not want.
pixels = np.pad(a, (padding,padding), mode='constant', constant_values=(noDataValue))
where `padding = 1` and `noDataValue = -999.0`, but it automatically converts it
to 125. And this is happening only for multi-band. So any help would be
appreciated.
Or
If I can pad matrix with a string, that would be great. I could not find any
function that helps padding with String.
**Update 1** : [enter image description
here](http://i.stack.imgur.com/4uqko.png)
Answer: Convert `a` to a type which can hold the value noDataValue,
e.g.
import numpy as np
# ....
a = [[1,2,3],[4,5,6],[7,8,9]]
a = np.array(a).astype(np.float32)
padding = 2
noDataValue = -999.0
pixels = np.pad(a, (padding,padding), mode='constant', constant_values=(noDataValue))
It works here
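If the problem is specifically the extra band dimension, another option (a sketch of the general idea, not part of the original answer, assuming the band axis comes first) is to pad only the spatial axes and leave the band axis untouched:
    import numpy as np

    padding = 1
    noDataValue = -999.0

    # hypothetical multi-band array shaped (nBands, nRows, nCols)
    multi = np.arange(2 * 3 * 3, dtype=np.float64).reshape(2, 3, 3)

    # pad rows and columns only; the band axis gets no padding
    pixels = np.pad(multi,
                    ((0, 0), (padding, padding), (padding, padding)),
                    mode='constant', constant_values=noDataValue)
    print(pixels.shape)  # (2, 5, 5)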
|
Python: How to remove quotes around numbers from string
Question: I have a python string like this:
"""
{id: 'id_0_4', value: '8450223051', name: 'XAD3', parent: 'id_0'},
{id: 'id_0_5', value: '509071269', name: 'ABSD', parent: 'id_0'}
"""
From the string, I want to remove the single quotes around the numbers that
appear after `value`.
How can I write a regex that will detect only such numbers and replace the
quotes around them?
Answer: Capture the number in a group, re-insert the group:
>>> import re
>>> s = """{id: 'id_0_4', value: '8450223051', name: 'XAD3', parent: 'id_0'}, {id: 'id_0_5', value: '509071269', name: 'ABSD', parent: 'id_0'}"""
>>> re.sub("'(\d+)'", r'\1', s)
"{id: 'id_0_4', value: 8450223051, name: 'XAD3', parent: 'id_0'}, {id: 'id_0_5', value: 509071269, name: 'ABSD', parent: 'id_0'}"
Or, if this must be specific to the number after 'value':
>>> re.sub("(value:\s*)'(\d+)'", r'\1\2', s)
"{id: 'id_0_4', value: 8450223051, name: 'XAD3', parent: 'id_0'}, {id: 'id_0_5', value: 509071269, name: 'ABSD', parent: 'id_0'}"
|
Block of code as a separate module in Python
Question: Assume I have a list `my_list`, a variable `var`, and a block of code that
modifies the list using the variable
my_list = ['foo']
var = 'bar'
my_list.append(var)
In the actual task I have a lot of variables like `var` and a lot of commands
like `append` which modify the list. I want to relegate those commands to
another module. In the case at hand I would like to have two modules:
`modify.py` which contains the modifying commands
my_list.append(var)
and `main.py` which defines the list and the variable and somehow uses the
code from the `modify.py`
my_list = ['foo']
var = 'bar'
import_and_run modify
The goal is to make the main file more readable. Modifying commands in my case
can be nicely grouped and would really be good as separate modules. However, I
am only aware of the practice when one imports a function from a module, not a
block of code. I do not want to make the whole `modify.py` module a function
because
1) I don't want to pass all the arguments needed. Rather, I want `modify.py`
to directly have access to `main.py`'s namespace.
2) code in `modify.py` is not really a function. It runs only once. Also, I do
not want the whole module to be the body of a function; that just does not feel
right.
How do I achieve that? Or the whole attitude is wrong?
Answer: If your goal is to make the code more readable, I'd suggest taking these
steps.
* Decompose your problem into a series of separate actions.
* Give these actions names.
* Define a function main in your module that calls functions named after the actions:
def main():
    do_step1()
    do_step2()
    # etc.
    return
* Separate your existing code into the functions that you're calling in main()
* As @flaschbier suggested, collect related, common parameters into dictionaries to make passing them around easier to manage (a fuller sketch follows this list).
* Consider repeating these steps on your new functions, decomposing them into sub-functions.
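A minimal sketch of what that decomposition might look like (the step names and the `params` dictionary are placeholders, not taken from the question):
    def load_data(params):
        # placeholder for the setup that currently sits at the top of main.py
        params['my_list'] = ['foo']

    def apply_modifications(params):
        # placeholder for the commands that would otherwise live in modify.py
        params['my_list'].append(params['var'])

    def main():
        params = {'var': 'bar'}
        load_data(params)
        apply_modifications(params)
        print(params['my_list'])  # ['foo', 'bar']

    if __name__ == '__main__':
        main()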
Done well, you should be left with a file that's easier to look at, because
the function definitions and their indented bodies break up the flow of text.
The code should be easier to reason about because you only need to understand
one function at a time, instead of the entire script.
Generally you want to keep all the code related to a particular task in a
single module, unless there's more than say 500 lines. But before moving code
into separate modules see if you can reduce the total lines of code by
factoring repeated code into functions, or making your code more succinct: for
example see if `for` loops can be replaced by list comprehensions.
Consider using code linting tools to help you make the code well-formatted.
So in summary: don't go against the grain of Python by hiding code in another
module and going down the `import_and_run` route. Instead use good code
organisation and Python's inherent good visual structure to make your code
readable.
|
Seeded Python RNG showing non-deterministic behavior with sets
Question: I'm seeing non-deterministic behavior when trying to select a pseudo-random
element from sets, even though the RNG is seeded (example code shown below).
Why is this happening, and should I expect other Python data types to show
similar behavior?
Notes: I've only tested this on Python 2.7, but it's been reproducible on two
different Windows computers.
Similar Issue: The issue at [Python random seed not working with Genetic
Programming example code](http://stackoverflow.com/questions/6614447/python-
random-seed-not-working-with-genetic-programming-example-code) may be similar.
Based on my testing, my hypothesis is that run-to-run memory allocation
differences within the sets is leading to different elements getting picked up
for the same RNG state.
So far I haven't found any mention of this kind of caveat/issue in the Python
docs for set or random.
Example Code (randTest produces different output run-to-run):
import random
''' Class contains a large set of pseudo-random numbers. '''
class bigSet:
def __init__(self):
self.a = set()
for n in range(2000):
self.a.add(random.random())
return
''' Main test function. '''
def randTest():
''' Seed the PRNG. '''
random.seed(0)
''' Create sets of bigSet elements, presumably many memory allocations. '''
b = set()
for n in range (2000):
b.add(bigSet())
''' Pick a random value from a random bigSet. Would have expected this to be deterministic. '''
c = random.sample(b,1)[0]
print('randVal: ' + str(random.random())) #This value is always the same
print('setSample: ' + str(random.sample(c.a,1)[0])) #This value can change run-to-run
return
Answer: I'm fairly certain you're correct, and the issue is caused by the run-to-run
memory allocation differences for `set`. When I changed your program to use
lists instead of sets, I got deterministic behavior:
import random
''' Class contains a large list of pseudo-random numbers. '''
class bigList:
def __init__(self):
self.a = [random.random() for n in range(2000)]
''' Main test function. '''
def randTest():
''' Seed the PRNG. '''
random.seed(0)
''' Create lists of bigList elements, presumably many memory allocations. '''
b = [bigList() for n in range(2000)]
''' Pick a random value from a random bigSet. Would have expected this to be deterministic. '''
c = random.sample(b, 1)[0]
print('randVal: ' + str(random.random())) # This value is always the same
# and so is this now...
print('setSample: ' + str(random.sample(c.a, 1)[0]))
randTest()
|
PYTHON: nosetests import file path with multiple modules/files
Question: I'm currently working through
[LearnPythonTheHardWay](http://learnpythonthehardway.org/book/ex48.html) and
have reached [Exercise 48](http://learnpythonthehardway.org/book/ex48.html)
which details **Nosetests**. I am able to perform unit testing as long as
all of the code is in a single Python .py file. However, if I include other
files as part of a program, i.e. use **import**, and then attempt to
**nosetest** such a project, I get an error, as follows:
> ======================================================================
>
> ## ERROR: Failure: ImportError (No module named 'temp')
>
> Traceback (most recent call last):
> File "/usr/local/lib/python3.4/dist-packages/nose/failure.py", line 39, in
> runTest
> raise self.exc_val.with_traceback(self.tb)
> File "/usr/local/lib/python3.4/dist-packages/nose/loader.py", line 414, in
> loadTestsFromName ## ##
> addr.filename, addr.module)
> File "/usr/local/lib/python3.4/dist-packages/nose/importer.py", line 47, in
> importFromPath
> return self.importFromDir(dir_path, fqname)
> File "/usr/local/lib/python3.4/dist-packages/nose/importer.py", line 94, in
> importFromDir
> mod = load_module(part_fqname, fh, filename, desc)
> File "/usr/lib/python3.4/imp.py", line 235, in load_module
> return load_source(name, filename, file)
> File "/usr/lib/python3.4/imp.py", line 171, in load_source
> module = methods.load()
> File "", line 1220, in load
> File "", line 1200, in _load_unlocked
> File "", line 1129, in _exec
> File "", line 1471, in exec_module
> File "", line 321, in _call_with_frames_removed
> File "/home/user/LEARNPYTHONTHEHARDWAY/ex48/tests/scanner_tests.py", line
> 6, in
> from ex48.scanner import lexicon
> File "/home/user/LEARNPYTHONTHEHARDWAY/ex48/ex48/scanner.py", line 6, in
> import temp
> ImportError: No module named 'temp'
* * *
> Ran 1 test in 0.028s
>
> FAILED (errors=1)
The structure of my project directories are as follows:
ex48/
ex48/
scanner.py
temp.py
__pycache__/
tests/
__init__.py
scanner_tests.py
Screenshot of my directory::
[](http://i.stack.imgur.com/Or9ne.png)
Screen shot of files themselves::
[](http://i.stack.imgur.com/v3p2i.png)
[](http://i.stack.imgur.com/QYsut.png)
[](http://i.stack.imgur.com/6I9JE.png)
My **scanner_tests.py** file is as follows:
from nose.tools import *
from ex48.scanner import lexicon
from ex48 import temp
def test_directions():
assert_equal(lexicon.scan("north"),[('direction','north')])
result = lexicon.scan("north south east")
assert_equal(result, [('direction', 'north'),
('direction', 'south'),
('direction', 'east')])
My **scanner.py** file is as follows:
import temp
class lexicon:
def scan(val):
if(val == "north"):
return [('direction', 'north')]
else:
return [('direction', 'north'),
('direction', 'south'),
('direction', 'east')]
runner = temp.temp("hello")
And finally my **temp.py** file is as follows:
class temp(object):
def __init__(self,name):
self.name = name
def run(self):
print "Your name is; %s" % self.name
runner.run()
My question is how to overcome the **ImportError: No Module named 'temp'**
because it seems as if I have imported the **temp.py** file in both the
scanner.py file and the **scanner_tests.py** file but nose does not seem to be
able to import it when it runs. _Nosetests_ works fine when it's just the
single **scanner.py** file but not when importing. Is there a special syntax
for importing into a unit test for nose? The script also works fine when run
and imports properly at the command line.
*Note: I'm running python off a limited account off an online server so some admin privileges are not available.
**Note below are entirely different screenshots from another project with the
exact same error:
Directory Layout: [](http://i.stack.imgur.com/hdDsK.png)
Game.py:
[](http://i.stack.imgur.com/F2E8F.png)
Otherpy.py - the imported file:
[](http://i.stack.imgur.com/oIied.png)
the Nose test script file:
[](http://i.stack.imgur.com/N5fK6.png)
And finally the nosetests importerror:
[](http://i.stack.imgur.com/9BTHR.png)
Answer: Everything needs to be with respect to your execution point. You are running
your nose command from the root of `ex48`, therefore all your imports need to
be with respect to that location.
Therefore, in `game.py` you should be importing with respect to `ex48`.
Therefore:
from ex48.otherpy import House
The same logic should be applied to your example referencing the `temp`
folder.
from ex48.temp import temp
|
How to constrain function parameter's protocol's associated types
Question: For fun, I am attempting to extend the Dictionary class to replicate Python's
Counter class. I am trying to implement `init`, taking a `CollectionType` as
the sole argument. However, Swift does not allow this because of
`CollectionType`'s associated types. So, I am trying to write code like this:
import Foundation
// Must constrain extension with a protocol, not a class or struct
protocol SingletonIntProtocol { }
extension Int: SingletonIntProtocol { }
extension Dictionary where Value: SingletonIntProtocol { // i.e. Value == Int
init(from sequence: SequenceType where sequence.Generator.Element == Key) {
// Initialize
}
}
However, Swift does not allow this syntax in the parameter list. Is there a
way to write `init` so that it can take any type conforming to
`CollectionType` whose values are of type `Key` (the name of the type used in
the generic `Dictionary<Key: Hashable, Value>`)? Preferably I would not be
forced to write `init(from sequence: [Key])`, so that I could take any
`CollectionType` (such as a `CharacterView`, say).
Answer: You just have a syntax problem. Your basic idea seems fine. The correct syntax
is:
init<Seq: SequenceType where Seq.Generator.Element == Key>(from sequence: Seq) {
The rest of this answer just explains why the syntax is this way. You don't
really need to read the rest if the first part satisfies you.
The subtle difference is that you were trying to treat `SequenceType where
sequence.Generator.Element == Key` as a type. It's not a type; it's a type
constraint. What the correct syntax means is:
> There is a type `Seq` such that `Seq.Generator.Element == Key`, and
> `sequence` must be of that type.
While that may seem to be the same thing, the difference is that `Seq` is one
specific type at any given time. It isn't "any type that follows this rule."
It's actually one specific type. Every time you call `init` with some type
(say `[Key]`), Swift will create an entirely new `init` method in which `Seq`
is replaced with `[Key]`. (In reality, Swift can sometimes optimize that extra
method away, but in principle it exists.) That's the key point in
understanding generic syntax.
Or you can just memorize where the angle-brackets go, let the compiler remind
you when you mess it up, and call it a day. Most people do fine without learning
the type theory that underpins it.
|
Strange TypeError with Theano
Question:
Traceback (most recent call last):
File "test.py", line 37, in <module>
print convLayer1.output.shape.eval({x:xTrain})
File "/Volumes/TONY/anaconda/lib/python2.7/site-packages/theano/gof/graph.py", line 415, in eval
rval = self._fn_cache[inputs](*args)
File "/Volumes/TONY/anaconda/lib/python2.7/site-packages/theano/compile/function_module.py", line 513, in __call__
allow_downcast=s.allow_downcast)
File "/Volumes/TONY/anaconda/lib/python2.7/site-packages/theano/tensor/type.py", line 180, in filter
"object dtype", data.dtype)
TypeError
And here is my code:
import scipy.io as sio
import numpy as np
import theano.tensor as T
from theano import shared
from convnet3d import ConvLayer, NormLayer, PoolLayer, RectLayer
from mlp import LogRegr, HiddenLayer, DropoutLayer
from activations import relu, tanh, sigmoid, softplus
dataReadyForCNN = sio.loadmat("DataReadyForCNN.mat")
xTrain = dataReadyForCNN["xTrain"]
# xTrain = np.random.rand(10, 1, 5, 6, 2).astype('float64')
xTrain.shape
dtensor5 = T.TensorType('float64', (False,)*5)
x = dtensor5('x') # the input data
yCond = T.ivector()
# input = (nImages, nChannel(nFeatureMaps), nDim1, nDim2, nDim3)
kernel_shape = (5,6,2)
fMRI_shape = (51, 61, 23)
n_in_maps = 1 # channel
n_out_maps = 5 # num of feature maps, aka the depth of the neurons
num_pic = 2592
layer1_input = x
# layer1_input.eval({x:xTrain}).shape
# layer1_input.shape.eval({x:numpy.zeros((2592, 1, 51, 61, 23))})
convLayer1 = ConvLayer(layer1_input, n_in_maps, n_out_maps, kernel_shape, fMRI_shape,
num_pic, tanh)
print convLayer1.output.shape.eval({x:xTrain})
It is really weird as the error was not thrown in Jupyter (but it takes a very
long time to run and eventually the kernel dies; I really don't know why), but
when I move it to the shell and run `python fileName.py` the error is thrown.
Answer: The problem lies in `loadmat` from `scipy`. The typeerror you are getting is
thrown by this code in Theano:
if not data.flags.aligned:
...
raise TypeError(...)
Now, when you create a new array in numpy from raw data, it would usually be
aligned:
>>> a = np.array(2)
>>> a.flags.aligned
True
But if you `savemat` / `loadmat` it, the value of the flag gets lost:
>>> savemat('test', {'a':a})
>>> a2 = loadmat('test')['a']
>>> a2.flags.aligned
False
(seems like this particular issue is discussed
[here](https://mail.scipy.org/pipermail/scipy-user/2009-December/023646.html))
One quick and dirty way to address it is to create a new numpy array from the
array you loaded:
>>> a2 = loadmat('test')['a']
>>> a3 = np.array(a2)
>>> a3.flags.aligned
True
So, for your code:
    dataReadyForCNN = sio.loadmat("DataReadyForCNN.mat")
    xTrain = np.array(dataReadyForCNN["xTrain"])
|
Python Splinter browser = Browser() not working
Question: I am trying to use splinter to test my webapp. When I try to execute the
following
>>> from splinter import Browser
>>> browser = Browser()
I get this error. I have been looking around but I'm not sure how to fix.
Could someone please tell me how to get past this?
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/danny/anaconda/lib/python2.7/site-packages/splinter/browser.py", line 63, in Browser
return driver(*args, **kwargs)
File "/Users/danny/anaconda/lib/python2.7/site-packages/splinter/driver/webdriver/firefox.py", line 39, in __init__
self.driver = Firefox(firefox_profile)
File "/Users/danny/anaconda/lib/python2.7/site-packages/selenium/webdriver/firefox/webdriver.py", line 103, in __init__
self.binary, timeout)
File "/Users/danny/anaconda/lib/python2.7/site-packages/selenium/webdriver/firefox/extension_connection.py", line 51, in __init__
self.binary.launch_browser(self.profile, timeout=timeout)
File "/Users/danny/anaconda/lib/python2.7/site-packages/selenium/webdriver/firefox/firefox_binary.py", line 67, in launch_browser
self._start_from_profile_path(self.profile.path)
File "/Users/danny/anaconda/lib/python2.7/site-packages/selenium/webdriver/firefox/firefox_binary.py", line 90, in _start_from_profile_path
env=self._firefox_env)
File "/Users/danny/anaconda/lib/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
File "/Users/danny/anaconda/lib/python2.7/subprocess.py", line 1335, in _execute_child
raise child_exception
Answer: Here's the code from `firefox_binary.py` that is throwing:
<https://github.com/SeleniumHQ/selenium/blob/master/py/selenium/webdriver/firefox/firefox_binary.py#L79-L90>
Do you have Firefox installed and working properly? I would also try updating
Firefox to make sure it's on the latest version.
|
Spark RDD problems
Question: I am starting with spark and have never worked with Hadoop. I have 10 iMacs on
which I have installed Spark 1.6.1 with Hadoop 2.6. I downloaded the
precompiled version and just copied the extracted contents into
`/usr/local/spark/`. I did all the environment variables setup with
`SCALA_HOME`, changes to `PATH` and other spark conf. I am able to run both
`spark-shell` and `pyspark` (with anaconda's python).
I have setup the standalone cluster; all the nodes are showing up on my web
UI. Now, by using the python shell (ran on the cluster not locally) I followed
[this link's python interpreter word count
example](https://districtdatalabs.silvrback.com/getting-started-with-spark-in-
python).
This is the code I have used
from operator import add
def tokenize(text):
return text.split()
text = sc.textFile("Testing/shakespeare.txt")
words = text.flatMap(tokenize)
wc = words.map(lambda x: (x,1))
counts = wc.reduceByKey(add)
counts.saveAsTextFile("wc")
It is giving me an error that the file `shakespeare.txt` was not found on the slave
nodes. Searching around I understood that if I am not using HDFS then the file
should be present on each slave node on the same path. Here is the stack trace
- [github
gist](https://gist.github.com/anonymous/6be913796d15493a913e8d99fc910d5b)
Now, I have a few questions-
* Isn't RDD supposed to be distributed? That is, it should have distributed (when the action was run on RDD) the file on all the nodes instead of requiring me to distribute it.
* I downloaded the spark with Hadoop 2.6, but any of the Hadoop commands are not available to make a HDFS. I extracted the Hadoop jar file found in the `spark/lib` hoping to find some executable but there was nothing. So, what Hadoop related files were provided in the spark download?
* Lastly, how can I run a distributed application (spark-submit) or a distributed analysis (using pyspark) on the cluster? If I have to create a HDFS then what extra steps are required? Also, how can I create a HDFS here?
Answer: If you read the [Spark Programming
Guide](http://spark.apache.org/docs/latest/programming-guide.html#basics), you
will find the answer to your first question:
> To illustrate RDD basics, consider the simple program below:
>
>
> val lines = sc.textFile("data.txt")
> val lineLengths = lines.map(s => s.length)
> val totalLength = lineLengths.reduce((a, b) => a + b)
>
>
> The first line defines a base RDD from an external file. This dataset is not
> loaded in memory or otherwise acted on: lines is merely a pointer to the
> file. The second line defines lineLengths as the result of a map
> transformation. Again, lineLengths is not immediately computed, due to
> laziness. Finally, we run reduce, which is an action. At this point Spark
> breaks the computation into tasks to run on separate machines, and each
> machine runs both its part of the map and a local reduction, returning only
> its answer to the driver program.
Remember that transformations are executed on the Spark workers (see
[link](http://www.slideshare.net/databricks/strata-sj-everyday-im-shuffling-
tips-for-writing-better-spark-programs), slide n.21).
Regarding your second question, Spark contains only the libraries needed to use
the Hadoop infrastructure, as you can see. You need to set up the Hadoop cluster first
(Hdfs, etc etc), in order to use it (with the libs in Spark): have a look at
[Hadoop Cluster Setup](http://hadoop.apache.org/docs/current/hadoop-project-
dist/hadoop-common/ClusterSetup.html).
To answer your last question, I hope that the [official
documentation](http://spark.apache.org/docs/latest/cluster-overview.html)
helps, in particular [Spark
Standalone](http://spark.apache.org/docs/latest/spark-standalone.html).
|
Unicodedata.normalize() ValueError: invalid normalization form
Question: I'm trying to take foreign language text and output a human-readable,
filename-safe equivalent. After looking around, it seems like the best option
is `unicodedata.normalize()`, but I can't get it to work. I've tried putting
the exact code from some answers here and elsewhere, but it keeps giving me
this error. I only got one success, when I ran:
unicodedata.normalize('NFD', '\u00C7')
'C\u0327'
But every other time, I get an error. Here's my code I've tried:
unicodedata.normalize('NFKD', u'\u2460') #error, not sure why. Look same as above.
s = 'ذهب الرجل'
unicodedata.normalize('NKFC',s) #error
unicodedata.normalize('NKFD', 'ñ') #error
Specifically, the error I get is:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid normalization form
I don't understand why this isn't working. All of these are strings, which
means they are unicode in Python 3. I tried encoding them using `.encode()`,
but then `normalize()` said it only takes arguments of string, so I know that
can't be it. I'm seriously at a loss because even code I'm copying from here
seems to error out. What's going on here?
Answer: Looking at
[unicodedata.c](http://github.com/python/cpython/blob/master/Modules/unicodedata.c#L817),
the only way you can get that error is if you enter an invalid _form_ string.
The valid values are "NFC", "NFKC", "NFD", and "NFKD", but you seem to be
using values with the "F" and "K" switched around:
>>> import unicodedata
>>>
>>> unicodedata.normalize('NKFD', 'ñ')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid normalization form
>>>
>>> unicodedata.normalize('NFKD', 'ñ')
'ñ'
|
Windows Error dialog - 'Errors occurred' - Where is this coming from?
Question: I am dealing with a legacy Python application which uses WxPython, InnoSetup
and py2exe and has a custom sys.excepthook that should deal with all
exceptions. Yet, when an exception occurs, after the custom exception handler
has finished and the main window is closed, this dailog pops up. The worst
part is that it points to a non-existant log file, confusing users. The dialog
reads 'Errors occurred - See the logfile ... for details."
Where could this dialog be coming from? Is this some sort of system default?
[](http://i.stack.imgur.com/c99wN.jpg)
Answer: It turns out this is due to different effects.
py2exe by default redirects Stderr to a logfile named `sys.executable +
'.log'` and generates that dialog.
When permissions do not allow writing to that folder, it is instead
redirected to `C:\Users\UserName\AppData\Local\VirtualStore\Program Files
(x86)\...`. There, that file actually exists and may give hints to the cause
of the crash.
The last finding I made was that the custom exception handler was activated
way too late. During imports and inits, the program had plenty of opportunity
to crash before the custom handler kicked in.
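One way to shrink that window (a generic sketch, not tied to py2exe's internals; the log file name and handler body are placeholders) is to install the hook as the very first thing in the entry script, before the heavier imports run:
    # entry.py -- install the hook before anything that might fail on import
    import sys
    import traceback

    def early_excepthook(exc_type, exc_value, exc_tb):
        # minimal placeholder handler; the real application would show its own dialog
        with open('early_errors.log', 'a') as log:
            traceback.print_exception(exc_type, exc_value, exc_tb, file=log)

    sys.excepthook = early_excepthook

    # only now pull in the heavy dependencies (wx, application modules, ...)
    import wx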
Some more info on: [py2exe error handling redirection and
popup](http://stackoverflow.com/questions/18835641/py2exe-error-handling-
redirection-and-popup)
|
Pythonically check if a variable name is valid
Question: tldr; see the final line; the rest is just preamble.
* * *
I am developing a test harness, which parses user scripts and generates a
Python script which it then runs. The idea is for non-techie folks to be able
to write high-level test scripts.
I have introduced the idea of variables, so a user can use the `LET` keyword
in his script. E.g. `LET X = 42`, which I simply expand to `X = 42`. They can
then use X later in their scripts - `RELEASE CONNECTION X`
But what if someone writes `LET 2 = 3`? That's going to generate invalid
Python.
If I have that `X` in a variable `variableName`, then how can I check whether
`variableName` is a valid Python variable?
Answer: In Python 3 you can use
[`str.isidentifier()`](https://docs.python.org/3/library/stdtypes.html?highlight=identifier#str.isidentifier)
to test whether a given string is a valid Python identifier/name.
>>> 'X'.isidentifier()
True
>>> 'X123'.isidentifier()
True
>>> '2'.isidentifier()
False
>>> 'while'.isidentifier()
True
The last example shows that you should also check whether the variable name
clashes with a Python keyword:
>>> from keyword import iskeyword
>>> iskeyword('X')
False
>>> iskeyword('while')
True
So you could put that together in a function:
from keyword import iskeyword
def is_valid_variable_name(name):
return name.isidentifier() and not iskeyword(name)
* * *
Another option, which works in Python 2 and 3, is to use the `ast` module:
from ast import parse
def is_valid_variable_name(name):
try:
parse('{} = None'.format(name))
return True
        except (SyntaxError, ValueError, TypeError):
return False
>>> is_valid_variable_name('X')
True
>>> is_valid_variable_name('123')
False
>>> is_valid_variable_name('for')
False
>>> is_valid_variable_name('')
False
>>> is_valid_variable_name(42)
False
This will parse the assignment statement without actually executing it. It
will pick up invalid identifiers as well as attempts to assign to a keyword.
In the above code `None` is an arbitrary value to assign to the given name -
it could be any valid expression for the RHS.
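Tying this back to the test harness described in the question, a small sketch (the `translate_let` helper and its parsing are illustrative assumptions, not existing code) could validate the name before emitting the generated Python:
    from keyword import iskeyword

    def is_valid_variable_name(name):
        return name.isidentifier() and not iskeyword(name)

    def translate_let(line):
        # expects user input of the form "LET X = 42"
        _, rest = line.split(None, 1)                     # drop the LET keyword
        name, value = (part.strip() for part in rest.split('=', 1))
        if not is_valid_variable_name(name):
            raise ValueError('invalid variable name: %r' % name)
        return '%s = %s' % (name, value)

    print(translate_let('LET X = 42'))    # prints: X = 42
    # translate_let('LET 2 = 3')          # would raise ValueError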
|
How to make Tkinter button to be placed in particular position?
Question: I am new to python so I was trying to make a GUI, in that i have to place a
button in a particular position. I tried using
`self.nxt_form.place(x=200,y=100)` instead of `self.nxt_form.pack()` but the
button disappeared and only the frame appeared when it ran. Can you tell me
how to place the button in a particular position?
Here is the code:
import tkinter as tk
class Main_form:
def __init__(self, root,title="Simulated MTBF"):
self.root = root
self.frame = tk.Frame(self.root)
"""Button nxt_form which moves to next form"""
self.nxt_form = tk.Button(self.frame, text = 'Next Form', width = 25,command = self.new_window)
self.nxt_form.pack()
self.frame.pack()
"""command to open new window by clicking Button """
def new_window(self):
self.newWindow = tk.Toplevel(self.root)
self.app = Demo2(self.newWindow)
class Demo2:
def __init__(self, root):
self.root = root
self.frame = tk.Frame(self.root)
self.quitButton = tk.Button(self.frame, text = 'Quit', width = 25, command = self.close_windows)
self.quitButton.pack()
self.frame.pack()
def close_windows(self):
self.root.destroy()
def main():
root = tk.Tk()
app = Main_form(root)
root.mainloop()
if __name__ == '__main__':
main()
Answer: When I am using tkinter I use row and column to position objects:
self.btn = tk.Button(self, text = "button")
self.btn.grid(row = 1, column = 1)
EDIT - expanded on information in response to comment (below)
I would make a label and change its width and height to make the spacing you
need (note I'm a beginner at Python as well, so this is probably a bad way, but it
works):
from tkinter import *
import tkinter as tk
from tkinter.ttk import Combobox,Treeview,Scrollbar
class MainMenu(Frame):
def __init__(self, master):
""" Initialize the frame. """
super(MainMenu, self).__init__(master)
self.grid()
self.create_GUI()
def create_GUI(self):
frame1 = tk.LabelFrame(self, text="frame1", width=300, height=130, bd=5)
frame1.grid(row=0, column=0, columnspan=3, padx=8)
#the frame is not needed but it is a good thing to use as it can group
#parts of your interface together
self.text1 = Entry(frame1)
#note if you were not using frames would just put self here
self.text1.grid(row = 1, column = 0)
self.text2 = Label(frame1, text = "",height = 10)
self.text2.grid(row = 2 , column = 0)
self.text3 = Entry(frame1)
self.text3.grid(row = 3, column = 0)
root = Tk()
root.title("hi")
root.geometry("500x500")
root.configure(bg="white")
app = MainMenu(root)
root.mainloop()
Also note that you cannot use pack and grid together in the same container. What
you could do is group your objects in different frames, then use grid in one frame
and pack in a different frame. I personally prefer grid to pack as it gives you
more control over your objects than pack does.
|
Python 3: Sympy: Include list information to optimize lambdify
Question: I use `lambdify` to compile an expression which is a function of certain
parameters. Each parameter has `N` points. So I need to evaluate the
expression `N` times. The following shows a simplified example on how this is
done.
import numpy as np
from sympy.parsing.sympy_parser import parse_expr
from sympy.utilities.lambdify import lambdify, implemented_function
from sympy import S, Symbol
from sympy.utilities.autowrap import ufuncify
def CreateMagneticFieldsList(dataToSave,equationString,DSList):
expression = S(equationString)
numOfElements = len(dataToSave["MagneticFields"])
#initialize the magnetic field output array
magFieldsArray = np.empty(numOfElements)
magFieldsArray[:] = np.NaN
lam_f = lambdify(tuple(DSList),expression,modules='numpy')
try:
for i in range(numOfElements):
replacementList = np.zeros(len(DSList))
for j in range(len(DSList)):
replacementList[j] = dataToSave[DSList[j]][i]
try:
val = np.double(lam_f(*replacementList))
except:
val = np.nan
magFieldsArray[i] = val
except:
print("Error while evaluating the magnetic field expression")
return magFieldsArray
list={"MagneticFields":list(range(10000)), "Chx":list(range(10000))}
out=CreateMagneticFieldsList(list,"MagneticFields*5+Chx",["MagneticFields","Chx"])
print(out)
Is there a way to optimize this call further? Specifically, I mean is there a
way to make `lambdify` aware that I'm calculating for a list of points, so
that the loop evaluation can be optimized?
Answer: Thanks to @asmeurer, who gave the idea of how to do it. Since `lambdify`
is compiled using numpy, one can simply pass the lists as arguments! The
following is a working example:
#!/usr/bin/python3
import numpy as np
from sympy.parsing.sympy_parser import parse_expr
from sympy.utilities.lambdify import lambdify, implemented_function
from sympy import S, Symbol
from sympy.utilities.autowrap import ufuncify
def CreateMagneticFieldsListOpt(dataToSave,equationString,DSList):
expression = S(equationString)
numOfElements = len(dataToSave["MagneticFields"])
#initialize the magnetic field output array
magFieldsArray = np.empty(numOfElements)
magFieldsArray[:] = np.NaN
lam_f = lambdify(tuple(DSList),expression,modules='numpy')
replacementList = [None]*len(DSList)
for j in range(len(DSList)):
replacementList[j] = np.array(dataToSave[DSList[j]])
print(replacementList)
magFieldsArray = np.double(lam_f(*replacementList))
return magFieldsArray
list={"MagneticFields":[1,2,3,4,5],"ChX":[2,4,6,8,10]}
out=CreateMagneticFieldsListOpt(list,"MagneticFields*5+ChX",["MagneticFields","ChX"])
print(out)
|
How to Catch Exception in youtube-dl in python?
Question: I use youtube-dl in Python; sometimes I get a ContentTooShortError. How can I
use "try...except..." in Python to deal with these exceptions?
I use this code but it doesn't work:
with youtube_dl.YoutubeDL(options) as ydl:
# youtube_url = video.youtube_url
n = 0
try:
# pass the URL as a list
ydl.download([video.youtube_url])
except 'ContentTooShortError':
if n + 1 < max_retey:
ydl.download([video.youtube_url])
else:
return False
Answer: Remove the quotes. The exception class need not be added as a string.
except ContentTooShortError:
And as mentioned by @MichalPawlowski in the comments, make sure you import it.
# For Python 3
from urllib.error import ContentTooShortError
# For Python 2
from urllib import ContentTooShortError
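Putting it together, a minimal retry sketch (assuming `options`, `video`, and `max_retry` are defined as in the question, and that `ContentTooShortError` really is the exception being raised):
    import youtube_dl
    from urllib.error import ContentTooShortError  # Python 3; on Python 2: from urllib import ContentTooShortError

    with youtube_dl.YoutubeDL(options) as ydl:
        for attempt in range(max_retry):
            try:
                ydl.download([video.youtube_url])
                break                      # success, stop retrying
            except ContentTooShortError:
                if attempt == max_retry - 1:
                    raise                  # give up after the last attempt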
|
running command via subprocess on Windows with spaces in filenames
Question: To run a command in python, for Windows, I do:
import subprocess
subprocess.check_output(lsCommand, shell=True)
where `lsCommand` is a list of strings that make up the bash command. This
works, except when it contains some input with spaces in it. For example,
copying + changing a name:
To try and do `cp "test 123" test123`:
lsCommand = ['cp', 'test 123', 'test123']
subprocess.check_output(lsCommand, shell=True)
fails because it thinks I am trying to do `cp "test" "123" test123`. Error
(doing google storage stuff):
python: can't open file 'c:\GSUtil\gsutil.py cp -n gs://folderl/test': [Errno 22] Invalid argument
Then I try
subprocess.check_output('cp "test 123" test123', shell=True)
Same shit. Any ideas?
Answer: `cp` is not an [internal command](http://ss64.com/nt/) and therefore you don't
need `shell=True` ([though you might need to specify a full path to
`cp.exe`](http://stackoverflow.com/q/24789749/4279)).
The internal interface for starting a new subprocess on Windows uses a string
i.e., it is up to the specific application how to interpret a command-line.
[The default MS C runtime rules (implemented in `subprocess.list2cmdline()`
that is called implicitly if you pass a list on
Windows)](https://docs.python.org/3/library/subprocess.html#converting-an-
argument-sequence-to-a-string-on-windows) should work fine in this case:
#!/usr/bin/env python
from subprocess import check_call
check_call(['cp', 'test 123', 'test123'])
If you want to use `shell=True` then the program that interprets the command
line is `cmd.exe` and you should use its escape rules ([e.g., `^` is a meta-
character](http://stackoverflow.com/q/27864103/4279)) and pass the command as
a string as is (as you see it in the Windows console):
check_call('copy /Y /B "test 123" test123', shell=True)
Obviously, you don't need to start an external process, to [copy a file in
Python](http://stackoverflow.com/q/12842997/4279):
import shutil
shutil.copy('test 123', 'test123')
|
Python with Google Analytics Querying a specific Google Account from Googles scripts
Question: I am quite new to Python and got the following script from Google Analytics
API help. I have got it working and extracting data; however, it retrieves the
first Google account. I have multiple GA accounts and wish to specify just one.
Any help would be great.
Thanks
Craig
"""A simple example of how to access the Google Analytics API."""
import argparse
from apiclient.discovery import build
from oauth2client.service_account import ServiceAccountCredentials
import httplib2
from oauth2client import client
from oauth2client import file
from oauth2client import tools
def get_service(api_name, api_version, scope, key_file_location,
service_account_email):
"""Get a service that communicates to a Google API.
Args:
api_name: The name of the api to connect to.
api_version: The api version to connect to.
scope: A list auth scopes to authorize for the application.
key_file_location: The path to a valid service account p12 key file.
service_account_email: The service account email address.
Returns:
A service that is connected to the specified API.
"""
credentials = ServiceAccountCredentials.from_p12_keyfile(
service_account_email, key_file_location, scopes=scope)
http = credentials.authorize(httplib2.Http())
# Build the service object.
service = build(api_name, api_version, http=http)
return service
def get_first_profile_id(service):
# Use the Analytics service object to get the first profile id.
# Get a list of all Google Analytics accounts for this user
accounts = service.management().accounts().list().execute()
if accounts.get('items'):
# Get the first Google Analytics account.
account = accounts.get('items')[0].get('id')
# Get a list of all the properties for the first account.
properties = service.management().webproperties().list(
accountId=account).execute()
if properties.get('items'):
# Get the first property id.
property = properties.get('items')[0].get('id')
# Get a list of all views (profiles) for the first property.
profiles = service.management().profiles().list(
accountId=account,
webPropertyId=property).execute()
if profiles.get('items'):
# return the first view (profile) id.
return profiles.get('items')[0].get('id')
return None
def get_results(service, profile_id):
# Use the Analytics Service Object to query the Core Reporting API
# for the number of sessions within the past seven days.
return service.data().ga().get(
ids='ga:' + profile_id,
start_date='7daysAgo',
end_date='today',
metrics='ga:sessions').execute()
def print_results(results):
# Print data nicely for the user.
if results:
print 'View (Profile): %s' % results.get('profileInfo').get('profileName')
print 'Total Sessions: %s' % results.get('rows')[0][0]
else:
print 'No results found'
def main():
# Define the auth scopes to request.
scope = ['https://www.googleapis.com/auth/analytics.readonly']
# Use the developer console and replace the values with your
# service account email and relative location of your key file.
service_account_email = '<Replace with your service account email address.>'
key_file_location = '<Replace with /path/to/generated/client_secrets.p12>'
# Authenticate and construct service.
service = get_service('analytics', 'v3', scope, key_file_location,
service_account_email)
profile = get_first_profile_id(service)
print_results(get_results(service, profile))
if __name__ == '__main__':
main()
Answer: Comment out (or remove) the following line:
profile = get_first_profile_id(service)
In the next line enter the id of the profile you want to query manually as the
second parameter
print_results(get_results(service, '123456789'))
To get the profile id you can either visit the [query explorer](https://ga-
dev-tools.appspot.com/query-explorer/), a nice Google tool that allows ad hoc
queries against your authenticated accounts (i.e. you need to be logged in with
the Google Account that has access to Analytics). You can get the profile id
from the "ids" field:
[](http://i.stack.imgur.com/szMnP.png)
Or go to your Analytics account, and in the reports look at the URL. It will
look like
https://analytics.google.com/analytics/web/?authuser=0#report/defaultid/a1111110w65439246p123456789/
The profile id is at the end of the url (after the "p" character).
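If you would rather keep the lookup in code instead of hard-coding the view id, here is a sketch (the `target_account_id` value is a placeholder you would replace with your own account id) that reuses the same Management API calls from the question:
    def get_profile_id_for_account(service, target_account_id):
        # List the properties of the chosen account and return its first view (profile) id.
        properties = service.management().webproperties().list(
            accountId=target_account_id).execute()
        if properties.get('items'):
            property_id = properties.get('items')[0].get('id')
            profiles = service.management().profiles().list(
                accountId=target_account_id,
                webPropertyId=property_id).execute()
            if profiles.get('items'):
                return profiles.get('items')[0].get('id')
        return None

    profile = get_profile_id_for_account(service, '12345678')  # placeholder account id
    print_results(get_results(service, profile))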
|
Cannot see all tabs in ttk.Notebook
Question: I have some trouble with the tabs from the ttk Notebook class in python 2.7. I
cannot see all the tabs I create.
I made a minimal code to view the problem:
from Tkinter import *
import ttk
root = Tk()
nb = ttk.Notebook(root, width=320, height=240)
nb.pack(fill=BOTH, expand=1)
page0 = Frame(nb)
page1 = Frame(nb)
page2 = Frame(nb)
page3 = Frame(nb)
page4 = Frame(nb)
nb.add(page0, text="0")
nb.add(page1, text="1")
nb.add(page2, text="2")
nb.add(page3, text="3")
nb.add(page4, text="4")
root.mainloop()
All I can see is

I tried to change the number of tabs and I noticed the size of the top tab bar
changes and unless there's only one single lonely tab, I cannot see all of
them, as you can see:

What I tried that didn't do anything:
* Setting tabs width
* Moving the .pack() around
* Adding .pack() to the tabs
* Using ttk.Frame instead of tk.Frame
* Googling for a similar problem
What I tried that worked but isn't what I want:
* Not using tabs (too many stuff to show)
* Having only one tab
I'll appreciate any help, thanks!
Answer: So I did fix your issue; however, I have no idea why tk is doing this. I solved
this tab overlapping by increasing the length of the tab text. I changed this
portion of your code:
nb.add(page0, text="long_name1")
nb.add(page1, text="long_name2")
nb.add(page2, text="long_name3")
nb.add(page3, text="long_name4")
nb.add(page4, text="long_name5")
Once again I don't know why tk does this! Someone that is more experienced
with tk could probably tell you why.
|
Multi-threaded asyncio in Python
Question: I'm currently doing my first steps with asyncio in Python 3.5 and there is one
problem that's bugging me. Obviously I haven't fully understood coroutines...
Here is a simplified version of what I'm doing.
In my class I have an open() method that creates a new thread. Within that
thread I create a new event loop and a socket connection to some host. Then I
let the loop run forever.
def open(self):
# create thread
self.thread = threading.Thread(target=self._thread)
self.thread.start()
# wait for connection
while self.protocol is None:
time.sleep(0.1)
def _thread(self):
# create loop, connection and run forever
self.loop = asyncio.new_event_loop()
coro = self.loop.create_connection(lambda: MyProtocol(self.loop),
'somehost.com', 1234)
self.loop.run_until_complete(coro)
self.loop.run_forever()
Stopping the connection is now quite simple, I just stop the loop from the
main thread:
loop.call_soon_threadsafe(loop.stop)
Unfortunately I need to do some cleanup, especially I need to empty a queue
before disconnecting from the server. So I tried something like this stop()
method in MyProtocol:
class MyProtocol(asyncio.Protocol):
def __init__(self, loop):
self._loop = loop
self._queue = []
async def stop(self):
# wait for all queues to empty
while self._queue:
await asyncio.sleep(0.1)
# disconnect
self.close()
self._loop.stop()
The queue gets emptied from within the protocol's data_received() method, so I
just want to wait for that to happen using the while loop with the
asyncio.sleep() call. Afterwards I close the connection and stop the loop.
But how do I call this method from the main thread and wait for it? I tried
the following, but none of them seem to work (protocol is the currently used
instance of MyProtocol):
loop.call_soon_threadsafe(protocol.stop)
loop.call_soon_threadsafe(functools.partial(asyncio.ensure_future, protocol.stop(), loop=loop))
asyncio.ensure_future(protocol.stop(), loop=loop)
Can anyone please help me here? Thanks!
Answer: Basically you want to schedule coroutine on loop of different thread. You
could use
[`run_coroutine_threadsafe`](https://docs.python.org/3/library/asyncio-
task.html#asyncio.run_coroutine_threadsafe):
future = asyncio.run_coroutine_threadsafe(protocol.stop(), loop=loop)
future.result() # wait for results
Or the old style `async` like in <http://stackoverflow.com/a/32084907/681044>
import asyncio
from threading import Thread
loop = asyncio.new_event_loop()
def f(loop):
asyncio.set_event_loop(loop)
loop.run_forever()
t = Thread(target=f, args=(loop,))
t.start()
@asyncio.coroutine
def g():
yield from asyncio.sleep(1)
print('Hello, world!')
loop.call_soon_threadsafe(asyncio.async, g())
|
Getting error "cannot marshal None" in spite of adding allow_none=True while using XMLRPC in Python
Question: I've tried to create a simple download and upload system using XMLRPC in
Python.
Here is the code for the client (name this file client.py):
import sys
import xmlrpclib
import os
def return_pause():
"""Used for creating a pause during input"""
raw_input("\n\tPress enter to continue")
def mod_file_download(file_name, local_port, remote_proxy, local_proxy):
"""Sending details to remote node which will send file to local node"""
#print "till here"
#print "{%s}\t{%s}" % (file_name,local_proxy)
remote_proxy.mod_file_transfer(file_name, local_proxy)
def mod_file_upload(file_path, file_name, remote_proxy):
"""Used for sending files to a receiver. Sent file will always have the name file_1.txt"""
new_file_name = "file_1.txt"
with open(file_path, "rb") as handle:
bin_data = xmlrpclib.Binary(handle.read())
remote_proxy.mod_file_receive(bin_data, new_file_name)
return True
##MAIN MODULE STARTS HERE##
# Connection details of remote node
local_port = sys.argv[1]
# Getting details of remote node
remote_port = raw_input("\n\tEnter remote port ID : ")
# Creating connection details of remote node
remote_proxy = xmlrpclib.ServerProxy("http://localhost:" + remote_port + "/")
# Creating connection details of local node
local_proxy = xmlrpclib.ServerProxy("http://localhost:" + local_port + "/")
while True:
os.system('clear')
print "\t. : Collab Menu for %s : .\n" % local_port
print "\tSearch & download ...[1]"
print "\tUpload ...[2]"
print "\tExit ...[0]"
input_val = raw_input("\n\n\tEnter option : ")
if input_val == "1":
file_name = raw_input("\n\tEnter name of file to be downloaded : ")
mod_file_download(file_name, local_port, remote_proxy, local_proxy)
return_pause()
elif input_val == "2":
file_name = raw_input("\n\tEnter name of file to be uploaded : ")
file_path = "./" + file_name
mod_file_upload(file_path, file_name, remote_proxy, local_proxy)
return_pause()
elif input_val == "0":
print "\tExiting"
break
else:
print "\tIncorrect option value"
print "\tTry again..."
return_pause()
os.system('clear')
And here is the code for the listener (name this file as listener.py)
import sys
import xmlrpclib
from SimpleXMLRPCServer import SimpleXMLRPCServer
def mod_file_transfer(file_name, requestor_proxy):
"""Initiating the file transfer"""
print "[mod_file_transfer fired]"
file_path = "./" + file_name
print requestor_proxy
with open(file_path, "rb") as handle:
bin_data = xmlrpclib.Binary(handle.read())
# Connecting to requestor's server
requestor_proxy.mod_file_download_receive(bin_data, file_name)
return True
def mod_file_receive(bin_data, file_name):
"""Used to receive a file upon a request of an upload"""
print "[mod_file_receive fired]"
new_file_name = "./" + file_name
with open(new_file_name, "wb") as handle:
handle.write(bin_data.data)
return True
def mod_file_download_receive(bin_data, file_name):
"""Used to receive a file upon request of a download"""
print "[mod_file_download_receive fired]"
new_file_name = "./" + file_name + str(1)
with open(new_file_name, "wb") as handle:
handle.write(bin_data.data)
return True
##MAIN MODULE STARTS HERE##
local_port = sys.argv[1]
# Declared an XMLRPC server
node = SimpleXMLRPCServer(("localhost", int(local_port)), logRequests=True, allow_none=True)
print "Listening on port %s..." % local_port
# Registered a list of functions
node.register_function(mod_file_transfer, 'mod_file_transfer')
node.register_function(mod_file_receive, 'mod_file_receive')
node.register_function(mod_file_download_receive, 'mod_file_download_receive')
# Initialized the XMLRPC server
node.serve_forever()
**How to start the system?**
1. Place both the files in the same directory
2. Execute the following commands
3. python listener 9000
4. python listener 9500
5. python client 9000 (then give remote client port as 9500 as input)
6. python client 9500 (then give remote client port as 9000 as input)
File upload is working fine
_But file downloading is not working_
It's giving me the following error
Traceback (most recent call last):
File "collab_client.py", line 57, in <module>
mod_file_download(file_name, local_port, remote_proxy, local_proxy)
File "collab_client.py", line 17, in mod_file_download
remote_proxy.mod_file_transfer(file_name, local_proxy)
File "/usr/lib/python2.7/xmlrpclib.py", line 1240, in __call__
return self.__send(self.__name, args)
File "/usr/lib/python2.7/xmlrpclib.py", line 1593, in __request
allow_none=self.__allow_none)
File "/usr/lib/python2.7/xmlrpclib.py", line 1091, in dumps
data = m.dumps(params)
File "/usr/lib/python2.7/xmlrpclib.py", line 638, in dumps
dump(v, write)
File "/usr/lib/python2.7/xmlrpclib.py", line 660, in __dump
f(self, value, write)
File "/usr/lib/python2.7/xmlrpclib.py", line 762, in dump_instance
self.dump_struct(value.__dict__, write)
File "/usr/lib/python2.7/xmlrpclib.py", line 741, in dump_struct
dump(v, write)
File "/usr/lib/python2.7/xmlrpclib.py", line 660, in __dump
f(self, value, write)
File "/usr/lib/python2.7/xmlrpclib.py", line 762, in dump_instance
self.dump_struct(value.__dict__, write)
File "/usr/lib/python2.7/xmlrpclib.py", line 741, in dump_struct
dump(v, write)
File "/usr/lib/python2.7/xmlrpclib.py", line 660, in __dump
f(self, value, write)
File "/usr/lib/python2.7/xmlrpclib.py", line 720, in dump_array
dump(v, write)
File "/usr/lib/python2.7/xmlrpclib.py", line 660, in __dump
f(self, value, write)
File "/usr/lib/python2.7/xmlrpclib.py", line 664, in dump_nil
raise TypeError, "cannot marshal None unless allow_none is enabled"
TypeError: cannot marshal None unless allow_none is enabled
**But I already gave the option `allow_none=True` in the listener file.**
**Where am I going wrong?**
Answer: I found it after much headache. It seems that **connection details cannot be
sent or marshalled**. In the function `mod_file_transfer` I tried to send the
client connection details as an object (so that the server knows to whom it has to
send the file), which was causing the error.
I simply sent the client connection details as a string and it worked. Thanks
to me!
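Concretely, the change amounts to marshalling a plain string (for example the requestor's port) and letting the listener rebuild the proxy itself; here is a sketch of that idea, keeping the names from the question (the exact refactor is my assumption, not the original poster's code):
    # client.py -- send the local port (a string) instead of the ServerProxy object
    remote_proxy.mod_file_transfer(file_name, local_port)

    # listener.py -- rebuild the proxy from the string that was marshalled
    def mod_file_transfer(file_name, requestor_port):
        """Initiate the file transfer using the requestor's port."""
        requestor_proxy = xmlrpclib.ServerProxy("http://localhost:" + str(requestor_port) + "/")
        file_path = "./" + file_name
        with open(file_path, "rb") as handle:
            bin_data = xmlrpclib.Binary(handle.read())
        requestor_proxy.mod_file_download_receive(bin_data, file_name)
        return True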
|
My escape event isn't working
Question: I'm trying to make a program so that when I press Escape in the window, a
label "You pressed Escape" appears.
from tkinter import *
class the_Esc_test(Frame):
def __init__(self):
Frame.__init__(self)
self.start_number=0
self.pack(expand=YES, fill=BOTH)
self.master.title("Esc program")
self.master.geometry ("600x600")
self.bind("<Escape>", self.EscPressed)
def EscPressed(self):
self.start_number+=1
if self.start_number==1:
self.lbl1=Label(self, text="You pressed Escape")
self.lbl1.grid(row=6, column=6, sticky=W)
elif self.start_number==2:
self.lbl1.grid_forget()
self.start_number=0
start=the_Esc_test()
It doesn't work, but Python will not show any error.
Can someone show me why?
Answer: I was able to get it to work like so (the key changes are binding on `self.master` rather than the frame, accepting the `event` argument in the callback, and calling `mainloop()`):
from Tkinter import *
class the_Esc_test(Frame):
def __init__(self):
Frame.__init__(self)
self.start_number=0
self.pack(expand=YES, fill=BOTH)
self.master.title("Esc program")
self.master.geometry ("600x600")
self.master.bind("<Escape>", self.EscPressed)
def EscPressed(self, event):
print "called", event.keysym
start=the_Esc_test()
start.mainloop()
|
Python: doing multiple column aggregation in pandas
Question: I have a dataframe where I want to do multiple column aggregations in pandas.
import pandas as pd
import numpy as np
df = pd.DataFrame({'ser_no': [1, 1, 1, 2, 2, 2, 2, 3, 3, 3],
'CTRY_NM': ['a', 'a', 'b', 'e', 'e', 'a', 'b', 'b', 'b', 'd'],
'lat': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
'long': [21, 22, 23, 24, 25, 26, 27, 28, 29, 30]})
df2 = df.groupby(['ser_no', 'CTRY_NM']).lat.agg({'avg_lat': np.mean})
With this code, I get the mean for `lat`. I would also like to find the mean
for `long`.
I tried `df2 = df.groupby(['ser_no', 'CTRY_NM']).lat.agg({'avg_lat':
np.mean}).long.agg({'avg_long': np.mean})` but this produces
> AttributeError: 'DataFrame' object has no attribute 'long'
If I just do `avg_long`, the code works as well.
df2 = df.groupby(['ser_no', 'CTRY_NM']).long.agg({'avg_long': np.mean})
In[2]: df2
Out[42]:
avg_long
ser_no CTRY_NM
1 a 21.5
b 23.0
2 a 26.0
b 27.0
e 24.5
3 b 28.5
d 30.0
Is there a way to do this in one step or is this something I have to do
separately and join back later?
Answer: I think it is simpler to use [`GroupBy.mean`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.core.groupby.GroupBy.mean.html):
print df.groupby(['ser_no', 'CTRY_NM']).mean()
lat long
ser_no CTRY_NM
1 a 1.5 21.5
b 3.0 23.0
2 a 6.0 26.0
b 7.0 27.0
e 4.5 24.5
3 b 8.5 28.5
d 10.0 30.0
Or, if you need to define the columns for aggregating:
print df.groupby(['ser_no', 'CTRY_NM']).agg({'lat' : 'mean', 'long' : 'mean'})
lat long
ser_no CTRY_NM
1 a 1.5 21.5
b 3.0 23.0
2 a 6.0 26.0
b 7.0 27.0
e 4.5 24.5
3 b 8.5 28.5
d 10.0 30.0
More info in [docs](http://pandas.pydata.org/pandas-
docs/stable/groupby.html#aggregation).
EDIT:
If you need to rename the column names - that is, flatten the `MultiIndex` in
`columns` - you can use a `list comprehension`:
import pandas as pd
df = pd.DataFrame({'ser_no': [1, 1, 1, 2, 2, 2, 2, 3, 3, 3],
'CTRY_NM': ['a', 'a', 'b', 'e', 'e', 'a', 'b', 'b', 'b', 'd'],
'lat': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
'long': [21, 22, 23, 24, 25, 26, 27, 28, 29, 30],
'date':pd.date_range(pd.to_datetime('2016-02-24'),
pd.to_datetime('2016-02-28'), freq='10H')})
print df
CTRY_NM date lat long ser_no
0 a 2016-02-24 00:00:00 1 21 1
1 a 2016-02-24 10:00:00 2 22 1
2 b 2016-02-24 20:00:00 3 23 1
3 e 2016-02-25 06:00:00 4 24 2
4 e 2016-02-25 16:00:00 5 25 2
5 a 2016-02-26 02:00:00 6 26 2
6 b 2016-02-26 12:00:00 7 27 2
7 b 2016-02-26 22:00:00 8 28 3
8 b 2016-02-27 08:00:00 9 29 3
9 d 2016-02-27 18:00:00 10 30 3
df2=df.groupby(['ser_no','CTRY_NM']).agg({'lat':'mean','long':'mean','date':[min,max,'count']})
df2.columns = ['_'.join(col) for col in df2.columns]
print df2
lat_mean date_min date_max date_count \
ser_no CTRY_NM
1 a 1.5 2016-02-24 00:00:00 2016-02-24 10:00:00 2
b 3.0 2016-02-24 20:00:00 2016-02-24 20:00:00 1
2 a 6.0 2016-02-26 02:00:00 2016-02-26 02:00:00 1
b 7.0 2016-02-26 12:00:00 2016-02-26 12:00:00 1
e 4.5 2016-02-25 06:00:00 2016-02-25 16:00:00 2
3 b 8.5 2016-02-26 22:00:00 2016-02-27 08:00:00 2
d 10.0 2016-02-27 18:00:00 2016-02-27 18:00:00 1
long_mean
ser_no CTRY_NM
1 a 21.5
b 23.0
2 a 26.0
b 27.0
e 24.5
3 b 28.5
d 30.0
|
Python sklearn.datasets.dump_svmlight_file failed to output the right index of column
Question: I want to execute SVM light and SVM rank,
so I need to process my data into the format of SVM light.
But I had a big problem....
My Python codes are below:
import pandas as pd
import numpy as np
from sklearn.datasets import dump_svmlight_file
self.df = pd.DataFrame()
self.df['patent_id'] = patent_id_list
self.df['Target'] = class_list
self.df['backward_citation'] = backward_citation_list
self.df['uspc_originality'] = uspc_originality_list
self.df['science_linkage'] = science_linkage_list
self.df['sim_bc_structure'] = sim_bc_structure_list
self.df['claim_num'] = claim_num_list
self.qid = dataset_list
X = self.df[np.setdiff1d(self.df.columns, ['patent_id','Target'])]
y = self.df.Target
dump_svmlight_file(X,y,'test.dat',zero_based=False, query_id=self.qid,multilabel=False)
The output file "test.dat" is look like this: [](http://i.stack.imgur.com/OgQTo.png)
But the real data is look like this: [](http://i.stack.imgur.com/kqH6c.png)
I got a wrong index....
Take first instance for example, the value of column 1 is 7, and the values of
column 2~4 are zeros, the value of column 5 is 2....
So my expected result is look like this:
1 qid:1 1:7 **5:2**
but the column index of output file are totally wrong....
and unfortunately... I cannot figure out where is the problem occur....
I cannot fix this problem for a long time....
Thank you for help!!
Answer: I changed the data structure: I used np.array to produce the array-like input.
Finally, I succeeded!
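A minimal sketch of that change (the column list here is an assumption based on the
DataFrame built in the question, not the poster's exact code):
    import numpy as np
    from sklearn.datasets import dump_svmlight_file
    # Build plain numeric arrays instead of passing DataFrame slices directly
    feature_cols = ['backward_citation', 'uspc_originality', 'science_linkage',
                    'sim_bc_structure', 'claim_num']
    X = np.asarray(self.df[feature_cols], dtype=float)  # shape (n_samples, n_features)
    y = np.asarray(self.df['Target'])                   # shape (n_samples,)
    qid = np.asarray(self.qid)                          # one query id per sample
    dump_svmlight_file(X, y, 'test.dat', zero_based=False, query_id=qid, multilabel=False)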
|
Returning different values based on where function is called from in Python
Question: I have two functions, each of which call a third function, but expect a
slightly different thing returned back.
All functions are in separate files. Files A and B can NOT import one another,
so function A and B can not call one another. What is the best way to organize
this?
I would like to avoid duplicating the function, since most of it is identical,
to stay in line with DRY principals.
Here is some pseudo-code for what I have, and how I am thinking the program
would be structured:
#FileA.py
def functionA():
useableData = functionC(rawData)
-
#FileB.py
def functionB():
useableData = functionC(rawData)
-
#FileC.py
def functionC(rawdata):
*Extensive processing/formatting of data*
if C run from A:
return useableData
if C run from B:
*A bit more processing of data*
return useableData
The only solution I can think of is to pass a second variable along with
`rawdata`, a string that says the name of the function it came from. But this
seems inelegant to me.
Is there an ideal way to do this? Perhaps not involving checking where the
function was called from, but still allowing `functionC` to return slightly
different results depending on where the data is going to be returned to. I
might be able to restructure the code a bit, but all three functions need to
remain in their current files.
Answer: I think your approach is wrong. What I would do is this
def functionA():
useabledata = functionC(rawData)
def functionB():
useabledata = functionD(functionC(rawData))
def functionC(rawData):
*Extensive processing/formatting of data*
def functionD(partiallyProcessedData):
*A bit more processing of data*
Another option, as tobias_k said, is to have an optional second parameter
def functionC(rawData, moreProcessing=False):
*Extensive processing*
if not moreProcessing:
return data
*more processing*
return data
And call it like
def functionA()
useabledata = functionC(rawData)
def functionB()
useabledata = functionC(rawData, True)
An even more encapsulated way to do it would be
def functionA():
useabledata = functionC(rawData)
def functionB():
useabledata = functionD(rawData)
def functionC(rawData):
*Extensive processing/formatting of data*
def functionD(rawData):
partialProcessedData = functionC(rawData)
*A bit more processing of data*
|
Dynamic matplotlib pyside widget not displaying
Question: I'm writing a python application using pyside and matplotlib. Following a
combination of [this
tutorial](http://matplotlib.org/examples/user_interfaces/embedding_in_qt4.html)
and [this SO post](http://stackoverflow.com/a/9082596/160300), I have created
a matplotlib widget that I can successfully add to a parent. However when I go
to actually add data to it, nothing seems to get displayed.
If I add static data like the SO post had, it shows up, but when I change it
to update on the fly (currently every second on a timer, but it will
eventually be using a signal from another class), I never get anything but the
empty axes to appear. I suspect that I'm missing a call to force a draw or
invalidate or that there is something wrong with the way I'm calling
update_datalim (though the values that get passed to it seem correct).
from PySide import QtCore, QtGui
import matplotlib
import random
matplotlib.use('Qt4Agg')
matplotlib.rcParams['backend.qt4']='PySide'
from matplotlib import pyplot as plt
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.figure import Figure
from matplotlib.patches import Rectangle
from collections import namedtuple
DataModel = namedtuple('DataModel', ['start_x', 'start_y', 'width', 'height'])
class BaseWidget(FigureCanvas):
def __init__(self, parent=None, width=5, height=4, dpi=100):
fig = Figure(figsize=(width, height), dpi=dpi)
self.axes = fig.add_subplot(111)
# We want the axes cleared every time plot() is called
self.axes.hold(False)
self.axes.set_xlabel('X Label')
self.axes.set_ylabel('Y Label')
self.axes.set_title('My Data')
FigureCanvas.__init__(self, fig)
self.setParent(parent)
FigureCanvas.setSizePolicy(self,
QtGui.QSizePolicy.Expanding,
QtGui.QSizePolicy.Expanding)
FigureCanvas.updateGeometry(self)
class DynamicWidget(BaseWidget):
def set_data(self, the_data):
self.axes.clear()
xys = list()
cmap = plt.cm.hot
for datum in the_data:
bottom_left = (datum.start_x, datum.start_y)
top_right = (bottom_left[0] + datum.width, bottom_left[1] + datum.height)
rect = Rectangle(
xy=bottom_left,
width=datum.width, height=datum.height, color=cmap(100)
)
xys.append(bottom_left)
xys.append(top_right)
self.axes.add_artist(rect)
self.axes.update_datalim(xys)
self.axes.figure.canvas.draw_idle()
class RandomDataWidget(DynamicWidget):
def __init__(self, *args, **kwargs):
DynamicWidget.__init__(self, *args, **kwargs)
timer = QtCore.QTimer(self)
timer.timeout.connect(self.generate_and_set_data)
timer.start(1000)
def generate_and_set_data(self):
fake_data = [DataModel(
start_x=random.randint(1, 100),
width=random.randint(20, 40),
start_y=random.randint(80, 160),
height=random.randint(20, 90)
) for i in range(100)]
self.set_data(fake_data)
**Edit:** I'm suspecting that there's an issue with updating the limits of the
plot. When running the above code, the plot opens with limits of 0 and 1 on
both the x and y axis. Since none of my generated data falls into that range,
I created another subclass of `DynamicWidget` that plots only data between 0
and 1 (the same data from the linked SO post). When instantiating the class
below, the data shows up successfully. Do I need to do something more than
calling `update_datalim` to get the graph to re-bound itself?
class StaticWidget(DynamicWidget):
def __init__(self):
DynamicWidget.__init__(self)
static_data = [
DataModel(0.5, 0.05, 0.2, 0.05),
DataModel(0.1, 0.2, 0.7, 0.2),
DataModel(0.3, 0.1, 0.8, 0.1)
]
self.set_data(static_data)
Answer: Yes, `update_datalim` only updates the bounding box that is kept internally by
the axes. You also need to enable auto scaling for it to be used. Add
`self.axes.autoscale(enable=True)` after the `self.axes.clear()` statement and
it will work. Or you can set the axes' range to a fixed value by using
`self.axes.set_xlim` and `self.axes.set_ylim`.
**edit:** here is my code, which works for me
from PySide import QtCore, QtGui
import matplotlib
import random, sys
matplotlib.use('Qt4Agg')
matplotlib.rcParams['backend.qt4']='PySide'
from matplotlib import pyplot as plt
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.figure import Figure
from matplotlib.patches import Rectangle
from collections import namedtuple
DataModel = namedtuple('DataModel', ['start_x', 'start_y', 'width', 'height'])
class BaseWidget(FigureCanvas):
def __init__(self, parent=None, width=5, height=4, dpi=100):
fig = Figure(figsize=(width, height), dpi=dpi)
self.axes = fig.add_subplot(111)
# We want the axes cleared every time plot() is called
self.axes.hold(False)
#self.axes.autoscale(enable=True)
self.axes.set_xlabel('X Label')
self.axes.set_ylabel('Y Label')
self.axes.set_title('My Data')
FigureCanvas.__init__(self, fig)
self.setParent(parent)
FigureCanvas.setSizePolicy(self,
QtGui.QSizePolicy.Expanding,
QtGui.QSizePolicy.Expanding)
FigureCanvas.updateGeometry(self)
class DynamicWidget(BaseWidget):
def set_data(self, the_data):
self.axes.clear()
self.axes.autoscale(enable=True)
#self.axes.set_xlim(0, 300)
#self.axes.set_ylim(0, 300)
xys = list()
cmap = plt.cm.hot
for datum in the_data:
print datum
bottom_left = (datum.start_x, datum.start_y)
top_right = (bottom_left[0] + datum.width, bottom_left[1] + datum.height)
rect = Rectangle(
xy=bottom_left,
width=datum.width, height=datum.height, color=cmap(100)
)
xys.append(bottom_left)
xys.append(top_right)
self.axes.add_artist(rect)
self.axes.update_datalim(xys)
self.axes.figure.canvas.draw_idle()
class RandomDataWidget(DynamicWidget):
def __init__(self, *args, **kwargs):
DynamicWidget.__init__(self, *args, **kwargs)
timer = QtCore.QTimer(self)
timer.timeout.connect(self.generate_and_set_data)
timer.start(1000)
def generate_and_set_data(self):
fake_data = [DataModel(
start_x=random.randint(1, 100),
width=random.randint(20, 40),
start_y=random.randint(80, 160),
height=random.randint(20, 90)) for i in range(100)]
self.set_data(fake_data)
print "done:...\n\n"
def main():
qApp = QtGui.QApplication(sys.argv)
aw = RandomDataWidget()
aw.show()
aw.raise_()
sys.exit(qApp.exec_())
if __name__ == "__main__":
main()
|
How can I get only the latest file/files created/modified on S3 location through python
Question: Using boto I tried the code below:
from boto.s3.connection import S3Connection
conn = S3Connection('XXX', 'YYYY')
bucket = conn.get_bucket('myBucket')
file_list = bucket.list('just/a/prefix/')
but I am unable to get the length of the list or the last element of
`file_list`, as it is a `BucketListResultSet` type. Please suggest a solution for
this scenario.
Answer: You are trying to use the `boto` library, which is rather obsolete and no longer
maintained; the number of issues with it is growing.
Better to use the actively developed `boto3`.
First, let us define parameters of our search:
>>> bucket_name = "bucket_of_m"
>>> prefix = "region/cz/"
Import `boto3` and create `s3`, representing the S3 resource:
>>> import boto3
>>> s3 = boto3.resource("s3")
Get the bucket:
>>> bucket = s3.Bucket(name=bucket_name)
>>> bucket
s3.Bucket(name='bucket_of_m')
Define filter for objects with given prefix:
>>> res = bucket.objects.filter(Prefix=prefix)
>>> res
s3.Bucket.objectsCollection(s3.Bucket(name='bucket_of_m'), s3.ObjectSummary)
and iterate over it:
>>> for obj in res:
... print obj.key
... print obj.size
... print obj.last_modified
...
Each `obj` is an `ObjectSummary` (not an `Object` itself), but it holds enough to learn
something about it:
>>> obj
s3.ObjectSummary(bucket_name='bucket_of_m', key=u'region/cz/Ostrava/Nadrazni.txt')
>>> type(obj)
boto3.resources.factory.s3.ObjectSummary
You can get Object from it and use it as you need:
>>> o = obj.Object()
>>> o
s3.Object(bucket_name='bucket_of_m', key=u'region/cz/rodos/fusion/AdvancedDataFusion.xml')
There are not so many options for filtering, but prefix is available.
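To get only the latest object under that prefix (what the question actually asks for), a
minimal sketch is to pick the maximum `last_modified` (this assumes at least one key
matches the prefix):
    # Newest object under the prefix, by modification time
    latest = max(bucket.objects.filter(Prefix=prefix), key=lambda o: o.last_modified)
    print latest.key, latest.last_modified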
|
Why is my print function printing the () and the "" along with the statement?
Question: _This is my code for a challenge in Python Programming for the Absolute
Beginner_
food = input("What is your favorite entree? ")
dessert = input("What is your favorite dessert? ")
print("\nI am going to assume that your favorite meal is" , food + dessert)
_Instead of printing_
I am going to assume that your favorite meal is
_It is printing_
('\nI am going to assume that your favorite meal is', 'steakcookies')
_What do I need to change?_
Answer: You need to take out the parentheses:
print "\nI am going to assume that your favorite meal is" , food + dessert
In Python2, `print` is a statement, not a function. When you include the
parentheses, `print` treats the arguments as part of a tuple. You should also
be using `raw_input()`, not `input()`. Alternatively, put `from __future__
import print_function` at the beginning of the file.
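A minimal sketch combining both suggestions for Python 2:
    from __future__ import print_function
    food = raw_input("What is your favorite entree? ")
    dessert = raw_input("What is your favorite dessert? ")
    # With print as a function, the parentheses no longer build a tuple
    print("\nI am going to assume that your favorite meal is", food + dessert)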
|
Gunicorn failed to load Flask application
Question: I have a Flask app I am trying to serve via Gunicorn.
I am using virtualenv and python3. If I activate my venv cd to my app base dir
then run:
gunicorn mysite:app
I get:
Starting gunicorn Listening at <http://127.0.0.1:8000>
DEBUG:mysite.settings:>>Config() ... Failed to find application: 'mysite'
Worker exiting Shutting down: master Reason: App failed to load
Looking in /etc/nginx/sites-available I only have the file 'default'. In
sites-enabled I have no file.
In my nginx.conf file I have:
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
App structure:
mysite #this is where I cd to and run gunicorn mysite:app
--manage.py
--/mysite
----settings.py
----__init__.py
in manage.py for mysite I have following:
logger.debug("manage.py entry point")
app = create_app(app_name)
manager = Manager(app)
if __name__ == "__main__":
manager.run()
In
__init__.py file:
def create_app(object_name):
app = Flask(__name__)
#more setup here
return app
In my settings.py in the app directory
class Config(object):
logger.debug(">>Config()") #this logs OK so gunicorn is at least starting in correct directory
From inside the virtualenv if I run
print(sys.path)
I find a path to python and site-packages for this virtualenv.
From what I have read to start gunicorn it's just a matter of installing it
and running gunicorn mysite:app
Running gunicorn from the parent directory of mysite I get the same failed to
find application: 'mysite', App failed to load error, but don't get the
DEBUG...Config() logged (as we are clearly in the wrong directory to start
in). Running gunicorn from mysite/mysite (clearly wrong) I get and Exception
in worker process ereor, ImportError: No module named 'mysite'.
Any clues as to how I can get gunicorn running?
Answer: You're pointing gunicorn at `mysite:app`, which is equivalent to `from mysite
import app`. However, there is no `app` object in the top (`__init__.py`)
level import of `mysite`. Tell gunicorn to call the factory.
gunicorn "mysite:create_app()"
You can pass arguments to the call as well.
gunicorn "mysite:create_app('production')"
Internally, this is equivalent to:
from mysite import create_app
app = create_app('production')
* * *
Alternatively, you can use a separate file that does the setup. In your case,
you already initialized an `app` in `manage.py`.
gunicorn manage:app
|
Python logging levels are behaving inconsistently
Question: I can't understand why the following code does not produce my debug message
even though the effective level is appropriate (output is just `10`)
import logging
l = logging.getLogger()
l.setLevel(logging.DEBUG)
l.debug("Debug Mess!")
l.error(l.getEffectiveLevel())
while when I add this line after the import: `logging.debug("Start...")`
import logging
logging.debug("Start...")
l = logging.getLogger()
l.setLevel(logging.DEBUG)
l.debug("Debug Mess!")
l.error(l.getEffectiveLevel())
it produces following output:
DEBUG:root:Debug Mess!
ERROR:root:10
so even though "Start..." is not shown, it starts to log. Why?
It's on Python 3.5. Thanks
Answer: The top-level `logging.debug(..)` call calls the [`logging.basicConfig()`
function](https://docs.python.org/2/library/logging.html#logging.basicConfig)
for you if no handlers have been configured yet on the root logger.
Because using a call to `logging.getLogger().debug()` does _not_ trigger that
call, you don't see any output because there are no handlers to show the
output on.
The Python 3 version of the `logging` module does have a [`logging.lastResort`
handler](https://docs.python.org/3/library/logging.html#logging.lastResort),
used for when no logging configuration exists, but this handler is configured
to only show messages of level `WARNING` and up, which is why you see your
`ERROR` level message (`10`) printed to STDERR, but not your `DEBUG` level
message. In Python 2 you would get the message _No handlers could be found for
logger "root"_ printed instead, just once for the first attempt to log
anything. I'd not rely on the `lastResort` handler however; instead properly
configure your logging hierarchy with a decent handler configured for your own
needs.
Either call `logging.basicConfig()` yourself, or manually add a handler on the
root logger:
l = logging.getLogger()
l.addHandler(logging.StreamHandler())
The above basically does the same thing as a `logging.basicConfig()` call with
no further arguments. The `StreamHandler()` created this way logs to STDERR
and does not further filter on the message level. Note that a
`logging.basicConfig()` call can also set the logging level for you.
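For example, a minimal sketch that makes the original snippet print the debug message:
    import logging
    # Configure the root logger once, before any other logging calls
    logging.basicConfig(level=logging.DEBUG)
    l = logging.getLogger()
    l.debug("Debug Mess!")  # now printed: DEBUG:root:Debug Mess!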
|
Delete rows pandas Dataframe based on index (multiple criteria) (Python 3.5.1)
Question: Suppose I have a Pandas DataFrame with MultiIndex on rows. How can I delete
rows based on the value of one of the levels of the index based on multiple
criteria?
For example, suppose I have
import pandas as pd
df = {'population': [100, 200, 300, 400, 500, 600, 700, 800]}
arrays = [['NJ', 'NJ', 'NY', 'NY', 'CA', 'CA', 'NV', 'NV'],
['A', 'B', None, 'D', 'E', 'F', None, 'G']]
tuples = list(zip(*arrays))
index = pd.MultiIndex.from_tuples(tuples, names=['state', 'county'])
df = pd.DataFrame(df, index=index)
population
state county
NJ A 100
B 200
NY NaN 300
D 400
CA E 500
F 600
NV NaN 700
G 800
I want to delete all rows where the `county` level of the index is NaN and
also delete it when it is equal to 'D' and 'G'. In other words, I want to end
up with a DataFrame
population
state county
NJ A 100
B 200
D 400
CA E 500
F 600
So the following sort of works:
df = df.iloc[df.index.get_level_values('county') != 'D']
df = df.iloc[df.index.get_level_values('county') != 'G']
But the problem is that in my real use case there are several of these
criteria. Also, I can't seem to find a way to delete NaN's using this method.
Thanks!
Answer: Call [`drop`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.DataFrame.drop.html#pandas.DataFrame.drop) and
pass a list on `level='county` to drop row labels with those values on that
index level:
In [284]:
df.drop(['D','G',np.NaN], level='county')
Out[284]:
population
state county
NJ A 100
B 200
CA E 500
F 600
|
Python: compute average of n-th elements in list of lists with different lengths
Question: Suppose I have the following list of lists:
a = [
[1, 2, 3],
[2, 3, 4],
[3, 4, 5, 6]
]
I want to have the average of each n-th element in the arrays. However, when
wanting to do this in a simple way, Python generated out-of-bounds errors
because of the different lengths. I solved this by giving each array the
length of the longest array, and filling the missing values with None.
Unfortunately, doing this made it impossible to compute an average, so I
converted the arrays into masked arrays. The code shown below works, but it
seems rather cumbersome.
import numpy as np
import numpy.ma as ma
a = [ [1, 2, 3],
[2, 3, 4],
[3, 4, 5, 6] ]
# Determine the length of the longest list
lenlist = []
for i in a:
lenlist.append(len(i))
max = np.amax(lenlist)
# Fill each list up with None's until required length is reached
for i in a:
if len(i) <= max:
for j in range(max - len(i)):
i.append(None)
# Fill temp_array up with the n-th element
# and add it to temp_array
temp_list = []
masked_arrays = []
for j in range(max):
for i in range(len(a)):
temp_list.append(a[i][j])
masked_arrays.append(ma.masked_values(temp_list, None))
del temp_list[:]
# Compute the average of each array
avg_array = []
for i in masked_arrays:
avg_array.append(np.ma.average(i))
print avg_array
Is there a way to do this more quickly? The final list of lists will contain
600000 'rows' and up to 100 'columns', so efficiency is quite important :-).
Answer: [itertools.izip_longest](https://docs.python.org/2/library/itertools.html#itertools.izip_longest)
would do all the padding with None's for you so your code can be reduced to:
import numpy as np
import numpy.ma as ma
from itertools import izip_longest
a = [ [1, 2, 3],
[2, 3, 4],
[3, 4, 5, 6] ]
averages = [np.ma.average(ma.masked_values(temp_list, None)) for temp_list in izip_longest(*a)]
print(averages)
[2.0, 3.0, 4.0, 6.0]
No idea what the fastest way in regard to the numpy logic but this is
definitely going to be a lot more efficient than your own code.
If you wanted a faster pure python solution:
from itertools import izip_longest, imap
a = [[1, 2, 3],
[2, 3, 4],
[3, 4, 5, 6]]
def avg(x):
x = filter(None, x)
return sum(x, 0.0) / len(x)
filt = imap(avg, izip_longest(*a))
print(list(filt))
[2.0, 3.0, 4.0, 6.0]
If you have 0's in the arrays, that won't work, as 0 will be treated as falsey;
you will have to use a list comp to filter in that case, but it will still be
faster:
def avg(x):
x = [i for i in x if i is not None]
return sum(x, 0.0) / len(x)
filt = imap(avg, izip_longest(*a))
|
decrease string in python doesn't works
Question: I have a problem cutting the URL that I get as a result from BeautifulSoup; I've
used this code to retrieve the URL.
import urllib2
from bs4 import BeautifulSoup
url = 'http://192.168.0.184:88/cgi-bin/CGIProxy.fcgi? cmd=snapPicture&usr=USER&pwd=PASS'
html = urllib2.urlopen(url)
soup = BeautifulSoup(html, "html5lib")
imgs = soup.findAll("img")
print imgs
print imgs[1:]
As the result of `print imgs` I get `[<img
src="../snapPic/Snap_20160401-110642.jpg"/>]`. I want to cut the unwanted
characters from this string, so I tried for example `print imgs[1:]`, but the
result is `[]`.
Any tips or solutions? I want to rebuild the imgs string into the correct image
url: `imgs string = <img src="../snapPic/Snap_20160401-110642.jpg"/>`, wanted
result = `http://192.168.0.184:88/snapPic/Snap_20160401-110642.jpg`
Answer: try this
import urllib2
from bs4 import BeautifulSoup
url = 'http://192.168.0.184:88/cgi-bin/CGIProxy.fcgi? cmd=snapPicture&usr=USER&pwd=PASS'
html = urllib2.urlopen(url)
soup = BeautifulSoup(html, "html5lib")
imgs = soup.findAll("img")
print imgs
for img in imgs:
print img["src"].replace("..","http://192.168.0.184:88")
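A slightly more robust variant (just a sketch, not the only way) resolves the relative
`src` against the page URL instead of doing a string replacement:
    import urlparse  # Python 2; use urllib.parse in Python 3
    for img in imgs:
        # "../snapPic/x.jpg" relative to ".../cgi-bin/CGIProxy.fcgi" -> "/snapPic/x.jpg"
        print urlparse.urljoin(url, img["src"])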
|
kivy fatal error due to OpenGL requirement on windows desktop
Question: I have written a windows desktop application using Python 2.7 and kivy that
runs perfectly well from the pyCharm IDE and from the python commandline.
After building a distribution package using PyInstaller, running the
application from the ..\dist\applicdir\ I get a Kivy Fatal Error:
_GL:Minimum required OpenGL version (2.0) NOT found!_
How come? It runs from 2 different angles on the same PC but not from the dist
package on the same PC.
Can you explain to me why in the first two situations I do not get the fatal
Error?
>>>>>>>>>>>>>> EDIT 1 <<<<<<<<<<<<<<<
[INFO ] [Logger ] Record log in ~\.kivy\logs\kivy_16-04-01_99.txt
[INFO ] [Kivy ] v1.9.1
[INFO ] [Python ] v2.7.11 (v2.7.11:6d1b6a68f775, Dec 5 2015,20:32:19) [MSC v.1500 32 bit (Intel)]
[INFO ] [Factory ] 179 symbols loaded
[INFO ] [Image ] Providers: img_tex, img_dds, img_gif, img_sdl2, img_pil (img_ffpyplayer ignored)
[INFO ] [OSC ] using <thread> for socket
[INFO ] [Window ] Provider: sdl2
[WARNING ] The 'fake' fullscreen option has been deprecated, use Window.borderless or the borderless Config option instead.
[INFO ] [GL ] GLEW initialization succeeded
[INFO ] [GL ] OpenGL version <4.0.0 - Build 10.18.10.4176>
[INFO ] [GL ] OpenGL vendor <Intel>
[INFO ] [GL ] OpenGL renderer <Intel(R) HD Graphics 4000>
[INFO ] [GL ] OpenGL parsed version: 4, 0
[INFO ] [GL ] Shading version <4.00 - Build 10.18.10.4176>
[INFO ] [GL ] Texture max size <16384>
[INFO ] [GL ] Texture max units <16>
[INFO ] [Window ] auto add sdl2 input provider
[INFO ] [Window ] virtual keyboard not allowed, single mode, not docked
[INFO ] [Text ] Provider: sdl2
PyInstaller Bootloader 3.x
LOADER: executable is C:\Projects\UP52\PACKAGE\dist\up52creator\up52creator.exe
LOADER: homepath is C:\Projects\UP52\PACKAGE\dist\up52creator
LOADER: _MEIPASS2 is NULL
LOADER: archivename is C:\Projects\UP52\PACKAGE\dist\up52creator\up52creator.exe
LOADER: No need to extract files to run; setting extractionpath to homepath
LOADER: SetDllDirectory(C:\Projects\UP52\PACKAGE\dist\up52creator)
LOADER: Already in the child - running user's code.
LOADER: Python library: C:\Projects\UP52\PACKAGE\dist\up52creator\python27.dll
LOADER: Loaded functions from Python library.
LOADER: Manipulating environment (sys.path, sys.prefix)
LOADER: sys.prefix is C:\Projects\UP52\PACKAGE\dist\UP52CR~1
LOADER: Setting runtime options
LOADER: Initializing python
LOADER: Overriding Python's sys.path
LOADER: Post-init sys.path is C:\Projects\UP52\PACKAGE\dist\up52creator
LOADER: Setting sys.argv
LOADER: setting sys._MEIPASS
LOADER: importing modules from CArchive
LOADER: extracted struct
LOADER: callfunction returned...
LOADER: extracted pyimod01_os_path
LOADER: callfunction returned...
LOADER: extracted pyimod02_archive
LOADER: callfunction returned...
LOADER: extracted pyimod03_importers
LOADER: callfunction returned...
LOADER: Installing PYZ archive with Python modules.
LOADER: PYZ archive: out00-PYZ.pyz
LOADER: Running pyiboot01_bootstrap.py
no mem to add parser accelerators
Traceback (most recent call last):
File "<string>", line 21, in <module>
File "C:\Projects\UP52\qmonos.py", line 56, in showHome
recMgr = Manager()
File "C:\Python27\lib\multiprocessing\__init__.py", line 99, in Manager
m.start()
File "C:\Python27\lib\multiprocessing\managers.py", line 528, in start
self._address = reader.recv()
EOFError
qmonosmain returned -1
LOADER: OK.
LOADER: Cleaning up Python interpreter.
Traceback (most recent call last):
File "<string>", line 21, in <module>
Traceback (most recent call last):
File "<string>", line 21, in <module>
File "C:\Projects\UP52\qmonos.py", line 56, in showHome
recMgr = Manager()
File "C:\Projects\UP52\qmonos.py", line 56, in showHome
File "C:\Python27\lib\multiprocessing\__init__.py", line 99, in Manager
m.start()
recMgr = Manager()
File "C:\Python27\lib\multiprocessing\managers.py", line 528, in start
File "C:\Python27\lib\multiprocessing\__init__.py", line 99, in Manager
m.start()
self._address = reader.recv()
File "C:\Python27\lib\multiprocessing\managers.py", line 528, in start
EOFError
qmonosmain returned -1
LOADER: OK.
LOADER: Cleaning up Python interpreter.
self._address = reader.recv()
EOFError
qmonosmain returned -1
LOADER: OK.
LOADER: Cleaning up Python interpreter.
Answer: Updating the graphics driver solved the problem. It is still a strange situation
that the application built with PyInstaller trips over the OpenGL version check
while running it from the PyCharm IDE or the Python command line does not.
|
python mysql error in query
Question: I want to generate a dynamic table:
columnames=[element[0] for element in bufferdata['data'] ]
for index,element in enumerate(columnames):
columnames[index]=re.sub("[(%./)-]","",element)
tuple(columnames)
querycreatetable='''CREATE TABLE test (ID INT AUTO_INCREMENT,name VARCHAR(50),symbol VARCHAR(10),sector VARCHAR(50),
%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,
%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,
%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,
%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,
%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,
%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,
%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,
%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,
%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,
%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,
%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,%s FLOAT,
%s FLOAT,%s FLOAT,%s FLOAT
)
'''
try:
self.cursor.execute(querycreatetable,columnames)
except MySQLdb.ProgrammingError, e:
try:
print "MySQL Error [%d]: %s" % (e.args[0], e.args[1])
except IndexError:
print "MySQL Error: %s" % str(e)
but i receive this error: MySQL Error [1064]: You have an error in your SQL
syntax; check the manual that corresponds to your MySQL server version for the
right syntax to use near ''SALES in millions' FLOAT,'Earnings per share'
FLOAT,'PE Ratio TTM' FLOAT,'PE Hi' at line 2
does anyone see where the problem is?
Answer: firstly, as told here : [Check for valid SQL column
name](http://stackoverflow.com/questions/4977898/check-for-valid-sql-column-
name)
> SQL identifiers and key words must begin with a letter (a-z, but also
> letters with diacritical marks and non-Latin letters) or an underscore (_).
> Subsequent characters in an identifier or key word can be letters,
> underscores, digits (0-9), or dollar signs ($). Note that dollar signs are
> not allowed in identifiers according to the letter of the SQL standard, so
> their use might render applications less portable
It comes from the PostgreSQL docs, but because PostgreSQL is very close to the "ideal"
SQL syntax, it should be much the same for MySQL... So no parentheses in column
names, no spaces...
And secondly, **Column names are not strings** :
The following syntax is valid:
CREATE TABLE (test VARCHAR(100) NOT NULL, ...)
And the following one is invalid and will throw a syntax error:
CREATE TABLE ('test' VARCHAR(100) NOT NULL, ...)
When you use the `%s` placeholder, the driver treats the value as data (a string),
so it surrounds it with quotes, which is invalid for an identifier...
So to create your table, I suggest a for loop which validates each name (with a
regexp) and simply adds it to the string:
import re
# ...
query = "CREATE TABLE test (ID INT AUTO_INCREMENT,name VARCHAR(50)"
for c in columnames:
        # this regex validates the name: it must begin with an alphabetic char
        # (upper or lower case), and the other characters must be alphanumeric
        # or underscores
        if re.search(r"^[A-Za-z][A-Za-z0-9_]*$", c):
            query += ", " + c + " FLOAT"
        else:
            # If not, we raise a syntax error
            raise SyntaxError("Invalid Column name!!")
query += ");"
And then you can create your table :)
|
How to compare more than 2 Lists in Python?
Question: I am new to Python as well as new on Stack Overflow. Can anyone tell me an
efficient (pythonic) way to compare more than 2 lists? I want to list all
the elements of all 3 lists and display them in such a fashion that the user will be able
to know which element is present in all 3 lists, OR which element is present in List
1 but not in List 2, OR which elements are duplicates. I have done the comparison
using nested loops.
List1 = [10,10,11,12,15,16,18,19]
List2 = [10,11,13,15,16,19,20]
List3 = [10,11,11,12,15,19,21,23]
# Checking whether List 1 value present in List2 and List 3
for l1 in List1:
if l1 in List2:
List2.remove(l1)
if l1 in List3:
List3.remove(l1)
print(l1," ",l1," ",l1)
else:
print(l1," ",l1," ","NA")
else:
if l1 in List3:
List3.remove(l1)
print(l1," ","NA"," ",l1)
else:
print(l1," ","NA"," ","NA")
# Checking whether List 2 value present in List3
for l2 in List2:
if l2 in List3:
List3.remove(l2)
print("NA"," ",l2," ",l2)
else:
print("NA"," ",l2," ","NA")
# Checking for values present only in List 3
for l3 in List3:
print("NA","NA",l3)
--- Output---
List1 List2 List3
10 10 10
10 NA NA
11 11 11
12 NA 12
15 15 15
16 16 NA
18 NA NA
19 19 19
NA 13 NA
NA 20 NA
NA NA 11
NA NA 21
NA NA 23
Is there any better way to compare the Lists ?
Answer:
import pandas as pd
d = {'List1' : pd.Series(List1),'List2' : pd.Series(List2),'List3': pd.Series(List3)}
df = pd.DataFrame(d)
print(df)
***OUTPUT:***
List1 List2 List3
10 NaN NaN
11 11 11
12 NaN 12
15 15 15
16 16 NaN
18 NaN NaN
19 19 19
NaN 13 NaN
Using Pandas you can compare the multiple lists, while the missing cells are
automatically filled with NaN.
Pandas is a data analysis library for Python.
Install pandas with: pip install pandas
Edited :
> Gist Link:
> <https://gist.github.com/gr8Adakron/b51cc060b5e6dcc030261586f7237232>
|
How to build a new list from a list in Python?
Question: I have a Python list that contains a set of objects (a class that has its own
properties, functions and variables). I would like to extract some of the
list elements and create a new list.
My question is: are the new list elements going to remain the same, or will they be
considered new instances of the class when I extract them from the list?
For example: `List layer = [myclass1, myclass2, myclass3]`
I want a new list `layernew = [myclass1]` or any such combination. How can I
extract the items? When I print layer and the new layer directly (using
`newlayer=layer[0:1]`) I see that it gives me the same instance reference of
the object.
Answer: > Are the new list elements going to remain the same or will they be considered new
> instances of the class when I extract them from the list?
They'll be the same. Demo:
class Widget:
def __init__(self, value):
self.value = value
a = [Widget(4), Widget(8), Widget(15)]
b = a[0:1]
print a[0] is b[0]
The output is `True`, so `a[0]` and `b[0]` are references to the same object.
One way to change this behavior is to use the `copy` module's `deepcopy`
method. This will attempt to duplicate the object you give it and return a
referentially distinct object with identical values.
import copy
class Widget:
def __init__(self, value):
self.value = value
a = [Widget(4), Widget(8), Widget(15)]
b = copy.deepcopy(a[0:1])
print a[0] is b[0]
#result: False
|
Python FFMPEG AttributeError: 'Popen' object has no attribute 'proc'
Question: I'm working on a tensorflow project that learns from an audio stream. I'm
trying to open an audio file and store the data in an array using FFMPEG. I'm
following the tutorial [here](http://zulko.github.io/blog/2013/10/04/read-and-
write-audio-files-in-python-using-ffmpeg/)
My code looks like this:
import subprocess as sp
FFMPEG_BIN = "ffmpeg"
try:
if image_file != 'train/rock/.DS_Store':
command = [FFMPEG_BIN,
'-i', image_file,
'-f', 's16le',
'-acodec', 'pcm_s16le',
'-ar', '44100',
'-ac', '2',
'output.png']
pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)
# pipe = sp.Popen(command, stdout=sp.PIPE)
raw_audio = pipe.proc.stdout.read(88200*4)
But I'm getting the error:
AttributeError: 'Popen' object has no attribute 'proc'
Answer: The direct cause of the error is that `sp.Popen(...)` already returns the process
object, so the read should be `pipe.stdout.read(...)` rather than
`pipe.proc.stdout.read(...)`. I am working with `ffmpeg` and `pyaudio`; this code works
for me.
import pyaudio
import subprocess as sp
import numpy
command = [ 'ffmpeg',
'-i', "Filename", # I used a url stream
'-loglevel','error',
'-f', 's16le',
'-acodec', 'pcm_s16le',
'-ar', '44100', # ouput will have 44100 Hz
'-ac', '2', # stereo (set to '1' for mono)
'-']
pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)
p = pyaudio.PyAudio() #PyAudio helps to reproduce raw data in pipe.
stream = p.open(format = pyaudio.paInt16,
channels = 2,
rate = 44100,
output = True)
while True:
raw_audio = pipe.stdout.read(44100*2) #get raw data
stream.write(raw_audio) # reproduce
# Convert raw data in array with numpy
audio_array = numpy.fromstring(raw_audio, dtype="int16")
audio_array = audio_array.reshape((len(audio_array)/2,2))
stream.stop_stream()
stream.close()
In ubuntu you can install `pyaudio` with:
sudo apt-get install python-pyaudio python3-pyaudio
or
pip install pyaudio
|
How to list MAC Address of an AWS EC2 instance with Python and boto3
Question: How do I find out the MAC address of Amazon EC2 instances using
the Python boto3 library?
Answer: Assuming you have one network interface attached to your instances. If you
have more than one network interface attached to your instances, tweak the
code to your needs.
import boto3
ec2 = boto3.resource('ec2')
insts = list(ec2.instances.all())
for inst in insts:
for iface in inst.network_interfaces:
print inst.instance_id, iface.mac_address
|
The result of running time calculated by Python is not correct
Question: I am trying to use time to record the running time of this function, but I think the
result is not correct; sometimes it reports 0s and the result is not
stable. [The first two results are for N=10000, the third one is
N=30000](http://i.stack.imgur.com/kyYpb.png)
import time
def sumOfN(n):
start=time.time()
theSum=0
for i in range(1,n+1):
theSum=theSum+i
end=time.time()
return theSum,end-start
for i in range(5):
print("Sum is %d required %10.7f seconds"%sumOfN(300000))
Answer: According to [the Python
manual](https://docs.python.org/3/library/time.html#time.time):
> ## `time.time()`
>
> Return the time in seconds since the epoch as a floating point number. Note
> that even though the time is always returned as a floating point number,
> **not all systems provide time with a better precision than 1 second**.
> While this function normally returns non-decreasing values, it can return a
> lower value than a previous call if the system clock has been set back
> between the two calls.
(emphasis mine)
It seems the timer resolution of your system is not enough to correctly
measure the elapsed time of the function. It actually looks like the precision
is about 0.016, about 1/60 of a second, which is typical of Windows systems.
Your approach has the following two problems (a minimal sketch addressing both follows
the list):
* `time.time()` returns the current time (as in time of day), which can vary by auto-adjusting processes such as NTP or if someone modifies it (either by hand or via code). _Use[`time.perf_counter()`](https://docs.python.org/3/library/time.html#time.perf_counter) (or `time.clock()` in Python <3.3) instead_.
* You are measuring _one_ execution of the function. This can give you very wrong results due to the non-deterministic nature of garbage collection, bytecode optimization, and other quirks of languages like Python. You should look into the [`timeit`](https://docs.python.org/3/library/timeit.html) module instead.
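A minimal sketch addressing both points (the repetition count is arbitrary):
    import time
    import timeit
    def sumOfN(n):
        theSum = 0
        for i in range(1, n + 1):
            theSum = theSum + i
        return theSum
    # 1) Use a monotonic, high-resolution clock for a single measurement
    start = time.perf_counter()
    sumOfN(300000)
    print("one run took %10.7f seconds" % (time.perf_counter() - start))
    # 2) Let timeit average many runs for a more stable figure
    per_run = timeit.timeit("sumOfN(300000)",
                            setup="from __main__ import sumOfN",
                            number=100) / 100
    print("average over 100 runs: %10.7f seconds" % per_run)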
|
Why I am not able to decrypt what I encrypted with pycrypto?
Question: Here's my code:
Encrypt:
from Crypto.Cipher import AES
import base64
def encryption (privateInfo):
BLOCK_SIZE = 16
PADDING = '{'
pad = lambda s: s + (BLOCK_SIZE - len(s) % BLOCK_SIZE) * PADDING
EncodeAES = lambda c, s: base64.b64encode(c.encrypt(pad(s)))
secret = 'Fr3@k1nP@ssw0rd.'
print('encryption key:', secret)
cipher = AES.new(secret)
encoded = EncodeAES(cipher, privateInfo)
print('Encrypted string:', encoded)
encryption('secret')
The encrypted string is: b'QuCzNmwiVaq1uendvX7P+g=='
Decrypt:
from Crypto.Cipher import AES
import base64
def decryption(encryptedString):
PADDING = '{'
DecodeAES = lambda c, e: c.decrypt(base64.b64decode(e)).rstrip(PADDING)
key = 'Fr3@k1nP@ssw0rd.'
cipher = AES.new(key)
decoded = DecodeAES(cipher, encryptedString)
print(decoded)
decryption("b'QuCzNmwiVaq1uendvX7P+g=='")
The result:
ValueError: Input strings must be a multiple of 16 in length
This is PyCrypto 2.6.1 on Python 3.4; I've installed VC++ 2010 Express as
well. What's really confusing me is that it works perfectly on Python 2.7
Any suggestion appreciated, but note that I'm new to Python.
Answer: Maybe because you have `"` around `"b'QuCzNmwiVaq1uendvX7P+g=='"`.
Change
`decryption("b'QuCzNmwiVaq1uendvX7P+g=='")`
to
`decryption(b'QuCzNmwiVaq1uendvX7P+g==')`
and you should be all set.
|
Python module distribution installs executable to path
Question: In the past I saw Python module distributions on PyPI which installed an
executable to the path when you installed them with `pip`, unfortunately I
can't find one like this anymore.
I wonder how this is possible. Would you do this in your `setup.py`? Can you
get this to work for multiple platforms?
A link to a module doing this would be very helpful as well.
**I'm NOT talking about installing python modules to the python path but
installing _executables to the system path_!**
Answer: Take a look at <http://python-packaging.readthedocs.org/en/latest/command-
line-scripts.html>
> ## The `scripts` Keyword Argument
>
> The first approach is to write your script in a separate file, such as you
> might write a shell script.:
>
>
> funniest/
> funniest/
> __init__.py
> ...
> setup.py
> bin/
> funniest-joke
>
>
> ... The funniest-joke script just looks like this:
>
>
> #!/usr/bin/env python
>
> import funniest
> print funniest.joke()
>
>
> Then we can declare the script in `setup()` like this:
>
>
> setup(
> ...
> scripts=['bin/funniest-joke'],
> ... )
>
>
> When we install the package, `setuptools` will copy the script to our `PATH`
> and make it available for general use.
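The same guide also covers the `entry_points` approach, which tends to be the more
portable option because `setuptools` generates a platform-appropriate wrapper (including
a `.exe` on Windows). A minimal sketch following that guide's example module path:
    setup(
        ...
        entry_points={
            'console_scripts': ['funniest-joke = funniest.command_line:main'],
        },
    )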
|
constructor and method overriding in Python
Question: I am trying to figure out if there is a way to override the `__init__` method
in a python class. I have come up with this, not exactly method overriding but
it has the same effect
class A():
def __init__(self, x):
if isinstance(x, str):
print "calling some text parsing method"
elif isinstance(x, int):
print "calling some number crunching method"
else:
print "Oops"
Is this good practice? Not only for constructors as in this particular
question, but also for other methods.
Answer: That's essentially what you need to do, if the actions for a string argument
are very different from the actions for an integer argument. However, if one
case reduces to the other, then you can define a class method as an alternate
constructor. As a simple example, consider
class A():
def __init__(self, x):
if isinstance(x, str):
self.x = int(x)
elif isinstance(x, int):
self.x = x
else:
raise ValueError("Cannot turn %s into an int" % (x, ))
Here, the integer case is the "fundamental" way to create an instance of `A`;
the string case reduces to turning the string into an integer, then proceding
as in the integer case. You might rewrite this as
class A():
# x is expected to be an integer
def __init__(self, x):
self.x = x
# x is expected to be a string
@classmethod
def from_string(cls, x):
try:
return cls(int(x))
except ValueError:
# This doesn't really do anything except reword the exception; just an example
raise ValueError("Cannot turn %s into an int" % (x, ))
In general, you want to avoid checking for the type of a value, because types
are less important than behavior. For example, `from_string` above doesn't
really expect a string; it just expects something that can be turned into an
`int`. That could be a `str` or a `float`.
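Usage of the two constructors then looks like this:
    a1 = A(42)                  # the fundamental, integer-based constructor
    a2 = A.from_string("42")    # the alternate constructor; ends up calling A(42)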
|
python 3 cleaner code, calculating values from dictionary
Question: Currently my code is working and usable but I think there are a few lines of
code that I do not need. I am inputting a sequence (of DNA) and having the
code calculate some values and returning the values to me. I input the
sequence, and then a concentration, and the melting temperature, dH, dS, and
dG returned to me. Just posting to see if there is any way I could have the
code cleaned up or if you guys think it is good as is. This is for python3 as
well. Thanks for the help!
import math
sequence1 = input("Enter DNA Sequence: ")
sequence2 = [i for i in sequence1[0::1]]
sequenceR = [i for i in sequence2[::-1]]
dnac = input("Enter DNA Concentration (M): ") #Effectively Ct
dnac = float(dnac)
#assume 1M NaOH, though this adjustment is easy to establish if necessary.
first = sequence1[0]
last = sequence1[-1]
sequence = [i+j for i,j in zip(sequence1[0::1], sequence1[1::1])]
navbles={ "AA": (-7.9 ,-22.2 ,-1.0),
"TT": (-7.9 ,-22.2 ,-1.0),
"AT": (-7.2 ,-20.4 ,-0.88),
"TA": (-7.2 ,-21.3 ,-0.58),
"CA": (-8.5 ,-22.7 ,-1.45),
"TG": (-8.5 ,-22.7 ,-1.45),
"GT": (-8.4 ,-22.4 ,-1.44),
"AC": (-8.4 ,-22.4 ,-1.44),
"CT": (-7.8 ,-21.0 ,-1.28),
"AG": (-7.8 ,-21.0 ,-1.28),
"GA": (-8.2 ,-22.2 ,-1.30),
"TC": (-8.2 ,-22.2 ,-1.30),
"CG": (-10.6 ,-27.2 ,-2.17),
"GC": (-9.8 ,-24.4 ,-2.24),
"GG": (-8.0 ,-19.9 ,-1.84),
"CC": (-8.0 ,-19.9 ,-1.84),
"A" : (0 , 0 , 0),
"C" : (0 , 0 , 0),
"G" : (0 , 0 , 0),
"T" : (0 , 0 , 0), }
initiator={ "G": (0.1 ,-2.8, 0.98),
"C": (0.1 ,-2.8, 0.98),
"A": (2.3, 4.1, 1.03),
"T": (2.3, 4.1, 1.03) }
complement = {'A' : 'T', 'T' : 'A', 'G' : 'C', 'C' : 'G'}
#First and last terms, to start off
F1 = initiator[first]
L1 = initiator[last]
dH1 = F1[0]
dH2 = L1[0]
dS1 = F1[1]
dS2 = L1[1]
dG1 = F1[2]
dG2 = L1[2]
R = 1.987 #cal mol K
#answer = (dH1/(dS1 + R*C))+(dH2/(dS2 + R*C))
answerH = dH1 + dH2
answerS = dS1 + dS2
answerG = dG1 + dG2
#the iterative meat
for na in range(len(sequence)):
n = navbles[sequence[na]]
H = n[0]
S = n[1]
G = n[2]
#answer = answer + (H/(S + R*C))
answerG = answerG + G
answerH = answerH + H
answerS = answerS + S
#symmetry check
if sequenceR == sequence2:
symm = "y"
else:
symm = "n"
if symm == "y":
answerS = answerS + -1.4
else:
pass
#complementary check
sequenceC = []
for i in range(len(sequenceR)):
sequenceC.append(complement[sequenceR[i]])
if sequenceC == sequence2:
comp = "y"
else:
comp = "n"
if comp == "n":
C = math.log(dnac / 4)
else:
C = math.log(dnac)
#print(C)
answerT = (1000*answerH)/(answerS + R*C)
print('Tm =', answerT)
print('dH(kcal) = ', answerH)
print('dS(cal) = ', answerS)
print('dG(kcal) = ', answerG)
#print(sequence1)
Answer: I think this looks fine. You could always refactor for performance if you plan
on applying it to really large databases in the future, but sometimes a
straight forward approach is all it takes to get the job done.
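One small tidy-up in that spirit - a sketch only, reusing the `navbles`, `F1` and `L1`
tables defined above - is to sum the three per-pair values in one pass instead of
unpacking them element by element:
    # Each table entry is a (dH, dS, dG) tuple, so zip groups all dH, all dS, all dG
    pair_values = [navbles[p] for p in sequence]
    answerH, answerS, answerG = (sum(vals) for vals in zip(F1, L1, *pair_values))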
|
Raspberry pi:convert fisheye image to normal image using python
Question: I have attached a USB webcam to the Raspberry Pi to capture an image and written
code to send it using mail. It captures the image using the `fswebcam` command, so the
code for capturing the image in the Python script is:
subprocess.Popen(["fswebcam","-r 640x480", "image4.jpg"])
When I press the switch on the Raspberry Pi it captures an image and sends it using
mail, but the problem is that the captured image is a fisheye image and I want to convert
this fisheye image into a normal image. I don't know the command for it or
any code to convert it into a normal image using Python on the Raspberry Pi.
Thanks. This is my code:
import smtplib
import time
import subprocess
from email.MIMEMultipart import MIMEMultipart
from email.MIMEBase import MIMEBase
from email.MIMEText import MIMEText
from email.MIMEImage import MIMEImage
import RPi.GPIO as GPIO
# Define these once; use them twice!
strFrom = '[email protected]'
strTo = '[email protected]'
#create email
# Create the root message and fill in the from, to, and subj$
msgRoot = MIMEMultipart()
msgRoot['Subject'] = 'capture image'
msgRoot['From'] = strFrom
msgRoot['To'] = strTo
GPIO.setmode(GPIO.BCM)
GPIO.setup(4, GPIO.IN)
print "press button to send email"
GPIO.setup(4,GPIO.IN,pull_up_down=GPIO.PUD_UP)
while True:
input=GPIO.input(4)
if input == False:
print "button pressed"
subprocess.Popen(["fswebcam","-r 640x480", "image4.jpg"])
time.sleep(5)
# This example assumes the image is in the current directory
fp = open('image4.jpg', 'rb')
msgImage = MIMEImage(fp.read())
fp.close()
msgRoot.attach(msgImage)
# send mail
s = smtplib.SMTP('smtp.gmail.com',587)
s.starttls()
s.login('[email protected]' , 'password')
s.sendmail(strFrom, strTo, msgRoot.as_string())
s.close()
print "Email sent"
time.sleep(0.2)
* * *
So how do I add the solution provided in both of these links:
<https://github.com/kscottz/dewarp> and <http://www.kscottz.com/dewarped-
panoramic-images-from-a-raspberrypi-camera-module/> into my code above?
Answer: Maybe this could help :
<http://www.kscottz.com/dewarped-panoramic-images-from-a-raspberrypi-camera-
module/>
this is the corresponding repo :
<https://github.com/kscottz/dewarp>
|
Python Session variable
Question:
def get_items_from_cart(self):
""" Fetches items from sessions cart"""
item_list = []
cart_count = self.session.get('add_to_cart_count')
if not cart_count: return None;
for i in range(1, cart_count+1):
item = self.session.get(str(i))
if item:
item_list.append(item)
return item_list
class AddToCartHandler(Handler):
def get(self):
if users.get_current_user():
self.response.headers['Content-type'] = 'application/json'
get_current_add_count = int(self.session.get('add_to_cart_count'))
tshirt_id = self.request.get("tshirt_id")
item_title = self.request.get("item_title")
qty = self.request.get("qty")
size = self.request.get("size")
price = 325
get_current_add_count += 1
self.session[get_current_add_count] = { "qty" : qty, "size" : size ,
"item_title": item_title,
"tshirt_id" : tshirt_id,
"cost" : price * int(qty)}
current_cart_items = int(self.session.get("item_count"))
updated_cart_items = current_cart_items + int(qty)
self.session["item_count"] = updated_cart_items
self.session["add_to_cart_count"] = get_current_add_count
self.write(json.dumps({"status" : 1, "msg" : "Order added. <a href='/cart'><span class='label label-success'>View Cart</span></a>"}))
else:
self.write(json.dumps({"status" : 0, "msg" : "Please <a href='/login'><span class='label label-important'>login</span> </a>to start shopping!"}))
In method "get_items_from_cart" the code depicts fetching values named as
numbers and appending the list. those are json encoded (i don't know about
json yet). i want to ask that Does python allows to name a variable in session
as number? Kindly guide me if I'm getting the code in wrong context.
Answer: Here is the answer to my question... this code works fine.
import webapp2
from webapp2_extras import sessions
from google.appengine.api import users
class Handle(webapp2.RequestHandler):
def dispatch(self):
self.session_store = sessions.get_store(request=self.request)
try:
webapp2.RequestHandler.dispatch(self)
finally:
self.session_store.save_sessions(self.response)
@webapp2.cached_property
def session(self):
return self.session_store.get_session()
class ValTest(Handle):
def get(self):
user = users.get_current_user()
val = self.session.get("1")
val += 1
self.session["1"] = val
self.response.write('value is: %i' % val)
self.response.write('<form method="get" action="/valTst"><input type="submit" name="btn2" value="Incr"></form>')
class MainHandler(Handle):
def get(self):
self.response.write('Hello world!')
user = users.get_current_user()
self.session["1"] = 10
self.response.write('<form method="get" action="/valTst"><input type="submit" name="btn" value="show"></form>')
### Configuration ###
config = {}
config['webapp2_extras.sessions'] = {
'secret_key': 'my-super-secret-key',
}
app = webapp2.WSGIApplication([
('/', MainHandler),
('/valTst', ValTest)
], debug=True, config=config)
|
How to create set of image files in folder using python and PIL
Question: I am resizing a set of images and storing the resized images in a new folder. My
sample code is:
import cv2
import PIL
import Image
import os
def resize_all(path1,path2):
resolution = (200,200)
scaler = Image.ANTIALIAS
if not os.path.exists(path2):
os.makedirs(path2)
listing=os.listdir(path1)
for file in listing:
img=Image.open(path1 + file)
res=img.resize(resolution , Image.ANTIALIAS)
cv2.imwrite(os.path.join(path2, res),'image')
def main():
resize_all('d:\\Emmanu\\project-data\\birds\\','d:\\Emmanu\\project-data\\new\\')
if __name__ == '__main__':main()
But when I execute it I am getting an error. Full traceback added below: **EDIT**
Traceback (most recent call last):
File "D:/Emmanu/ImageClassification/preprocessing.py", line 20, in <module>
if __name__ == '__main__':main()
File "D:/Emmanu/ImageClassification/preprocessing.py", line 18, in main
resize_all('d:\\Emmanu\\project-data\\birds\\','d:\\Emmanu\\project-data\\new\\')
File "D:/Emmanu/ImageClassification/preprocessing.py", line 15, in resize_all
cv2.imwrite(os.path.join(path2, res),'image')
File "C:\Python27\lib\ntpath.py", line 67, in join
p_drive, p_path = splitdrive(p)
File "C:\Python27\lib\ntpath.py", line 115, in splitdrive
if len(p) > 1:
File "C:\Python27\lib\site-packages\PIL\Image.py", line 512, in __getattr__
raise AttributeError(name)
AttributeError: __len__
Answer: Finally got the answer. The problem was that `os.path.join(path2, res)` was given
the PIL `Image` object instead of a file name string (that is the `AttributeError:
__len__` in the traceback). I modified the for loop to save with PIL directly:
for i,file in enumerate(listing):
img=Image.open(path1 + file)
res=img.resize(resolution , Image.ANTIALIAS)
res.save(path2+'h{}.jpg'.format(i))
Now all my images are resized and saved to a new folder
|
Javascript - call super on parents parent?
Question: I have defined this extension of current custom javascript view in Odoo:
openerp.account_move_journal_test = function(instance){
var _t = instance.web._t,
_lt = instance.web._lt;
var QWeb = instance.web.qweb;
instance.web.account.QuickAddListView.include({
init: function(){
this._super.apply(this, arguments);
console.log("QuickAddListView modified init")
},
});
};
Now for better representation I added console log in `QuickAddListView` and in
`ListView`, which are parents that are called using `_super`.
So if I run it like this, I get these prints:
'ListView init' // This is parent of QuickAddListView
'QuickAddListView Init'
'QuickAddListView modified init'
The order of constructors is this `View` -> `ListView` -> `QuickAddListView`
So all of this is printed like it should, but what I want is to modify `init`
in a way that it would directly call `ListView` and skip `QuickAddListView`
original `init`.
So after that it should print only this (meaning original QuickAddListView
init was not called):
'ListView init' // This is parent of QuickAddListView
'QuickAddListView modified init'
Is there a way in JavaScript to specify the exact parent you want to call? So
instead of everything being called in a chain, it would start from where
you specify (like in my case from `ListView`)?
For example in Python you can do something like this:
from some_module import SomeBaseClass
class CustomBase(SomeBaseClass):
def m1(self):
super(CustomBase, self).m1()
class Custom(CustomBase):
def m1(self):
# skip CustomBase
super(CustomBase, self).m1()
Answer: > Is there a way in javascript to specify exact parent you want to call?
Yes, you're pretty much already doing that: Using `this._super` to explicitly
refer to `QuickAddListView`'s `init` method.
> So instead of everything being called in a chain, it would start from
> where you specify? And directly call `ListView` and skip `QuickAddListView`'s
> original `init`.
For that case, you'd only have to replace the line
this._super.apply(this, arguments);
by
instance.web.ListView.prototype.init.apply(this, arguments);
(or however you can access that class, not sure about Odoo)
But **be warned that this an absolute antipattern**. If you want to inherit
from `QuickAddListView`, you should run its constructor (or `init` method) so
that it can initialise the properties it needs. If you don't want that for
whatever reason, you probably just should not inherit from it but inherit from
`ListView` directly.
|
How to convert CJK Extention B in QLineEdit of Python3-PyQt4 to utf-8 to Processing it with regex
Question: I have a code like that:
#!/usr/bin/env python3
#-*-coding:utf-8-*-
from PyQt4 import QtGui, QtCore
import re
.....
str = self.lineEdit.text() # lineEdit is a object in QtGui.QLineEdit class
# This line thanks to Fedor Gogolev et al from
#http://stackoverflow.com/questions/12214801/print-a-string-as-hex-bytes
print('\\u'+"\\u".join("{:x}".format(ord(c)) for c in str))
# u+20000-u+2a6d6 is CJK Ext B
cjk = re.compile("^[一-鿌㐀-䶵\U00020000-\U0002A6D6]+$",re.UNICODE)
if cjk.match(str):
print("OK")
else:
print("error")
when I inputted "敏感詞" (0x654F,0x611F, 0x8A5E in utf16 respectively), the
result was:
\u654f\u611f\u8a5e
OK
but when I input "詞" (0x8A5E, 0xD840 0xDC37, 0xD840 0xDC81, 0xD840 0xDC4D in
utf-16), which contains 3 characters from the CJK Extension B area, the
unexpected result is:
\u8a5e\ud840\udc37\ud840\udc81\ud840\udc4d
error
How can I process these CJK characters (converting to UTF-8 if needed) so that
they are handled correctly by `re` in Python 3?
**P.S.**
1. the value from sys.maxunicode is 1114111, it might be UCS-4. Hence, I think that the question seems not to be the same as [python regex fails to match a specific Unicode > 2 hex values](http://stackoverflow.com/questions/19966386/python-regex-fails-to-match-a-specific-unicode-2-hex-values)
2. another code:
#!/usr/bin/env python3
#-*-coding:utf-8-*-
import re
CJKBlock = re.compile("^[一-鿌㐀-䶵\U00020000-\U0002A6D6]+$") #CJK ext B
print(CJKBlock.search('詞'))
returns `<_sre.SRE_Match object; span=(0, 4), match='詞'>` #expected result.
3. even when I added `self.lineEdit.setText("詞")` inside the `__init__` function of the window class and executed it, the word in the LineEdit showed up correctly, but when I pressed enter, the result was still "error"
4. version:
* Python3.4.3
* Qt version: 4.8.6
* PyQt version: 4.10.4.
Answer: There were a few PyQt4 bugs following the implementation of PEP-393 that can
affect conversions between `QString` and python strings. If you use sip to
switch to the v1 API, you should probably be able to confirm that the
`QString` returned by the line-edit _does not_ contain surrogate pairs. But if
you then convert it to a python string, the surrogates should appear.
Here is how to test this in an interactive session:
>>> import sip
>>> sip.setapi('QString', 1)
>>> from PyQt4 import QtGui
>>> app = QtGui.QApplication([])
>>> w = QtGui.QLineEdit()
>>> w.setText('詞')
>>> qstr = w.text()
>>> qstr
PyQt4.QtCore.QString('詞')
>>> pystr = str(qstr)
>>> print('\\u' + '\\u'.join('{:x}'.format(ord(c)) for c in pystr))
\u8a5e\u20037\u20081\u2004d
Of course, this last line does not show surrogates for me, because I cannot do
the test with PyQt-4.10.4. I have tested with PyQt-4.11.1 and PyQt-4.11.4,
though, and I did not see any problems. So you should try to upgrade to
one of those.
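If upgrading is not possible, one workaround (just a sketch, not an official PyQt fix) is to recombine the surrogate pairs yourself before matching, by round-tripping the string through UTF-16:

    import re

    def fix_surrogates(s):
        # recombine UTF-16 surrogate pairs left over from the QString conversion
        return s.encode('utf-16', 'surrogatepass').decode('utf-16')

    cjk = re.compile("^[一-鿌㐀-䶵\U00020000-\U0002A6D6]+$")
    broken = '\u8a5e\ud840\udc37'             # '詞' followed by a surrogate pair
    print(cjk.match(broken))                   # None: the pair is still split
    print(cjk.match(fix_surrogates(broken)))   # matches once the pair is recombined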
|
How can I loop through a list of strings?
Question: I have this code:
test = ["A","B","C","D","E","F","G","H","I","J","K","L","M","N","O"]
for i in test:
if i not in ["C","D"]:
test.remove(i)
print(test)
I was expecting to get `['C','D']` as a result of running the code above,
however I am getting this `['B', 'C', 'D', 'F', 'H', 'J', 'L', 'N']`
How can I successfully loop through a list of strings and delete the elements
I don't need using Python 3?
**NOTE:** I don't want to use comprehension lists
thanks
Answer: When removing from lists in other languages, I used to reverse walk the list:
test = ["A","B","C","D","E","F","G","H","I","J","K","L","M","N","O"]
for i in reversed(test):
if i not in ["C","D"]:
test.remove(i)
print(test)
Note that `reversed` returns a reverse iterator over the original list, so the
elements already visited (at higher indices) are unaffected when an earlier one
is removed. If you need to process the items in their original order instead,
you can iterate over a copy of the list using `copy`:
import copy
for i in copy.copy(test):
if i not in ["C","D"]:
test.remove(i)
and to avoid the import (from
[here](http://stackoverflow.com/questions/2612802/how-to-clone-or-copy-a-list-
in-python)):
for i in test[:]:
if i not in ["C","D"]:
test.remove(i)
Finally, the best solution for me, is a traditional, in-place reverse
iteration without copying the list ("borrowed" and modified from
[this](http://stackoverflow.com/a/6024599/3727050) answer)
for i in range(len(test) - 1, -1, -1):
if test[i] not in ["C","D"]:
del test[i]
|
Python first x-y-ticks overlapping
Question: I have been working on this problem for over an hour now, but everything I tried has failed so
far. In my plot the first values on the x and y axes keep overlapping. I am using
tight layout to fix this problem, but it does not help. Also my z-ticks are
overlapping with the Z-axis. Thanks for any suggestions [](http://i.stack.imgur.com/FK7CF.png)
font = {'family' : 'normal',
'weight' : 'normal',
'size' : 18}
matplotlib.rc('font', **font)
time=round(t[time_period],0)
fig = plt.figure(figsize=(10,5))
###first subplot
ax = fig.add_subplot(1, 2, 1, projection='3d')
surf=ax.plot_surface(X_MESH, Y_MESH, Meshgrid_Output, rstride=1, cstride=1, cmap=cm.jet,linewidth=0, antialiased=False)
ax.set_xlabel(name+"$_"+str(most_sensitive[0])+" in "+str(unit)+"$")
ax.set_ylabel(name+"$_"+str(most_sensitive[1])+"$ in "+str(unit))
ax.set_zlabel("$\Delta$Output in [C]")
##formatting labels
ax.xaxis._axinfo['label']['space_factor'] = 4.2
ax.yaxis._axinfo['label']['space_factor'] = 4.2
ax.zaxis._axinfo['label']['space_factor'] = 3
##position and rotation ticks
ax.xaxis._axinfo['tick']['inward_factor'] = 0
ax.xaxis._axinfo['tick']['outward_factor'] = 0.4
ax.yaxis._axinfo['tick']['inward_factor'] = 0
ax.yaxis._axinfo['tick']['outward_factor'] = 0.4
ax.zaxis._axinfo['tick']['inward_factor'] = 0
ax.zaxis._axinfo['tick']['outward_factor'] = 0.4
ax.zaxis._axinfo['tick']['outward_factor'] = 0.4
plt.xticks(rotation=45)
plt.yticks(rotation=325)
ax.grid(False)
## setting background
ax.xaxis.pane.set_edgecolor('black')
ax.yaxis.pane.set_edgecolor('black')
ax.xaxis.pane.fill = False
ax.yaxis.pane.fill = False
ax.zaxis.pane.fill = False
#set number of ticks
ab, bc = ax.get_xlim( )
ax.set_xticks( np.linspace(ab, bc, 4 ) )
cd, de = ax.get_ylim( )
ax.set_yticks( np.linspace(cd, de, 4 ) )
lb, ub = ax.get_zlim( )
ax.set_zticks( np.linspace(lb, ub, 5 ) )
##round values
ax.xaxis.set_major_formatter(mtick.FormatStrFormatter('%.3f'))
ax.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.3f'))
ax.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.3f'))
##set point of view
angle=132
ax.view_init(30, angle)
##second subplot
ax = fig.add_subplot(1, 2, 2)
cax=ax.imshow(Meshgrid_Output, extent=[Y_MESH.min(),Y_MESH.max(),X_MESH.max(),X_MESH.min()],aspect='auto',interpolation='nearest',cmap=cm.jet)
ax.set_xlabel(name+"$_"+str(most_sensitive[0])+"$ in "+str(unit))
ax.set_ylabel(name+"$_"+str(most_sensitive[1])+"$ in "+str(unit))
#set number of ticks
ab, bc = ax.get_xlim( )
ax.set_xticks( np.linspace(ab, bc, 4) )
cd, de = ax.get_ylim( )
ax.set_yticks( np.linspace(cd, de, 4 ) )
##round values
ax.xaxis.set_major_formatter(mtick.FormatStrFormatter('%.3f'))
ax.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.3f'))
ax.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.3f'))
#position ticks
ax.yaxis.tick_right()
ax.xaxis.set_label_position('top')
ax.grid(True)
## formatting colorbar
cbar=fig.colorbar(cax,orientation='horizontal',aspect=20,pad=0.08)
cbar.locator = ticker.MaxNLocator(nbins=6)
cbar.update_ticks()
#Position both subplots
fig.subplots_adjust(bottom=0.2)
fig.subplots_adjust(wspace=0.3)
Answer: Here I offer three methods; maybe one of them will help.
### A. Enhance the figure size
_Before_
fig = plt.figure()
ax1 = plt.subplot(121,projection='3d')
ax2 = plt.subplot(122)
[](http://i.stack.imgur.com/BcI6T.png)
_Set the`figsize`_
fig = plt.figure(figsize=(16,3))
ax1 = plt.subplot(121,projection='3d')
ax2 = plt.subplot(122)
[](http://i.stack.imgur.com/wDI06.png)
### B. Adjust the figure size of ax1.subplot
Change the proportion of the ax1 subplot by using **`gridspec`**.
import mpl_toolkits.mplot3d.axes3d as axes3d
import matplotlib.gridspec as gridspec
fig = plt.figure(figsize = (12,6))
gs = gridspec.GridSpec(1, 2,
width_ratios=[6,1],
height_ratios=[1,1]
)
ax1 = plt.subplot(gs[0],projection='3d')
ax2 = plt.subplot(gs[1])
[](http://i.stack.imgur.com/jdxbI.png)
### C. Adjust the frequency of x/y ticklabel.
The code below adjusts the xtick frequency. With fewer xticks and labels, the
overlapping may disappear.
ax.set_xticks(np.arange(0, 1.1, 0.1))
for xtick in ax.xaxis.get_ticklines()[1::2]: ### Hiding ticks each 2 steps.
xtick.set_visible(False)
ax.set_xticklabels(np.arange(0, 1.1, 0.2), fontsize=14)
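Another option (not from the original answer, just a common alternative) is to let a tick locator choose a small number of ticks instead of hiding them by hand; here `ax` is assumed to be the axes object used above:

    from matplotlib import ticker

    # let matplotlib pick at most 4 nicely spaced ticks per axis
    ax.xaxis.set_major_locator(ticker.MaxNLocator(4))
    ax.yaxis.set_major_locator(ticker.MaxNLocator(4))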
|
Mean squared displacement
Question: I am investigating the motion of a single micrometer sized particle in a low
density plasma. With a so called long distance microscope I have recorded the
motion of a particle (2726 images, fps=60 Hz).
The x,y data in mm are available here: <http://pastebin.com/qdMsaUHD>
With mathematica I got the following log-log plot for the mean squared
displacement (MSD):
[](http://i.stack.imgur.com/CKiCb.png)
**I am new to Python and have searched for examples on how to read in the 2D
coordinates from a file, calculate and display the MSD (mean and standard
deviation).**
I have seen that there are some answers which show how to calculate the MSD
but I am not able to adapt them to my data.
I would very much appreciate it if somebody could show me the full Python source
code (with all imports) needed to solve my problem.
That will be for me a great opportunity and a very concrete problem to start
also programming with Python. Thank you very much in advance for your help.
I tried the following from one of the answers ([Computing the mean square
displacement of a 2d random walk in
Python](http://stackoverflow.com/questions/26472653/computing-the-mean-square-
displacement-of-a-2d-random-walk-in-
python/27709260#comment60361901_27709260)), but the code produced errors.
import numpy as np
import matplotlib.pyplot as plt
data= [[49.136926889715, 48.4423791821561],
[48.8104534783146, 51.0491783022365],
[48.5231487166892, 53.3485202014],
[48.2320069851565, 55.2569539728078],
[47.8817794028032, 56.993296770262],
[47.381875792142, 58.179721166033],
...
[45.3409434914228, 49.0259838546922]]
def compute_MSD(path):
totalsize=len(path)
msd=[]
for i in range(totalsize-1):
j=i+1
msd.append(np.sum((path[0:-j]-path[j::])**2)/float(totalsize-j))
msd=np.array(msd)
return msd
result=compute_MSD(data)
plt.plot(result)
plt.show()
Answer: My graphs look slightly different from yours, which likely stems from a
misunderstanding of exactly what it is you are calculating; however, I believe
this addresses the basic idea.
plasma = [[0.09296720430107527, 0.09280376344086022],
[0.09230113636363636, 0.09769886363636364],
[0.09130555555555556, 0.10198777777777777],
...
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.DataFrame(plasma, columns=['x', 'y'])
df['time'] = np.arange(0,len(df)) / 60.0
df['dist'] = np.sqrt(df['x']**2 + df['y']**2)
df['MSD'] = pd.rolling_mean((np.abs(df['dist'])**2), len(df), min_periods=1)
The above is my interpretation of what you mean by MSD. I am using distance
from the origin, i.e. `sqrt(x^2 + y^2)` and then applying the following
[definition](http://mathworld.wolfram.com/MeanSquareDisplacement.html)
Then you can create a plot using [matplotlib](http://matplotlib.org/) as
follows
plt.loglog(df['time'], df['MSD'], 'o')
plt.xlabel('t (sec)')
plt.ylabel('MSD')
plt.show()
[](http://i.stack.imgur.com/K6gVg.png)
If you want to touch up / refine the graphs an excellent place to get a handle
on matplotlib is the tutorial section,
[here](http://matplotlib.org/users/pyplot_tutorial.html).
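For the time-lag definition of the MSD used in the question you linked, the original `compute_MSD` idea also works once the coordinate list is converted to a NumPy array first. A sketch, assuming `data` holds the x/y pairs in mm:

    import numpy as np
    import matplotlib.pyplot as plt

    path = np.asarray(data, dtype=float)     # shape (N, 2): x and y positions
    N = len(path)
    lags = np.arange(1, N)
    msd = np.array([np.mean(np.sum((path[lag:] - path[:-lag])**2, axis=1))
                    for lag in lags])

    plt.loglog(lags / 60.0, msd, 'o')        # 60 fps, so lag/60 is the lag time in seconds
    plt.xlabel('t (sec)')
    plt.ylabel('MSD')
    plt.show()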
|
define a regular expression in python
Question: I am trying to use regular expressions in Python to match a 4-character
string whose 1st character is a digit and whose 3 other characters are either
digits or capital letters. Here are examples of patterns that should match:
1CTT, 2IR8, 35TR, 4T1R
I tried many ways, here's the last code I tried :
exp=re.compile("[0-9]{1}([A-Z0-9]{3})")
Thank you for your help !
Answer: The expression you've tried last, looks correct and should match the provided
test strings. Though you don't have to specify `{1}` and there is no need for
a capturing group (the parenthesis):
>>> import re
>>> text = "text, 1CTT, 2IR8, 35TR, 4T1R, smth else"
>>> pattern = re.compile(r"[0-9][A-Z0-9]{3}")
>>> pattern.findall(text)
['1CTT', '2IR8', '35TR', '4T1R']
You might need to additionally add the _word boundary_ constraint (thanks to
@Jon Clements):
>>> text = "text, 1CTT, 2IR8, 35TR, 4T1R, smth else, 35TT35XYZ"
>>> pattern = re.compile(r"\b[0-9][A-Z0-9]{3}\b")
>>> pattern.findall(text)
['1CTT', '2IR8', '35TR', '4T1R']
|
memcached listeing on UDP with Django
Question: **Question** : I am not able to get `memcached` listening on `UDP`, to work
(`get` `set` `delete`) with Django.
* * *
I have the memcached listening only on `UDP` `11211`, as I have mentioned in
the [previous question](http://stackoverflow.com/questions/36055900/get-set-
memcached-listening-on-udp-using-python). What I have tried so far:
1.Setting **`CACHES`** to use **`python-memcached`** Python binding. get and
set didn't work with simple settings i.e. `'LOCATION': '127.0.0.1:11211'`, so
tried specifying `udp` explicitly (using this
[mention](http://sendapatch.se/projects/pylibmc/reference.html#pylibmc.Client)
as the rationale):
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'udp:127.0.0.1:11211',
'TIMEOUT': None,
}
}
gave:
**`ValueError: Unable to parse connection string: "udp:localhost:11211"`**
2.Setting **`CACHES`** to use **`pylibmc`** Python binding:
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.PyLibMCCache',
'LOCATION': 'udp:127.0.0.1:11211',
'TIMEOUT': None,
}
}
The server ran fine - to further verify:
>>> import django
>>> from django.core.cache import cache
>>> cache.set('udp_key', 12)
>>> cache.get('udp_key')
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/lib/python2.7/site-packages/django/core/cache/backends/memcached.py", line 84, in get
val = self._cache.get(key)
NotSupportedError: error 28 from memcached_get(:1:udp_key): ACTION NOT SUPPORTED
* * *
**P.S.** Don't make it a memcached on **`TCP`** vs **`UDP`** debate
* * *
A similar question - [get() set() memcached listening on UDP using
Python](http://stackoverflow.com/questions/36055900/get-set-memcached-
listening-on-udp-using-python)
Answer: As far as I have been able to explore, the library
[libmemcached](http://libmemcached.org/) that
[pylibmc](http://sendapatch.se/projects/pylibmc/) uses does not support `get`
operations with UDP.
I have traced the cache call to `get` up to `libmemcached` and I have found
[the following code](http://bazaar.launchpad.net/~tangent-
trunk/libmemcached/1.2/view/head:/libmemcached/get.cc):
...
if (memcached_is_udp(ptr))
{
return memcached_set_error(*ptr, MEMCACHED_NOT_SUPPORTED, MEMCACHED_AT);
}
...
that coincides with your error as **pylibmc** 's `get` method is mapped to
**libmemcached** 's `memcached_get` method in the file with the code above
(`/libmemcached/get.cc`).
I have installed and configured the same environment on my own machine and I have
got identical results.
Nevertheless, the `set` operation seems to work perfectly as I have observed
running **memcached** in debugging mode.
I have also tried to provide different locations (PROTOCOL + IP + PORT entries
separated by **;** in the **LOCATION** field) for the cache, mixing TCP/UDP,
but the library DOES NOT SUPPORT mixing protocols either and returns an error.
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.PyLibMCCache',
'LOCATION': 'udp:127.0.0.1:11211;127.0.0.1:11211',
'TIMEOUT': None,
}
}
All the previous facts are confirmed by the documentation of
[libmemcached](http://docs.libmemcached.org/memcached_behavior.html?highlight=udp#MEMCACHED_BEHAVIOR_USE_UDP).
The option of using `django.core.cache.backends.memcached.MemcachedCache` as a
backend is also discarded as it only uses TCP sockets (`SOCK_STREAM`) for
connecting to **memcached**.
**UPDATE** : [python-memcached-udp](https://pypi.python.org/pypi/python-
memcached-udp/) is now a pip package. Its maintainer [is open to adding more
features if needed](https://github.com/idanmo/python-memcached-udp/issues/3).
If you are interested we definitely could work on creating a new Django cache
backend for Memcached with UDP.
|
How to create a user in the Moodle Rest WS using the Python Requests module?
Question: I'm trying to create a user using the Moodle Webservices - Rest Server, but
I'm stuck at the validation of the params :S My code is the following:
import requests
token = 'TOKENNUMBER'
function = 'core_user_create_users'
url = 'http://localhost/webservice/rest/server.php?wstoken={0}&wsfunction={1}&moodlewsformat=json'.format(token,function)
user1 = {'email': '[email protected]','firstname': 'firstname',
'lastname': 'lastname', 'createpassword': 1,
'username': 'username'}
Then, I try to post the data (two different ways):
requests.post(url,data={'users': user1})
requests.post(url,data={'users': [user1,]})
And Moodle keeps returning the error:
Only arrays accepted. The bad value is: \'username\'</DEBUGINFO>
In the documentation (available from Moodle itself), it's stated:
Arguments
users (Required)
General structure
list of (
object {
username string //Username policy is defined in Moodle security config.
password string Optional //Plain text password consisting of any characters
createpassword int Optional //True if password should be created and mailed to user.
firstname string //The first name(s) of the user
lastname string //The family name of the user
email string //A valid and unique email address
auth string Defaults to "manual" //Auth plugins include manual, ldap, imap, etc
idnumber string Defaults to "" //An arbitrary ID code number perhaps from the institution
lang string Defaults to "pt_br" //Language code such as "en", must exist on server
calendartype string Defaults to "gregorian" //Calendar type such as "gregorian", must exist on server
theme string Optional //Theme name such as "standard", must exist on server
timezone string Optional //Timezone code such as Australia/Perth, or 99 for default
mailformat int Optional //Mail format code is 0 for plain text, 1 for HTML etc
description string Optional //User profile description, no HTML
city string Optional //Home city of the user
country string Optional //Home country code of the user, such as AU or CZ
firstnamephonetic string Optional //The first name(s) phonetically of the user
lastnamephonetic string Optional //The family name phonetically of the user
middlename string Optional //The middle name of the user
alternatename string Optional //The alternate name of the user
preferences Optional //User preferences
list of (
object {
type string //The name of the preference
value string //The value of the preference
}
)customfields Optional //User custom fields (also known as user profile fields)
list of (
object {
type string //The name of the custom field
value string //The value of the custom field
}
)}
)
So, with this in mind, how can I create a Moodle user using the Python requests
module? What's wrong with the data that has been passed?
Answer: To use the REST service in Moodle, the parameters of a function have to be
formatted as a flat dictionary. The structure of the arguments is reflected in the
names of the keys. In your example, you have one argument _users_ which is a list,
so the keys of your dictionary will be _users[0]email_ , _users[0]firstname_ , ...
for the first user, _users[1]email_ , _users[1]firstname_ , ... for the second, and
so on:
users = {'users[0]email': '[email protected]',
'users[0]firstname': 'firstname',
'users[0]lastname': 'lastname',
'users[0]createpassword': 1,
'users[0]username': 'username'}
requests.post(url,data=users)
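Building those keys by hand gets tedious for nested structures (for example `preferences` or `customfields`), so a small helper that flattens a Python list/dict into the same flat key format can help. This is only a sketch, not part of the Moodle API, and the exact key convention for deeper nesting should be checked against the Moodle documentation:

    def flatten_params(value, prefix, out):
        # turn nested lists/dicts into flat keys such as users[0]email
        if isinstance(value, dict):
            for k, v in value.items():
                flatten_params(v, '{0}{1}'.format(prefix, k), out)
        elif isinstance(value, list):
            for i, v in enumerate(value):
                flatten_params(v, '{0}[{1}]'.format(prefix, i), out)
        else:
            out[prefix] = value
        return out

    payload = flatten_params([user1], 'users', {})
    requests.post(url, data=payload)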
|
Python, probably a simple solution with a dictionary and functions but I don't know how
Question: So...I am doing a volunteer school project (building a quadcopter) and have
been sitting here for a very long time trying to solve the problem...:
Basically, I was able to start the motor at a certain speed per remote control
(yes, the one you'd use for a TV :D)
But now, I'm struggling with adding new buttons to compare them with the
input, as shown in the following:
import serial
import time
from RPIO import PWM
ser = serial.Serial("/dev/ttyAMA0")
ser.baudrate = 2400
motorspeed = 1000
s = PWM.Servo()
#initializing
print "\033[1;32mINITIALIZING SEQUENCE STARTED\033[1;m"
s.set_servo(27, 2000)
time.sleep(0.5)
s.set_servo(27, 1000)
time.sleep(2)
print "\033[1;32mINITIALIZING SEQUENCE COMPLETE\033[1;m"
print "\033[1;36mAWAITING PWM SIGNALS\033[1;m"
buttons = {
"up": [32, 36, 36, 27, 219, 250, 32, 36, 36, 27, 219, 250, 32, 36, 36],
"button1": [32, 219, 219, 36, 196, 32, 219, 219, 36, 196, 32, 219, 219, 36, 196]}
# every button has a signature
input = []
# signature thats coming from the remote
while input != buttons[:]:
# I suspect a fault here, I want to compare all key values with the input list
for i in range(0, 15):
data = ser.read(1)
print ord(data)
input.append(ord(data))
# Every IR button has a unique "key", thats what I am comparing
print input
if input == buttons["button1"]:
print ("Button pressed!")
s.set_servo(27, motorspeed)
time.sleep(3)
elif input == buttons["up"]:
s.set_servo(27, motorspeed + 100)
else:
del input[:]
The problem: I want to compare the list called "input" with the
dictionary "buttons". If one of the lists in the dictionary is identical to the
list "input", it should then raise the motor speed.
Answer: If I understand what you are looking for help on, you could try something like
this:
buttons = {
"up": [32, 36, 36, 27, 219, 250, 32, 36, 36, 27, 219, 250, 32, 36, 36],
"button1": [32, 219, 219, 36, 196, 32, 219, 219, 36, 196, 32, 219, 219, 36, 196],
"new_button" : [1]
# etc.
}
def up_action():
s.set_servo(27, motorspeed+100)
def button1_action():
s.set_servo(27, motorspeed)
time.sleep(3)
def new_button_action():
print "Hello! I am a new button!"
actions = {
"up" : up_action,
"button1" : button1_action,
"new_button" : new_button_action,
# etc.
}
input = ... # get your input the way you have it
# EDIT: As others have pointed out, you may
# want to adjust your while loop
button_pressed = None
for button_name, button_signature in buttons.iteritems():
    if input == button_signature:
        button_pressed = button_name
        break
if button_pressed is not None:   # only act on a recognised signature
    action = actions[button_pressed]
    action()
To add functionality for any new button you would just need to add an entry to
each dictionary:
1. Add the input signature to `buttons` ({"button_name": [...input sequence...]})
2. Create a function that performs the desired behavior.
3. Add that function to `actions` ({"button_name": button_function_name})
|
How to group by quarter and calculate average from an array using numpy?
Question: I'd like to utilize numpy to calculate the average of a set of values in each
quarter from the below array:
Data = [{'date':'2015-01-01',value:5},{'date':'2015-02-01',value:6},{'date':'2015-03-01',value:7},{'date':'2015-04-01',value:8},{'date':'2015-05-01',value:9},{'date':'2015-06-01',value:10},{'date':'2015-07-01',value:11},{'date':'2015-08-01',value:12}]
I'd like the result to tell me the following:
* For Q1-15, the average was 6
* For Q2-15, the average was 9
* For Q3-15, the average was 11.5
Based on [this stackoverflow
question](http://stackoverflow.com/questions/23859840/python-aggregate-by-
month-and-calculate-average), I've tried the below:
np = Data #I'm not sure how to read in data into numpy from an array in my structure
np.resample('Q',how='mean') #I'm not sure if using 'Q' would group this by quarter
Answer: I think pandas works better in this case. I will just use your simple example
for illustration.
import pandas as pd # use recent version which has dt.quarter attr for time
import json
value = 'value' # to be able to read your Data string as json
Data1 = json.dumps(Data) # need it to use read_json() method.
a = pd.read_json(Data1)
a[a['date'].dt.quarter == 1].mean() # 1st quarter
a[a['date'].dt.quarter == 2].mean() # 2nd quarter
a[a['date'].dt.quarter == 3].mean() # 3rd quarter
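A variant of the same idea (just an alternative sketch) groups by quarter in a single pass instead of filtering three times:

    quarterly_means = a.groupby(a['date'].dt.quarter)['value'].mean()
    print(quarterly_means)   # for the sample data: 1 -> 6.0, 2 -> 9.0, 3 -> 11.5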
|
Is there a way to simplify this "n-way merge" in python
Question: So right now I've hardcoded 4 if/elif/else statements. Is there a more
dynamic way to do this? For example, what if I wanted to do a 10-way or even a
40-way merge?
#4-way merge sort, sorted page files
outfile="fullsorted.txt"
of=open(outfile,"w")
f1=open("temp0-sorted.txt","r")
f2=open("temp1-sorted.txt","r")
f3=open("temp2-sorted.txt","r")
f4=open("temp3-sorted.txt","r")
f1_line=f1.readline()
f2_line=f2.readline()
f3_line=f3.readline()
f4_line=f4.readline()
while len(f1_line)>0 and len(f2_line)>0 and len(f3_line)>0 and len(f4_line)>0:
if f1_line < f2_line and f1_line < f3_line and f1_line < f4_line and len(f1_line)>0:
of.write(f1_line)
f1_line=f1.readline()
elif f2_line < f3_line and f1_line < f4_line and len(f2_line)>0:
of.write(f2_line)
f2_line=f2.readline()
elif f3_line < f4_line and len(f3_line)>0:
of.write(f3_line)
f3_line=f3.readline()
else:
of.write(f4_line)
f4_line=f4.readline()
of.close()
Answer: Just use
[`heapq.merge`](https://docs.python.org/3/library/heapq.html#heapq.merge):
import heapq
#4-way merge sort, sorted page files
outfile="fullsorted.txt"
with open("temp0-sorted.txt","r") as f1,\
open("temp1-sorted.txt","r") as f2,\
open("temp2-sorted.txt","r") as f3,\
open("temp3-sorted.txt","r") as f4,\
open(outfile,"w") as of:
of.writelines(heapq.merge(f1, f2, f3, f4))
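For a 10-way or 40-way merge you probably do not want to write the `open` calls out by hand. One way to generalise this (a sketch using `contextlib.ExitStack`, available in Python 3.3+ or via the `contextlib2` backport) is:

    import heapq
    from contextlib import ExitStack

    filenames = ["temp{}-sorted.txt".format(i) for i in range(4)]  # any number of inputs

    with ExitStack() as stack, open("fullsorted.txt", "w") as of:
        files = [stack.enter_context(open(name)) for name in filenames]
        of.writelines(heapq.merge(*files))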
|
Using Dictionaries to Count First Letter in a String - Python
Question: I've been working on a dictionaries exercise in Python, and I'm fairly new to
the language and programming itself. I've been trying to take a string or
list of strings and have my code compare the first letter of the strings and
make a dictionary out of how many strings begin with a certain letter of the
alphabet. This is what I have so far:
d = {}
text=["time","after","time"]
# count occurances of character
for w in text:
d[w] = text.count(w)
# print the result
for k in sorted(d):
print (k + ': ' + str(d[k]))
What I'm aiming for, is for example get the following result :
count_starts(["time","after","time"]) -->{'t': 2, 'a': 1}
But, what I'm getting is more like the following:
count_starts(["time","after","time"]) --> {time:2, after:1}
With what I have, I've been able to count how many times each
whole unique string appears, but not to count JUST the first letter of
each string.
I also tried the following:
d = {}
text=["time","after","time"]
# count occurances of character
for w in text:
for l in w[:1]:
d[l] = text.count(l)
# print the result
for k in sorted(d):
print (k + ': ' + str(d[k]))
but all that gives me in the printed output is:
{"a":0,"t":0}
I'm using Python Visualizer for my testing purposes.
Answer: To count the number of occurrences of the first letter for each item in text:
from collections import Counter
text = ["time", "after", "time"]
>>> Counter(t[0] for t in text)
Counter({'a': 1, 't': 2})
or just getting the dictionary key/value pairs:
>>> dict(Counter(t[0] for t in text))
{'a': 1, 't': 2}
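Since the exercise is about building the dictionary yourself, the same result can also be produced without `Counter`. A minimal sketch close to your original loop:

    d = {}
    text = ["time", "after", "time"]
    for w in text:
        first = w[0]                      # first letter of the word
        d[first] = d.get(first, 0) + 1    # increment its count, starting from 0
    print(d)                              # {'t': 2, 'a': 1}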
|
Import Cython class in Python file Pycharm
Question: How does one import a compiled cython file in a pycharm python file?
Here is my setup.
From my project root directory, the cython class is
"/classes/knn.cpython-35m-x86_64-linux-gnu.so." My python file is
"/classes/testing_cython_knn.py."
I have an "__init __.py" file in "/classes/"...
* * *
However, Pycharm does not recognize the ".so" file as a file which I can
import. What do I need to do in order to make this file available to import,
so I can test it?
* * *
Currently, I have successfully compiled and imported a "helloworld.so" file in
regular, terminal-based python...however, the function I defined was a
pythonic function...no C-stuff.
My Cythonic file is:
import numpy as np
cimport numpy as np
from scipy.stats import mode
from scipy.spatial.distance import cdist
from threading import Thread
cdef class KNN:
cdef public int k
cdef public str metric
cdef public np.ndarray trainingX
cdef public np.ndarray trainingY
cdef public np.ndarray predict(self,np.ndarray X):
cdef np.ndarray distances,predicted_classes,sorted_distance_indices
distances = cdist(X,self.trainingX,metric=self.metric)
predicted_classes = np.zeros(X.shape[0],dtype=np.float64)
sorted_distance_indices = np.argpartition(distances,self.k,axis=1)[:,:self.k]
predicted_classes,_ = mode(self.trainingY[sorted_distance_indices])
return predicted_classes
And setup.py:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
import numpy
extensions = [
Extension("knn",["cKNN.pyx"]),
Extension("*",["*.pyx"],include_dirs=[numpy.get_include()])]
setup(ext_modules = cythonize(extensions),include_dirs=[numpy.get_include()])
Currently, this fails on import to python running on the terminal with an
Import Error:
> Dynamic module does not define module export function (PyInit_knn)
Answer: The first thing I notice is that you are renaming your extension. For Cython,
the name of the extension must correspond with the name of the file to
compile. That should fix the
> Dynamic module does not define module export function (PyInit_knn)
Second, you are declaring two extensions, but you have only one file (which
needs Numpy), so you should either remove the first extension completely (the *
will take care of all the .pyx files in the folder) or remove the second and
integrate the 'include_dirs' directive into the first extension. So you should
change your extensions list to:
extensions = [
Extension("cKNN",["cKNN.pyx"],include_dirs=[numpy.get_include()])]
If you use --inplace or if you move the .so file to the right place, it will
be imported. PyCharm has nothing to do with it; it is all up to CPython. But in
your setup.py you have to be careful about setting the paths properly. If
your project's folder structure is like:
- knnProject (this opens in Pycharm)
- - knnextension
- - - classes
- - - - __init__.py
- - - - cKNN.pyx
- - - __init__.py (this is required to do module import from /classes)
- - setup.py
- - test.py
Your extension should say:
extensions = [
Extension("knnextension/classes/cKNN",["knnextension/classes/cKNN.pyx"],include_dirs=[numpy.get_include()])]
The shared library file (.so or .pyd) will show up (by using --inplace) inside
/classes. In the `classes/__init__.py` file you can import the class with:
from .cKNN import KNN
and then from test.py:
from knnextension.classes import KNN
Check other working cython extensions, [like my own](https://github.com/jr-
garcia/AssimpCy), to see how the structure might be (that one uses Numpy too).
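Putting it together, a minimal `setup.py` for the layout above could look like the sketch below (this uses a dotted extension name, which distutils also accepts; the in-place build command is shown in the comment):

    from distutils.core import setup
    from distutils.extension import Extension
    from Cython.Build import cythonize
    import numpy

    extensions = [
        Extension("knnextension.classes.cKNN",
                  ["knnextension/classes/cKNN.pyx"],
                  include_dirs=[numpy.get_include()]),
    ]

    # build with:  python setup.py build_ext --inplace
    setup(ext_modules=cythonize(extensions))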
|
Python: TypeError: coercing to Unicode: need string or buffer, module found
Question: I am new to Python, just learning.
I am taking a file as input and I want to print its text on the
console using Python.
# This will take the file as input
import fileinput
for line in fileinput.input():
print "The file name you provided is " + fileinput.filename()
#file of = open(fileinput, "r", 0)
with open(fileinput,'r') as myfile:
data=myfile.read()
print "This is your actual data \n\n" + data
The error I'm experiencing is:
TypeError: coercing to Unicode: need string or buffer, module found
Could anyone please help me out with this?
Answer: This particular problem is in the line below:
with open(fileinput,'r') as myfile:
`fileinput` is a module. I presume you mean to have:
with open(fileinput.filename(),'r') as myfile:
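Alternatively, since `fileinput` already iterates over the lines of the files named on the command line, the script can print the contents without reopening the file at all. A minimal sketch:

    import fileinput

    for line in fileinput.input():
        if fileinput.isfirstline():
            print "This is your actual data from " + fileinput.filename() + "\n"
        print line,   # trailing comma: the line already ends with a newline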
|
How can I use a Google Chrome extension with Selenium?
Question: I am trying to scrape match information from a page like this one (the page is
in the same format, but obviously has different values for different matches):
<https://csgolounge.com/match?m=8967>
The problem is, the information that I want is only displayed if you are using
the Chrome extension, "Lounge Destroyer"... After tons of trial and error, I
finally figured out that in order to get that information, the python script I
use has to have that extension "included in it" somehow. I have browsed other
answers here and found this code from a different stackoverflow thread that
demonstrates how to add an extension when using selenium:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
chop = webdriver.ChromeOptions()
chop.add_extension('Adblock-Plus_v1.4.1.crx')
driver = webdriver.Chrome(chrome_options = chop)
I went to [Chrome Extension Downloader](http://chrome-extension-
downloader.com/) to snag the .crx file for LoungeDestroyer, placed it in the
chrome extension folder (getting the file address from "Get Info"), and
modified the above code a little bit for my purposes to get the following:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
chop = webdriver.ChromeOptions()
chop.add_extension('Users/Username_Here/Library/Application Support/Google/Chrome/Default/Extensions/ghahcnmfjfckcedfajbhekgknjdplfcl/LoungeDestroyer_v0.9.3.7.crx')
driver = webdriver.Chrome(chrome_options = chop)
matchID = raw_input("Enter match ID (four digit number in CSGL URL): ")
driver.get("https://csgolounge.com/match?m="+matchID)
The problem is, I don't think I've substituted the right thing where the
'Adblock-Plus_v1.4.1.crx' was in the original code.
Running my modified version returns the following error:
IOError: Path to the extension doesn't exist
Any help or advice is greatly, greatly appreciated.
Answer: The problem was that I didn't have chromedriver installed
(<http://chromedriver.storage.googleapis.com/index.html?path=2.21/>). After
installing that, I had to enter the path to the chromedriver executable in my
code. All said and done, this was the code that worked:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
chop = webdriver.ChromeOptions()
chop.add_extension('/Users/Username_Here/Library/Application Support/Google/Chrome/Default/Extensions/ghahcnmfjfckcedfajbhekgknjdplfcl/LoungeDestroyer_v0.9.3.7.crx')
driver = webdriver.Chrome(executable_path='/Users/Username_Here/Downloads/chromedriver', chrome_options = chop)
# go to the match page
matchID = raw_input("Enter match ID (four digit number in CSGL URL): ")
driver.get("https://csgolounge.com/match?m="+matchID)
Also, the reason I was getting that extension-path error was because I didn't
have the forward slash in front of the word "Users" in the file address.
|
python relative import error when referencing file
Question: Now I got relative imports to work by [following
this](http://stackoverflow.com/a/15458607/3834059). But I am getting some
errors where the files are referenced with the wrong relative paths.
My folder structure is something like this
spamfilter_app
├── Makefile
└── spamfilter
├── classifier
│ ├── classifierNB.py
│ ├── exceptions.py
│ ├── __init__.py
│ └── train.py
├── data
│ ├── corpus1
│ ├── corpus2
│ └── corpus3
├── __init__.py
├── logfiles
│ └── logfile.txt
└──run
├── __init__.py
└── test.py
It's absolute path is `/home/tasdik/Desktop/spamfilter_app`
* * *
My files look something like this
**test.py**
from ..classifier.train import Trainer
CUR_DIR = os.path.abspath('.')
PARENT_DIR = os.path.abspath(os.path.join(CUR_DIR, os.path.pardir))
LOGGING_FILE = os.path.join(PARENT_DIR, 'logfiles', 'logfile.txt')
CORPUS_DIR = os.path.join(PARENT_DIR, 'data')
logging.basicConfig(
filename=LOGGING_FILE,
level = logging.DEBUG,
filemode = 'w',
format = '%(asctime)s - %(levelname)s - %(message)s'
)
* * *
**train.py**
from .classifierNB import NaiveBayesClassifier
CUR_DIR = os.path.abspath('.')
PARENT_DIR = os.path.abspath(os.path.join(CUR_DIR, os.path.pardir))
LOGGING_FILE = os.path.join(PARENT_DIR, 'logfiles', 'logfile.txt')
logging.basicConfig(
filename=LOGGING_FILE,
level = logging.DEBUG,
filemode = 'w',
format = '%(asctime)s - %(levelname)s - %(message)s'
)
* * *
**classifierNB.py**
CUR_DIR = os.path.abspath('.')
PARENT_DIR = os.path.abspath(os.path.join(CUR_DIR, os.path.pardir))
LOGGING_FILE = os.path.join(PARENT_DIR, 'logfiles', 'logfile.txt')
logging.basicConfig(
filename=LOGGING_FILE,
level=logging.DEBUG,
filemode='w',
format='%(asctime)s - %(levelname)s - %(message)s'
)
**Error that I get**
When I do `$ python -m spamfilter.run.test` from the `spamfilter` parent
directory, I get this error. Here is the traceback.
$ python -m spamfilter.run.test
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/tasdik/Desktop/spamfilter_app/spamfilter/run/test.py", line 18, in <module>
from ..classifier.train import Trainer
File "spamfilter/classifier/train.py", line 27, in <module>
from .classifierNB import NaiveBayesClassifier
File "spamfilter/classifier/classifierNB.py", line 43, in <module>
format='%(asctime)s - %(levelname)s - %(message)s'
File "/usr/lib/python2.7/logging/__init__.py", line 1540, in basicConfig
hdlr = FileHandler(filename, mode)
File "/usr/lib/python2.7/logging/__init__.py", line 911, in __init__
StreamHandler.__init__(self, self._open())
File "/usr/lib/python2.7/logging/__init__.py", line 936, in _open
stream = open(self.baseFilename, self.mode)
IOError: [Errno 2] No such file or directory: '/home/tasdik/Desktop/logfiles/logfile.txt'
The `logfile.txt` has been given the wrong absolute path. I tried hard-coding
the path for `logfile.txt`, but I faced the same problem when the corpus
directories were to be accessed.
Could anybody explain where I am going wrong?
**EDIT**
As suggested, I added the project path to `sys` path by adding
sys.path.append("/home/tasdik/Desktop/spamfilter_bad")
in each of the mentioned files. And did the imports like
from spamfilter.classifier.classifierNB.py import NaiveBayesClassifier
I think the imports happen just fine, but the `logfile` path error still
persists.
Here is an updated error traceback
$ python -m spamfilter.run.test
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/tasdik/Desktop/spamfilter_app/spamfilter/run/test.py", line 22, in <module>
from spamfilter.classifier.train import Trainer
File "spamfilter/classifier/train.py", line 30, in <module>
from spamfilter.classifier.classifierNB.py import NaiveBayesClassifier
File "spamfilter/classifier/classifierNB.py", line 46, in <module>
format='%(asctime)s - %(levelname)s - %(message)s'
File "/usr/lib/python2.7/logging/__init__.py", line 1540, in basicConfig
hdlr = FileHandler(filename, mode)
File "/usr/lib/python2.7/logging/__init__.py", line 911, in __init__
StreamHandler.__init__(self, self._open())
File "/usr/lib/python2.7/logging/__init__.py", line 936, in _open
stream = open(self.baseFilename, self.mode)
IOError: [Errno 2] No such file or directory: '/home/tasdik/Desktop/logfiles/logfile.txt'
Answer: **Edit** : replace `os.path.abspath('.')` with `os.path.abspath(__file__)`
`os.path.abspath('.')` gives you different results when you execute your
script from different folders: `.` is the current working directory, which
depends on the path the script is invoked from.
**Original Post**
You need to add your project into sys path, and make sure you have
`__init__.py` under folder `spamfilter`
`test.py`
import sys
sys.path.append("/path/to/spamfilter_app")
from spamfilter.classifier.train import Trainer
|
Python Random function
Question: Write a program to create an array of random numbers between 1000 and 2000 and
count the number of values that are higher than 1500.
I kind of have an understanding of setting the range, but not of counting the
number of matching values.
What I have is this:
import random
for x in range(20):
a=random.randint(1000,2000)
b=(a>1500)
print b
print
This simply prints Trues or Falses; I need to know the total number of
values over 1500, not whether each one is or isn't. Thanks
Answer: This is how you do it, assuming your original code is correct in other
respects:
import random
count = 0
for x in range(20):
a=random.randint(1000,2000)
# b=(a>1500)// This is expected to give a boolean
# print b
if (a > 1500):
count = count + 1
print count
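Since the exercise also asks for an array of the random numbers, a small variation (just a sketch) keeps the values in a list and then counts:

    import random

    values = [random.randint(1000, 2000) for _ in range(20)]  # the "array" of numbers
    count = sum(1 for v in values if v > 1500)                 # how many exceed 1500
    print values
    print count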
|