text
stringlengths 226
34.5k
|
---|
How to load a MATLAB file which has 2 arrays, in 1 code of line
Question: I have 1 file with 2 arrays inside it (x and y). This is the dictionary keys:
dict_keys(['__version__', 'x', '__header__', 'y', '__globals__'])
These are the instructions I write to call my arrays without the dict_keys:
x=sio.loadmat('C:/Users/rocio/Documents/Python Scripts/SLEEP/SLEEP_F4/FeaturesAll/AWA_FeaturesAll.mat')['x']
y=sio.loadmat('C:/Users/rocio/Documents/Python Scripts/SLEEP/SLEEP_F4/FeaturesAll/AWA_FeaturesAll.mat')['y']
Is there a way to do this with only one line of code?
I have tried this so far without success:
x_y=sio.loadmat('C:/Users/rocio/Documents/Python Scripts/SLEEP/SLEEP_F4/FeaturesAll/AWA_FeaturesAll.mat')['x']['y']
x_y=(sio.loadmat('C:/Users/rocio/Documents/Python Scripts/SLEEP/SLEEP_F4/FeaturesAll/AWA_FeaturesAll.mat')(['x','y']))
x_y=sio.loadmat('C:/Users/rocio/Documents/Python Scripts/SLEEP/SLEEP_F4/FeaturesAll/AWA_FeaturesAll.mat')(['x']['y'])
x_y=(sio.loadmat('C:/Users/rocio/Documents/Python Scripts/SLEEP/SLEEP_F4/FeaturesAll/AWA_FeaturesAll.mat')(['x']['y']))
Answer: Is it really that important to do this in _one line_? It makes sense to want
just one call to `loadmat()`, but insisting on one line seems unnecessary.
This looks pretty straightforward:
features = sio.loadmat('C:/Users/rocio/Documents/Python Scripts/SLEEP/SLEEP_F4/FeaturesAll/AWA_FeaturesAll.mat')
x = features['x']
y = features['y']
|
When I write to a text file in python 3.5.1 the text file is blank
Question: I'm writing to a text file but the final content is blank. Can anyone help.
def main():
types = input('What is the device type? Phone or Tablet')
save = open('type.txt', 'w')
save.write(types)
save.close
if types == 'phone':
import Type1
elif types == 'tablet':
import Type2
else:
main()
main()
I've tried what I could but I'm not an expert on python.
Answer: In Python every method call is with parentheses. Just use `save.close()`.
|
Coinbase Wallet API python Authentication Error, Invalid Signature
Question: Python 3.4 Coinbase Wallet API V2
* * *
I have been stuck for some time trying to figure out while this buy call (and
other api calls like get_payment_methods() and get_accounts() ) run into
authentication errors. I have successfully been able to run some of these api
calls alone in a separate file.
* * *
**What Does Not Work** :
class api_call(object):
def __init__(self):
self.CB_key = xxxxxxxx
self.CB_secret = yyyyyyyy
self.CB_account = zzzzzzzzz
self.CB_payment_method = aaaaaaaaaa
def buy_c(self, exchange, b_amount):
client = Client(self.CB_key, self.CB_secret)
buy = client.buy(self.CB_account, amount=str(b_amount), currency="USD", payment_method=self.CB_payment_method)
api = api_call()
buy = api.buy('COIN-BS', 1)
I have triple checked my accounts, keys and secrets and have also tried hard
coding them inside the class definition instead of using **init** members.
* * *
**What Works:**
from coinbase.wallet.client import Client
client = Client(<api_key>, <api_secret>)
buy = client.buy('zzzzzzzz', amount='1', currency="USD", payment_method='aaaaaaaaaa')
* * *
The error is as follows:
Traceback (most recent call last):
File "api_call.py", line 126, in <module>
buy = api.buy('COIN-BS', 1)
File "api_call.py", line 110, in buy
buy = client.buy_c( self.CB_account, amount=str(amount), currency="USD", payment_method="XXXXXXXXXXXX")
File "/home/LA/.local/lib/python3.4/site-packages/coinbase/wallet/client.py", line 381, in buy
response = self._post('v2', 'accounts', account_id, 'buys', data=params)
File "/home/LA/.local/lib/python3.4/site-packages/coinbase/wallet/client.py", line 132, in _post
return self._request('post', *args, **kwargs)
File "/home/LA/.local/lib/python3.4/site-packages/coinbase/wallet/client.py", line 116, in _request
return self._handle_response(response)
File "/home/LA/.local/lib/python3.4/site-packages/coinbase/wallet/client.py", line 125, in _handle_response
raise build_api_error(response)
coinbase.wallet.error.AuthenticationError: APIError(id=authentication_error): invalid signature
Im thinking that the problem may be due to the use of the API buy method
inside of a definition of a class file, that is my api_call.py class.I think
this because I can call the buy method ( and others ) just fine from separate
files and even outside of the class indentations inside of api_call.py.
* * *
Does anyone have any idea why this would raise an Authentication Error? I have
looked around in [error.py](https://github.com/coinbase/coinbase-
python/blob/master/coinbase/wallet/error.py), but haven't yet found a clue on
why this might be happening.
As always, any help or thoughts regarding the matter is much appreciated!
* * *
**EDIT**
After running the working and non working code on the same file, I was
successfully able to make both buys. After trying a few other things, I found
that apparently any POSTs to the API using those globally modified variables,
command line arguments, and updated object member variables will produce this
authentication error. Is this supposed to happen?
from coinbase.wallet.client import Client
#Globals
key = 'xxxxxx'
secret = 'yyyyyy'
account = 'zzzzzzz'
payment = 'aaaaaaa'
class api_call(object):
def __init__(self):
self.CB_key = None
self.CB_secret = None
self.CB_account = None
self.CB_payment_method = None
def buy_c(self, exchange, b_amount):
client = Client(key, secret)
buy = client.buy(account, amount=str(b_amount), currency="USD", payment_method=payment)
client = Client(key, secret)
buy = client.buy(account, amount='1', currency="USD", payment_method=payment)
api = api_call()
buy = api.buy_c('COIN-BS', 1)
Answer: after extensive static analysis, I have concluded this is probably your issue.
buy = api.buy('COIN-BS', 1) -> buy = api.buy_c('COIN-BS', 1)
also figure out why your stack trace has `client.buy_c` instead of
`client.buy`
|
Remove backslash with str.translate
Question: I wrote the following Python2.7 code to remove digits and the backslash
character (\\) from some string. I attempted to use the str.translate method,
because I had learned that it is very efficient. The code below successfully
removed digits from the string x, but is unable to remove the single backslash
in y. What did I do wrong?
import string
x = 'xb7'
y = '\xb7'
print x.translate(None, '\\' + string.digits)
print y.translate(None, '\\' + string.digits)
Answer: You don't have any strings with backslashes. `x` has the characters `'x'`,
`'b'`, and `'7'`, while `y` has a single character, `'·'`, denoted by the hex
code `b7`. If you want the literal string `'\xb7'`, with four characters in
it, use a raw string by prefixing an `r` in front of the literal.
>>> import string
>>> print r'\xb7'.translate(None, '\\' + string.digits)
xb
|
Spyder crashes at start: UnicodeDecodeError
Question: During a Spyder session my Linux froze. After startup, I could not start
Spyder; I got the following error instead:
(trusty)dreamer@localhost:~$ spyder
Traceback (most recent call last):
File "/home/dreamer/anaconda2/bin/spyder", line 2, in <module>
from spyderlib import start_app
File "/home/dreamer/anaconda2/lib/python2.7/site-packages/spyderlib/start_app.py", line 13, in <module>
from spyderlib.config import CONF
File "/home/dreamer/anaconda2/lib/python2.7/site-packages/spyderlib/config.py", line 736, in <module>
subfolder=SUBFOLDER, backup=True, raw_mode=True)
File "/home/dreamer/anaconda2/lib/python2.7/site-packages/spyderlib/userconfig.py", line 215, in __init__
self.load_from_ini()
File "/home/dreamer/anaconda2/lib/python2.7/site-packages/spyderlib/userconfig.py", line 260, in load_from_ini
self.readfp(configfile)
File "/home/dreamer/anaconda2/lib/python2.7/ConfigParser.py", line 324, in readfp
self._read(fp, filename)
File "/home/dreamer/anaconda2/lib/python2.7/ConfigParser.py", line 479, in _read
line = fp.readline()
File "/home/dreamer/anaconda2/lib/python2.7/codecs.py", line 690, in readline
return self.reader.readline(size)
File "/home/dreamer/anaconda2/lib/python2.7/codecs.py", line 545, in readline
data = self.read(readsize, firstline=True)
File "/home/dreamer/anaconda2/lib/python2.7/codecs.py", line 492, in read
newchars, decodedbytes = self.decode(data, self.errors)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xfe in position 2: invalid start byte
(trusty)dreamer@localhost:~$
I have found [this
solution](http://stackoverflow.com/questions/36958189/spyder-unicode-decode-
error-in-startup), which sounds very much like my problem, but am curious if
there are others, and whether anyone knows why this occurred.
Answer: My guess is that your spyder configuration file somehow got corrupted. This is
the file `spyder.ini`, which resides in a directory like `~/.spyder2` (the
exact name of the directory depends on the version you have installed). Maybe
the encoding of the configuration file changed or a Unicode byte order mark
was somehow introduced.
Possible solutions: use an editor to convert the file back to UTF-8; delete
the configuration file; delete the whole directory containing the
configuration file. The last two obviously delete any changes you made to the
configuration.
|
Why does print() print an empty tuple instead of a newline?
Question: **The error was caused by a typo. Please flag this question as off-topic.**
I am having a little issue with the following lines.
from __future__ import print_function
print()
If I open up my Windows CLI and run it, it runs as expected.
[](http://i.stack.imgur.com/bXcyP.png)
When I stick it in a program and execute it, instead of simply printing a
newline, it prints `()`.
[](http://i.stack.imgur.com/MYAVz.png)
Has anybody run into this before?
**Additional Details:**
If I run a program with just those two lines, it runs as expected.
But for some reason, in my program `print()` prints `()`. If I replace that
line with `print(1)`, it prints `1` as it should.
Running on Windows 8 64-bit. Python 2.7.11 (v2.7.11:6d1b6a68f775)
**Minimal, complete, and verifiable example:**
class A:
def f(self):
print()
if __name__ == '__main__':
a = A()
a.f()
**Final Update:**
Oh my!!!! I am an idiot.
I have a driver program that has the future import, but the class (which is
another file) does not! I do have statements like `print('abc',
file=sys.stderr)`, but they were not being executed, so the program ran no
problem.
My example above actually runs fine. The example I was running didn't have the
import. The file I was editing (otherwise an exact copy of the example) did.
Woops!!!!
Answer: [`print`](https://docs.python.org/2/reference/simple_stmts.html#the-print-
statement) is a special statement in python2.
When you do :
from __future__ import print_function
print()
You are actually calling the [print
function](https://docs.python.org/2/library/functions.html#print), which has
the same behavior as the one in python3.
In your program, you call the statement, and not the function. Hence, the
`print()` prints an empty tuple (which is indeed what `()` is).
**Additional note :**
If I add `from __future__ import print_function` at the beginning of your
example, I get a newline as expected, and not an empty tuple.
|
Python, copy only directories
Question: I have a program that has a list of some files. I have to copy only the
directories and the subdirectories from the list to a specified directories
and don't need to copy the files. I tried this, but it doesn't work.
def copiarDirs():
items = list.curselection()
desti = tkFileDialog.askdirectory()
for dirs in os.walk(items, topdown=False):
for name in dirs:
#for i in items :
aux=root+"/"+list.get(i)
tryhard=("cp "+str(aux)+" "+str(desti))
os.system(tryhard)
Answer: Try this:
import os
def copyDirs(source, destination):
for subdir, dirs, files in os.walk(source):
for f in files:
dir = destination + os.path.join(subdir).split(':')[1]
if not os.path.exists(dir):
os.makedirs(dir)
sourceDir = 'D:\\Work\\'
destDir = 'D:\\Dest\\'
copyDirs(sourceDir, destDir) #calling function
|
sending mails with different domain names in python
Question: I'm getting "SMTP AUTH extension not supported by server" while sending mails
with different domain names. For example, I have sent mails using
[email protected]. Here my domain is example.com.
when I trying send mail I am getting "SMTP AUTH extension not supported by
server" error
## here is my code
**settings.py**
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.somedomain.com'
EMAIL_HOST_USER = '[email protected]'
EMAIL_HOST_PASSWORD = '***********'
EMAIL_PORT = 25
EMAIL_USE_TLS = True
**views.py**
msg = "Hi,this is testing mail."
try:
send_mail('Appointment mail',msg,'',['[email protected]'])
response = 'Message sent successfully.You will receive response in very soon.Thank you.'
except Exception as e:
response = e
return HttpResponse(response)
Can anyone help me, Thanks in advance!
Answer: I think this could help you. [Getting 'str' object has no attribute 'get' in
Django](http://stackoverflow.com/questions/22788135/getting-str-object-has-no-
attribute-get-in-django). You cannot return 'str' directly as a response. You
need HttpResponse
from django.http import HttpResponse
return HttpResponse(response)
Hope this helps.
|
Split String with unicode and backslash with Python
Question: I am experiencing trouble extracting a float from a string. The string is the
output of webscraping:
input = u'<strong class="ad-price txt-xlarge txt-emphasis " itemprop="price">\r\n\xa3450.00pw</strong>'
I want to get:
`output: 3450.00`
but I didn't find a way to do it. I have tried to extract it with the split /
replace functions:
word.split("\xa")
word.replace('<strong class="ad-price txt-xlarge txt-emphasis " itemprop="price">\r\n\xa','')
I tried to use the `re` library. It does not work as well, it only extract
`450.00`
import re
num = re.compile(r'\d+.\d+')
num.findall(word)
[u'450.00']
Thus, I still have the same problem in the end with the `\` Do you have an
idea ?
Answer: `\xa3` is the pound sign.
import unidecode
print unidecode.unidecode(input)
<strong class="ad-price txt-xlarge txt-emphasis " itemprop="price">
PS450.00pw</strong>
To get the number from that, you better use regex:
import re
num = re.compile(r'\d+.\d+')
num.findall(input)[0]
**Result**
'450.00'
|
Unable to add network printer using python
Question: I am very new to python and trying to execute printer installation using
python but it doesn't work. If I execute the same using cmd, it works.
import os
os.system("rundll32 printui.dll PrintUIEntry /in /n \\print-kunnu.com\FollowYou")
When I run this, it shows output as `0` which indicates output is success. But
it doesn't add the printer.
If I run this in command prompt:
rundll32 printui.dll PrintUIEntry /in /n \\print-kunnu.com\FollowYou
it adds the printer.
Could you please let me know what wrong I am doing ?
Answer: This could be a path issue. You could try to provide absolute path for
rundll32 and the dll. Another possible issue would be the parsing. If you were
running on Linux, I would suggest using shlex but on Windows, I am not sure
how it behaves. Try to catch the exception via: import os try:
os.system("rundll32 printui.dll PrintUIEntry /in /n \print-
kunnu.com\FollowYou") except: exc_type, exc_obj, exc_tb = sys.exc_info() print
"Error: " + str(exc_type)
|
How to list HDFS directory contents using webhdfs?
Question: Is it possible to check to contents of a directory in HDFS using `webhdfs`?
This would work as `hdfs dfs -ls` normally would, but instead using `webhdfs`.
How do I list a `webhdfs` directory using Python 2.6 to do so?
Answer: You can use the `LISTSTATUS` verb. The docs are at [List a
Directory](https://hadoop.apache.org/docs/r1.0.4/webhdfs.html#LISTSTATUS), and
the following code can be found on the [WebHDFS REST
API](https://hadoop.apache.org/docs/r1.0.4/webhdfs.html) docs:
With `curl`, this is what it looks like:
curl -i "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=LISTSTATUS"
The response is a [FileStatuses
JSON](https://hadoop.apache.org/docs/r1.0.4/webhdfs.html#FileStatuses) object:
{
"name" : "FileStatuses",
"properties":
{
"FileStatuses":
{
"type" : "object",
"properties":
{
"FileStatus":
{
"description": "An array of FileStatus",
"type" : "array",
"items" : fileStatusProperties
}
}
}
}
}
[fileStatusProperties](https://hadoop.apache.org/docs/r1.0.4/webhdfs.html#fileStatusProperties)
(for the `items` field) has this JSON schema:
var fileStatusProperties =
{
"type" : "object",
"properties":
{
"accessTime":
{
"description": "The access time.",
"type" : "integer",
"required" : true
},
"blockSize":
{
"description": "The block size of a file.",
"type" : "integer",
"required" : true
},
"group":
{
"description": "The group owner.",
"type" : "string",
"required" : true
},
"length":
{
"description": "The number of bytes in a file.",
"type" : "integer",
"required" : true
},
"modificationTime":
{
"description": "The modification time.",
"type" : "integer",
"required" : true
},
"owner":
{
"description": "The user who is the owner.",
"type" : "string",
"required" : true
},
"pathSuffix":
{
"description": "The path suffix.",
"type" : "string",
"required" : true
},
"permission":
{
"description": "The permission represented as a octal string.",
"type" : "string",
"required" : true
},
"replication":
{
"description": "The number of replication of a file.",
"type" : "integer",
"required" : true
},
"type":
{
"description": "The type of the path object.",
"enum" : ["FILE", "DIRECTORY"],
"required" : true
}
}
};
You can process the filenames in Python using
[pywebhdfs](http://pythonhosted.org/pywebhdfs/), like this:
import json
from pprint import pprint
from pywebhdfs.webhdfs import PyWebHdfsClient
hdfs = PyWebHdfsClient(host='host',port='50070', user_name='hdfs') # Use your own host/port/user_name config
data = hdfs.list_dir("dir/dir") # Use your preferred directory, without the leading "/"
file_statuses = data["FileStatuses"]
pprint file_statuses # Display the dict
for item in file_statuses["FileStatus"]:
print item["pathSuffix"] # Display the item filename
Instead of `print`ing each object, you can actually work with the items as you
need. The result from `file_statuses` is simply a Python `dict`, so it can be
used like any other `dict`, provided that you use the right keys.
|
End of script output before headers: wsgi.py
Question: I am trying to install my django project with Apache, mod_wsgi and python3.
but Apache still gives this error:
Exception ignored in: <module 'threading' from '/usr/lib/python3.4/threading.py'>
Traceback (most recent call last):
File "/usr/lib/python3.4/threading.py", line 1288, in _shutdown
assert tlock is not None
AssertionError:
End of script output before headers: wsgi.py
I lost two days trying to fix this problem, I know that this error can be
produced by several reasons but I not find where the problem.
Here the **wsgi.py** content:
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import os
import site, sys
path = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
if path not in sys.path:
sys.path.append(path)
sys.path.append('/var/www/myproject/myproject_env/bin/python3.4/dist-packages')
site.addsitedir('/var/www/myproject/myproject_env/bin/python3.4/dist-packages')
os.environ["DJANGO_SETTINGS_MODULE"] = "myproject.settings"
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
The Apache configuration is as follows:
<VirtualHost *:80>
ServerName mydomain.com
ServerAlias www.mydomain.com
DocumentRoot /var/www
Alias /static/ /var/www/myproject/static/
Alias /static/admin/ /var/www/myproject/static/admin/
Alias /uploads/ /var/www/myproject/uploads/
WSGIDaemonProcess myproject lang='fr_FR.UTF-8' locale='fr_FR.UTF-8' python-path=/var/www/myproject:/var/www/myproject/myproject_env/bin/python3.4/dist-packages
WSGIProcessGroup myproject
WSGIScriptAlias / /var/www/myproject/myproject/wsgi.py
WSGIApplicationGroup %{GLOBAL}
<Directory "/var/www/myproject/myproject/">
Require all granted
</Directory>
<Directory "/var/www/myproject/myproject/wsgi.py">
AllowOverride None
Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
Require all granted
</Directory>
<Directory /static/admin/>
Require all granted
</Directory>
<Location "/uploads/">
SetHandler None
</Location>
ErrorLog /var/log/apache2/myproject.log
CustomLog /var/log/apache2/myproject.access.log combined
</VirtualHost>
Please anyone helps me fix this?
Answer: **Solved** : I changed the location of my Django project to another folder in
a new linux account. I think (and I'm not really sure) the error occurred
because **/var/www** contains another python project using cgi-bin, this maybe
creates conflict with my Django project.
|
Web Scraping with Python - Selecting div, h2 and h3 class
Question: This is my first time with Python and web scraping. Have been looking around
and still unable to get what I need to do.
Below are print screen of the elements that I've used via Chrome.
What I am trying to do is that, I am trying to get the apartment names and the
address from the selected city name.

import requests
from bs4 import BeautifulSoup
#url = 'http://www.homestead.ca/apartments-for-rent/'
rootURL = 'http://www.homestead.ca'
response = requests.get(rootURL)
html = response.content
soup = BeautifulSoup(html,'lxml')
dropdown_list = soup.select(".primary .child-pages a")
#city_names=[dropdown_list_value.text for dropdown_list_value in dropdown_list]
#print (city_names)
cityLinks=[rootURL + dropdown_list_value['href'] for dropdown_list_value in dropdown_list]
for cityLinks_select in dropdown_list: #Looping each city from the Apartment drop down list
print ('Selecting city:',cityLinks_select.text)
cityResponse = requests.get(cityLinks)
cityHtml = cityResponse.content
citySoup = BeautifulSoup(cityHtml,'lxml')
community_list = soup.select(".extended-search .property-container a[h2 h3]")
get and print the apartment link
get and print the apartment name
get and print the address of the apartment
Answer: As I commented, some of the data is dynamically created, if we look at the
source itself we see:
<div class="content">
<div class="title-container">
<h2 class="building-name"><%= building.get('name') %></h2>
<h3 class="address"><%= building.get('address').address %></h3>
</div>
<div class="rent">
<h4 class="sub-title">Rent from</h4>
<% if (building.get('statistics').suites.rates.min !== 'undefined') { %>
<% $min_rate = commaSeparateNumber(parseInt(building.get('statistics').suites.rates.min)); %>
<span class="rent-value">$<%= $min_rate %></span>
<% } %>
</div>
All we can get from the source is the building name, the address and the ph
number:
cityLinks = [rootURL + dropdown_list_value['href'] for dropdown_list_value in dropdown_list]
# you need to iterate over the joined urls
for city in cityLinks: # Looping each city from the Apartment drop down list
cityResponse = requests.get(city)
cityHtml = cityResponse.content
citySoup = BeautifulSoup(cityHtml, 'lxml')
# all the info we can parse is inside the div class="building-info"
for div in citySoup.select("div.building-info"):
print(div.select_one("h1.building-name").text.strip())
print(div.select_one("h2.location").text.strip())
print(div.select_one("div.contact-container div.phone").text.strip())
We can get all the data in _json_ format if we mimic an ajax request:
import requests
from bs4 import BeautifulSoup
from pprint import pprint as pp
rootURL = 'http://www.homestead.ca'
response = requests.get(rootURL)
html = response.content
soup = BeautifulSoup(html, 'lxml')
dropdown_list = soup.select(".primary .child-pages a")
cityLinks = (rootURL + dropdown_list_value['href'] for dropdown_list_value in dropdown_list)
# params for our request
params = {"show_promotions": "true",
"show_custom_fields": "true",
"client_id": "6",
"auth_token": "sswpREkUtyeYjeoahA2i",
"min_bed": "-1",
"max_bed": "100",
"min_bath": "0",
"max_bath": "10",
"min_rate": "0",
"max_rate": "4000",
"keyword": "false",
"property_types": "low-rise-apartment,mid-rise-apartment,high-rise-apartment,luxury-apartment,townhouse,house,multi-unit-house,single-family-home,duplex,tripex,semi",
"order": "max_rate ASC, min_rate ASC, min_bed ASC, max_bath ASC",
"limit": "50",
"offset": "0",
"count": "false"}
for city in cityLinks: # Looping each city from the Apartment drop down list
with requests.Session() as s:
r= s.get(city)
# we need to parse the city_id for out next request to work
soup = BeautifulSoup(r.content)
city_id = soup.select_one("div.hidden.search-data")["data-city-id"]
# update params with the city id
params["city_id"] = city_id
js = s.get("http://api.theliftsystem.com/v2/search", params=params).json()
pp(js)
Now we get data like:
[{u'address': {u'address': u'325 North Park Street',
u'city': u'Brantford',
u'city_id': 332,
u'country': u'Canada',
u'country_code': u'CAN',
u'intersection': u'',
u'neighbourhood': u'',
u'postal_code': u'N3R 2X4',
u'province': u'Ontario',
u'province_code': u'ON'},
u'availability_count': 6,
u'availability_status': 1,
u'availability_status_label': u'Available Now',
u'building_header': u'',
u'client': {u'email': u'[email protected]',
u'id': 6,
u'name': u'Homestead Land Holdings',
u'phone': u'613-546-3146',
u'website': u'www.homestead.ca'},
u'contact': {u'alt_extension': u'',
u'alt_phone': u'',
u'email': u'[email protected]',
u'extension': u'',
u'fax': u'(519) 752-6855',
u'name': u'',
u'phone': u'519-752-3596'},
u'details': {u'features': u'',
u'location': u'',
u'overview': u"Located on North Park Street and Memorial Avenue,this quiet building is within walking distance of the following: - Zehrs Plaza, North Park Plaza, Shoppers Drug Mart, Zehrs Grocery Store, Zellers, Pet Store, Party Supply Store, furniture store, variety store, Black's Photography, paint shop and veterinary clinic\xa0 - Restaurants and coffee shops\xa0 - Wayne Gretzky Recreational Arena\xa0 - Medical Clinic,Shoppers Home Health Care Clinic and Pharmacy\xa0 - Catholic Elementary School\xa0 - On bus route ",
u'suite': u''},
u'geocode': {u'distance': None,
u'latitude': u'43.1703624',
u'longitude': u'-80.2605725'},
u'id': 309,
u'matched_beds': [u'0', u'1', u'2'],
u'matched_suite_names': [u'Bachelor', u'One Bedroom', u'Two Bedroom'],
u'min_availability_date': u'',
u'name': u'North Park Tower',
u'office_hours': u'',
u'parking': {u'additional': u'', u'indoor': u'', u'outdoor': u''},
u'permalink': u'http://www.homestead.ca/apartments/325-north-park-street-brantford',
u'pet_friendly': True,
u'photo': u'1443018148_2.jpg',
u'photo_path': u'http://s3.amazonaws.com/lws_lift/homestead/images/gallery/full/1443018148_2.jpg',
u'promotion': {u'featured': 0},
u'property_type': u'High-rise-apartment',
u'statistics': {u'suites': {u'bathrooms': {u'average': 1.0,
u'max': 1.0,
u'min': 1.0},
u'bedrooms': {u'average': u'1.0',
u'max': 2,
u'min': 0},
u'rates': {u'average': 950.0,
u'max': 1275.0,
u'min': 625.0},
u'square_feet': {u'average': 0.0,
u'max': u'0.0',
u'min': u'0.0'}}},
u'thumbnail_path': u'http://s3.amazonaws.com/lws_lift/homestead/images/gallery/256/1443018148_2.jpg',
u'website': {u'description': u'', u'title': u'', u'url': u''}},
{u'address': {u'address': u'661 West Street',
u'city': u'Brantford',
u'city_id': 332,
u'country': u'Canada',
u'country_code': u'CAN',
u'intersection': u'',
u'neighbourhood': u'',
u'postal_code': u'N3R 6W9',
u'province': u'Ontario',
u'province_code': u'ON'},
u'availability_count': 6,
u'availability_status': 1,
u'availability_status_label': u'Available Now',
u'building_header': u'',
u'client': {u'email': u'[email protected]',
u'id': 6,
u'name': u'Homestead Land Holdings',
u'phone': u'613-546-3146',
u'website': u'www.homestead.ca'},
u'contact': {u'alt_extension': u'',
u'alt_phone': u'',
u'email': u'[email protected]',
u'extension': u'',
u'fax': u'(519) 751-0379',
u'name': u'',
u'phone': u'519-751-3867'},
u'details': {u'features': u'',
u'location': u'',
u'overview': u'Located in the North end of Brantford, Westgate Tower is in an area that resembles a city within a city. There are a variety of banks, grocery stores, drug stores, malls, a wide selection of fast food, fine dining restaurants and an after hours medical centre, within waking distance.',
u'suite': u''},
u'geocode': {u'distance': None,
u'latitude': u'43.1733242',
u'longitude': u'-80.2482991'},
u'id': 310,
u'matched_beds': [u'0', u'1', u'2'],
u'matched_suite_names': [u'Bachelor', u'One Bedroom', u'Two Bedroom'],
u'min_availability_date': u'',
u'name': u'Westgate Apartments',
u'office_hours': u'',
u'parking': {u'additional': u'', u'indoor': u'', u'outdoor': u''},
u'permalink': u'http://www.homestead.ca/apartments/661-west-street-brantford',
u'pet_friendly': True,
u'photo': u'1443017488_1.jpg',
u'photo_path': u'http://s3.amazonaws.com/lws_lift/homestead/images/gallery/full/1443017488_1.jpg',
u'promotion': {u'featured': 0},
u'property_type': u'High-rise-apartment',
u'statistics': {u'suites': {u'bathrooms': {u'average': 1.0,
u'max': 1.0,
u'min': 1.0},
u'bedrooms': {u'average': u'1.0',
u'max': 2,
u'min': 0},
u'rates': {u'average': 975.0,
u'max': 1300.0,
u'min': 650.0},
u'square_feet': {u'average': 0.0,
u'max': u'0.0',
u'min': u'0.0'}}},
u'thumbnail_path': u'http://s3.amazonaws.com/lws_lift/homestead/images/gallery/256/1443017488_1.jpg',
u'website': {u'description': u'', u'title': u'', u'url': u''}},
{u'address': {u'address': u'321 Fairview Drive',
u'city': u'Brantford',
u'city_id': 332,
u'country': u'Canada',
u'country_code': u'CAN',
u'intersection': u'',
u'neighbourhood': u'',
u'postal_code': u'N3R 2X6',
u'province': u'Ontario',
u'province_code': u'ON'},
u'availability_count': 8,
u'availability_status': 1,
u'availability_status_label': u'Available Now',
u'building_header': u'',
u'client': {u'email': u'[email protected]',
u'id': 6,
u'name': u'Homestead Land Holdings',
u'phone': u'613-546-3146',
u'website': u'www.homestead.ca'},
u'contact': {u'alt_extension': u'',
u'alt_phone': u'',
u'email': u'[email protected]',
u'extension': u'',
u'fax': u'(519) 752-6855',
u'name': u'',
u'phone': u'519-752-3596'},
u'details': {u'features': u'',
u'location': u'',
u'overview': u'Dornia Manor is a quiet, ninety-two unit apartment building located in the North end of Brantford. We offer one, two and three bedroom units and one penthouse suite. The building is located in close proximity to many major services such as banking, shopping, health services, recreational facilities, beauty shops, dry cleaners, schools and churches. There is a bus stop at the front door and highway 403 is within minutes.',
u'suite': u''},
u'geocode': {u'distance': None,
u'latitude': u'43.1706331',
u'longitude': u'-80.2584034'},
u'id': 308,
u'matched_beds': [u'1', u'2', u'3'],
u'matched_suite_names': [u'One Bedroom', u'Two Bedroom', u'Three Bedroom'],
u'min_availability_date': u'',
u'name': u'Dornia Manor',
u'office_hours': u'',
u'parking': {u'additional': u'', u'indoor': u'', u'outdoor': u''},
u'permalink': u'http://www.homestead.ca/apartments/321-fairview-drive-brantford',
u'pet_friendly': True,
u'photo': u'1443017947_1.jpg',
u'photo_path': u'http://s3.amazonaws.com/lws_lift/homestead/images/gallery/full/1443017947_1.jpg',
u'promotion': {u'featured': 0},
u'property_type': u'High-rise-apartment',
u'statistics': {u'suites': {u'bathrooms': {u'average': 1.375,
u'max': 2.0,
u'min': 1.0},
u'bedrooms': {u'average': u'2.25',
u'max': 3,
u'min': 1},
u'rates': {u'average': 1124.5,
u'max': 1350.0,
u'min': 899.0},
u'square_feet': {u'average': 0.0,
u'max': u'0.0',
u'min': u'0.0'}}},
u'thumbnail_path': u'http://s3.amazonaws.com/lws_lift/homestead/images/gallery/256/1443017947_1.jpg',
u'website': {u'description': u'', u'title': u'', u'url': u''}}]
That gives you the url, bedrooms and pretty much everything you could want.
Each dict in the list is one listing, you just need to access using the keys
to pull the data you want, for example:
for dct in js:
add = dct["address"]
print(add["city"])
print(add["postal_code"])
print(add["province"])
print(dct["permalink"])
Would give you:
Brantford
N3R 2X4
Ontario
http://www.homestead.ca/apartments/325-north-park-street-brantford
Brantford
N3R 6W9
Ontario
http://www.homestead.ca/apartments/661-west-street-brantford
Brantford
N3R 2X6
Ontario
http://www.homestead.ca/apartments/321-fairview-drive-brantford
The contact info is under `dct["contact"]` and the stats are under =
`dct["statistics"]`:
for dct in js:
contact = dct["contact"]
print(contact)
stats = dct["statistics"]
print(stats["suites"])
Which would give you:
{u'alt_phone': u'', u'fax': u'(519) 752-6855', u'name': u'', u'alt_extension': u'', u'phone': u'519-752-3596', u'extension': u'', u'email': u'[email protected]'}
{u'rates': {u'max': 1275.0, u'average': 950.0, u'min': 625.0}, u'bedrooms': {u'max': 2, u'average': u'1.0', u'min': 0}, u'bathrooms': {u'max': 1.0, u'average': 1.0, u'min': 1.0}, u'square_feet': {u'max': u'0.0', u'average': 0.0, u'min': u'0.0'}}
{u'alt_phone': u'', u'fax': u'(519) 751-0379', u'name': u'', u'alt_extension': u'', u'phone': u'519-751-3867', u'extension': u'', u'email': u'[email protected]'}
{u'rates': {u'max': 1300.0, u'average': 975.0, u'min': 650.0}, u'bedrooms': {u'max': 2, u'average': u'1.0', u'min': 0}, u'bathrooms': {u'max': 1.0, u'average': 1.0, u'min': 1.0}, u'square_feet': {u'max': u'0.0', u'average': 0.0, u'min': u'0.0'}}
{u'alt_phone': u'', u'fax': u'(519) 752-6855', u'name': u'', u'alt_extension': u'', u'phone': u'519-752-3596', u'extension': u'', u'email': u'[email protected]'}
{u'rates': {u'max': 1350.0, u'average': 1124.5, u'min': 899.0}, u'bedrooms': {u'max': 3, u'average': u'2.25', u'min': 1}, u'bathrooms': {u'max': 2.0, u'average': 1.375, u'min': 1.0}, u'square_feet': {u'max': u'0.0', u'average': 0.0, u'min': u'0.0'}}
You can put all that together to get whatever you need. Yo can tweak the
params and there are actually more if you check out the request in chrome
tools or firebug.
|
Python script runs perfect in PyCharm but in terminal don't
Question: I really don't understand... If you need screenshots of some settings please
tell me because I really don't know why it works in PyCharm but not outside
Pycharm...
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
import unittest
from datetime import datetime
class MYMaster(unittest.TestCase):
def Test_login(self):
<<<<< MY CODE >>>>>
if __name__ == '__main__':
unittest.main()
If I right click in PyCharm on line `class MYMaster(unittest.TestCase):` and
select option **Run 'Unittest in MYMaster** ' it will send this code
C:\Users\MyNameIs\AppData\Local\Programs\Python\Python35-32\python.exe"C:\Program Files (x86)\JetBrains\PyCharm 4.5.4\helpers\pycharm\utrunner.py" C:\Users\MyNameIs\PycharmProjects\untitled\MyProject\MyMain.py::MYMaster true
Testing started at 15:42 ...
Process finished with exit code 0
Empty test suite.
If I right click on `def Test_login(self):`and select option **Run 'Unittest
TestLogin** ' it will send this code (But it runs entire code and give
results).
C:\Users\MyNameIs\AppData\Local\Programs\Python\Python35-32\python.exe "C:\Program Files (x86)\JetBrains\PyCharm 4.5.4\helpers\pycharm\utrunner.py" C:\Users\MyNameIs\PycharmProjects\untitled\MyProject\MyMain.py::MYMaster::Test_login true
Testing started at 15:50 ...
Process finished with exit code 0
Now I decided to open the MyMain.py in Python IDLE where I clicked Run module
and this are the results:
= RESTART: C:\Users\MyNameIs\PycharmProjects\untitled\MyProject\MyMain.py =
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
> In Pycharm under Tool>Python Integrated Tools Default test runner =
> Unittests Docstring format = reStructuredText Checked checkbox Analyze
> Python code in docstrings
Answer: When you run `unittest`s in Pycharm, Pycharm provides a wrapper that executes
other code before your code.
To execute `unittest`s from the terminal, you should use this command,
provided that you have `python` set in your `PATH`:
`python -m unittest /path/to/script_with_tests.py`
You should also make sure that your functions' names start with `test`, ie
`test_login`.
From `unittest`
[documentation](https://docs.python.org/3.4/library/unittest.html):
> A testcase is created by subclassing unittest.TestCase. **The three
> individual tests are defined with methods whose names start with the letters
> test. This naming convention informs the test runner about which methods
> represent tests.**
|
gsutil acl set command AccessDeniedException: 403 Forbidden
Question: I am following the steps of setting up Django on Google App Engine, and since
Gunicorn does not serve static files, I have to store my static files to
Google Cloud Storage.
I am at the line with "Create a Cloud Storage bucket and make it publically
readable." on <https://cloud.google.com/python/django/flexible-
environment#run_the_app_on_your_local_computer>. I ran the following commands
as suggested:
$ gsutil mb gs://your-gcs-bucket
$ gsutil defacl set public-read gs://your-gcs-bucket
The first command is supposed to create a new storage bucket, and the second
line sets its default ACL. When I type in the command, the second line returns
an error.
Setting default object ACL on gs://your-gcs-bucket/...
AccessDeniedException: 403 Forbidden
I also tried other commands setting or getting acl, but all returns the same
error, with no additional information.
I am a newbie with google cloud services, could anyone point out what is the
problem?
Answer: I figured it out myself, and it is kind of silly. I didn't notice if the first
command is successful or not. And apparently it did not.
For a newbie like me, it is important to note that things like **bucket name
and project name are global** across its space. And what happened was that the
name I used to create a new bucket is already used by other people. And no
wonder that I do not have permission to access that bucket.
A better way to work with this is to name the bucket name wisely, like
prefixing project name and application name.
|
Python ftp.retrbinary() does not work when called parallelized in function
Question: I am using a script to retrieve data from a ftp-server. As I want to
parallelize the download, `ftp.retrbinary` is called within a function.
Atm the working code looks like this:
from ftplib import FTP
def download_file(file_in, target_file):
ftp.retrbinary('RETR '+file_in, open(target_file, 'wb').write)
return 0
ftp = FTP(FTP_HOST)
ftp.login(FTP_USER, FTP_PASS)
ftp.cwd(FTP_PATH)
for file_input in files_to_check:
download_file(target_dir,file_input)
As soon as I want to download parallelly, the download just gets stuck and no
data is transferred:
from ftplib import FTP
from joblib import Parallel, delayed
def download_file(file_in, target_file):
ftp.retrbinary('RETR '+file_in, open(target_file, 'wb').write)
return 0
ftp = FTP(FTP_HOST)
ftp.login(FTP_USER, FTP_PASS)
ftp.cwd(FTP_PATH)
Parallel(n_jobs=2)(delayed(download_file)(target_dir,file_input) for file_input in files_to_check)
Does anybody have an idea why `ftp.retrbinary` does not work for parallel
downloads?
Answer: You cannot use one FTP session for multiple parallel transfers. The FTP
protocol does not support that (contrary to the SFTP for example).
You have to open a separate FTP session for each parallel job.
|
How do I lazy evaluate variables in a python eval expression
Question: The scenario is that my user can supply an expression string for evaluation.
It could be:
"power=(x**2+y**2)**0.5"
Then I get an input stream of data with labels. E.g.:
x ; y ; z
1 ; 2 ; 3
1 ; 3 ; 4
And I will output a stream of data like this:
x ; y ; z ; power
3 ; 4 ; 3 ; 5.0
6 ; 8 ; 4 ; 10.0
But I would also give the user the possibility to use use more "expensive"
variables like e.g. 'sum':
"mysum = sum + 5"
But I don't want to calculate the 'sum' unless it is needed
So how do I best lazy evaluate the variables in the expression? Performance is
important, but not overly so.
Clear and understandable code is most important
I have tried to ask the question before - [How do I detect variables in a
python eval expression](http://stackoverflow.com/questions/37993137/how-do-i-
detect-variables-in-a-python-eval-expression). But apparently not being very
concise
I am using eval and namespace for it currently. Other methods are also
welcome.
Another approach that could give better performance is to detect all included
variables in the user expression to know beforehand what precalculated
variables will be needed.
A good answer to that would also be appreciated
Answer: The best solution to the question is also posted in the linked question:
import ast
def varsInExpression(expr):
st = ast.parse(expr)
return set(node.id for node in ast.walk(st) if type(node) is ast.Name)
This was posted by André Laszlo
It allows me to initialize the needed vars and functions before receiving any
data and only precalculate "smart" variables that are used
The lazy evaluation part has not yet received a good answer
|
Python: Accessing a particular cell in a data frame, change it, then save into a new version of the data frame
Question: Using Pandas, I have a data frame with a column containing a string that I am
splitting when a ; or , is seen:
import re
re.split(';|,',x)
I want to iterate through the column in the whole data frame and create a copy
of the current data frame with the new splits.
This is what I was trying based off of other answers here.
for row in x:
if pd.notnull(x):
SplitIDs = re.split(';|,',x)
df.iloc[0, df.columns.get_loc('x')] = SplitIDs
I don't know how to access the particular cell that the "for loop" is
currently looking at in order to change it to the split format (for the new
copy of the data frame).
If I could also get instruction on how to save these changes into a new copy
of the data frame, that would be great.
I apologize if my question is not clear. I am very new to scripting in general
- the more detailed your explanation is, the better. Thanks!
* * *
Alternatively, what if I wanted to create new columns every time the string is
split? For example, let's say the string was split into 3 parts now - instead
of having the 3 strings under the same existing column, I would like the 2 new
pieces are placed into new, adjacent columns.
If we went with this route, if the next row (in the same column) could split
into 2 (based on the same parameters we started with), it would take up space
of the existing column plus one of the new columns that we just created (and
the 3rd would be blank). OR if this row had MORE than the columns we just made
(and all the pieces couldn't fit), how do I keep making new columns to fit the
pieces?
Answer: Let me first describe how indexing works for pandas dataframe. Assuming you
have the following daframe:
df = DataFrame(randn(5,2),index=range(0,10,2),columns=list('AB'))
In [12]: df
Out[12]:
A B
0 0.767612 0.322622
2 0.875476 2.819955
4 1.876320 -1.591170
6 0.645850 -0.492359
8 0.148593 0.721617
Now for example in order to access a whole row you can use:
df.iloc[[2]]
A B
4 1.876320 -1.591170
You can find more examples here: [Pandas Slicing and
Indexing](http://pandas.pydata.org/pandas-docs/stable/indexing.html). Now let
say I want a new column where `C` where it is `A+B`. I can basically do the
following:
df['C'] = df['A'] + df['B']
Out[23]: df
A B C
0 0.767612 0.322622 1.090235
2 0.875476 2.819955 3.695431
4 1.876320 -1.591170 0.285151
6 0.645850 -0.492359 0.153490
8 0.148593 0.721617 0.870210
As you can see you do not need to access your data cell by cell, you can apply
a function to a whole column at the same time. Now, say your column where
strings are at is called myStrings, to create a new column based on results of
applying a regular expression to that, you can do the following:
df['new_string'] = df['myStrings'].str.replace(r'(\b\S)', r'+\1')
You can apply your own regular expression here. For more on `.str` function
you can check [here](http://pandas.pydata.org/pandas-docs/stable/text.html).
To be more specific about what you want:
data = {'raw': ['Arizona 1',
'Iowa 1',
'Oregon 0']}
df = pd.DataFrame(data, columns = ['raw'])
df
Out[31]:
raw
0 Arizona 1
1 Iowa 1
2 Oregon 0
And you want to split this based on space and save the two in two new columns
(or even a new dataframe):
df['firstSplit'] = df['raw'].str.split(' ').str.get(0)
This will result the following which I believe is what you are looking for:
df
Out[30]:
raw firstSplit
0 Arizona 1 Arizona
1 Iowa 1 Iowa
2 Oregon 0 Oregon
|
python gettext: specify locale in _()
Question: I am looking fo a way to set the language on the fly when requesting a
translation for a string in gettext. I'll explain why :
I have a multithreaded bot that respond to users by text on multiple servers,
thus needing to reply in different languages. The
[documentation](http://www.enseignement.polytechnique.fr/informatique/INF478/docs/Python3/library/gettext.html#changing-
languages-on-the-fly "doc") of gettext states that, to change locale while
running, you should do the following :
import gettext # first, import gettext
lang1 = gettext.translation('myapplication', languages=['en']) # Load every translations
lang2 = gettext.translation('myapplication', languages=['fr'])
lang3 = gettext.translation('myapplication', languages=['de'])
# start by using language1
lang1.install()
# ... time goes by, user selects language 2
lang2.install()
# ... more time goes by, user selects language 3
lang3.install()
But, this does not apply in my case, as the bot is multithreaded :
Imagine the 2 following snippets are running at the same time :
import time
import gettext
lang1 = gettext.translation('myapplication', languages=['fr'])
lang1.install()
message(_("Loading a dummy task")) # This should be in french, and it will
time.sleep(10)
message(_("Finished loading")) # This should be in french too, but it wont :'(
and
import time
import gettext
lang = gettext.translation('myapplication', languages=['en'])
time.sleep(3) # Not requested on the same time
lang.install()
message(_("Loading a dummy task")) # This should be in english, and it will
time.sleep(10)
message(_("Finished loading")) # This should be in english too, and it will
You can see that messages sometimes are translated in the wrong locale. But,
if I could do something like `_("string", lang="FR")`, the problem would
disappear !
Have I missed something, or I'm using the wrong module to do the task... I'm
using python3
Answer: The following simple example shows how to use a separate process for each
translator:
import gettext
import multiprocessing
import time
def translation_function(language):
try:
lang = gettext.translation('simple', localedir='locale', languages=[language])
lang.install()
while True:
print(_("Running translator"), ": %s" % language)
time.sleep(1.0)
except KeyboardInterrupt:
pass
if __name__ == '__main__':
thread_list = list()
try:
for lang in ['en', 'fr', 'de']:
t = multiprocessing.Process(target=translation_function, args=(lang,))
t.daemon = True
t.start()
thread_list.append(t)
while True:
time.sleep(1.0)
except KeyboardInterrupt:
for t in thread_list:
t.join()
The output looks like this:
Running translator : en
Traducteur en cours d’exécution : fr
Laufenden Übersetzer : de
Running translator : en
Traducteur en cours d’exécution : fr
Laufenden Übersetzer : de
When I tried this using threads, I only got an English translation. You could
create individual threads in each process to handle connections. You probably
do not want to create a new process for each connection.
|
Create new columns based on multiple conditions in Python
Question: I have the following dataframe:
data = [
(27450, 27450, 29420,"10/10/2016"),
(29420 , 36142, 29420, "10/10/2016"),
(11 , 11, 27450, "10/10/2016")]
#Create DataFrame base
df = pd.DataFrame(data, columns=("User_id","Actor1","Actor2", "Time"))
The first column contains the user_id, and each line represents one action
that he makes. Each user_id shows up either in "Actor1" or "Actor2" column.
First, I would like to create a new column where it will assign the value 1 if
the user_id is found in "Actor1" column and 0 otherwise.
Second, I would like to create a new column where for each user_id it will
store the "Actor"_i value that he interacted with.
For the above example, the output will look like:
Col1 Col2
1 29420
0 36142
1 27450
What is the most efficient pythonic way to do this?
Thanks a lot in advance!
Answer:
import numpy as np
import pandas as pd
data = [(27450, 27450, 29420,"10/10/2016"),
(29420 , 36142, 29420, "10/10/2016"),
(11 , 11, 27450, "10/10/2016")]
df = pd.DataFrame(data, columns=("User_id","Actor1","Actor2", "Time"))
mask = (df['User_id'] == df['Actor1'])
df['first actor'] = mask.astype(int)
df['other actor'] = np.where(mask, df['Actor2'], df['Actor1'])
print(df)
yields
User_id Actor1 Actor2 Time first actor other actor
0 27450 27450 29420 10/10/2016 1 29420
1 29420 36142 29420 10/10/2016 0 36142
2 11 11 27450 10/10/2016 1 27450
* * *
First create a boolean mask which is True when `User_id` equals `Actor1`:
In [51]: mask = (df['User_id'] == df['Actor1']); mask
Out[51]:
0 True
1 False
2 True
dtype: bool
Converting `mask` to ints creates the first column:
In [52]: mask.astype(int)
Out[52]:
0 1
1 0
2 1
dtype: int64
Then use `np.where` to select between two values. `np.where(mask, A, B)`
returns an array whose `ith` value is `A[i]` if `mask[i]` is True, and `B[i]`
otherwise. Thus, `np.where(mask, df['Actor2'], df['Actor1'])` takes the value
from `Actor2` where `mask` is True, and the value from `Actor1` otherwise:
In [53]: np.where(mask, df['Actor2'], df['Actor1'])
Out[53]: array([29420, 36142, 27450])
|
Can't connect to local Machine IP through TCP From Arduino Uno using SIM900 Shield
Question: So you have a basic understanding of the parts im using, I have:
* Arduino Uno
* Seeed Studio GPRS Shield v2.0 (<http://www.seeedstudio.com/wiki/GPRS_Shield_V2.0>)
* Ultimate GPS for Adafruit V3.3 (<https://www.adafruit.com/products/746?gclid=Cj0KEQjw3-W5BRCymr_7r7SFt8cBEiQAsLtM8qn4SCfVWIvAwW-x9Mu-FLeB6hLmVd0PAPVU8IAXXPgaAtaC8P8HAQ>)
Here is my problem: I have tested the Arduino stacked with the GPRS shield,
and it works fine with regards to accessing the internet through TCP, sending
SMS, etc.. However, my application requires me to send GPS data from the
adafruit GPS to a web server that I have already coded with Django and
postgresql. The backend is set up.
I need to send the data from the Uno (client) to my laptop (server), which I
coded in python (This is just to check whether it is creating a connection):
#!/usr/bin/env python
import socket
# import postgres database functions
TCP_IP = '192.168.1.112'
TCP_PORT = 10000
BUFFER_SIZE = 40
server_address = (TCP_IP,TCP_PORT)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print 'Socket created.'
# Bind socket to TCP server and port
try:
s.bind(server_address)
except socket.error as msg:
print 'Bind failed. Error Code : ' + str(msg[0]) + ' Message ' + msg[1]
sys.exit()
print 'Socket Bind Complete.'
# Start Listening on socket
s.listen(1) # Puts socket into server mode
print 'Listening on port: ', TCP_PORT
# Now Keep Talking with the client
while (1):
# Wait to accept a connection
conn, addr = s.accept() # Wait for incoming connection with accept()
print 'Connection address:', addr
data = conn.recv(BUFFER_SIZE)
if not data: break
print "recieved data: data", data
conn.send(data) #echo
conn.close()
I dont think there is a problem with this. From this I will post data to my
postgreSQL database. However, When I try to use AT commands on the SIM900
module to connect to the server using port 10000, I cannot connect:
AT+CIPSHUT
SHUT OK
AT+CGATT?
+CGATT: 1
OK
AT+CIPMUX=0
OK
AT+CSTT="fast.t-mobile.com","",""
OK
AT+CIICR
OK
AT+CIFSR
6.60.94.49
AT+CIPSTART="TCP","192.168.1.112,"10000"
OK
STATE: TCP CLOSED
CONNECT FAIL
I have tried connecting through TCP and replaced the AT+CIPSTART line with the
below statement and it worked, so I know TCP works:
AT+CIPSTART="TCP","www.vishnusharma.com", "80"
Is the IP i'm using wrong? I'm new to this, but if it makes a difference, im
using Ubuntu 16.04 partitioned on my Mac OSX. I have also checked the APN for
T-mobile and it seems fine.
Any help would be greatly appreciated. Thank You!
Answer: The IP you're using is inside a
[NAT](https://en.wikipedia.org/wiki/Network_address_translation) since it
starts with 192.168. Unless you have a private apn with the mobile operator
you're using, you won't be able to reach your Ubuntu from a public IP. Your
ISP gives you a public IP address which ir administrated by your router, so if
you want this to work, you'll have to do a [port
forwarding](https://en.wikipedia.org/wiki/Port_forwarding) from your router to
your Ubuntu.
To do the port forwarding you have to get in the router's configuration page
(Typically 192.168.1.1 but depends on the model) an there you'll have to
redirect the port XXX to 192.168.1.112:10000. After that you have to obtain
your public IP (`curl ifconfig.co`) and use it to access from the SIM900.
|
Django - aggregate queryset by week of year
Question: I have model looking like this:
class TestData(models.Model):
name = models.CharField(max_length=255)
one_value = models.IntegerField()
second_value = models.IntegerField()
create_at = models.DateTimeField()
Is any way to easy generate queryset of summed values for every week of year?
Django 1.9, Python 3
Answer:
from django.db.models import Func, F, Sum
class Week(Func):
def as_mysql(self, compiler, connection):
self.function = 'WEEK'
return super().as_sql(compiler, connection)
data = (TestData.objects
.filter(create_at__year=year)
.annotate(week=Week('create_at'))
.values('week')
.annotate(Sum('one_value')))
Or `Sum(F('one_value') + F('second_value'))`, depending on what sum you want
to get.
|
Python (pygame) using the sprites and classes to make clones of images
Question:
import pygame
import time
import random #Loads pygame and clock and random function
pygame.init() #Intiates pygame
display_width = 1440
display_height = 900
gameDisplay = pygame.display.set_mode((display_width,display_height))
clock = pygame.time.Clock() #Starts auto clock updater
AstImg = pygame.image.load('Images\Ast.gif') #Asteroid image
def asteroid(x,y): #Function for asteroid display
gameDisplay.blit(AstImg,(x,y))
def game():
background_image = pygame.image.load("Images/Space.jpg").convert()
e = 1
x = 2500
y = 2500
ax = 2500
ay = 2500
bx = 2500
by = 2500
cx = 2500
cy = 2500
dx = 2500
dy = 2500
while e == 1:
gameDisplay.blit(background_image, [0, 0])
asteroid(x,y)
asteroid(ax,ay)
asteroid(bx,by)
asteroid(cx,cy)
asteroid(dx,dy)
if x == 2500:
x = display_width
y = random.randrange(60,display_height - 60)
x += -2.5
if ax == 2500:
ax = display_width
ay = random.randrange(60,display_height - 60)
ax += -2.5
if bx == 2500:
bx = display_width
by = random.randrange(60,display_height - 60)
bx += -2.5
if cx == 2500:
cx = display_width
cy = random.randrange(60,display_height - 60)
cx += -2.5
if dx == 2500:
dx = display_width
dy = random.randrange(60,display_height - 60)
dx += -2.5
pygame.display.update()
clock.tick(120) #FPS
game()
I am trying to make this pygame code more efficient using sprites can someone
show me how to do this so i can spawn more asteroids I will eventually need
quite a few asteroids at once and this wont quite work well for it thanks
Answer: It looks like you're going to want to use classes to make your code more
compact. Essentially, a class is a collection of variables and functions that
may be used to define an object, such as an asteroid. Here is an example of a
very simple asteroid class:
class Asteroid(pygame.sprite.Sprite):
def __init__(self,x,y,image):
pygame.sprite.Sprite.__init__(self)
self.image = image
self.rect.x = x
self.rect.y = y
def update(self):
# insert movement code here
Now, mutliple instances of the asteroid class may be created. This would be
done like so:
ast1 = Asteroid(given_x,given_y,AstImg)
ast2 = Asteroid(given_x,given_y,AstImg)
ast3 = Asteroid(given_x,given_y,AstImg)
ast4 = Asteroid(given_x,given_y,AstImg)
ast5 = Asteroid(given_x,given_y,AstImg)
Even better would be to make a `for` loop which would create however many
asteroids you'd like and even randomize starting x and y values:
spriteList = pygame.sprite.Group()
for i in range(12):
ast = Asteroid(random.randrange(1,1440),random.randrange(1,900),AstImg)
ast.add(spriteList) # then in your while loop write spriteList.draw(gameDisplay)
# and spriteList.update()
I recommend looking further into Pygame sprite classes and how they work.
[Here](http://programarcadegames.com/index.php?chapter=introduction_to_sprites)
is a link a that may help.
|
HandlerSSHTunnelForwarderError with SSHTunnelForwarder
Question: I am trying to connect to my remote postgres db as follows:
from sshtunnel import SSHTunnelForwarder #Run pip install sshtunnel
from sqlalchemy.orm import sessionmaker #Run pip install sqlalchemy
with SSHTunnelForwarder(
('10.160.1.24', 22), #Remote server IP and SSH port
ssh_username = "<usr>",
ssh_password = "<pwd>",
remote_bind_address=('10.160.1.24', 5432),
local_bind_address=('127.0.0.1', 3334)
) as server:
server.start() #start ssh sever
print 'Server connected via SSH'
#connect to PostgreSQL
local_port = str(server.local_bind_port)
engine = create_engine('postgresql://<db_user>:<db_pwd>@127.0.0.1:' + local_port +'/<db_name>')
Session = sessionmaker(bind=engine)
session = Session()
print 'Database session created'
#test data retrieval
test = session.execute("SELECT * FROM <table_name>")
This is the output that I see:
File "/Library/Python/2.7/site-packages/sshtunnel.py", line 299, in handle
raise HandlerSSHTunnelForwarderError(msg)
HandlerSSHTunnelForwarderError: In #1 <-- ('127.0.0.1', 54265) to ('10.160.1.24', 5432) failed: ChannelException(2, 'Connect failed')
Any idea what I am doing wrong? I am able to connect to the postgresdb by
running the command `ssh -L 3334:localhost:5432 [email protected]` in
a seperate terminal and then connecting to the db at localhost:3334.
Answer: Found my mistake!
`remote_bind_address=('10.160.1.24', 5432)` should be
`remote_bind_address=('127.0.0.1', 5432),`
|
Why can't I see all stats for object from Facebook Graph API
Question: I'm using the [Python SDK for Facebook's Graph
API](https://github.com/mobolic/facebook-sdk) to fetch how many times a
Facebook page has been liked. I went to the [API
Explorer](https://developers.facebook.com/tools/explorer/) to obtain an access
token. The first time I chose the "Graph API Explorer" from the drop-down menu
for the Application (top-right). I then ran this code and got back what I
expected:
import facebook
ACCESS_TOKEN = "**********"
facebook_page_id = "168926019255" # https://www.facebook.com/seriouseats/
graph = facebook.GraphAPI(ACCESS_TOKEN)
page = graph.get_object(facebook_page_id)
print page
{u'about': u'The Destination for Delicious',
u'can_post': True,
u'category': u'Website',
u'checkins': 0,
u'cover': {u'cover_id': u'10154881161274256',
u'id': u'10154881161274256',
u'offset_x': 0,
u'offset_y': 43,
u'source': u'https://scontent.xx.fbcdn.net/t31.0-0/p180x540/13391436_10154881161274256_2605145572103420621_o.jpg'},
u'founded': u'December 2006',
u'has_added_app': False,
u'id': u'168926019255',
u'is_community_page': False,
u'is_published': True,
u'likes': 159050,
u'link': u'https://www.facebook.com/seriouseats/',
u'mission': u'Serious Eats is a site focused on celebrating and sharing food enthusiasm through recipes, dining guides, and more! Our team of expert editors and contributors are the last word on all that\u2019s delicious.',
u'name': u'Serious Eats',
u'parking': {u'lot': 0, u'street': 0, u'valet': 0},
u'talking_about_count': 3309,
u'username': u'seriouseats',
u'website': u'http://www.seriouseats.com',
u'were_here_count': 0}
I then went back to the API Explorer and changed the Application to my new
Facebook app that I created recently. I generated a new Access Token, swapped
it out, and ran the code above. This is the response I get back in the `page`
variable:
{u'id': u'168926019255', u'name': u'Serious Eats'}
As you can see, it only returns the `id` and the `name` of the page but the
other attributes -- specifically the `likes` attribute -- are missing.
So, **do I need to give my application permissions to see all attributes for
an object?** I've tried generating an Access Token from my App Id & App Secret
but still get the same results.
Answer: There are two things to look here.
1. Version of facebook API. In your first example when you got lots of result, you are using **version 2.2** (that is the default version of facebook python sdk). When you went and created new app in facebook, it has most likely used **version 2.6** as default. Therefore, it now only returns two to three fields and the rest you need to ask for.
2. Assuming you are indeed using version 2.6, how you can ask for is
to use the following code
page = graph.get_object(id='168926019255', fields='about, affiliation, awards, category')
This will give you
{'id': '168926019255', 'about': 'The Destination for Delicious', 'category': 'Website'}
Now you want to get the likes. Since likes are not a default field but an
"edge", you need to ask them using "connection". To do this, you can do the
following:
page = graph.get_connections(id='168926019255', connection_name='likes')
This will now give you all the likes
{'data': [{'id': '134049266672525', 'name': 'Tom Colicchio'}, {'id': '143533645671591', 'name': 'Hearth'}, {'id': '57909700259', 'name': 'Toro'}, ....
|
python - how to write empty tree node as empty string to xml file
Question: I want to remove elements of a certain tag value and then write out the `.xml`
file WITHOUT any tags for those deleted elements; is my only option to create
a new tree?
There are two options to remove/delete an element:
>
> [clear()](https://docs.python.org/3/library/xml.etree.elementtree.html#xml.etree.ElementTree.Element.clear)
> Resets an element. This function removes all subelements, clears all
> attributes, and sets the text and tail attributes to None.
At first I used this and it works for the purpose of removing the **data**
from the element but I'm still left with an empty element:
# Remove all elements from the tree that are NOT "job" or "make" or "build" elements
log = open("debug.log", "w")
for el in root.iter(*):
if el.tag != "job" and el.tag != "make" and el.tag != "build":
print("removed = ", el.tag, el.attrib, file=log)
el.clear()
else:
print("NOT", el.tag, el.attrib, file=log)
log.close()
tree.write("make_and_job_tree.xml", short_empty_elements=False)
The problem is that `xml.etree.ElementTree.ElementTree.write()` [still writes
out empty tags no matter
what:](https://docs.python.org/3/library/xml.etree.elementtree.html#xml.etree.ElementTree.ElementTree.write)
> ...The keyword-only short_empty_elements parameter controls the formatting
> of elements that contain no content. If True (the default), they are emitted
> as a **single self-closed tag** , otherwise they are emitted as a **pair of
> start/end tags**.
Why isn't there an option to just not print out those empty tags! Whatever.
So then I thought I might try
>
> [remove(subelement)](https://docs.python.org/3/library/xml.etree.elementtree.html#xml.etree.ElementTree.Element.remove)
> Removes subelement from the element. Unlike the find* methods this method
> compares elements based on the instance identity, not on tag value or
> contents.
But this only operates on the child elements.
So I'd have to [do something
like](https://docs.python.org/3/library/xml.etree.elementtree.html#parsing-
xml):
for el in root.iter(*):
for subel in el:
if subel.tag != "make" and subel.tag != "job" and subel.tag != "build":
el.remove(subel)
But there's a big problem here: I'm invalidating the iterator by removing
elements, right?
Is it enough to simply check if the element is empty by adding `if subel`?:
if subel and subel.tag != "make" and subel.tag != "job" and subel.tag != "build"
Or do I have to get a new iterator to the tree elements every time I
invalidate it?
Remember: I just wanted to write out the xml file with no tags for the empty
elements.
Here's an example.
<?xml version="1.0"?>
<data>
<country name="Liechtenstein">
<rank>1</rank>
<year>2008</year>
<gdppc>141100</gdppc>
<neighbor name="Austria" direction="E"/>
<neighbor name="Switzerland" direction="W"/>
</country>
<country name="Singapore">
<rank>4</rank>
<year>2011</year>
<gdppc>59900</gdppc>
<neighbor name="Malaysia" direction="N"/>
</country>
<country name="Panama">
<rank>68</rank>
<year>2011</year>
<gdppc>13600</gdppc>
<neighbor name="Costa Rica" direction="W"/>
<neighbor name="Colombia" direction="E"/>
</country>
</data>
Let's say I want to remove any mention of `neighbor`. Ideally, I'd want this
output after the removal:
<?xml version="1.0"?>
<data>
<country name="Liechtenstein">
<rank>1</rank>
<year>2008</year>
<gdppc>141100</gdppc>
</country>
<country name="Singapore">
<rank>4</rank>
<year>2011</year>
<gdppc>59900</gdppc>
</country>
<country name="Panama">
<rank>68</rank>
<year>2011</year>
<gdppc>13600</gdppc>
</country>
</data>
Problem, is when I run the code using clear() (see first code block up above)
and write it to a file, I get this:
<data>
<country name="Liechtenstein">
<rank>1</rank>
<year>2008</year>
<gdppc>141100</gdppc>
<neighbor></neighbor><neighbor></neighbor></country>
<country name="Singapore">
<rank>4</rank>
<year>2011</year>
<gdppc>59900</gdppc>
<neighbor></neighbor></country>
<country name="Panama">
<rank>68</rank>
<year>2011</year>
<gdppc>13600</gdppc>
<neighbor></neighbor><neighbor></neighbor></country>
</data>
Notice `neighbor` still appears.
I know I could easily run a regex over the output but there's gotta be a way
(or another Python api) that does this on the fly instead of requiring me to
touch my `.xml` file again.
Answer:
import lxml.etree as et
xml = et.parse("test.xml")
for node in xml.xpath("//neighbor"):
node.getparent().remove(node)
xml.write("out.xml",encoding="utf-8",xml_declaration=True)
Using elementTree, we need to find the _`parents of the neighbor nodes`_ then
find the _`neighbor nodes inside that parent`_ and remove them:
from xml.etree import ElementTree as et
xml = et.parse("test.xml")
for parent in xml.getroot().findall(".//neighbor/.."):
for child in parent.findall("./neighbor"):
parent.remove(child)
xml.write("out.xml",encoding="utf-8",xml_declaration=True)
Both will give you:
<?xml version='1.0' encoding='utf-8'?>
<data>
<country name="Liechtenstein">
<rank>1</rank>
<year>2008</year>
<gdppc>141100</gdppc>
</country>
<country name="Singapore">
<rank>4</rank>
<year>2011</year>
<gdppc>59900</gdppc>
</country>
<country name="Panama">
<rank>68</rank>
<year>2011</year>
<gdppc>13600</gdppc>
</country>
</data>
Using your attribute logic and modifying the xml a bit like below:
x = """<?xml version="1.0"?>
<data>
<country name="Liechtenstein">
<rank>1</rank>
<year>2008</year>
<gdppc>141100</gdppc>
<neighbor name="Austria" direction="E"/>
<neighbor name="Switzerland" direction="W"/>
</country>
<country name="Singapore">
<rank>4</rank>
<year>2011</year>
<gdppc>59900</gdppc>
<neighbor name="Costa Rica" direction="W" make="foo" build="bar" job="blah"/>
<neighbor name="Malaysia" direction="N"/>
</country>
<country name="Panama">
<rank>68</rank>
<year>2011</year>
<gdppc>13600</gdppc>
<neighbor name="Costa Rica" direction="W" make="foo" build="bar" job="blah"/>
<neighbor name="Colombia" direction="E"/>
</country>
</data>"""
Using lxml:
import lxml.etree as et
xml = et.fromstring(x)
for node in xml.xpath("//neighbor[not(@make) and not(@job) and not(@make)]"):
node.getparent().remove(node)
print(et.tostring(xml))
Would give you:
<data>
<country name="Liechtenstein">
<rank>1</rank>
<year>2008</year>
<gdppc>141100</gdppc>
</country>
<country name="Singapore">
<rank>4</rank>
<year>2011</year>
<gdppc>59900</gdppc>
<neighbor name="Costa Rica" direction="W" make="foo" build="bar" job="blah"/>
</country>
<country name="Panama">
<rank>68</rank>
<year>2011</year>
<gdppc>13600</gdppc>
<neighbor name="Costa Rica" direction="W" make="foo" build="bar" job="blah"/>
</country>
</data>
The same logic in ElementTree:
from xml.etree import ElementTree as et
xml = et.parse("test.xml").getroot()
atts = {"build", "job", "make"}
for parent in xml.findall(".//neighbor/.."):
for child in parent.findall(".//neighbor")[:]:
if not atts.issubset(child.attrib):
parent.remove(child)
If you are using iter:
from xml.etree import ElementTree as et
xml = et.parse("test.xml")
for parent in xml.getroot().iter("*"):
parent[:] = (child for child in parent if child.tag != "neighbor")
You can see we get the exact same output:
In [30]: !cat /home/padraic/untitled6/test.xml
<?xml version="1.0"?>
<data>
<country name="Liechtenstein">#
<neighbor name="Austria" direction="E"/>
<rank>1</rank>
<neighbor name="Austria" direction="E"/>
<year>2008</year>
<neighbor name="Austria" direction="E"/>
<gdppc>141100</gdppc>
<neighbor name="Austria" direction="E"/>
<neighbor name="Switzerland" direction="W"/>
</country>
<country name="Singapore">
<rank>4</rank>
<year>2011</year>
<gdppc>59900</gdppc>
<neighbor name="Malaysia" direction="N"/>
</country>
<country name="Panama">
<rank>68</rank>
<year>2011</year>
<gdppc>13600</gdppc>
<neighbor name="Costa Rica" direction="W"/>
<neighbor name="Colombia" direction="E"/>
</country>
</data>
In [31]: paste
def test():
import lxml.etree as et
xml = et.parse("/home/padraic/untitled6/test.xml")
for node in xml.xpath("//neighbor"):
node.getparent().remove(node)
a = et.tostring(xml)
from xml.etree import ElementTree as et
xml = et.parse("/home/padraic/untitled6/test.xml")
for parent in xml.getroot().iter("*"):
parent[:] = (child for child in parent if child.tag != "neighbor")
b = et.tostring(xml.getroot())
assert a == b
## -- End pasted text --
In [32]: test()
|
How to increment a variable contained within a class in python from outside of its class?
Question: I have a fairly simple python question, as I am pretty new to the language. I
started writing a quick program just for practice, but have now become
frustrated because I cannot get it to work.
import random
import sys
class Meta:
turncounter = 1
class Enemy:
life = 10
wis = 1
str = 3
def heal(self):
healscore = self.wis + random.randrange(1, 7, 1)
self.life += healscore
print "Enemy healed for " + str(healscore) + ".\n"
self.checklife()
Meta.turncounter += 1
def attack(self, player):
damage = self.str + random.randrange(1, 5, 1)
player.life -= damage
print "You took " + str(damage) + " damage.\n"
Player.checklife(player)
Meta.turncounter += 1
def checklife(self):
if self.life <= 0:
print "The enemy is dead.\n"
sys.exit(0)
else:
print "Enemy's HP: " + str(self.life) + ".\n"
class Player:
life = 50
wis = 3
str = 5
def heal(self):
healscore = self.wis + random.randrange(1, 7, 1)
self.life += healscore
print "You healed for " + str(healscore) + ".\n"
Meta.turncounter += 1
def attack(self, enemy):
damage = self.str + random.randrange(1, 5, 1)
enemy.life -= damage
print "You did " + str(damage) + " damage.\n"
Enemy.checklife(enemy)
Meta.turncounter += 1
def checklife(self):
if self.life <= 0:
sys.exit("You died!")
else:
print "HP: " + str(self.life) + ".\n"
paladin = Player()
hollow = Enemy()
turnmeta = Meta.turncounter % 2
move = random.randrange(1, 3, 1)
print turnmeta
print move
while turnmeta == 0:
if move == 1 and paladin.life <= 10:
paladin.heal()
print turnmeta
elif move != 0 or (move == 1 and hollow.life > 15):
paladin.attack(hollow)
print turnmeta
while turnmeta > 0:
if move == 1 and hollow.life <= 15:
print turnmeta
elif move != 0 or (move == 1 and hollow.life > 15):
hollow.attack(paladin)
print turnmeta
As you can see, this program isn't particularly complex; it is just meant to
be something to generally understand python syntax and loops and such. For
some reason, whenever I run the program, instead of the turncounter
incrementing and the paladin / hollow having a back and forth, the turncounter
stays locked in at 1, causing the hollow to attack until the paladin dies,
instantly ending the program.
Answer: The problem is your while-loop is relying on `turnmeta`, which doesn't change
when you increment `Meta.turncounter` in your class methods.
Notice:
>>> class Meta(object):
... turncounter = 0
...
>>> turnmeta = Meta.turncounter
>>> turnmeta
0
>>> Meta.turncounter += 1
>>> turnmeta
0
>>> Meta.turncounter
1
Just use `Meta.turncounter`.
That being said, your design, which relies heavily on class attributes, is not
good design, and skimming over your code I don't think you are doing what you
think you are doing. Python class definitions are different from Java.
You need to define instance attributes inside of an `__init__` method (or any
other method) using `self.attribute`, and not in the class namespace, as you
have done in your class definitions.
Read the docs: <https://docs.python.org/3.5/tutorial/classes.html>
|
Can't find a constant-time module in cryptography package used on AWS Lambda
Question: _[I am new to Python 2.7 and AWS Lambda, any help is appreciated]_
I followed the [AWS Lambda
tutorial](https://aws.amazon.com/blogs/compute/scheduling-ssh-jobs-using-aws-
lambda/) and created a virtualenv to include Python libs associated with the
use of paramiko to copy a file to an SFTP server as a scheduled task on AWS
Lambda to run the following script:
import paramiko
def worker_handler(event, context):
host = "sftpserver.testdpom.com"
port = 22
transport = paramiko.Transport((host, port))
sftp = paramiko.SFTPClient.from_transport(transport)
username = "xxxx"
password = "xxxxxx"
transport.connect(username = username, password = password)
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.put("test.txt", "test.txt")
sftp.close()
transport.close()
return
{
'message' : "Script execution completed. See Cloudwatch logs for complete output"
}
The python script works correctly on my local machine but when I test the
package on AWS Lambda, I get the error "ImportError: No module named
_constant_time" and stack trace below.
**Can you think of any possible reason for this error in AWS Lambda
environment?**
File "/var/task/paramiko/kex_group1.py", line 111, in _parse_kexdh_reply
self.transport._verify_key(host_key, sig)
File "/var/task/paramiko/transport.py", line 1617, in _verify_key
key = self._key_info[self.host_key_type](Message(host_key))
File "/var/task/paramiko/rsakey.py", line 58, in __init__
).public_key(default_backend())
File "/var/task/cryptography/hazmat/backends/__init__.py", line 35, in default_backend
_default_backend = MultiBackend(_available_backends())
File "/var/task/cryptography/hazmat/backends/__init__.py", line 22, in _available_backends
"cryptography.backends"
File "/var/task/pkg_resources/__init__.py", line 2235, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/var/task/cryptography/hazmat/backends/openssl/__init__.py", line 7, in <module>
from cryptography.hazmat.backends.openssl.backend import backend
File "/var/task/cryptography/hazmat/backends/openssl/backend.py", line 15, in <module>
from cryptography import utils, x509
File "/var/task/cryptography/x509/__init__.py", line 7, in <module>
from cryptography.x509.base import (
File "/var/task/cryptography/x509/base.py", line 15, in <module>
from cryptography.x509.extensions import Extension, ExtensionType
File "/var/task/cryptography/x509/extensions.py", line 19, in <module>
from cryptography.hazmat.primitives import constant_time, serialization
File "/var/task/cryptography/hazmat/primitives/constant_time.py", line 9, in <module>
from cryptography.hazmat.bindings._constant_time import lib
ImportError: No module named _constant_time
Answer: Since lambda runs under the hood on amazon linux instances, you basically need
to:
1. spin up an amazon linux ec2 instance
2. create a virtualenv and `pip install` all packages you need
3. `scp` the files down to wherever your local deployment package lives
This all happens due to issues with how `pip install` does things differently
depending on whether you're on linux or mac (and I'm assuming windows as
well).
### here's a startup script to get the ec2 instance up to speed afaik
#!/bin/bash
sudo yum upgrade -y
sudo yum group install -y "Development tools"
sudo yum install -y \
python27 \
libffi libffi-devel \
openssl openssl-devel
virtualenv venv
source venv/bin/activate
pip install paramiko
The `paramiko` package will be in `/path/to/venv/lib/python2.7/site-
packages/paramiko` and the `cryptography` stuff will be in
`path/to/venv/lib64/python2.7/cryptography`.
I've been using a combination of `pip install` on my local mac and doing this
when a package doesn't work (like for `paramiko` and `psycopg2`), and there
are a few other helpful packages that people have pre-compiled and put up on
github elsewhere specifically for lambda.
HTH!
|
ImportError: cannot import name corpora with Gensim
Question: I have installed Anacoda Python v2.7 and Gensim v 0.13.0
I am using Spyder as IDE
I have the following simple code:
from gensim import corpora
* * *
I got the following error:
from gensim import corpora
File "gensim.py", line 7, in <module>
ImportError: cannot import name corpora
I reinstalled: \- Gensim \- Scipy \- Numpy but still have the same issue.
Answer: You might want to refer to this [issue](https://github.com/RaRe-
Technologies/gensim/issues/198). Apparently, Anaconda behaves weirdly:
bundling a different version of Numpy at runtime or something. I recommend
using `pip` to install Gensim. Or `easy_install` Here's a
[link](https://radimrehurek.com/gensim/install.html) to help you install it
properly.
|
Unable to read HTML content
Question: I'm building a webCrawler which needs to read links inside a webpage. For
which I'm using urllib2 library of python to open and read the websites.
I found a website where I'm unable to fetch any data. The URL is
"<http://www.biography.com/people/michael-jordan-9358066>"
My code,
import urllib2
response = urllib2.urlopen("http://www.biography.com/people/michael-jordan-9358066")
print response.read()
By running the above code, the content I get from the website, if I open it in
a browser and the content I get from the above code is very different. The
content from the above code does not include any data.
I thought it could be because of delay in reading the web page, so I
introduced a delay. Even after the delay, the response is the same.
response = urllib2.urlopen("http://www.biography.com/people/michael-jordan-9358066")
time.sleep(20)
print response.read()
The web page opens perfectly fine in a browser.
However, the above code works fine for reading Wikipedia or some other
websites. I'm unable to find the reason behind this odd behaviour. Please
help, thanks in advance.
Answer: What you are experiencing is most likely to be the effect of [dynamic web
pages](https://en.wikipedia.org/wiki/Dynamic_web_page). These pages do not
have static content for `urllib` or `requests` to get. The data is loaded on
site. You can use Python's [`selenium`](http://selenium-
python.readthedocs.io/) to solve this.
|
Haar- Cascade face detection OpenCv
Question: I used the following code to detect a face using Haar cascade classifiers
provided by OpenCv Python. But the faces are not detected and the square
around the face is not drawn. How to solve this?
import cv2
index=raw_input("Enter the index No. : ")
cascPath = "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascPath)
cap = cv2.VideoCapture(0)
cont=0
while(True):
# Capture frame-by-frame
ret, frame = cap.read()
# Our operations on the frame come here
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = faceCascade.detectMultiScale(
gray,
scaleFactor=1.1,
minNeighbors=10,
minSize=(30, 30),
flags = cv2.cv.CV_HAAR_SCALE_IMAGE
)
for (x, y, w, h) in faces:
#cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
# Display the resulting frame
cv2.imshow('frame',frame)
inpt=cv2.waitKey(1)
if inpt & 0xFF == ord('q'):
break
elif inpt & 0xFF == ord('s') :
#name='G:\XCODRA\Integrated_v_01\EigenFaceRecognizer\img2'+index+"."+(str(cont))+".png"
name='IC_image\\'+index+"."+(str(cont))+".png"
resized = cv2.resize(gray,None,fx=200, fy=200, interpolation = cv2.INTER_AREA)
img=cv2.equalizeHist(resized)
cv2.imwrite(name,img)
print cont
cont+=1
Answer: Use the full path for the classifier.
|
Python translating C saxpy
Question: This is the C code:
btemp = (*beta)/(*beta_prev);
for (k=0; k<xsize*ysize; k++) {
parray[k] = zarray[k] + btemp*parray[k];
}
And I am doing the following in Python:
def saxpy(a, x, y):
return np.array([a * xi + yi for xi, yi in zip(x, y)], np.float32)
#...
btemp = beta / beta_prev
ptemp = saxpy(btemp, parray, zarray)
parray[:] = ptemp
In my code, it seems to work fine (`zarray` and `parray` are changing
constantly because they are inside a while loop).
But then I do:
btemp = beta / beta_prev
parray = saxpy(btemp, parray, zarray)
My code fail after iterating a couple of time in the loop, are not they the
same?
Answer: You changed `parray[:] =` to just `parray =`, which is not equivalent. The
former assigns the content of `parray` on an elementwise basis, which is an
important distinction if `parray` is not of the same type as the right hand
side of the assignment.
Consider the two cases:
>>> xs = [1, 2, 3, 4]
>>> xs[:] = tuple(2 * x for x in xs)
>>> xs
[2, 4, 6, 8]
>>> type(xs)
<class 'list'>
>>> xs = [1, 2, 3, 4]
>>> xs = tuple(2 * x for x in xs)
>>> xs
(2, 4, 6, 8)
>>> type(xs)
<class 'tuple'>
|
python replace not working
Question: I am trying to do multiple replaces in python but the replace is not working,
it only replaces the `<UNK>` but not `</s>`. Can anybody tell me where the
error is?
text=text.replace(":<UNK>","")
text=text.replace("</s>","")
Answer: Your code worked correctly but You can use regular expression to find and
replace the text.
import re
text = '1.595879e-04(Kan) 7.098440e-08(Şekerini:<UNK>) 2.558586e-06(Etkileyen) 7.671361e-07(Besinler) 3.731427e-02(</s>) (ailehekimligi-0000000001)'
output = re.sub(r':<UNK>', '', text)
output = re.sub(r':</s>', '', text)
print(output)
also if you have unicode string, you can use `u''` before text and your
replace statement.
|
Using type hints to translate Python to Cython
Question: Type Hints now are available in Python 3.5 version. In the specification ([PEP
484](https://www.python.org/dev/peps/pep-0484/)) the goals (and the non-goals)
are exposed clearly:
> # Rationale and Goals
>
>> This PEP aims to provide a standard syntax for type annotations, opening up
Python code to easier static analysis and refactoring, potential runtime type
checking, and (perhaps, in some contexts) code generation utilizing type
information. [...]
>>
>> Of these goals, static analysis is the most important.
>
> # Non-goals
>
>> Using type hints for **performance optimizations** is left as an exercise
for the reader.
On the other hand, Cython has been using for a long time static syntax to
improve performance. Usually, people rewrite some pieces of their code with
Cython syntax, compile them, and then import them back as independent modules.
It's a painful job do all that on a large code base. But the worst part is
that even when you follow correctly all the above steps, you don't have any
guarantee that you'll have a real improvement (because of compatibility
problems that might be caused if you are using some modules).
Would be a difficult task write a tool that **uses this new type hints**
things scattered in the code to **automatically translate them to Cython
syntax** and possibly do the rest of the job (compile them into modules and
import all them back)? It would be possible, therefore, to share the same
language syntax in all the code base.
Theoretically, it's possible to write a tool like that, but I'm not sure if be
worth (in terms of complexity to write it and the real improvement that would
be yield).
Thanks.
Answer: Someone else just asked about 484 and Cython, [PEP-484 Type Annotations with
own types](http://stackoverflow.com/questions/38005633/pep-484-type-
annotations-with-own-types), and I responded with a thread from a couple of
months back about 484 and numpy.
I have doubts about the suitability of this topic for Stackoverflow. It's a
research topic,not a 'how do I solve this coding problem' question.
Based on limited reading, the type-hints in 484 are preliminary, and any use
is limited to the code checker developed by the 484 authors. Py3 has had
annotations for a long time, but I've seen very few examples of code that
includes them. Certainly not in the `numpy` code that I focus on here.
Another point is that `cython` and `numpy` (and `numba`) are used with Py2
just as much, if not more, than Py3. So the latest bells-n-whistles in Py 3.5
are generally ignored by these users. The `@` operator is the only recent
addition that `numpy` users value.
You are welcome respond, but I may nominate this question for closure based on
it being a duplicate or off topic.
The `typing` module is being developed at <https://github.com/python/typing>
`mypy` is the type checker based on 484, <https://github.com/python/mypy>
(funny, `~/mypy` is the directory where I put all my SO testing scripts.)
That's where cutting edge Python type checking work is being done, not here.
|
Python anywhere modules not accessible outside main directory
Question: In my attempts to create a web app with python anywhere I have discovered that
my preferred module web.py is not preinstalled like other modules such as
flask. Upon looking through some forums I came to the understanding that
installation would occur in the following fashion in the hash console:
pip install --user web.py
It was however to my surprise that apparently:
Requirement already satisfied (use --upgrade to upgrade): web.py in /usr/local/lib/python2.7/dist-packages
Upon running the a python 2.7 shell in the main directory (if that is what is
actually happening when clicking "New 2.7 Shell") I successfully imported
'web', however when running an identical 'import web' outside of the main
directory in /site/run.py I was unsuccesful... Might someone inform me as to
what is necessary to correct this problem?
Answer: I apologize for my stupidity... It turns out that pythonanywhere defaults to
python 3.x and seeing as web.py is a 2.x module, it was unsuccessful. I can
only hope that perhaps this post will help some equally unassuming individual
such as myself in the future.
|
Accessing website with urllib returns error, retrieving information from Results Page
Question: Hello I created a code in python so I could access a reverse phone lookup site
and determine if a phone is a cell phone or land line. The website I am using
is whitepages, whose results page will only include the phrase "VoIP" if the
phone is a land line (which I have determined after looking at many results).
However, I am getting an error at the website accessing stage. So far my code
looks like:
import urllib
def Phone_Checker(number):
url = 'http://www.whitepages.com/reverse_phone'
enter = {'e.g. 206-867-5309': number}
door= urllib.parse.urlencode(enter)
open=door.encode('UTF-8')
fight= urllib.request.urlopen(url, open)
d = fight.read()
v="VoIP"
vv=v.encode("UTF-8")
if vv in d: #if VoIP it is landline
return False
else:
return True
I changed my strings into bytes because it was required for my variable "open"
to be in bytes for urlopen. In a version of the code I made to access a
different site it required a few other string conversion into bytes but I
cannot quite remember which information required this conversion (just a heads
up if the code after introducing the variable fight looks incorrect because I
have not been able to debug the code which follows because of my difficulty
with my urlopen. Whenever I run my code I receive this error
File "C:\Users\aa364\Anaconda3\lib\urllib\request.py", line 589, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
HTTPError: Requested Range Not Satisfiable
I was wondering how I could circumvent this error and if there is any possible
alternative to creating a program to verify if a phone is mobile or a landline
for DOMESTIC (USA) phone numbers. Thank you in advance!
Answer: Based on the stuff I'm reading and experimenting with to try to find an answer
on this, I think this is likely whitepages' doing. I have 3 reasons:
1. the error seems to be a result of whitepages only accepting requests from certain browsers ('User-Agents')
2. Upon changing the 'User-Agent' I get kicked to robots.txt (which is basically a response meaning "don't automate this")
3. Both of these things are likely the result of whitepages having a paid/premium-access API: obviously, they'll do whatever they can to stop people from accessing their information for free if they're trying to charge for it
So, I think the answer in this case is, unfortunately, find another
phonenumber lookup.
|
Python: many-to-many comparison to find required set of data
Question: This is my first question so please forgive any mistakes.
I have a large file(csv) with several(~10000000+) lines of information like
the following example:
date;box_id;box_length;box_width;box_height;weight;type
--snip--
1999-01-01 00:00:20;nx1124;10;4;5.5;2.1;oversea
1999-01-01 00:00:20;np11r4;8;3.25;2;4.666;local
--snip--
My objective is to read through each line and calculate the box's volume and
within 1 hour window(for example, 00:00:00 - 00:00:59) I have to record if 2
or more boxes are of similar volume (+-10% difference) and then record their
timestamp as well as type.
At the moment, I am using a brute-force approach:
* run through each line
* calculate volume
* go to next line and compute volume
* compare
* repeat till 1 hr time-difference is detected
* remove the first box from list
* add another box to the list
* repeat the process with second box
For example, if my 1 hour window has 1,2,3,4, I'm doing this
1
2 == 1
3 == 1 then == 2
4 == 1 then == 2 then == 3
5 == 2 then == 3 then == 4 # removed 1 from list(1hr window moved down)
6 == 2 then == 3 then == 4 then == 5
7 == 2 then == 3 then == 4 then == 5 then == 6
.... so on ....
This is the best I can think of since I have to compare each and every box
with others within a given time-window. But this is very very slow at the
moment.
**I am looking for a better algorithm but I am unsure as to which direction I
must go.** I am trying to learn some excellent tools(so far Pandas is my
favorite) but I am under the assumption that I need to implement some
algorithm first to allow these tools to deal with the data in the way I need
to.
If it helps I will post my python code(source).
**Update** Following are my code. I have omitted several lines(such as
try/catch block for invalid file path/format, type conversion error handling
etc). I have customized the code a bit to work for 5second window.
Following is the Box class
from datetime import datetime
from time import mktime
class Box(object):
""" Box model """
def __init__(self,data_set):
self.date = data_set[0]
self.timestamp = self.__get_time()
self.id = data_set[1]
self.length = float(data_set[2])
self.width = float(data_set[3])
self.height = float(data_set[4])
self.weight = int(data_set[5])
self.volume = self.__get_volume()
def __get_time(self):
""" convert from date string to unix-timestamp """
str_format = '%Y-%m-%d %H:%M:%S'
t_tuple = datetime.strptime(self.date, str_format).timetuple()
return mktime(t_tuple)
def __get_volume(self):
""" calculate volume of the box """
return (self.length * self.width * self.height)
Following is the actual program performing the comparison. I combined by
utility file and main.py file together for convenience.
from csv import reader
from io import open as open_file
from os import path
from sys import argv, exit
from time import time
# custom lib
from Box import Box
def main():
file_name = str.strip(argv[1])
boxes_5s = []
diff = 0
similar_boxes = []
for row in get_file(file_name):
if row:
box = Box(row)
if len(boxes_5s) > 0:
diff = box.timestamp - boxes_5s[0].timestamp
if diff < 6:
boxes_5s.append(box)
else:
similar_boxes += get_similar(boxes_5s)
del boxes_5s[0] # remove the oldest box
boxes_5s.append(box)
else:
boxes_5s.append(box)
print(similar_boxes)
def get_file(file_name):
""" open and return csv file pointer line by line """
with open_file(file_name,'rb') as f:
header = f.readline()
print(header)
rows = reader(f, delimiter=';')
for r in rows:
yield r
else:
yield ''
def get_similar(box_list):
""" compare boxes for similar volume """
num_boxes = len(box_list)
similar_boxes = []
record_str = "Box#{} Volm:{} and #{} Volm:{}"
for i in xrange(num_boxes):
box_1 = box_list[i]
for j in xrange(i+1, num_boxes):
box_2 = box_list[j]
vol_diff = abs((box_1.volume - box_2.volume)/box_1.volume) <= 0.1
if vol_diff: similar_boxes.append(record_str.format(box_1.id,box_1.volume,box_2.id, box_2.volume))
return similar_boxes
if __name__ == "__main__":
main()
Thank you.
Answer: Taking the first timestamp as start of a one hour window (instead of clock
hour bins always staring at hour:00:00) I think a quite feasible
implementation for data amounts as small as a few ten million lines of data
might be (expect time ordered entriesin file):
#! /usr/bin/env python
from __future__ import print_function
import csv
import datetime as dt
import math
import collections
FILE_PATH_IN = './box_data_time_ordered_100k_sparse.csv'
TS_FORMAT = '%Y-%m-%d %H:%M:%S'
TS_TOKEN = 'date'
SIMILAR_ENOUGH = 0.1
BoxEntry = collections.namedtuple(
'BoxEntry', ['start_ts', 'a_ts', 't_type', 'b_volume'])
def box_volume(box_length, box_width, box_height):
"""Volume in cubic of length units given."""
return box_length * box_width * box_height
def filter_similar_box_volumes(box_entries):
"""Ordered binary similarity comparator using complex algorithm
on a medium large slice of data."""
def _key(r):
"""sort on volume."""
return r.b_volume
entries_volume_ordered = sorted(box_entries, key=_key)
collector = []
for n, box_entry in enumerate(entries_volume_ordered[1:], start=1):
one = box_entry.b_volume
prev_box_entry = entries_volume_ordered[n]
previous = prev_box_entry.b_volume
if one and math.fabs(one - previous) / one < SIMILAR_ENOUGH:
if box_entry not in collector:
collector.append(box_entry)
if prev_box_entry not in collector:
collector.append(prev_box_entry)
return collector
def hourly_boxes_gen(file_path):
"""Simplistic generator, yielding hour slices of parsed
box data lines belonging to 1 hour window per yield."""
csv.register_dialect('boxes', delimiter=';', quoting=csv.QUOTE_NONE)
start_ts = None
cx_map = None
hour_data = []
an_hour = dt.timedelta(hours=1)
with open(file_path, 'rt') as f_i:
for row in csv.reader(f_i, 'boxes'):
if cx_map is None and row and row[0] == TS_TOKEN:
cx_map = dict(zip(row, range(len(row))))
continue
if cx_map and row:
a_ts = dt.datetime.strptime(row[cx_map[TS_TOKEN]], TS_FORMAT)
t_type = row[cx_map['type']]
b_length = float(row[cx_map['box_length']])
b_width = float(row[cx_map['box_width']])
b_height = float(row[cx_map['box_height']])
b_volume = box_volume(b_length, b_width, b_height)
if start_ts is None:
start_ts = a_ts
hour_data.append(
BoxEntry(start_ts, a_ts, t_type, b_volume))
elif a_ts - an_hour < start_ts:
hour_data.append(
BoxEntry(start_ts, a_ts, t_type, b_volume))
else:
yield filter_similar_box_volumes(hour_data)
hour_data = [BoxEntry(start_ts, a_ts, t_type, b_volume)]
start_ts = a_ts
if hour_data:
yield filter_similar_box_volumes(hour_data)
def main():
"""Do the thing."""
for box_entries in hourly_boxes_gen(FILE_PATH_IN):
for box_entry in box_entries:
print(box_entry.start_ts, box_entry.a_ts, box_entry.t_type)
if __name__ == '__main__':
main()
With sample input file:
date;box_id;box_length;box_width;box_height;weight;type
1999-01-01 00:00:20;nx1124;10;4;5.5;2.1;oversea
1999-01-01 00:00:20;np11r4;8;3.25;2;4.666;local
1999-01-01 00:10:20;np11r3;8;3.25;2.1;4.665;local
1999-01-01 00:20:20;np11r2;8;3.25;2.05;4.664;local
1999-01-01 00:30:20;np11r1;8;3.23;2;4.663;local
1999-01-01 00:40:20;np11r0;8;3.22;2;4.662;local
1999-01-01 00:50:20;dp11r4;8;3.24;2;4.661;local
1999-01-01 01:00:20;cp11r3;8;3.25;2;4.666;local
1999-01-01 01:01:20;bp11r2;8;3.26;2;4.665;local
1999-01-01 01:02:20;ap11r1;8;3.22;2;4.664;local
1999-01-01 01:03:20;zp11r0;12;3.23;2;4.663;local
1999-01-01 02:00:20;yp11r4;8;3.24;2;4.662;local
1999-01-01 04:00:20;xp11r4;8;3.25;2;4.661;local
1999-01-01 04:00:21;yy11r4;8;3.25;2;4.661;local
1999-01-01 04:00:22;xx11r4;8;3.25;2;4.661;oversea
1999-01-01 04:59:19;zz11r4;8;3.25;2;4.661;local
yields:
1999-01-01 00:00:20 1999-01-01 00:30:20 local
1999-01-01 00:00:20 1999-01-01 00:50:20 local
1999-01-01 00:00:20 1999-01-01 00:00:20 local
1999-01-01 00:00:20 1999-01-01 00:20:20 local
1999-01-01 00:00:20 1999-01-01 00:10:20 local
1999-01-01 00:00:20 1999-01-01 00:00:20 oversea
1999-01-01 00:00:20 1999-01-01 01:00:20 local
1999-01-01 01:00:20 1999-01-01 01:01:20 local
1999-01-01 01:00:20 1999-01-01 01:03:20 local
1999-01-01 04:00:20 1999-01-01 04:00:21 local
1999-01-01 04:00:20 1999-01-01 04:00:22 oversea
1999-01-01 04:00:20 1999-01-01 04:59:19 local
Some notes:
1. csv module used for reading, with a specific dialect (as semicolon is not default delimiter)
2. import datetime with alias, to access datetime class for strptime method without overriding the module name - YMMV
3. encapsulate the chunked hour window reader in a generator function
4. volume and similarity calculation in separate fuctions.
5. volume ordered simple filter algorithm that should be somehow O(m) for m being the number of candidate matches.
6. Use named tuple for compact storage but also meaningful addressing.
7. To implement a clock adjusted 1 hour window (not using the first timestamp to bootstrap), one needs adjust the code a bit, but should be trivial
Otherwise curiously awaiting the code sample from the OP ;-)
**updated** the similar enough filtering algorithm, so that event rich hours,
do not make an O(n^2) algorithm eat all our time ... (the _naive one with a
nested loop removed).
Adding a day full of entries every second to the sample with 3600 candidates
for the similarity check took approx 10 seconds for these approx 100k lines
(86400+).
|
PyQt5 app exits on error where PyQt4 app would not
Question: I have been developing a scientific application using PyQt4 for a couple of
weeks, and decided to switch over to PyQt5. Aside from a few things to iron
out one thing is puzzling me, and I'm not sure if its intended behavior or
not.
When Using PyQt4: if I had a python error (AttributeError, FileNotFoundError
or whatever) the error message would print out to the python console, but I
could continue using the PyQt4 gui application
When Using PyQt5, when I have a python error, the entire app closes on me. Is
this a setting, or is this intended behavior? This is potentially disastrous
as before if there was a bug, I could save the data I had acquired, but now
the application will just close without warning.
Here is an example that demonstrates the behavior. This script opens a widget
with a button that activates a file dialog. If a valid file is selected, the
code will print the filepointer object to the command line. If no file is
selected because the user hits cancel, then that case is not handled and
python tries to open a file with path ''. In this both PyQt4 and PyQt5
versions throw the same python error:
FileNotFoundError: [Errno 2] No such file or directory: ''
However, the PyQt4 version will leave the widget open and the user can
continue, whereas the PyQt5 version closes, with exit code of 1.
Here is the example code, executed by: "python script.py"
import sys
# from PyQt4 import QtGui as qt
# from PyQt4.QtCore import PYQT_VERSION_STR
from PyQt5 import QtWidgets as qt
from PyQt5.QtCore import PYQT_VERSION_STR
def open_a_file():
fname = qt.QFileDialog.getOpenFileName()
if PYQT_VERSION_STR[0] == '4':
f = open(fname, 'r')
print(f)
else:
f = open(fname[0], 'r')
print(f)
f.close()
if __name__ == '__main__':
app = qt.QApplication(sys.argv)
w = qt.QWidget()
w.resize(250, 150)
w.move(300, 300)
w.setWindowTitle('PyQt 4 v 5')
btn = qt.QPushButton("Open a file", w)
btn.clicked.connect(open_a_file)
w.show()
sys.exit(app.exec_())
Can I use PyQt5, but have it not crash the way that the PyQt4 version does?
Here is my current system information system information:
Windows 7 64-bit
Anaconda, Python 3.5
PyQt4 --> from conda sources
PyQt5 --> using:
conda install --channel https://conda.anaconda.org/m-labs qt5
conda install --channel https://conda.anaconda.org/m-labs pyqt5
both PyQt4 and PyQt5 are installed side by side
Answer: The old behavior can be forced by calling this code, which I found after more
searching. ~~I'm not sure I understand why this is bad behavior that needed to
be deprecated, but this does work.~~
I submit that this should not be the default behavior, and that properly
catching exceptions is the correct way to program, but given the specific
purpose of my programming, and my time constraints, I find it useful to have
access to as an optional mode, as I can still see the python exception traces
printed to the console, and won't lose any unsaved data because of an uncaught
exception.
import sys
def my_excepthook(type, value, tback):
# log the exception here
# then call the default handler
sys.__excepthook__(type, value, tback)
sys.excepthook = my_excepthook
|
Buildozer android APK Import Error
Question: Please help.
My kivy program runs perfect on the desktop (Mac OS, using buildozer and
Android-new toolchain).
However once i build the APK and test it on the android Emulator (Andyroid) i
get the following error in the logcat regarding the user class that i import.
Do i need to specify it somewhere in the spec file or something ?
D/HostConnection( 1738): HostConnection::get() New Host Connection established 0xb7f2d2b0, tid 1865
I/python ( 1738): [INFO ] [GL ] OpenGL version <OpenGL ES 2.0>
I/python ( 1738): [INFO ] [GL ] OpenGL vendor <Imagination Technologies>
I/python ( 1738): [INFO ] [GL ] OpenGL renderer <PowerVR SGX 544MP>
I/python ( 1738): [INFO ] [GL ] OpenGL parsed version: 2, 0
I/python ( 1738): [INFO ] [GL ] Texture max size <8192>
I/python ( 1738): [INFO ] [GL ] Texture max units <16>
I/python ( 1738): [INFO ] [Window ] auto add sdl2 input provider
I/python ( 1738): [INFO ] [Window ] virtual keyboard not allowed, single mode, not docked
D/AndroidRuntime( 2115):
D/AndroidRuntime( 2115): >>>>>> AndroidRuntime START com.android.internal.os.RuntimeInit <<<<<<
D/AndroidRuntime( 2115): CheckJNI is OFF
D/dalvikvm( 2115): Trying to load lib libjavacore.so 0x0
D/dalvikvm( 2115): Added shared lib libjavacore.so 0x0
D/dalvikvm( 2115): Trying to load lib libnativehelper.so 0x0
D/dalvikvm( 2115): Added shared lib libnativehelper.so 0x0
D/AndroidRuntime( 2115): Calling main entry com.android.commands.settings.SettingsCmd
D/dalvikvm( 2115): Note: class Landroid/app/ActivityManagerNative; has 157 unimplemented (abstract) methods
D/AndroidRuntime( 2115): Shutting down VM
D/SettingsProvider( 2213): User 0 external modification to /data/data/com.android.providers.settings/databases/settings.db; event=8
D/SettingsProvider( 2213): User 0 updating our caches for /data/data/com.android.providers.settings/databases/settings.db
I/python ( 1738): Traceback (most recent call last):
I/python ( 1738): File "main.py", line 72, in <module>
I/python ( 1738): from user import User
I/python ( 1738): ImportError: cannot import name User
I/python ( 1738): Python for android ended.
I/HostConnection( 1738): ~HostConnection
V/SDL ( 1738): onPause()
E/dalvikvm( 1738): Loading ARM symbol: Java_org_libsdl_app_SDLActivity_nativePause
E/dalvikvm( 1738): Loading ARM symbol: Java_org_libsdl_app_SDLActivity_nativePause__
E/dalvikvm( 1738): Loading ARM symbol: Java_org_libsdl_app_SDLActivity_nativePause
E/dalvikvm( 1738): Loading ARM symbol: Java_org_libsdl_app_SDLActivity_nativePause__
E/dalvikvm( 1738): Loading ARM symbol: Java_org_libsdl_app_SDLActivity_nativePause
E/dalvikvm( 1738): Loading ARM symbol: Java_org_libsdl_app_SDLActivity_nativePause__
E/dalvikvm( 1738): Loading ARM symbol: Java_org_libsdl_app_SDLActivity_nativePause
E/dalvikvm( 1738): Loading ARM symbol: Java_org_libsdl_app_SDLActivity_nativePause__
E/dalvikvm( 1738): Loading ARM symbol: Java_org_libsdl_app_SDLActivity_nativePause
E/dalvikvm( 1738): Loading ARM symbol: Java_org_libsdl_app_SDLActivity_nativePause__
E/dalvikvm( 1738): Loading ARM symbol: Java_org_libsdl_app_SDLActivity_nativePause
E/dalvikvm( 1738): Loading ARM symbol: Java_org_libsdl_app_SDLActivity_nativePause__
E/dalvikvm( 1738): Loading ARM symbol: Java_org_libsdl_app_SDLActivity_nativePause
E/dalvikvm( 1738): Loading ARM symbol: Java_org_libsdl_app_SDLActivity_nativePause__
E/dalvikvm( 1738): Loading ARM symbol: Java_org_libsdl_app_SDLActivity_nativePause
V/SDL ( 1738): nativePause()
F/libc ( 1738): Fatal signal 11 (SIGSEGV) at 0x00000004 (code=1), thread 1738 (rg.test.rides16)
I/DEBUG ( 1315): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
Snippet from main.py
from user import User
from category import Category
from advert import Advert
from attending import Attending
Answer: There is a `user` package in python 2.7, and it overrides your own `user`
package. Try changing the name to something else, like `user_`. It's a minor
bug in the buildozer.
|
Open links from txt file in python
Question: I would like to ask for help with a rss program. What I'm doing is collecting
sites which are containing relevant information for my project and than check
if they have rss feeds. The links are stored in a txt file(one link on each
line). So I have a txt file with full of base urls what are needed to be
checked for rss.
I have found this code which would make my job much easier.
import requests
from bs4 import BeautifulSoup
def get_rss_feed(website_url):
if website_url is None:
print("URL should not be null")
else:
source_code = requests.get(website_url)
plain_text = source_code.text
soup = BeautifulSoup(plain_text)
for link in soup.find_all("link", {"type" : "application/rss+xml"}):
href = link.get('href')
print("RSS feed for " + website_url + "is -->" + str(href))
get_rss_feed("http://www.extremetech.com/")
But I would like to open my collected urls from the txt file, rather than
typing each, one by one.
So I have tryed to extend the program with this:
from bs4 import BeautifulSoup, SoupStrainer
with open('test.txt','r') as f:
for link in BeautifulSoup(f.read(), parse_only=SoupStrainer('a')):
if link.has_attr('http'):
print(link['http'])
But this is returning with an error, saying that beautifoulsoup is not a http
client.
I have also extended with this:
def open()
f = open("file.txt")
lines = f.readlines()
return lines
But this gave me a list separated with ","
I would be really thankfull if someone would be able to help me
Answer: Typically you'd do something like this:
with open('links.txt', 'r') as f:
for line in f:
get_rss_feed(line)
Also, it's a bad idea to define a function with the name `open` unless you
intend to replace the builtin function `open`.
|
how to remove the delay in obtaining data from the Arduino via COM?
Question: I have an Arduino connected to the joystick. Arduino sends the data via COM
port to my PC. On PC, the data processed by the program in Python, in which
the circle moving with joystick. The fact is that after a few minutes there is
a delay between the joystick and circle.
Code for Arduino
#define axis_X 0
#define axis_Y 1
int value_X, value_Y = 0;
void setup() {
Serial.begin(9600);
}
void loop() {
value_X = analogRead(axis_X);
Serial.print(value_X, DEC);
Serial.print("|");
value_Y = analogRead(axis_Y);
Serial.print(value_Y, DEC);
Serial.print("\n");
delay(20);
}
Code for PC
import Tkinter as tk
import serial
import os
import sys
import time
#connect to COM
ser = serial.Serial('COM11', 9600, dsrdtr = 1,timeout = 0)
def data():
time.sleep(0.02)
serialline = ser.readline().split("\n")
coord = []
if serialline[0]:
string = serialline[0]
coord = string.split("|")
return coord
#create window
root = tk.Tk()
canvas = tk.Canvas(root, width=1000, height=700, borderwidth=0, highlightthickness=0, bg="black")
canvas.grid()
def _create_circle(self, x, y, r, **kwargs):
return self.create_oval(x-r, y-r, x+r, y+r, **kwargs)
tk.Canvas.create_circle = _create_circle
r = 50
x = 100
y = 100
sm = 200
cir = canvas.create_circle(x, y, r, fill="blue", outline="#DDD", width=1)
root.wm_title("Circles and Arcs")
while 1:
coord = data()
x = int(coord[0])/5
y = int(coord[1])/5
canvas.coords(cir,x+ sm,y+sm,x+sm + 2*r,y+sm + 2*r)
root.update()
How to solve this problem?
Answer: In the Arduino code, only send the coordinates if the coordinates change:
int value_X = 0, value_Y = 0;
int old_X = 0, old_Y = 0;
void loop() {
value_X = analogRead(axis_X);
value_Y = analogRead(axis_Y);
if ( value_X != old_X || value_Y != old_Y )
{
Serial.print(value_X, DEC);
Serial.print("|");
Serial.print(value_Y, DEC);
Serial.print("\n");
old_X = value_X;
old_Y = value_Y;
}
delay(20);
}
In the python code, remove the delay (`time.sleep(0.02)`). My guess is that
the Arduino is sending data faster than the python code is receiving data, so
eventually you have a queue of unread messages for the python code to process.
|
Publish python project with imported modules
Question: I would like to publish a python project, but i use modules in it like socket.
How do i add to the project code that is not mine? Is that even legal?
Answer: There's no need to add modules that are in the standard library. For other
dependencies, you can
* test if the module is available (`try: import x except: error()`) and notify the user to install it or even automatically install it
* [package your program](https://pypi.python.org/pypi) and [have pip install the dependencies for you](https://pip.readthedocs.io/en/stable/user_guide/#requirements-files)
* use [cx_freeze](https://pypi.python.org/pypi/cx_Freeze) or similar to make a stand-alone package from your program that includes the modules
If you're on Windows, you could package it yourself with the minimal [Embedded
Distribution](https://docs.python.org/3/using/windows.html#embedded-
distribution) (from Python 3.5).
See the [Python wiki on deployment](https://wiki.python.org/moin/deployment)
for further reading.
Observe the licenses of your 3rd party modules.
|
Django Error - ImportError: cannot import name get_cache
Question: I was running my project using Django 1.8 and it was working properly. But
then I had to upgrade Django to 1.9 now when I again run my project it gave an
error - ImportError: cannot import name get_cache.
python manage.py syncdb
and I get following:
Traceback (most recent call last):
File "manage.py", line 10, in <module> execute_from_command_line(sys.argv)
File "/home/vermahim17/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 350,
in execute_from_command_line utility.execute()
File "/home/vermahim17/env/local/lib/python2.7/site-packages/django/core/management/__init__.py",
line 324, in execute django.setup()
File "/home/vermahim17/env/local/lib/python2.7/site-packages/django/__init__.py",
line 18, in setup apps.populate(settings.INSTALLED_APPS)
File "/home/vermahim17/env/local/lib/python2.7/site-packages/django/apps/registry.py",
line 85, in populate app_config = AppConfig.create(entry)
File "/home/vermahim17/env/local/lib/python2.7/site-packages/django/apps/config.py",
line 90, in create module = import_module(entry)
File "/usr/lib/python2.7/importlib/__init__.py",
line 37, in import_module __import__(name)
File "/home/vermahim17/env/local/lib/python2.7/site-packages/keyedcache/__init__.py",
line 27, in <module>
from django.core.cache import get_cache, InvalidCacheBackendError, DEFAULT_CACHE_ALIAS ImportError: cannot import name get_cache
Answer: I think , Django 1.9 doesn't have the provision to import get_cache method.
Please look into this to fix
<https://github.com/vstoykov/django-imagekit/commit/c26f8a0>
|
JSON to data frame after some calculations python
Question: I have a JSON file
{
"b0:47:bf:af:c1:42":
{
"No. of visits": 10, "cities":
{
"Mumbai": {"count": 5,"last_visited": "5/22/2016"},
"Kolkata": {"count": 2,"last_visited": "5/22/2016"},
"Amritsar":{"count": 3,"last_visited": "5/22/2016"}
}
}
}
there are large no. of keys like `"b0:47:bf:af:c1:42"` so what I want to take
this key as index or first column of a data frame and then store `max_visited
city` in one column which I have to get from data stored in each city like in
this case max_visited city is `"Mumbai"` whose count is `5` .and one more
column as `% visit to max_visited city` like in this case it is `50%` so first
row of data frame will be something like this.
mac_Address max_visited city % visit to max_visited city
0 b0:47:bf:af:c1:42 Mumbai 50
I have to do a lot of this kind of conversion from JSON to data frame applying
some calculations. I have put may problem a short and simple form, so any help
on this ? I am using python 2.7
Answer:
import json
d = {
"b0:47:bf:af:c1:42":
{
"No. of visits": 10, "cities":
{
"Mumbai": {"count": 5,"last_visited": "5/22/2016"},
"Kolkata": {"count": 2,"last_visited": "5/22/2016"},
"Amritsar":{"count": 3,"last_visited": "5/22/2016"}
}
},
"k0:k0:k0:k0:k0:k0":
{
"No. of visits": 24, "cities":
{
"Mumbai": {"count": 2,"last_visited": "5/22/2016"},
"Kolkata": {"count": 20,"last_visited": "5/22/2016"},
"Amritsar":{"count": 2,"last_visited": "5/22/2016"}
}
}
}
table = []
for mac, data in d.items():
row = [ mac ]
no_visits = data["No. of visits"]
max = 0
max_city = ""
for city, city_data in data["cities"].items():
if city_data["count"] > max:
max = city_data["count"]
max_city = city
row += [ max_city, max * 100 / no_visits ]
table.append(row)
print "mac_address\t\tmax_vis city\t\t%visit to max_vis city"
for row in table:
print "{}\t\t{}\t\t{}%".format(row[0], row[1], row[2])
Output:
$ python test.py
mac_address max_vis city %visit to max_vis city
b0:47:bf:af:c1:42 Mumbai 50%
k0:k0:k0:k0:k0:k0 Kolkata 83%
|
Changing a variable in a list does not affect the list
Question: I've been attempting to write some code to tune variables in my chess program,
and I found that this code doesn't do what I expect it to do at all.
import random
# Knight value, bishop value, rook value, queen value
values = [300, 300, 500, 900]
e1vals = values
e2vals = values
# Add a gaussian distributed random number to it
deltas = []
for i in range(0, len(values)):
x = random.gauss(0, 20)
deltas.append(x)
for i in range(0, len(values)):
e1vals[i] = values[i] + deltas[i]
e2vals[i] = values[i] - deltas[i]
print(e1vals)
print(e2vals)
Intuitively, the code here should simply add or subtract the values in deltas
to e1vals and e2vals, but instead it doesn't make any change other than
casting values to float.
I'm using Python 3.5.1 if that makes any difference.
Answer: The problem is that `e1values` and `e2values` refer to the _same_ list. So all
your code does is add a value to each item in the list, then subtract it
again, leaving you with the original value.
|
Python Saving long string in text file
Question: I have a long string that I want to save in a text file with the code:
`taxtfile.write(a)`
but because the string is too long, the saved file prints as:
"something something ..... something something"
how do I make sure it will save the entire string without truncating it ?
Answer: it should work regardless of the string length
this is the code I made to show it:
import random
a = ''
number_of_characters = 1000000
for i in range(number_of_characters):
a += chr(random.randint(97, 122))
print(len(a)) # a is now 1000000 characters long string
textfile = open('textfile.txt', 'w')
textfile.write(a)
textfile.close()
you can put number_of_characters to whatever number you like but than you must
wait for string to be randomized
and this is screenshot of textfile.txt: <http://prntscr.com/bkyvs9>
probably your problem is in string a.
|
I cant figure out the read part of the Python program. Please assist
Question: Write:
def main():
import random
#Open a file named numbers.txt.
myfile = open('numbers.txt', 'w')
file_size= random.randint(4,7)
#Produce the numbers
for i in range(file_size):
k = random.randrange(5,19,2)
#Write as many random intergers as the user request in the range of 5-19 on one line
#to the file.
myfile.write(str(num) + ' ')
#Close the file.
myfile.close()
print('File Saved')
#Call the main function
main()
Read: How do i get the read coding to display the random numbers and also
provide the sum?
def main():
import random
#Open a file named numbers.txt.
myfile = open('numbers.txt', 'r')
#Read/process the file's contents.
file_contents = myfile.read()
numbers = file_contents.split(" ")
odd = 0
num = int(file_contents)
for file_contents in numbers:
odd += num
#Close the file.
myfile.close()
#Print out integer totals
print('The total of the odd intergers is: ', odd)
Answer: You want to write every number into the file, therefore, you need to include
it within the for loop:
#Produce the numbers
for count in range(file_size):
num = random.randrange(5,19,2)
myfile.write(str(num) + ' ')
When processing the numbers in, you're on the right track, but you've got it
out of order:
numbers_as_strings = file_contents.split(" ")[:-1]
odd = 0
numbers is a list of strings representing each number, we want to iterate
through them and do something for each of them. You may wonder why I added
`[:-1]`? Because we made a string like `"1 2 3 4 5 "` See the last space? when
you `split()` that, you'll get `"1","2","3","4","5",""`, and we don't want the
last empty string `""`.
for number_as_string in numbers_as_strings:
odd += int(number_as_string)
finally, to print the list of numbers there is a nice way of doing that built
into python, `join()`. `' '.join()` says put all these together, and put that
space (`' '`) between them.
print(' '.join(numbers_as_strings))
|
Scrape special characters in Python Beautiful Soup
Question: How can I remove (or encode) the special characters from the page referenced
below?
import urllib2
from bs4 import BeautifulSoup
import re
link = "https://www.sec.gov/Archives/edgar/data/4281/000119312513062916/R2.htm"
request_headers = {"Accept-Language": "en-US,en;q=0.5", "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Referer": "http://google.com", "Connection": "keep-alive"}
request = urllib2.Request(link, headers=request_headers)
html = urllib2.urlopen(request).read()
soup = BeautifulSoup(html, "html.parser")
soup = soup.encode('utf-8', 'ignore')
print(soup)
Answer: Unicode objects can only be printed if they can be converted to ASCII. If it
can't be encoded in ASCII, you'll get that error. You probably want to
explicitly encode it and then print the resulting soup:
import requests
from bs4 import BeautifulSoup
import re
link = "https://www.sec.gov/Archives/edgar/data/4281/000119312513062916/R2.htm"
request_headers = {"Accept-Language": "en-US,en;q=0.5", "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Referer": "http://google.com", "Connection": "keep-alive"}
reuest = requests.get(link, headers=request_headers)
soup = BeautifulSoup(reuest.text,"lxml")
print(soup.encode('utf-8'))
|
Scraping Instagram followers page using selenium and python
Question: I have a question related to scraping the instagram followers page. I have a
code but it displays only 9 followers. Kindly help me.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
def login(driver):
username = "[email protected]" # <username here>
password = "xxxx" # <password here>
# Load page
driver.get("https://www.instagram.com/accounts/login/")
# Login
driver.find_element_by_xpath("//div/input[@name='username']").send_keys(username)
driver.find_element_by_xpath("//div/input[@name='password']").send_keys(password)
driver.find_element_by_xpath("//span/button").click()
# Wait for the login page to load
WebDriverWait(driver, 15).until(
EC.presence_of_element_located((By.LINK_TEXT, "See All")))
def scrape_followers(driver, account):
# Load account page
driver.get("https://www.instagram.com/{0}/".format(account))
# Click the 'Follower(s)' link
driver.find_element_by_partial_link_text("follower").click()
# Wait for the followers modal to load
xpath = "//div[@style='position: relative; z-index: 1;']/div/div[2]/div/div[1]"
WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.XPATH, xpath)))
# You'll need to figure out some scrolling magic here. Something that can
# scroll to the bottom of the followers modal, and know when its reached
# the bottom. This is pretty impractical for people with a lot of followers
# Finally, scrape the followers
xpath = "//div[@style='position: relative; z-index: 1;']//ul/li/div/div/div/div/a"
followers_elems = driver.find_elements_by_xpath(xpath)
return [e.text for e in followers_elems]
if __name__ == "__main__":
driver = webdriver.Firefox()
try:
login(driver)
followers = scrape_followers(driver, "instagram")
print(followers)
finally:
driver.quit()
This code was taken from another page. I dont understand how to scroll down
the followers page.
Answer: You can easily scroll down using javascript by increasing the scrollTop. You
run this scroll until the amount of users in the list no longer changes.
The difference in the amount of users can be checked using the following
function
count = 0
def check_difference_in_count(driver):
global count
new_count = len(driver.find_elements_by_xpath("//div[@role='dialog']//li"))
if count != new_count:
count = new_count
return True
else:
return False
And the following script scrolls down the user container until it has reached
the bottom
while 1:
# scroll down
driver.execute_script("document.querySelector('div[role=dialog] ul').parentNode.scrollTop=1e100")
try:
WebDriverWait(driver, 5).until(check_difference_in_count)
except:
break
|
Python ignoring while loop with True condition
Question: I'm writing a menu for a simple game, so I thought to use a while loop to let
the user choose his wanted option by clicking on it. While my program works
properly until this loop, it does not continue when reaching the line of the
following loop:
pygame.mouse.set_visible(True) # This is the last line processed.
while True: # This line ist not processed.
(curx,cury)= pygame.mouse.get_pos()
screen.blit(cursor,(curx-17,cury-21))
pygame.display.flip()
# rating values of the cursor position and pressed mouse button below:
(b1,b2,b3) = pygame.mouse.get_pressed() #getting states of mouse buttons
if (b1 == True or b2 == True or b3 == True): # "if one mouse button is pressed"
(cx,cy) = pygame.mouse.get_pos()
if (px <= curx <= px+spx and py <= cury <= py+spy):
return (screen,0)
elif (ox <= curx <= ox+sox and oy <= cury <= oy+soy):
return (screen,1)
elif (cx <= curx <= cx+scx and cy <= cury <= cy+scy):
return (screen,2)
else:
return (screen,3)
time.sleep(0.05)
I have already checked such things like wrong indentation.
BTW my interpreter (python.exe, Python 2.7.11) always does not respond after
reaching this line of the
while True:
Because the question was asked in a deleted answer:
I had a print("") between every above shown line to find the problematic line.
As I wrote 4 lines above: The interpreter (and with it the debugger and bug
report) has crashed without any further response.
The whole code of this function is: # MAIN MENU def smenu
(screen,res,menuimg,cursor): #preset: x,y = res click = False
print("preset") # TEST PART
# Fontimports, needed because of non-standard font in use
menu = pygame.image.load("menu.png")
playGame = pygame.image.load("play.png")
options = pygame.image.load("options.png")
crdts = pygame.image.load("credits.png")
print("Fontimport") # TEST PART
#SIZETRANSFORMATIONS
# setting of sizes
smx,smy = int((y/7)*2.889),int(y/7)
spx,spy = int((y/11)*6.5),int(y/11)
sox,soy = int((y/11)*5.056),int(y/11)
scx,scy = int((y/11)*5.056),int(y/11)
print("setting of sizes") # TEST PART
# setting real size of text 'n' stuff
menu = pygame.transform.scale(menu,(smx,smy))
playGame = pygame.transform.scale(playGame,(spx,spy))
options = pygame.transform.scale(options, (sox,soy))
crdts = pygame.transform.scale(crdts, (scx,scy))
cursor = pygame.transform.scale(cursor,(41,33))
print("actual size transformation") # TEST PART
#DISPLAY OF MENU
# fixing positions
mx, my = int((x/2)-((y/7)/2)*2.889),10 # position: 1. centered (x) 2. moved to the left for half of the text's length 3. positioned to the top(y), 10 pixels from edge
px, py = int((x/2)-((y/11)/2)*6.5),int(y/7+10+y/10) # position: x like above, y: upper edge -"menu"'s height, -10, - height/10
ox, oy = int((x/2)-((y/11)/2)*5.056),int(y/7+10+2*y/10+y/11)
cx, cy = int((x/2)-((y/11)/2)*5.056),int(y/7+10+3*y/10+2*y/11)
print("fixing positions") # TEST PART
# set to display
#screen.fill(0,0,0)
screen.blit(menuimg,(0,0))
screen.blit(menu,(mx,my))
screen.blit(playGame,(px,py))
screen.blit(options,(ox,oy))
screen.blit(crdts,(cx,cy))
pygame.display.flip()
print("set to display") # TEST PART
# request for input (choice of menu options)
pygame.mouse.set_visible(True)
print("mouse visible") # TEST PART last processed line
while (True):
print("While-loop") # TEST PART
curx,cury = pygame.mouse.get_pos()
screen.blit(cursor,(curx-17,cury-21))
pygame.display.flip()
# decision value below
(b1,b2,b3) = pygame.mouse.get_pressed() # getting mouse button's state
if (b1 == True or b2 == True or b3 == True): # condition true if a buton is pressed
(cx,cy) = pygame.mouse.get_pos()
if (px <= curx <= px+spx and py <= cury <= py+spy):
return (screen,0)
elif (ox <= curx <= ox+sox and oy <= cury <= oy+soy):
return (screen,1)
elif (cx <= curx <= cx+scx and cy <= cury <= cy+scy):
return (screen,2)
else:
return (screen,3)
time.sleep(0.05)
print("directly skipped")
Answer: I think that the problem is in the following line of your code:
if (b1 == True or b2 == True or b3 == True):
This condition never becomes true so you are stuck into the while loop without
your function returning anything.
|
Log all save/update/delete actions in all django models
Question: There are several models in my django app. Some of them derive from
models.Model, some - from django-hvad's translatable model. I want to log
every save/delete/update operation on them. I am aware of standard django
logger of admin actions, but they are too brief and non-verbose to satisfy my
needs.
Generally speaking, one common way to achieve this is to define super-class
with these operations and extend each model from it. This is not my case
because some of my models are translatable and some are not.
Second way are aspects/decorators. I guess, python/django must have something
like that, but I don't know what exactly :)
Please, provide me with the most suitable way to do this logging. Thanks!
Answer: You could write a mixin for your model.
import logging
class LogOnUpdateDeleteMixin(models.Model):
pass
def delete(self, *args, **kwargs):
super(LogOnUpdateDeleteMixin, self).delete(*args, **kwargs)
logging.info("%s instance %s (pk %s) deleted" % (str(self._meta), str(self), str(self.pk),) # or whatever you like
def save(self, *args, **kwargs):
super(LogOnUpdateDeleteMixin, self).save(*args, **kwargs)
logging.info("%s instance %s (pk %s) updated" % (str(self._meta), str(self), str(self.pk),) # or whatever you like
class Meta:
abstract = True
Now just use it in your model.
class MyModel(LogOnUpdateDeleteMixin, models.Model):
...
# Update/Delete actions will write to log. Re-use your mixin as needed in as many models as needed.
You can re-use this mixin again and again. Perform translation as you wish,
set some attributes in your models and check for them in the mixin.
|
GAE Python - OperationalError: (2013, 'Lost connection to MySQL server during query')
Question: I've been trying to connect to ClouSQL using Flexible Environments (vm:true)
but when I upload my app using:
gcloud preview app deploy --version MYVERSION
An error is thrown:
OperationalError: (2013, 'Lost connection to MySQL server during query')
I found out that it might be because the query is too large but I think that's
not the case because it works locally and on production when I wans't using
flexible environments with MySQLdb.
My code:
import os
import logging
import pymysql
class MySQL(object):
'''
classdocs
'''
# TO INSTALL LOCAL DB: http://stackoverflow.com/questions/30893734/no-module-named-mysql-google-app-engine-django
@classmethod
def getConnection(cls):
# When running on Google App Engine, use the special unix socket
# to connect to Cloud SQL.
if os.getenv('SERVER_SOFTWARE', '').startswith('Google App Engine/'):
logging.debug('PROJECT [%s], INSTANCE[%s] - USER [%s] - PASS [%s], SCHEMA [%s]',
os.getenv('CLOUDSQL_PROJECT'),
os.getenv('CLOUDSQL_INSTANCE'),
os.getenv('CLOUDSQL_USER'),
os.getenv('CLOUDSQL_PASS'),
os.getenv('CLOUDSQL_SCHEMA'))
db = pymysql.connect(unix_socket='/cloudsql/APP:REGION:INSTANCENAME')
#os.getenv('CLOUDSQL_PROJECT'),
#os.getenv('CLOUDSQL_INSTANCE')),
#user=os.getenv('CLOUDSQL_USER'),
#passwd=os.getenv('CLOUDSQL_PASS'),
#db=os.getenv('CLOUDSQL_SCHEMA'))
# When running locally, you can either connect to a local running
# MySQL instance, or connect to your Cloud SQL instance over TCP.
else:
db = pymysql.connect(host=os.getenv('DBDEV_HOST'), user=os.getenv('DBDEV_USER'),
passwd=os.getenv('DBDEV_PASS', ''), db=os.getenv('DBDEV_SCHEMA'))
return db
Any thoughts on this?
Thanks!
Answer: take a look in you my.cnf in the /etc/mysql/ directory and change the
parameter **max_allowed_packet** and set the value higher. then you must
restart the Database
you can also change this value via SQL like this:
MariaDB [yourSchema]> show GLOBAL variables like 'max_allowed_packet';
+--------------------+---------+
| Variable_name | Value |
+--------------------+---------+
| max_allowed_packet | 2097152 |
+--------------------+---------+
1 row in set (0.00 sec)
MariaDB [yourSchema]> SET GLOBAL max_allowed_packet=2*2097152;
Query OK, 0 rows affected (0.00 sec)
MariaDB [yourSchema]> show GLOBAL variables like 'max_allowed_packet';
+--------------------+---------+
| Variable_name | Value |
+--------------------+---------+
| max_allowed_packet | 4194304 |
+--------------------+---------+
1 row in set (0.00 sec)
MariaDB [yourSchema]>
MariaDB Manual:
> **max_allowed_packet**
>
> **Description:**
>
> Maximum size in bytes of a packet or a generated/intermediate string. The
> packet message buffer is initialized with the value from net_buffer_length,
> but can grow up to max_allowed_packet bytes. Set as large as the largest
> BLOB, in multiples of 1024. If this value is changed, it should be changed
> on the client side as well. See slave_max_allowed_packet for a specific
> limit for replication purposes.
>
> **Commandline:** \--max-allowed-packet=#
>
> **Scope:** Global
>
> **Dynamic:** Yes
>
> **Data Type:** numeric Default Value: 1048576 (1MB) <= MariaDB 10.1.6, 4M >=
> MariaDB 10.1.7, 1073741824 (1GB) (client-side)
>
> **Range:** 1024 to 1073741824
|
specific Regular expression search python
Question: This is my first post. I always come to this forum looking for an answer when
it comes to code.
I have been fighting with understanding regular expressions in Python, but it
is kind of hard.
I have text that looks like this:
Name: Clash1
Distance: -1.341m
Image Location: Test 1_navis_files\cd000001.jpg
HardStatus: New
Clash Point: 3.884m, -2.474m, 2.659m
Date Created: 2016/6/2422:45:09
Item 1
GUID: 6efaec51-b699-4d5a-b947-505a69c31d52
Path: File ->Colisiones_v2015.dwfx ->Segment ->Pipes (1) ->Pipe Types (1) ->Default (1) ->Pipe Types [2463] ->Shell
Item Name: Pipe Types [2463]
Item Type: Shell
Item 2
GUID: 6efaec51-b699-4d5a-b947-505a69c31dea
Path: File ->Colisiones_v2015.dwfx ->Segment ->Walls (4) ->Basic Wall (4) ->Wall 1 (4) ->Basic Wall [2343] ->Shell
Item Name: Basic Wall [2343]
Item Type: Shell
------------------
Name: Clash2
Distance: -1.341m
Image Location: Test 1_navis_files\cd000002.jpg
HardStatus: New
Clash Point: 3.884m, 3.533m, 2.659m
Date Created: 2016/6/2422:45:09
Item 1
GUID: 6efaec51-b699-4d5a-b947-505a69c31d52
Path: File ->Colisiones_v2015.dwfx ->Segment ->Pipes (1) ->Pipe Types (1) ->Default (1) ->Pipe Types [2463] ->Shell
Item Name: Pipe Types [2463]
Item Type: Shell
Item 2
GUID: 6efaec51-b699-4d5a-b947-505a69c31de8
Path: File ->Colisiones_v2015.dwfx ->Segment ->Walls (4) ->Basic Wall (4) ->Wall 1 (4) ->Basic Wall [2341] ->Shell
Item Name: Basic Wall [2341]
Item Type: Shell
------------------
What I need to do is to create a list that extracts for every chunk of text
(separated by the `-------------------------------`) the following things as a
string: the clash name and the clash point.
For example: `Clash 1 3.884, 3.533, 2.659`
I am really new to Python, and really do not have much understanding about
regular expressions.
Can anyone give me some clues about using regex to extract this values from
the text?
I did something like this:
exp = r'(?<=Clash Point\s)(?<=Point\s)([0-9]*)'
match = re.findall(exp, html)
if match:
OUT.append(match)
else:
OUT = 'fail'
but I know I am far from my goal.
Answer: If you're looking for a regex solution, you could come up with:
^Name:\s* # look for Name:, followed by whitespaces
# at the beginning of a line
(?P<name>.+) # capture the rest of the line
# in a group called "name"
[\s\S]+? # anything afterwards lazily
^Clash\ Point:\s* # same construct as above
(?P<point>.+) # same as the other group
See [**a demo on regex101.com**](https://regex101.com/r/eP4zP6/1).
* * *
Translated into `Python` code, this would be:
import re
rx = re.compile(r"""
^Name:\s*
(?P<name>.+)
[\s\S]+?
^Clash\ Point:\s*
(?P<point>.+)""", re.VERBOSE|re.MULTILINE)
for match in rx.finditer(your_string_here):
print match.group('name')
print match.group('point')
This will output:
Clash1
3.884m, -2.474m, 2.659m
Clash2
3.884m, 3.533m, 2.659m
See [**a working demo on ideone.com**](http://ideone.com/o1kuJ4).
|
statsmodels: What are the allowable formats to give to result.predict() for out-of-sample prediction using formula
Question: I am trying to use `statsmodels` in python to impute some values in a Pandas
`DataFrame`.
The third and fourth attempts below (df2 and df3) give an error `***
AttributeError: 'DataFrame' object has no attribute 'design_info'` This seems
a strange error, since dataframes never have such an attribute.
In any case, I do not understand what I should be passing to predict() in
order to get a prediction for the missing value of A in df2. It might also be
nice if the df3 case would give me a prediction which included np.nan for the
last element.
import pandas as pd
import numpy as np
import statsmodels.formula.api as sm
df0 = pd.DataFrame({"A": [10,20,30,324,2353,],
"B": [20, 30, 10, 100, 2332],
"C": [0, -30, 120, 11, 2]})
result0 = sm.ols(formula="A ~ B + C ", data=df0).fit()
print result0.summary()
test0 = result0.predict(df0) #works
print test0
df1 = pd.DataFrame({"A": [10,20,30,324,2353,],
"B": [20, 30, 10, 100, 2332],
"C": [0, -30, 120, 11, 2]})
result1 = sm.ols(formula="A ~ B+ I(C**2) ", data=df1).fit()
print result1.summary()
test1 = result1.predict(df1) #works
print test1
df2 = pd.DataFrame({"A": [10,20,30,324,2353,np.nan],
"B": [20, 30, 10, 100, 2332, 2332],
"C": [0, -30, 120, 11, 2, 2 ]})
result2 = sm.ols(formula="A ~ B + C", data=df2).fit()
print result2.summary()
test2 = result2.predict(df2) # Fails
newvals=df2[['B','C']].dropna()
test2 = result2.predict(newvals) # Fails
test2 = result2.predict(dict([[vv,df2[vv].values] for vv in newvals.columns])) # Fails
df3 = pd.DataFrame({"A": [10,20,30,324,2353,2353],
"B": [20, 30, 10, 100, 2332, np.nan],
"C": [0, -30, 120, 11, 2, 2 ]})
result3 = sm.ols(formula="A ~ B + C", data=df3).fit()
print result3.summary()
test3 = result3.predict(df3) # Fails
**Update using pre-release statsmodels**
Using the new release candidate for statsmodels 0.8, the df2 example, above,
now works. However, the third (df3) example fails on `result3.predict(df3)`
with `ValueError: Wrong number of items passed 5, placement implies 6`
Dropping the last row, which contains the np.nan, works, i.e.
`result3.predict(df3[:-1])` predicts correctly for the rows for which
prediction is possible.
It would still be nice for there to be an option to pass the entire df3, but
receive np.nan as prediction for the last row.
Answer: By way of answering this question, here is my resulting method to fill in some
values in a dataframe with an arbitrary (OLS) model. It drops the np.nans as
needed before predicting.
#!/usr/bin/python
import statsmodels.formula.api as sm
import pandas as pd
import numpy as np
def df_impute_values_ols(adf,outvar,model, verbose=True):
"""Specify a Pandas DataFrame with some null (eg. np.nan) values in column <outvar>.
Specify a string model (in statsmodels format, which is like R) to use to predict them when they are missing. Nonlinear transformations can be specified in this string.
e.g.: model=' x1 + np.sin(x1) + I((x1-5)**2) '
At the moment, this uses OLS, so outvar should be continuous.
Two dfs are returned: one containing just the updated rows and a
subset of columns, and version of the incoming DataFrame with some
null values filled in (those that have the model variables) will
be returned, using single imputation.
This is written to work with statsmodels 0.6.1 (see https://github.com/statsmodels/statsmodels/issues/2171 ) ie this is written in order to avoid ANY NaN's in the modeldf. That should be less necessary in future versions.
To do:
- Add plots to verbose mode
- Models other than OLS should be offered
Issues:
- the "horrid kluge" line below will give trouble if there are
column names that are part of other column names. This kludge should be
temporary, anyway, until statsmodels 0.8 is fixed and released.
The latest version of this method will be at
https://github.com/cpbl/cpblUtilities/ in stats/
"""
formula=outvar+' ~ '+model
rhsv=[vv for vv in adf.columns if vv in model] # This is a horrid kluge! Ne
updateIndex= adf[pd.isnull(adf[outvar]) ] [rhsv].dropna().index
modeldf=adf[[outvar]+rhsv].dropna()
results=sm.ols(formula, data=modeldf).fit()
if verbose:
print results.summary()
newvals=adf[pd.isnull(adf[outvar])][rhsv].dropna()
newvals[outvar] = results.predict(newvals)
adf.loc[updateIndex,outvar]=newvals[outvar]
if verbose:
print(' %d rows updated for %s'%(len(newvals),outvar))
return(newvals, adf)
def test_df_impute_values_ols():
# Find missing values and fill them in:
df = pd.DataFrame({"A": [10, 20, 30, 324, 2353, np.nan],
"B": [20, 30, 10, 100, 2332, 2332],
"C": [0, np.nan, 120, 11, 2, 2 ]})
newv,df2=df_impute_values_ols(df,'A',' B + C ', verbose=True)
print df2
assert df2.iloc[-1]['A']==2357.5427562610648
assert df2.size==18
# Can we handle some missing values which also have missing predictors?
df = pd.DataFrame({"A": [10, 20, 30, 324, 2353, np.nan, np.nan],
"B": [20, 30, 10, 100, 2332, 2332, 2332],
"C": [0, np.nan, 120, 11, 2, 2, np.nan ]})
newv,df2=df_impute_values_ols(df,'A',' B + C + I(C**2) ', verbose=True)
print df2
assert pd.isnull( df2.iloc[-1]['A'] )
assert df2.iloc[-2]['A'] == 2352.999999999995
|
How to delete alphanumeric words out of a Unicode file
Question: I need to use a dictionary database, but most of it is some alphanumeric
useless stuff, and the interesting fields are either non alphanumeric (such as
chinese characters) or inside some brackets. I searched a lot, learned about a
lot of tools like sed, awk, grep, ect I even thought about creating a Python
script to sort it out, but I never managed to find of a solution.
A line of the database looks like this:
助 L1782 DN1921 K407 O431 DO346 MN2313 MP2.0376 E314 IN623 DA633 DS248 DF367 DH330 DT284 DC248 DJ826 DG211 DM1800 P1-5-2 I2g5.1 Q7412.7 DR3945 Yzhu4 Wjo ジョ たす.ける たす.かる す.ける すけ {help} {rescue} {assist}
I need it to be like this :
助 ジョ たす.ける たす.かる す.ける すけ {help} {rescue} {assist}
Ho can I do this using any of the tools mentioned above?
Answer: Here is a Python solution if you would still like one:
import re
alpha_brack = re.compile(r"([a-zA-Z0-9.\-]+)|({.*?})")
my_string = """
助 L1782 DN1921 K407 O431 DO346 MN2313 MP2.0376 E314 IN623 DA633 DS248 DF367
DH330 DT284 DC248 DJ826 DG211 DM1800 P1-5-2 I2g5.1 Q7412.7 DR3945 Yzhu4
Wjo ジョ たす.ける たす.かる す.ける すけ {help} {rescue} {assist}"""
match = alpha_brack.findall(my_string)
new_string = my_string
for g0, _ in match: # only care about first group!
new_string = new_string.replace(g0,'',1) # replace only first occurence!
final = re.sub(r'\s{2,}',' ', new_string) # finally, clean up whitespace
print(final)
My results:
'助ジョ たすける たすかる すける すけ {help} {rescue} {assist}'
|
Google Vision API text detection Python example uses project: "google.com:cloudsdktool" and not my own project
Question: I am working on the python example for Cloud Vision API from [github
repo](https://github.com/GoogleCloudPlatform/cloud-
vision/tree/master/python/text).
I have already setup the project and activated the service account with its
key. I have also called the `gcloud auth` and entered my credentials.
Here is my code (as derived from the python example of Vision API text
detection):
import base64
import os
import re
import sys
from googleapiclient import discovery
from googleapiclient import errors
import nltk
from nltk.stem.snowball import EnglishStemmer
from oauth2client.client import GoogleCredentials
import redis
DISCOVERY_URL = 'https://{api}.googleapis.com/$discovery/rest?version={apiVersion}' # noqa
BATCH_SIZE = 10
class VisionApi:
"""Construct and use the Google Vision API service."""
def __init__(self, api_discovery_file='/home/saadq/Dev/Projects/TM-visual-search/credentials-key.json'):
self.credentials = GoogleCredentials.get_application_default()
print self.credentials.to_json()
self.service = discovery.build(
'vision', 'v1', credentials=self.credentials,
discoveryServiceUrl=DISCOVERY_URL)
print DISCOVERY_URL
def detect_text(self, input_filenames, num_retries=3, max_results=6):
"""Uses the Vision API to detect text in the given file.
"""
images = {}
for filename in input_filenames:
with open(filename, 'rb') as image_file:
images[filename] = image_file.read()
batch_request = []
for filename in images:
batch_request.append({
'image': {
'content': base64.b64encode(
images[filename]).decode('UTF-8')
},
'features': [{
'type': 'TEXT_DETECTION',
'maxResults': max_results,
}]
})
request = self.service.images().annotate(
body={'requests': batch_request})
try:
responses = request.execute(num_retries=num_retries)
if 'responses' not in responses:
return {}
text_response = {}
for filename, response in zip(images, responses['responses']):
if 'error' in response:
print("API Error for %s: %s" % (
filename,
response['error']['message']
if 'message' in response['error']
else ''))
continue
if 'textAnnotations' in response:
text_response[filename] = response['textAnnotations']
else:
text_response[filename] = []
return text_response
except errors.HttpError as e:
print("Http Error for %s: %s" % (filename, e))
except KeyError as e2:
print("Key error: %s" % e2)
vision = VisionApi()
print vision.detect_text(['test_article.png'])
This is the error message I am getting:
Http Error for test_article.png: <HttpError 403 when requesting https://vision.googleapis.com/v1/images:annotate?alt=json returned "Google Cloud Vision API has not been used in project google.com:cloudsdktool before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/vision.googleapis.com/overview?project=google.com:cloudsdktool then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.">
I want to be able to use my own project for the example and not the default
(google.com:cloudsdktool).
Answer: Download the credentials you created and update the
GOOGLE_APPLICATION_CREDENTIALS environment variable to point to that file:
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/your/credentials-key.json
Reference: <https://github.com/GoogleCloudPlatform/cloud-
vision/tree/master/python/text#set-up-to-authenticate-with-your-projects-
credentials>
|
OpenCV grabcut() background color and Contour in Python
Question: I am using Python and OpenCV. I am now using `grabcut()` to crop out the
object I want. Here is my code:
img = cv2.imread('test.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
mask = np.zeros(img.shape[:2], np.uint8)
bgdModel = np.zeros((1, 65), np.float64)
fgdModel = np.zeros((1, 65), np.float64)
rect = (2,2,630,930)
cv2.grabCut(img,mask,rect,bgdModel,fgdModel,5,cv2.GC_INIT_WITH_RECT)
mask2 = np.where((mask==2)|(mask==0), 0,1).astype('uint8')
img = img*mask2[:,:, np.newaxis]
[](http://i.stack.imgur.com/gpo6i.jpg)
[](http://i.stack.imgur.com/yNFCs.jpg)
Afterwards, I try to find out the contour.
I have tried to find the contour by the code below:
imgray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(imgray,127,255,0)
im2, contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
And it returns `a contours array` with length `48`. When I draw this out:
[](http://i.stack.imgur.com/BqZqK.png)
**First question is how can I get the contour (array) of this grab cut?**
[](http://i.stack.imgur.com/WKjAC.png)
Second question: as you can see, the background color is black. **How can I
change the background color to white?**
Thank you.
Answer: At first, you need to get the background. To this must be subtracted from the
original image with the mask image. And then change the black background to
white (or any color). And then back to add with the image of the mask.
import numpy as np
import cv2
cv2.namedWindow(‘image’, cv2.WINDOW_NORMAL)
#Load the Image
imgo = cv2.imread(‘input.jpg’)
height, width = imgo.shape[:2]
#Create a mask holder
mask = np.zeros(imgo.shape[:2],np.uint8)
#Grab Cut the object
bgdModel = np.zeros((1,65),np.float64)
fgdModel = np.zeros((1,65),np.float64)
#Hard Coding the Rect… The object must lie within this rect.
rect = (10,10,width-30,height-30)
cv2.grabCut(imgo,mask,rect,bgdModel,fgdModel,5,cv2.GC_INIT_WITH_RECT)
mask = np.where((mask==2)|(mask==0),0,1).astype(‘uint8’)
img1 = imgo*mask[:,:,np.newaxis]
#Get the background
background = imgo – img1
#Change all pixels in the background that are not black to white
background[np.where((background > [0,0,0]).all(axis = 2))] =[255,255,255]
#Add the background and the image
final = background + img1
#To be done – Smoothening the edges….
cv2.imshow(‘image’, final )
k = cv2.waitKey(0)
if k==27:
cv2.destroyAllWindows()
Information taken from the site
<https://nxtify.wordpress.com/2015/02/24/image-background-removal-using-
opencv-in-python/>
|
Why is my Spark DataFrame much slower than RDD?
Question: I have a very simple Spark DataFrame, and when running a DataFrame groupby,
the performance is terrible - about 8x slower than the (in my head) equivalent
RDD reduceByKey...
My cached DF is just two columns, customer and name with only 50k rows:
== Physical Plan ==
InMemoryColumnarTableScan [customer#2454,name#2456], InMemoryRelation [customer#2454,name#2456], true, 10000, StorageLevel(true, true, false, true, 1), Scan ParquetRelation[customer#2454,name#2456] InputPaths: hdfs://nameservice1/tmp/v2_selected_parquet/test_parquet2, None
When I run the following two snippets, I'd expect similar performance, not the
rdd version to run in 10s and the DF version in 85s...
rawtempDF2.rdd.map(lambda x: (x['name'], 1)).reduceByKey(lambda x,y: x+y).collect()
rawtempDF2.groupby('name').count().collect()
Am I missing something really fundamental here? FWIW, the RDD version runs 54
stages, and the DF version is 227 :/
Edit: I'm using Spark 1.6.1 and Python 3.4.2. Edit2: Also, the source parquet
was partitioned customer/day/name - currently 27 customers, 1 day, c. 45
names.
Answer: Both numbers seem to be relatively high and it is not exactly clear how you
create `DataFrame` or measure the time but in general difference like this can
be explained by a low number of records compared to the number of partitions.
Default value for `spark.sql.shuffle.partitions` is 200 which in the number of
tasks you get. With 50K records an overhead of starting a task will be higher
than a speedup you can gain from the parallel execution. Let's illustrate that
with a simple example. First lets create an example data:
import string
import random
random.seed(323)
def random_string():
n = random.randint(3, 6)
return (''.join(random.choice(string.ascii_uppercase) for _ in range(n)), )
df = (sc
.parallelize([random_string() for _ in range(50000)], 8).toDF(["name"])
.cache())
And measure the time depending on number of `shuffle.partitions`:
sqlContext.setConf("spark.sql.shuffle.partitions", "1")
%timeit -n 10 df.groupby('name').count().collect()
## 10 loops, best of 3: 504 ms per loop
sqlContext.setConf("spark.sql.shuffle.partitions", "1")
%timeit -n 10 df.groupby('name').count().collect()
## 10 loops, best of 3: 451 ms per loop
sqlContext.setConf("spark.sql.shuffle.partitions", "100")
%timeit -n 10 df.groupby('name').count().collect()
## 10 loops, best of 3: 624 ms per loop
sqlContext.setConf("spark.sql.shuffle.partitions", "200")
%timeit -n 10 df.groupby('name').count().collect()
## 10 loops, best of 3: 778 ms per loop
sqlContext.setConf("spark.sql.shuffle.partitions", "1000")
%timeit -n 10 df.groupby('name').count().collect()
## 10 loops, best of 3: 1.75 s per loop
Although these values are not comparable to what you claim and this data has
been collected in a local mode you can see that relatively clear pattern. The
same applies to RDD:
from operator import add
%timeit -n 10 df.rdd.map(lambda x: (x['name'], 1)).reduceByKey(add, 1).collect()
## 10 loops, best of 3: 414 ms per loop
%timeit -n 10 df.rdd.map(lambda x: (x['name'], 1)).reduceByKey(add, 10).collect()
## 10 loops, best of 3: 439 ms per loop
%timeit -n 10 df.rdd.map(lambda x: (x['name'], 1)).reduceByKey(add, 100).collect()
## 10 loops, best of 3: 1.3 s per loop
%timeit -n 10 df.rdd.map(lambda x: (x['name'], 1)).reduceByKey(add, 1000).collect()
## 10 loops, best of 3: 8.41 s per loop
In a proper distributed environment this will be higher due to cost of the
network IO.
Just for comparison lets check how long does it take to execute this task
locally without Spark
from collections import Counter
data = df.rdd.flatMap(lambda x: x).collect()
%timeit -n 10 Counter(data)
## 10 loops, best of 3: 9.9 ms per loop
You should also take a look at the data locality. Depending on a storage you
use and configuration this can add additional delay to your jobs even with a
small input like this.
|
python: filter a list of strings based on a string name
Question: I have a lists of strings that look like this:
li1 = ['pin 2.3','vlo 5.4', 'lu 1.3', '3 packages installed', '', 'bla']
l12 = ['pin 2.3','vlo 5.4', '2 packages installed', 'bla', 'bla']
I want to filter out from the lists the strings 'x packages installed' and
those that follow to have:
out1 = ['pin 2.3','vlo 5.4', 'lu 1.3']
out2 = ['pin 2.3','vlo 5.4']
How can I do that using list comprehension? thanks
Answer: You can use
[`itertools.takewhile`](https://docs.python.org/2.7/library/itertools.html#itertools.takewhile),
which takes items from the list until the given condition is not passed:
from itertools import takewhile
l12 = ['pin 2.3','vlo 5.4', '2 packages installed', 'bla', 'bla']
li1 = ['pin 2.3','vlo 5.4', 'lu 1.3', '3 packages installed', '', 'bla']
r12 = takewhile(lambda x: "packages installed" not in x, l12)
ri1 = takewhile(lambda x: "packages installed" not in x, li1)
print(list(r12))
# ['pin 2.3', 'vlo 5.4']
print(list(ri1))
# ['pin 2.3', 'vlo 5.4', 'lu 1.3']
|
Scrapy and Pycharm - Stange import error - No module named [insert name of scrapyproject here]
Question: Hi Stackoverflow Community
I encountered the following issue. I have a scrapy project which I added to my
project:
-.idea
-associate
-core
-scrapyproject
-- scrapyproject_one
--- spiders
---- __iniy.py__
---- dmoz_spider.py
-- __init__.py
-- items.py
-- pipelines.py
-- settings.py
My dmoz_spider.py looks like this:
import scrapy
from scrapyproject.scrapyproject_one import items
class DmozSpider(scrapy.Spider):
name = "dmoz"
allowed_domains = ["dmoz.org"]
start_urls = [
"http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
"http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
]
def parse(self, response):
for sel in response.xpath('//ul/li'):
item = items.ScrapyprojectItem()
item['title'] = sel.xpath('a/text()').extract()
item['link'] = sel.xpath('a/@href').extract()
item['desc'] = sel.xpath('text()').extract()
yield item
But when I navigate into the scrapyproject> folder and execute
scrapy dmoz crawl
I receive the following error:
Traceback (most recent call last):
File "c:\users\admin\appdata\local\programs\python\python35-32\lib\runpy.py", line 170, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\admin\appdata\local\programs\python\python35-32\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python35-32\Scripts\scrapy.exe\__main__.py", line 9, in <module>
File "c:\users\admin\appdata\local\programs\python\python35-32\lib\site-packages\scrapy\cmdline.py", line 108, in execute
settings = get_project_settings()
File "c:\users\admin\appdata\local\programs\python\python35-32\lib\site-packages\scrapy\utils\project.py", line 60, in get_proj
ect_settings
settings.setmodule(settings_module_path, priority='project')
File "c:\users\admin\appdata\local\programs\python\python35-32\lib\site-packages\scrapy\settings\__init__.py", line 282, in set
File "c:\users\admin\appdata\local\programs\python\python35-32\lib\site-packages\scrapy\utils\project.py", line 60, in g
et_project_settings
settings.setmodule(settings_module_path, priority='project')
File "c:\users\admin\appdata\local\programs\python\python35-32\lib\site-packages\scrapy\settings\__init__.py", line 282,
File "c:\users\admin\appdata\local\programs\python\python35-32\lib\site-packages\scrapy\cmdline.py", line 108, in
execute
settings = get_project_settings()
File "c:\users\admin\appdata\local\programs\python\python35-32\lib\site-packages\scrapy\utils\project.py", line 60
, in get_project_settings
settings.setmodule(settings_module_path, priority='project')
File "c:\users\admin\appdata\local\programs\python\python35-32\lib\site-packages\scrapy\settings\__init__.py", lin
e 282, in setmodule
ne 60, in get_project_settings
settings.setmodule(settings_module_path, priority='project')
File "c:\users\admin\appdata\local\programs\python\python35-32\lib\site-packages\scrapy\settings\__init__.py"
, line 282, in setmodule
ct.py", line 60, in get_project_settings
settings.setmodule(settings_module_path, priority='project')
File "c:\users\admin\appdata\local\programs\python\python35-32\lib\site-packages\scrapy\settings\__
init__.py", line 282, in setmodule
module = import_module(module)
File "c:\users\admin\appdata\local\programs\python\python35-32\lib\importlib\__init__.py", line 126
126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 944, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 956, in _find_and_load_unlocked
ImportError: No module named 'scrapyproject'
Wondering whether anyone would know how I might be able to approach this. Any
advice would be highly appreciated!
M
Answer: All right, I figured it out.
What I needed to do was to declare my 'spiderproject' folder as a 'Sources
folder' in PyCharm.
You can do that by going to File>Settings>Project:[Project Name]>Project
Structure.
Select the level 1 project folder of your scrapy project (in this case
'spiderproject') and Mark as Sources by clicking the blue Folder at the top.
Then go to your spider and
from spiderproject.items import [whatever you named your item class you defined in items.py ]
Hope this helps.
M
|
python float to string without precision loss
Question: For python 3 I want to convert a float to a string, with possibly different
length (i.e. number of digits) but with full precision.
Also I need to have a decimal point in any case:
1 -> '1.'
1/10 -> '0.1000000000000000055511151231257827021181583404541015625'
currently my code is this:
from decimal import Decimal
def formatMostSignificantDigits(x):
out = str(Decimal(x))
if out.find('.') < 0:
out += '.'
return out
can this be done more elegantly? (`e` notation would be possible, too)
Answer: Use Pythons [string formatting
functions](https://docs.python.org/3/library/string.html#format-examples):
>>> x = 1.0; '{:.55f}'.format(x)
'1.0000000000000000000000000000000000000000000000000000000'
>>> x = 1/10; '{:.55f}'.format(x)
'0.1000000000000000055511151231257827021181583404541015625'
If you want to be able to feed it integers (such as `1`) as well, use
`'{:.55f}'.format(float(x))`.
If you want to strip any trailing zeroes, use
`'{:.55f}'.format(x).rstrip('0')`.
Note that 55 decimals after the point is way overkill (but it's what you
showed in your question); 16 digits should suffice to express the full
precision of double-precision IEEE 754 floats (20 digits for the 80-bit
extended-precision you might encounter).
|
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [1000,625]
Question: I get the above unexpected error when trying to run this code:
# -*- coding: utf-8 -*-
"""
Created on Fri Jun 24 10:38:04 2016
@author: andrea
"""
# pylint: disable=missing-docstring
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import time
from six.moves import xrange # pylint: disable=redefined-builtin
import tensorflow as tf
from pylab import *
import argparse
import mlp
# Basic model parameters as external flags.
tf.app.flags.FLAGS = tf.python.platform.flags._FlagValues()
tf.app.flags._global_parser = argparse.ArgumentParser()
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_float('learning_rate', 0.01, 'Initial learning rate.')
flags.DEFINE_integer('max_steps', 20, 'Number of steps to run trainer.')
flags.DEFINE_integer('batch_size', 1000, 'Batch size. Must divide evenly into the dataset sizes.')
flags.DEFINE_integer('num_samples', 100000, 'Total number of samples. Needed by the reader')
flags.DEFINE_string('training_set_file', 'godzilla_dataset_size625', 'Training set file')
flags.DEFINE_string('test_set_file', 'godzilla_testset_size625', 'Test set file')
flags.DEFINE_string('test_size', 1000, 'Test set size')
def placeholder_inputs(batch_size):
images_placeholder = tf.placeholder(tf.float32, shape=(batch_size, mlp.NUM_INPUT))
labels_placeholder = tf.placeholder(tf.float32, shape=(batch_size, mlp.NUM_OUTPUT))
return images_placeholder, labels_placeholder
def fill_feed_dict(data_set_file, images_pl, labels_pl):
for l in range(int(FLAGS.num_samples/FLAGS.batch_size)):
data_set = genfromtxt("../dataset/" + data_set_file, skip_header=l*FLAGS.batch_size, max_rows=FLAGS.batch_size)
data_set = reshape(data_set, [FLAGS.batch_size, mlp.NUM_INPUT + mlp.NUM_OUTPUT])
images = data_set[:, :mlp.NUM_INPUT]
labels_feed = reshape(data_set[:, mlp.NUM_INPUT:], [FLAGS.batch_size, mlp.NUM_OUTPUT])
images_feed = reshape(images, [FLAGS.batch_size, mlp.NUM_INPUT])
feed_dict = {
images_pl: images_feed,
labels_pl: labels_feed,
}
yield feed_dict
def reader(data_set_file, images_pl, labels_pl):
data_set = loadtxt("../dataset/" + data_set_file)
images = data_set[:, :mlp.NUM_INPUT]
labels_feed = reshape(data_set[:, mlp.NUM_INPUT:], [data_set.shape[0], mlp.NUM_OUTPUT])
images_feed = reshape(images, [data_set.shape[0], mlp.NUM_INPUT])
feed_dict = {
images_pl: images_feed,
labels_pl: labels_feed,
}
return feed_dict, labels_pl
def run_training():
tot_training_loss = []
tot_test_loss = []
tf.reset_default_graph()
with tf.Graph().as_default() as g:
images_placeholder, labels_placeholder = placeholder_inputs(FLAGS.batch_size)
test_images_pl, test_labels_pl = placeholder_inputs(FLAGS.test_size)
logits = mlp.inference(images_placeholder)
test_pred = mlp.inference(test_images_pl, reuse=True)
loss = mlp.loss(logits, labels_placeholder)
test_loss = mlp.loss(test_pred, test_labels_pl)
train_op = mlp.training(loss, FLAGS.learning_rate)
#summary_op = tf.merge_all_summaries()
init = tf.initialize_all_variables()
saver = tf.train.Saver()
sess = tf.Session()
#summary_writer = tf.train.SummaryWriter("./", sess.graph)
sess.run(init)
test_feed, test_labels_placeholder = reader(FLAGS.test_set_file, test_images_pl, test_labels_pl)
# Start the training loop.
for step in xrange(FLAGS.max_steps):
start_time = time.time()
feed_gen = fill_feed_dict(FLAGS.training_set_file, images_placeholder, labels_placeholder)
i=1
for feed_dict in feed_gen:
_, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)
_, test_loss_val = sess.run([test_pred, test_loss], feed_dict=test_feed)
tot_training_loss.append(loss_value)
tot_test_loss.append(test_loss_val)
#if i % 10 == 0:
#print('%d minibatches analyzed...'%i)
i+=1
if step % 1 == 0:
duration = time.time() - start_time
print('Epoch %d (%.3f sec):\n training loss = %f \n test loss = %f ' % (step, duration, loss_value, test_loss_val))
predictions = sess.run(test_pred, feed_dict=test_feed)
savetxt("predictions", predictions)
savetxt("training_loss", tot_training_loss)
savetxt("test_loss", tot_test_loss)
plot(tot_training_loss)
plot(tot_test_loss)
figure()
scatter(test_feed[test_labels_placeholder], predictions)
#plot([.4, .6], [.4, .6])
run_training()
#if __name__ == '__main__':
# tf.app.run()
this is mlp:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
import tensorflow as tf
NUM_OUTPUT = 1
NUM_INPUT = 625
NUM_HIDDEN = 5
def inference(images, reuse=None):
with tf.variable_scope('hidden1', reuse=reuse):
weights = tf.get_variable(name='weights', shape=[NUM_INPUT, NUM_HIDDEN], initializer=tf.contrib.layers.xavier_initializer())
weight_decay = tf.mul(tf.nn.l2_loss(weights), 0.00001, name='weight_loss')
tf.add_to_collection('losses', weight_decay)
biases = tf.Variable(tf.constant(0.0, name='biases', shape=[NUM_HIDDEN]))
hidden1_output = tf.nn.relu(tf.matmul(images, weights)+biases, name='hidden1')
with tf.variable_scope('output', reuse=reuse):
weights = tf.get_variable(name='weights', shape=[NUM_HIDDEN, NUM_OUTPUT], initializer=tf.contrib.layers.xavier_initializer())
weight_decay = tf.mul(tf.nn.l2_loss(weights), 0.00001, name='weight_loss')
tf.add_to_collection('losses', weight_decay)
biases = tf.Variable(tf.constant(0.0, name='biases', shape=[NUM_OUTPUT]))
output = tf.nn.relu(tf.matmul(hidden1_output, weights)+biases, name='output')
return output
def loss(outputs, labels):
rmse = tf.sqrt(tf.reduce_mean(tf.square(tf.sub(labels, outputs))), name="rmse")
tf.add_to_collection('losses', rmse)
return tf.add_n(tf.get_collection('losses'), name='total_loss')
def training(loss, learning_rate):
tf.scalar_summary(loss.op.name, loss)
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
global_step = tf.Variable(0, name='global_step', trainable=False)
train_op = optimizer.minimize(loss, global_step=global_step)
return train_op
here the error:
Traceback (most recent call last):
File "<ipython-input-1-f16dfed3b99b>", line 1, in <module>
runfile('/home/andrea/test/python/main_mlp_yield.py', wdir='/home/andrea/test/python')
File "/usr/local/lib/python2.7/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 714, in runfile
execfile(filename, namespace)
File "/usr/local/lib/python2.7/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 81, in execfile
builtins.execfile(filename, *where)
File "/home/andrea/test/python/main_mlp_yield.py", line 127, in <module>
run_training()
File "/home/andrea/test/python/main_mlp_yield.py", line 105, in run_training
_, test_loss_val = sess.run([test_pred, test_loss], feed_dict=test_feed)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 372, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 636, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 708, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 728, in _do_call
raise type(e)(node_def, op, message)
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [1000,625]
[[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[1000,625], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Caused by op u'Placeholder', defined at:
File "/usr/local/lib/python2.7/dist-packages/spyderlib/widgets/externalshell/start_ipython_kernel.py", line 205, in <module>
__ipythonkernel__.start()
File "/usr/local/lib/python2.7/dist-packages/ipykernel/kernelapp.py", line 442, in start
ioloop.IOLoop.instance().start()
File "/usr/local/lib/python2.7/dist-packages/zmq/eventloop/ioloop.py", line 162, in start
super(ZMQIOLoop, self).start()
File "/usr/local/lib/python2.7/dist-packages/tornado/ioloop.py", line 883, in start
handler_func(fd_obj, events)
File "/usr/local/lib/python2.7/dist-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events
self._handle_recv()
File "/usr/local/lib/python2.7/dist-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv
self._run_callback(callback, msg)
File "/usr/local/lib/python2.7/dist-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback
callback(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/ipykernel/kernelbase.py", line 276, in dispatcher
return self.dispatch_shell(stream, msg)
File "/usr/local/lib/python2.7/dist-packages/ipykernel/kernelbase.py", line 228, in dispatch_shell
handler(stream, idents, msg)
File "/usr/local/lib/python2.7/dist-packages/ipykernel/kernelbase.py", line 391, in execute_request
user_expressions, allow_stdin)
File "/usr/local/lib/python2.7/dist-packages/ipykernel/ipkernel.py", line 199, in do_execute
shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2723, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2831, in run_ast_nodes
if self.run_code(code, result):
File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2885, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-1-f16dfed3b99b>", line 1, in <module>
runfile('/home/andrea/test/python/main_mlp_yield.py', wdir='/home/andrea/test/python')
File "/usr/local/lib/python2.7/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 714, in runfile
execfile(filename, namespace)
File "/usr/local/lib/python2.7/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 81, in execfile
builtins.execfile(filename, *where)
File "/home/andrea/test/python/main_mlp_yield.py", line 127, in <module>
run_training()
File "/home/andrea/test/python/main_mlp_yield.py", line 79, in run_training
images_placeholder, labels_placeholder = placeholder_inputs(FLAGS.batch_size)
File "/home/andrea/test/python/main_mlp_yield.py", line 37, in placeholder_inputs
images_placeholder = tf.placeholder(tf.float32, shape=(batch_size, mlp.NUM_INPUT))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 895, in placeholder
name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 1238, in _placeholder
name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 704, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2260, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1230, in __init__
self._traceback = _extract_stack()
I really don't understand why. It looks to me that I'm feeding all the
placeholders before using them. I also removed the "merge_all_summaries" since
this problem is similar to other
([this](http://stackoverflow.com/questions/35413618/tensorflow-placeholder-
error-when-using-tf-merge-all-summaries) and
[this](http://stackoverflow.com/questions/35114376/error-when-computing-
summaries-in-tensorflow)), but it didn't help
EDIT: training data: 100000 samples x 625 features test data: 1000 samples x
625 features num. output: 1
Answer: I think the problem is in this code:
def loss(outputs, labels):
rmse = tf.sqrt(tf.reduce_mean(tf.square(tf.sub(labels, outputs))), name="rmse")
tf.add_to_collection('losses', rmse)
return tf.add_n(tf.get_collection('losses'), name='total_loss')
You're adding up all the losses from collection 'losses', including both your
training and test losses. In particular, in this code:
loss = mlp.loss(logits, labels_placeholder)
test_loss = mlp.loss(test_pred, test_labels_pl)
The first call to mlp.loss will add training losses to the 'losses'
collection. The second call to mlp.loss will incorporate those values in its
result. So when you try to compute the test_loss, Tensorflow complains that
you didn't feed all of the inputs (the training placeholders).
Perhaps you meant something like this?
def loss(outputs, labels):
rmse = tf.sqrt(tf.reduce_mean(tf.square(tf.sub(labels, outputs))), name="rmse")
return rmse
I hope that helps!
|
result of coupled ODE in python code is different from mathematica
Question: As per my language knowledge my code is written correct. But It is not giving
me correct solution (plot). When I had solved same system of ODE's in
mathematica, I have correct solution and both solutions are totally different.
I am writing a research project so I need a proper code in python. could you
please let me know the mistake of mine code. [python code
solution](http://i.stack.imgur.com/B9vhE.png) [Mathematica
solution](http://i.stack.imgur.com/djLRv.png)
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as si
##Three system
def func(state, T):
H = state[0]
P = state[1]
R = state[2]
Hd = -(16./3.)*np.pi*P
Pd = -4.*H*P
Rd = H*R
return Hd,Pd,Rd
T = np.linspace(0.1,0.9,50)
state0 = [1,0.0001, 0.1]
s = si.odeint(func, state0, T)
h = np.transpose(s)
plt.plot(T,h[0])
plt.show()
Mathematica code
Clear[H,\[Rho],a]
Eq1=(H'[t] == -16 \[Pi] \[Rho][t]/3)
Eq2= (\[Rho]'[t] == -4 H[t] \[Rho][t])
Eq3 = (a'[t] == H[t] a[t])
sol=NDSolve[{Eq1,Eq2, Eq3,
H[0.1]==0.1, \[Rho][0.1]==0.1, a[0.1]==0.1},
{H[t],\[Rho][t],a[t]}, {t,0.1, 0.9}]
Plot[Evaluate[{H[t]}/.sol],{t,0.1,0.9}]
[](http://i.stack.imgur.com/KIK5V.png)
Answer: Both codes are correct, I just turned off my laptop and on it again, and it
gives me the correct result (as mathematica)
|
How to write the data of 3 dictionaries in a table separated by tab into a text file?
Question: Say I have the following 3 dictionaries:
d1 = {'Ben': {'Skill': 'true', 'Magic': 'false'}, 'Tom': {'Skill': 'true', 'Magic': 'true'}}
d2 = {'Ben': {'Strength': 'wo_mana', 'Int': 'wi_mana', 'Speed': 'wo_mana'}, 'Tom': {'Int': 'wi_mana', 'Agility': 'wo_mana'}}
d3 = {'Ben': {'Strength': '1.10', 'Int': '1.20', 'Speed': '1.50'}, 'Tom': {'Int': '1.40', 'Agility': '1.60'}}
I want to write the data of the 3 dictionaries above into a table separated by
tab into a .txt or .csv file using `with open('filename', 'w') as f:`
My desired output (when opened in Excel):
Name Skill Magic wo_mana wi_mana
Ben true false Strength = 1.10 Int = 1.20
Speed = 1.50
Tom true true Agility = 1.60 Int = 1.40
My code so far:
with open('output.txt', 'w')as f:
f.write("Name\tSkill\tMagic\two_mana\twi_mana\n")
for key in d1:
f.write('%s\t%s\t%s\n' %(key, d1[key]['Skill'], d1[key]['Magic']))
and I got this:
Name Skill Magic wo_mana wi_mana
Ben true false
Tom true true
How am I supposed to write the `wo_mana` and `wi_mana` part without using the
`xlsxwriter` module?
Note:
a) The 3 dictionaries are created when extracting the data from a input file,
the keys and values are not defined by myself, hence I do not know the order
of the keys and values in the dictionaries.
b) I wish to write into a .txt or .csv file which will be opened in Excel with
tab as the delimiter.
c) I am using Python 2.7.
Answer: I was not able to produce your exact desired output, but I got something that
will work in Excel.
Your three dictionaries each contain information about an object (or
character) so I created a character class rather than use your dictionaries.
class Character(object):
def __init__(self, name, skill, magic, skill_list):
"""
Initialize the character. Skill and magic are boolean.
Skill list is a list of skill tuples. A skill tuple has
the format: (skill, value, mana)
"""
self.name = name
self.skill = skill
self.magic = magic
self.skills = {s[0]: (s[1], s[2]) for s in skill_list}
Then create each character:
ben_skills = [
('Strength', 1.10, 'wo_mana'),
('Speed', 1.50, 'wo_mana'),
('Int', 1.20, 'wi_mana')
]
tom_skills = [
('Agility', 1.60, 'wo_mana'),
('Int', 1.40, 'wi_mana')
]
characters = [
Character('Ben', True, False, ben_skills),
Character('Tom', True, True, tom_skills)
]
And write them to a CSV file (Excel knows how to read these):
with open('output.csv', 'w') as f:
f.write('Name,Skill,Magic,wo_mana,wi_mana\n')
for c in characters:
wo_mana = []
wi_mana = []
for k, s in c.skills.items():
if s[1] == 'wo_mana':
wo_mana.append('{} = {}'.format(k, s[0]))
elif s[1] == 'wi_mana':
wi_mana.append('{} = {}'.format(k, s[0]))
f.write('{},{},{},{},{}\n'.format(
c.name,
str(c.skill),
str(c.magic),
'; '.join(wo_mana),
'; '.join(wi_mana)
))
There is probably a better way to do this. Using `import csv` could have
improvements (read about CSV
[here](https://docs.python.org/2/library/csv.html)).
Hope this helps!
|
How does Perl avoid shebang loops?
Question: `perl` interprets the shebang itself and mimics the behavior of `exec*(2)`. I
think it emulates the Linux behavior of splitting on all whitespace instead of
BSD first-whitespace-only thing, but never mind that.
Just as a quick demonstration `really_python.pl`
#!/usr/bin/env python
# the following line is correct Python but not correct Perl
from collections import namedtuple
print "hi"
prints hi when invoked as `perl really_python.pl`.
Also, the following programs will do the right thing regardless of whether
they are invoked as `perl program` or `./program`.
#!/usr/bin/perl
print "hi\n";
and
#!/usr/bin/env perl
print "hi\n";
I don't understand why the program isn't infinite looping. In either of the
above cases, the shebang line either is or resolves to an absolute path to the
`perl` interpreter. It seems like the next thing that should happen after that
is `perl` parses the file, notices the shebang, and delegates to the shebang
path (in this case itself). Does `perl` compare the shebang path to its own
`ARGV[0]`? Does `perl` look at the shebang string and see if it contains
`"perl"` as a substring?
I tried to use a symlink to trigger the infinite loop behavior I was
expecting.
$ ln -s /usr/bin/perl /tmp/p
#!/tmp/p
print "hi\n";
but that program printed "hi" regardless of how it was invoked.
On OS X, however, I was able to trick `perl` into an infinite shebang loop
with a script.
Contents of `/tmp/pscript`
#!/bin/sh
perl "$@"
Contents of perl script
#!/tmp/pscript
print "hi\n";
and this does infinite loop (on OS X, haven't tested it on Linux yet).
`perl` is clearly going to a lot of trouble to handle shebangs correctly in
reasonable situations. It isn't confused by symlinks and isn't confused by
normal `env` stuff. What exactly is it doing?
Answer: The documentation for this feature is found in
[perlrun](http://perldoc.perl.org/perlrun.html).
> If the `#!` line does not contain the word "perl" nor the word "indir", the
> program named after the `#!` is executed instead of the Perl interpreter.
> This is slightly bizarre, but it helps people on machines that don't do
> `#!`, because they can tell a program that their SHELL is _/usr/bin/perl_ ,
> and Perl will then dispatch the program to the correct interpreter for them.
So, if the shebang contains `perl` or `indir`, the interpreter from the
shebang line isn't executed.
[Additionally](http://perl5.git.perl.org/perl.git/blob/be2c0c650b028f54e427f2469a59942edfdff8a9:/toke.c#l5116),
the interpreter from the shebang line isn't executed if `argv[0]` doesn't
contain `perl`. This is what prevents the infinite loop in your example.
* When launched using `perl /tmp/pscript`,
1. the kernel executes `perl /tmp/pscript`,
2. then `perl` executes `/tmp/p /tmp/pscript`.
3. At this point, `argv[0]` doesn't contain `perl`, so the shebang line is no longer relevant.
* When launched using `/tmp/pscript`,
1. the kernel executes `/tmp/p /tmp/pscript`.
2. At this point, `argv[0]` doesn't contain `perl`, so the shebang line is no longer relevant.
|
Training a simple net doesn't appear to change values of variables more than once
Question: I'm sure I'm missing something obvious. Here's the tail end of my code:
# simple loss function
loss = tf.reduce_sum(tf.abs(tf.sub(x4, yn)))
train_step = tf.train.GradientDescentOptimizer(0.000001).minimize(loss)
with tf.Session() as sess:
tf.initialize_all_variables().run()
print(sess.run([tf.reduce_sum(w1), tf.reduce_sum(b1)]))
for i in range(5):
# fill in x1 and yn
sess.run(train_step, feed_dict={x1: in_images, yn: out_images})
print(sess.run([tf.reduce_sum(w1), tf.reduce_sum(b1)]))
The network descending from the loss function is a simple CNN, with conv2d's
and bias_adds, and elu's. I wanted to take a look at how the weights and
biases for the first layer change. The first print returns the expected values
([ +/- 100 or so, 0]), as w1 was initialized with a random normal and b1
initialized with zeros.
The second print statement gives a different value pair, as expected.
What's not expected is that each time through the loop, the second print
statement prints the same value pair, as though each invocation of train_step
is doing the same thing each time, rather than updating the values of the
Variables in the loss network.
What am I missing here?
Here's a cut and paste of the interesting part of the run:
I tensorflow/core/common_runtime/gpu/gpu_device.cc:806] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 970, pci bus id: 0000:01:00.0)
[-50.281082, 0.0]
W tensorflow/core/common_runtime/bfc_allocator.cc:213] Ran out of memory trying to allocate 3.98GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
[112.52832, 0.078026593]
[112.52832, 0.078026593]
[112.52832, 0.078026593]
[112.52832, 0.078026593]
[112.52832, 0.078026593]
I can post the network itself if necessary, but I suspect the problem is my
mental model of how tensorflow updates state.
* * *
Here's the entire python program, with a dummy routine for the image input to
show the issue:
import tensorflow as tf
import numpy as np
from scipy import misc
H = 128
W = 128
x1 = tf.placeholder(tf.float32, [None, H, W, 1], "input_image")
yn = tf.placeholder(tf.float32, [None, H-12, W-12, 1], "test_image")
w1 = tf.Variable(tf.random_normal([7, 7, 1, 64])) # 7x7, 1 input chan, 64 output chans
b1 = tf.Variable(tf.constant(0.1, shape=[64]))
x2 = tf.nn.conv2d(x1, w1, [1,1,1,1], "VALID")
x2 = tf.nn.bias_add(x2, b1)
x2 = tf.nn.elu(x2)
w2 = tf.Variable(tf.random_normal([5, 5, 64, 32])) # 5x5, 64 input 32 output chans
b2 = tf.Variable(tf.constant(0.1, shape=[32]))
x3 = tf.nn.conv2d(x2, w2, [1,1,1,1], "VALID")
x3 = tf.nn.bias_add(x3, b2)
x3 = tf.nn.elu(x3)
w3 = tf.Variable(tf.random_normal([3, 3, 32, 1]))
b3 = tf.Variable(tf.constant(0.1, shape=[1]))
x4 = tf.nn.conv2d(x3, w3, [1,1,1,1], "VALID")
x4 = tf.nn.bias_add(x4, b3)
x4 = tf.nn.elu(x4)
loss = tf.reduce_sum(tf.abs(tf.sub(x4, yn)))
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss)
# fake for testing
in_images = np.random.rand(20, 128, 128, 1)
out_images = np.random.rand(20, 116, 116, 1)
with tf.Session() as sess:
tf.initialize_all_variables().run()
print(sess.run([tf.reduce_mean(w1), tf.reduce_mean(b1)]))
for i in range(5):
# fill in x1 and yn
sess.run(train_step, feed_dict={x1: in_images, yn: out_images})
print(sess.run([tf.reduce_mean(w1), tf.reduce_mean(b1)]))
I've looked at a bunch of other training examples and I'm still not seeing
what I am doing wrong. Changing the learning rate will just change the numbers
printed but the behavior remains the same, no apparent change from running the
optimizer.
Answer: The error was in the way I computed my loss function. I just added up all of
the errors across the batch, rather than taking the mean error for each pair
of images. The following loss function
# simple loss function
diff_image = tf.abs(tf.sub(x4,yn))
# sum over all dimensions except batch dim
err_sum = tf.reduce_sum(diff_image, [1,2,3])
#take mean over batch
loss = tf.reduce_mean(err_sum)
actually starts converging with the AdamOptimizer. The
GradientDescentOptimizer still exhibits the "change once only" feature, and
I'll go treat it as a bug and post on github.
|
Equivalent code in python (time)
Question: Javascript code:
var date = new Date(1466278504960);
return: Sat Jun 18 2016 20:35:04 GMT+0100 (WEST)
How can I convert the same number to date in python ?
When I use
datetime.datetime.fromtimestamp(int("1466278504960")).strftime('%Y-%m-%d %H:%M:%S'))
I receive this error: ValueError: year is out of range
Answer: [`datetime.datetime.fromtimestamp`](https://docs.python.org/3/library/datetime.html#datetime.datetime.fromtimestamp)
will do this, but you need to divide the value by `1000` first (the numeric
value you give and JavaScript's `Date` expects is in _milliseconds_ since the
epoch, where Python's API takes a floating point _seconds_ since the epoch):
from datetime import datetime
date = datetime.fromtimestamp(1466278504960 / 1000.)
That makes the raw `datetime` object; if you want it formatted the same, you
should take a look at [`datetime` object's `strftime`
method](https://docs.python.org/3/library/datetime.html#datetime.datetime.strftime).
|
Running phantomjs on linux using python
Question: I followed [this link](http://stackoverflow.com/questions/8778513/how-can-i-
setup-run-phantomjs-on-ubuntu) and now when I type `phan` and then tab (`\t`)
it does autocomplete to phantomJS.
Yet if I run `phantomJS -v` or `phantomJS --version` I get:
bash: /usr/local/bin/phantomjs: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
Additionally if I try to run:
>>> from selenium import webdriver
>>> driver = webdriver.PhantomJS()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/selenium/webdriver/phantomjs/webdriver.py", line 50, in __init__
service_args=service_args, log_path=service_log_path)
File "/usr/local/lib/python2.7/site-packages/selenium/webdriver/phantomjs/service.py", line 50, in __init__
service.Service.__init__(self, executable_path, port=port, log_file=open(log_path, 'w'))
IOError: [Errno 13] Permission denied: 'ghostdriver.log'
>>>
If I try to follow [this I
get](http://stackoverflow.com/questions/17048594/how-to-disable-or-change-the-
path-of-ghostdriver-log):
>>> import os
>>> driver = webdriver.PhantomJS(service_log_path=os.path.devnull)
Exception AttributeError: "'Service' object has no attribute 'log_file'" in <bound method Service.__del__ of <selenium.webdriver.phantomjs.service.Service object at 0x7f182ec13690>> ignored
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/selenium/webdriver/phantomjs/webdriver.py", line 51, in __init__
self.service.start()
File "/usr/local/lib/python2.7/site-packages/selenium/webdriver/common/service.py", line 69, in start
os.path.basename(self.path), self.start_error_message)
selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable needs to be in PATH.
>>>
Is my selenium/phatnomjs installed with the proper rights?
I created a directory `/home/ec2-user/temp` and set:
chmod 777 /home/ec2-user/temp
Yet
>>> from selenium import webdriver
>>> driver = webdriver.PhantomJS(service_log_path='/home/ec2-user/temp/ghostdriver.log')
Yields:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/selenium/webdriver/phantomjs/webdriver.py", line 51, in __init__
self.service.start()
File "/usr/local/lib/python2.7/site-packages/selenium/webdriver/common/service.py", line 69, in start
os.path.basename(self.path), self.start_error_message)
selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable needs to be in PATH.
If I type `which phantomjs` I get:
$ which phantomjs
/usr/local/bin/phantomjs
Answer: It very much sounds like a 64 vs 32 bits issue.
To find out whats the version of your ubuntu, you can run
$ uname -i
x86_64
Then make sure to [download](http://phantomjs.org/download.html) the correct
version of phantom
* 32 bits version: <https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-i686.tar.bz2>
* 64 bits version: <https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2>
Also make sure your lib are corresponding to the version of your OS.
|
How to import GstPbutils?
Question: I'm trying to use the GstPbutils python3 module, but just importing it breaks
everything, here is the code:
#!/usr/bin/python3
import gi
gi.require_version('GstPbutils', '1.0')
from gi.repository import GstPbutils
print('Hello World!')
And the output:
/usr/lib/python3/dist-packages/gi/module.py:178: Warning: g_array_append_vals: assertion 'array' failed
g_type = info.get_g_type()
/usr/lib/python3/dist-packages/gi/module.py:178: Warning: g_hash_table_lookup: assertion 'hash_table != NULL' failed
g_type = info.get_g_type()
/usr/lib/python3/dist-packages/gi/module.py:178: Warning: g_hash_table_insert_internal: assertion 'hash_table != NULL' failed
g_type = info.get_g_type()
Hello World!
Is my distribution broken? Am I doing it wrong?
Answer: Those are just warnings, everything should works fine, anyway you can remove
them using the following code:
import sys
sys.modules["gi.overrides.Gst"] = None
sys.modules["gi.overrides.GstPbutils"] = None
Source: <https://bugzilla.gnome.org/show_bug.cgi?id=736260>
|
How can I convert a string sent over UART to integer in python?
Question: I'm having difficulty converting a string I'm receiving over UART to the
decimal version of it. I read in one byte with `port.read(1)` then print it
print "%s: %s" % ( time.ctime(time.time()), str)
This prints out the expected character that matches with the decimal value for
that [ascii character](http://www.asciitable.com/index/asciifull.gif). I'm
sending known valued between 0 and 100. My issue is I can't convert this to
the decimal value and print that instead of say '*' for 42. I'm pretty sure
the default encoding/decoding is correct since the integer value in C is
showing the correct character when received in python.
It may also help to mention the sender is an AVR microcontroller programmed in
avr-gcc. I've messed around with decode('utf8'), decode("ISO-8859-1"), and
decode("ISO-8859-2") but again I'm pretty sure this is not what I want. I've
also tried converting the read value to a decimal with the int() function with
no luck. I get.
ValueError: invalid literal for int() with base 10: '\xe4'
Answer: python works perfectly, it is your avr code that is broken.
remember what you are sending is a byte with the value 0 to 100, then on the
python side you are treating this byte as a string, but characters are mapped
through ascii or unicode character maps, so you are getting some characters,
but not necessary integer:
In [1]> chr(48)
Out[1]> '0'
So you can change the code on the avr, so instead of one byte it will send one
to three bytes, eg. '100' will consist of three bytes:
'\x31\x30\x30'
# 49 48 48 - in decimal
or
you will convert data on the python side:
value = port.read(1) # reading one byte
int_value = ord(value)
# or using struct module - there you can decode multiple values in one go
import struct
int_value = struct.unpack('B', value)[0] # struct always is returning tuple
|
deleting a file after uploading to s3 in python
Question:
def upload(s):
conn=tinys3.Connection("AKIAJPOZEBO47FJYS3OA","04IZL8X9wlzBB5LkLlZD5GI/",tls=True)
f = open(s,'rb')
z=str(datetime.datetime.now().date())
x=z+'/'+s
conn.upload(x,f,'crawling1')
os.remove(s)
The file is not deleting after i upload to `s3` it is not deleting in the
local directory any alternate solutions?
Answer: You have to close the file before you can delete it:
import os
a = open('a')
os.remove('a')
>> Traceback (most recent call last):
File "main.py", line 35, in <module>
os.remove('a')
PermissionError: [WinError 32] The process cannot access the file because
it is being used by another process: 'a'
You should add `f.close()` before the call to `os.remove`, or simply use
`with`:
with open(s,'rb') as f:
conn = tinys3.Connection("AKIAJPOZEBO47FJYS3OA","04IZL8X9wlzBB5LkLlZD5GI/",tls=True)
z = str(datetime.datetime.now().date())
x = z + '/' + s
conn.upload(x, f, 'crawling1')
os.remove(s)
|
Iterate through JSON [Python]
Question: I am reading the following JSON file in python:
{
"name": "Property",
"base": "PersistedModel",
"idInjection": true,
"options": {
"validateUpsert": true
},
"properties": {
"uuid": {
"type": "string"
},
"userID": {
"type": "number"
},
"address": {
"type": "string"
},
"price": {
"type": "number"
},
"lastUpdated": {
"type": "string"
}
},
"validations": [],
"relations": {
"rooms": {
"type": "hasMany",
"model": "Room",
"foreignKey": "id"
},
"addedByUser": {
"type": "hasMany",
"model": "User_ESPC",
"foreignKey": "id"
}
},
"acls": [],
"methods": {}
}
I am trying to read the `properties` and get the name of the property (such as
"uuid") and for each name I want to read the type of the object. So far my
code lists all of the properties like that:
Property name: price
Property name: userID
Property name: uuid
Property name: lastUpdated
Property name: address
The code that does that is:
import json
#json_file='a.json'
json_file='common/models/property.json'
with open(json_file, 'r') as json_data:
data = json.load(json_data)
propertyName = data["name"]
properties = data["properties"]
# print (properties)
for property in properties:
print ('Property name: ' + property)
# propertyType = property["type"]
# print (propertyType)
The problem is when I uncomment the bottom 2 lines which should get the type
of the property object I get an error:
Property name: price
Traceback (most recent call last):
File "exportPropertyToAndroid.py", line 19, in <module>
propertyType = property["type"]
TypeError: string indices must be integers
Answer: Iterating over a dictionary yields its keys. `properties` is a dictionary:
properties = data["properties"]
and when you iterate over it in:
for property in properties:
print ('Property name: ' + property)
# propertyType = property["type"]
# print (propertyType)
`property` references each key in turn. As your dictionary represents JSON
data, the keys are strings and the error is quite self explanatory.
`property["type"]` is trying to get a character from the string at the indice
`"type"`.
Instead you should either use the key `property` to fetch additional values
from the dictionary:
for property in properties:
print ('Property name: ' + property)
propertyType = properties[property]["type"]
print(propertyType)
or iterate over keys and values:
for property, value in properties.items():
print ('Property name: ' + property)
propertyType = value["type"]
print(propertyType)
|
Parse POST HTTP response using Python
Question: i want to parse a POST HTTP response using python.
My response looks like:
{
"Result": 0,
"ResponseStatus": {
"ErrorCode": null,
"Message": null,
"StackTrace": null,
"Errors": null
},
"SessionId": "68ebcd6f-0aef-420d-a12b-c953f8df8ed1",
"ResponseHeader": {
"Succeeded": true,
"Errors": []
}
}
I want to parse the - "SessionID" to a 2nd http request. How can i achieve it?
Thanks !
Answer:
import json
response = '{"Result": 0, "ResponseStatus": { "ErrorCode": null,"Message": null, "StackTrace": null, "Errors": null },"SessionId": "68ebcd6f-0aef-420d-a12b-c953f8df8ed1", "ResponseHeader": { "Succeeded": true, "Errors": [] } }'
json_response = json.loads(response)
print json_response['SessionId']
I guess you are using urllib, I recommend using [requests](http://docs.python-
requests.org/en/master/)
|
Python MySQLdb - Connection in a class
Question: I am making a Python project where I have to seek and retreive data from a
database.
I tried making a class, in which I declare the connection and do my queries,
here is moreless what I have so far.
import MySQLdb
dbc =("localhost","root","1234","users")
class sql:
db = MySQLdb.connect(dbc[0],dbc[1],dbc[2],dbc[3])
cursor = db.cursor()
def query(self,sql):
sql.cursor.execute(sql)
return sql.cursor.fetchone()
def rows(self):
return sql.cursor.rowcount
sqlI = sql()
print(sqlI.query("SELECT `current_points` FROM `users` WHERE `nick` = 'username';"))
So, the main problem is that the variable `db` and `cursor` are not callable
from other def's/functions from the same Class. What I'd like to get, is a
polished query, where I can make queries and retreive it's content. This would
summarize my code, therefore I should do.
Answer: That's not how you write classes in Python. You need to define your connection
and cursor inside the `__init__` method, and refer to them via `self`.
class sql:
dbc = ("localhost","root","1234","users")
def __init__(self):
db = MySQLdb.connect(*self.dbc)
self.cursor = db.cursor()
def query(self,sql):
self.cursor.execute(sql)
return self.cursor.fetchone()
def rows(self):
return self.cursor.rowcount
|
Scrapy Xpath output empty
Question: I would like extract data on this website : <http://www.pokepedia.fr/Pikachu>
I'm learning the python and how to use Scrapy and my problem is : Why I can't
retrieve the data with Xpath ?
My Xpath look good when i test xpath in my browser, it return me the correct
value. (Google Chrome)
import re
from scrapy import Spider
from scrapy.selector import Selector
from stack.items import StackItem
class StackSpider(Spider):
name = "stack"
allowed_domains = ["pokepedia.fr"]
start_urls = [
"http://www.pokepedia.fr/Pikachu",
]
def unicodize(seg):
if re.match(r'\\u[0-9a-f]{4}', seg):
return seg.decode('unicode-escape')
return seg.decode('utf-8')
def parse(self, response):
pokemon = Selector(response).xpath('//*[@id="mw-content-text"]/table[2]')
for question in pokemon:
item = StackItem()
item['title'] = question.xpath(
'//*[@id="mw-content-text"]/table[2]/tbody/tr[1]/th[2]/text()').extract()[0]
yield item
I want to extract the name of the pokemon in the page but when I use :
scrapy crawl stack -o items.json -t json
My Json output :
[
In my console i've this error :
IndexError : list index out of range
I've followed this tuto : <https://realpython.com/blog/python/web-scraping-
with-scrapy-and-mongodb/>
Answer: Try this
question.xpath( '//*[@id="mw-content-text"]/table[2]/tr[1]/th[2]/text()').extract()[0]
The browser adds the _tbody_ tags. They are not in the original HTML, that's
why scrapy returns an empty file.
PS: you might want to consider using
scrapy shell URL
and then using
response.xpath('...YOUR SELECTOR..')
for debugging and testing.
|
How to increase the model accuracy of logistic regression in Scikit python?
Question: I am trying to predict the admit variable with predictors such as gre,gpa and
ranks.But the prediction accuracy is very less(0.66).The dataset is given
below. <https://gist.github.com/abyalias/3de80ab7fb93dcecc565cee21bd9501a>
Please find the codes below:
In[73]: data.head(20)
Out[73]:
admit gre gpa rank_2 rank_3 rank_4
0 0 380 3.61 0.0 1.0 0.0
1 1 660 3.67 0.0 1.0 0.0
2 1 800 4.00 0.0 0.0 0.0
3 1 640 3.19 0.0 0.0 1.0
4 0 520 2.93 0.0 0.0 1.0
5 1 760 3.00 1.0 0.0 0.0
6 1 560 2.98 0.0 0.0 0.0
y = data['admit']
x = data[data.columns[1:]]
from sklearn.cross_validation import train_test_split
xtrain,xtest,ytrain,ytest = train_test_split(x,y,random_state=2)
ytrain=np.ravel(ytrain)
#modelling
clf = LogisticRegression(penalty='l2')
clf.fit(xtrain,ytrain)
ypred_train = clf.predict(xtrain)
ypred_test = clf.predict(xtest)
In[38]: #checking the classification accuracy
accuracy_score(ytrain,ypred_train)
Out[38]: 0.70333333333333337
In[39]: accuracy_score(ytest,ypred_test)
Out[39]: 0.66000000000000003
In[78]: #confusion metrix...
from sklearn.metrics import confusion_matrix
confusion_matrix(ytest,ypred)
Out[78]:
array([[62, 1],
[33, 4]])
The ones are wrongly predicting.How to increase the model accuracy?
Answer: Since machine learning is more about experimenting with the features and the
models, there is no correct answer to your question. Some of my suggestions to
you would be:
**1\. Feature Scaling and/or Normalization** \- Check the scales of your _gre_
and _gpa_ features. They differ on 2 orders of magnitude. Therefore, your
_gre_ feature will end up dominating the others in a classifier like Logistic
Regression. You can normalize all your features to the same scale before
putting them in a machine learning model.[This](http://scikit-
learn.org/stable/modules/preprocessing.html) is a good guide on the various
feature scaling and normalization classes available in scikit-learn.
**2\. Class Imbalance** \- Look for class imbalance in your data. Since you
are working with admit/reject data, then the number of rejects would be
significantly higher than the admits. Most classifiers in SkLearn including
[`LogisticRegression`](http://scikit-
learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
have a `class_weight` parameter. Setting that to `balanced` might also work
well in case of a class imbalance.
**3\. Optimize other scores** \- You can optimize on other metrics also such
as _Log Loss_ and _F1-Score_. The F1-Score could be useful, in case of class
imbalance. [This](http://scikit-
learn.org/stable/modules/model_evaluation.html) is a good guide that talks
more about scoring.
**4\. Hyperparameter Tuning - Grid Search** \- You can improve your accuracy
by performing a Grid Search to tune the hyperparameters of your model. For
example in case of `LogisticRegression`, the parameter `C` is a
hyperparameter. Also, you should avoid using the test data during grid search.
Instead perform cross validation. Use your test data only to report the final
numbers for your final model. Please note that GridSearch should be done for
all models that you try because then only you will be able to tell what is the
best you can get from each model. Scikit-Learn provides the
[`GridSearchCV`](http://scikit-
learn.org/stable/modules/generated/sklearn.grid_search.GridSearchCV.html)
class for this. [This](http://scikit-
learn.org/stable/modules/grid_search.html) article is also a good starting
point.
**5\. Explore more classifiers** \- Logistic Regression learns a linear
decision surface that separates your classes. It could be possible that your 2
classes may not be linearly separable. In such a case you might need to look
at other classifiers such [**Support Vector Machines**](http://scikit-
learn.org/stable/modules/generated/sklearn.svm.SVC.html) which are able to
learn more complex decision boundaries. You can also start looking at Tree-
Based classifiers such as [**Decision Trees**](http://scikit-
learn.org/stable/modules/tree.html) which can learn rules from your data.
Think of them as a series of If-Else rules which the algorithm automatically
learns from the data. Often, it is difficult to get the right [Bias-Variance
Tradeoff](http://scott.fortmann-roe.com/docs/BiasVariance.html) with Decision
Trees, so I would recommend you to look at [Random Forests](http://scikit-
learn.org/stable/modules/ensemble.html#forest) if you have a considerable
amount of data.
**6\. Error Analysis** \- For each of your models, go back and look at the
cases where they are failing. You might end up finding that some of your
models work well on one part of the parameter space while others work better
on other parts. If this is the case, then [Ensemble Techniques](http://scikit-
learn.org/stable/modules/ensemble.html) such as
[`VotingClassifier`](http://scikit-
learn.org/stable/modules/ensemble.html#votingclassifier) techniques often give
the best results. Models that win Kaggle competitions are many times ensemble
models.
**7\. More Features** _ If all of this fails, then that means that you should
start looking for more features.
Hope that helps!
|
python-social-auth with Django: ImportError: No module named 'openid.association'
Question: I am trying to use `python-social-auth` with Django 1.9 and Python 3. As far
as I can tell, I have installed all the necessary requirements, and have all
the required settings in my `settings.py`. However, when I try to run
migrations, or run the Django dev server, I get the following error:
ImportError: No module named 'openid.association'
The full traceback is as follows:
Unhandled exception in thread started by <function check_errors.<locals>.wrapper at 0x7f6fe7ea5a60>
Traceback (most recent call last):
File "/home/ethan/.virtualenvs/flywithme/lib/python3.5/site-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/home/ethan/.virtualenvs/flywithme/lib/python3.5/site-packages/django/core/management/commands/runserver.py", line 109, in inner_run
autoreload.raise_last_exception()
File "/home/ethan/.virtualenvs/flywithme/lib/python3.5/site-packages/django/utils/autoreload.py", line 249, in raise_last_exception
six.reraise(*_exception)
File "/home/ethan/.virtualenvs/flywithme/lib/python3.5/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/home/ethan/.virtualenvs/flywithme/lib/python3.5/site-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/home/ethan/.virtualenvs/flywithme/lib/python3.5/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/ethan/.virtualenvs/flywithme/lib/python3.5/site-packages/django/apps/registry.py", line 108, in populate
app_config.import_models(all_models)
File "/home/ethan/.virtualenvs/flywithme/lib/python3.5/site-packages/django/apps/config.py", line 202, in import_models
self.models_module = import_module(models_module_name)
File "/home/ethan/.virtualenvs/flywithme/lib/python3.5/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 662, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "/home/ethan/.virtualenvs/flywithme/lib/python3.5/site-packages/social/apps/django_app/default/models.py", line 9, in <module>
from social.storage.django_orm import DjangoUserMixin, \
File "/home/ethan/.virtualenvs/flywithme/lib/python3.5/site-packages/social/storage/django_orm.py", line 5, in <module>
from social.storage.base import UserMixin, AssociationMixin, NonceMixin, \
File "/home/ethan/.virtualenvs/flywithme/lib/python3.5/site-packages/social/storage/base.py", line 12, in <module>
from openid.association import Association as OpenIdAssociation
ImportError: No module named 'openid.association'
One suggest I found in my searching was to get rid of `python-openid` and
install `python3-openid`. This didn't work for me. I have also seen a number
of posts related to `ImportError`s and `python-social-auth`, but have not been
able to come up with a solution that works for me. I assume that I have
misconfigured/failed to configure something, but I am not sure what. What am I
doing wrong here?
Answer: I just had the exact same problem (Python 3.5, Django 1.9.8) and could
actually resolve the issue by uninstalling _all_ versions of python-openid and
afterwards removing _and reinstalling_ python-social-auth.
Seemingly something went wrong when installing PSA whilst python-openid was
still available. So make sure to **remove both versions** , so python-openid
and python3-openid, and then **remove PSA as well and try reinstalling it**.
In the log, you should now see python3-openid getting installed alongside PSA.
After doing so I could apply all migrations without a problem.
If that does not work for you resp. does not install python3-openid, you could
also try installing PSA from git using `pip install
git+https://github.com/omab/python-social-auth.git`. Apparently that helped a
person who ran into a similar issue a year ago
(<https://github.com/omab/python-social-auth/issues/588>).
Hope it helps!
|
How can I disable ExtDeprecationWarning for external libs in flask
Question: When I run my script, I get this output:
/app/venv/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.sqlalchemy is deprecated, use flask_sqlalchemy instead.
.format(x=modname), ExtDeprecationWarning
/app/venv/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.marshmallow is deprecated, use flask_marshmallow instead.
.format(x=modname), ExtDeprecationWarning
/app/venv/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
/app/venv/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.restful is deprecated, use flask_restful instead.
.format(x=modname), ExtDeprecationWarning
/app/venv/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.restful.fields is deprecated, use flask_restful.fields instead.
.format(x=modname), ExtDeprecationWarning
/app/venv/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.restful.reqparse is deprecated, use flask_restful.reqparse instead.
.format(x=modname), ExtDeprecationWarning
/app/venv/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.restplus is deprecated, use flask_restplus instead.
.format(x=modname), ExtDeprecationWarning
/app/venv/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.restful.representations is deprecated, use flask_restful.representations instead.
.format(x=modname), ExtDeprecationWarning
/app/venv/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.script is deprecated, use flask_script instead.
.format(x=modname), ExtDeprecationWarning
/app/venv/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.migrate is deprecated, use flask_migrate instead.
.format(x=modname), ExtDeprecationWarning
I don't really care about this, because external libs are causing this. I
can't update these libs as I don't own them and I see for several there are
pull requests pending.
How can I get some peace and quiet?
Answer: First, you _should_ care about this because the packages you're using aren't
up to date. Report a bug that they should switch to using direct import names,
such as `flask_sqlalchemy`, rather than the `flask.ext` import hook.
Add a
[`warnings.simplefilter`](https://docs.python.org/3.5/library/warnings.html)
line to filter out these warnings. You can place it wherever you're
configuring your application, before performing any imports that would raise
the warning.
import warnings
from flask.exthook import ExtDeprecationWarning
warnings.simplefilter('ignore', ExtDeprecationWarning)
|
Sort JSON dictionaries using datetime format not consistent
Question: I have JSON file (post responses from an API) - I need to sort the
dictionaries by a certain key in order to parse the JSON file in chronological
order. After studying the data, I can sort it by the date format in metadata
or by the number sequences of the S5CV[0156]P0.xml
One text example that you can load in JSON here -
<http://pastebin.com/0NS5BiDk>
I have written 2 codes to sort the list of objects by a certain key. The 1st
one sorts by the 'text' of the xml. The 2nd one by [metadata][0][value].
The 1st one works, but a few of the XMLs, even if they are higher in number,
actually have documents inside older than I expected.
For the 2nd code the format of date is not consistent and sometimes the value
is not present at all. I am struggling to extract the datetime format in a
consistent way. The second one also gives me an error, but I cannot figure out
why - string indices must be integers.
# 1st code (it works but not ideal)
# load post response r1 in json (python 3.5)
j=r1.json()
# iterate through dictionaries and sort by the 4 num of xml (ex. 0156)
list = []
for row in j["tree"]["children"][0]["children"]:
list.append(row)
newlist = sorted(list, key=lambda k: k['text'][-9:])
print(newlist)
# 2nd code. I need something to make consistent datetime,
# except missing values and solve the list index error
list = []
for row in j["tree"]["children"][0]["children"]:
list.append(row)
# extract the last 3 blocks of characters from the [metadata][0][value]
# usually are like this "7th april, 1922." and trasform in datatime format
# using dparser.parse
def date(key):
return dparser.parse((' '.join(key.split(' ')[-3:])),fuzzy=True)
def order(slist):
try:
return sorted(slist, key=lambda k: k[date(["metadata"][0]["value"])])
except ValueError:
return 0
print(order(list))
#update
orig_list = j["tree"]["children"][0]["children"]
cleaned_list = sorted((x for x in orig_list if extract_date(x) != DEFAULT_DATE),
key=extract_date)
first_date = extract_date(cleaned_list[0])
if first_date != DEFAULT_DATE: # valid date found?
cleaned_list [0] ['date'] = first_date
print(first_date)
middle_date = extract_date(cleaned_list[len(cleaned_list)//2])
if middle_date != DEFAULT_DATE: # valid date found?
cleaned_list [0] ['date'] = middle_date
print(middle_date)
last_date = extract_date(cleaned_list [-1])
if last_date != DEFAULT_DATE: # valid date found?
cleaned_list [0] ['date'] = last_date
print(last_date)
Answer: Clearly you can't use the .xml filenames to sort the data if it's unreliable,
so the most promising strategy seems to be what you're attempting to do in the
2nd code.
When I mentioned needing a datetime to sort the items in my comments to your
other question, I literally meant something like
[`datetime.date`](https://docs.python.org/3/library/datetime.html#date-
objects) instances, not strings like `"28th july, 1933"`, which wouldn't
provide the proper ordering needed since they would be compared
lexicographically with one another, not numerically like `datetime.date`s.
Here's something that seems to work. It uses the `re` module to search for the
date pattern in the strings that usually contain them (those with a `"name"`
associated with the value `"Comprising period from"`). If there's more than
one date match in the string, it uses the last one. This is then converted
into a `date` instance and returned as the value to key on.
Since some of the items don't have valid date strings, a default one is
substituted for sorting purposes. In the code below, a earliest valid date is
used as the default—which makes all items with date problems appear at the
beginning of the sorted list. Any items following them should be in the proper
order.
Not sure what you should do about items lacking date information—if it isn't
there, your only options are to guess a value, ignore them, or consider it an
error.
# v3.2.1
import datetime
import json
import re
# default date when one isn't found
DEFAULT_DATE = datetime.date(1, 1, datetime.MINYEAR) # 01/01/0001
MONTHS = ('january february march april may june july august september october'
' november december'.split())
# dictionary to map month names to numeric values 1-12
MONTH_TO_ORDINAL = dict( zip(MONTHS, range(1, 13)) )
DMY_DATE_REGEX = (r'(3[01]|[12][0-9]|[1-9])\s*(?:st|nd|rd|th)?\s*'
+ r'(' + '|'.join(MONTHS) + ')(?:[,.])*\s*'
+ r'([0-9]{4})')
MDY_DATE_REGEX = (r'(' + '|'.join(MONTHS) + ')\s+'
+ r'(3[01]|[12][0-9]|[1-9])\s*(?:st|nd|rd|th)?,\s*'
+ r'([0-9]{4})')
DMY_DATE = re.compile(DMY_DATE_REGEX, re.IGNORECASE)
MDY_DATE = re.compile(MDY_DATE_REGEX, re.IGNORECASE)
def extract_date(item):
metadata0 = item["metadata"][0] # check only first item in metadata list
if metadata0.get("name") != "Comprising period from":
return DEFAULT_DATE
else:
value = metadata0.get("value", "")
matches = DMY_DATE.findall(value) # try dmy pattern (most common)
if matches:
day, month, year = matches[-1] # use last match if more than one
else:
matches = MDY_DATE.findall(value) # try mdy pattern...
if matches:
month, day, year = matches[-1] # use last match if more than one
else:
print('warning: date patterns not found in "{}"'.format(value))
return DEFAULT_DATE
# convert strings found into numerical values
year, month, day = int(year), MONTH_TO_ORDINAL[month.lower()], int(day)
return datetime.date(year, month, day)
# test files: 'json_sample.txt', 'india_congress.txt', 'olympic_games.txt'
with open('json_sample.txt', 'r') as f:
j = json.load(f)
orig_list = j["tree"]["children"][0]["children"]
sorted_list = sorted(orig_list, key=extract_date)
for item in sorted_list:
print(json.dumps(item, indent=4))
To answer your latest follow-on questions, you could leave out all the items
in the list that don't have recognizable dates by using `extract_date()` to
filter them out beforehand in a [generator
expression](https://docs.python.org/3/howto/functional.html#generator-
expressions-and-list-comprehensions) with something like this:
# to obtain a list containing only entries with a parsable date
cleaned_list = sorted((x for x in orig_list if extract_date(x) != DEFAULT_DATE),
key=extract_date)
Once you have a sorted list of items that all have a valid date, you can do
things like the following, again reusing the `extract_date()` function:
# extract and display dates of items in cleaned list
print('first date: {}'.format(extract_date(cleaned_list[0])))
print('middle date: {}'.format(extract_date(cleaned_list[len(cleaned_list)//2])))
print('last date: {}'.format(extract_date(cleaned_list[-1])))
Calling `extract_date()` on the same item multiple times is somewhat
inefficient. To avoid that you could easily add the `datetime.date` value it
returns to the object on-the-fly since it's a dictionary, and then just refer
to it as often as needed with very little additional overhead:
# add extracted datetime.date entry to a list item[i] if a valid one was found
date = extract_date(some_list[i])
if date != DEFAULT_DATE: # valid date found?
some_list[i]['date'] = date # save by adding it to object
This effectively caches the extracted date by storing it in the item itself.
Afterwards, the `datetime.date` value can simply be referenced with
`some_list[i]['date']`.
As a concrete example, consider this revised example of displaying the datesof
the first, middle, and last objects:
# display dates of items in cleaned list
print('first date: {}'.format(cleaned_list[0]['date']))
middle = len(cleaned_list)//2
print('middle date: {}'.format(cleaned_list[middle]['date']))
print('last date: {}'.format(cleaned_list[-1]['date']))
|
Flask-edits: AttributeError: 'TokenStream' object has no attribute 'next'
Question: I am trying to test the flask-edits package
(<https://github.com/nathancahill/Flask-Edits>)
Can anyone help with this error: AttributeError: 'TokenStream' object has no
attribute 'next'
@app.route('/')
def hello_world():
return render_template('test.html')
if __name__ == '__main__':
app.run(debug=True)
The template:
<!DOCTYPE html>
<html>
<head>
<title>Haldane</title>
</head>
<body>
<p>Test</p>
{% editable 'Section name' %}
Python is a programming language that lets you work quickly and integrate systems more effectively.
{% endeditable %}
</body>
</html>
The error occurs here:
"""Jinja extensions to mark sections as editable
"""
import hashlib
from collections import OrderedDict
from jinja2.nodes import Output, Template, TemplateData
from jinja2.ext import Extension
class EditableExtension(Extension):
tags = set(['editable'])
def parse(self, parser):
_db = self.environment.edits
# Skip begining node
parser.stream.next()
The error:
File "/anaconda/lib/python3.5/site-packages/flask_edits/editable.py", line 18, in parse
parser.stream.next()
AttributeError: 'TokenStream' object has no attribute 'next'
Gist including the code:
<https://gist.github.com/archienorman11/98993d66fc30283ba113f8a4f2b39669>
Answer: Assuming Flask-Edits wants to support Python 3, this is a bug in Flask-Edits.
It should use the builtin
[`next`](https://docs.python.org/3/library/functions.html#next) function to
advance iterators: `next(parser.stream)`. The method on the iterator changed
from `next` to `__next__` between Python 2 and 3. The builtin function works
for both.
|
Issue with requests module in python for AWS Lambda
Question: I am writing a lambda function with an intent that uses requests to pull
information from a _Wolfram_ CloudObject. Here is the relevant part of the
code:
from __future__ import print_function
import requests
.
.
.
def on_intent(intent_request, session):
print("on_intent requestID=" + intent_request['requestID'] + ", sessionID=" + session['sessionId'])
intent = intent_request['intent']
intent_name = intent_request['intent']['name']
# Dispatch to skill's intent handlers
if intent_name == "GetEvent":
return call_wolfram(intent, session)
elif intent_name == "AMAZON.HelpIntent":
return get_welcome_response()
elif intent_name == "AMAZON.CancelIntent" or intent_name == "AMAZON.StopIntent":
return handle_session_end_request()
else:
raise ValueError("Invalid intent")
.
.
.
# Functions that control skill's behavior
def call_wolfram(intent, session):
url = "https://path-to-cloud-object"
query = {'string1': 'VESSEL', 'string2': 'EVENT', 'RelString': 'TRIGGERED'}
r = requests.get(url, params=query)
session_attributes = {"r_result": r}
speech_output = "Congrats, dummy! It worked"
card_title = "Query"
should_end_session = True
return build_response({}, build_speechlet_response(card_title, speech_output, None, should_end_session)
Most of the rest of the code follows the `MyColorIs` example template given by
AWS with minimal changes. When the lambda function is tested, the error
message gives me a json file with stackTrace; I've narrowed down the issue to
the lines of code `r = requests.get()` and `session_attributes = {}`, because
when commented out, the lambda execution is successful. This is my first
project with python, so I am new to the language as well. For good measure,
here is the error message I get after lambda executes:
* * *
{
"stackTrace": [
[
"/var/task/query_lambda.py",
27,
"lambda_handler",
"return on_intent(event['request'], event['session'])"
],
[
"/var/task/query_lambda.py",
65,
"on_intent",
"return call_wolfram(intent, session)"
],
[
"/var/task/query_lambda.py",
113,
"call_wolfram",
"r = requests.get(url, params=query)"
],
[
"/var/task/requests/api.py",
71,
"get",
"return request('get', url, params=params, **kwargs)"
],
[
"/var/task/requests/api.py",
57,
"request",
"return session.request(method=method, url=url, **kwargs)"
],
[
"/var/task/requests/sessions.py",
475,
"request",
"resp = self.send(prep, **send_kwargs)"
],
[
"/var/task/requests/sessions.py",
585,
"send",
"r = adapter.send(request, **kwargs)"
],
[
"/var/task/requests/adapters.py",
477,
"send",
"raise SSLError(e, request=request)"
]
],
"errorType": "SSLError",
"errorMessage": "[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)"
}
Answer: You can read more about requests' use of certificates here:
<http://docs.python-requests.org/en/master/user/advanced/>
There are two ways to get around this problem:
* Find the certificate that you are missing, and get it installed on the system that's failing.
* Ignore the certificates altogether by passing `verify=False` to `requests.get`:
`r = requests.get(url, params=query, verify=False)`
The second method is quicker, but less secure; that may or may not matter for
your intended use.
|
How to customize pybusyinfo window in (windows OS) to make it appear at top corner of window and the other formatting options?
Question: I am writing a python script to get the climate conditions in particular area
every 30 minutes and give a popup notification.
This code gives popup at the center of the screen which is annoying.I wish to
have the popup similar to notify-send in linux[which appears at right corner]
and the message is aligned at the center of pybusyinfo window ,and how to
align it to right?
Any change of code in pybusyinfo would be helpful.
import requests
from bs4 import BeautifulSoup
import datetime,time
import wx
import wx.lib.agw.pybusyinfo as PBI
now = datetime.datetime.now()
hour=now.hour
# gets current time
def main():
chrome_path = 'C:/Program Files (x86)/Google/Chrome/Application/chrome.exe %s'
g_link = 'http://www.accuweather.com/en/in/tambaram/190794/hourly-weather-forecast/190794?hour='+str(hour)
g_res= requests.get(g_link)
g_links= BeautifulSoup(g_res.text,"lxml")
if hour > 18 :
temp = g_links.find('td', {'class' :'first-col bg-s'}).text
climate = g_links.find('td', {'class' :'night bg-s icon first-col'}).text
else :
temp = g_links.find('td', {'class' :'first-col bg-c'}).text
climate = g_links.find('td', {'class' :'day bg-c icon first-col'}).text
for loc in g_links.find_all('h1'):
location=loc.text
info = location +' ' + str(now.hour)+':'+str(now.minute)
#print 'Temp : '+temp
#print climate
def showmsg():
app = wx.App(redirect=False)
title = 'Weather'
msg= info+'\n'+temp + '\n'+ climate
d = PBI.PyBusyInfo(msg,title=title)
return d
if __name__ == '__main__':
d = showmsg()
time.sleep(6)
while True:
main()
time.sleep(1800)
Answer:
screen_size = wx.DisplaySize()
d_size = d._infoFrame.GetSize()
pos_x = screen_size[0] - d_size[0] # Right - popup.width (aligned to right side)
pos_y = screen_size[1] - d_size[1] # Bottom - popup.height (aligned to bottom)
d.SetPosition((pos_x,pos_t))
d.Update() # force redraw ... (otherwise your "work " will block redraw)
to align the text you will need to subclass PyBusyFrame
class MyPyBusyFrame(PBI.PyBusyFrame):
def OnPaint(self, event):
"""
Handles the ``wx.EVT_PAINT`` event for L{PyInfoFrame}.
:param `event`: a `wx.PaintEvent` to be processed.
"""
panel = event.GetEventObject()
dc = wx.BufferedPaintDC(panel)
dc.Clear()
# Fill the background with a gradient shading
startColour = wx.SystemSettings_GetColour(wx.SYS_COLOUR_ACTIVECAPTION)
endColour = wx.WHITE
rect = panel.GetRect()
dc.GradientFillLinear(rect, startColour, endColour, wx.SOUTH)
# Draw the label
font = wx.SystemSettings_GetFont(wx.SYS_DEFAULT_GUI_FONT)
dc.SetFont(font)
# Draw the message
rect2 = wx.Rect(*rect)
rect2.height += 20
#############################################
# CHANGE ALIGNMENT HERE
#############################################
dc.DrawLabel(self._message, rect2, alignment=wx.ALIGN_CENTER|wx.ALIGN_CENTER)
# Draw the top title
font.SetWeight(wx.BOLD)
dc.SetFont(font)
dc.SetPen(wx.Pen(wx.SystemSettings_GetColour(wx.SYS_COLOUR_CAPTIONTEXT)))
dc.SetTextForeground(wx.SystemSettings_GetColour(wx.SYS_COLOUR_CAPTIONTEXT))
if self._icon.IsOk():
iconWidth, iconHeight = self._icon.GetWidth(), self._icon.GetHeight()
dummy, textHeight = dc.GetTextExtent(self._title)
textXPos, textYPos = iconWidth + 10, (iconHeight-textHeight)/2
dc.DrawBitmap(self._icon, 5, 5, True)
else:
textXPos, textYPos = 5, 0
dc.DrawText(self._title, textXPos, textYPos+5)
dc.DrawLine(5, 25, rect.width-5, 25)
size = self.GetSize()
dc.SetPen(wx.Pen(startColour, 1))
dc.SetBrush(wx.TRANSPARENT_BRUSH)
dc.DrawRoundedRectangle(0, 0, size.x, size.y-1, 12)
then you would have to create your own BusyInfo function that instanciated
your frame and returns it (see
<https://github.com/wxWidgets/wxPython/blob/master/wx/lib/agw/pybusyinfo.py#L251>
)
|
Cannot run pyspark in Jupyter
Question: I have Windows 10 and have installed spark following the instructions from:
<https://hernandezpaul.wordpress.com/2016/01/24/apache-spark-installation-on-
windows-10/>
Now I open my jupyter notebook, and type the following:
import os
import sys
# Path for spark source folder
os.environ['SPARK_HOME']="c:\\Spark"
# Append pyspark to Python Path
sys.path.append("C:\\Spark")
sys.path.append("C:\\Spark\\python")
sys.path.append("C:\\Spark\\python\\lib")
sys.path.append("C:\\Spark\\python\\lib\\py4j-0.9-src.zip")
from pyspark import SparkContext
from pyspark import SparkConf
and seems that cannoot load the accumulators library, as I get the following
error:
--------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-54-68cce399fff2> in <module>()
12 sys.path.append("C:\\Spark\\python\\pyspark")
13
---> 14 from pyspark import SparkContext
15 from pyspark import SparkConf
16
C:\Spark\python\pyspark\__init__.py in <module>()
39
40 from pyspark.conf import SparkConf
---> 41 from pyspark.context import SparkContext
42 from pyspark.rdd import RDD
43 from pyspark.files import SparkFiles
C:\Spark\python\pyspark\context.py in <module>()
26 from tempfile import NamedTemporaryFile
27
---> 28 from pyspark import accumulators
29 from pyspark.accumulators import Accumulator
30 from pyspark.broadcast import Broadcast
ImportError: cannot import name accumulators
These is how my sys.path looks like, which I assume are the correct folders:
['',
'C:\\Anaconda2\\python27.zip',
'C:\\Anaconda2\\DLLs',
'C:\\Anaconda2\\lib',
'C:\\Anaconda2\\lib\\plat-win',
'C:\\Anaconda2\\lib\\lib-tk',
'C:\\Anaconda2',
'c:\\anaconda2\\lib\\site-packages\\sphinx-1.3.5-py2.7.egg',
'c:\\anaconda2\\lib\\site-packages\\setuptools-20.3-py2.7.egg',
'C:\\Anaconda2\\lib\\site-packages',
'C:\\Anaconda2\\lib\\site-packages\\win32',
'C:\\Anaconda2\\lib\\site-packages\\win32\\lib',
'C:\\Anaconda2\\lib\\site-packages\\Pythonwin',
'C:\\Anaconda2\\lib\\site-packages\\IPython\\extensions',
'C:\\Users\\Manuel\\.ipython',
'C:\\Spark',
'C:\\Spark\\python',
'C:\\Spark\\python\\lib',
'C:\\Spark\\python\\lib\\py4j-0.9-src.zip',
'C:\\Spark\\python\\pyspark']
Any help will be much appreciated.
Thanks!
Answer: This has been resolved by installing winutils.exe as described in
[Resolving Spark 1.6.0 "java.lang.NullPointerException, not
found](https://blogs.msdn.microsoft.com/arsen/2016/02/09/resolving-
spark-1-6-0-java-lang-nullpointerexception-not-found-value-sqlcontext-error-
when-running-spark-shell-on-windows-10-64-bit/)
|
How to keep trying to establish connection in Python
Question: If the server is not up when I try to run the following code, I just get a
Connection refused error.
How can I make the sender below to keep trying to establish connection and
perhaps sending until the remote server is indeed up and the connection is
successfully established?
HOST = client_ip # The remote host
PORT = port
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.sendall(msg)
if expect_receive:
received_data = s.recv(1024)
print received_data
#client has started
s.close()
return
Answer: How about brute force? Something like this
import time
while 1:
HOST = client_ip # The remote host
PORT = port
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
s.connect((HOST, PORT))
except:
print("FAILED. Sleep briefly & try again")
time.sleep(10)
continue
s.sendall(msg)
if expect_receive:
received_data = s.recv(1024)
print received_data
#client has started
s.close()
return
|
Converting Google Analytics Reporting API V4 request results to csv with Python
Question: I'm trying to create a nicely formatted csv file with Python from a Google
Analytics Reporting API V4 request results.
The setup is using the provided example "Hello Analytics Reporting API V4."
<https://developers.google.com/analytics/devguides/reporting/core/v4/quickstart/service-
py#3_setup_the_sample>
The following results are as expected:
ga:date: 20160601
ga:sessions: 19802
ga:pageviews: 53369
ga:users: 17656
ga:date: 20160602
ga:sessions: 33718
ga:pageviews: 71857
ga:users: 30266
What is needed would be something like this:
ga:date: ga:sessions: ga:pageviews: ga:users:
20160601 19802 53369 17656
20160602 33718 71857 30266
I'm sure there is a straightforward solution with Python for this one.
Answer: Not sure it is very straightforward but it works.
import sys
from collections import OrderedDict
s="""ga:date: 20160601
ga:sessions: 19802
ga:pageviews: 53369
ga:users: 17656
ga:date: 20160602
ga:sessions: 33718
ga:pageviews: 71857
ga:users: 30266
"""
d = OrderedDict()
for l in s.splitlines():
k,v = l.split(" ")
if k not in d:
d[k] = []
d[k].append(v)
nb_values = len(d[k]) # any will do
sys.stdout.write(" ".join(d.keys()))
sys.stdout.write("\n")
for i in range(nb_values):
z = [d[k][i] for k in d.keys()]
sys.stdout.write(" ".join(z))
sys.stdout.write("\n")
Result:
ga:date: ga:sessions: ga:pageviews: ga:users:
20160601 19802 53369 17656
20160602 33718 71857 30266
|
Python Removing Columns
Question: I am trying to remove the last two columns from my data frame by using Python.
The issue is there are cells with values in the last two columns that we don't
need, and those columns don't have headers.
Here's the code I wrote, but I'm really new to Python, and don't know how to
take my original data and remove the last two columns.
import csv
with open("Filename","rb") as source:
rdr= csv.reader( source )
with open("Filename","wb") as result:
wrt= csv.writer ( result )
for r in rdr:
wrt.writerow( (r[0], r[1], r[2], r[3], r[4], r[5], r[6], r[7], r[8], r[9], r[10], r[11]) )
Thanks!
Answer: The proper Pythonic way to perform something like this is through _slicing_ :
r[start:stop(:step)]
`start` and `stop` are indexes, where positive indexes are counted from the
front and negative is counted from the end. Blank `start`s and `stop`s are
treated as the beginning and the end of `r` respectively. `step` is an
optional parameter that I'll explain later. Any slice returns an array, which
you can perform additional operations on or just return immediately.
In order to remove the last two values, you can use the slice
r[:-2]
### Additional fun with `step`
Now that `step` parameter. It allows you to pick every `step`th value from the
selected slice. With an array of, say, `r = [0,1,2,3,4,5,6,7,8,9,10]` you can
pick every other number starting with the first (all of the even numbers) with
the slice `r[::2]`. In order to get results in reverse order, you can make the
step negative:
> r = [0,1,2,3,4,5,6,7,8,9,10]
[0,1,2,3,4,5,6,7,8,9,10]
> r[::-1]
[10,9,8,7,6,5,4,3,2,1,0]
|
Installing opencv in python
Question: I'm having some trouble installing OpenCV. I have been using Anaconda, and I
copied the `cv2.pyd` file into the `...\Lib\site-packages` folder. When I get
type `import cv2` into Python I get this error:
`Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: DLL load failed: The specified module could not be found.`
I've also tried a pip install via command prompt: `C:\Users\SCD>pip install
cv2` results: `
Collecting cv2
Could not find a version that satisfies the requirement cv2 (from versions: )
No matching distribution found for cv2`
Can someone help?
Answer: I think the proper input should be 'import cv2' not 'install cv2'. After this
print cv2.**version** should show you that it installed properly. Hope this
helps.
|
Python quit unexpectedly, Segmentation fault: 11
Question: I have installed python 2.7.12. I used "pip install pulp" to install pulp
package. My problem is that "import pulp" gives me the following error. How
can I solve this problem. Let me know if you need something else to debug. I
have a Mackbook pro with El Capitan 10.11.5 os.
import pulp
Segmentation fault: 11
Process: Python [1707]
Path: /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
Identifier: Python
Version: 2.7.12 (2.7.12)
Code Type: X86-64 (Native)
Parent Process: bash [1695]
Responsible: Terminal [1693]
User ID: 501
Date/Time: 2016-06-28 12:43:05.355 -0700
OS Version: Mac OS X 10.11.5 (15F34)
Report Version: 11
Anonymous UUID: BAE25C51-36E8-EE34-FFC5-11B186F972FB
Sleep/Wake UUID: 18A10F82-634B-4BE9-9373-AE13BA40FC4C
Time Awake Since Boot: 32000 seconds
Time Since Wake: 2900 seconds
System Integrity Protection: enabled
Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000008
VM Regions Near 0x8:
-->
__TEXT 0000000100000000-0000000100001000 [ 4K] r-x/rwx SM=COW /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 org.python.python 0x0000000103427889 PyImport_AddModule + 24
1 gurobipy.so 0x00000001015a4327 __Pyx_FetchCommonType + 23
Thread 0 crashed with X86 Thread State (64-bit):
rax: 0x0000000000000000 rbx: 0x00000001001a74c0 rcx: 0x0000000000000001 rdx: 0x0000000000000003
rdi: 0x000000010169cf11 rsi: 0x0000000000000010 rbp: 0x00007fff5fbfd230 rsp: 0x00007fff5fbfd210
r8: 0x0000000101712030 r9: 0x0000000000000000 r10: 0x0000000000001002 r11: 0xfffffffffe2b2279
r12: 0x0000000000000002 r13: 0x0000000000000000 r14: 0x000000010169cf11 r15: 0x00007fff7b364070
rip: 0x0000000103427889 rfl: 0x0000000000010206 cr2: 0x0000000000000008
Logical CPU: 0
Error Code: 0x00000004
Trap Number: 14
Binary Images:
0x100000000 - 0x100000fff +org.python.python (2.7.12 - 2.7.12) /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
0x100003000 - 0x100175ff7 +org.python.python (2.7.12, [c] 2001-2016 Python Software Foundation. - 2.7.12) <831DC7C1-B842-23D7-69C8-73A7D5E5574C> /Library/Frameworks/Python.framework/Versions/2.7/Python
0x1002fa000 - 0x1002fcff7 +_locale.so (???) <53986AC4-ACA1-2D91-18B0-D82D415A3A23> /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/_locale.so
0x101100000 - 0x101102ff7 +readline.so (???) /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/readline.so
0x101109000 - 0x10115dfe7 +libncursesw.5.dylib (5) <3F0079C0-01C1-3CB8-19CA-F9B49AA4F4A4> /Library/Frameworks/Python.framework/Versions/2.7/lib/libncursesw.5.dylib
0x1011ae000 - 0x1011b1ff7 +strop.so (???) <40B05D3E-1DED-ED4E-6436-D230D8105431> /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/strop.so
0x1011b6000 - 0x1011bdff7 +itertools.so (???) /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/itertools.so
0x1011c8000 - 0x1011caff7 +time.so (???) <0D2E7145-66AD-2D3C-66E4-1B9ADC5C7A59> /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/time.so
0x1011d0000 - 0x1011d3fff +select.so (???) /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/select.so
0x1011d9000 - 0x1011daff7 +fcntl.so (???) <8034A386-5C9A-5BFB-B43D-5CDC23C97208> /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/fcntl.so
0x1011dd000 - 0x1011e1ff7 +_struct.so (???) /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/_struct.so
0x1011e8000 - 0x1011eafef +binascii.so (???) <1B2157C5-3275-D9B6-D20F-076434EFFC93> /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/binascii.so
0x1011ee000 - 0x1011effff +cStringIO.so (???) <4F4158C8-40AC-BD52-5585-747EF47FA628> /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/cStringIO.so
0x1011f4000 - 0x1011f8fff +_collections.so (???) /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/_collections.so
0x101380000 - 0x101384fff +operator.so (???) <198AB272-F92F-F09D-86DB-4DC804FB50E3> /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/operator.so
0x10138b000 - 0x10138cfff +_heapq.so (???) <71697426-5211-AEBF-F5D0-D32452547F9E> /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/_heapq.so
0x101390000 - 0x1013a5ff7 +_io.so (???) <9FA7A71E-88D4-8909-3F82-BD0AC812C3E5> /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/_io.so
0x1013fd000 - 0x101402fe7 +math.so (???) <630A9AF7-CA15-F304-56AE-1BFC4D4E1B20> /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/math.so
0x101409000 - 0x10140afff +_hashlib.so (???) /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/_hashlib.so
0x10140e000 - 0x10140ffff +_random.so (???) <9AD51EBD-D930-95AC-DD39-A396715FD4AC> /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/_random.so
0x101512000 - 0x101528ff7 +_ctypes.so (???) /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/_ctypes.so
0x101579000 - 0x1016a2ff7 +gurobipy.so (0) /Library/Python/2.7/site-packages/gurobipy/gurobipy.so
0x103000000 - 0x10336bff7 +libgurobi65.so (0) /Library/gurobi651/*/libgurobi65.so
0x103390000 - 0x103481ff7 org.python.python (2.7.10 - 2.7.10) <83AFAAA7-BDFA-354D-8A7A-8F40A30ACB91> /System/Library/Frameworks/Python.framework/Versions/2.7/Python
0x7fff63771000 - 0x7fff637a825f dyld (360.22) /usr/lib/dyld
0x7fff888c8000 - 0x7fff888caff7 libquarantine.dylib (80) <0F4169F0-0C84-3A25-B3AE-E47B3586D908> /usr/lib/system/libquarantine.dylib
0x7fff888cb000 - 0x7fff888e7ff7 libsystem_malloc.dylib (67.40.1) <5748E8B2-F81C-34C6-8B13-456213127678> /usr/lib/system/libsystem_malloc.dylib
0x7fff889f0000 - 0x7fff889f0ff7 libkeymgr.dylib (28) <8371CE54-5FDD-3CE9-B3DF-E98C761B6FE0> /usr/lib/system/libkeymgr.dylib
0x7fff889f3000 - 0x7fff88d88fdb com.apple.vImage (8.0 - 8.0) <4BAC9B6F-7482-3580-8787-AB0A5B4D331B> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vImage.framework/Versions/A/vImage
0x7fff88d8f000 - 0x7fff88d96ff7 libcompiler_rt.dylib (62) /usr/lib/system/libcompiler_rt.dylib
0x7fff8a36d000 - 0x7fff8a7e3fff com.apple.CoreFoundation (6.9 - 1258.1) <943A1383-DA6A-3DC0-ABCD-D9AEB3D0D34D> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
0x7fff8a7e4000 - 0x7fff8a7e5fff libsystem_blocks.dylib (65) <1244D9D5-F6AA-35BB-B307-86851C24B8E5> /usr/lib/system/libsystem_blocks.dylib
0x7fff8ad10000 - 0x7fff8ae1ffe7 libvDSP.dylib (563.5) <9AB6CA3C-4F0E-35E6-9184-9DF86E7C3DAD> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvDSP.dylib
0x7fff8b6b5000 - 0x7fff8b6defff libsystem_info.dylib (477.50.4) /usr/lib/system/libsystem_info.dylib
0x7fff8c394000 - 0x7fff8c39dff3 libsystem_notify.dylib (150.40.1) /usr/lib/system/libsystem_notify.dylib
0x7fff8d8a5000 - 0x7fff8dc07f3f libobjc.A.dylib (680) <7489D2D6-1EFD-3414-B18D-2AECCCC90286> /usr/lib/libobjc.A.dylib
0x7fff8e046000 - 0x7fff8e057ff7 libz.1.dylib (61.20.1) /usr/lib/libz.1.dylib
0x7fff8e5f5000 - 0x7fff8e5faff7 libmacho.dylib (875.1) <318264FA-58F1-39D8-8285-1F6254EE410E> /usr/lib/system/libmacho.dylib
0x7fff8f2d2000 - 0x7fff8f2fbfff libc++abi.dylib (125) /usr/lib/libc++abi.dylib
0x7fff8f3c1000 - 0x7fff8f3c2ffb libremovefile.dylib (41) <552EF39E-14D7-363E-9059-4565AC2F894E> /usr/lib/system/libremovefile.dylib
0x7fff8f99e000 - 0x7fff8f9f1ff7 libc++.1.dylib (120.1) <8FC3D139-8055-3498-9AC5-6467CB7F4D14> /usr/lib/libc++.1.dylib
0x7fff8f9f2000 - 0x7fff8f9f2ff7 liblaunch.dylib (765.50.8) <834ED605-5114-3641-AA4D-ECF31B801C50> /usr/lib/system/liblaunch.dylib
0x7fff90e03000 - 0x7fff91010fff libicucore.A.dylib (551.51.3) <5BC80F94-C90D-3175-BD96-FF1DC222EC9C> /usr/lib/libicucore.A.dylib
0x7fff91108000 - 0x7fff91110ffb libsystem_dnssd.dylib (625.50.5) <4D10E12B-59B5-386F-82DA-326F18028F0A> /usr/lib/system/libsystem_dnssd.dylib
0x7fff91843000 - 0x7fff91859ff7 libLinearAlgebra.dylib (1162.2) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLinearAlgebra.dylib
0x7fff92171000 - 0x7fff9256dfff libLAPACK.dylib (1162.2) <987E42B0-5108-3065-87F0-9DF7616A8A06> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLAPACK.dylib
0x7fff92acc000 - 0x7fff92aceff7 libsystem_configuration.dylib (802.40.13) <3DEB7DF9-6804-37E1-BC83-0166882FF0FF> /usr/lib/system/libsystem_configuration.dylib
0x7fff92c73000 - 0x7fff92ca0fff libdispatch.dylib (501.40.12) /usr/lib/system/libdispatch.dylib
0x7fff92ed1000 - 0x7fff92ed1fff com.apple.Accelerate.vecLib (3.10 - vecLib 3.10) <054DFE32-737D-3211-9A14-0FC5E1A880E3> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/vecLib
0x7fff92efc000 - 0x7fff92fe2ff7 libcrypto.0.9.8.dylib (59.40.2) <2486D801-C756-3488-B519-1AA6807E8948> /usr/lib/libcrypto.0.9.8.dylib
0x7fff94381000 - 0x7fff9438cff7 libcommonCrypto.dylib (60075.50.1) <93732261-34B4-3914-B7A2-90A81A182DBA> /usr/lib/system/libcommonCrypto.dylib
0x7fff946fc000 - 0x7fff94742ff7 libauto.dylib (186) <999E610F-41FC-32A3-ADCA-5EC049B65DFB> /usr/lib/libauto.dylib
0x7fff948bb000 - 0x7fff948bcffb libSystem.B.dylib (1226.10.1) /usr/lib/libSystem.B.dylib
0x7fff948bd000 - 0x7fff94a24fff libBLAS.dylib (1162.2) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
0x7fff94a25000 - 0x7fff94a25ff7 libunc.dylib (29) /usr/lib/system/libunc.dylib
0x7fff94a40000 - 0x7fff94a43ffb libdyld.dylib (360.22) /usr/lib/system/libdyld.dylib
0x7fff94a44000 - 0x7fff94a5bff7 libsystem_asl.dylib (323.50.1) <41F8E11F-1BD0-3F1D-BA3A-AA1577ED98A9> /usr/lib/system/libsystem_asl.dylib
0x7fff94f88000 - 0x7fff94f89fff libsystem_secinit.dylib (20) <32B1A8C6-DC84-3F4F-B8CE-9A52B47C3E6B> /usr/lib/system/libsystem_secinit.dylib
0x7fff9511c000 - 0x7fff95125ff7 libsystem_pthread.dylib (138.10.4) <3DD1EF4C-1D1B-3ABF-8CC6-B3B1CEEE9559> /usr/lib/system/libsystem_pthread.dylib
0x7fff951d8000 - 0x7fff951ddff3 libunwind.dylib (35.3) /usr/lib/system/libunwind.dylib
0x7fff9522d000 - 0x7fff9525eff7 libncurses.5.4.dylib (46) /usr/lib/libncurses.5.4.dylib
0x7fff9526a000 - 0x7fff9526afff com.apple.Accelerate (1.10 - Accelerate 1.10) <185EC96A-5AF0-3620-A4ED-4D3654D25B39> /System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate
0x7fff9563d000 - 0x7fff9564dfff libbsm.0.dylib (34) <7E14504C-A8B0-3574-B6EB-5D5FABC72926> /usr/lib/libbsm.0.dylib
0x7fff96061000 - 0x7fff96064fff libsystem_sandbox.dylib (460.50.4) <150A9D3D-F69E-32F7-8C7B-8E72CAAFF7E4> /usr/lib/system/libsystem_sandbox.dylib
0x7fff96065000 - 0x7fff9606dfef libsystem_platform.dylib (74.40.2) <29A905EF-6777-3C33-82B0-6C3A88C4BA15> /usr/lib/system/libsystem_platform.dylib
0x7fff96e16000 - 0x7fff96e8bfff com.apple.framework.IOKit (2.0.2 - 1179.50.2) /System/Library/Frameworks/IOKit.framework/Versions/A/IOKit
0x7fff974bd000 - 0x7fff9754afff libsystem_c.dylib (1082.50.1) /usr/lib/system/libsystem_c.dylib
0x7fff978d9000 - 0x7fff978eaff7 libsystem_trace.dylib (201.10.3) /usr/lib/system/libsystem_trace.dylib
0x7fff97be1000 - 0x7fff97be9fff libsystem_networkextension.dylib (385.40.36) <66095DC7-6539-38F2-95EE-458F15F6D014> /usr/lib/system/libsystem_networkextension.dylib
0x7fff98396000 - 0x7fff9840dfeb libcorecrypto.dylib (335.50.1) /usr/lib/system/libcorecrypto.dylib
0x7fff9849d000 - 0x7fff98503ff7 libsystem_network.dylib (583.50.1) /usr/lib/system/libsystem_network.dylib
0x7fff98c6d000 - 0x7fff98c84ff7 libsystem_coretls.dylib (83.40.5) /usr/lib/system/libsystem_coretls.dylib
0x7fff98cdf000 - 0x7fff98cfdff7 libsystem_kernel.dylib (3248.50.21) <78E54D59-D2B0-3F54-9A4A-0A68D671F253> /usr/lib/system/libsystem_kernel.dylib
0x7fff98d0d000 - 0x7fff98d0dfff libenergytrace.dylib (10.40.1) <0A491CA7-3451-3FD5-999A-58AB4362682B> /usr/lib/libenergytrace.dylib
0x7fff98f6c000 - 0x7fff98f74fff libcopyfile.dylib (127) /usr/lib/system/libcopyfile.dylib
0x7fff98f75000 - 0x7fff98fa4ffb libsystem_m.dylib (3105) <08E1A4B2-6448-3DFE-A58C-ACC7335BE7E4> /usr/lib/system/libsystem_m.dylib
0x7fff9934d000 - 0x7fff9934ffff libsystem_coreservices.dylib (19.2) <1B3F5AFC-FFCD-3ECB-8B9A-5538366FB20D> /usr/lib/system/libsystem_coreservices.dylib
0x7fff995ac000 - 0x7fff995e2fff libssl.0.9.8.dylib (59.40.2) <523FEBFA-4BF7-3A69-83B7-164265BE7F4D> /usr/lib/libssl.0.9.8.dylib
0x7fff99716000 - 0x7fff99717fff libDiagnosticMessagesClient.dylib (100) <4243B6B4-21E9-355B-9C5A-95A216233B96> /usr/lib/libDiagnosticMessagesClient.dylib
0x7fff99ec4000 - 0x7fff99ecffff libkxld.dylib (3248.50.21) <99195052-038E-3490-ACF8-76F9AC43897E> /usr/lib/system/libkxld.dylib
0x7fff9af94000 - 0x7fff9af95fff com.apple.TrustEvaluationAgent (2.0 - 25) <0239494E-FEFE-39BC-9FC7-E251BA5128F1> /System/Library/PrivateFrameworks/TrustEvaluationAgent.framework/Versions/A/TrustEvaluationAgent
0x7fff9afa5000 - 0x7fff9afb6fff libSparseBLAS.dylib (1162.2) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libSparseBLAS.dylib
0x7fff9c5fa000 - 0x7fff9c6aafe7 libvMisc.dylib (563.5) <6D73C20D-D1C4-3BA5-809B-4B597C15AA86> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvMisc.dylib
0x7fff9cfbc000 - 0x7fff9cfc0fff libcache.dylib (75) <9548AAE9-2AB7-3525-9ECE-A2A7C4688447> /usr/lib/system/libcache.dylib
0x7fff9d7d5000 - 0x7fff9d7feff7 libxpc.dylib (765.50.8) <54D1328E-054E-3DAA-89E2-375722F9D18F> /usr/lib/system/libxpc.dylib
0x7fff9d80f000 - 0x7fff9d82dffb libedit.3.dylib (43) <1D3E3152-4001-3C19-B56A-7543F1BBA47C> /usr/lib/libedit.3.dylib
External Modification Summary:
Calls made by other processes targeting this process:
task_for_pid: 0
thread_create: 0
thread_set_state: 0
Calls made by this process:
task_for_pid: 0
thread_create: 0
thread_set_state: 0
Calls made by all processes on this machine:
task_for_pid: 14184
thread_create: 0
thread_set_state: 0
VM Region Summary:
ReadOnly portion of Libraries: Total=127.4M resident=0K(0%) swapped_out_or_unallocated=127.4M(100%)
Writable regions: Total=50.5M written=0K(0%) resident=0K(0%) swapped_out=0K(0%) unallocated=50.5M(100%)
VIRTUAL REGION
REGION TYPE SIZE COUNT (non-coalesced)
=========== ======= =======
Activity Tracing 2048K 2
Kernel Alloc Once 4K 2
MALLOC 39.9M 19
MALLOC guard page 16K 4
STACK GUARD 56.0M 2
Stack 8192K 2
VM_ALLOCATE 264K 4
__DATA 4348K 87
__LINKEDIT 92.7M 27
__TEXT 34.7M 87
__UNICODE 552K 2
shared memory 12K 4
=========== ======= =======
TOTAL 238.3M 230
Model: MacBookPro11,1, BootROM MBP111.0138.B17, 2 processors, Intel Core i7, 3 GHz, 16 GB, SMC 2.16f68
Graphics: Intel Iris, Intel Iris, Built-In
Memory Module: BANK 0/DIMM0, 8 GB, DDR3, 1600 MHz, 0x80AD, 0x484D54343147533641465238412D50422020
Memory Module: BANK 1/DIMM0, 8 GB, DDR3, 1600 MHz, 0x80AD, 0x484D54343147533641465238412D50422020
AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0x112), Broadcom BCM43xx 1.0 (7.21.95.175.1a6)
Bluetooth: Version 4.4.5f3 17904, 3 services, 18 devices, 1 incoming serial ports
Network Service: Wi-Fi, AirPort, en0
Serial ATA Device: APPLE SSD SM0512F, 500.28 GB
USB Device: USB 3.0 Bus
USB Device: Apple Internal Keyboard / Trackpad
USB Device: BRCM20702 Hub
USB Device: Bluetooth USB Host Controller
Thunderbolt Bus: MacBook Pro, Apple Inc., 17.2
Answer: Did you run the tests? From the [pypi](https://pypi.python.org/pypi/PuLP) page
> On Linux and OSX systems the tests must be run to make the default solver
> executable.
|
using import inside class
Question: I am completely new to the python class concept. After searching for a
solution for some days, I hope I will get help here:
I want a python class where I import a function and use it there. The main
code should be able to call the function from the class. for that I have two
files in the same folder.
* * *
Thanks to @cdarke, @DeepSpace and @MosesKoledoye, I edited the mistake, but
sadly that wasn't it.
I still get the Error:
test 0
Traceback (most recent call last):
File "run.py", line 3, in <module>
foo.doit()
File "/Users/ls/Documents/Entwicklung/RaspberryPi/test/test.py", line 8, in doit
self.timer(5)
File "/Users/ls/Documents/Entwicklung/RaspberryPi/test/test.py", line 6, in timer
zeit.sleep(2)
NameError: global name 'zeit' is not defined
* * *
* * *
@wombatz got the right tip: it must be self.zeit.sleep(2) or
Test.zeit.sleep(2). the import could be also done above the class declaration.
* * *
**Test.Py**
class Test:
import time as zeit
def timer(self, count):
for i in range(count):
print("test "+str(i))
self.zeit.sleep(2) <-- self is importent, otherwise, move the import above the class declaration
def doit(self):
self.timer(5)
and
**run.py**
from test import Test
foo = Test()
foo.doit()
when I try to `python run.py` I get this error:
test 0
Traceback (most recent call last):
File "run.py", line 3, in <module>
foo.doit()
File "/Users/ls/Documents/Entwicklung/RaspberryPi/test/test.py", line 8, in doit
self.timer(5)
File "/Users/ls/Documents/Entwicklung/RaspberryPi/test/test.py", line 6, in timer
sleep(2)
NameError: global name 'sleep' is not defined
What I understand from the error is that the import in the class is not
recognized. But how can I achive that the import in the class is recognized?
Answer: `sleep` is not a python builtin, and the name as is, does not reference any
object. So Python has rightly raised a `NameEror`.
You intend to:
import time as zeit
zeit.sleep(2)
And move `import time as zeit` to the top of the module.
The `time` module aliased as `zeit` is probably not appearing in your module's
global symbol table because it was imported inside a `class`.
|
Detecting ordered iterables (sequences) in python
Question: I am attempting to build a function that takes an iterable and returns a
tuple, provided that the iterable will always be iterated over in a canonical
way. For example, if the input iterable is `list` or `tuple`-like, I want to
accept the input, but not if it is `dict`-like (where there isn't a guarantee
on the order of the keys). Is there any python function to detect the
different between objects that are always iterated in the same order vs. those
where the order could change version-to-version or depend on `PYTHONHASHSEED`?
`isinstance(x, collections.Sequence)` does most of what I want, but generators
are not sequences. The following code seems to do what I want, but I'm not
sure if I'm leaving something out or if there is a more general way to capture
the idea of an ordered, but not necessarily indexable, iterable.
import collections, types
def to_tuple(x):
if isinstance(x, collections.Sequence) or isinstance(x, types.GeneratorType):
return tuple(x)
raise Exception("Cannot be iterated canonically")
Answer: There's no such function. Even with generators, you'd want to be able to catch
(x for x in {1, 2, 3})
but permit
(x for x in [1, 2, 3])
I'd recommend just raising a warning if `type(x) is dict`. Not even
`isinstance(x, dict)`, because OrderedDicts are ordered.
|
pylint, coroutines, decorators and type inferencing
Question: I'm working on a Google AppEngine project and I recently upgraded my pylint
version to:
No config file found, using default configuration
pylint 1.5.6,
astroid 1.4.6
Python 2.7.10 (default, Oct 23 2015, 19:19:21)
This seems to have broken some type inferencing. Specifically, GAE's [`ndb`
uses a decorator and a generator function to return a "Future"
object](https://github.com/GoogleCloudPlatform/datastore-ndb-
python/blob/master/ndb/tasklets.py#L1042) like this:
@ndb.tasklet
def coroutine_like(item_id):
# do something here...
item = yield EntityType.get_by_id_async(item_id)
raise ndb.Return(item)
I might call it something like this:
future = coroutine_like('12345')
# Do other stuff
entity = future.get_result()
Previously, I didn't have any problems with the linter here. Now I'm getting:
E: 42,17: Generator 'generator' has no 'get_result' member (no-member)
E: 48,17: Generator 'generator' has no 'get_result' member (no-member)
E: 60,25: Generator 'generator' has no 'get_result' member (no-member)
E: 74, 8: Generator 'generator' has no 'wait' member (no-member)
E: 88, 8: Generator 'generator' has no 'wait' member (no-member)
E: 95,17: Generator 'generator' has no 'get_result' member (no-member)
I realize that I can `# pylint: disable=no-member` those lines individually
but that would be cumbersome. I also realize that I can suppress that warning
at the module level by adding the suppression code at the module level and I
can globally suppress the warning by modifying my pylintrc file. I don't
really want to do those things. I would much rather (somehow) tell pylint that
things decorated with the `@ndb.tasklet` decorator return `ndb.Future`
instances. I've seen that there are [ways to register type-inferencing
helpers](https://www.logilab.org/blogentry/78354)1 for pylint, but I'm not
sure how to make them work with my decorator of a generator function.
1Note that is a pretty old blog post... I think that `logilab.astng` is no
longer in use and now you would use `astroid` instead, but that doesn't get me
_too_ much closer to the answer that I'm looking for...
Answer: That blog post is definitely very old, things have changed for a while now.
You might take a look at the way how astroid's brain modules are implemented
(<https://github.com/PyCQA/astroid/tree/master/astroid/brain>). They usually
are AST transformers, which are applied to particular ASTs, providing
modifications in order for pylint to understand what exactly is happening with
your code.
A transform is usually a function, which receives a node and is supposed to
return a new node or the same node modified (be warned though that in the
future, we will remove support for modifying the same node, they will become
immutable)
You can register one through
astroid.MANAGER.register_transform(type_of_node, transform_function)
but is usually okay to provide a filter to register_transform, so that it
would be applied only to particular nodes you are interested in. The filter is
the third argument of register_transform and it is a function that receives a
node and should return a boolean, true if the node should be transformed,
false otherwise. You can also this transform as an inference tip, that would
be used instead of the normal inference mechanism, by wrapping the second
argument in `astroid.inference_tip(...)`. This is probably what you want,
since you want to help pylint infer this function properly, rather than adding
constructs to the AST itself. In this particular case, the transform could
return an instance of ndb.Return, initialized with the yield points you have
in your function. Also, note that you can build the AST from a string, with
only the code representation, as in:
ast = astroid.parse('''...'''
return ast
But if you want a more fine grained approach, you can build the AST yourself
(crude example):
from astroid import MANAGER
module = MANAGER.ast_from_module_name('ndb')
cls = next(module.igetattr('Return'))
instance = cls.instantiate_class()
node = astroid.Return(...)
node.value = ... node
return node
Also, note though that creating new nodes will change with the newest release,
by using proper constructor methods for building them, instead of adding
attributes manually.
Hope this helps.
|
Python: Simple Web Crawler using BeautifulSoup4
Question: I have been following TheNewBoston's Python 3.4 tutorials that use Pycharm,
and am currently on the tutorial on how to create a web crawler. I Simply want
to download all of XKCD's Comics. Using the archive that seemed very easy.
Here is [my code](http://pastebin.com/eNTPqGQn), followed by
[TheNewBoston](http://pastebin.com/EscfdDV2)'s. Whenever I run the code,
nothing happens. It runs through and says, "Process finished with exit code 0"
Where did I screw up?
TheNewBoston's Tutorial is a little dated, and the website used for the crawl
has changed domains. I will comment the part of the video that seems to
matter.
My code:
mport requests
from urllib import request
from bs4 import BeautifulSoup
def download_img(image_url, page):
name = str(page) + ".jpg"
request.urlretrieve(image_url, name)
def xkcd_spirder(max_pages):
page = 1
while page <= max_pages:
url = r'http://xkcd.com/' + str(page)
source_code = requests.get(url)
plain_text = source_code.text
soup = BeautifulSoup(plain_text, "html.parser")
for link in soup.findAll('div', {'img': 'src'}):
href = link.get('href')
print(href)
download_img(href, page)
page += 1
xkcd_spirder(5)
Answer: The _comic_ is in the div with the id _comic_ , you just need to pull the
_src_ from _img_ inside that div then join it to the _base_ url and finally
request the content and write, I use the _basename_ as the name to save the
file under.
I also replaced your while with a range loop and did all the http requests
just using requests:
import requests
from bs4 import BeautifulSoup
from os import path
from urllib.parse import urljoin # python2 -> from urlparse import urljoin
def download_img(image_url, base):
# path.basename(image_url)
# http://imgs.xkcd.com/comics/tree_cropped_(1).jpg -> tree_cropped_(1).jpg -
with open(path.basename(image_url), "wb") as f:
# image_url is a releative path, we have to join to the base
f.write(requests.get(urljoin(base,image_url)).content)
def xkcd_spirder(max_pages):
base = "http://xkcd.com/"
for page in range(1, max_pages + 1):
url = base + str(page)
source_code = requests.get(url)
plain_text = source_code.text
soup = BeautifulSoup(plain_text, "html.parser")
# we only want one image
img = soup.select_one("#comic img") # or .find('div',id= 'comic').img
download_img(img["src"], base)
xkcd_spirder(5)
Once you run the code you will see we get the first five comics.
|
Merge a list of pandas dataframes
Question: There has been many similar questions but none specifically to this.
I have a list of data frames and I need to merge them together using a unique
column `(date)`. Field names are different so concat is out.
I can manually use `df[0].merge(df[1],on='Date').merge(df[3],on='Date)` etc.
to merge each df one by one, but the issue is that the number of data frames
in the list differs with user input.
Is there any way to merge that just combines all data frames in a list at one
go? Or perhaps some for in loop at does that?
I am using Python 2.7.
Answer: You can use `reduce` function where `dfList` is your list of data frames:
import pandas as pd
reduce(lambda x, y: pd.merge(x, y, on = 'Date'), dfList)
As a demo:
df = pd.DataFrame({'Date': [1,2,3,4], 'Value': [2,3,3,4]})
dfList = [df, df, df]
dfList
# [ Date Value
# 0 1 2
# 1 2 3
# 2 3 3
# 3 4 4, Date Value
# 0 1 2
# 1 2 3
# 2 3 3
# 3 4 4, Date Value
# 0 1 2
# 1 2 3
# 2 3 3
# 3 4 4]
reduce(lambda x, y: pd.merge(x, y, on = 'Date'), dfList)
# Date Value_x Value_y Value
# 0 1 2 2 2
# 1 2 3 3 3
# 2 3 3 3 3
# 3 4 4 4 4
|
How to delete entire row of data set given a condition on a column in csv file?
Question: Here is a snippet of the following data-set in csv format:
quantity revenue time_x transaction_id user_id
1 0 57:57.0 0 0 0
1 0 18:59.0 0 1
I want to delete the entire row when the user_id is empty. How do I do this in
python? So far, here's my code:
activity = pd.read_csv("activity(delimited).csv", delimiter=';', error_bad_lines=False, dtype=object)
impression = pd.read_csv("impression(delimited).csv", delimiter=';', error_bad_lines=False, dtype=object)
click = pd.read_csv("click(delimited).csv", delimiter=';', error_bad_lines=False, dtype=object)
pre_merge = activity.merge(impression, on="user_id", how="outer")
merged = pre_merge.merge(click, on="user_id", how="outer")
merged.to_csv("merged.csv", index=False)
open_merged = pd.read_csv("merged.csv", delimiter=',', error_bad_lines= False, dtype=object)
filtered_merged = open_merged.dropna(axis='columns', how='all')
Also, how can I write the code in an efficient manner?
Answer: With Pandas:
import pandas as pd
df = pd.read_csv("path/to/csv/data.csv", delimiter=';', error_bad_lines=False)
df = df[pd.notnull(df.user_id)] # boolean indexing
# Shift user_id to first column
df = df.set_index("user_id")
df = df.reset_index()
df.to_csv("path/to/csv/data.csv", index=False)
The bracket notation allows you provide an iterable of boolean values. This is
called [boolean indexing](http://pandas.pydata.org/pandas-
docs/stable/indexing.html#boolean-indexing). Similar concepts and syntax are
used in numpy, matlab and R
|
My spark app is too slow, how can I increase the speed significantly?
Question: This is part of my spark code which is very slow. By slow I mean for 70
Million data rows it takes almost 7 minutes to run the code but I need it to
run in under 5 seconds if possible. I have a cluster with 5 spark nodes with
80 cores and 177 GB memory of which 33Gb are currently used.
range_expr = col("created_at").between(
datetime.now()-timedelta(hours=timespan),
datetime.now()-timedelta(hours=time_delta(timespan))
)
article_ids = sqlContext.read.format("org.apache.spark.sql.cassandra").options(table="table", keyspace=source).load().where(range_expr).select('article','created_at').repartition(64*2)
axes = sqlContext.read.format("org.apache.spark.sql.cassandra").options(table="table", keyspace=source).load()
#article_ids.join(axes,article_ids.article==axes.article)
speed_df = article_ids.join(axes,article_ids.article==axes.article).select(axes.article,axes.at,axes.comments,axes.likes,axes.reads,axes.shares) \
.map(lambda x:(x.article,[x])).reduceByKey(lambda x,y:x+y) \
.map(lambda x:(x[0],sorted(x[1],key=lambda y:y.at,reverse = False))) \
.filter(lambda x:len(x[1])>=2) \
.map(lambda x:x[1][-1]) \
.map(lambda x:(x.article,(x,(x.comments if x.comments else 0)+(x.likes if x.likes else 0)+(x.reads if x.reads else 0)+(x.shares if x.shares else 0))))
I believe especially this part of the code is particularly slow:
sqlContext.read.format("org.apache.spark.sql.cassandra").options(table="table", keyspace=source).load()
When put in spark it transforms into this which I think causes it to be slow :
javaToPython at NativeMethodAccessorImpl.java:-2
[](http://i.stack.imgur.com/S6Noa.png)
Any help would really be appreciated. Thanks
**EDIT**
The biggest speed problem seems to be JavatoPython. The attached picture is
only for part of my data and is already very slow.
[](http://i.stack.imgur.com/iTJvW.png)
**EDIT (2)**
About `len(x1)>=2`:
Sorry for the long elaboration but I really hope I can solve this problem, so
making people understand a quite complex problem in detail is crucial:
this is my rdd example:
rdd1 =
[(1,3),(1,5),(1,6),(1,9),(2,10),(2,76),(3,8),(4,87),(4,96),(4,109),(5,10),(6,19),(6,18),(6,65),(6,43),(6,81),(7,12),(7,96),(7,452),(8,59)]
After the spark transformation rdd1 has this form: rdd_result =
[(1,9),(2,76),(4,109),(6,81),(7,452)] the result does not contain (3,8),(5,10)
because the key 3 or 5 only occur once, I don't want the 3 or 5 to appear.
below is my program:
first:rdd1 reduceByKey then the result is:
rdd_reduceByKey=[(1,[3,5,6,9]),(2,[10,76]),(3,[8]),(4,[87,96,109]),(5,[10]),(6,[19,18,65,43,81]),(7,[12,96,452,59]))]
second:rdd_reduceByKey filter by
len(x[1](http://i.stack.imgur.com/S6Noa.png))>=2 then result is:
rdd_filter=[(1,[3,5,6,9]),(2,[10,76]),(4,[87,96,109]),(6,[19,18,65,43,81]),(7,[12,96,452,59]))]
so the len(x[1](http://i.stack.imgur.com/S6Noa.png))>=2 is necessary but slow.
Any recommendation improvements would be hugely appreciated.
Answer: Few things I would to do if I meet performance issue.
1. check spark [web UI](http://spark.apache.org/docs/latest/monitoring.html). Find the slowest part.
2. The lambda function is really suspicious
3. Check executor configuration
4. Store some of the data in intermediate table.
5. Compare the result if store data in parquet helps.
6. Compare the if using Scala helps
EDIT:
Using Scala instead of Python could do the trick if the JavatoPython is the
slowest.
Here is the code for finding the latest/largest. It should be NlogN, most
likely close to N, since the sorting is on small data set.
import org.apache.spark.sql.functions._
import scala.collection.mutable.WrappedArray
val data = Seq((1,3),(1,5),(1,6),(1,9),(2,10),
(2,76),(3,8),(4,87),(4,96),(4,109),
(5,10),(6,19),(6,18),(6,65),(6,43),
(6,81),(7,12),(7,96),(7,452),(8,59))
val df = sqlContext.createDataFrame(data)
val dfAgg = df.groupBy("_1").agg(collect_set("_2").alias("_2"))
val udfFirst= udf[Int, WrappedArray[Int]](_.head)
val dfLatest = dfAgg.filter(size($"_2") > 1).
select($"_1", udfFirst(sort_array($"_2", asc=false)).alias("latest"))
dfLatest.show()
|
selecting multiple ROI in an image
Question: hey guys i am using opencv 2.4 with python 2.7 on ubuntu14.04
I want to select multiple Region of Interest in an image is it possible to do
so.
I want to do motion detection in only the area i have selected to do so any of
the following theory can solve my problem but don't know how to implement any
of them : -
1. Mask the area in image which is not ROI
2. After creating multiple ROI image how to add them such that all those ROI can be on the original location and remaining area be masked
Answer: Yes it is possible to do so. Main Idea behind the solution would be creating a
mask and setting it to `0` wherever you do not want the motion tracker to
track.
If you are using `numpy`then you can create the mask and set the regions you
do not want the detector to use, to zero. (Similar to `cv::Rect(start.col,
start.row, numberof.cols, numberof.rows) = 0` in c++)
In python using numpy you can create a mask, somewhat like this:
import numpy as np
ret, frame = cap.read()
if frame.ndim == 3
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
elif frame.ndim == 4
gray = cv2.cvtColor(frame, cv2.COLOR_BGRA2GRAY)
else:
gray = frame
# create mask
mask = np.ones_like(gray)
mask[start_row:end_row, start_col:end_col] = 0
mask[another_starting_row:another_ending_row, another_start_col:another_end_col] = 0
# and so on you can create your own mask
# use for loops to create specific masks
It is a bit crude solution but will do the job. check numpy documentation
[(PDF)](http://docs.scipy.org/doc/numpy-1.11.0/numpy-ref-1.11.0.pdf) for more
info.
|
Sort a nested dictionary in Python
Question: I have the following dictionary.
var = a = {
'Black': { 'grams': 1906, 'price': 2.05},
'Blue': { 'grams': 9526, 'price': 22.88},
'Gold': { 'grams': 194, 'price': 8.24},
'Magenta': { 'grams': 6035, 'price': 56.69},
'Maroon': { 'grams': 922, 'price': 18.76},
'Mint green': { 'grams': 9961, 'price': 63.89},
'Orchid': { 'grams': 4970, 'price': 10.78},
'Tan': { 'grams': 6738, 'price': 50.54},
'Yellow': { 'grams': 6045, 'price': 54.19}
}
How can I sort it based on the `price`. So the resulting dictionary will look
like below.
result = {
'Black': { 'grams': 1906, 'price': 2.05},
'Gold': { 'grams': 194, 'price': 8.24},
'Orchid': { 'grams': 4970, 'price': 10.78},
'Maroon': { 'grams': 922, 'price': 18.76},
'Blue': { 'grams': 9526, 'price': 22.88},
'Tan': { 'grams': 6738, 'price': 50.54},
'Magenta': { 'grams': 6035, 'price': 56.69},
'Mint green': { 'grams': 9961, 'price': 63.89},
}
Answer:
for s in sorted(a.iteritems(), key=lambda (x, y): y['price']):
print s
Or by OrderedDict
from collections import OrderedDict
res = OrderedDict(sorted(a.items(), key=lambda x: x[1]['price'], reverse=False))
print res
Output:
[('Black', {'price': 2.05, 'grams': 1906}), ('Gold', {'price': 8.24, 'grams': 194}), ('Orchid', {'price': 10.78, 'grams': 4970}), ('Maroon', {'price': 18.76, 'grams': 922}), ('Blue', {'price': 22.88, 'grams': 9526}), ('Tan', {'price': 50.54, 'grams': 6738}), ('Yellow', {'price': 54.19, 'grams': 6045}), ('Magenta', {'price': 56.69, 'grams': 6035}), ('Mint green', {'price': 63.89, 'grams': 9961})]
|
TwitterAPI for Python: using result from request in a new request
Question: I want to collect all user data from the followers from a specific twitter
user. First, I collect the user_id's of the followers from the user using
followers/id. Thereafter I want to use users/lookup in order to collect the
user data from the collected user_id's all at once (with a maximum of 100).
This is where I get stuck, I don't seem to get any results. I think it has
something to do with the input user_ids since inputting them manually gives me
the results I expect.
from TwitterAPI import TwitterAPI
import JSON
consumer_key = "..."
consumer_secret = "..."
access_token = "..."
access_token_secret = "..."
api = TwitterAPI(consumer_key, consumer_secret, access_token, access_token_secret)
r = api.request('followers/ids', {'screen_name':'elonmusk'})
r = json.loads(r.text)
r = list(r['ids'])
f = api.request('users/lookup', {'user_id': r })
print(f.text)
I've tried several devious ways to solve it, but the above mentioned is for as
far as my beginner python knowledge ranges the most reliable. Although it does
not work.
Answer: I managed to fix it myself. The first request yielded too many results, which
the second request couldn't process. I changed the first request to this:
r = api.request('followers/ids', {'screen_name':'elonmusk', 'count':'100'})
|
Python 3.5:Not able to remove non alpha -numeric characters from file_name
Question: i have written a python script to rename all the files present in a folder by
removing all the numbers from the file name but this doesn't work . Note :Same
code works fine for python2.7
import os
def rename_files():
#(1) get file names from a folder
file_list = os.listdir(r"D:\prank")
print(file_list)
saved_path = os.getcwd()
print("Current working Directory is " + saved_path)
os.chdir(r"D:\prank")
#(2) for each file ,rename filename
for file_name in file_list:
os.rename(file_name, file_name.translate(None,"0123456789"))
rename_files()
Can anyone tell me how to make it work.Is the translate function which is not
working properly
Answer: The problem is with os.rename() portion of your code.
os.rename() requires you to give it a full path to the file/folder you want to
change it to, while you only gave it the file_name and not the full path.
You have to add the full path to the folders/files directory. so it should
look like this:
def rename_files():
# add the folder path
folder_path = "D:\prank\\"
file_list = os.listdir(r"D:\prank")
print(file_list)
saved_path = os.getcwd()
print("Current working Directory is " + saved_path)
os.chdir(r"D:\prank")
# Concat the folder_path with file_name to create the full path.
for file_name in file_list:
full_path = folder_path + file_name
print (full_path) # See the full path here.
os.rename(full_path, full_path.translate(None, "0123456789"))
|
Python open FTP url and write to file
Question: How can I open and FTP url and download it into a file. What I'm trying looks
something like this:
from contextlib import closing
from urllib.request import urlopen
url = 'ftp://whatever.com/file.txt'
target_path = 'file.txt'
with closing(urlopen(url)) as source:
with open(target_path, 'wb') as target:
target.write(source)
However, this fails with the following error:
TypeError: 'addinfourl' does not support the buffer interface
Is there any simple way to make this work? Especially if I want to extend it,
so that the file is extracted while it is downloaded?
Answer: The `write` requires object with buffer interface in particular `bytes`, but
the `source` is actually `BufferedReader` (or `HTTPResonse` if http). To get
bytes you need to call
[`BufferedReader.read()`](https://docs.python.org/3/library/io.html#io.BufferedReader.read)
from contextlib import closing
from urllib.request import urlopen
url = 'ftp://whatever.com/file.txt'
target_path = 'file.txt'
with closing(urlopen(url)) as source:
with open(target_path, 'wb') as target:
target.write(source.read())
|
Subsets and Splits