| repo (stringclasses, 856 values) | pull_number (int64, 3 to 127k) | instance_id (stringlengths, 12 to 58) | issue_numbers (sequencelengths, 1 to 5) | base_commit (stringlengths, 40) | patch (stringlengths, 67 to 1.54M) | test_patch (stringlengths, 0 to 107M) | problem_statement (stringlengths, 3 to 307k) | hints_text (stringlengths, 0 to 908k) | created_at (timestamp[s]) |
|---|---|---|---|---|---|---|---|---|---|
sanic-org/sanic | 623 | sanic-org__sanic-623 | [
"615",
"615"
] | 52ff2e0e63142f1d59e14d783954318475c5d6cf | diff --git a/sanic/response.py b/sanic/response.py
--- a/sanic/response.py
+++ b/sanic/response.py
@@ -129,7 +129,7 @@ def write(self, data):
data = self._encode_body(data)
self.transport.write(
- b"%b\r\n%b\r\n" % (str(len(data)).encode(), data))
+ b"%x\r\n%b\r\n" % (len(data), data))
async def stream(
self, version="1.1", keep_alive=False, keep_alive_timeout=None):
| Streaming Response Chunked Encoding Incomplete Read Error
The new `stream` method works with the [demo example](http://sanic.readthedocs.io/en/latest/sanic/streaming.html), but if I try to stream larger chunks of data (e.g. 10 bytes) I get a ChunkedEncodingError / broken connection / Incomplete Read on the client.
I confirmed this on both latest master 62ebcba64 and when the feature was introduced at 19592e8eea.
Here is a slightly modified test demo server to reproduce the problem:
```
from sanic import Sanic
from sanic.response import stream
app = Sanic(__name__)
@app.route("/")
async def test(request):
async def sample_streaming_fn(response):
response.write('foo,bat,baz,')
response.write('bar')
return stream(sample_streaming_fn, content_type='text/csv')
app.run(host="0.0.0.0", port=8000, workers=1)
```
Start this test server with:
```
$ docker run --rm -it -v `pwd`:/app -p 8000:8000 ubergarm/sanic-alpine /app/test.py
2017-04-04 17:27:49,501: INFO: Goin' Fast @ http://0.0.0.0:8000
2017-04-04 17:27:49,503: INFO: Starting worker [1]
```
Test with `curl`:
```
$ curl -v localhost:8000
* Rebuilt URL to: localhost:8000/
* Trying ::1...
* Connected to localhost (::1) port 8000 (#0)
> GET / HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Keep-Alive: 60
< Transfer-Encoding: chunked
< Content-Type: text/csv
<
foo,bat,baz,
3
* Malformed encoding found in chunked-encoding
* Closing connection 0
curl: (56) Malformed encoding found in chunked-encoding
```
Test with `httpie`:
```
$ http -v localhost:8000
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: localhost:8000
User-Agent: HTTPie/0.9.4
HTTP/1.1 200 OK
Content-Type: text/csv
Keep-Alive: 60
Transfer-Encoding: chunked
http: error: ChunkedEncodingError: ('Connection broken: IncompleteRead(2 bytes read)', IncompleteRead(2 bytes read))
```
| 2017-04-10T10:43:58 |
||
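The one-character change in the patch above (`%b` of a decimal length to `%x`) is the whole fix: HTTP/1.1 chunked transfer encoding requires the chunk size field in hexadecimal, and decimal only happens to coincide with hex for chunks shorter than 10 bytes, which is why the demo worked but the 12-byte chunk broke curl and httpie. A minimal sketch of the corrected framing (plain Python, not Sanic's actual transport code):

```python
def encode_chunk(data: bytes) -> bytes:
    # RFC 7230: chunk = chunk-size (hex) CRLF chunk-data CRLF
    return b"%x\r\n%b\r\n" % (len(data), data)

# 12 bytes -> size field "c"; the old code emitted "12", which clients reject
assert encode_chunk(b"foo,bat,baz,") == b"c\r\nfoo,bat,baz,\r\n"
```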
sanic-org/sanic | 647 | sanic-org__sanic-647 | [
"645"
] | f6d4a06661cd85d750c472ab600ba3103837ecee | diff --git a/sanic/static.py b/sanic/static.py
--- a/sanic/static.py
+++ b/sanic/static.py
@@ -56,7 +56,7 @@ async def _handler(request, file_uri=None):
# URL decode the path sent by the browser otherwise we won't be able to
# match filenames which got encoded (filenames with spaces etc)
file_path = path.abspath(unquote(file_path))
- if not file_path.startswith(root_path):
+ if not file_path.startswith(path.abspath(unquote(root_path))):
raise FileNotFound('File not found',
path=file_or_directory,
relative_url=file_uri)
| What have you done to static.py?
Last Friday everything was OK; my static file test worked fine.
Today, when I pip install sanic==0.5.1,
it raises a 404 error.
When I pip install sanic==0.5.0,
everything is OK again.
It seems like the code below has a problem:
if not file_path.startswith(root_path):
raise FileNotFound('File not found',
path=file_or_directory,
relative_url=file_uri)
| Can you please include the code to reproduce this?
@r0fls
from sanic import Sanic
app = Sanic(__name__)
app.static('/st/index.html', './client/index.html')
app.static('/static', './client')
app.run(host="0.0.0.0", port=8000)
With sanic 0.5.0, both
wget localhost:8000/static/index.html
and
wget localhost:8000/st/index.html
work fine.
But after pip install sanic==0.5.1,
both wget requests get 404.
Thank you.

It probably has something to do with #635 | 2017-04-17T04:58:52 |
|
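A hedged illustration of why the `startswith` guard 404'd in 0.5.1 when the static root was given as a relative path like `./client`: the requested file path was made absolute, but the root was not, so the prefix check could never match. The absolute path below is hypothetical:

```python
from os import path
from urllib.parse import unquote

root_path = './client'
file_path = path.abspath(unquote('./client/index.html'))  # e.g. /srv/app/client/index.html

print(file_path.startswith(root_path))                          # False -> FileNotFound (404) in 0.5.1
print(file_path.startswith(path.abspath(unquote(root_path))))   # True after the fix
```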
sanic-org/sanic | 666 | sanic-org__sanic-666 | [
"665"
] | 472face7962e2ffc588f115163a47ecb1bd39adc | diff --git a/sanic/__main__.py b/sanic/__main__.py
--- a/sanic/__main__.py
+++ b/sanic/__main__.py
@@ -35,10 +35,10 @@
app.run(host=args.host, port=args.port,
workers=args.workers, debug=args.debug, ssl=ssl)
- except ImportError:
+ except ImportError as e:
log.error("No module named {} found.\n"
" Example File: project/sanic_server.py -> app\n"
" Example Module: project.sanic_server.app"
- .format(module_name))
+ .format(e.name))
except ValueError as e:
log.error("{}".format(e))
| __main__ top-level script shadowing ImportError
Given `main.py`:
```python
import nonexistent # assuming this is... non-existent
from sanic import Sanic, response
app = Sanic(__name__)
@app.route('/')
async def index(request):
return response.html('<p>Hello</p>')
```
When we try to import something non-existent, an `ImportError` exception will be thrown;
[line 38 of __main__.py](https://github.com/channelcat/sanic/blob/5fd62098bd2f2722876a0873d5856d70046d3889/sanic/__main__.py#L38) does not preserve the exception, reporting a preset message (based on what was provided on the command line) instead. This is what we will get:
```
python -m sanic main.app
No module named main found.
Example File: project/sanic_server.py -> app
Example Module: project.sanic_server.app
```
It is very hard to find the true cause if the import statement that failed is, let's say, buried three levels of modules deep, for example.
| 2017-04-27T02:56:52 |
||
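For context, an `ImportError` raised by the import machinery carries the missing module's name in its `name` attribute, which is exactly what the patched handler reports instead of the module path given on the command line. A small sketch (the module name is made up):

```python
try:
    import nonexistent_module  # hypothetical, assumed not installed
except ImportError as e:
    # Before the fix the CLI always blamed the target module ("main");
    # e.name points at what actually failed to import.
    print(e.name)  # -> 'nonexistent_module'
```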
sanic-org/sanic | 704 | sanic-org__sanic-704 | [
"703"
] | bece3d2bcf79666372ce2c81a8a8b3d91e8ca60f | diff --git a/sanic/config.py b/sanic/config.py
--- a/sanic/config.py
+++ b/sanic/config.py
@@ -1,10 +1,11 @@
-from sanic.defaultFilter import DefaultFilter
import os
import sys
import syslog
import platform
import types
+from sanic.log import DefaultFilter
+
SANIC_PREFIX = 'SANIC_'
_address_dict = {
diff --git a/sanic/defaultFilter.py b/sanic/defaultFilter.py
deleted file mode 100644
--- a/sanic/defaultFilter.py
+++ /dev/null
@@ -1,13 +0,0 @@
-import logging
-
-
-class DefaultFilter(logging.Filter):
- def __init__(self, param=None):
- self.param = param
-
- def filter(self, record):
- if self.param is None:
- return True
- if record.levelno in self.param:
- return True
- return False
diff --git a/sanic/log.py b/sanic/log.py
--- a/sanic/log.py
+++ b/sanic/log.py
@@ -1,4 +1,18 @@
import logging
+
+class DefaultFilter(logging.Filter):
+
+ def __init__(self, param=None):
+ self.param = param
+
+ def filter(self, record):
+ if self.param is None:
+ return True
+ if record.levelno in self.param:
+ return True
+ return False
+
+
log = logging.getLogger('sanic')
netlog = logging.getLogger('network')
| Consistent module naming
I don't want to be the bad guy, but there is a module file named with camelCase. Disregard me if this is not a problem.
| I would agree that is annoying. That should just go in log.py I think | 2017-05-09T03:37:21 |
|
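The patch only relocates `DefaultFilter` from `defaultFilter.py` into `sanic/log.py`; its behaviour is unchanged. A self-contained sketch of what the filter does, reusing the class body shown in the diff above (slightly condensed) so it does not depend on any particular sanic version:

```python
import logging

class DefaultFilter(logging.Filter):
    """Let records through only if their level is listed in `param` (or param is None)."""
    def __init__(self, param=None):
        self.param = param

    def filter(self, record):
        if self.param is None:
            return True
        return record.levelno in self.param

f = DefaultFilter(param=[logging.INFO, logging.WARNING])
rec = logging.LogRecord('sanic', logging.ERROR, __file__, 0, 'boom', None, None)
print(f.filter(rec))  # False: ERROR is not in the allowed levels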
sanic-org/sanic | 717 | sanic-org__sanic-717 | [
"716"
] | fa1b7de52af4208e46a2d856a20cdd1454763605 | diff --git a/sanic/request.py b/sanic/request.py
--- a/sanic/request.py
+++ b/sanic/request.py
@@ -143,7 +143,8 @@ def cookies(self):
@property
def ip(self):
if not hasattr(self, '_ip'):
- self._ip = self.transport.get_extra_info('peername')
+ self._ip = (self.transport.get_extra_info('peername') or
+ (None, None))
return self._ip
@property
diff --git a/sanic/server.py b/sanic/server.py
--- a/sanic/server.py
+++ b/sanic/server.py
@@ -201,7 +201,7 @@ def write_response(self, response):
netlog.info('', extra={
'status': response.status,
'byte': len(response.body),
- 'host': '%s:%d' % self.request.ip,
+ 'host': '%s:%d' % (self.request.ip[0], self.request.ip[1]),
'request': '%s %s' % (self.request.method,
self.request.url)
})
| IPv6 addresses and netlog: TypeError: not all arguments converted during string formatting
Netlog is trying to log the IP address using request.ip (which is socket.getpeername()), and fails due to differences between returned tuples depending on the address family.
AF_INET returns (host, port), while AF_INET6 returns a four-tuple: (host, port, flowid, scopeid).
(https://docs.python.org/3/library/socket.html#socket.socket.getpeername)
I've got a PR ready, which simply changes the formatting at https://github.com/channelcat/sanic/blob/0.5.4/sanic/server.py#L204 to use host and port parts of the tuple explicitly.
Did you have any other plans on handling it? Currently it incorrectly logs each request that fails (and that's every IPv6 request) as a 500.
| 2017-05-13T15:55:34 |
||
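The root cause is that `'%s:%d' % tuple` treats the whole tuple as the format arguments, so the four-element tuple returned for AF_INET6 sockets supplies two values too many. A small demonstration (addresses are made up):

```python
ipv4_peer = ('127.0.0.1', 54321)      # (host, port)
ipv6_peer = ('::1', 54321, 0, 0)      # (host, port, flowinfo, scope_id)

print('%s:%d' % ipv4_peer)                     # '127.0.0.1:54321'
print('%s:%d' % (ipv6_peer[0], ipv6_peer[1]))  # '::1:54321' -- the fixed form
# '%s:%d' % ipv6_peer raises:
# TypeError: not all arguments converted during string formatting
```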
sanic-org/sanic | 742 | sanic-org__sanic-742 | [
"700"
] | 49631542ce627010f37ccd1688d19939b7ef81b3 | diff --git a/examples/unix_socket.py b/examples/unix_socket.py
new file mode 100644
--- /dev/null
+++ b/examples/unix_socket.py
@@ -0,0 +1,23 @@
+from sanic import Sanic
+from sanic import response
+import socket
+import sys
+import os
+
+app = Sanic(__name__)
+
[email protected]("/test")
+async def test(request):
+ return response.text("OK")
+
+if __name__ == '__main__':
+ server_address = './uds_socket'
+ # Make sure the socket does not already exist
+ try:
+ os.unlink(server_address)
+ except OSError:
+ if os.path.exists(server_address):
+ raise
+ sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
+ sock.bind(server_address)
+ app.run(sock=sock)
diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -521,7 +521,7 @@ def test_client(self):
# Execution
# -------------------------------------------------------------------- #
- def run(self, host="127.0.0.1", port=8000, debug=False, ssl=None,
+ def run(self, host=None, port=None, debug=False, ssl=None,
sock=None, workers=1, protocol=None,
backlog=100, stop_event=None, register_sys_signals=True,
log_config=LOGGING):
@@ -580,7 +580,7 @@ def __call__(self):
"""gunicorn compatibility"""
return self
- async def create_server(self, host="127.0.0.1", port=8000, debug=False,
+ async def create_server(self, host=None, port=None, debug=False,
ssl=None, sock=None, protocol=None,
backlog=100, stop_event=None,
log_config=LOGGING):
@@ -629,11 +629,13 @@ async def _run_response_middleware(self, request, response):
break
return response
- def _helper(self, host="127.0.0.1", port=8000, debug=False,
+ def _helper(self, host=None, port=None, debug=False,
ssl=None, sock=None, workers=1, loop=None,
protocol=HttpProtocol, backlog=100, stop_event=None,
register_sys_signals=True, run_async=False, has_log=True):
"""Helper function used by `run` and `create_server`."""
+ if sock is None:
+ host, port = host or "127.0.0.1", port or 8000
if isinstance(ssl, dict):
# try common aliaseses
diff --git a/sanic/server.py b/sanic/server.py
--- a/sanic/server.py
+++ b/sanic/server.py
@@ -201,9 +201,10 @@ def write_response(self, response):
netlog.info('', extra={
'status': response.status,
'byte': len(response.body),
- 'host': '%s:%d' % (self.request.ip[0], self.request.ip[1]),
- 'request': '%s %s' % (self.request.method,
- self.request.url)
+ 'host': '{0}:{1}'.format(self.request.ip[0],
+ self.request.ip[1]),
+ 'request': '{0} {1}'.format(self.request.method,
+ self.request.url)
})
except AttributeError:
log.error(
@@ -242,9 +243,10 @@ async def stream_response(self, response):
netlog.info('', extra={
'status': response.status,
'byte': -1,
- 'host': '%s:%d' % self.request.ip,
- 'request': '%s %s' % (self.request.method,
- self.request.url)
+ 'host': '{0}:{1}'.format(self.request.ip[0],
+ self.request.ip[1]),
+ 'request': '{0} {1}'.format(self.request.method,
+ self.request.url)
})
except AttributeError:
log.error(
| Using a domain socket?
I'm trying to put sanic behind nginx using a domain socket. Is this possible at all?
| So you need to expose sanic via a domain socket, right? I don't think that's supported but shouldn't be very hard to add.
You can utilize domain sockets through gunicorn and nginx as shown in gunicorn's documentation:
http://docs.gunicorn.org/en/stable/deploy.html
Refer to [our docs](http://sanic.readthedocs.io/en/latest/sanic/deploying.html#running-via-gunicorn) for how to run a sanic application through gunicorn
Gunicorn loads the class and creates a TCP listener as well. I need to use an existing setup that relies on domain sockets alone...
Right. Am willing to test and provide reference configs, of course.
ya, i think unix socket support should be separate from wsgi compatibility. | 2017-05-21T10:16:54 |
|
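With the `unix_socket.py` example above running, the server can be exercised without any TCP port at all, e.g. `curl --unix-socket ./uds_socket http://localhost/test`, or with a few lines of raw-socket Python. This is a rough sketch that assumes the example is running in the current directory:

```python
import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect('./uds_socket')
s.sendall(b'GET /test HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n')
print(s.recv(65536).decode())  # expect "HTTP/1.1 200 OK" headers and the body "OK"
s.close()
```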
sanic-org/sanic | 756 | sanic-org__sanic-756 | [
"755"
] | d4abca0480fd0ac733ed7b985e03654eaf7dd9e0 | diff --git a/sanic/server.py b/sanic/server.py
--- a/sanic/server.py
+++ b/sanic/server.py
@@ -99,6 +99,7 @@ def __init__(self, *, loop, request_handler, error_handler,
self._request_handler_task = None
self._request_stream_task = None
self._keep_alive = keep_alive
+ self._header_fragment = b''
self.state = state if state else {}
if 'requests_count' not in self.state:
self.state['requests_count'] = 0
@@ -173,14 +174,25 @@ def data_received(self, data):
self.write_error(exception)
def on_url(self, url):
- self.url = url
+ if not self.url:
+ self.url = url
+ else:
+ self.url += url
def on_header(self, name, value):
- if name == b'Content-Length' and int(value) > self.request_max_size:
- exception = PayloadTooLarge('Payload Too Large')
- self.write_error(exception)
+ self._header_fragment += name
+
+ if value is not None:
+ if self._header_fragment == b'Content-Length' \
+ and int(value) > self.request_max_size:
+ exception = PayloadTooLarge('Payload Too Large')
+ self.write_error(exception)
+
+ self.headers.append(
+ (self._header_fragment.decode().casefold(),
+ value.decode()))
- self.headers.append((name.decode().casefold(), value.decode()))
+ self._header_fragment = b''
def on_headers_complete(self):
self.request = self.request_class(
| fragmented headers
I stumbled upon an error where long header values (OAuth2 tokens in my case) may lead to fragmentation of the header names. This error was only reproducible with a remote server, not with a local dev server.
The error happens here: [server.py#L167](https://github.com/channelcat/sanic/blob/master/sanic/server.py#L167). The `value` param seems to be `None` if the header is not fully parsed. An empty header value is represented by `b''`.
Adding a debug log statement `log.debug('on_header: %s %s', name, value)` to the `on_header` function yields the following:
```
on_header: b'Conn' None
on_header: b'ection' b'close'
```
or
```
on_header: b'Connection' None
on_header: b'' b'close'
```
I fixed this by adding the header only if the `value` is not `None` and saving the `name` values to a new class variable (`_header_fragment`) on the `HttpProtocol` instance.
I will open a pull request.
| 2017-05-28T16:34:13 |
||
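A self-contained sketch of the reassembly logic the patch adds: httptools can invoke `on_header` with only part of the header name (and `value=None`) when a header straddles two reads, so name fragments are buffered until a value arrives. This simplifies the server code above by omitting the Content-Length size check:

```python
class HeaderCollector:
    def __init__(self):
        self.headers = []
        self._header_fragment = b''

    def on_header(self, name, value):
        self._header_fragment += name
        if value is not None:
            self.headers.append(
                (self._header_fragment.decode().casefold(), value.decode()))
            self._header_fragment = b''

c = HeaderCollector()
c.on_header(b'Conn', None)        # fragmented name, no value yet
c.on_header(b'ection', b'close')  # rest of the name plus the value
print(c.headers)                  # [('connection', 'close')]
```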
sanic-org/sanic | 790 | sanic-org__sanic-790 | [
"752"
] | 38997c1b47a609cfa4155bc2c567d4d4ca3168bb | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -33,7 +33,9 @@ def __init__(self, name=None, router=None, error_handler=None,
logging.config.dictConfig(log_config)
# Only set up a default log handler if the
# end-user application didn't set anything up.
- if not logging.root.handlers and log.level == logging.NOTSET:
+ if not (logging.root.handlers and
+ log.level == logging.NOTSET and
+ log_config):
formatter = logging.Formatter(
"%(asctime)s: %(levelname)s: %(message)s")
handler = logging.StreamHandler()
| [Errno 13] Permission denied: '/access.log' with disabled debug & logging?
With:
```text
app.run(host=API_HOST, port=PORT, sock=None, debug=False, workers=API_WORKERS, log_config=None)
```
```text
app = Sanic(__name__)
File "/usr/local/anaconda/envs/proj/lib/python3.6/site-packages/sanic/app.py", line 33, in __init__
logging.config.dictConfig(log_config)
File "/usr/local/anaconda/envs/proj/lib/python3.6/logging/config.py", line 795, in dictConfig
dictConfigClass(config).configure()
File "/usr/local/anaconda/envs/proj/lib/python3.6/logging/config.py", line 566, in configure
'%r: %s' % (name, e))
ValueError: Unable to configure handler 'accessTimedRotatingFile': [Errno 13] Permission denied: '/access.log'
```
| Same here..
Looks like it's trying to create that at the host root folder
Actually, it shouldn't create any log file at all.
I can't reproduce this. Can one of you give the full example you're using? Also the sanic version:
```
import sanic
sanic.__version__
'0.5.4'
```
Hi @r0fls, see this repo: https://github.com/woutor/sanic-logging
+1 | 2017-06-12T06:20:34 |
|
sanic-org/sanic | 819 | sanic-org__sanic-819 | [
"817"
] | dbcbf124565100fd2487f123306919462cdf3abb | diff --git a/sanic/config.py b/sanic/config.py
--- a/sanic/config.py
+++ b/sanic/config.py
@@ -201,4 +201,10 @@ def load_environment_vars(self):
for k, v in os.environ.items():
if k.startswith(SANIC_PREFIX):
_, config_key = k.split(SANIC_PREFIX, 1)
- self[config_key] = v
+ try:
+ self[config_key] = int(v)
+ except ValueError:
+ try:
+ self[config_key] = float(v)
+ except ValueError:
+ self[config_key] = v
| Configs loaded from environmental variables aren't properly typed
When setting configs using environmental variables `export SANIC_REQUEST_TIMEOUT=30`
```
app = Sanic(__name__)
print(type(app.config.REQUEST_TIMEOUT)) # <class 'str'>
```
The problem is in this function
```
# .../sanic/config.py
def load_environment_vars(self):
"""
Looks for any SANIC_ prefixed environment variables and applies
them to the configuration if present.
"""
for k, v in os.environ.items():
if k.startswith(SANIC_PREFIX):
_, config_key = k.split(SANIC_PREFIX, 1)
self[config_key] = v # os.environ values are always of type str
```
| This could be solved by simply trying to cast `v` to `int` and, on `ValueError`, leaving it as a string.
@mczp ya I was thinking that, but then I guess there could be other types as well.. :( oh well, let's do that for now unless we come up with something better | 2017-06-27T03:50:02 |
|
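The patched loader tries `int`, then `float`, then falls back to the raw string, so numeric settings such as `SANIC_REQUEST_TIMEOUT` come back with a usable type. A standalone sketch of the same cascade:

```python
def coerce(value: str):
    for cast in (int, float):
        try:
            return cast(value)
        except ValueError:
            continue
    return value

print(coerce('30'), coerce('2.5'), coerce('debug'))  # 30 2.5 debug
```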
sanic-org/sanic | 862 | sanic-org__sanic-862 | [
"752"
] | e8a9b4743bd1c56f12c7aaa73d2c926e8f227cc9 | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -33,9 +33,7 @@ def __init__(self, name=None, router=None, error_handler=None,
logging.config.dictConfig(log_config)
# Only set up a default log handler if the
# end-user application didn't set anything up.
- if not (logging.root.handlers and
- log.level == logging.NOTSET and
- log_config):
+ if not logging.root.handlers and log.level == logging.NOTSET:
formatter = logging.Formatter(
"%(asctime)s: %(levelname)s: %(message)s")
handler = logging.StreamHandler()
| diff --git a/tests/test_logging.py b/tests/test_logging.py
--- a/tests/test_logging.py
+++ b/tests/test_logging.py
@@ -1,5 +1,7 @@
-import asyncio
import uuid
+from importlib import reload
+
+from sanic.config import LOGGING
from sanic.response import text
from sanic import Sanic
from io import StringIO
@@ -10,6 +12,11 @@
message: %(message)s'''
+def reset_logging():
+ logging.shutdown()
+ reload(logging)
+
+
def test_log():
log_stream = StringIO()
for handler in logging.root.handlers[:]:
@@ -32,5 +39,19 @@ def handler(request):
log_text = log_stream.getvalue()
assert rand_string in log_text
+
+def test_default_log_fmt():
+
+ reset_logging()
+ Sanic()
+ for fmt in [h.formatter for h in logging.getLogger('sanic').handlers]:
+ assert fmt._fmt == LOGGING['formatters']['simple']['format']
+
+ reset_logging()
+ Sanic(log_config=None)
+ for fmt in [h.formatter for h in logging.getLogger('sanic').handlers]:
+ assert fmt._fmt == "%(asctime)s: %(levelname)s: %(message)s"
+
+
if __name__ == "__main__":
test_log()
| [Errno 13] Permission denied: '/access.log' with disabled debug & logging?
With:
```text
app.run(host=API_HOST, port=PORT, sock=None, debug=False, workers=API_WORKERS, log_config=None)
```
```text
app = Sanic(__name__)
File "/usr/local/anaconda/envs/proj/lib/python3.6/site-packages/sanic/app.py", line 33, in __init__
logging.config.dictConfig(log_config)
File "/usr/local/anaconda/envs/proj/lib/python3.6/logging/config.py", line 795, in dictConfig
dictConfigClass(config).configure()
File "/usr/local/anaconda/envs/proj/lib/python3.6/logging/config.py", line 566, in configure
'%r: %s' % (name, e))
ValueError: Unable to configure handler 'accessTimedRotatingFile': [Errno 13] Permission denied: '/access.log'
```
| Same here..
Looks like it's trying to create that at the host root folder
Actually, it shouldn't create any log file at all.
I can't reproduce this. Can one of you give the full example you're using? Also the sanic version:
```
import sanic
sanic.__version__
'0.5.4'
```
Hi @r0fls, see this repo: https://github.com/woutor/sanic-logging
+1 | 2017-07-24T04:55:10 |
sanic-org/sanic | 863 | sanic-org__sanic-863 | [
"760"
] | e8a9b4743bd1c56f12c7aaa73d2c926e8f227cc9 | diff --git a/sanic/server.py b/sanic/server.py
--- a/sanic/server.py
+++ b/sanic/server.py
@@ -139,8 +139,10 @@ def connection_timeout(self):
self._request_stream_task.cancel()
if self._request_handler_task:
self._request_handler_task.cancel()
- exception = RequestTimeout('Request Timeout')
- self.write_error(exception)
+ try:
+ raise RequestTimeout('Request Timeout')
+ except RequestTimeout as exception:
+ self.write_error(exception)
# -------------------------------------------- #
# Parsing
@@ -317,6 +319,7 @@ async def stream_response(self, response):
self.cleanup()
def write_error(self, exception):
+ response = None
try:
response = self.error_handler.response(self.request, exception)
version = self.request.version if self.request else '1.1'
@@ -331,20 +334,23 @@ def write_error(self, exception):
from_error=True)
finally:
if self.has_log:
- extra = {
- 'status': response.status,
- 'host': '',
- 'request': str(self.request) + str(self.url)
- }
- if response and isinstance(response, HTTPResponse):
+ extra = dict()
+ if isinstance(response, HTTPResponse):
+ extra['status'] = response.status
extra['byte'] = len(response.body)
else:
+ extra['status'] = 0
extra['byte'] = -1
if self.request:
extra['host'] = '%s:%d' % self.request.ip,
extra['request'] = '%s %s' % (self.request.method,
self.url)
- netlog.info('', extra=extra)
+ else:
+ extra['host'] = 'UNKNOWN'
+ extra['request'] = 'nil'
+ if self.parser and not (self.keep_alive
+ and extra['status'] == 408):
+ netlog.info('', extra=extra)
self.transport.close()
def bail_out(self, message, from_error=False):
| keepalive message recorded in debug mode
ENVIRONMENT
==========
centos7_x64
chrome: 58.0.3029.110
sanic: 0.5.4
python: 3.5.3 or 3.6.1
CODE
====
```
from sanic import Sanic
from sanic.response import text,json
app = Sanic(__name__)
@app.route('/')
async def test(request):
return text('hello world')
if __name__ == "__main__":
app.run(host="0.0.0.0", port=8080, debug=True)
```
ISSUE DESCRIPTION
=============
When issuing an HTTP request from Chrome for the first time, a Sanic RequestTimeout exception is thrown 60s after we get the correct response from the server. I am not sure whether this is the expected behavior; I didn't see any reference to this error in the documentation:
```
2017-05-31 02:50:34 - (sanic)[INFO]: Goin' Fast @ http://0.0.0.0:8080
2017-05-31 02:50:34 - (sanic)[INFO]: Starting worker [3277]
I print when a request is received by the server
2017-05-31 02:50:38 - (network)[INFO][192.168.31.164:53576]: GET http://192.168.31.227:8080/number/123 200 13
2017-05-31 02:51:38 - (sanic)[ERROR]: NoneType: None
2017-05-31 02:51:38 - (network)[INFO][]: NoneNone 408 22
```
COMMENT
========
After capturing the tcpdump packets, I saw an HTTP 408 message sent by Sanic to the browser when it reaches SANIC_REQUEST_TIMEOUT (60s). It can also be seen in the code that the event loop calls connection_timeout() after waiting that long, so I assume this is the expected behavior of Sanic. Please confirm whether that is true. If it is, this symptom should be mentioned in the documentation, and perhaps a switch added to turn off sending this exception back to the client.
| 2017-07-24T05:20:39 |
||
sanic-org/sanic | 878 | sanic-org__sanic-878 | [
"876"
] | 9b3fbe45932b70d0872458e70cb39fd1688a3055 | diff --git a/sanic/__init__.py b/sanic/__init__.py
--- a/sanic/__init__.py
+++ b/sanic/__init__.py
@@ -1,6 +1,6 @@
from sanic.app import Sanic
from sanic.blueprints import Blueprint
-__version__ = '0.5.4'
+__version__ = '0.6.0'
__all__ = ['Sanic', 'Blueprint']
| 0.5.5 release request
Because 0.5.4 has an actual protocol parsing problem (#755), I request a quick 0.5.5 release.
It causes actual request loss and unhandleable 400 errors for sanic users (unless they patch sanic locally).
| https://github.com/channelcat/sanic/issues/830 | 2017-08-03T02:13:15 |
|
sanic-org/sanic | 883 | sanic-org__sanic-883 | [
"874"
] | 7b66a56cad9820b3172850fde9a61f0cb865dcf5 | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -214,9 +214,12 @@ def add_route(self, handler, uri, methods=frozenset({'GET'}), host=None,
return handler
# Decorator
- def websocket(self, uri, host=None, strict_slashes=False):
+ def websocket(self, uri, host=None, strict_slashes=False,
+ subprotocols=None):
"""Decorate a function to be registered as a websocket route
:param uri: path of the URL
+ :param subprotocols: optional list of strings with the supported
+ subprotocols
:param host:
:return: decorated function
"""
@@ -236,7 +239,7 @@ async def websocket_handler(request, *args, **kwargs):
# On Python3.5 the Transport classes in asyncio do not
# have a get_protocol() method as in uvloop
protocol = request.transport._protocol
- ws = await protocol.websocket_handshake(request)
+ ws = await protocol.websocket_handshake(request, subprotocols)
# schedule the application handler
# its future is kept in self.websocket_tasks in case it
diff --git a/sanic/websocket.py b/sanic/websocket.py
--- a/sanic/websocket.py
+++ b/sanic/websocket.py
@@ -41,7 +41,7 @@ def write_response(self, response):
else:
super().write_response(response)
- async def websocket_handshake(self, request):
+ async def websocket_handshake(self, request, subprotocols=None):
# let the websockets package do the handshake with the client
headers = []
@@ -57,6 +57,17 @@ def set_header(k, v):
except InvalidHandshake:
raise InvalidUsage('Invalid websocket request')
+ subprotocol = None
+ if subprotocols and 'Sec-Websocket-Protocol' in request.headers:
+ # select a subprotocol
+ client_subprotocols = [p.strip() for p in request.headers[
+ 'Sec-Websocket-Protocol'].split(',')]
+ for p in client_subprotocols:
+ if p in subprotocols:
+ subprotocol = p
+ set_header('Sec-Websocket-Protocol', subprotocol)
+ break
+
# write the 101 response back to the client
rv = b'HTTP/1.1 101 Switching Protocols\r\n'
for k, v in headers:
@@ -69,5 +80,6 @@ def set_header(k, v):
max_size=self.websocket_max_size,
max_queue=self.websocket_max_queue
)
+ self.websocket.subprotocol = subprotocol
self.websocket.connection_made(request.transport)
return self.websocket
| diff --git a/tests/test_routes.py b/tests/test_routes.py
--- a/tests/test_routes.py
+++ b/tests/test_routes.py
@@ -341,6 +341,7 @@ def test_websocket_route():
@app.websocket('/ws')
async def handler(request, ws):
+ assert ws.subprotocol is None
ev.set()
request, response = app.test_client.get('/ws', headers={
@@ -352,6 +353,48 @@ async def handler(request, ws):
assert ev.is_set()
+def test_websocket_route_with_subprotocols():
+ app = Sanic('test_websocket_route')
+ results = []
+
+ @app.websocket('/ws', subprotocols=['foo', 'bar'])
+ async def handler(request, ws):
+ results.append(ws.subprotocol)
+
+ request, response = app.test_client.get('/ws', headers={
+ 'Upgrade': 'websocket',
+ 'Connection': 'upgrade',
+ 'Sec-WebSocket-Key': 'dGhlIHNhbXBsZSBub25jZQ==',
+ 'Sec-WebSocket-Version': '13',
+ 'Sec-WebSocket-Protocol': 'bar'})
+ assert response.status == 101
+
+ request, response = app.test_client.get('/ws', headers={
+ 'Upgrade': 'websocket',
+ 'Connection': 'upgrade',
+ 'Sec-WebSocket-Key': 'dGhlIHNhbXBsZSBub25jZQ==',
+ 'Sec-WebSocket-Version': '13',
+ 'Sec-WebSocket-Protocol': 'bar, foo'})
+ assert response.status == 101
+
+ request, response = app.test_client.get('/ws', headers={
+ 'Upgrade': 'websocket',
+ 'Connection': 'upgrade',
+ 'Sec-WebSocket-Key': 'dGhlIHNhbXBsZSBub25jZQ==',
+ 'Sec-WebSocket-Version': '13',
+ 'Sec-WebSocket-Protocol': 'baz'})
+ assert response.status == 101
+
+ request, response = app.test_client.get('/ws', headers={
+ 'Upgrade': 'websocket',
+ 'Connection': 'upgrade',
+ 'Sec-WebSocket-Key': 'dGhlIHNhbXBsZSBub25jZQ==',
+ 'Sec-WebSocket-Version': '13'})
+ assert response.status == 101
+
+ assert results == ['bar', 'bar', None, None]
+
+
def test_route_duplicate():
app = Sanic('test_route_duplicate')
| Websocket subprotocol support
The _websockets_ library that Sanic utilizes supports passing a list of acceptable websocket subprotocols into their server or client implementations, to be used during the opening handshake ("Sec-WebSocket-Protocol"). Sanic does not support this currently and does not do anything w/ subprotocols in the `websocket_handshake` method. The eventual websocket object that gets passed to the Sanic handler has an "subprotocol" attribute, which would normally be set during the handshake. Currently it is empty. Is there any plan to support subprotocols in the future? Thanks for the excellent framework!
| Hmm, it seems like `Sanic` is using `WebSocketCommonProtocol`, not `WebSocketServerProtocol`. But `WebSocketCommonProtocol` doesn't support passing in `subprotocols` directly (we could hook it up if really needed). @r0fls is there any reason to use `WebSocketCommonProtocol` instead of `WebSocketServerProtocol`?
Also , https://github.com/channelcat/sanic/blob/master/sanic/websocket.py#L48 this piece of code is mainly from the implementation of `WebSocketServerProtocol`.
cc @miguelgrinberg
Yes, the support for negotiating a subprotocol is in the `WebSocketServerProtocol` class, which I think would be harder to integrate with the sanic request object than `WebSocketCommonProtocol`, which only has the low-level WebSocket support.
What I think makes the most sense here is to implement the negotiation of the subprotocol in `websocket_handshake`. We can add a `subprotocols` argument to the `@websocket` decorator so that the application can specify what subprotocols it accepts. Example:
```python
@app.websocket('/feed', subprotocols=['foo', 'bar'])
async def feed(request, ws):
if ws.subprotocol:
print('The subprotocol is: ', ws.subprotocol)
while True:
data = 'hello!'
print('Sending: ' + data)
await ws.send(data)
data = await ws.recv()
print('Received: ' + data)
``` | 2017-08-08T18:24:12 |
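The negotiation added in `websocket_handshake` boils down to picking the first client-offered subprotocol that the route also declared, or `None` if nothing overlaps. A tiny sketch of that selection, using the same 'foo'/'bar' values as the test above:

```python
offered = [p.strip() for p in 'bar, foo'.split(',')]  # Sec-WebSocket-Protocol header value
supported = ['foo', 'bar']                            # subprotocols= on the route

subprotocol = next((p for p in offered if p in supported), None)
print(subprotocol)  # 'bar' -- the first client preference the server supports
```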
sanic-org/sanic | 917 | sanic-org__sanic-917 | [
"914"
] | fee9de96dec787ff5b5ca474adc38ea8bbe6a62d | diff --git a/sanic/exceptions.py b/sanic/exceptions.py
--- a/sanic/exceptions.py
+++ b/sanic/exceptions.py
@@ -209,6 +209,7 @@ class Unauthorized(SanicException):
Unauthorized exception (401 HTTP status code).
:param message: Message describing the exception.
+ :param status_code: HTTP Status code.
:param scheme: Name of the authentication scheme to be used.
When present, kwargs is used to complete the WWW-Authentication header.
@@ -216,11 +217,13 @@ class Unauthorized(SanicException):
Examples::
# With a Basic auth-scheme, realm MUST be present:
- raise Unauthorized("Auth required.", "Basic", realm="Restricted Area")
+ raise Unauthorized("Auth required.",
+ scheme="Basic",
+ realm="Restricted Area")
# With a Digest auth-scheme, things are a bit more complicated:
raise Unauthorized("Auth required.",
- "Digest",
+ scheme="Digest",
realm="Restricted Area",
qop="auth, auth-int",
algorithm="MD5",
@@ -228,20 +231,24 @@ class Unauthorized(SanicException):
opaque="zyxwvu")
# With a Bearer auth-scheme, realm is optional so you can write:
- raise Unauthorized("Auth required.", "Bearer")
+ raise Unauthorized("Auth required.", scheme="Bearer")
# or, if you want to specify the realm:
- raise Unauthorized("Auth required.", "Bearer", realm="Restricted Area")
+ raise Unauthorized("Auth required.",
+ scheme="Bearer",
+ realm="Restricted Area")
"""
- def __init__(self, message, scheme, **kwargs):
- super().__init__(message)
+ def __init__(self, message, status_code=None, scheme=None, **kwargs):
+ super().__init__(message, status_code)
- values = ["{!s}={!r}".format(k, v) for k, v in kwargs.items()]
- challenge = ', '.join(values)
+ # if auth-scheme is specified, set "WWW-Authenticate" header
+ if scheme is not None:
+ values = ["{!s}={!r}".format(k, v) for k, v in kwargs.items()]
+ challenge = ', '.join(values)
- self.headers = {
- "WWW-Authenticate": "{} {}".format(scheme, challenge).rstrip()
- }
+ self.headers = {
+ "WWW-Authenticate": "{} {}".format(scheme, challenge).rstrip()
+ }
def abort(status_code, message=None):
| diff --git a/tests/test_exceptions.py b/tests/test_exceptions.py
--- a/tests/test_exceptions.py
+++ b/tests/test_exceptions.py
@@ -31,14 +31,18 @@ def handler_404(request):
def handler_403(request):
raise Forbidden("Forbidden")
+ @app.route('/401')
+ def handler_401(request):
+ raise Unauthorized("Unauthorized")
+
@app.route('/401/basic')
def handler_401_basic(request):
- raise Unauthorized("Unauthorized", "Basic", realm="Sanic")
+ raise Unauthorized("Unauthorized", scheme="Basic", realm="Sanic")
@app.route('/401/digest')
def handler_401_digest(request):
raise Unauthorized("Unauthorized",
- "Digest",
+ scheme="Digest",
realm="Sanic",
qop="auth, auth-int",
algorithm="MD5",
@@ -47,12 +51,16 @@ def handler_401_digest(request):
@app.route('/401/bearer')
def handler_401_bearer(request):
- raise Unauthorized("Unauthorized", "Bearer")
+ raise Unauthorized("Unauthorized", scheme="Bearer")
@app.route('/invalid')
def handler_invalid(request):
raise InvalidUsage("OK")
+ @app.route('/abort/401')
+ def handler_invalid(request):
+ abort(401)
+
@app.route('/abort')
def handler_invalid(request):
abort(500)
@@ -124,6 +132,9 @@ def test_forbidden_exception(exception_app):
def test_unauthorized_exception(exception_app):
"""Test the built-in Unauthorized exception"""
+ request, response = exception_app.test_client.get('/401')
+ assert response.status == 401
+
request, response = exception_app.test_client.get('/401/basic')
assert response.status == 401
assert response.headers.get('WWW-Authenticate') is not None
@@ -186,5 +197,8 @@ def test_exception_in_exception_handler_debug_off(exception_app):
def test_abort(exception_app):
"""Test the abort function"""
+ request, response = exception_app.test_client.get('/abort/401')
+ assert response.status == 401
+
request, response = exception_app.test_client.get('/abort')
assert response.status == 500
| Sanic exceptions
How are sanic exceptions supposed to work? The docs state that
> Exceptions can be thrown from within request handlers and will automatically be handled by Sanic. Exceptions take a message as their first argument, and can also take a status code to be passed back in the HTTP response.
This is my route
```python
@app.route("/")
async def test(request):
abort(401)
```
When I make a request on the path I get a response of:
> Internal Server Error
> The server encountered an internal error and cannot complete your request.

> 2017-08-24 10:18:43 - (sanic)[ERROR]: Traceback (most recent call last):
> File "/home/nikos/.virtualenvs/3.6/lib/python3.6/site-packages/sanic/app.py", line 503, in handle_request
> response = await response
> File "/home/nikos/Desktop/Side Projects/micro/test2.py", line 15, in test
> abort(401)
> File "/home/nikos/.virtualenvs/3.6/lib/python3.6/site-packages/sanic/exceptions.py", line 262, in abort
> raise sanic_exception(message=message, status_code=status_code)
> TypeError: __init__() missing 1 required positional argument: 'scheme'

Also after a bit the connection times out and the log trace is:
> 2017-08-24 10:18:43 - (network)[INFO][127.0.0.1:34734]: GET http://0.0.0.0:8001/ 500 144
> 2017-08-24 10:19:43 - (sanic)[ERROR]: Traceback (most recent call last):
> File "/home/nikos/.virtualenvs/3.6/lib/python3.6/site-packages/sanic/server.py", line 143, in connection_timeout
> raise RequestTimeout('Request Timeout')
> sanic.exceptions.RequestTimeout: Request Timeout
| There might be a bug with `abort(401)`.
`abort` function raises an exception based on SanicException.
`abort(401)` will raise `Unauthorized` exception.
And `Unauthorized.__init__` requires an argument `scheme`, but `SanicException` doesn't.
Maybe you can try:
```
@app.route("/")
async def test(request):
raise Unauthorized("Unauthorized", "Basic")
```
or
```
@app.route("/")
async def test(request):
abort(400)
```
Actually, I was just testing it. Yes, the Unauthorized exception needs a scheme argument. From the source code:
```python
def __init__(self, message, scheme, **kwargs):
super().__init__(message)
values = ["{!s}={!r}".format(k, v) for k, v in kwargs.items()]
challenge = ', '.join(values)
self.headers = {
"WWW-Authenticate": "{} {}".format(scheme, challenge).rstrip()
}
```
This might be a solution?
```python
def __init__(self, message, scheme=None, **kwargs):
```
| 2017-08-24T15:12:31 |
sanic-org/sanic | 943 | sanic-org__sanic-943 | [
"763"
] | a146ebd856a6227b6ba0a6107afdf60aeb8001e5 | diff --git a/sanic/server.py b/sanic/server.py
--- a/sanic/server.py
+++ b/sanic/server.py
@@ -189,10 +189,12 @@ def on_header(self, name, value):
and int(value) > self.request_max_size:
exception = PayloadTooLarge('Payload Too Large')
self.write_error(exception)
-
+ try:
+ value = value.decode()
+ except UnicodeDecodeError:
+ value = value.decode('latin_1')
self.headers.append(
- (self._header_fragment.decode().casefold(),
- value.decode()))
+ (self._header_fragment.decode().casefold(), value))
self._header_fragment = b''
| Bad char causes parse exception, and how can we log the URL when the request can't be parsed?
Using: sanic 0.5.1 (though I don't see what changed in 0.5.4 in the changelog)
In our code we are using:
```
@app.exception(Exception)
def server_error_handler(request, exception):
if request is not None:
msg = request.url + ' ' + traceback.format_exc()
elif isinstance(exception, RequestTimeout):
return response.text('timeout', status=504)
else:
msg = str(exception) + ' ' + traceback.format_exc()
asyncio.ensure_future(fluent.write_debug('error', msg))
```
to log exceptions. We find errors like:
> Bad Request Traceback (most recent call last):
> File "httptools/parser/parser.pyx", line 247, in httptools.parser.parser.cb_on_header_field (httptools/parser/parser.c:4007)
> File "httptools/parser/parser.pyx", line 109, in httptools.parser.parser.HttpParser._on_header_field (httptools/parser/parser.c:1893)
> File "httptools/parser/parser.pyx", line 105, in httptools.parser.parser.HttpParser._maybe_call_on_header (httptools/parser/parser.c:1822)
> File "/usr/local/lib/python3.5/dist-packages/sanic/server.py", line 157, in on_header
> self.headers.append((name.decode().casefold(), value.decode()))
> AttributeError: 'NoneType' object has no attribute 'decode'
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
> File "/usr/local/lib/python3.5/dist-packages/sanic/server.py", line 144, in data_received
> self.parser.feed_data(data)
> File "httptools/parser/parser.pyx", line 171, in httptools.parser.parser.HttpParser.feed_data (httptools/parser/parser.c:2721)
> httptools.parser.errors.HttpParserCallbackError: the on_header_field callback failed
At this moment the request has not been created. So is there a way I can log the accessed URL?
| A different exception, but a similar situation. Using python3 to make the request:
```
#!/usr/bin/env python
# coding=utf-8
import requests
url = 'http://127.0.0.1:9000/my_blueprint/foo'
headers = {'User-Agent': b'Mozilla/5.0 (Linux; Android 5.0; \xd6wn Smart Build/LRX21M) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/37.0.0.0 Mobile Safari/537.36'}
response = requests.get(url, headers=headers)
print(response.content)
```
sanic will show:
> 2017-06-01 17:38:37,644: ERROR: Traceback (most recent call last):
> File "httptools/parser/parser.pyx", line 247, in httptools.parser.parser.cb_on_header_field (httptools/parser/parser.c:4007)
> File "httptools/parser/parser.pyx", line 109, in httptools.parser.parser.HttpParser._on_header_field (httptools/parser/parser.c:1893)
> File "httptools/parser/parser.pyx", line 105, in httptools.parser.parser.HttpParser._maybe_call_on_header (httptools/parser/parser.c:1822)
> File "/home/jiamo/.pyenv/versions/new_service/lib/python3.5/site-packages/sanic/server.py", line 157, in on_header
> self.headers.append((name.decode().casefold(), value.decode()))
> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd6 in position 33: invalid continuation byte
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
> File "/home/jiamo/.pyenv/versions/new_service/lib/python3.5/site-packages/sanic/server.py", line 144, in data_received
> self.parser.feed_data(data)
> File "httptools/parser/parser.pyx", line 171, in httptools.parser.parser.HttpParser.feed_data (httptools/parser/parser.c:2721)
> httptools.parser.errors.HttpParserCallbackError: the on_header_field callback failed
>
Or is there any way to ignore the bad char?
Yes, I see the same issue in sanic 0.5.4, but I don't know what caused it. Can it be fixed in the next version?
```
2017-08-21 22:50:05 ERROR handlers.py:104 - Traceback (most recent call last):
File "httptools/parser/parser.pyx", line 247, in httptools.parser.parser.cb_on_header_field (httptools/parser/parser.c:4007)
File "httptools/parser/parser.pyx", line 109, in httptools.parser.parser.HttpParser._on_header_field (httptools/parser/parser.c:1893)
File "httptools/parser/parser.pyx", line 105, in httptools.parser.parser.HttpParser._maybe_call_on_header (httptools/parser/parser.c:1822)
File "/home/bot/services/pyvenv3/lib/python3.6/site-packages/sanic/server.py", line 164, in on_header
self.headers.append((name.decode().casefold(), value.decode()))
AttributeError: 'NoneType' object has no attribute 'decode'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/bot/services/pyvenv3/lib/python3.6/site-packages/sanic/server.py", line 151, in data_received
self.parser.feed_data(data)
File "httptools/parser/parser.pyx", line 171, in httptools.parser.parser.HttpParser.feed_data (httptools/parser/parser.c:2721)
httptools.parser.errors.HttpParserCallbackError: the on_header_field callback failed
```
I think httptools.parser.parser.HttpParser.feed_data cannot parse this charset as UTF-8; does someone know how we can fix this problem in httptools? | 2017-09-14T11:25:12 |
|
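The fix keeps the parser alive by falling back to latin-1 (which can decode any byte sequence) when a header value is not valid UTF-8, as with the `\xd6` byte in the User-Agent from the report above. A minimal sketch:

```python
raw = b'Mozilla/5.0 (Linux; Android 5.0; \xd6wn Smart Build/LRX21M)'

try:
    value = raw.decode()            # UTF-8 by default -> UnicodeDecodeError here
except UnicodeDecodeError:
    value = raw.decode('latin_1')   # never fails; \xd6 becomes 'Ö'

print(value)
```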
sanic-org/sanic | 953 | sanic-org__sanic-953 | [
"949"
] | 00d40a35cdec6dd61397ef461d76e73aad3bc31c | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -354,13 +354,13 @@ def middleware(self, middleware_or_request):
# Static Files
def static(self, uri, file_or_directory, pattern=r'/?.+',
use_modified_since=True, use_content_range=False,
- stream_large_files=False, name='static'):
+ stream_large_files=False, name='static', host=None):
"""Register a root to serve files from. The input can either be a
file or a directory. See
"""
static_register(self, uri, file_or_directory, pattern,
use_modified_since, use_content_range,
- stream_large_files, name)
+ stream_large_files, name, host)
def blueprint(self, blueprint, **options):
"""Register a blueprint on the application.
diff --git a/sanic/static.py b/sanic/static.py
--- a/sanic/static.py
+++ b/sanic/static.py
@@ -18,7 +18,7 @@
def register(app, uri, file_or_directory, pattern,
use_modified_since, use_content_range,
- stream_large_files, name='static'):
+ stream_large_files, name='static', host=None):
# TODO: Though sanic is not a file server, I feel like we should at least
# make a good effort here. Modified-since is nice, but we could
# also look into etags, expires, and caching
@@ -122,4 +122,4 @@ async def _handler(request, file_uri=None):
if not name.startswith('_static_'):
name = '_static_{}'.format(name)
- app.route(uri, methods=['GET', 'HEAD'], name=name)(_handler)
+ app.route(uri, methods=['GET', 'HEAD'], name=name, host=host)(_handler)
| diff --git a/tests/test_static.py b/tests/test_static.py
--- a/tests/test_static.py
+++ b/tests/test_static.py
@@ -161,3 +161,20 @@ def test_static_content_range_error(file_name, static_file_directory):
assert 'Content-Range' in response.headers
assert response.headers['Content-Range'] == "bytes */%s" % (
len(get_file_content(static_file_directory, file_name)),)
+
+
[email protected]('file_name', ['test.file', 'decode me.txt', 'python.png'])
+def test_static_file(static_file_directory, file_name):
+ app = Sanic('test_static')
+ app.static(
+ '/testing.file',
+ get_file_path(static_file_directory, file_name),
+ host="www.example.com"
+ )
+
+ headers = {"Host": "www.example.com"}
+ request, response = app.test_client.get('/testing.file', headers=headers)
+ assert response.status == 200
+ assert response.body == get_file_content(static_file_directory, file_name)
+ request, response = app.test_client.get('/testing.file')
+ assert response.status == 404
| Support blueprint static files per vhost
Right now, static files/folders (using app.static or bp.static) are served for all vhosts.
I'd like Sanic to support static assets per vhost.
| 2017-09-27T08:25:30 |
|
sanic-org/sanic | 961 | sanic-org__sanic-961 | [
"920"
] | 086b5daa536e245fc228dfde8987eac17ed17dc8 | diff --git a/sanic/cookies.py b/sanic/cookies.py
--- a/sanic/cookies.py
+++ b/sanic/cookies.py
@@ -98,7 +98,8 @@ def __init__(self, key, value):
def __setitem__(self, key, value):
if key not in self._keys:
raise KeyError("Unknown cookie property")
- return super().__setitem__(key, value)
+ if value is not False:
+ return super().__setitem__(key, value)
def encode(self, encoding):
output = ['%s=%s' % (self.key, _quote(self.value))]
| diff --git a/tests/test_cookies.py b/tests/test_cookies.py
--- a/tests/test_cookies.py
+++ b/tests/test_cookies.py
@@ -25,6 +25,25 @@ def handler(request):
assert response.text == 'Cookies are: working!'
assert response_cookies['right_back'].value == 'at you'
[email protected]("httponly,expected", [
+ (False, False),
+ (True, True),
+])
+def test_false_cookies_encoded(httponly, expected):
+ app = Sanic('test_text')
+
+ @app.route('/')
+ def handler(request):
+ response = text('hello cookies')
+ response.cookies['hello'] = 'world'
+ response.cookies['hello']['httponly'] = httponly
+ return text(response.cookies['hello'].encode('utf8'))
+
+ request, response = app.test_client.get('/')
+
+ assert ('HttpOnly' in response.text) == expected
+
+
@pytest.mark.parametrize("httponly,expected", [
(False, False),
(True, True),
@@ -34,7 +53,7 @@ def test_false_cookies(httponly, expected):
@app.route('/')
def handler(request):
- response = text('Cookies are: {}'.format(request.cookies['test']))
+ response = text('hello cookies')
response.cookies['right_back'] = 'at you'
response.cookies['right_back']['httponly'] = httponly
return response
@@ -43,7 +62,7 @@ def handler(request):
response_cookies = SimpleCookie()
response_cookies.load(response.headers.get('Set-Cookie', {}))
- 'HttpOnly' in response_cookies == expected
+ assert ('HttpOnly' in response_cookies['right_back'].output()) == expected
def test_http2_cookies():
app = Sanic('test_http2_cookies')
| Cookie secure option not encoded properly
When `Cookies.encode` encounters `response.cookies["<cookie>"]["secure"] = False` then it outputs:
`b'Domain=xad.com; Path=/; Secure=False'`
where it should output:
`b'Domain=xad.com; Path=/;'` when `response.cookies["<cookie>"]["secure"] = False`
and
`b'Domain=xad.com; Path=/; Secure;'` when `response.cookies["<cookie>"]["secure"] = True`
| 2017-10-06T05:22:26 |
|
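A stripped-down sketch of what the one-line change does: setting a flag-style cookie property to `False` is now silently dropped instead of being stored and later rendered as `Secure=False`. This mimics `Cookie.__setitem__` from the diff above without pulling in the rest of sanic (the property list is abbreviated):

```python
class FlagAwareCookie(dict):
    _keys = {'secure', 'httponly', 'domain', 'path'}  # abbreviated

    def __setitem__(self, key, value):
        if key not in self._keys:
            raise KeyError("Unknown cookie property")
        if value is not False:          # the added guard
            super().__setitem__(key, value)

c = FlagAwareCookie()
c['secure'] = False    # dropped: nothing stored, so nothing serialized
c['httponly'] = True   # kept: will render as "HttpOnly"
print(dict(c))         # {'httponly': True}
```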
sanic-org/sanic | 1,004 | sanic-org__sanic-1004 | [
"1000"
] | bf6ed217c23faefe4ac52f16321afca4b2b419d4 | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -28,7 +28,8 @@ class Sanic:
def __init__(self, name=None, router=None, error_handler=None,
load_env=True, request_class=None,
- strict_slashes=False, log_config=None):
+ strict_slashes=False, log_config=None,
+ configure_logging=True):
# Get name from previous stack frame
if name is None:
@@ -36,7 +37,8 @@ def __init__(self, name=None, router=None, error_handler=None,
name = getmodulename(frame_records[1])
# logging
- logging.config.dictConfig(log_config or LOGGING_CONFIG_DEFAULTS)
+ if configure_logging:
+ logging.config.dictConfig(log_config or LOGGING_CONFIG_DEFAULTS)
self.name = name
self.router = router or Router()
@@ -47,6 +49,7 @@ def __init__(self, name=None, router=None, error_handler=None,
self.response_middleware = deque()
self.blueprints = {}
self._blueprint_order = []
+ self.configure_logging = configure_logging
self.debug = None
self.sock = None
self.strict_slashes = strict_slashes
@@ -793,7 +796,7 @@ def _helper(self, host=None, port=None, debug=False,
listeners = [partial(listener, self) for listener in listeners]
server_settings[settings_name] = listeners
- if debug:
+ if self.configure_logging and debug:
logger.setLevel(logging.DEBUG)
if self.config.LOGO is not None:
logger.debug(self.config.LOGO)
| Do not override log configuration
Hello.
Can you make the `logging.config.dictConfig` call in `app.py` optional? My application already has a comprehensive log configuration (and does not use sanic's log config); I don't need to change it.
For the moment, my workaround is to override the `Sanic` class:
```
class UseMyLoggingSanic(Sanic):
def __init__(self, name=None, router=None, error_handler=None,
load_env=True, request_class=None,
strict_slashes=False):
from collections import deque, defaultdict
from inspect import stack, getmodulename
from sanic.router import Router
from sanic.handlers import ErrorHandler
from sanic.config import Config
# Get name from previous stack frame
if name is None:
frame_records = stack()[1]
name = getmodulename(frame_records[1])
# logging
# logging.config.dictConfig(log_config or LOGGING_CONFIG_DEFAULTS)
self.log_config = None
self.name = name
self.router = router or Router()
self.request_class = request_class
self.error_handler = error_handler or ErrorHandler()
self.config = Config(load_env=load_env)
self.request_middleware = deque()
self.response_middleware = deque()
self.blueprints = {}
self._blueprint_order = []
self.debug = None
self.sock = None
self.strict_slashes = strict_slashes
self.listeners = defaultdict(list)
self.is_running = False
self.is_request_stream = False
self.websocket_enabled = False
self.websocket_tasks = set()
# Register alternative method names
self.go_fast = self.run
```
This way, logs are routed correctly in my debug terminal, and not twice
| You can pass in your own configuration. Or are you saying you have logging in another layer of your architecture?
You can disable access_log with `app.run(access_log=False)`.
Yes, I have a configuration module that handles the logging config on its own. I now simply call `Sanic(__name__, log_config=None)` and logs are disabled, but I don't get any access log from sanic at all. Can't you just make optional the call to
```
logging.config.dictConfig(log_config or LOGGING_CONFIG_DEFAULTS)
```
? | 2017-11-02T13:32:04 |
|
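With the new constructor flag, an application that already configured logging elsewhere can keep sanic's hands off the logging tree entirely (assuming a sanic build that includes this patch):

```python
from sanic import Sanic

# dictConfig() is skipped, and the DEBUG-level override in run() is not applied either.
app = Sanic(__name__, configure_logging=False)
```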
sanic-org/sanic | 1,006 | sanic-org__sanic-1006 | [
"1003"
] | c3bcafb514c618d88633899c47d2bc712d53575e | diff --git a/sanic/router.py b/sanic/router.py
--- a/sanic/router.py
+++ b/sanic/router.py
@@ -130,8 +130,15 @@ def add(self, uri, methods, handler, host=None, strict_slashes=False,
return
# Add versions with and without trailing /
+ slashed_methods = self.routes_all.get(uri + '/', frozenset({}))
+ if isinstance(methods, Iterable):
+ _slash_is_missing = all(method in slashed_methods for
+ method in methods)
+ else:
+ _slash_is_missing = methods in slashed_methods
+
slash_is_missing = (
- not uri[-1] == '/' and not self.routes_all.get(uri + '/', False)
+ not uri[-1] == '/' and not _slash_is_missing
)
without_slash_is_missing = (
uri[-1] == '/' and not
| diff --git a/tests/test_routes.py b/tests/test_routes.py
--- a/tests/test_routes.py
+++ b/tests/test_routes.py
@@ -44,6 +44,24 @@ def handler(request):
request, response = app.test_client.post('/get')
assert response.status == 405
+def test_shorthand_routes_multiple():
+ app = Sanic('test_shorthand_routes_multiple')
+
+ @app.get('/get')
+ def get_handler(request):
+ return text('OK')
+
+ @app.options('/get')
+ def options_handler(request):
+ return text('')
+
+ request, response = app.test_client.get('/get/')
+ assert response.status == 200
+ assert response.text == 'OK'
+
+ request, response = app.test_client.options('/get/')
+ assert response.status == 200
+
def test_route_strict_slash():
app = Sanic('test_route_strict_slash')
@@ -431,7 +449,7 @@ async def handler(request, ws):
'Sec-WebSocket-Key': 'dGhlIHNhbXBsZSBub25jZQ==',
'Sec-WebSocket-Version': '13'})
assert response.status == 101
-
+
assert results == ['bar', 'bar', None, None]
@@ -754,6 +772,7 @@ async def handler(request):
assert response.status == 200
app.remove_route('/test', clean_cache=True)
+ app.remove_route('/test/', clean_cache=True)
request, response = app.test_client.get('/test')
assert response.status == 404
| Unexpected HTTP 405 errors when using shorthand route decorators
Hello Sanic folks,
I've been playing with Sanic recently and stumbled on getting weird 405 HTTP errors on endpoints that have methods defined. This only happens when 1) I use the `app.<http_method>` (e.g. `app.post`) decorator and 2) when I try to query and endpoint with trailing slash and the `strict_slash` option is set to `False` (the default).
I tried to reproduce the error with a simple case, which I have done as a test [here](https://travis-ci.org/bow/sanic/jobs/296017905). As you can see, the builds are all failing with a similar unexpected HTTP status code error.
After digging a little bit deeper, I think I've pinned this down to [this block of code](https://github.com/channelcat/sanic/blob/master/sanic/router.py#L133) which checks for an existing route using only the URI but without checking the HTTP method. The reason this causes the bug and only when `strict_slashes=False` is because when adding the route for the non-first HTTP method with the slash appended, both boolean checks evaluate to `False`, so the route is not added. However on an actual query, a method check is done and since the route for that method is missing, we have a 405.
Anyway, [I added the extra HTTP method check](https://github.com/bow/sanic/commit/7bde17aed54fd6b2ed5b48833ba8adc52fc78d21) on a my own branch and [it seems to make the test case pass without breaking anything else](https://travis-ci.org/bow/sanic/builds/296020569).
Would this fix be useful for Sanic?
By the way, I am opening this as an issue since I'm not sure the code and test I've written fit. I'm open to changing it if you think it would fit the rest of the code better and then submit a proper PR :).
| 2017-11-04T01:30:34 |
|
sanic-org/sanic | 1,045 | sanic-org__sanic-1045 | [
"1016"
] | 1b0ad2c3cd28850d63238a5a32c1958ade7f967f | diff --git a/sanic/__init__.py b/sanic/__init__.py
--- a/sanic/__init__.py
+++ b/sanic/__init__.py
@@ -1,6 +1,6 @@
from sanic.app import Sanic
from sanic.blueprints import Blueprint
-__version__ = '0.6.0'
+__version__ = '0.7.0'
__all__ = ['Sanic', 'Blueprint']
| 0.6.1 release to PyPi
Hey folks,
There's been a bunch of substantive changes in the past few months; I think it warrants a release of 0.6.1 (or 0.7, considering there may be large changes in PRs like #939). Any chance we could get a new candidate uploaded to PyPi?
If there's a better place to ask this, I'm happy to head there.
| I'll make some time to do a full `0.7.0` release this week
cc @r0fls
Thanks!
Please update the CHANGELOG as well!
@pikeas
I think the changelog has been abandoned at this point. There has been a _lot_ of releases since it was last updated. For now, the best way to see changes is in the Releases page on github, here: https://github.com/channelcat/sanic/releases
@seemethere
Do you still plan to do a 0.7.0 release soon? Is there any more testing or QC you need before it is released, given the large number of changes since 0.6.0, including my commits in #939?
For what its worth, I am using Sanic@master in a semi-production application at work and it's performing great.
Any updates on the v0.7 release @seemethere?
If you're too busy at the moment @seemethere, isn't there someone else who can release 0.7.0? | 2017-12-06T03:14:16 |
|
sanic-org/sanic | 1,054 | sanic-org__sanic-1054 | [
"1052"
] | 25859006924ac3cf997dd235f618e2dd08e8b97a | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -543,6 +543,7 @@ async def handle_request(self, request, write_callback, stream_callback):
# Fetch handler from router
handler, args, kwargs, uri = self.router.get(request)
+
request.uri_template = uri
if handler is None:
raise ServerError(
diff --git a/sanic/exceptions.py b/sanic/exceptions.py
--- a/sanic/exceptions.py
+++ b/sanic/exceptions.py
@@ -150,6 +150,16 @@ class InvalidUsage(SanicException):
pass
+@add_status_code(405)
+class MethodNotSupported(SanicException):
+ def __init__(self, message, method, allowed_methods):
+ super().__init__(message)
+ self.headers = dict()
+ self.headers["Allow"] = ", ".join(allowed_methods)
+ if method in ['HEAD', 'PATCH', 'PUT', 'DELETE']:
+ self.headers['Content-Length'] = 0
+
+
@add_status_code(500)
class ServerError(SanicException):
pass
@@ -167,8 +177,6 @@ class URLBuildError(ServerError):
class FileNotFound(NotFound):
- pass
-
def __init__(self, message, path, relative_url):
super().__init__(message)
self.path = path
@@ -198,8 +206,6 @@ class HeaderNotFound(InvalidUsage):
@add_status_code(416)
class ContentRangeError(SanicException):
- pass
-
def __init__(self, message, content_range):
super().__init__(message)
self.headers = {
diff --git a/sanic/router.py b/sanic/router.py
--- a/sanic/router.py
+++ b/sanic/router.py
@@ -3,7 +3,7 @@
from collections.abc import Iterable
from functools import lru_cache
-from sanic.exceptions import NotFound, InvalidUsage
+from sanic.exceptions import NotFound, MethodNotSupported
from sanic.views import CompositionView
Route = namedtuple(
@@ -352,6 +352,16 @@ def get(self, request):
except NotFound:
return self._get(request.path, request.method, '')
+ def get_supported_methods(self, url):
+ """Get a list of supported methods for a url and optional host.
+
+ :param url: URL string (including host)
+ :return: frozenset of supported methods
+ """
+ route = self.routes_all.get(url)
+ # if methods are None then this logic will prevent an error
+ return getattr(route, 'methods', None) or frozenset()
+
@lru_cache(maxsize=ROUTER_CACHE_SIZE)
def _get(self, url, method, host):
"""Get a request handler based on the URL of the request, or raises an
@@ -364,9 +374,10 @@ def _get(self, url, method, host):
url = host + url
# Check against known static routes
route = self.routes_static.get(url)
- method_not_supported = InvalidUsage(
- 'Method {} not allowed for URL {}'.format(
- method, url), status_code=405)
+ method_not_supported = MethodNotSupported(
+ 'Method {} not allowed for URL {}'.format(method, url),
+ method=method,
+ allowed_methods=self.get_supported_methods(url))
if route:
if route.methods and method not in route.methods:
raise method_not_supported
@@ -409,7 +420,7 @@ def is_stream_handler(self, request):
"""
try:
handler = self.get(request)[0]
- except (NotFound, InvalidUsage):
+ except (NotFound, MethodNotSupported):
return False
if (hasattr(handler, 'view_class') and
hasattr(handler.view_class, request.method.lower())):
| diff --git a/tests/test_response.py b/tests/test_response.py
--- a/tests/test_response.py
+++ b/tests/test_response.py
@@ -35,6 +35,25 @@ async def sample_streaming_fn(response):
await asyncio.sleep(.001)
response.write('bar')
+def test_method_not_allowed():
+ app = Sanic('method_not_allowed')
+
+ @app.get('/')
+ async def test(request):
+ return response.json({'hello': 'world'})
+
+ request, response = app.test_client.head('/')
+ assert response.headers['Allow']== 'GET'
+
+ @app.post('/')
+ async def test(request):
+ return response.json({'hello': 'world'})
+
+ request, response = app.test_client.head('/')
+ assert response.status == 405
+ assert set(response.headers['Allow'].split(', ')) == set(['GET', 'POST'])
+ assert response.headers['Content-Length'] == '0'
+
@pytest.fixture
def json_app():
@@ -254,4 +273,4 @@ async def file_route(request, filename):
assert 'Content-Length' in response.headers
assert int(response.headers[
'Content-Length']) == len(
- get_file_content(static_file_directory, file_name))
\ No newline at end of file
+ get_file_content(static_file_directory, file_name))
| RFC 7231 violations
https://tools.ietf.org/html/rfc7231#section-4.1 specifies that
> All general-purpose servers MUST support the methods GET and HEAD.
> All other methods are OPTIONAL.
While it's not defined what 'general purpose' means, imo Sanic should by default also accept HEAD requests, performing the same action as a GET except that no body is returned.
Testcase:
```python
@app.route('/sanic/')
async def test(request):
return response.json({'hello': 'world'})
```
Result:
```
curl -I -v -o /dev/null http://127.0.0.1:5071/sanic/
* Trying 127.0.0.1...
* TCP_NODELAY set
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to 127.0.0.1 (127.0.0.1) port 5071 (#0)
> HEAD /sanic/ HTTP/1.1
> Host: 127.0.0.1:5071
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 405 Method Not Allowed
< Connection: keep-alive
< Keep-Alive: 5
< Content-Length: 46
< Content-Type: text/plain; charset=utf-8
<
* Excess found in a non pipelined read: excess = 46 url = /sanic/ (zero-length body)
* Curl_http_done: called premature == 0
0 46 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
* Connection #0 to host 127.0.0.1 left intact
```
This result has multiple issues:
- Incorrect Content-Length
- > The origin server MUST generate an Allow header field in a 405 response containing a list of the target resource's currently supported methods.
There's 3 options to fix the response code:
1. Have HEAD perform the same action as GET except that no body is returned (this only fixes the issue with HEAD requests).
2. Return [405 Method Not Allowed](https://tools.ietf.org/html/rfc7231#section-6.5.5) and Allow header containing allowed methods (this also fixes the response for POST/PUT/other requests).
3. Return [501 Not Implemented](https://tools.ietf.org/html/rfc7231#section-6.6.2).
For comparison with Flask regarding HEAD/GET: http://flask.pocoo.org/docs/0.12/quickstart/#http-methods (this refers to [RFC 2068](https://tools.ietf.org/html/rfc2068), however, the relevant parts for this issue are the same)
> If GET is present, HEAD will be added automatically for you. You donโt have to deal with that. It will also make sure that HEAD requests are handled as the [HTTP RFC](http://www.ietf.org/rfc/rfc2068.txt) (the document describing the HTTP protocol) demands, so you can completely ignore that part of the HTTP specification. Likewise, as of Flask 0.6, OPTIONS is implemented for you automatically as well.
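For illustration, a sketch of how option 2 (the behaviour implemented by the patch in this PR) can be exercised, mirroring the added test:
```python
from sanic import Sanic
from sanic import response

app = Sanic('method_not_allowed')

@app.get('/')
async def handler(request):
    return response.json({'hello': 'world'})

# A HEAD (or POST, PUT, ...) to '/' should now yield 405 plus an Allow header
request, resp = app.test_client.head('/')
assert resp.status == 405
assert resp.headers['Allow'] == 'GET'
```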
| 2017-12-12T04:12:14 |
|
sanic-org/sanic | 1,063 | sanic-org__sanic-1063 | [
"1062"
] | 008cbe5ce79e465fd9aeeba02449e53de88568ed | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -88,14 +88,20 @@ def add_task(self, task):
"""
try:
if callable(task):
- self.loop.create_task(task())
+ try:
+ self.loop.create_task(task(self))
+ except TypeError:
+ self.loop.create_task(task())
else:
self.loop.create_task(task)
except SanicException:
@self.listener('before_server_start')
def run(app, loop):
if callable(task):
- loop.create_task(task())
+ try:
+ loop.create_task(task(self))
+ except TypeError:
+ loop.create_task(task())
else:
loop.create_task(task)
| diff --git a/tests/test_create_task.py b/tests/test_create_task.py
--- a/tests/test_create_task.py
+++ b/tests/test_create_task.py
@@ -2,6 +2,7 @@
from sanic.response import text
from threading import Event
import asyncio
+from queue import Queue
def test_create_task():
@@ -28,3 +29,19 @@ async def set(request):
request, response = app.test_client.get('/late')
assert response.body == b'True'
+
+def test_create_task_with_app_arg():
+ app = Sanic('test_add_task')
+ q = Queue()
+
+ @app.route('/')
+ def not_set(request):
+ return "hello"
+
+ async def coro(app):
+ q.put(app.name)
+
+ app.add_task(coro)
+
+ request, response = app.test_client.get('/')
+ assert q.get() == 'test_add_task'
| Accessing app in a background task
The example in the docs shows a nice way of scheduling a long-running task:
```python
async def notify_server_started_after_five_seconds():
await asyncio.sleep(5)
print('Server successfully started!')
app.add_task(notify_server_started_after_five_seconds())
```
If I've stashed my `notify_server_started_after_five_seconds()` function in `tasks.py`, What's the best way to access the `app` instance from it?
I've looked at dirty solutions like making `app` global... but I'm surprised that `app` and `loop` don't get injected like they do on the decorated methods.
Thanks in advance!
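For context, a sketch of the kind of thing being asked for here (`tasks.py` is hypothetical; with the change in this PR, `add_task` will pass the app into the coroutine if it accepts an argument):
```python
# tasks.py (illustrative only)
import asyncio

async def notify_server_started_after_five_seconds(app):
    await asyncio.sleep(5)
    print('Server {} successfully started!'.format(app.name))
```
```python
# main module
from tasks import notify_server_started_after_five_seconds

# pass the function itself; the app instance is injected when the task accepts it
app.add_task(notify_server_started_after_five_seconds)
```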
| 2017-12-22T01:38:17 |
|
sanic-org/sanic | 1,137 | sanic-org__sanic-1137 | [
"1136"
] | 0b38dea6134a760a2138e37942bbacc55be0ed4a | diff --git a/sanic/handlers.py b/sanic/handlers.py
--- a/sanic/handlers.py
+++ b/sanic/handlers.py
@@ -79,9 +79,9 @@ def response(self, request, exception):
response = None
try:
if handler:
- response = handler(request=request, exception=exception)
+ response = handler(request, exception)
if response is None:
- response = self.default(request=request, exception=exception)
+ response = self.default(request, exception)
except Exception:
self.log(format_exc())
if self.debug:
| restrictive call to handlers from sanic.handlers.ErrorHandler.response
`sanic.handlers.ErrorHandler.response` uses keyword arguments when calling the handler and so forces the handler's signature.
Consider this very basic application:
```python
from sanic import Sanic
from sanic.exceptions import SanicException
from sanic.response import text
app = Sanic()
@app.exception(SanicException)
def http_error_handler(request, exception):
return text(":(")
app.run()
```
it works as intended:
$ python server.py
[2018-02-21 00:35:04 +0100] [23560] [INFO] Goin' Fast @ http://127.0.0.1:8000
[2018-02-21 00:35:04 +0100] [23560] [INFO] Starting worker [23560]
[2018-02-21 00:35:16 +0100] - (sanic.access)[INFO][1:2]: GET http://127.0.0.1:8000/ 200 2
[2018-02-21 00:36:01 +0100] [23560] [INFO] Stopping worker [23560]
[2018-02-21 00:36:01 +0100] [23560] [INFO] Server Stopped
But you cannot change the argument names:
```python
@app.exception(SanicException)
def http_error_handler(req, exc):
return text(":(")
```
because `ErrorHandler.response` uses keyword arguments:
$ python server.py
[2018-02-21 00:36:02 +0100] [23608] [INFO] Goin' Fast @ http://127.0.0.1:8000
[2018-02-21 00:36:02 +0100] [23608] [INFO] Starting worker [23608]
[2018-02-21 00:36:05 +0100] [23608] [ERROR] Traceback (most recent call last):
File "[..]/sanic/app.py", line 546, in handle_request
handler, args, kwargs, uri = self.router.get(request)
File "[..]/sanic/router.py", line 344, in get
return self._get(request.path, request.method, '')
File "[..]/sanic/router.py", line 393, in _get
raise NotFound('Requested URL {} not found'.format(url))
sanic.exceptions.NotFound: Requested URL / not found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "[..]/sanic/handlers.py", line 82, in response
response = handler(request=request, exception=exception)
TypeError: http_error_handler() got an unexpected keyword argument 'request'
[2018-02-21 00:36:05 +0100] - (sanic.access)[INFO][1:2]: GET http://127.0.0.1:8000/ 500 41
[2018-02-21 00:36:10 +0100] [23608] [INFO] KeepAlive Timeout. Closing connection.
[2018-02-21 00:38:27 +0100] [23608] [INFO] Stopping worker [23608]
[2018-02-21 00:38:27 +0100] [23608] [INFO] Server Stopped
A pull request is coming shortly to use positional arguments instead of keyword arguments in `ErrorHandler.response`.
| 2018-02-20T23:57:33 |
||
sanic-org/sanic | 1,222 | sanic-org__sanic-1222 | [
"1221"
] | e1c90202682d76361c77adb7bb2176730038466a | diff --git a/sanic/request.py b/sanic/request.py
--- a/sanic/request.py
+++ b/sanic/request.py
@@ -78,6 +78,11 @@ def __repr__(self):
self.method,
self.path)
+ def __bool__(self):
+ if self.transport:
+ return True
+ return False
+
@property
def json(self):
if self.parsed_json is None:
| Sanic `Request` object is falsey
```python
@app.route('/myroute')
async def someroute(request):
if request:
return 'some data'
raise Exception("Woops")
```
This code will raise the exception because `bool(request)` is `False`.
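As a workaround until truthiness is defined (the patch above bases `__bool__` on `request.transport`), an explicit `None` check sidesteps the surprise — a sketch, assuming `text` from `sanic.response`:
```python
@app.route('/myroute')
async def someroute(request):
    if request is not None:  # bool(request) is False here, but the object exists
        return text('some data')
    raise Exception("Woops")
```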
| 2018-05-15T21:36:02 |
||
sanic-org/sanic | 1,232 | sanic-org__sanic-1232 | [
"1231"
] | 202a4c6525cc47fb55a35be136a9b928011b78cc | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -303,7 +303,8 @@ async def websocket_handler(request, *args, **kwargs):
await fut
except (CancelledError, ConnectionClosed):
pass
- self.websocket_tasks.remove(fut)
+ finally:
+ self.websocket_tasks.remove(fut)
await ws.close()
self.router.add(uri=uri, handler=websocket_handler,
| Possible memory leak in websocket_handler function
Hey! It seems that I found a possible memory leak in `websocket_handler` function inside `Sanic.websocket` https://github.com/channelcat/sanic/blob/master/sanic/app.py#L301
If an arbitrary exception occurs in a websocket handler, it won't be caught there and the `fut` object will stay in the `self.websocket_tasks` list. Little by little this list will grow and consume more and more memory.
Probably it makes sense to handle any exception in that `try:`/`except:` block, not only `(CancelledError, ConnectionClosed)`?
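For illustration, the shape of the fix applied by this PR's patch — the removal moves into a `finally:` so cleanup happens no matter what the handler raised (excerpt-style sketch of the patched handler):
```python
fut = ensure_future(handler(request, ws, *args, **kwargs))
self.websocket_tasks.add(fut)
try:
    await fut
except (CancelledError, ConnectionClosed):
    pass
finally:
    self.websocket_tasks.remove(fut)  # runs even if the handler raised
await ws.close()
```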
| 2018-05-29T19:20:06 |
||
sanic-org/sanic | 1,267 | sanic-org__sanic-1267 | [
"1266"
] | 599834b0e1c41d934dce30b6393926635f72b78e | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -386,13 +386,14 @@ def middleware(self, middleware_or_request):
def static(self, uri, file_or_directory, pattern=r'/?.+',
use_modified_since=True, use_content_range=False,
stream_large_files=False, name='static', host=None,
- strict_slashes=None):
+ strict_slashes=None, content_type=None):
"""Register a root to serve files from. The input can either be a
file or a directory. See
"""
static_register(self, uri, file_or_directory, pattern,
use_modified_since, use_content_range,
- stream_large_files, name, host, strict_slashes)
+ stream_large_files, name, host, strict_slashes,
+ content_type)
def blueprint(self, blueprint, **options):
"""Register a blueprint on the application.
diff --git a/sanic/static.py b/sanic/static.py
--- a/sanic/static.py
+++ b/sanic/static.py
@@ -19,7 +19,7 @@
def register(app, uri, file_or_directory, pattern,
use_modified_since, use_content_range,
stream_large_files, name='static', host=None,
- strict_slashes=None):
+ strict_slashes=None, content_type=None):
# TODO: Though sanic is not a file server, I feel like we should at least
# make a good effort here. Modified-since is nice, but we could
# also look into etags, expires, and caching
@@ -41,6 +41,7 @@ def register(app, uri, file_or_directory, pattern,
If this is an integer, this represents the
threshold size to switch to file_stream()
:param name: user defined name used for url_for
+ :param content_type: user defined content type for header
"""
# If we're not trying to match a file directly,
# serve from the folder
@@ -95,10 +96,10 @@ async def _handler(request, file_uri=None):
del headers['Content-Length']
for key, value in _range.headers.items():
headers[key] = value
+ headers['Content-Type'] = content_type \
+ or guess_type(file_path)[0] or 'text/plain'
if request.method == 'HEAD':
- return HTTPResponse(
- headers=headers,
- content_type=guess_type(file_path)[0] or 'text/plain')
+ return HTTPResponse(headers=headers)
else:
if stream_large_files:
if isinstance(stream_large_files, int):
| diff --git a/tests/static/test.html b/tests/static/test.html
new file mode 100644
--- /dev/null
+++ b/tests/static/test.html
@@ -0,0 +1,26 @@
+<html>
+<body>
+<pre>
+ โโโโโ
+ โโโโโโโโโโโโ _______________
+ โโโโโ โโโโโโโโโโ / \
+ โโโโโโโโโโ โโโ โโโ | Gotta go fast! |
+ โโโโโโโโโ โโโโโโโโโโ | _________________/
+ โโโโโโ โโโโโโโโโโโโ |/
+ โโโโ โโโโโ โ โโ
+ โโโโโโโโโโโโโโโโโโ โโโโโโโโโ
+ โโโโโโโโโโโโโโ โโโโโโ โโโ
+โโโโโโโโโโโโโโโโโโโโโโโโ โโโโ
+โ โโโโโโโโโโโโโโโโโโโ
+โโโโโโ โโโโโโโโโโโโโโ
+ โโโโโโโโโโโโ
+ โโโโโโโโโโโโโ
+ โโโโ โโโ โ
+ โโ โโ
+ โโโโโโ โโโโโโโโโ
+โ โ โโโโโโ
+ โโโโโ
+
+</pre>
+</body>
+</html>
diff --git a/tests/test_blueprints.py b/tests/test_blueprints.py
--- a/tests/test_blueprints.py
+++ b/tests/test_blueprints.py
@@ -1,5 +1,6 @@
import asyncio
import inspect
+import os
import pytest
from sanic import Sanic
@@ -13,6 +14,14 @@
# GET
# ------------------------------------------------------------ #
+def get_file_path(static_file_directory, file_name):
+ return os.path.join(static_file_directory, file_name)
+
+def get_file_content(static_file_directory, file_name):
+ """The content of the static file to check"""
+ with open(get_file_path(static_file_directory, file_name), 'rb') as file:
+ return file.read()
+
@pytest.mark.parametrize('method', HTTP_METHODS)
def test_versioned_routes_get(method):
app = Sanic('test_shorhand_routes_get')
@@ -348,6 +357,28 @@ def test_bp_static():
assert response.status == 200
assert response.body == current_file_contents
[email protected]('file_name', ['test.html'])
+def test_bp_static_content_type(file_name):
+ # This is done here, since no other test loads a file here
+ current_file = inspect.getfile(inspect.currentframe())
+ current_directory = os.path.dirname(os.path.abspath(current_file))
+ static_directory = os.path.join(current_directory, 'static')
+
+ app = Sanic('test_static')
+ blueprint = Blueprint('test_static')
+ blueprint.static(
+ '/testing.file',
+ get_file_path(static_directory, file_name),
+ content_type='text/html; charset=utf-8'
+ )
+
+ app.blueprint(blueprint)
+
+ request, response = app.test_client.get('/testing.file')
+ assert response.status == 200
+ assert response.body == get_file_content(static_directory, file_name)
+ assert response.headers['Content-Type'] == 'text/html; charset=utf-8'
+
def test_bp_shorthand():
app = Sanic('test_shorhand_routes')
blueprint = Blueprint('test_shorhand_routes')
@@ -449,41 +480,41 @@ async def handler(request, ws):
def test_bp_group():
app = Sanic('test_nested_bp_groups')
-
+
deep_0 = Blueprint('deep_0', url_prefix='/deep')
deep_1 = Blueprint('deep_1', url_prefix = '/deep1')
@deep_0.route('/')
def handler(request):
return text('D0_OK')
-
+
@deep_1.route('/bottom')
def handler(request):
return text('D1B_OK')
mid_0 = Blueprint.group(deep_0, deep_1, url_prefix='/mid')
mid_1 = Blueprint('mid_tier', url_prefix='/mid1')
-
+
@mid_1.route('/')
def handler(request):
return text('M1_OK')
top = Blueprint.group(mid_0, mid_1)
-
+
app.blueprint(top)
-
+
@app.route('/')
def handler(request):
return text('TOP_OK')
-
+
request, response = app.test_client.get('/')
assert response.text == 'TOP_OK'
-
+
request, response = app.test_client.get('/mid1')
assert response.text == 'M1_OK'
-
+
request, response = app.test_client.get('/mid/deep')
assert response.text == 'D0_OK'
-
+
request, response = app.test_client.get('/mid/deep1/bottom')
assert response.text == 'D1B_OK'
diff --git a/tests/test_static.py b/tests/test_static.py
--- a/tests/test_static.py
+++ b/tests/test_static.py
@@ -36,6 +36,21 @@ def test_static_file(static_file_directory, file_name):
assert response.body == get_file_content(static_file_directory, file_name)
[email protected]('file_name', ['test.html'])
+def test_static_file_content_type(static_file_directory, file_name):
+ app = Sanic('test_static')
+ app.static(
+ '/testing.file',
+ get_file_path(static_file_directory, file_name),
+ content_type='text/html; charset=utf-8'
+ )
+
+ request, response = app.test_client.get('/testing.file')
+ assert response.status == 200
+ assert response.body == get_file_content(static_file_directory, file_name)
+ assert response.headers['Content-Type'] == 'text/html; charset=utf-8'
+
+
@pytest.mark.parametrize('file_name', ['test.file', 'decode me.txt'])
@pytest.mark.parametrize('base_uri', ['/static', '', '/dir'])
def test_static_directory(file_name, base_uri, static_file_directory):
| Define Static File Content Type
While trying to serve a utf-8 encoded HTML document via `Sanic.static`, I noticed that `content_type` cannot be set using `Sanic.static`.
`Sanic` header content_type [defaults to `text/plain`](https://github.com/channelcat/sanic/blob/master/sanic/static.py#L101)
if the type cannot be guessed by `mimetypes.guess_type`, which only looks at the filename.
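For illustration, the standard-library behaviour this refers to (a sketch):
```python
from mimetypes import guess_type

guess_type('index.html')  # ('text/html', None)
guess_type('somefile')    # (None, None) -> Sanic then falls back to 'text/plain'
```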
Currently, the following is not possible (same goes for blueprints):
```python
app.static('/', get_data('index.html'), content_type='text/html; charset=utf-8')
```
Prior to this PR, you need to do the following to serve a document with a custom content-type:
```python
@app.route('/')
async def handle_index(request):
return await response.file(
'index.html',
headers={'Content-Type': 'text/html; charset=utf-8'}
)
```
I'll be submitting a PR shortly with a proposed fix.
| 2018-07-19T05:20:20 |
|
sanic-org/sanic | 1,269 | sanic-org__sanic-1269 | [
"1268"
] | 599834b0e1c41d934dce30b6393926635f72b78e | diff --git a/sanic/response.py b/sanic/response.py
--- a/sanic/response.py
+++ b/sanic/response.py
@@ -233,8 +233,8 @@ def html(body, status=200, headers=None):
content_type="text/html; charset=utf-8")
-async def file(
- location, mime_type=None, headers=None, filename=None, _range=None):
+async def file(location, status=200, mime_type=None, headers=None,
+ filename=None, _range=None):
"""Return a response object with file data.
:param location: Location of file on system.
@@ -260,15 +260,14 @@ async def file(
out_stream = await _file.read()
mime_type = mime_type or guess_type(filename)[0] or 'text/plain'
- return HTTPResponse(status=200,
+ return HTTPResponse(status=status,
headers=headers,
content_type=mime_type,
body_bytes=out_stream)
-async def file_stream(
- location, chunk_size=4096, mime_type=None, headers=None,
- filename=None, _range=None):
+async def file_stream(location, status=200, chunk_size=4096, mime_type=None,
+ headers=None, filename=None, _range=None):
"""Return a streaming response object with file data.
:param location: Location of file on system.
@@ -315,7 +314,7 @@ async def _streaming_fn(response):
headers['Content-Range'] = 'bytes %s-%s/%s' % (
_range.start, _range.end, _range.total)
return StreamingHTTPResponse(streaming_fn=_streaming_fn,
- status=200,
+ status=status,
headers=headers,
content_type=mime_type)
| diff --git a/tests/test_response.py b/tests/test_response.py
--- a/tests/test_response.py
+++ b/tests/test_response.py
@@ -227,17 +227,19 @@ def get_file_content(static_file_directory, file_name):
@pytest.mark.parametrize('file_name', ['test.file', 'decode me.txt', 'python.png'])
-def test_file_response(file_name, static_file_directory):
[email protected]('status', [200, 401])
+def test_file_response(file_name, static_file_directory, status):
app = Sanic('test_file_helper')
@app.route('/files/<filename>', methods=['GET'])
def file_route(request, filename):
file_path = os.path.join(static_file_directory, filename)
file_path = os.path.abspath(unquote(file_path))
- return file(file_path, mime_type=guess_type(file_path)[0] or 'text/plain')
+ return file(file_path, status=status,
+ mime_type=guess_type(file_path)[0] or 'text/plain')
request, response = app.test_client.get('/files/{}'.format(file_name))
- assert response.status == 200
+ assert response.status == status
assert response.body == get_file_content(static_file_directory, file_name)
assert 'Content-Disposition' not in response.headers
| Support status code for file response
## TL;DR
It is not possible to set response status for `response.file` and `response.file_stream` as currently suggested in documentation
## Issue
Current documentation e.g. at http://sanic.readthedocs.io/en/latest/sanic/response.html#modify-headers-or-status states that you simply need to add the `status` argument to set a custom status code for a response. This is not the case however for `response.file` and `response.file_stream` which always return status 200 by default (`sanic/response.py:263`, `sanic/response.py:318`).
This became an issue today where I wanted to show a login page for an unauthorized user but still return status 401.
Expected code:
```python
@app.exception(Unauthorized)
async def handle_401(request, exception):
return response.file(web_dir / "login.html", status=401)
```
Actual code:
```python
@app.exception(Unauthorized)
async def handle_401(request, exception):
response = await response.file(web_dir / "login.html")
response.status = 401
return response
```
## Solution
Looking at the source, it seems simple enough to add a `status` argument and use its value for `response.file` and `response.file_stream` like so:
```python
# response.py:236
async def file(
location, status=200, mime_type=None, headers=None, filename=None, _range=None):
"""Return a response object with file data.
:param location: Location of file on system.
:param status: Response code.
:param mime_type: Specific mime_type.
:param headers: Custom Headers.
:param filename: Override filename.
:param _range:
# [...]
# response.py:263
return HTTPResponse(status=status,
headers=headers,
content_type=mime_type,
body_bytes=out_stream)
```
However, if there are other considerations involved it would be good to explicitly state `status` is not available for `response.file` and `response.file_stream` in documentation
| 2018-07-19T19:56:36 |
|
sanic-org/sanic | 1,276 | sanic-org__sanic-1276 | [
"1143"
] | b238be54a4d13e37954e025e76472c30029390af | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -688,11 +688,12 @@ def run(self, host=None, port=None, debug=False, ssl=None,
warnings.simplefilter('default')
warnings.warn("stop_event will be removed from future versions.",
DeprecationWarning)
+ # compatibility old access_log params
+ self.config.ACCESS_LOG = access_log
server_settings = self._helper(
host=host, port=port, debug=debug, ssl=ssl, sock=sock,
workers=workers, protocol=protocol, backlog=backlog,
- register_sys_signals=register_sys_signals,
- access_log=access_log, auto_reload=auto_reload)
+ register_sys_signals=register_sys_signals, auto_reload=auto_reload)
try:
self.is_running = True
@@ -746,12 +747,12 @@ async def create_server(self, host=None, port=None, debug=False,
warnings.simplefilter('default')
warnings.warn("stop_event will be removed from future versions.",
DeprecationWarning)
-
+ # compatibility old access_log params
+ self.config.ACCESS_LOG = access_log
server_settings = self._helper(
host=host, port=port, debug=debug, ssl=ssl, sock=sock,
loop=get_event_loop(), protocol=protocol,
- backlog=backlog, run_async=True,
- access_log=access_log)
+ backlog=backlog, run_async=True)
# Trigger before_start events
await self.trigger_events(
@@ -796,8 +797,7 @@ async def _run_response_middleware(self, request, response):
def _helper(self, host=None, port=None, debug=False,
ssl=None, sock=None, workers=1, loop=None,
protocol=HttpProtocol, backlog=100, stop_event=None,
- register_sys_signals=True, run_async=False, access_log=True,
- auto_reload=False):
+ register_sys_signals=True, run_async=False, auto_reload=False):
"""Helper function used by `run` and `create_server`."""
if isinstance(ssl, dict):
# try common aliaseses
@@ -838,7 +838,7 @@ def _helper(self, host=None, port=None, debug=False,
'loop': loop,
'register_sys_signals': register_sys_signals,
'backlog': backlog,
- 'access_log': access_log,
+ 'access_log': self.config.ACCESS_LOG,
'websocket_max_size': self.config.WEBSOCKET_MAX_SIZE,
'websocket_max_queue': self.config.WEBSOCKET_MAX_QUEUE,
'websocket_read_limit': self.config.WEBSOCKET_READ_LIMIT,
diff --git a/sanic/config.py b/sanic/config.py
--- a/sanic/config.py
+++ b/sanic/config.py
@@ -39,6 +39,7 @@ def __init__(self, defaults=None, load_env=True, keep_alive=True):
self.WEBSOCKET_READ_LIMIT = 2 ** 16
self.WEBSOCKET_WRITE_LIMIT = 2 ** 16
self.GRACEFUL_SHUTDOWN_TIMEOUT = 15.0 # 15 sec
+ self.ACCESS_LOG = True
if load_env:
prefix = SANIC_PREFIX if load_env is True else load_env
| Turn off access log with gunicorn
Is there any way to turn off the access log with gunicorn?
I could not find any example of setting `access_log=False` with gunicorn.
By default, accesslog is None, if I recall correctly
By default, access_log is True.
So, the only way I found to turn off the access log is passing a log_config without `"sanic.access"`.
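For illustration, a sketch of that approach (the exact keys are assumptions based on Sanic's default log config):
```python
import copy

from sanic import Sanic
from sanic.log import LOGGING_CONFIG_DEFAULTS

log_config = copy.deepcopy(LOGGING_CONFIG_DEFAULTS)
log_config["loggers"]["sanic.access"]["handlers"] = []  # drop the access-log handler

app = Sanic("no_access_log", log_config=log_config)
```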
Did you try defining your own `gunicorn_logconfig.ini`?
@yunstanford Can you give an example of how such a file should look? I am unable to stop the sanic.access log. I am using Gunicorn and Sanic on Heroku. No matter what I configure, I still see the access log if I tail the logs.
I have tried sanic with the default setup using gunicorn with the default configuration, and sanic outputs the access logs twice: once in gunicorn and once via the sanic.access logger.
https://github.com/channelcat/sanic/blob/818a8c2196fb14aab3ee30cd2fef845d0b7148ef/sanic/worker.py#L50-L70
But then in Sanic's `_helper` function we have:
https://github.com/channelcat/sanic/blob/818a8c2196fb14aab3ee30cd2fef845d0b7148ef/sanic/app.py#L794-L800
So therefore the access log is always `True`
@yunstanford can you double check?
And then we maybe need to add this in the documentation :) | 2018-07-31T09:14:43 |
|
sanic-org/sanic | 1,292 | sanic-org__sanic-1292 | [
"1245"
] | 6f813f940e370ae0b1d449bd4f16ee2efce3c887 | diff --git a/sanic/__init__.py b/sanic/__init__.py
--- a/sanic/__init__.py
+++ b/sanic/__init__.py
@@ -1,6 +1,6 @@
from sanic.app import Sanic
from sanic.blueprints import Blueprint
-__version__ = '0.7.0'
+__version__ = '0.8.0'
__all__ = ['Sanic', 'Blueprint']
| New release on Pypi ?
Hello,
I was looking for a tool to autoreload my code when I develop and I found this commit : https://github.com/channelcat/sanic/commit/52c2a8484e6aa5fa13aaade49e1f2597dd006e15
So it seems Sanic has integrated it since December 07, 2017. But the latest version on PyPI dates from the day before (https://github.com/channelcat/sanic/commit/1ea3ab7fe8ab03a6ddf4d75a3de8cb719f4c584c): https://pypi.org/project/Sanic/#history
Is it possible to release a new version on PyPI, please? Other features (like the UUID support in routes) are also interesting :)
Thanks in advance !
| pip install https://github.com/user/repo.git@branch
@c-goosen Yes it's what we are doing now, but I still think `pip install sanic` is more convenient than `pip install https://github.com/channelcat/sanic.git@master` :pensive:
Six months without a new release on Pypi for an awesome project like **Sanic**... People who read blog posts on this framework will just try `pip install sanic`, not the full path including the branch.
There is already another thread about requesting a new Sanic release, here:
https://github.com/channelcat/sanic/issues/1170
Well we should discuss integrating with circle CI or other tools with a scheduled release cycle.
@c-goosen
What is your reasoning behind that? How would a scheduled release cycle benefit the Sanic project?
I think, it would be just enough to make new release )
Guess I'll go ahead and +1 this, if only for the ipv6 support.
Who is responsible?
@seemethere seems to be the one to tag and release, not sure if that also carries the responsibility to push to pypi.
Personally, I don't like having to specially get the Sanic repo for support for something like nested Blueprints which has been a feature for a while now.
https://github.com/channelcat/sanic/commit/a10d7469cdeeaa4bdd94d508520e84255bdddb0b
I'm having issues pushing for adoption of Sanic over Flask in my organisation at the moment mostly because of the lack of releases. The lack of a release cycle undermine the confidence in the product.
I've read the few different issues opened about the lack of releases and I still cannot find a good reasoning as to why Sanic stopped doing releases. So what's up?
@channelcat @seemethere Any ideas here?
@r0fls is one of the more active contributors in 2018. Can you make a new release? If no one can do this, maybe we have to migrate to an organization in another repo. @yunstanford @seemethere
I tried and I get this message:
```
HTTPError: 403 Client Error: The user 'r0fls' isn't allowed to upload to project 'Sanic'. See https://pypi.org/help/#project-name for more information. for url: https://upload.pypi.org/legacy/
```
Seems pypi is hoping to fix this eventually: https://github.com/pypa/warehouse/issues/1506
In the meantime, only @seemethere can do a release. @seemethere any chance you want to change your pypi to a password that you can share with me so that I can do a release and calm the masses?
@r0fls
This small PR https://github.com/channelcat/sanic/pull/1286 will need to be merged before a release, because auto_reload is currently broken for all users except those on Mac OS.
@r0fls you don't need to get the password. @seemethere will just add you to collaborators on PyPI :)
Example (screenshot omitted):

Also would be good to get https://github.com/channelcat/sanic/pull/1179 and https://github.com/channelcat/sanic/pull/1278 merged before a release, but I don't want to set a precedent of everyone pleading to get their PRs merged before the release.
It'd be great if we had a team/official process for operating releases.
I'd also like to see this https://github.com/channelcat/sanic/pull/1179 merged in so we could get feedback.
I'll be quite honest..... I've heavily adopted sanic in production, but did so in a way I could easily switch it out if I need to. The lack of release schedule has me and my team worried and I've considered the effort to remove it. I already [took it out of my gateway service](https://github.com/channelcat/sanic/issues/1264#issuecomment-413227303).
Seeing this move to an organization would give the project a lot. Sanic's biggest weakness (in my opinion) right now is the lack of project direction. This could give some great momentum to an otherwise great framework.
I would be happy to help whatever organizational work would need to be done to accomplish this.
@ahopkins agree, moving to an organization helps a lot.
I would be glad to help as well.
I would also be willing to assist, if needed.
Cutting a release today (@r0fls will handle the release notes), will contact @channelcat to see if we can move this to an organization so we can get more people who can help out with the project. | 2018-08-17T18:44:08 |
|
sanic-org/sanic | 1,327 | sanic-org__sanic-1327 | [
"1323"
] | 04b8dd989f175935a8b86f791508d533c36f0212 | diff --git a/sanic/exceptions.py b/sanic/exceptions.py
--- a/sanic/exceptions.py
+++ b/sanic/exceptions.py
@@ -1,4 +1,4 @@
-from sanic.http import STATUS_CODES
+from sanic.helpers import STATUS_CODES
TRACEBACK_STYLE = '''
<style>
diff --git a/sanic/http.py b/sanic/helpers.py
similarity index 100%
rename from sanic/http.py
rename to sanic/helpers.py
diff --git a/sanic/response.py b/sanic/response.py
--- a/sanic/response.py
+++ b/sanic/response.py
@@ -10,7 +10,7 @@
from aiofiles import open as open_async
from multidict import CIMultiDict
-from sanic import http
+from sanic.helpers import STATUS_CODES, has_message_body, remove_entity_headers
from sanic.cookies import CookieJar
@@ -103,7 +103,7 @@ def get_headers(
if self.status is 200:
status = b'OK'
else:
- status = http.STATUS_CODES.get(self.status)
+ status = STATUS_CODES.get(self.status)
return (b'HTTP/%b %d %b\r\n'
b'%b'
@@ -141,7 +141,7 @@ def output(
timeout_header = b'Keep-Alive: %d\r\n' % keep_alive_timeout
body = b''
- if http.has_message_body(self.status):
+ if has_message_body(self.status):
body = self.body
self.headers['Content-Length'] = self.headers.get(
'Content-Length', len(self.body))
@@ -150,14 +150,14 @@ def output(
'Content-Type', self.content_type)
if self.status in (304, 412):
- self.headers = http.remove_entity_headers(self.headers)
+ self.headers = remove_entity_headers(self.headers)
headers = self._parse_headers()
if self.status is 200:
status = b'OK'
else:
- status = http.STATUS_CODES.get(self.status, b'UNKNOWN RESPONSE')
+ status = STATUS_CODES.get(self.status, b'UNKNOWN RESPONSE')
return (b'HTTP/%b %d %b\r\n'
b'Connection: %b\r\n'
| Trouble with import conflict in request.py
```
Traceback (most recent call last):
File "/env/lib/python3.6/site-packages/sanic/__main__.py", line 4, in <module>
from sanic.log import logger
File "/env/lib/python3.6/site-packages/sanic/__init__.py", line 1, in <module>
from sanic.app import Sanic
File "/env/lib/python3.6/site-packages/sanic/app.py", line 21, in <module>
from sanic.server import serve, serve_multiple, HttpProtocol, Signal
File "/env/lib/python3.6/site-packages/sanic/server.py", line 31, in <module>
from sanic.request import Request
File "/env/lib/python3.6/site-packages/sanic/request.py", line 6, in <module>
from http.cookies import SimpleCookie
ModuleNotFoundError: No module named 'http.cookies'; 'http' is not a package
```
It seems it tries to use sanic's `http.py` instead of the builtin Python library. The `http.py` itself is used only twice, in the `response.py` & `exceptions.py`.
Am I missing something and/or have something misconfigured?
@hatarist coincidentally enough, I had almost the same problem yesterday. In my case, it came from invoking `ipython` from the `sanic` source code directory, resulting in this error:
```
$ pwd
/home/richard/work/sanic/sanic
$ ipython
Traceback (most recent call last):
File "/home/richard/.pyenv/versions/sanic-boom-3.7.0/bin/ipython", line 7, in <module>
from IPython import start_ipython
File "/home/richard/.pyenv/versions/sanic-boom-3.7.0/lib/python3.7/site-packages/IPython/__init__.py", line 55, in <module>
from .terminal.embed import embed
File "/home/richard/.pyenv/versions/sanic-boom-3.7.0/lib/python3.7/site-packages/IPython/terminal/embed.py", line 17, in <module>
from IPython.terminal.ipapp import load_default_config
File "/home/richard/.pyenv/versions/sanic-boom-3.7.0/lib/python3.7/site-packages/IPython/terminal/ipapp.py", line 28, in <module>
from IPython.core.magics import (
File "/home/richard/.pyenv/versions/sanic-boom-3.7.0/lib/python3.7/site-packages/IPython/core/magics/__init__.py", line 18, in <module>
from .code import CodeMagics, MacroToEdit
File "/home/richard/.pyenv/versions/sanic-boom-3.7.0/lib/python3.7/site-packages/IPython/core/magics/code.py", line 23, in <module>
from urllib.request import urlopen
File "/usr/lib/python3.7/urllib/request.py", line 88, in <module>
import http.client
ModuleNotFoundError: No module named 'http.client'; 'http' is not a package
```
How is your environment? Can you provide the result of your Python call using the `-v` flag? Example from what I did with IPython:
```
$ python -v -m IPython
```
Cheers!
Sorry for the late reply. Seems like I found the issue.
The problem seems to be the `python -msanic` command. It runs the `__main__.py`, which is located in the sanic's directory.
So if I run `python -msanic microservice.app.app`, it forks using `/bin/sh -c /env/bin/python /env/lib/python3.6/site-packages/sanic/__main__.py microservice.app.app`. Then, if I print out `sys.path` right in the `__main__.py` (say, on the third line), it shows sanic's directory as the first item.
I run the `python -msanic app` from my project's directory (not the `/env/` and obviously not `/env/../site-packages/sanic`, either)
(sorry, `python -v` is way too bloated)
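A quick way to see the shadowing (sketch; assumes the sanic package directory ends up first on `sys.path`, as described above):
```python
import sys
import http

print(sys.path[0])    # .../site-packages/sanic
print(http.__file__)  # .../site-packages/sanic/http.py -- shadows the stdlib `http` package
```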
The issue is reproducible using 3.6.5 on both Linux and OS X with such commands:
```
cd /tmp
python3.6 -mvenv testenv
source testenv/bin/activate
pip install sanic==0.8.3
echo 'import sanic\napp = sanic.Sanic()' > test.py
python -msanic test.app --debug
```
with such an output:
```
Traceback (most recent call last):
File "/private/tmp/testenv/lib/python3.6/site-packages/sanic/__main__.py", line 4, in <module>
from sanic.log import logger
File "/private/tmp/testenv/lib/python3.6/site-packages/sanic/__init__.py", line 1, in <module>
from sanic.app import Sanic
File "/private/tmp/testenv/lib/python3.6/site-packages/sanic/app.py", line 21, in <module>
from sanic.server import serve, serve_multiple, HttpProtocol, Signal
File "/private/tmp/testenv/lib/python3.6/site-packages/sanic/server.py", line 31, in <module>
from sanic.request import Request
File "/private/tmp/testenv/lib/python3.6/site-packages/sanic/request.py", line 6, in <module>
from http.cookies import SimpleCookie
ModuleNotFoundError: No module named 'http.cookies'; 'http' is not a package
```
@hatarist thanks for pointing this out. I think @ahopkins already pointed out to me what may be the root cause of this, and it is probably the fact that we have a file inside the Sanic source code called `http.py`, which also happens to be the name of a builtin Python module. This needs to be changed to avoid this kind of problem. We'll see what we can do :wink:
Yeah, I know. Thanks, I appreciate it.
@vltr @ahopkins do we have an issue to track the http.py rename?
@sjsadowski no, it was on a chat some days ago (referencing this issue). Perhaps we need to re-open it or create a new one referencing this one.
Pardon me. If you guys need a pull request from me with the name change, I could do that, I'm just not so sure about the name. `helpers.py`, perhaps?
@hatarist that would be awesome. I just added #1326 to reference this. `helpers.py` makes sense. | 2018-09-25T17:50:34 |
|
sanic-org/sanic | 1,341 | sanic-org__sanic-1341 | [
"1340"
] | fafe23d7c273a8329006a986dc9c73643b92e091 | diff --git a/sanic/server.py b/sanic/server.py
--- a/sanic/server.py
+++ b/sanic/server.py
@@ -150,10 +150,7 @@ def request_timeout_callback(self):
self._request_stream_task.cancel()
if self._request_handler_task:
self._request_handler_task.cancel()
- try:
- raise RequestTimeout('Request Timeout')
- except RequestTimeout as exception:
- self.write_error(exception)
+ self.write_error(RequestTimeout('Request Timeout'))
def response_timeout_callback(self):
# Check if elapsed time since response was initiated exceeds our
@@ -170,10 +167,7 @@ def response_timeout_callback(self):
self._request_stream_task.cancel()
if self._request_handler_task:
self._request_handler_task.cancel()
- try:
- raise ServiceUnavailable('Response Timeout')
- except ServiceUnavailable as exception:
- self.write_error(exception)
+ self.write_error(ServiceUnavailable('Response Timeout'))
def keep_alive_timeout_callback(self):
# Check if elapsed time since last response exceeds our configured
@@ -199,8 +193,7 @@ def data_received(self, data):
# memory limits
self._total_request_size += len(data)
if self._total_request_size > self.request_max_size:
- exception = PayloadTooLarge('Payload Too Large')
- self.write_error(exception)
+ self.write_error(PayloadTooLarge('Payload Too Large'))
# Create parser if this is the first time we're receiving data
if self.parser is None:
@@ -218,8 +211,7 @@ def data_received(self, data):
message = 'Bad Request'
if self._debug:
message += '\n' + traceback.format_exc()
- exception = InvalidUsage(message)
- self.write_error(exception)
+ self.write_error(InvalidUsage(message))
def on_url(self, url):
if not self.url:
@@ -233,8 +225,7 @@ def on_header(self, name, value):
if value is not None:
if self._header_fragment == b'Content-Length' \
and int(value) > self.request_max_size:
- exception = PayloadTooLarge('Payload Too Large')
- self.write_error(exception)
+ self.write_error(PayloadTooLarge('Payload Too Large'))
try:
value = value.decode()
except UnicodeDecodeError:
@@ -433,7 +424,7 @@ def write_error(self, exception):
self.log_response(response)
try:
self.transport.close()
- except AttributeError as e:
+ except AttributeError:
logger.debug('Connection lost before server could close it.')
def bail_out(self, message, from_error=False):
@@ -443,8 +434,7 @@ def bail_out(self, message, from_error=False):
self.transport.get_extra_info('peername'))
logger.debug('Exception:\n%s', traceback.format_exc())
else:
- exception = ServerError(message)
- self.write_error(exception)
+ self.write_error(ServerError(message))
logger.error(message)
def cleanup(self):
| [Un]necessary code?
This:
https://github.com/huge-success/sanic/blob/fafe23d7c273a8329006a986dc9c73643b92e091/sanic/server.py#L153-L156
Couldn't this be translated into the following?
```python
exception = RequestTimeout("Request Timeout")
self.write_error(exception)
```
Another one here:
https://github.com/huge-success/sanic/blob/fafe23d7c273a8329006a986dc9c73643b92e091/sanic/server.py#L173-L176
Because there are a lot of these code patterns already in `server.py`:
https://github.com/huge-success/sanic/blob/fafe23d7c273a8329006a986dc9c73643b92e091/sanic/server.py#L202-L203
I can submit the PR, just for uniformity.
| Stuff like this makes me wonder if I'm bad at python (high probability) or crazy (also high probability).
Is there a reason you would try: raise and then catch the exception you raised in the try, when the raise is the only statement in the try block? I could see if we were wrapping things around the if blocks... but I'm all for leaning out the code if we can and there's no specific reason.
> Stuff like this makes me wonder if I'm bad at python (high probability) or crazy (also high probability).
That made me laugh. Welcome to the club :beers:
> Is there a reason you would try: raise and then catch the exception you raised in the try, when the raise is the only statement in the try block?
Well, no. Perhaps there might be a reason on the git history, idk. But, I'll send a PR tomorrow either way :wink:
I've seen these same patterns in server.py, and I too have (internally) questioned their purpose.
I have always assumed there is some nuanced low-level reason they are done that way.
I now have a different theory. I haven't looked back through the git history, but I think maybe the error handling section of `server.py` was rewritten at some point: maybe each section originally raised an exception and `write_error` was handled elsewhere, and then at some point it was decided (for performance reasons?) to write out the error as soon as the exception is thrown, so it just ended up that way.
I can try translating those parts, if all of the tests still pass then I don't see anything wrong with removing those sections in favour of the translations.
Thanks, @ashleysommer ! I think I understood the general idea, but I don't know how I'd implement it. Anyway, if you need some help, please let me know :+1:
I think the point is that by raising you attach a traceback
```
import traceback
def with_raise():
try:
raise RuntimeError('failing')
except RuntimeError as e:
return e
e1 = with_raise()
print('with raise')
print('\n'.join(traceback.format_exception(type(e1), e1, e1.__traceback__)))
def without_raise():
return RuntimeError('failing')
e2 = without_raise()
print('without raise')
print('\n'.join(traceback.format_exception(type(e2), e2, e2.__traceback__)))
## -- End pasted text --
with raise
Traceback (most recent call last):
File "<ipython-input-4-9db7dae0feba>", line 5, in with_raise
raise RuntimeError('failing')
RuntimeError: failing
without raise
RuntimeError: failing
``` | 2018-10-03T01:00:30 |
|
sanic-org/sanic | 1,343 | sanic-org__sanic-1343 | [
"1331"
] | 1498baab0fd0e4799b228b32d73675848f0ac680 | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -56,7 +56,7 @@ def open_local(paths, mode='r', encoding='utf8'):
uvloop = 'uvloop>=0.5.3' + env_dependency
requirements = [
- 'httptools>=0.0.9',
+ 'httptools>=0.0.10',
uvloop,
ujson,
'aiofiles>=0.3.0',
| Pin versions for LTS release
I think that versions of (some) should be allowed to float but when we are ready for an LTS release, the versions should be pinned at that time.
@r0fls @ahopkins @seemethere @ashleysommer @yunstanford @ahopkins
| @sjsadowski if we pin `httptools` to be `>=0.0.10`, then we can tackle #1057 (I don't know if we re-open it or create another related issue) :wink:
@vltr I'm going to branch today for the LTS release, we can set pins there. I'm actually okay (if we can get enough interest) with pinning a minimum version of httptools in master as well.
@sjsadowski oh yes, pinning `httptools` would make a lot of developers happy around here ...
@vltr let's do it. I have not seen anything that suggest that should not be done. Can you get a PR in and we'll merge it after someone upchecks it. I'm hoping to get the other current PRs in this week after review and then merge them up to the 18.12 LTS branch.
@sjsadowski sure thing! Let me see if this will not conflict with #1310 (dumb one here forgot to create a branch for it ... :grimacing:) | 2018-10-03T15:28:56 |
|
sanic-org/sanic | 1,393 | sanic-org__sanic-1393 | [
"1392"
] | 7d79a86d4dc48de11cd34e8ba12e41f3a9f9ff18 | diff --git a/sanic/blueprints.py b/sanic/blueprints.py
--- a/sanic/blueprints.py
+++ b/sanic/blueprints.py
@@ -5,7 +5,7 @@
FutureRoute = namedtuple(
- "Route",
+ "FutureRoute",
[
"handler",
"uri",
@@ -17,11 +17,15 @@
"name",
],
)
-FutureListener = namedtuple("Listener", ["handler", "uri", "methods", "host"])
-FutureMiddleware = namedtuple("Route", ["middleware", "args", "kwargs"])
-FutureException = namedtuple("Route", ["handler", "args", "kwargs"])
+FutureListener = namedtuple(
+ "FutureListener", ["handler", "uri", "methods", "host"]
+)
+FutureMiddleware = namedtuple(
+ "FutureMiddleware", ["middleware", "args", "kwargs"]
+)
+FutureException = namedtuple("FutureException", ["handler", "args", "kwargs"])
FutureStatic = namedtuple(
- "Route", ["uri", "file_or_directory", "args", "kwargs"]
+ "FutureStatic", ["uri", "file_or_directory", "args", "kwargs"]
)
| diff --git a/tests/test_multiprocessing.py b/tests/test_multiprocessing.py
--- a/tests/test_multiprocessing.py
+++ b/tests/test_multiprocessing.py
@@ -1,9 +1,11 @@
import multiprocessing
import random
import signal
+import pickle
import pytest
from sanic.testing import HOST, PORT
+from sanic.response import text
@pytest.mark.skipif(
@@ -27,3 +29,54 @@ def stop_on_alarm(*args):
app.run(HOST, PORT, workers=num_workers)
assert len(process_list) == num_workers
+
+
+def test_multiprocessing_with_blueprint(app):
+ from sanic import Blueprint
+ # Selects a number at random so we can spot check
+ num_workers = random.choice(range(2, multiprocessing.cpu_count() * 2 + 1))
+ process_list = set()
+
+ def stop_on_alarm(*args):
+ for process in multiprocessing.active_children():
+ process_list.add(process.pid)
+ process.terminate()
+
+ signal.signal(signal.SIGALRM, stop_on_alarm)
+ signal.alarm(3)
+
+ bp = Blueprint('test_text')
+ app.blueprint(bp)
+ app.run(HOST, PORT, workers=num_workers)
+
+ assert len(process_list) == num_workers
+
+
+# this function must be outside a test function so that it can be
+# able to be pickled (local functions cannot be pickled).
+def handler(request):
+ return text('Hello')
+
+# Muliprocessing on Windows requires app to be able to be pickled
[email protected]('protocol', [3, 4])
+def test_pickle_app(app, protocol):
+ app.route('/')(handler)
+ p_app = pickle.dumps(app, protocol=protocol)
+ up_p_app = pickle.loads(p_app)
+ assert up_p_app
+ request, response = app.test_client.get('/')
+ assert response.text == 'Hello'
+
+
[email protected]('protocol', [3, 4])
+def test_pickle_app_with_bp(app, protocol):
+ from sanic import Blueprint
+ bp = Blueprint('test_text')
+ bp.route('/')(handler)
+ app.blueprint(bp)
+ p_app = pickle.dumps(app, protocol=protocol)
+ up_p_app = pickle.loads(p_app)
+ assert up_p_app
+ request, response = app.test_client.get('/')
+ assert app.is_request_stream is False
+ assert response.text == 'Hello'
| Blueprint with multiple workers not usable on Windows due to pickling error
**Describe the bug**
When using a blueprint with multiple workers on windows, sanic fails on startup due to a failure to pickle the route:
> Exception has occurred: _pickle.PicklingError
Can't pickle <class 'sanic.blueprints.Route'>: attribute lookup Route on sanic.blueprints failed
**Code snippet**
```python
from sanic import Sanic
from sanic import Blueprint
from sanic.response import json
blueprint = Blueprint("API_blueprint")
@blueprint.route("/")
async def test(request):
return json({"hello": "world"})
app = Sanic()
app.blueprint(blueprint)
def main():
app.run(workers=2)
```
**Expected behavior**
It should be possible to run with multiple workers using blueprints in the same way as it is using the sanic app object directly
**Environment (please complete the following information):**
- OS: Windows
- Version 7
**Additional context**
A similar issue with pickling was [recently fixed](https://github.com/ashleysommer/sanic-cors/issues/14#issuecomment-434968189) in sanic-cors. No idea if the resolution to that one will be helpful
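For context, a minimal illustration (unrelated to the Sanic code itself) of why a `namedtuple` whose typename does not match the name it is bound to cannot be pickled — a sketch:
```python
import pickle
from collections import namedtuple

Good = namedtuple("Good", ["x"])    # typename matches the bound name
Bad = namedtuple("NotGood", ["x"])  # typename does not match

pickle.dumps(Good(1))  # works
pickle.dumps(Bad(1))   # PicklingError: attribute lookup NotGood on __main__ failed
```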
| I can look into this.
PR created: https://github.com/huge-success/sanic/pull/1393
The blueprint pickling problem is fixed, though I don't have a windows machine to test that it fixes the multiprocessing issue on Windows. | 2018-11-04T04:23:12 |
sanic-org/sanic | 1,397 | sanic-org__sanic-1397 | [
"1395"
] | e3a27c2cc485d57aa1ff87d9f69098e4ab12727e | diff --git a/sanic/log.py b/sanic/log.py
--- a/sanic/log.py
+++ b/sanic/log.py
@@ -6,7 +6,7 @@
version=1,
disable_existing_loggers=False,
loggers={
- "root": {"level": "INFO", "handlers": ["console"]},
+ "sanic.root": {"level": "INFO", "handlers": ["console"]},
"sanic.error": {
"level": "INFO",
"handlers": ["error_console"],
| diff --git a/tests/test_logging.py b/tests/test_logging.py
--- a/tests/test_logging.py
+++ b/tests/test_logging.py
@@ -49,7 +49,7 @@ def test_logging_defaults():
reset_logging()
app = Sanic("test_logging")
- for fmt in [h.formatter for h in logging.getLogger('root').handlers]:
+ for fmt in [h.formatter for h in logging.getLogger('sanic.root').handlers]:
assert fmt._fmt == LOGGING_CONFIG_DEFAULTS['formatters']['generic']['format']
for fmt in [h.formatter for h in logging.getLogger('sanic.error').handlers]:
@@ -68,7 +68,7 @@ def test_logging_pass_customer_logconfig():
app = Sanic("test_logging", log_config=modified_config)
- for fmt in [h.formatter for h in logging.getLogger('root').handlers]:
+ for fmt in [h.formatter for h in logging.getLogger('sanic.root').handlers]:
assert fmt._fmt == modified_config['formatters']['generic']['format']
for fmt in [h.formatter for h in logging.getLogger('sanic.error').handlers]:
@@ -82,7 +82,7 @@ def test_logging_pass_customer_logconfig():
def test_log_connection_lost(app, debug, monkeypatch):
""" Should not log Connection lost exception on non debug """
stream = StringIO()
- root = logging.getLogger('root')
+ root = logging.getLogger('sanic.root')
root.addHandler(logging.StreamHandler(stream))
monkeypatch.setattr(sanic.server, 'logger', root)
@@ -102,3 +102,15 @@ async def conn_lost(request):
assert 'Connection lost before response written @' in log
else:
assert 'Connection lost before response written @' not in log
+
+
+def test_logging_modified_root_logger_config():
+ reset_logging()
+
+ modified_config = LOGGING_CONFIG_DEFAULTS
+ modified_config['loggers']['sanic.root']['level'] = 'DEBUG'
+
+ app = Sanic("test_logging", log_config=modified_config)
+
+ assert logging.getLogger('sanic.root').getEffectiveLevel() == logging.DEBUG
+
| Logger does not work.
**Describe the bug**
The logger does not work at the current master commit (https://github.com/huge-success/sanic/commit/7d79a86d4dc48de11cd34e8ba12e41f3a9f9ff18).
**Code snippet**
```python
from sanic import Sanic
from sanic.log import logger
from sanic.response import text

app = Sanic()

@app.listener('before_server_start')
async def setup(app, loop):
    logger.info('INFO')

@app.get('/')
async def test(request):
    return text('hello world')

if __name__ == '__main__':
    app.run()
```
There is no log output at all now.
**Expected behavior**
With the `0.8.3` release, it logged messages like:
```
[2018-11-05 17:34:47 +0800] [12112] [INFO] Goin' Fast @ http://127.0.0.1:8000
[2018-11-05 17:34:47 +0800] [12112] [INFO] INFO
[2018-11-05 17:34:47 +0800] [12112] [INFO] Starting worker [12112]
```
**Environment (please complete the following information):**
- OS: Ubuntu 18.04
- Version: https://github.com/huge-success/sanic/commit/7d79a86d4dc48de11cd34e8ba12e41f3a9f9ff18
**Additional context**
It seems that `getLogger()` does not get the correct logger at [line 56](https://github.com/huge-success/sanic/blob/master/sanic/log.py#L56) in `log.py`. The logger is trying to get a logger named `sanic.root`, but it does not exist. Rename the logger `root` at [line 9](https://github.com/huge-success/sanic/blob/master/sanic/log.py#L9) should fix this bug.
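As a quick sketch of how the renamed logger is meant to be configured once the fix lands (this mirrors the test added in this PR; the app name is illustrative):
```python
import logging

from sanic import Sanic
from sanic.log import LOGGING_CONFIG_DEFAULTS

# Adjust the renamed "sanic.root" logger before creating the app
modified_config = LOGGING_CONFIG_DEFAULTS
modified_config["loggers"]["sanic.root"]["level"] = "DEBUG"

app = Sanic("logging_example", log_config=modified_config)
assert logging.getLogger("sanic.root").getEffectiveLevel() == logging.DEBUG
```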
| @chenjr0719 Can you submit a PR?
I'd like to submit a PR for this, can I? | 2018-11-06T02:54:16 |
sanic-org/sanic | 1,482 | sanic-org__sanic-1482 | [
"1454"
] | bc7d0f0da57e6b0d6c7d57ca3bb9d0dda66a7199 | diff --git a/sanic/blueprints.py b/sanic/blueprints.py
--- a/sanic/blueprints.py
+++ b/sanic/blueprints.py
@@ -212,6 +212,7 @@ def add_route(
strict_slashes=None,
version=None,
name=None,
+ stream=False,
):
"""Create a blueprint route from a function.
@@ -224,6 +225,7 @@ def add_route(
training */*
:param version: Blueprint Version
:param name: user defined route name for url_for
+ :param stream: boolean specifying if the handler is a stream handler
:return: function or class instance
"""
# Handle HTTPMethodView differently
@@ -246,6 +248,7 @@ def add_route(
methods=methods,
host=host,
strict_slashes=strict_slashes,
+ stream=stream,
version=version,
name=name,
)(handler)
diff --git a/sanic/response.py b/sanic/response.py
--- a/sanic/response.py
+++ b/sanic/response.py
@@ -117,7 +117,7 @@ def get_headers(
headers = self._parse_headers()
- if self.status is 200:
+ if self.status == 200:
status = b"OK"
else:
status = STATUS_CODES.get(self.status)
@@ -176,7 +176,7 @@ def output(self, version="1.1", keep_alive=False, keep_alive_timeout=None):
headers = self._parse_headers()
- if self.status is 200:
+ if self.status == 200:
status = b"OK"
else:
status = STATUS_CODES.get(self.status, b"UNKNOWN RESPONSE")
| diff --git a/tests/test_request_stream.py b/tests/test_request_stream.py
--- a/tests/test_request_stream.py
+++ b/tests/test_request_stream.py
@@ -270,6 +270,18 @@ async def streaming(response):
return stream(streaming)
+ async def post_add_route(request):
+ assert isinstance(request.stream, StreamBuffer)
+
+ async def streaming(response):
+ while True:
+ body = await request.stream.read()
+ if body is None:
+ break
+ await response.write(body.decode("utf-8"))
+ return stream(streaming)
+
+ bp.add_route(post_add_route, '/post/add_route', methods=['POST'], stream=True)
app.blueprint(bp)
assert app.is_request_stream is True
@@ -314,6 +326,10 @@ async def streaming(response):
assert response.status == 200
assert response.text == data
+ request, response = app.test_client.post("/post/add_route", data=data)
+ assert response.status == 200
+ assert response.text == data
+
def test_request_stream_composition_view(app):
"""for self.is_request_stream = True"""
| Request stream not working when using Blueprints
**Describe the bug**
When trying to use Request Streaming in a route that is part of a blueprint, the request.stream object is never set (it is None), _unless_ `stream=True` has been set on some global route.
**Code snippet**
```
from sanic import Sanic
from sanic.response import text
from sanic.views import HTTPMethodView, stream
from sanic import Blueprint

app = Sanic()

# Not sure if using the decorator here is actually
# allowed or useful
@stream
def test(request):
    return text(type(request.stream))

class TestView(HTTPMethodView):
    @stream
    async def get(self, request):
        return text(type(request.stream))

bp = Blueprint("test")
bp.add_route(test, '/')
bp.add_route(TestView.as_view(), '/class')

app.blueprint(bp, url_prefix='/test')

# Adding this line makes request.stream work FOR ALL ROUTES,
# even those from the blueprint!
# app.add_route(test, '/', stream=True)

app.run(host='0.0.0.0', port=8000)
```
**Expected behavior**
Expect request.stream to be set.
**Environment (please complete the following information):**
- OS: Ubuntu 18.04, RHEL7
- Version 18.12.0 and master (cea1547e08230b6ad49eb7777fd8db5335382b7a)
**Additional context**
I am trying to sort this out myself, but the interaction between routes appears very complex. I will submit a pull request if I can get it working.
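A sketch of the `add_route`-based usage this report is after (names are illustrative; the `stream=True` keyword on `bp.add_route()` is what this PR adds, mirroring the test above):
```python
from sanic import Blueprint
from sanic.response import stream

bp = Blueprint("uploads")

async def post_handler(request):
    async def streaming(response):
        while True:
            body = await request.stream.read()
            if body is None:
                break
            await response.write(body.decode("utf-8"))
    return stream(streaming)

# stream=True marks the handler as a streaming handler on the blueprint route
bp.add_route(post_handler, "/post/add_route", methods=["POST"], stream=True)
```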
| @quasarj Are you still there?
According to the documentation, using request streaming with a `Blueprint` looks like this:
```python
from sanic import Sanic, Blueprint
from sanic.response import text

app = Sanic()
bp = Blueprint("test")

@bp.get('/', stream=True)
def test(request):
    return text(type(request.stream))

app.blueprint(bp, url_prefix='/test')
app.run(host='0.0.0.0', port=8000)
```
If you want to use `bp.add_route()` with request streaming, it does not currently work, because `bp.add_route()` does not support a `stream` argument. Maybe I can send a PR to make it work. | 2019-01-30T07:43:54 |
sanic-org/sanic | 1,501 | sanic-org__sanic-1501 | [
"1492"
] | 34fe26e51bc13cd41d58e627ba264012640c76fc | diff --git a/sanic/reloader_helpers.py b/sanic/reloader_helpers.py
--- a/sanic/reloader_helpers.py
+++ b/sanic/reloader_helpers.py
@@ -36,7 +36,15 @@ def _iter_module_files():
def _get_args_for_reloading():
"""Returns the executable."""
rv = [sys.executable]
- rv.extend(sys.argv)
+ main_module = sys.modules["__main__"]
+ mod_spec = getattr(main_module, "__spec__", None)
+ if mod_spec:
+ # Parent exe was launched as a module rather than a script
+ rv.extend(["-m", mod_spec.name])
+ if len(sys.argv) > 1:
+ rv.extend(sys.argv[1:])
+ else:
+ rv.extend(sys.argv)
return rv
@@ -44,6 +52,7 @@ def restart_with_reloader():
"""Create a new process and a subprocess in it with the same arguments as
this one.
"""
+ cwd = os.getcwd()
args = _get_args_for_reloading()
new_environ = os.environ.copy()
new_environ["SANIC_SERVER_RUNNING"] = "true"
@@ -51,7 +60,7 @@ def restart_with_reloader():
worker_process = Process(
target=subprocess.call,
args=(cmd,),
- kwargs=dict(shell=True, env=new_environ),
+ kwargs={"cwd": cwd, "shell": True, "env": new_environ},
)
worker_process.start()
return worker_process
| Module import fails when auto_reload is active
I have two piece of code (the structure has been simplified for clarity sake)
First in base.py
```
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# module: init0
from abc import ABCMeta
from sanic import Sanic
class BaseService( metaclass = ABCMeta ):
def create_app( self ) -> Sanic:
app = Sanic( __name__ )
return app
# app = BaseService().create_app()
# app.run( host = '0.0.0.0',
# port = 5000,
# debug = True,
# )
print('[DONE]')
```
Second in run_test.py
```
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# module: init0
from init0.base import BaseService
def main():
app = BaseService().create_app()
app.run( host = '0.0.0.0',
port = 5000,
debug = True,
)
return
if __name__ == '__main__':
main()
```
If I were to run `python -m init0.run_test` with `debug = False` then everything works perfectly, however if it's `debug = True`, then it'd throw me `ModuleNotFoundError: No module named 'init0'`
Is it some sort of loading error somewhere that I need to configure beforehand?
Thanks a lot in advance
| What does your `__init__.py` look like for that directory?
> What does your `__init__.py` look like for that directory?
Completely empty. I updated my Sanic recently from 0.8 to 18.x along with my Python 3.6 to 3.7 or something, and somehow this issue appeared.
How does your project hierarchy look like ?
```
.
โโโ init0
โย ย โโโ __init__.py
โย ย โโโ __pycache__
โย ย โโโ base.py
โย ย โโโ run_test.py
โโโ venv
โโโ bin
โโโ include
โโโ lib
```
I just run from `python -m init0.runtest`
I don't think your python interpreter know your init0 module unless you've appended that to your sys.path
@yunstanford Hmm, but it works okay when it's not in debug mode, though.
ok, i think it's sth. wrong with the auto_reload logic..
You can set (`workers > 1` or `debug=false`) for verifying, since the auto_reload will be disabled in such cases.
Yeap, by setting it to
```
app.run( host = '0.0.0.0',
port = 5000,
debug = True,
workers = 2,
)
```
the error no longer occurs.
I hit the same error, adding workers >1 works for me as well!
@huge-success/sanic-core-devs Anyone want to [take a whack at this?](https://community.sanicframework.org/t/march-19-03-release/231)
@subokita @gangtao
Are you on Linux/Mac, or Windows?
The auto_reload logic is a bit different between Unix/Posix based OS, and Windows, so it would be good to know which we are dealing with before diving into debugging this.
@ashleysommer
```
OSX Mojave 10.14.4 Beta (18E194d)
Python 3.7.2
sanic 18.12.0
uvloop 0.12.1
gevent 1.4.0
```
To be clear, this issue is not because of windows.
When invoking python interpreter with `-m` option, the current directory will be added to the start of sys.path.
However, The reload_logic use https://github.com/huge-success/sanic/blob/master/sanic/reloader_helpers.py#L36-L57 to start worker process. That command line will look like `python /path/to/start.py`, because if invoking python interpreter with `-m` option, the first element of sys.argv will be the full path to the module file.
The reload implementation is not robust (and buggy), as mentioned before.
I'd also prefer to re-implement the reload logic as @vltr discussed here, https://github.com/huge-success/sanic/issues/1346
And I personally use gunicorn for reloading in dev mode. | 2019-03-01T07:33:32 |
|
sanic-org/sanic | 1,527 | sanic-org__sanic-1527 | [
"1524"
] | c42731a55ca4ce095348353204639e36e714a9be | diff --git a/sanic/__init__.py b/sanic/__init__.py
--- a/sanic/__init__.py
+++ b/sanic/__init__.py
@@ -2,6 +2,6 @@
from sanic.blueprints import Blueprint
-__version__ = "18.12.0"
+__version__ = "19.03.0"
__all__ = ["Sanic", "Blueprint"]
| Publish 19.3 release to PyPI
Thank you for the release 3 days ago!
https://github.com/huge-success/sanic/releases/tag/19.3
It's missing from PyPI at the moment:
https://pypi.org/project/sanic/#history
Please publish it at your convenience.
Keep up the awesome work ❤️
| I think @sjsadowski is already looking into it.
Yeah we have a credentialing issue which we are working to resolve. I'll leave this open until 19.3 gets pushed. | 2019-03-21T23:26:56 |
|
sanic-org/sanic | 1,530 | sanic-org__sanic-1530 | [
"1524"
] | 669e2ed5b0973820b81cf10f001a9c12fdb2a142 | diff --git a/sanic/__init__.py b/sanic/__init__.py
--- a/sanic/__init__.py
+++ b/sanic/__init__.py
@@ -2,6 +2,6 @@
from sanic.blueprints import Blueprint
-__version__ = "19.03.0"
+__version__ = "19.03.1"
__all__ = ["Sanic", "Blueprint"]
| Publish 19.3 release to PyPI
Thank you for the release 3 days ago!
https://github.com/huge-success/sanic/releases/tag/19.3
It's missing from PyPI at the moment:
https://pypi.org/project/sanic/#history
Please publish it at your convenience.
Keep up the awesome work ❤️
| I think @sjsadowski is already looking into it.
Yeah we have a credentialing issue which we are working to resolve. I'll leave this open until 19.3 gets pushed.
See #1529 | 2019-03-22T23:45:28 |
|
sanic-org/sanic | 1,539 | sanic-org__sanic-1539 | [
"801"
] | 566940e0527021ac10420ffa00587c1e54d20bdb | diff --git a/sanic/config.py b/sanic/config.py
--- a/sanic/config.py
+++ b/sanic/config.py
@@ -27,6 +27,9 @@
"WEBSOCKET_WRITE_LIMIT": 2 ** 16,
"GRACEFUL_SHUTDOWN_TIMEOUT": 15.0, # 15 sec
"ACCESS_LOG": True,
+ "PROXIES_COUNT": -1,
+ "FORWARDED_FOR_HEADER": "X-Forwarded-For",
+ "REAL_IP_HEADER": "X-Real-IP",
}
diff --git a/sanic/request.py b/sanic/request.py
--- a/sanic/request.py
+++ b/sanic/request.py
@@ -355,19 +355,38 @@ def _get_address(self):
@property
def remote_addr(self):
- """Attempt to return the original client ip based on X-Forwarded-For.
+ """Attempt to return the original client ip based on X-Forwarded-For
+ or X-Real-IP. If HTTP headers are unavailable or untrusted, returns
+ an empty string.
:return: original client ip.
"""
if not hasattr(self, "_remote_addr"):
- forwarded_for = self.headers.get("X-Forwarded-For", "").split(",")
- remote_addrs = [
- addr
- for addr in [addr.strip() for addr in forwarded_for]
- if addr
- ]
- if len(remote_addrs) > 0:
- self._remote_addr = remote_addrs[0]
+ if self.app.config.PROXIES_COUNT == 0:
+ self._remote_addr = ""
+ elif self.app.config.REAL_IP_HEADER and self.headers.get(
+ self.app.config.REAL_IP_HEADER
+ ):
+ self._remote_addr = self.headers[
+ self.app.config.REAL_IP_HEADER
+ ]
+ elif self.app.config.FORWARDED_FOR_HEADER:
+ forwarded_for = self.headers.get(
+ self.app.config.FORWARDED_FOR_HEADER, ""
+ ).split(",")
+ remote_addrs = [
+ addr
+ for addr in [addr.strip() for addr in forwarded_for]
+ if addr
+ ]
+ if self.app.config.PROXIES_COUNT == -1:
+ self._remote_addr = remote_addrs[0]
+ elif len(remote_addrs) >= self.app.config.PROXIES_COUNT:
+ self._remote_addr = remote_addrs[
+ -self.app.config.PROXIES_COUNT
+ ]
+ else:
+ self._remote_addr = ""
else:
self._remote_addr = ""
return self._remote_addr
| diff --git a/tests/test_requests.py b/tests/test_requests.py
--- a/tests/test_requests.py
+++ b/tests/test_requests.py
@@ -29,7 +29,7 @@ def handler(request):
assert response.text == "Hello"
-def test_remote_address(app):
+def test_ip(app):
@app.route("/")
def handler(request):
return text("{}".format(request.ip))
@@ -203,11 +203,23 @@ async def handler(request):
assert response.text == "application/json"
-def test_remote_addr(app):
+def test_remote_addr_with_two_proxies(app):
+ app.config.PROXIES_COUNT = 2
+
@app.route("/")
async def handler(request):
return text(request.remote_addr)
+ headers = {"X-Real-IP": "127.0.0.2", "X-Forwarded-For": "127.0.1.1"}
+ request, response = app.test_client.get("/", headers=headers)
+ assert request.remote_addr == "127.0.0.2"
+ assert response.text == "127.0.0.2"
+
+ headers = {"X-Forwarded-For": "127.0.1.1"}
+ request, response = app.test_client.get("/", headers=headers)
+ assert request.remote_addr == ""
+ assert response.text == ""
+
headers = {"X-Forwarded-For": "127.0.0.1, 127.0.1.2"}
request, response = app.test_client.get("/", headers=headers)
assert request.remote_addr == "127.0.0.1"
@@ -222,6 +234,86 @@ async def handler(request):
assert request.remote_addr == "127.0.0.1"
assert response.text == "127.0.0.1"
+ headers = {
+ "X-Forwarded-For": ", 127.0.2.2, , ,127.0.0.1, , ,,127.0.1.2"
+ }
+ request, response = app.test_client.get("/", headers=headers)
+ assert request.remote_addr == "127.0.0.1"
+ assert response.text == "127.0.0.1"
+
+
+def test_remote_addr_with_infinite_number_of_proxies(app):
+ app.config.PROXIES_COUNT = -1
+
+ @app.route("/")
+ async def handler(request):
+ return text(request.remote_addr)
+
+ headers = {"X-Real-IP": "127.0.0.2", "X-Forwarded-For": "127.0.1.1"}
+ request, response = app.test_client.get("/", headers=headers)
+ assert request.remote_addr == "127.0.0.2"
+ assert response.text == "127.0.0.2"
+
+ headers = {"X-Forwarded-For": "127.0.1.1"}
+ request, response = app.test_client.get("/", headers=headers)
+ assert request.remote_addr == "127.0.1.1"
+ assert response.text == "127.0.1.1"
+
+ headers = {
+ "X-Forwarded-For": "127.0.0.5, 127.0.0.4, 127.0.0.3, 127.0.0.2, 127.0.0.1"
+ }
+ request, response = app.test_client.get("/", headers=headers)
+ assert request.remote_addr == "127.0.0.5"
+ assert response.text == "127.0.0.5"
+
+
+def test_remote_addr_without_proxy(app):
+ app.config.PROXIES_COUNT = 0
+
+ @app.route("/")
+ async def handler(request):
+ return text(request.remote_addr)
+
+ headers = {"X-Real-IP": "127.0.0.2", "X-Forwarded-For": "127.0.1.1"}
+ request, response = app.test_client.get("/", headers=headers)
+ assert request.remote_addr == ""
+ assert response.text == ""
+
+ headers = {"X-Forwarded-For": "127.0.1.1"}
+ request, response = app.test_client.get("/", headers=headers)
+ assert request.remote_addr == ""
+ assert response.text == ""
+
+ headers = {"X-Forwarded-For": "127.0.0.1, 127.0.1.2"}
+ request, response = app.test_client.get("/", headers=headers)
+ assert request.remote_addr == ""
+ assert response.text == ""
+
+
+def test_remote_addr_custom_headers(app):
+ app.config.PROXIES_COUNT = 1
+ app.config.REAL_IP_HEADER = "Client-IP"
+ app.config.FORWARDED_FOR_HEADER = "Forwarded"
+
+ @app.route("/")
+ async def handler(request):
+ return text(request.remote_addr)
+
+ headers = {"X-Real-IP": "127.0.0.2", "Forwarded": "127.0.1.1"}
+ request, response = app.test_client.get("/", headers=headers)
+ assert request.remote_addr == "127.0.1.1"
+ assert response.text == "127.0.1.1"
+
+ headers = {"X-Forwarded-For": "127.0.1.1"}
+ request, response = app.test_client.get("/", headers=headers)
+ assert request.remote_addr == ""
+ assert response.text == ""
+
+ headers = {"Client-IP": "127.0.0.2", "Forwarded": "127.0.1.1"}
+ request, response = app.test_client.get("/", headers=headers)
+ assert request.remote_addr == "127.0.0.2"
+ assert response.text == "127.0.0.2"
+
def test_match_info(app):
@app.route("/api/v1/user/<user_id>/")
| sanic behind proxy - option to replace IP of request.ip with X-Forwarded-For value
Dear Devs,
Please consider adding an option to replace request.ip with the X-Forwarded-For value; it would make life a lot easier for those of us who are running Sanic behind nginx or a load balancer.
I can get the real IP with request.headers.get('X-Forwarded-For'), but it would be good to have an option to set a custom header for Sanic to read the IP address from, so that it can be used in Sanic's logs, console output, etc.
It's quite common and as an example Flask has ProxyFix to address this.
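With the configuration keys this PR introduces, opting in might look like the following sketch (the key names come from the patch; the values are illustrative):
```python
from sanic import Sanic
from sanic.response import text

app = Sanic()

# Trust exactly one proxy hop and read the client address from these headers
app.config.PROXIES_COUNT = 1
app.config.REAL_IP_HEADER = "X-Real-IP"
app.config.FORWARDED_FOR_HEADER = "X-Forwarded-For"

@app.route("/")
async def handler(request):
    # Empty string when the headers are missing or not trusted
    return text(request.remote_addr)
```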
| #853 looks unconfigurable and unsafe, something like [werkzeug ProxyFix](https://github.com/pallets/werkzeug/blob/64d1d2117cc177b9caf18bb571f32133492978c3/werkzeug/contrib/fixers.py#L97) would be better
...but this is still unconfigurable and unsafe :/
Perhaps, but the original need was addressed, reviewed, and merged. @andreymal perhaps you can submit a new PR to address your concerns? | 2019-03-26T22:31:42 |
sanic-org/sanic | 1,549 | sanic-org__sanic-1549 | [
"1528"
] | 0b4769289a4219d3e188d89801f51301634aa2a2 | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -880,8 +880,6 @@ async def handle_request(self, request, write_callback, stream_callback):
# -------------------------------------------- #
# Request Middleware
# -------------------------------------------- #
-
- request.app = self
response = await self._run_request_middleware(request)
# No middleware results
if not response:
@@ -1287,6 +1285,7 @@ def _helper(
"port": port,
"sock": sock,
"ssl": ssl,
+ "app": self,
"signal": Signal(),
"debug": debug,
"request_handler": self.handle_request,
diff --git a/sanic/request.py b/sanic/request.py
--- a/sanic/request.py
+++ b/sanic/request.py
@@ -95,11 +95,11 @@ class Request(dict):
"version",
)
- def __init__(self, url_bytes, headers, version, method, transport):
+ def __init__(self, url_bytes, headers, version, method, transport, app):
self.raw_url = url_bytes
# TODO: Content-Encoding detection
self._parsed_url = parse_url(url_bytes)
- self.app = None
+ self.app = app
self.headers = headers
self.version = version
diff --git a/sanic/server.py b/sanic/server.py
--- a/sanic/server.py
+++ b/sanic/server.py
@@ -44,6 +44,8 @@ class HttpProtocol(asyncio.Protocol):
"""
__slots__ = (
+ # app
+ "app",
# event loop, connection
"loop",
"transport",
@@ -88,6 +90,7 @@ def __init__(
self,
*,
loop,
+ app,
request_handler,
error_handler,
signal=Signal(),
@@ -107,6 +110,7 @@ def __init__(
**kwargs
):
self.loop = loop
+ self.app = app
self.transport = None
self.request = None
self.parser = None
@@ -303,6 +307,7 @@ def on_headers_complete(self):
version=self.parser.get_http_version(),
method=self.parser.get_method().decode(),
transport=self.transport,
+ app=self.app,
)
# Remove any existing KeepAlive handler here,
# It will be recreated if required on the new request.
@@ -607,6 +612,7 @@ def trigger_events(events, loop):
def serve(
host,
port,
+ app,
request_handler,
error_handler,
before_start=None,
@@ -704,6 +710,7 @@ def serve(
loop=loop,
connections=connections,
signal=signal,
+ app=app,
request_handler=request_handler,
error_handler=error_handler,
request_timeout=request_timeout,
| Access log writing during request timeout causes exception
**Describe the bug**
On request timeout, the attempt from Server write_error method to log to access log fails with the following error
```
Exception in callback <bound method WebSocketProtocol.request_timeout_callback of <sanic.websocket.WebSocketProtocol object at 0x7fe2485d1b40>> handle: <TimerHandle WebSocketProtocol.request_timeout_callback>
,stack_trace: Traceback (most recent call last):
File "uvloop/cbhandles.pyx", line 251, in uvloop.loop.TimerHandle._run
File "/venv/lib/python3.6/site-packages/sanic/websocket.py", line 31, in request_timeout_callback
super().request_timeout_callback()
File "/venv/lib/python3.6/site-packages/sanic/server.py", line 197, in request_timeout_callback
self.write_error(RequestTimeout("Request Timeout"))
File "/venv/lib/python3.6/site-packages/sanic/server.py", line 488, in write_error
self.log_response(response)
File "/venv/lib/python3.6/site-packages/sanic/server.py", line 357, in log_response
self.request.method, self.request.url
File "/venv/lib/python3.6/site-packages/sanic/request.py", line 301, in url
(self.scheme, self.host, self.path, None, self.query_string, None)
File "/venv/lib/python3.6/site-packages/sanic/request.py", line 260, in scheme
self.app.websocket_enabled
AttributeError: 'NoneType' object has no attribute 'websocket_enabled'
```
**Code snippet**
I think this is a generic problem..
**Expected behavior**
A request timeout and a closing of the client connection..
**Environment (please complete the following information):**
- OS: alpine 3.6
- Version: 18.12.0
**Additional context**
The request timeout was a very bad one: the connection_made callback on the protocol was triggered, but handle_request was never triggered (this is a guess); the event loop was probably stuck for a different reason. I am still debugging.
| Yeah, it's basically because `handle_request ` has not been triggered, and it didn't reach here https://github.com/huge-success/sanic/blob/master/sanic/app.py#L884
And not sure why we need this check https://github.com/huge-success/sanic/blob/master/sanic/request.py#L378 | 2019-04-10T18:05:47 |
|
sanic-org/sanic | 1,553 | sanic-org__sanic-1553 | [
"1551"
] | 53f45810ffd38969cd9a24ad9d428ff1dee44378 | diff --git a/examples/log_request_id.py b/examples/log_request_id.py
--- a/examples/log_request_id.py
+++ b/examples/log_request_id.py
@@ -76,7 +76,7 @@ async def test(request):
if __name__ == '__main__':
asyncio.set_event_loop(uvloop.new_event_loop())
- server = app.create_server(host="0.0.0.0", port=8000)
+ server = app.create_server(host="0.0.0.0", port=8000, return_asyncio_server=True)
loop = asyncio.get_event_loop()
loop.set_task_factory(context.task_factory)
task = asyncio.ensure_future(server)
diff --git a/examples/run_async.py b/examples/run_async.py
--- a/examples/run_async.py
+++ b/examples/run_async.py
@@ -12,7 +12,7 @@ async def test(request):
return response.json({"answer": "42"})
asyncio.set_event_loop(uvloop.new_event_loop())
-server = app.create_server(host="0.0.0.0", port=8000)
+server = app.create_server(host="0.0.0.0", port=8000, return_asyncio_server=True)
loop = asyncio.get_event_loop()
task = asyncio.ensure_future(server)
signal(SIGINT, lambda s, f: loop.stop())
| Unable to start server -- Running run_async.py failed
**Describe the bug**
[2019-04-14 19:22:02 +0800] [21512] [INFO] Goin' Fast @ http://0.0.0.0:8000
[2019-04-14 19:22:02 +0800] [21512] [ERROR] Unable to start server
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\envs\venom\lib\site-packages\sanic\server.py", line 745, in serve
http_server = loop.run_until_complete(server_coroutine)
File "C:\ProgramData\Anaconda3\envs\venom\lib\asyncio\base_events.py", line 571, in run_until_complete
self.run_forever()
File "C:\ProgramData\Anaconda3\envs\venom\lib\asyncio\base_events.py", line 529, in run_forever
'Cannot run the event loop while another loop is running')
RuntimeError: Cannot run the event loop while another loop is running
**Code snippet**
Relevant source code, make sure to remove what is not necessary.
https://github.com/huge-success/sanic/blob/master/examples/run_async.py
**Expected behavior**
A clear and concise description of what you expected to happen.
**Environment (please complete the following information):**
- OS: [e.g. iOS]
- Version [e.g. 0.8.3]
Windows and Linux, Python 3.6 or 3.7 don't work
**Additional context**
Add any other context about the problem here.
Does this example still work?
| It does... but there was a slight change. You are running `0.8.3`?
If you are in a newer release, then you would need to add `return_asyncio_server=True` to `create_server(...)`
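Concretely, the example boils down to something like this sketch (it mirrors `examples/run_async.py` as changed in this PR; assumes uvloop is installed):
```python
import asyncio
from signal import SIGINT, signal

import uvloop
from sanic import Sanic, response

app = Sanic(__name__)

@app.route("/")
async def test(request):
    return response.json({"answer": "42"})

asyncio.set_event_loop(uvloop.new_event_loop())
# return_asyncio_server=True is the piece that newer releases require
server = app.create_server(host="0.0.0.0", port=8000, return_asyncio_server=True)
loop = asyncio.get_event_loop()
task = asyncio.ensure_future(server)
signal(SIGINT, lambda s, f: loop.stop())
try:
    loop.run_forever()
except KeyboardInterrupt:
    loop.stop()
```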
I ran into this issue too. The fix proposed works. I think the documentation needs an update on this point.
@jrmi Agreed. I was planning on pushing a change to this when I get a chance tomorrow. Unless of course, you'd like to push a PR? | 2019-04-15T20:20:43 |
|
sanic-org/sanic | 1,559 | sanic-org__sanic-1559 | [
"1557"
] | b68a7fe7ae1a6abcce890aaeec77c2915f1220dd | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -96,6 +96,7 @@ def open_local(paths, mode="r", encoding="utf8"):
ujson,
"pytest-sanic",
"pytest-sugar",
+ "pytest-benchmark",
]
if strtobool(os.environ.get("SANIC_NO_UJSON", "no")):
| 2 failed tests when tox is not used (missing fixture "benchmark")
`pytest-benchmark` is not present in `tests_require`, so there are 2 failed tests in `tests/benchmark/test_route_resolution_benchmark.py` when tox is not used.
This requirement is present in `tox.ini` so tox and Travis CI are working fine.
(I don't know what's a better fix โ disable the benchmark tests or add `pytest-benchmark` to `tests_require`, so I didn't create a PR)
| add to tests_require | 2019-04-19T14:32:00 |
|
sanic-org/sanic | 1,600 | sanic-org__sanic-1600 | [
"1587"
] | c15158224b873d8686f2960b73958e0011d2a877 | diff --git a/sanic/exceptions.py b/sanic/exceptions.py
--- a/sanic/exceptions.py
+++ b/sanic/exceptions.py
@@ -218,6 +218,11 @@ def __init__(self, message, content_range):
}
+@add_status_code(417)
+class HeaderExpectationFailed(SanicException):
+ pass
+
+
@add_status_code(403)
class Forbidden(SanicException):
pass
diff --git a/sanic/request.py b/sanic/request.py
--- a/sanic/request.py
+++ b/sanic/request.py
@@ -29,7 +29,7 @@ def json_loads(data):
DEFAULT_HTTP_CONTENT_TYPE = "application/octet-stream"
-
+EXPECT_HEADER = "EXPECT"
# HTTP/1.1: https://www.w3.org/Protocols/rfc2616/rfc2616-sec7.html#sec7.2.1
# > If the media type remains unknown, the recipient SHOULD treat it
diff --git a/sanic/server.py b/sanic/server.py
--- a/sanic/server.py
+++ b/sanic/server.py
@@ -15,6 +15,7 @@
from multidict import CIMultiDict
from sanic.exceptions import (
+ HeaderExpectationFailed,
InvalidUsage,
PayloadTooLarge,
RequestTimeout,
@@ -22,7 +23,7 @@
ServiceUnavailable,
)
from sanic.log import access_logger, logger
-from sanic.request import Request, StreamBuffer
+from sanic.request import EXPECT_HEADER, Request, StreamBuffer
from sanic.response import HTTPResponse
@@ -314,6 +315,10 @@ def on_headers_complete(self):
if self._keep_alive_timeout_handler:
self._keep_alive_timeout_handler.cancel()
self._keep_alive_timeout_handler = None
+
+ if self.request.headers.get(EXPECT_HEADER):
+ self.expect_handler()
+
if self.is_request_stream:
self._is_stream_handler = self.router.is_stream_handler(
self.request
@@ -324,6 +329,21 @@ def on_headers_complete(self):
)
self.execute_request_handler()
+ def expect_handler(self):
+ """
+ Handler for Expect Header.
+ """
+ expect = self.request.headers.get(EXPECT_HEADER)
+ if self.request.version == "1.1":
+ if expect.lower() == "100-continue":
+ self.transport.write(b"HTTP/1.1 100 Continue\r\n\r\n")
+ else:
+ self.write_error(
+ HeaderExpectationFailed(
+ "Unknown Expect: {expect}".format(expect=expect)
+ )
+ )
+
def on_body(self, body):
if self.is_request_stream and self._is_stream_handler:
self._request_stream_task = self.loop.create_task(
| diff --git a/tests/test_request_stream.py b/tests/test_request_stream.py
--- a/tests/test_request_stream.py
+++ b/tests/test_request_stream.py
@@ -1,4 +1,6 @@
+import pytest
from sanic.blueprints import Blueprint
+from sanic.exceptions import HeaderExpectationFailed
from sanic.request import StreamBuffer
from sanic.response import stream, text
from sanic.views import CompositionView, HTTPMethodView
@@ -40,6 +42,38 @@ async def post(self, request):
assert response.text == data
[email protected]("headers, expect_raise_exception", [
+({"EXPECT": "100-continue"}, False),
+({"EXPECT": "100-continue-extra"}, True),
+])
+def test_request_stream_100_continue(app, headers, expect_raise_exception):
+ class SimpleView(HTTPMethodView):
+
+ @stream_decorator
+ async def post(self, request):
+ assert isinstance(request.stream, StreamBuffer)
+ result = ""
+ while True:
+ body = await request.stream.read()
+ if body is None:
+ break
+ result += body.decode("utf-8")
+ return text(result)
+
+ app.add_route(SimpleView.as_view(), "/method_view")
+
+ assert app.is_request_stream is True
+
+ if not expect_raise_exception:
+ request, response = app.test_client.post("/method_view", data=data, headers={"EXPECT": "100-continue"})
+ assert response.status == 200
+ assert response.text == data
+ else:
+ with pytest.raises(ValueError) as e:
+ app.test_client.post("/method_view", data=data, headers={"EXPECT": "100-continue-extra"})
+ assert "Unknown Expect: 100-continue-extra" in str(e)
+
+
def test_request_stream_app(app):
"""for self.is_request_stream = True and decorators"""
| Request Streaming is extremely slow
**Describe the bug**
I created a simple POC app to accept streaming binary data. The syntax is compliant with the official docs. The app works as expected, but the response time is more than 1000 ms for even small data sizes (1.1K). In the snippet below I tried uploading 2 files - `tiny.txt` (57B) and `small.txt` (1137B). File `tiny.txt` took 0.021s and `small.txt` took 1.026s on average. Testing was done on the same host via a loopback, so no network delay is involved. I think the issue is caused by Sanic not responding with 100-continue, so the client wastes time waiting for it.
**Code snippet**
Source code:
```
from sanic import Sanic
from sanic.response import stream, text
app = Sanic('request_stream')
async def detect_handler(request):
result = bytes()
while True:
body = await request.stream.read()
if body is None:
break
return text('done waiting!')
app.add_route(detect_handler, '/api/v1/detect', methods=['POST'], stream=True)
if __name__ == '__main__':
app.run(host='127.0.0.1', port=8000)
```
curl testing with Sanic:
```
$ time curl -vvv -H "Transfer-Encoding: chunked" --data-binary @small.txt http://127.0.0.1:8000/api/v1/detect
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> POST /api/v1/detect HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.54.0
> Accept: */*
> Transfer-Encoding: chunked
> Content-Type: application/x-www-form-urlencoded
> Expect: 100-continue
>
* Done waiting for 100-continue
< HTTP/1.1 200 OK
< Connection: keep-alive
< Keep-Alive: 5
< Content-Length: 13
< Content-Type: text/plain; charset=utf-8
<
* Connection #0 to host 127.0.0.1 left intact
done waiting!
real 0m1.028s
```
curl testing with a similar Golang/go-chi based app that returns 100-continue:
```$ time curl -vvv -H "Transfer-Encoding: chunked" --data-binary @big41K.wav http://127.0.0.1:8000/detect
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> POST /detect HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.54.0
> Accept: */*
> Transfer-Encoding: chunked
> Content-Type: application/x-www-form-urlencoded
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
< Date: Wed, 22 May 2019 22:40:40 GMT
< Content-Length: 38
< Content-Type: text/plain; charset=utf-8
<
{
"predictions": [0.911111]
* Connection #0 to host 127.0.0.1 left intact
}
real 0m0.040s
```
**Expected behavior**
I'd expect all small POSTs to take less than 50 ms
**Environment (please complete the following information):**
- OS: MacOS High Sierra 10.13.6, Python 3.7.3
- Sanic Version 19.3.1
| which version of Sanic did you run this on?
Sanic version 19.3.1. As was mentioned in #1535, the correct behavior would be to honor client's request for `100-Continue`. Python's `requests` lib doesn't expect 100-continue, but curl does and there might be other clients which do. I'm also going to test with JavaScript `request` lib.
@Leonidimus Sorry, I didn't see you had provided the details in your original post. I edited it to move the ` ``` ` so that it would format properly.
I am taking a look at this also as it relates to ASGI #1475.
My machine:
```
Python 3.7.3
5.1.3-arch1-1-ARCH
AMD Ryzen 5 2600 Six-Core Processor
```
My results when I run your same code. Granted I am not sure how big your `small.txt` is. Mine is only `13B`.
```
time curl -vvv -H "Transfer-Encoding: chunked" --data-binary @small.txt http://127.0.0.1:8000/api/v1/detect
Warning: Couldn't read data from file "small.txt", this makes an empty POST.
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> POST /api/v1/detect HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.64.1
> Accept: */*
> Transfer-Encoding: chunked
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 5 out of 0 bytes
< HTTP/1.1 200 OK
< Connection: keep-alive
< Keep-Alive: 5
< Content-Length: 13
< Content-Type: text/plain; charset=utf-8
<
* Connection #0 to host 127.0.0.1 left intact
done waiting!
* Closing connection 0
curl -vvv -H "Transfer-Encoding: chunked" --data-binary @small.txt 0.00s user 0.00s system 84% cpu 0.004 total
```
Regardless, I am not seeing the long delay times that you are on the Sanic server.
I tried the test also using ASGI servers `uvicorn` and `hypercorn`.
`uvicorn server:app`
It does not work out of the box without sending the `Expect: 100-continue` header (for now; I still have some work to do on streaming). But if I add the `Expect` header to the request, I get the `100 Continue` interim response.
```
time curl -vvv -H "Transfer-Encoding: chunked" -H "Expect: 100-continue" --data-binary @small.txt http://127.0.0.1:8000/api/v1/detect
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> POST /api/v1/detect HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.64.1
> Accept: */*
> Transfer-Encoding: chunked
> Expect: 100-continue
> Content-Type: application/x-www-form-urlencoded
>
< HTTP/1.1 100 Continue
* Signaling end of chunked upload via terminating chunk.
< HTTP/1.1 200 OK
< date: Thu, 23 May 2019 07:29:10 GMT
< server: uvicorn
< content-length: 13
<
* Connection #0 to host 127.0.0.1 left intact
done waiting!
* Closing connection 0
curl -vvv -H "Transfer-Encoding: chunked" -H "Expect: 100-continue" 0.01s user 0.00s system 87% cpu 0.008 total
```
Same results with hypercorn
`hypercorn server:app`
```
time curl -vvv -H "Transfer-Encoding: chunked" -H "Expect: 100-continue" --data-binary @small.txt http://127.0.0.1:8000/api/v1/detect
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> POST /api/v1/detect HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.64.1
> Accept: */*
> Transfer-Encoding: chunked
> Expect: 100-continue
> Content-Type: application/x-www-form-urlencoded
>
< HTTP/1.1 100
< date: Thu, 23 May 2019 07:29:46 GMT
< server: hypercorn-h11
* Signaling end of chunked upload via terminating chunk.
< HTTP/1.1 200
< content-length: 13
< date: Thu, 23 May 2019 07:29:46 GMT
< server: hypercorn-h11
<
* Connection #0 to host 127.0.0.1 left intact
done waiting!
* Closing connection 0
curl -vvv -H "Transfer-Encoding: chunked" -H "Expect: 100-continue" 0.00s user 0.00s system 54% cpu 0.006 total
```
As a side note, hypercorn is also okay without the `Expect` header:
```
time curl -vvv -H "Transfer-Encoding: chunked" --data-binary @small.txt http://127.0.0.1:8000/api/v1/detect
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> POST /api/v1/detect HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.64.1
> Accept: */*
> Transfer-Encoding: chunked
> Content-Type: application/x-www-form-urlencoded
>
> e
* upload completely sent off: 21 out of 14 bytes
< HTTP/1.1 200
< content-length: 13
< date: Thu, 23 May 2019 07:30:30 GMT
< server: hypercorn-h11
<
* Connection #0 to host 127.0.0.1 left intact
done waiting!
* Closing connection 0
curl -vvv -H "Transfer-Encoding: chunked" --data-binary @small.txt 0.00s user 0.00s system 79% cpu 0.006 total
```
---
So, when 19.6 is released (assuming we complete ASGI support by then), this will be "fixed" by using one of the ASGI servers.
The questions that I believe still need to be answered:
1. Should we add this to the Sanic server?
1. Should there be a `100 Continue` response even if the client does not request it?
My thoughts are that (1) yes, we should respond to `Expect: 100 Continue`, and (2) no, we should not add the response if it is not requested.
I could be wrong on point 2, and I am open to debate, but my reading of [RFC 7231](https://tools.ietf.org/html/rfc7231#section-6.2.1) is that the response is not **required** if it is not requested.
> 6.2.1. 100 Continue
>
> The 100 (Continue) status code indicates that the initial part of a
request has been received and has not yet been rejected by the
server. The server intends to send a final response after the
request has been fully received and acted upon.
>
> When the request contains an Expect header field that includes a
100-continue expectation, the 100 response indicates that the server
wishes to receive the request payload body, as described in
Section 5.1.1. The client ought to continue sending the request and
discard the 100 response.
>
> If the request did not contain an Expect header field containing the
100-continue expectation, the client can simply discard this interim
response.
--
I am removing the Bug label because I do not think this is a bug per se, and more of a feature request.
It should also be noted that the last go round with 'sanic is slow when I test it with curl' the culprit ended up being curl; as people tested with other methods the slowness could not be reproduced, but could be reproduced with curl.
We probably could add this support
@ahopkins Thanks for quick responses! Now that I see the project is actively supported, I can safely continue using the framework :)
My `small.txt` file was about 1.1K. The issue doesn't happen for really tiny files. I agree it's a client specific issue (curl), but still it would be correct behavior to respect client's request for 100-continue.
BTW, I tested with a popular JS library and a 22M file and it worked very quickly.
Code:
```
const request = require('request-promise');
const fs = require('fs');
const start = Date.now();
fs.createReadStream('terraform_0.11.13_darwin_amd64.zip')
.pipe(request.post('http://127.0.0.1:8000/api/v1/detect'))
.then( result => {
console.log(result);
console.log("Elapsed time, ms:", Date.now()-start);
});
```
Output:
```
$ node request.js
done waiting!
Elapsed time, ms: 59
```
> Now that I see the project is actively supported, I can safely continue using the framework :)
There is a core team of developers that are working on Sanic in a community. Part of our reasoning for moving to the community supported model was to foster an ongoing group of developers so that it would stay active. I am glad you feel this way :smile:
I updated my `small.txt` to a larger file. Still runs ok for me. :thinking: If anyone else has any experience I'd be interested to hear.
```
╭─adam@thebrewery ~/Projects/Sanic/playground/issue1587
╰─$ ls -l
drwxr-xr-x adam users - Thu May 23 17:23:34 2019 __pycache__
.rw-r--r-- adam users 516 B Thu May 23 17:23:34 2019 server.py
.rw-r--r-- adam users 29.9 MB Thu May 23 23:08:42 2019 small.txt
╭─adam@thebrewery ~/Projects/Sanic/playground/issue1587
╰─$ time curl -vvv -H "Transfer-Encoding: chunked" -H "Expect: 100-continue" --data-binary @small.txt http://127.0.0.1:8000/api/v1/detect
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> POST /api/v1/detect HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.64.1
> Accept: */*
> Transfer-Encoding: chunked
> Expect: 100-continue
> Content-Type: application/x-www-form-urlencoded
>
< HTTP/1.1 100 Continue
* Signaling end of chunked upload via terminating chunk.
< HTTP/1.1 200 OK
< date: Thu, 23 May 2019 20:08:49 GMT
< server: uvicorn
< content-length: 13
<
* Connection #0 to host 127.0.0.1 left intact
done waiting!
* Closing connection 0
curl -vvv -H "Transfer-Encoding: chunked" -H "Expect: 100-continue" 0.01s user 0.02s system 54% cpu 0.042 total
```
---
@yunstanford :muscle:
I added this to 19.9, but if anyone thinks then can handle providing `100 Continue` responses before then, maybe we can get it into 19.6.
> It should also be noted that the last go round with 'sanic is slow when I test it with curl' the culprit ended up being curl
Just to give more insight, libcurl by default waits 1 second for a 100 response before timing out and continuing the request. There's a thread about this behaviour: https://curl.haxx.se/mail/lib-2017-07/0013.html
@ahopkins Is anyone currently working on this? I would be glad to help.
@LTMenezes Would be happy to have you help take a stab at this.
Check with @yunstanford, looks like he self-assigned this so he may already be working on it. | 2019-06-04T05:13:35 |
sanic-org/sanic | 1,625 | sanic-org__sanic-1625 | [
"1623"
] | 68d5039c5f36e4e7be75aa5ea893ecdc29ab38a0 | diff --git a/sanic/asgi.py b/sanic/asgi.py
--- a/sanic/asgi.py
+++ b/sanic/asgi.py
@@ -1,7 +1,6 @@
import asyncio
import warnings
-from http.cookies import SimpleCookie
from inspect import isawaitable
from typing import Any, Awaitable, Callable, MutableMapping, Union
from urllib.parse import quote
@@ -288,11 +287,22 @@ async def stream_callback(self, response: HTTPResponse) -> None:
"""
Write the response.
"""
-
+ headers = []
+ cookies = {}
try:
- headers = [
+ cookies = {
+ v.key: v
+ for _, v in list(
+ filter(
+ lambda item: item[0].lower() == "set-cookie",
+ response.headers.items(),
+ )
+ )
+ }
+ headers += [
(str(name).encode("latin-1"), str(value).encode("latin-1"))
for name, value in response.headers.items()
+ if name.lower() not in ["set-cookie"]
]
except AttributeError:
logger.error(
@@ -319,12 +329,18 @@ async def stream_callback(self, response: HTTPResponse) -> None:
]
if response.cookies:
- cookies = SimpleCookie()
- cookies.load(response.cookies)
- headers += [
- (b"set-cookie", cookie.encode("utf-8"))
- for name, cookie in response.cookies.items()
- ]
+ cookies.update(
+ {
+ v.key: v
+ for _, v in response.cookies.items()
+ if v.key not in cookies.keys()
+ }
+ )
+
+ headers += [
+ (b"set-cookie", cookie.encode("utf-8"))
+ for k, cookie in cookies.items()
+ ]
await self.transport.send(
{
| diff --git a/tests/test_asgi.py b/tests/test_asgi.py
--- a/tests/test_asgi.py
+++ b/tests/test_asgi.py
@@ -229,3 +229,30 @@ def custom_request(request):
_, response = await app.asgi_client.get("/custom")
assert response.body == b"MyCustomRequest"
+
+
[email protected]
+async def test_cookie_customization(app):
+ @app.get("/cookie")
+ def get_cookie(request):
+ response = text("There's a cookie up in this response")
+ response.cookies["test"] = "Cookie1"
+ response.cookies["test"]["httponly"] = True
+
+ response.cookies["c2"] = "Cookie2"
+ response.cookies["c2"]["httponly"] = False
+
+ return response
+
+ _, response = await app.asgi_client.get("/cookie")
+ cookie_map = {
+ "test": {"value": "Cookie1", "HttpOnly": True},
+ "c2": {"value": "Cookie2", "HttpOnly": False},
+ }
+
+ for k, v in (
+ response.cookies._cookies.get("mockserver.local").get("/").items()
+ ):
+ assert cookie_map.get(k).get("value") == v.value
+ if cookie_map.get(k).get("HttpOnly"):
+ assert "HttpOnly" in v._rest.keys()
diff --git a/tests/test_requests.py b/tests/test_requests.py
--- a/tests/test_requests.py
+++ b/tests/test_requests.py
@@ -635,13 +635,15 @@ async def handler(request):
return text(request.remote_addr)
request, response = app.test_client.get("/")
- assert request.scheme == 'http'
+ assert request.scheme == "http"
- request, response = app.test_client.get("/", headers={'X-Forwarded-Proto': 'https'})
- assert request.scheme == 'https'
+ request, response = app.test_client.get(
+ "/", headers={"X-Forwarded-Proto": "https"}
+ )
+ assert request.scheme == "https"
- request, response = app.test_client.get("/", headers={'X-Scheme': 'https'})
- assert request.scheme == 'https'
+ request, response = app.test_client.get("/", headers={"X-Scheme": "https"})
+ assert request.scheme == "https"
def test_match_info(app):
@@ -1677,7 +1679,7 @@ def handler(request):
return text("OK")
request, response = app.test_client.get("/")
- assert request.server_name == '127.0.0.1'
+ assert request.server_name == "127.0.0.1"
def test_request_server_name_in_host_header(app):
@@ -1685,8 +1687,10 @@ def test_request_server_name_in_host_header(app):
def handler(request):
return text("OK")
- request, response = app.test_client.get("/", headers={'Host': 'my_server:5555'})
- assert request.server_name == 'my_server'
+ request, response = app.test_client.get(
+ "/", headers={"Host": "my_server:5555"}
+ )
+ assert request.server_name == "my_server"
def test_request_server_name_forwarded(app):
@@ -1694,11 +1698,11 @@ def test_request_server_name_forwarded(app):
def handler(request):
return text("OK")
- request, response = app.test_client.get("/", headers={
- 'Host': 'my_server:5555',
- 'X-Forwarded-Host': 'your_server'
- })
- assert request.server_name == 'your_server'
+ request, response = app.test_client.get(
+ "/",
+ headers={"Host": "my_server:5555", "X-Forwarded-Host": "your_server"},
+ )
+ assert request.server_name == "your_server"
def test_request_server_port(app):
@@ -1706,9 +1710,7 @@ def test_request_server_port(app):
def handler(request):
return text("OK")
- request, response = app.test_client.get("/", headers={
- 'Host': 'my_server'
- })
+ request, response = app.test_client.get("/", headers={"Host": "my_server"})
assert request.server_port == app.test_client.port
@@ -1717,9 +1719,9 @@ def test_request_server_port_in_host_header(app):
def handler(request):
return text("OK")
- request, response = app.test_client.get("/", headers={
- 'Host': 'my_server:5555'
- })
+ request, response = app.test_client.get(
+ "/", headers={"Host": "my_server:5555"}
+ )
assert request.server_port == 5555
@@ -1728,10 +1730,9 @@ def test_request_server_port_forwarded(app):
def handler(request):
return text("OK")
- request, response = app.test_client.get("/", headers={
- 'Host': 'my_server:5555',
- 'X-Forwarded-Port': '4444'
- })
+ request, response = app.test_client.get(
+ "/", headers={"Host": "my_server:5555", "X-Forwarded-Port": "4444"}
+ )
assert request.server_port == 4444
@@ -1754,29 +1755,34 @@ def handler(request):
def view_name(request):
return text("OK")
- request, response = app.test_client.get("/", headers={
- 'X-Forwarded-Proto': 'https',
- })
- assert app.url_for('view_name') == '/another_view'
- assert app.url_for('view_name', _external=True) == 'http:///another_view'
- assert request.url_for('view_name') == 'https://127.0.0.1:{}/another_view'.format(app.test_client.port)
+ request, response = app.test_client.get(
+ "/", headers={"X-Forwarded-Proto": "https"}
+ )
+ assert app.url_for("view_name") == "/another_view"
+ assert app.url_for("view_name", _external=True) == "http:///another_view"
+ assert request.url_for(
+ "view_name"
+ ) == "https://127.0.0.1:{}/another_view".format(app.test_client.port)
app.config.SERVER_NAME = "my_server"
- request, response = app.test_client.get("/", headers={
- 'X-Forwarded-Proto': 'https',
- 'X-Forwarded-Port': '6789',
- })
- assert app.url_for('view_name') == '/another_view'
- assert app.url_for('view_name', _external=True) == 'http://my_server/another_view'
- assert request.url_for('view_name') == 'https://my_server:6789/another_view'
-
- request, response = app.test_client.get("/", headers={
- 'X-Forwarded-Proto': 'https',
- 'X-Forwarded-Port': '443',
- })
- assert request.url_for('view_name') == 'https://my_server/another_view'
-
-
+ request, response = app.test_client.get(
+ "/", headers={"X-Forwarded-Proto": "https", "X-Forwarded-Port": "6789"}
+ )
+ assert app.url_for("view_name") == "/another_view"
+ assert (
+ app.url_for("view_name", _external=True)
+ == "http://my_server/another_view"
+ )
+ assert (
+ request.url_for("view_name") == "https://my_server:6789/another_view"
+ )
+
+ request, response = app.test_client.get(
+ "/", headers={"X-Forwarded-Proto": "https", "X-Forwarded-Port": "443"}
+ )
+ assert request.url_for("view_name") == "https://my_server/another_view"
+
+
@pytest.mark.asyncio
async def test_request_form_invalid_content_type_asgi(app):
@app.route("/", methods=["POST"])
@@ -1787,7 +1793,7 @@ async def post(request):
assert request.form == {}
-
+
def test_endpoint_basic():
app = Sanic()
| using AGSI the Set-Cookie header is send twice, one correct, one wrong
**Describe the bug**
When using a AGSI server (unicorn) the Set-Cookie header is send twice, one time correctly formatted, one time formatted as a json dict. See curl response below.
```
HTTP/1.1 200 OK
date: Sat, 06 Jul 2019 20:01:13 GMT
server: uvicorn
set-cookie: {'path': '/', 'httponly': True}
content-length: 36
set-cookie: test="It worked!"; Path=/; HttpOnly
```
**Code snippet**
```
import logging
from sanic import Sanic
from sanic.response import json, redirect, text, html, stream, raw
logger = logging.getLogger(__name__)
app = Sanic(__name__)
@app.get("/test", name='api')
async def cookie_test(request):
response = text("There's a cookie up in this response")
response.cookies['test'] = 'It worked!'
response.cookies['test']['httponly'] = True
logger.info(response.headers)
return response
def main():
uvicorn.run(app, host="0.0.0.0", port=5000, debug=True, access_log=False)
```
**Expected behavior**
The first Set-Cookie header should not be send.
```
HTTP/1.1 200 OK
date: Sat, 06 Jul 2019 20:01:13 GMT
server: uvicorn
content-length: 36
set-cookie: test="It worked!"; Path=/; HttpOnly
```
**Environment (please complete the following information):**
- OS: Lunix
- Version: 19.6.0
**Additional context**
The problem seems to happen when the headers are turned into bytes in the sanic AGSI module. The Cookies are already in de response headers because the CookieJar is adding the cookies to the headers. When `str()` is called on the Cookie object the object in converted to a json string.
[L293-L296](https://github.com/huge-success/sanic/blob/master/sanic/asgi.py#L293-L296)
**A possible solution**
First asume that the header value is a string or a other object with a `encode` function if not convert it to a string. This is similair to the `_parse_headers` function in the response class.
[_parse_headers (L32-L46)](https://github.com/huge-success/sanic/blob/master/sanic/response.py#L32-L46)
If this is implemented it won't be needed to add the cookies after the headers are converted to bytes. the lines below can be removed.
[L321-L327](https://github.com/huge-success/sanic/blob/master/sanic/asgi.py#L321-L327)
| @loek17 Agreed. This is an issue that needs to be fixed. Let me open a PR to address the same. | 2019-07-08T07:34:22 |
sanic-org/sanic | 1,654 | sanic-org__sanic-1654 | [
"1652"
] | a15d9552c4d7dc46c60322bf7c8911d122f67285 | diff --git a/sanic/asgi.py b/sanic/asgi.py
--- a/sanic/asgi.py
+++ b/sanic/asgi.py
@@ -328,6 +328,11 @@ async def stream_callback(self, response: HTTPResponse) -> None:
(b"content-length", str(len(response.body)).encode("latin-1"))
]
+ if "content-type" not in response.headers:
+ headers += [
+ (b"content-type", str(response.content_type).encode("latin-1"))
+ ]
+
if response.cookies:
cookies.update(
{
| diff --git a/tests/test_asgi.py b/tests/test_asgi.py
--- a/tests/test_asgi.py
+++ b/tests/test_asgi.py
@@ -9,7 +9,7 @@
from sanic.asgi import MockTransport
from sanic.exceptions import InvalidUsage
from sanic.request import Request
-from sanic.response import text
+from sanic.response import json, text
from sanic.websocket import WebSocketConnection
@@ -256,3 +256,27 @@ def get_cookie(request):
assert cookie_map.get(k).get("value") == v.value
if cookie_map.get(k).get("HttpOnly"):
assert "HttpOnly" in v._rest.keys()
+
+
[email protected]
+async def test_json_content_type(app):
+ @app.get("/json")
+ def send_json(request):
+ return json({"foo": "bar"})
+
+ @app.get("/text")
+ def send_text(request):
+ return text("foobar")
+
+ @app.get("/custom")
+ def send_custom(request):
+ return text("foobar", content_type="somethingelse")
+
+ _, response = await app.asgi_client.get("/json")
+ assert response.headers.get("content-type") == "application/json"
+
+ _, response = await app.asgi_client.get("/text")
+ assert response.headers.get("content-type") == "text/plain; charset=utf-8"
+
+ _, response = await app.asgi_client.get("/custom")
+ assert response.headers.get("content-type") == "somethingelse"
| The response.content_type is not add to headers in ASGI
Perhaps the response.content_type is add to headers here.
| 2019-08-11T08:30:20 |
|
sanic-org/sanic | 1,666 | sanic-org__sanic-1666 | [
"1309"
] | 1e4b1c4d1a907af559dcf91b697debd60d94bbd7 | diff --git a/sanic/request.py b/sanic/request.py
--- a/sanic/request.py
+++ b/sanic/request.py
@@ -6,6 +6,7 @@
from collections import defaultdict, namedtuple
from http.cookies import SimpleCookie
+from types import SimpleNamespace
from urllib.parse import parse_qs, parse_qsl, unquote, urlunparse
from httptools import parse_url
@@ -71,7 +72,7 @@ def is_full(self):
return self._queue.full()
-class Request(dict):
+class Request:
"""Properties of an HTTP request such as URL, headers, etc."""
__slots__ = (
@@ -84,6 +85,7 @@ class Request(dict):
"_socket",
"app",
"body",
+ "ctx",
"endpoint",
"headers",
"method",
@@ -113,6 +115,7 @@ def __init__(self, url_bytes, headers, version, method, transport, app):
# Init but do not inhale
self.body_init()
+ self.ctx = SimpleNamespace()
self.parsed_forwarded = None
self.parsed_json = None
self.parsed_form = None
@@ -129,10 +132,30 @@ def __repr__(self):
self.__class__.__name__, self.method, self.path
)
- def __bool__(self):
- if self.transport:
- return True
- return False
+ def get(self, key, default=None):
+ """.. deprecated:: 19.9
+ Custom context is now stored in `request.custom_context.yourkey`"""
+ return self.ctx.__dict__.get(key, default)
+
+ def __contains__(self, key):
+ """.. deprecated:: 19.9
+ Custom context is now stored in `request.custom_context.yourkey`"""
+ return key in self.ctx.__dict__
+
+ def __getitem__(self, key):
+ """.. deprecated:: 19.9
+ Custom context is now stored in `request.custom_context.yourkey`"""
+ return self.ctx.__dict__[key]
+
+ def __delitem__(self, key):
+ """.. deprecated:: 19.9
+ Custom context is now stored in `request.custom_context.yourkey`"""
+ del self.ctx.__dict__[key]
+
+ def __setitem__(self, key, value):
+ """.. deprecated:: 19.9
+ Custom context is now stored in `request.custom_context.yourkey`"""
+ setattr(self.ctx, key, value)
def body_init(self):
self.body = []
| diff --git a/tests/test_request_data.py b/tests/test_request_data.py
--- a/tests/test_request_data.py
+++ b/tests/test_request_data.py
@@ -8,22 +8,72 @@
except ImportError:
from json import loads
+def test_custom_context(app):
+ @app.middleware("request")
+ def store(request):
+ request.ctx.user = "sanic"
+ request.ctx.session = None
+
+ @app.route("/")
+ def handler(request):
+ # Accessing non-existant key should fail with AttributeError
+ try:
+ invalid = request.ctx.missing
+ except AttributeError as e:
+ invalid = str(e)
+ return json({
+ "user": request.ctx.user,
+ "session": request.ctx.session,
+ "has_user": hasattr(request.ctx, "user"),
+ "has_session": hasattr(request.ctx, "session"),
+ "has_missing": hasattr(request.ctx, "missing"),
+ "invalid": invalid
+ })
+
+ request, response = app.test_client.get("/")
+ assert response.json == {
+ "user": "sanic",
+ "session": None,
+ "has_user": True,
+ "has_session": True,
+ "has_missing": False,
+ "invalid": "'types.SimpleNamespace' object has no attribute 'missing'",
+ }
+
-def test_storage(app):
+# Remove this once the deprecated API is abolished.
+def test_custom_context_old(app):
@app.middleware("request")
def store(request):
+ try:
+ request["foo"]
+ except KeyError:
+ pass
request["user"] = "sanic"
- request["sidekick"] = "tails"
+ sidekick = request.get("sidekick", "tails") # Item missing -> default
+ request["sidekick"] = sidekick
+ request["bar"] = request["sidekick"]
del request["sidekick"]
@app.route("/")
def handler(request):
return json(
- {"user": request.get("user"), "sidekick": request.get("sidekick")}
+ {
+ "user": request.get("user"),
+ "sidekick": request.get("sidekick"),
+ "has_bar": "bar" in request,
+ "has_sidekick": "sidekick" in request,
+ }
)
request, response = app.test_client.get("/")
+ assert response.json == {
+ "user": "sanic",
+ "sidekick": None,
+ "has_bar": True,
+ "has_sidekick": False,
+ }
response_json = loads(response.text)
assert response_json["user"] == "sanic"
assert response_json.get("sidekick") is None
diff --git a/tests/test_requests.py b/tests/test_requests.py
--- a/tests/test_requests.py
+++ b/tests/test_requests.py
@@ -1499,9 +1499,6 @@ def handler(request):
request, response = app.test_client.get("/")
assert bool(request)
- request.transport = False
- assert not bool(request)
-
def test_request_parsing_form_failed(app, caplog):
@app.route("/", methods=["POST"])
| Design question: why is `Request` a subclass of `dict`?
Hello. I was just wondering: why is the `Request` class a subclass of `dict`? Is it to enable developers to "store" information on an instance, like `request["name"]`?
Thanks in advance!
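(For illustration, a sketch of the dict-style storage the question refers to, mirroring the middleware usage in the tests above:)
```python
from sanic import Sanic
from sanic.response import json

app = Sanic("ctx_example")

@app.middleware("request")
async def store(request):
    # works only because Request subclasses dict
    request["user"] = "sanic"

@app.route("/")
async def handler(request):
    return json({"user": request.get("user")})
```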
| Yes, here is the original PR for some context: https://github.com/channelcat/sanic/pull/163
Plus the original issue to this https://github.com/channelcat/sanic/issues/129
Thanks, @seemethere . The "impact" of using `dict` is almost negligible - some rudimentary benchmarks on my old AMD workstation show a 0.13 s difference over 1 million iterations. Implementing a basic `dict` interface (`__delitem__`, `__getitem__` and `__setitem__`) seems to show a difference of 0.09 s over 1 million iterations, and that's the main reason I brought this up (again, I know, it's negligible :wink:)
This is confusing for my exception serialization code which recognizes it wrongly as an empty dict.
If it is not really a dict, then it should not be sub-classed as one.
A solution based on `__getattr__` would have lesser impact and allow for nicer-looking extension code. | 2019-09-03T11:26:08 |
sanic-org/sanic | 1,690 | sanic-org__sanic-1690 | [
"37"
] | e506c89304948bba593e8603ecace1c495b06fd5 | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -79,7 +79,8 @@ def __init__(
self.is_request_stream = False
self.websocket_enabled = False
self.websocket_tasks = set()
-
+ self.named_request_middleware = {}
+ self.named_response_middleware = {}
# Register alternative method names
self.go_fast = self.run
@@ -172,7 +173,7 @@ def route(
:param stream:
:param version:
:param name: user defined route name for url_for
- :return: decorated function
+ :return: tuple of routes, decorated function
"""
# Fix case where the user did not prefix the URL with a /
@@ -198,7 +199,7 @@ def response(handler):
if stream:
handler.is_stream = stream
- self.router.add(
+ routes = self.router.add(
uri=uri,
methods=methods,
handler=handler,
@@ -207,7 +208,7 @@ def response(handler):
version=version,
name=name,
)
- return handler
+ return routes, handler
return response
@@ -456,7 +457,7 @@ def websocket(
:param subprotocols: optional list of str with supported subprotocols
:param name: A unique name assigned to the URL so that it can
be used with :func:`url_for`
- :return: decorated function
+ :return: tuple of routes, decorated function
"""
self.enable_websocket()
@@ -509,7 +510,7 @@ async def websocket_handler(request, *args, **kwargs):
self.websocket_tasks.remove(fut)
await ws.close()
- self.router.add(
+ routes = self.router.add(
uri=uri,
handler=websocket_handler,
methods=frozenset({"GET"}),
@@ -517,7 +518,7 @@ async def websocket_handler(request, *args, **kwargs):
strict_slashes=strict_slashes,
name=name,
)
- return handler
+ return routes, handler
return response
@@ -538,6 +539,7 @@ def add_websocket_route(
:param host: Host IP or FQDN details
:param uri: URL path that will be mapped to the websocket
handler
+ handler
:param strict_slashes: If the API endpoint needs to terminate
with a "/" or not
:param subprotocols: Subprotocols to be used with websocket
@@ -639,6 +641,22 @@ def register_middleware(self, middleware, attach_to="request"):
self.response_middleware.appendleft(middleware)
return middleware
+ def register_named_middleware(
+ self, middleware, route_names, attach_to="request"
+ ):
+ if attach_to == "request":
+ for _rn in route_names:
+ if _rn not in self.named_request_middleware:
+ self.named_request_middleware[_rn] = deque()
+ if middleware not in self.named_request_middleware[_rn]:
+ self.named_request_middleware[_rn].append(middleware)
+ if attach_to == "response":
+ for _rn in route_names:
+ if _rn not in self.named_response_middleware:
+ self.named_response_middleware[_rn] = deque()
+ if middleware not in self.named_response_middleware[_rn]:
+ self.named_response_middleware[_rn].append(middleware)
+
# Decorator
def middleware(self, middleware_or_request):
"""
@@ -910,20 +928,23 @@ async def handle_request(self, request, write_callback, stream_callback):
# allocation before assignment below.
response = None
cancelled = False
+ name = None
try:
+ # Fetch handler from router
+ handler, args, kwargs, uri, name = self.router.get(request)
+
# -------------------------------------------- #
# Request Middleware
# -------------------------------------------- #
- response = await self._run_request_middleware(request)
+ response = await self._run_request_middleware(
+ request, request_name=name
+ )
# No middleware results
if not response:
# -------------------------------------------- #
# Execute Handler
# -------------------------------------------- #
- # Fetch handler from router
- handler, args, kwargs, uri = self.router.get(request)
-
request.uri_template = uri
if handler is None:
raise ServerError(
@@ -987,7 +1008,7 @@ async def handle_request(self, request, write_callback, stream_callback):
if response is not None:
try:
response = await self._run_response_middleware(
- request, response
+ request, response, request_name=name
)
except CancelledError:
# Response middleware can timeout too, as above.
@@ -1259,10 +1280,14 @@ async def trigger_events(self, events, loop):
if isawaitable(result):
await result
- async def _run_request_middleware(self, request):
+ async def _run_request_middleware(self, request, request_name=None):
# The if improves speed. I don't know why
- if self.request_middleware:
- for middleware in self.request_middleware:
+ named_middleware = self.named_request_middleware.get(
+ request_name, deque()
+ )
+ applicable_middleware = self.request_middleware + named_middleware
+ if applicable_middleware:
+ for middleware in applicable_middleware:
response = middleware(request)
if isawaitable(response):
response = await response
@@ -1270,9 +1295,15 @@ async def _run_request_middleware(self, request):
return response
return None
- async def _run_response_middleware(self, request, response):
- if self.response_middleware:
- for middleware in self.response_middleware:
+ async def _run_response_middleware(
+ self, request, response, request_name=None
+ ):
+ named_middleware = self.named_response_middleware.get(
+ request_name, deque()
+ )
+ applicable_middleware = self.response_middleware + named_middleware
+ if applicable_middleware:
+ for middleware in applicable_middleware:
_response = middleware(request, response)
if isawaitable(_response):
_response = await _response
diff --git a/sanic/blueprints.py b/sanic/blueprints.py
--- a/sanic/blueprints.py
+++ b/sanic/blueprints.py
@@ -104,6 +104,8 @@ def register(self, app, options):
url_prefix = options.get("url_prefix", self.url_prefix)
+ routes = []
+
# Routes
for future in self.routes:
# attach the blueprint name to the handler so that it can be
@@ -114,7 +116,7 @@ def register(self, app, options):
version = future.version or self.version
- app.route(
+ _routes, _ = app.route(
uri=uri[1:] if uri.startswith("//") else uri,
methods=future.methods,
host=future.host or self.host,
@@ -123,6 +125,8 @@ def register(self, app, options):
version=version,
name=future.name,
)(future.handler)
+ if _routes:
+ routes += _routes
for future in self.websocket_routes:
# attach the blueprint name to the handler so that it can be
@@ -130,21 +134,27 @@ def register(self, app, options):
future.handler.__blueprintname__ = self.name
# Prepend the blueprint URI prefix if available
uri = url_prefix + future.uri if url_prefix else future.uri
- app.websocket(
+ _routes, _ = app.websocket(
uri=uri,
host=future.host or self.host,
strict_slashes=future.strict_slashes,
name=future.name,
)(future.handler)
+ if _routes:
+ routes += _routes
+ route_names = [route.name for route in routes]
# Middleware
for future in self.middlewares:
if future.args or future.kwargs:
- app.register_middleware(
- future.middleware, *future.args, **future.kwargs
+ app.register_named_middleware(
+ future.middleware,
+ route_names,
+ *future.args,
+ **future.kwargs
)
else:
- app.register_middleware(future.middleware)
+ app.register_named_middleware(future.middleware, route_names)
# Exceptions
for future in self.exceptions:
diff --git a/sanic/router.py b/sanic/router.py
--- a/sanic/router.py
+++ b/sanic/router.py
@@ -140,21 +140,22 @@ def add(
docs for further details.
:return: Nothing
"""
+ routes = []
if version is not None:
version = re.escape(str(version).strip("/").lstrip("v"))
uri = "/".join(["/v{}".format(version), uri.lstrip("/")])
# add regular version
- self._add(uri, methods, handler, host, name)
+ routes.append(self._add(uri, methods, handler, host, name))
if strict_slashes:
- return
+ return routes
if not isinstance(host, str) and host is not None:
# we have gotten back to the top of the recursion tree where the
# host was originally a list. By now, we've processed the strict
# slashes logic on the leaf nodes (the individual host strings in
# the list of host)
- return
+ return routes
# Add versions with and without trailing /
slashed_methods = self.routes_all.get(uri + "/", frozenset({}))
@@ -176,10 +177,12 @@ def add(
)
# add version with trailing slash
if slash_is_missing:
- self._add(uri + "/", methods, handler, host, name)
+ routes.append(self._add(uri + "/", methods, handler, host, name))
# add version without trailing slash
elif without_slash_is_missing:
- self._add(uri[:-1], methods, handler, host, name)
+ routes.append(self._add(uri[:-1], methods, handler, host, name))
+
+ return routes
def _add(self, uri, methods, handler, host=None, name=None):
"""Add a handler to the route list
@@ -328,6 +331,7 @@ def merge_route(route, methods, handler):
self.routes_dynamic[url_hash(uri)].append(route)
else:
self.routes_static[uri] = route
+ return route
@staticmethod
def check_dynamic_route_exists(pattern, routes_to_check, parameters):
@@ -442,6 +446,7 @@ def _get(self, url, method, host):
method=method,
allowed_methods=self.get_supported_methods(url),
)
+
if route:
if route.methods and method not in route.methods:
raise method_not_supported
@@ -476,7 +481,7 @@ def _get(self, url, method, host):
route_handler = route.handler
if hasattr(route_handler, "handlers"):
route_handler = route_handler.handlers[method]
- return route_handler, [], kwargs, route.uri
+ return route_handler, [], kwargs, route.uri, route.name
def is_stream_handler(self, request):
""" Handler for request is stream or not.
| diff --git a/tests/test_app.py b/tests/test_app.py
--- a/tests/test_app.py
+++ b/tests/test_app.py
@@ -94,7 +94,7 @@ async def handler():
def test_app_handle_request_handler_is_none(app, monkeypatch):
def mockreturn(*args, **kwargs):
- return None, [], {}, ""
+ return None, [], {}, "", ""
# Not sure how to make app.router.get() return None, so use mock here.
monkeypatch.setattr(app.router, "get", mockreturn)
diff --git a/tests/test_blueprint_group.py b/tests/test_blueprint_group.py
--- a/tests/test_blueprint_group.py
+++ b/tests/test_blueprint_group.py
@@ -83,7 +83,7 @@ def enhance_response_middleware(request: Request, response: HTTPResponse):
_, response = app.test_client.patch("/api/bp2/route/bp2", headers=header)
assert response.text == "PATCH_bp2"
- _, response = app.test_client.get("/v2/api/bp1/request_path")
+ _, response = app.test_client.put("/v2/api/bp1/request_path")
assert response.status == 401
@@ -141,8 +141,8 @@ def app_default_route(request):
_, response = app.test_client.get("/api/bp3")
assert response.text == "BP3_OK"
- assert MIDDLEWARE_INVOKE_COUNTER["response"] == 4
- assert MIDDLEWARE_INVOKE_COUNTER["request"] == 4
+ assert MIDDLEWARE_INVOKE_COUNTER["response"] == 3
+ assert MIDDLEWARE_INVOKE_COUNTER["request"] == 2
def test_bp_group_list_operations(app: Sanic):
diff --git a/tests/test_blueprints.py b/tests/test_blueprints.py
--- a/tests/test_blueprints.py
+++ b/tests/test_blueprints.py
@@ -268,7 +268,7 @@ async def handler(request):
request, response = app.test_client.get("/")
assert response.status == 200
- assert response.text == "OK"
+ assert response.text == "FAIL"
def test_bp_exception_handler(app):
| Blueprint middleware applied globally
I was a little too hasty updating the blueprints documentation for middleware/exceptions, and just realized that middlewares and exceptions registered through a blueprint decorator are applied to all routes.
Is this intended behaviour? If so, then the blueprint documentation must be updated once more.
Alternative behaviour: middleware registered on a blueprint is only applied to routes for that blueprint.
Some considerations:
- should middleware applied to the app-object (instance of Sanic) also be applied to blueprint middleware?
- if so, how will ordering be handled?
| Ah, I missed that in the pull request. I've updated the documentation for now. My thoughts on this are:
Applied locally:
- Is more self-contained, prevents conflicting code.
- Applying global middleware and exception handling requires registering them separately from the blueprint.
Applied globally:
- Modularizes middleware and exception handling.
- Blueprints can facilitate whole modules, allowing a single interface for sharing code between projects.
- Can use decorators to achieve local middleware and exception handling:
``` python
def authenticated(func):
def handler(request, *args, **kwargs):
# Middleware goes here
try:
return func(request, *args, **kwargs)
except MyCustomException:
return text("son, i am dissapoint")
return handler
@bp.route('/')
@authenticated
def index(request):
return text("We did it fam")
...
```
In both scenarios you can achieve the same thing, but I think the latter offers more.
As for ordering, I'm thinking they should be applied in the order the blueprint was registered, and in the order they were registered in the blueprint. I think it would allow blueprints to require other blueprints in a way that's easy to discern in the code.
My concern with using a decorator for local middleware/exceptions is that it requires a different way of adding them compared to registering on the application object. My fear is that it becomes too complicated or too involved to handle middleware/exceptions on blueprints.
I would prefer that you register middleware and exceptions in the same manner (`@<obj>.middleware`), regardless of applying it on the app-object (affecting all routes) or the blueprint (affecting only those blueprint routes).
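(A sketch of that preference, i.e. the same decorator shape at both levels, where the blueprint-level one would ideally only affect the blueprint's own routes:)
```python
from sanic import Sanic, Blueprint

app = Sanic("example")
bp = Blueprint("bp", url_prefix="/bp")

@app.middleware("request")
async def app_wide(request):
    ...  # runs for every route

@bp.middleware("request")
async def bp_scoped(request):
    ...  # desired: runs only for routes registered on bp

app.blueprint(bp)
```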
What is the difference between using `@app.middleware` and `@bp.middleware`? Are there any benefits to using `@bp.middleware`?
To add my two cents: While it might be beneficial to apply middleware registered on a blueprint globally, that's definitely not what I'd expect to happen. I'd regard blueprints as pretty self-contained and separate units and find it kinda surprising to see them interact in such a way. What about adding a `@bp.global_middleware`?
It's definitely unexpected behavior. Well, for me, with long experience of Flask, which has the same 'Blueprint' concept, it surprised me a lot! I think that a blueprint is a modular sub-*application*. Unless it exists just to remove repeated URL prefixes from code, an application with blueprints should be a 'united' application, not a 'single' one. I can understand the complexity fears, but I believe it can offer much more capability and convenience with blueprint independence.
> @eth3lbert) What is the different between using @app.middleware and @bp.middleware? Are there any benefits to use @bp.middleware?
@eth3lbert : You can apply various caching policies, control access to specific areas, or inject something into your response only when it was processed by some blueprint, based on your blueprint separation.
> @channelcat) As for ordering, I'm thinking they should be applied in the order the blueprint was registered, and in the order they were registered in the blueprint.
I also think this is counterintuitive behavior. Especially when you introduce this project as Flask-like, I expected the blueprint-related features to be totally separate and not to affect the app outside the blueprint.
I agree with the conclusion that the global application offers more, but I also agree with the others that its name must not be `Blueprint.middleware`. Strongly voting for @Tim-Erwin's idea: you don't need to pick only one middleware behavior under a counterintuitive name. Please consider letting Blueprint have its own scoped middleware while also allowing modularized global middleware such as `Blueprint.app_middleware` or `Blueprint.global_middleware`.
Is this still an issue? Will reopen if necessary.
Reopening per request from @r0fls
Is it still there? any update on this?
still there
Does this still apply because of #715 being fixed?
A real-life example: I have an application with an admin section. In the `admin` blueprint I have to use a decorator to check admin rights on every view instead of a single blueprint middleware handler.
And please do not forget about this:
> Explicit is better than implicit.
Another real-world example.
Two URL patterns:
```
/a/b/<cred1>/<cred2>
/a/b?cred1=abc&cred2=xyz
```
Planned to have two middlewares that gathered the necessary parameters (one from the URL, another from arguments) and store them in the request object so the next layer of code would not need to worry about where the creds 'came from.'
(There's actually more than two, and some are significantly different from the examples.)
Personally I would expect `@app.middleware` to apply globally and `@bp.middleware` to apply locally.
Another real life example (which led me here):
```python
public = Blueprint("public", url_prefix="/public")
public.static("/", public_web_dir)
secure = Blueprint("secure", url_prefix="/secure")
secure.static("/", secure_web_dir)
@secure.middleware("request")
@authenticated
async def request_middleware(request):
pass
app = Sanic()
app.blueprint(public)
app.blueprint(secure)
app.run(host="0.0.0.0", port=8080, debug=True)
```
In this case I expect to be able to access the static files served via `public` and be stopped from accessing the static files served via `secure` without authorization.
Due to the nature of `bp.static` I cannot add my decorator to the requests it manages, which makes using `.middleware` for all requests going to `secure` the logical solution. However, with the current globalness of `.middleware` I will need to come up with a more complicated solution.
---
While I definitely agree with @Tim-Erwin that it would be more logical to have a `.global_middleware` for the current behavior of `.middleware` for blueprints, maybe the introduction of `.local_middleware` could be an option to maintain compatibility?
I have a workaround (although I still think this should be implemented in sanic). Using my last example:
```python
public = Blueprint("public", url_prefix="/public")
public.static("/", public_web_dir)
secure = Blueprint("secure", url_prefix="/secure")
secure.static("/", secure_web_dir)
@secure.middleware("request")
def global_middleware(request):
if request.path.startswith(secure.url_prefix):
@authenticated
def local_middleware(request):
pass
local_middleware(request)
app = Sanic()
app.blueprint(public)
app.blueprint(secure)
app.run(host="0.0.0.0", port=8080, debug=True)
```
I just wanted to agree that my expectation for a blueprint middleware is that it applies only to routes included in the blueprint.
I don't see why it would apply globally. Here is my sample handler setup, which seems to do what I would like:
```
import pendulum  # used by the request middleware below; missing in the original snippet
from docxtpl import DocxTemplate  # assumed source of DocxTemplate; also missing originally
from sanic import response
from sanic import Blueprint
from tempfile import NamedTemporaryFile
blueprint = Blueprint('docx_server', url_prefix='/docx')
@blueprint.middleware('request')
async def type_conversion_dates(request):
request.args['date'] = pendulum.now()
@blueprint.middleware('response')
async def set_model_response(request, response):
response.content_type ='application/vnd.openxmlformats-officedocument.wordprocessingml.document'
return response
@blueprint.route("/<template>", methods=['POST'], strict_slashes=False)
async def get_template_docx(request, template):
"""Returns X3D document
context (json):
employee_name (str): name
employee_title (str): title
employee_education (str[]): education
employee_certification (str[]): certifications
selected_experience (str): experience description
extra_experience (ob[{title, description}]):
title: of description
description: experience description
employee_organizations(str[]): organizations
Args:
template (str): render template name
Returns:
docx
"""
    async with request.app.db_pool.acquire() as con:  # pool assumed to be set up elsewhere
        doc = DocxTemplate(f'/app/templates/{template}')
        context = request.json
        doc.render(context)
        tf = NamedTemporaryFile(suffix='.docx', delete=False)
        doc.save(tf.name)
        # the date was stashed on the request by the middleware above
        dt_string = request.args['date'].to_date_string()
        headers = {
            'Content-Disposition': f'filename="{dt_string}-{context["employee_name"]}-resume.docx"'
        }
        return await response.file(tf.name, headers=headers)
```
This seems self contained and easy to manage. Except, now I have to add the middleware somewhere else, disconnecting it from an endpoint I expect to only take certain requests and only serve certain responses.
Hi! I see there's been no update for the past 3 years. Is there any plan to fix this issue, or resolve it in some other way? The current behavior seems rather counterintuitive.
I believe that this should be fixed (make them local). However, currently routing is performed *after* request middlewares, so it would be a rather big and potentially breaking architectural change. In particular, one would need to consider whether existing middlewares rely on the current behaviour, e.g. to change request url or method prior to routing.
I believe that, in general (and in other languages), middleware should be applied after routing, and that is what many people expect middleware to do. You are right that it is a breaking change, so I am not sure what the optimal solution would be. Maybe, as @FelixZY suggested, add `local_middleware`? But if this is implemented, it would probably make more sense to make the `middleware` function local by default.
I just wanted to pop into this thread to point out that the [current documentation for blueprint group middleware](https://sanic.readthedocs.io/en/latest/sanic/blueprints.html#blueprint-group-middleware) implies (incorrectly) that blueprint group middleware only executes on the routes defined in that blueprint group (and not globally).
> Using this middleware will ensure that you can apply a common middleware to all the blueprints that form the current blueprint group under consideration.
The example code also makes it appear that middleware applied to a blueprint only impacts routes under that blueprint. Specifically this code from the example:
```
@bp1.middleware('request')
async def bp1_only_middleware(request):
print('applied on Blueprint : bp1 Only')
```
I'm wondering where this issue stands. I am using 19.6.0 (but also tried 19.6.3) and I am seeing my blueprint middleware being applied globally. I tried wrapping it up in a blueprint group as well, but the behavior is the same.
My understanding of the latest documentation is that middleware added to a blueprint will be global unless the blueprint is added to a blueprint group. In this case it should apply only to routes in that group.
While this distinction is a bit confusing, I can live with it but it doesn't seem to be working that way.
Any help would be appreciated.
Let me take a look. If it's an easy enough fix without major refactoring, let me see if I can open a quick PR to address this.
@huge-success/sanic-core-devs This is getting a bit interesting. I was able to get the blueprint-based middleware to work the right way without changing much, but here is a curious case:
```python
def test_bp_middleware(app):
blueprint = Blueprint("test_middleware")
@blueprint.middleware("response")
async def process_response(request, response):
return text("OK")
@app.route("/")
async def handler(request):
return text("FAIL")
app.blueprint(blueprint)
request, response = app.test_client.get("/")
assert response.status == 200
assert response.text == "OK"
```
This is from one of the existing test cases. Now, if we make sure that the blueprint middleware gets applied only on the route registered with the middleware, then this expected output is invalid. Since the route was created using `@app` and not `@blueprint`.
What would be the best behavior in this case?
1. If you register a middleware via `@blueprint.middleware` then it will apply only to the routes defined by the blueprint.
2. If you register a middleware via `@blueprint_group.middleware` then it will apply to all blueprint based routes that are part of the group.
3. If you define a middleware via `@app.middleware` then it will be applied on all available routes
With the above in mind, what is the expected precedence in which this can be applied ? | 2019-10-08T16:09:43 |
sanic-org/sanic | 1,708 | sanic-org__sanic-1708 | [
"1704"
] | e506c89304948bba593e8603ecace1c495b06fd5 | diff --git a/sanic/request.py b/sanic/request.py
--- a/sanic/request.py
+++ b/sanic/request.py
@@ -519,8 +519,11 @@ def url_for(self, view_name, **kwargs):
:rtype: str
"""
# Full URL SERVER_NAME can only be handled in app.url_for
- if "//" in self.app.config.SERVER_NAME:
- return self.app.url_for(view_name, _external=True, **kwargs)
+ try:
+ if "//" in self.app.config.SERVER_NAME:
+ return self.app.url_for(view_name, _external=True, **kwargs)
+ except AttributeError:
+ pass
scheme = self.scheme
host = self.server_name
| diff --git a/tests/test_requests.py b/tests/test_requests.py
--- a/tests/test_requests.py
+++ b/tests/test_requests.py
@@ -2103,3 +2103,19 @@ async def bp_root(request):
request, response = await app.asgi_client.get("/bp")
assert request.endpoint == "named.my_blueprint.bp_root"
+
+
+def test_url_for_without_server_name(app):
+ @app.route("/sample")
+ def sample(request):
+ return json({"url": request.url_for("url_for")})
+
+ @app.route("/url-for")
+ def url_for(request):
+ return text("url-for")
+
+ request, response = app.test_client.get("/sample")
+ assert (
+ response.json["url"]
+ == f"http://127.0.0.1:{app.test_client.port}/url-for"
+ )
| Improve documentation in *Accessing values using `get` and `getlist`*
**Is your feature request related to a problem? Please describe.**
Documentation here should be improved:
https://sanic.readthedocs.io/en/latest/sanic/request_data.html#accessing-values-using-get-and-getlist
It isn't clear how to use `get` and `getlist`
**Describe the solution you'd like**
Change
> The request properties which return a dictionary actually return a subclass of dict called RequestParameters.
To
> `request.args` which return a dictionary actually return a subclass of dict called RequestParameters.
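For example, a short sketch of the behaviour being documented (hypothetical route, for a request to `/items?q=a&q=b`):
```python
from sanic import Sanic
from sanic.response import json

app = Sanic("args_example")

@app.route("/items")
async def items(request):
    # GET /items?q=a&q=b
    return json({
        "first": request.args.get("q"),     # "a" - only the first value
        "all": request.args.getlist("q"),   # ["a", "b"] - every value
    })
```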
| 2019-10-23T15:50:22 |
|
sanic-org/sanic | 1,716 | sanic-org__sanic-1716 | [
"1714"
] | 2d72874b0b95f58e0ef95358537f041bdb90e2a0 | diff --git a/sanic/server.py b/sanic/server.py
--- a/sanic/server.py
+++ b/sanic/server.py
@@ -364,6 +364,21 @@ def on_body(self, body):
else:
self.request.body_push(body)
+ async def body_append(self, body):
+ if (
+ self.request is None
+ or self._request_stream_task is None
+ or self._request_stream_task.cancelled()
+ ):
+ return
+
+ if self.request.stream.is_full():
+ self.transport.pause_reading()
+ await self.request.stream.put(body)
+ self.transport.resume_reading()
+ else:
+ await self.request.stream.put(body)
+
async def stream_append(self):
while self._body_chunks:
body = self._body_chunks.popleft()
| When calling abort() in a stream=True handler, error occurs
**Describe the bug**
When calling abort() inside a request handler with stream=True, this exception occurs in the logs:
```
Task exception was never retrieved
future: <Task finished name='Task-3' coro=<HttpProtocol.body_append() done, defined at /usr/local/lib/python3.8/site-packages/sanic/server.py:356> exception=AttributeError("'NoneType' object has no attribute 'stream'")>
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/sanic/server.py", line 357, in body_append
if self.request.stream.is_full():
AttributeError: 'NoneType' object has no attribute 'stream'
```
**Code snippet**
```
import cgi
from sanic import Sanic
from sanic.response import stream
from sanic.exceptions import abort
app = Sanic('request_stream')
@app.put('/stream', stream=True)
async def handler(request):
mimetype, options = cgi.parse_header(request.headers['content-type'])
charset = options.get('charset')
if charset not in {'utf-8'} and mimetype != 'text/plain':
abort(400, f'Content-Type must be "text/plain; charset=utf-8')
async def streaming(response):
while (body := await request.stream.read()) is not None:
body = body.decode(charset).upper()
await response.write(body)
return stream(streaming, content_type="text/plain; charset=utf-8")
```
**Expected behavior**
On the client side, behavior is as expected, but it seems that `request` is None in this case. It should not be.
**Environment (please complete the following information):**
Python3.8 official docker image
Sanic 19.9.0
**Additional context**
/
| @djoek We still haven't officially started supporting `sanic` on python 3.8. There is a PR opened to introduce this via #1709
Is this behavior the same with python3.7 + sanic as well?
@harshanarayana yes, same behavior:
```
Task exception was never retrieved
future: <Task finished coro=<HttpProtocol.body_append() done, defined at /usr/local/lib/python3.7/site-packages/sanic/server.py:356> exception=AttributeError("'NoneType' object has no attribute 'stream'")>
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/sanic/server.py", line 357, in body_append
if self.request.stream.is_full():
AttributeError: 'NoneType' object has no attribute 'stream'
```
using 3.7 and replacing walrus with:
```
async def streaming(response):
while True:
body = await request.stream.read()
if body is None:
break
body = body.decode(charset).upper()
await response.write(body)
```
probably can take a look over weekend | 2019-11-17T04:10:27 |
|
sanic-org/sanic | 1,760 | sanic-org__sanic-1760 | [
"1757"
] | 6239fa4f56d33175a1b60e610e97ee344256257f | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -462,7 +462,13 @@ def add_route(
# Decorator
def websocket(
- self, uri, host=None, strict_slashes=None, subprotocols=None, name=None
+ self,
+ uri,
+ host=None,
+ strict_slashes=None,
+ subprotocols=None,
+ version=None,
+ name=None,
):
"""
Decorate a function to be registered as a websocket route
@@ -536,6 +542,7 @@ async def websocket_handler(request, *args, **kwargs):
methods=frozenset({"GET"}),
host=host,
strict_slashes=strict_slashes,
+ version=version,
name=name,
)
)
@@ -550,6 +557,7 @@ def add_websocket_route(
host=None,
strict_slashes=None,
subprotocols=None,
+ version=None,
name=None,
):
"""
@@ -577,6 +585,7 @@ def add_websocket_route(
host=host,
strict_slashes=strict_slashes,
subprotocols=subprotocols,
+ version=version,
name=name,
)(handler)
| diff --git a/tests/test_routes.py b/tests/test_routes.py
--- a/tests/test_routes.py
+++ b/tests/test_routes.py
@@ -531,6 +531,19 @@ async def handler(request, ws):
assert ev.is_set()
+def test_add_webscoket_route_with_version(app):
+ ev = asyncio.Event()
+
+ async def handler(request, ws):
+ assert ws.subprotocol is None
+ ev.set()
+
+ app.add_websocket_route(handler, "/ws", version=1)
+ request, response = app.test_client.websocket("/v1/ws")
+ assert response.opened is True
+ assert ev.is_set()
+
+
def test_route_duplicate(app):
with pytest.raises(RouteExists):
| Websocket doesn't work when using version parameter
**Describe the bug**
When defining a websocket handler using `version=1`, an exception will be thrown.
**Code snippet**
Relevant source code, make sure to remove what is not necessary.
```
@app.websocket("/changes", version=1)
```
**Expected behavior**
It should work the same way it does without `version=1`.
**Environment (please complete the following information):**
- OS: Fedora 31
- Sanic: 19.12.2
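For reference, the intended usage once `version` is honoured (a sketch mirroring the new test added above; the `/v1` prefix is the expected outcome, not the behaviour being reported):
```python
from sanic import Sanic

app = Sanic("ws_example")

@app.websocket("/changes", version=1)
async def changes(request, ws):
    ...  # should be served at /v1/changes

# equivalently, without the decorator:
# app.add_websocket_route(changes, "/changes", version=1)
```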
| I found out that the websocket path is still `/changes`, without the `/v1` prefix.
It's a routing issue.
`version` is missing here: https://github.com/huge-success/sanic/blob/master/sanic/app.py#L519
@danieldaeschle Would you be able to submit a PR and perhaps a unit test?
I haven't found the root cause yet. I'm not familiar with the sanic code. Do you have a hint for me? Otherwise I'll have to dig deeper and try to submit one.
I did a quick look at the code you linked to on my phone. I'm fairly certain that you pinpointed the right place. All you should need to do is add the version kwarg to the method and then to the call that you linked. If you want to just do that much I can then take a look and give you some more direction on where and how to add the test.
I already tried adding that kwarg, but that wasn't enough to make it work.
OK. Thanks for taking a shot. I can take this one in conjunction with upgrading websockets dependency.
Thanks!
Is it possible to get this fix soon?
| 2020-01-09T20:56:43 |
sanic-org/sanic | 1,762 | sanic-org__sanic-1762 | [
"1754"
] | 784d5cce5234933a5af36b668c8ffce1af4af91a | diff --git a/sanic/server.py b/sanic/server.py
--- a/sanic/server.py
+++ b/sanic/server.py
@@ -735,6 +735,26 @@ def close(self):
task = asyncio.ensure_future(coro, loop=self.loop)
return task
+ def start_serving(self):
+ if self.server:
+ try:
+ return self.server.start_serving()
+ except AttributeError:
+ raise NotImplementedError(
+ "server.start_serving not available in this version "
+ "of asyncio or uvloop."
+ )
+
+ def serve_forever(self):
+ if self.server:
+ try:
+ return self.server.serve_forever()
+ except AttributeError:
+ raise NotImplementedError(
+ "server.serve_forever not available in this version "
+ "of asyncio or uvloop."
+ )
+
def __await__(self):
"""Starts the asyncio server, returns AsyncServerCoro"""
task = asyncio.ensure_future(self.serve_coro)
| diff --git a/tests/test_app.py b/tests/test_app.py
--- a/tests/test_app.py
+++ b/tests/test_app.py
@@ -44,7 +44,7 @@ def test_create_asyncio_server(app):
@pytest.mark.skipif(
sys.version_info < (3, 7), reason="requires python3.7 or higher"
)
-def test_asyncio_server_start_serving(app):
+def test_asyncio_server_no_start_serving(app):
if not uvloop_installed():
loop = asyncio.get_event_loop()
asyncio_srv_coro = app.create_server(
@@ -54,6 +54,22 @@ def test_asyncio_server_start_serving(app):
srv = loop.run_until_complete(asyncio_srv_coro)
assert srv.is_serving() is False
[email protected](
+ sys.version_info < (3, 7), reason="requires python3.7 or higher"
+)
+def test_asyncio_server_start_serving(app):
+ if not uvloop_installed():
+ loop = asyncio.get_event_loop()
+ asyncio_srv_coro = app.create_server(
+ return_asyncio_server=True,
+ asyncio_server_kwargs=dict(start_serving=False),
+ )
+ srv = loop.run_until_complete(asyncio_srv_coro)
+ assert srv.is_serving() is False
+ loop.run_until_complete(srv.start_serving())
+ assert srv.is_serving() is True
+ srv.close()
+ # Looks like we can't easily test `serve_forever()`
def test_app_loop_not_running(app):
with pytest.raises(SanicException) as excinfo:
| Incorrect documentation for AsyncioServer or missing __getattr__ on AsyncioServer
**Describe the bug**
The described sample code for the AsyncIO server is out of date (https://sanic.readthedocs.io/en/latest/sanic/asyncio_python37.html) or a resultant merge stripped a `__getattr__` handler from the AsyncIOServer class.
I discovered this while bootstrapping a mock Azure API inside pytest for my API server (all Sanic because, why not?)
**Code snippet**
```
#!/usr/bin/env python3
import socket
import asyncio
from sanic import Sanic
from sanic.response import text
async def main():
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
sock.bind(("", 0))
print(sock.getsockname())
app = Sanic("azure_test_api")
@app.get("/")
async def test_index(request):
return text('test')
server = await app.create_server(
sock=sock,
return_asyncio_server=True,
asyncio_server_kwargs=dict(start_serving=False),
)
# The docs say this
await server.start_serving()
await server.serve_forever()
# but only this works ;)
# await server.server.start_serving()
# await server.server.serve_forever()
if __name__ == "__main__":
asyncio.set_event_loop_policy(asyncio.DefaultEventLoopPolicy())
asyncio.run(main())
```
**Expected behavior**
I expect the docs to be updated or the following added to the AsyncIOServer class in server.py (https://github.com/huge-success/sanic/blob/2f776eba85b80ed5ee7e75badb92028ee1df0f4f/sanic/server.py#L671) -
```
def __getattr__(self, prop_name):
if not self.server:
raise AttributeError('{} not found on {}'.format(prop_name, self))
return getattr(self.server, prop_name)
```
**Environment (please complete the following information):**
- OS: osx
- Version 10.14.6 (18G95)
**Additional context**
N/A
| Hi @autumnjolitz
This might be a problem introduced by my changes in this PR: https://github.com/huge-success/sanic/pull/1676
I didn't test those changes on python3.7, nor with the "server.start_serving()" or "server.serve_forever()" methods.
We can change the documentation to reflect the code changes you mentioned, or we can add those properties to the AsyncioServer proxy object in a future Sanic version.
I think the better change is the second one described by @ashleysommer: changing the code to reflect the existing documentation, undoing the regression.
PRS welcome. | 2020-01-10T03:11:23 |
sanic-org/sanic | 1,764 | sanic-org__sanic-1764 | [
"1742"
] | caa1b4d69b8b154704c264d154b0579faa6b2bb3 | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -194,6 +194,12 @@ def route(
strict_slashes = self.strict_slashes
def response(handler):
+ if isinstance(handler, tuple):
+ # if a handler fn is already wrapped in a route, the handler
+ # variable will be a tuple of (existing routes, handler fn)
+ routes, handler = handler
+ else:
+ routes = []
args = list(signature(handler).parameters.keys())
if not args:
@@ -205,14 +211,16 @@ def response(handler):
if stream:
handler.is_stream = stream
- routes = self.router.add(
- uri=uri,
- methods=methods,
- handler=handler,
- host=host,
- strict_slashes=strict_slashes,
- version=version,
- name=name,
+ routes.extend(
+ self.router.add(
+ uri=uri,
+ methods=methods,
+ handler=handler,
+ host=host,
+ strict_slashes=strict_slashes,
+ version=version,
+ name=name,
+ )
)
return routes, handler
@@ -476,6 +484,13 @@ def websocket(
strict_slashes = self.strict_slashes
def response(handler):
+ if isinstance(handler, tuple):
+ # if a handler fn is already wrapped in a route, the handler
+ # variable will be a tuple of (existing routes, handler fn)
+ routes, handler = handler
+ else:
+ routes = []
+
async def websocket_handler(request, *args, **kwargs):
request.app = self
if not getattr(handler, "__blueprintname__", False):
@@ -516,13 +531,15 @@ async def websocket_handler(request, *args, **kwargs):
self.websocket_tasks.remove(fut)
await ws.close()
- routes = self.router.add(
- uri=uri,
- handler=websocket_handler,
- methods=frozenset({"GET"}),
- host=host,
- strict_slashes=strict_slashes,
- name=name,
+ routes.extend(
+ self.router.add(
+ uri=uri,
+ handler=websocket_handler,
+ methods=frozenset({"GET"}),
+ host=host,
+ strict_slashes=strict_slashes,
+ name=name,
+ )
)
return routes, handler
| diff --git a/tests/test_routes.py b/tests/test_routes.py
--- a/tests/test_routes.py
+++ b/tests/test_routes.py
@@ -551,6 +551,35 @@ async def handler4(request, dynamic):
pass
+def test_double_stack_route(app):
+ @app.route("/test/1")
+ @app.route("/test/2")
+ async def handler1(request):
+ return text("OK")
+
+ request, response = app.test_client.get("/test/1")
+ assert response.status == 200
+ request, response = app.test_client.get("/test/2")
+ assert response.status == 200
+
+
[email protected]
+async def test_websocket_route_asgi(app):
+ ev = asyncio.Event()
+
+ @app.websocket("/test/1")
+ @app.websocket("/test/2")
+ async def handler(request, ws):
+ ev.set()
+
+ request, response = await app.asgi_client.websocket("/test/1")
+ first_set = ev.is_set()
+ ev.clear()
+ request, response = await app.asgi_client.websocket("/test/1")
+ second_set = ev.is_set()
+ assert(first_set and second_set)
+
+
def test_method_not_allowed(app):
@app.route("/test", methods=["GET"])
async def handler(request):
| Changes to Blueprints application breaks multiply applied routes!!!
Consider the following pattern:
```
@app.get("/url1")
@app.get("/url2")
async def handle_multiple(request):
return text("some response")
```
The recent change in #1690 breaks that pattern.
It's a common pattern to use multiple `@blueprint.get(...)` and `@app.get(...)` to handle routes that have common behavior with slight variations, like a login process.
The only alternative I can think of is splitting each of these routes out into distinct rump functions that literally forward to a common function, which inflates one-line declarations into three lines plus the shared common route handler.
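(For illustration, a sketch of the verbose alternative described above, assuming `text` is imported from `sanic.response`:)
```python
async def _common(request):
    return text("some response")

@app.get("/url1")
async def handle_url1(request):
    # stub that only forwards to the shared handler
    return await _common(request)

@app.get("/url2")
async def handle_url2(request):
    return await _common(request)
```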
| @benjolitz
The original change in #1690 was made to address a long-standing open issue that genuinely needed fixing. I missed this case when implementing the change, which seems to have broken multiple routes being bound to the same handler method.
Let me see what can be done to address this while still being able to handle the original request from #37
@harshanarayana okay. wrong username for me btw. ;)
@autumnjolitz oh. Sorry. That explains why the auto-complete wasn't working last night. Sorry about the mistake.
Hi @autumnjolitz
Until we get a proper fix for this, can you try this workaround?
```
async def handle_multiple(request):
return text("some response")
names1, handler1 = app.get("/url1")(handle_multiple)
names2, handler2 = app.get("/url2")(handle_multiple)
``` | 2020-01-10T04:00:06 |
sanic-org/sanic | 1,789 | sanic-org__sanic-1789 | [
"1788"
] | 91f6abaa81248189fbcbdf685e8bdcbb7846609f | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -830,6 +830,14 @@ def url_for(self, view_name: str, **kwargs):
"Endpoint with name `{}` was not found".format(view_name)
)
+ # If the route has host defined, split that off
+ # TODO: Retain netloc and path separately in Route objects
+ host = uri.find("/")
+ if host > 0:
+ host, uri = uri[:host], uri[host:]
+ else:
+ host = None
+
if view_name == "static" or view_name.endswith(".static"):
filename = kwargs.pop("filename", None)
# it's static folder
@@ -862,7 +870,7 @@ def url_for(self, view_name: str, **kwargs):
netloc = kwargs.pop("_server", None)
if netloc is None and external:
- netloc = self.config.get("SERVER_NAME", "")
+ netloc = host or self.config.get("SERVER_NAME", "")
if external:
if not scheme:
| diff --git a/tests/test_url_for.py b/tests/test_url_for.py
new file mode 100644
--- /dev/null
+++ b/tests/test_url_for.py
@@ -0,0 +1,12 @@
+def test_routes_with_host(app):
+ @app.route("/")
+ @app.route("/", name="hostindex", host="example.com")
+ @app.route("/path", name="hostpath", host="path.example.com")
+ def index(request):
+ pass
+
+ assert app.url_for("index") == "/"
+ assert app.url_for("hostindex") == "/"
+ assert app.url_for("hostpath") == "/path"
+ assert app.url_for("hostindex", _external=True) == "http://example.com/"
+ assert app.url_for("hostpath", _external=True) == "http://path.example.com/path"
| url_for() doesn't return a working URI for a blueprint route with host
**Describe the bug**
When I use blueprints with the `host` argument and try to get a URL with `url_for`, I get an incorrect URL: the blueprint's host is used as a path rather than as a hostname.
**Code snippet**
```python
from sanic import Sanic, Blueprint
from sanic.response import text
bp = Blueprint('bp_app', host='bp.example.com')
@bp.route('/', name='index')
async def bp_index(request):
url = request.app.url_for('bp_app.index')
return text(url)
@bp.route('/internal', name='internal')
async def bp_index(request):
url = request.app.url_for('bp_app.internal')
return text(url)
@bp.route('/external', name='external')
async def bp_index(request):
url = request.app.url_for('bp_app.external', _external=True)
return text(url)
app = Sanic('app_name')
app.blueprint(bp)
if __name__ == '__main__':
app.run(port=8000)
```
Output
```console
# 1
$ curl -H "Host:bp.example.com" http://127.0.0.1:8000/
bp.example.com
# 2
$ curl -H "Host:bp.example.com" http://127.0.0.1:8000/internal
bp.example.com/internal
# 3
$ curl -H "Host:bp.example.com" http://127.0.0.1:8000/external
http:///bp.example.com/external
```
Example `1` returns the hostname as if it were a path
Example `2` returns the hostname and path, but still treated like a path
Example `3` returns a full URL, but with no hostname (three slashes)
**Expected behavior**
I'm expecting correct URLs.
For internal URLs - only the path, as stated in the route.
For external URLs - the fully qualified domain name with the full path.
`1` example - `/`
`2` example - `/internal`
`3` example - `http://bp.example.com/external`
**Environment (please complete the following information):**
- OS: macOS
- Version: 19.12.2
**Additional context**
I can't pin down the exact place, but I found some places where it could be:
[app.py#L829-L832](https://github.com/huge-success/sanic/blob/v19.12.2/sanic/app.py#L829-L832),
[app.py#L848](https://github.com/huge-success/sanic/blob/v19.12.2/sanic/app.py#L848) (perhaps, blueprint's host should be used, and config's server name as default)
| 2020-02-20T16:20:39 |
|
sanic-org/sanic | 1,842 | sanic-org__sanic-1842 | [
"1774"
] | 7c04c9a22775ffbff04dc17ad4d605591a6ae20e | diff --git a/sanic/static.py b/sanic/static.py
--- a/sanic/static.py
+++ b/sanic/static.py
@@ -1,3 +1,4 @@
+from functools import partial, wraps
from mimetypes import guess_type
from os import path
from re import sub
@@ -15,6 +16,89 @@
from sanic.response import HTTPResponse, file, file_stream
+async def _static_request_handler(
+ file_or_directory,
+ use_modified_since,
+ use_content_range,
+ stream_large_files,
+ request,
+ content_type=None,
+ file_uri=None,
+):
+ # Using this to determine if the URL is trying to break out of the path
+ # served. os.path.realpath seems to be very slow
+ if file_uri and "../" in file_uri:
+ raise InvalidUsage("Invalid URL")
+ # Merge served directory and requested file if provided
+ # Strip all / that in the beginning of the URL to help prevent python
+ # from herping a derp and treating the uri as an absolute path
+ root_path = file_path = file_or_directory
+ if file_uri:
+ file_path = path.join(file_or_directory, sub("^[/]*", "", file_uri))
+
+ # URL decode the path sent by the browser otherwise we won't be able to
+ # match filenames which got encoded (filenames with spaces etc)
+ file_path = path.abspath(unquote(file_path))
+ if not file_path.startswith(path.abspath(unquote(root_path))):
+ raise FileNotFound(
+ "File not found", path=file_or_directory, relative_url=file_uri
+ )
+ try:
+ headers = {}
+ # Check if the client has been sent this file before
+ # and it has not been modified since
+ stats = None
+ if use_modified_since:
+ stats = await stat_async(file_path)
+ modified_since = strftime(
+ "%a, %d %b %Y %H:%M:%S GMT", gmtime(stats.st_mtime)
+ )
+ if request.headers.get("If-Modified-Since") == modified_since:
+ return HTTPResponse(status=304)
+ headers["Last-Modified"] = modified_since
+ _range = None
+ if use_content_range:
+ _range = None
+ if not stats:
+ stats = await stat_async(file_path)
+ headers["Accept-Ranges"] = "bytes"
+ headers["Content-Length"] = str(stats.st_size)
+ if request.method != "HEAD":
+ try:
+ _range = ContentRangeHandler(request, stats)
+ except HeaderNotFound:
+ pass
+ else:
+ del headers["Content-Length"]
+ for key, value in _range.headers.items():
+ headers[key] = value
+ headers["Content-Type"] = (
+ content_type or guess_type(file_path)[0] or "text/plain"
+ )
+ if request.method == "HEAD":
+ return HTTPResponse(headers=headers)
+ else:
+ if stream_large_files:
+ if type(stream_large_files) == int:
+ threshold = stream_large_files
+ else:
+ threshold = 1024 * 1024
+
+ if not stats:
+ stats = await stat_async(file_path)
+ if stats.st_size >= threshold:
+ return await file_stream(
+ file_path, headers=headers, _range=_range
+ )
+ return await file(file_path, headers=headers, _range=_range)
+ except ContentRangeError:
+ raise
+ except Exception:
+ raise FileNotFound(
+ "File not found", path=file_or_directory, relative_url=file_uri
+ )
+
+
def register(
app,
uri,
@@ -56,86 +140,21 @@ def register(
if not path.isfile(file_or_directory):
uri += "<file_uri:" + pattern + ">"
- async def _handler(request, file_uri=None):
- # Using this to determine if the URL is trying to break out of the path
- # served. os.path.realpath seems to be very slow
- if file_uri and "../" in file_uri:
- raise InvalidUsage("Invalid URL")
- # Merge served directory and requested file if provided
- # Strip all / that in the beginning of the URL to help prevent python
- # from herping a derp and treating the uri as an absolute path
- root_path = file_path = file_or_directory
- if file_uri:
- file_path = path.join(
- file_or_directory, sub("^[/]*", "", file_uri)
- )
-
- # URL decode the path sent by the browser otherwise we won't be able to
- # match filenames which got encoded (filenames with spaces etc)
- file_path = path.abspath(unquote(file_path))
- if not file_path.startswith(path.abspath(unquote(root_path))):
- raise FileNotFound(
- "File not found", path=file_or_directory, relative_url=file_uri
- )
- try:
- headers = {}
- # Check if the client has been sent this file before
- # and it has not been modified since
- stats = None
- if use_modified_since:
- stats = await stat_async(file_path)
- modified_since = strftime(
- "%a, %d %b %Y %H:%M:%S GMT", gmtime(stats.st_mtime)
- )
- if request.headers.get("If-Modified-Since") == modified_since:
- return HTTPResponse(status=304)
- headers["Last-Modified"] = modified_since
- _range = None
- if use_content_range:
- _range = None
- if not stats:
- stats = await stat_async(file_path)
- headers["Accept-Ranges"] = "bytes"
- headers["Content-Length"] = str(stats.st_size)
- if request.method != "HEAD":
- try:
- _range = ContentRangeHandler(request, stats)
- except HeaderNotFound:
- pass
- else:
- del headers["Content-Length"]
- for key, value in _range.headers.items():
- headers[key] = value
- headers["Content-Type"] = (
- content_type or guess_type(file_path)[0] or "text/plain"
- )
- if request.method == "HEAD":
- return HTTPResponse(headers=headers)
- else:
- if stream_large_files:
- if type(stream_large_files) == int:
- threshold = stream_large_files
- else:
- threshold = 1024 * 1024
-
- if not stats:
- stats = await stat_async(file_path)
- if stats.st_size >= threshold:
- return await file_stream(
- file_path, headers=headers, _range=_range
- )
- return await file(file_path, headers=headers, _range=_range)
- except ContentRangeError:
- raise
- except Exception:
- raise FileNotFound(
- "File not found", path=file_or_directory, relative_url=file_uri
- )
-
# special prefix for static files
if not name.startswith("_static_"):
name = f"_static_{name}"
+ _handler = wraps(_static_request_handler)(
+ partial(
+ _static_request_handler,
+ file_or_directory,
+ use_modified_since,
+ use_content_range,
+ stream_large_files,
+ content_type=content_type,
+ )
+ )
+
app.route(
uri,
methods=["GET", "HEAD"],
| diff --git a/tests/test_multiprocessing.py b/tests/test_multiprocessing.py
--- a/tests/test_multiprocessing.py
+++ b/tests/test_multiprocessing.py
@@ -87,3 +87,14 @@ def test_pickle_app_with_bp(app, protocol):
request, response = up_p_app.test_client.get("/")
assert up_p_app.is_request_stream is False
assert response.text == "Hello"
+
[email protected]("protocol", [3, 4])
+def test_pickle_app_with_static(app, protocol):
+ app.route("/")(handler)
+ app.static('/static', "/tmp/static")
+ p_app = pickle.dumps(app, protocol=protocol)
+ del app
+ up_p_app = pickle.loads(p_app)
+ assert up_p_app
+ request, response = up_p_app.test_client.get("/static/missing.txt")
+ assert response.status == 404
| `AttributeError: Can't pickle local object 'register.<locals>._handler'` sanic 19.12.2; python3.8.2; sanic-cors==0.10.0.b1
Hi.
On OSX 10.14.6, python 3.8.1
conda list sanic
# packages in environment at /Users/efarrell/miniconda3/envs/LG-py38:
#
# Name Version Build Channel
pytest-sanic 1.1.2 pypi_0 pypi
sanic 19.12.2 py38_0 conda-forge
sanic-cors 0.10.0b1 pypi_0 pypi
sanic-openapi3e 0.6.2 pypi_0 pypi
sanic-plugins-framework 0.9.0b1 pypi_0 pypi
I cannot start an API due to
Traceback (most recent call last):
File "API/src/main.py", line 613, in <module>
main(sys.argv)
File "API/src/main.py", line 195, in main
app.go_fast(**sanic_run_env)
File "/Users/efarrell/miniconda3/envs/py38/lib/python3.8/site-packages/sanic/app.py", line 1169, in run
serve_multiple(server_settings, workers)
File "/Users/efarrell/miniconda3/envs/py38/lib/python3.8/site-packages/sanic/server.py", line 997, in serve_multiple
process.start()
File "/Users/efarrell/miniconda3/envs/py38/lib/python3.8/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/Users/efarrell/miniconda3/envs/py38/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/Users/efarrell/miniconda3/envs/py38/lib/python3.8/multiprocessing/context.py", line 283, in _Popen
return Popen(process_obj)
File "/Users/efarrell/miniconda3/envs/py38/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/Users/efarrell/miniconda3/envs/py38/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/Users/efarrell/miniconda3/envs/py38/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/Users/efarrell/miniconda3/envs/py38/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'register.<locals>._handler'
| For triaging: this is now a problem because Python 3.8 changes the default mode of `multiprocessing` on MacOS to `spawn` instead of `fork`. This requires all parameters passed to Sanic worker processes to be picklable, and now there is an object that isn't.
This could be worked around by forcing `fork` mode instead of the new default, but there really shouldn't be objects that cannot be pickled there because Windows cannot fork and probably could not spawn any workers then either.
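For reference, a minimal sketch of that `fork` workaround (assuming a simple app defined in the same module; `fork` is unavailable on Windows, and this is not a proposed fix):

```python
import multiprocessing

from sanic import Sanic

app = Sanic(__name__)

if __name__ == "__main__":
    # Restore the pre-3.8 default on macOS so worker state is inherited
    # rather than pickled into spawned processes.
    multiprocessing.set_start_method("fork")
    app.run(host="0.0.0.0", port=8000, workers=2)
```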
Hi @Tronic - many thanks for this. We write code on a mixture of OSX and Linux, and deploy to Linux, so in our particular instance, not being able to fork on Windows is not going to cause a problem.
We _do_ have complex objects being passed to the workers, but I had _thought_ that those with connections to databases (which we know cannot be pickled) were created only in `after_server_start` handlers. Will check. Many thanks.
@Tronic a comment related to this:
around the release of Sanic 18.12 we wanted to get multiprocessing working on Windows, so I put in some effort and made _all_ of Sanic pickle-able.
I thought I added tests to ensure a sanic app could be fully pickled and unpickled without error.
Looks like some code has since been introduced that has a local sub-function, which isn't pickleable, or perhaps it's only a problem on Python 3.8, which I think our test suite isn't running properly yet.
@ashleysommer Would it be viable to move most app initialisation to within workers, instead of pickling it from the main process? Only server socket opening really needs to be in the master process, while app definition and other user code could be run in workers. Getting this done would need quite a bit of refactoring, but barring that, it could be the clean solution to this. Doing so would avoid troublesome pickling and the `freeze_support` hack of `multiprocessing` that tries to avoid the re-execution of main function and other such code when child processes are run (but it still re-runs any module level code reached prior to that freeze function call).
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. If this is incorrect, please respond with an update. Thank you for your contributions.
This still needs to be fixed.
@Tronic I have some (hopefully good) news.
Our project was busy for a few months, but last week we got around to putting effort back into our py38 upgrade. Somewhere along the line we had also re-checked all code to ensure that calls to set up connections to external databases (we have three external connection pools to three different kinds of database) were made in `@bp.listener("after_server_start")` methods in our blueprints. There had been at least one "before_server_start" before.
We were happily surprised to find that due to ensuring we created our connections "after_server_start" we can start the sanic `19.12.2` API just fine on OSX `10.14.6` with py3.8.2 (and sanic-cors `0.10.0.post3` - but I do not believe we did anything special with that).
At _this_ stage I can think of two possible next steps:
1: a note in the docs (and a review of existing examples) to state that connections to databases should be created "after_server_start"
2: a note in the docs that states that "after_server_start" happens after the main thread creates the workers but before allowing them to take traffic. It seems somewhat obvious in hind-sight, but I struggled for a little to grok this.
@Tronic - what's the best way to proceed?
I don't think that before vs. after server start should make a difference for pickling. Both are done in worker processes, and only `await loop.create_server(...)` occurs between them. Also, what happens inside those handlers should not affect anything.
According to your traceback, the `_handler` function defined within `sanic.static.register` is what cannot be pickled, which makes sense because pickling AFAIK doesn't work for anything defined inside functions. Why this is *now* working is unknown to me, and more time would be needed to fully investigate. Perhaps different Python or Sanic versions, or perhaps somehow it no longer uses that `_handler` wrapper.
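The limitation itself is easy to reproduce with plain Python, independent of Sanic (a hypothetical snippet mirroring the error in the traceback above):

```python
import pickle


def register():
    def _handler():
        pass

    return _handler


# AttributeError: Can't pickle local object 'register.<locals>._handler'
pickle.dumps(register())
```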
Ah - I clearly still don't understand the inner working yet then.
I can't relate the string `AttributeError: Can't pickle local object 'register.<locals>._handler'` to our code - at least not directly nor in an obvious manner. Do you happen to have any pointers as to how to chase this particular thing down?
Are you no longer using `app.static` (for serving static files)? That would explain it because this appears to be the only place where that is used.
I believe this could be fixed quite easily by rewriting `sanic/static.py` to use `functools.partial` to pass `use_modified_since`, `use_content_range`, `stream_large_files` and `content_type` to `_handler` instead of it being a local function.
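A rough sketch of that idea, with deliberately simplified signatures that are only illustrative (the real registration code takes more options, as the patch above shows):

```python
import pickle
from functools import partial


async def _static_request_handler(file_or_directory, use_modified_since, request, **kwargs):
    ...  # locate the file under file_or_directory and build the response


def register(app, uri, file_or_directory, use_modified_since=True):
    handler = partial(_static_request_handler, file_or_directory, use_modified_since)
    pickle.dumps(handler)  # works: only module-level objects are referenced
    app.route(uri, methods=["GET", "HEAD"])(handler)
```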
Pinging anyone who cares: pull requests are welcome! | 2020-05-07T01:07:03 |
sanic-org/sanic | 1,848 | sanic-org__sanic-1848 | [
"1847"
] | e7001b00747b659f7042b0534802b936ee8a53e0 | diff --git a/examples/blueprint_middlware_execution_order.py b/examples/blueprint_middlware_execution_order.py
new file mode 100644
--- /dev/null
+++ b/examples/blueprint_middlware_execution_order.py
@@ -0,0 +1,43 @@
+from sanic import Sanic, Blueprint
+from sanic.response import text
+'''
+Demonstrates that blueprint request middleware are executed in the order they
+are added. And blueprint response middleware are executed in _reverse_ order.
+On a valid request, it should print "1 2 3 6 5 4" to terminal
+'''
+
+app = Sanic(__name__)
+
+bp = Blueprint("bp_"+__name__)
+
[email protected]('request')
+def request_middleware_1(request):
+ print('1')
+
[email protected]('request')
+def request_middleware_2(request):
+ print('2')
+
[email protected]('request')
+def request_middleware_3(request):
+ print('3')
+
[email protected]('response')
+def resp_middleware_4(request, response):
+ print('4')
+
[email protected]('response')
+def resp_middleware_5(request, response):
+ print('5')
+
[email protected]('response')
+def resp_middleware_6(request, response):
+ print('6')
+
[email protected]('/')
+def pop_handler(request):
+ return text('hello world')
+
+app.blueprint(bp, url_prefix='/bp')
+
+app.run(host="0.0.0.0", port=8000, debug=True, auto_reload=False)
diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -653,7 +653,7 @@ def register_named_middleware(
if _rn not in self.named_response_middleware:
self.named_response_middleware[_rn] = deque()
if middleware not in self.named_response_middleware[_rn]:
- self.named_response_middleware[_rn].append(middleware)
+ self.named_response_middleware[_rn].appendleft(middleware)
# Decorator
def middleware(self, middleware_or_request):
| diff --git a/tests/test_blueprints.py b/tests/test_blueprints.py
--- a/tests/test_blueprints.py
+++ b/tests/test_blueprints.py
@@ -253,7 +253,7 @@ def handler2(request):
def test_bp_middleware(app):
- blueprint = Blueprint("test_middleware")
+ blueprint = Blueprint("test_bp_middleware")
@blueprint.middleware("response")
async def process_response(request, response):
@@ -270,6 +270,38 @@ async def handler(request):
assert response.status == 200
assert response.text == "FAIL"
+def test_bp_middleware_order(app):
+ blueprint = Blueprint("test_bp_middleware_order")
+ order = list()
+ @blueprint.middleware("request")
+ def mw_1(request):
+ order.append(1)
+ @blueprint.middleware("request")
+ def mw_2(request):
+ order.append(2)
+ @blueprint.middleware("request")
+ def mw_3(request):
+ order.append(3)
+ @blueprint.middleware("response")
+ def mw_4(request, response):
+ order.append(6)
+ @blueprint.middleware("response")
+ def mw_5(request, response):
+ order.append(5)
+ @blueprint.middleware("response")
+ def mw_6(request, response):
+ order.append(4)
+
+ @blueprint.route("/")
+ def process_response(request):
+ return text("OK")
+
+ app.blueprint(blueprint)
+ order.clear()
+ request, response = app.test_client.get("/")
+
+ assert response.status == 200
+ assert order == [1, 2, 3, 4, 5, 6]
def test_bp_exception_handler(app):
blueprint = Blueprint("test_middleware")
| "Named Response Middleware" executed in wrong order
**Describe the bug**
PR https://github.com/huge-success/sanic/pull/1690 introduced "named response middleware", that is, middleware which is only executed in a given request context. For example, a blueprint middleware is only executed on a route which is defined in _that_ blueprint.
There was a copy+paste error in the `register_named_middleware` function, here: https://github.com/huge-success/sanic/blob/e7001b00747b659f7042b0534802b936ee8a53e0/sanic/app.py#L656
When registering "response" middleware, the handlers are supposed to be prepended so that they run in reverse registration order, so `appendleft()` should be used instead of `append()`. The correct behavior is seen in the normal `register_middleware` function.
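For illustration only (plain Python, not the Sanic internals), `appendleft()` on a deque produces the reverse execution order for free:

```python
from collections import deque

response_middleware = deque()
for name in ("mw_4", "mw_5", "mw_6"):  # registration order
    response_middleware.appendleft(name)

print(list(response_middleware))  # ['mw_6', 'mw_5', 'mw_4'] -- execution order
```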
**Code snippet**
See these two examples, the first using normal middleware, and the second using named middleware:
```
from sanic import Sanic
from sanic.response import text
app = Sanic(__name__)
@app.middleware('request')
def request_middleware_1(request):
print('1')
@app.middleware('request')
def request_middleware_2(request):
print('2')
@app.middleware('request')
def request_middleware_3(request):
print('3')
@app.middleware('response')
def resp_middleware_4(request, response):
print('4')
@app.middleware('response')
def resp_middleware_5(request, response):
print('5')
@app.middleware('response')
def resp_middleware_6(request, response):
print('6')
@app.route('/')
def pop_handler(request):
return text('hello world')
app.run(host="0.0.0.0", port=8000, debug=True, auto_reload=False)
```
vs:
```
from sanic import Sanic, Blueprint
from sanic.response import text
app = Sanic(__name__)
bp = Blueprint("bp_"+__name__)
@bp.middleware('request')
def request_middleware_1(request):
print('1')
@bp.middleware('request')
def request_middleware_2(request):
print('2')
@bp.middleware('request')
def request_middleware_3(request):
print('3')
@bp.middleware('response')
def resp_middleware_4(request, response):
print('4')
@bp.middleware('response')
def resp_middleware_5(request, response):
print('5')
@bp.middleware('response')
def resp_middleware_6(request, response):
print('6')
@bp.route('/')
def pop_handler(request):
return text('hello world')
app.blueprint(bp, url_prefix='/bp')
app.run(host="0.0.0.0", port=8000, debug=True, auto_reload=False)
```
**Expected behavior**
Note that the first snippet prints "1 2 3 6 5 4" (correct) but the second snippet prints "1 2 3 4 5 6". The second should match the first.
**Additional Context**
This bug is _similar to_ but not the same as https://github.com/huge-success/sanic/issues/1845
This bug was uncovered while looking deeper into https://github.com/huge-success/sanic/issues/1845
| 2020-05-13T23:55:23 |
|
sanic-org/sanic | 1,857 | sanic-org__sanic-1857 | [
"1856"
] | bedf68a9b2025618a94cb8044f495a0abd87a134 | diff --git a/sanic/websocket.py b/sanic/websocket.py
--- a/sanic/websocket.py
+++ b/sanic/websocket.py
@@ -113,7 +113,7 @@ async def websocket_handshake(self, request, subprotocols=None):
# hook up the websocket protocol
self.websocket = WebSocketCommonProtocol(
- timeout=self.websocket_timeout,
+ close_timeout=self.websocket_timeout,
max_size=self.websocket_max_size,
max_queue=self.websocket_max_queue,
read_limit=self.websocket_read_limit,
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -79,7 +79,7 @@ def open_local(paths, mode="r", encoding="utf8"):
uvloop,
ujson,
"aiofiles>=0.3.0",
- "websockets>=7.0,<9.0",
+ "websockets>=8.1,<9.0",
"multidict>=4.0,<5.0",
"httpx==0.11.1",
]
| diff --git a/tests/test_reloader.py b/tests/test_reloader.py
--- a/tests/test_reloader.py
+++ b/tests/test_reloader.py
@@ -1,8 +1,9 @@
import os
import secrets
import sys
+from contextlib import suppress
-from subprocess import PIPE, Popen
+from subprocess import PIPE, Popen, TimeoutExpired
from tempfile import TemporaryDirectory
from textwrap import dedent
from threading import Timer
@@ -85,4 +86,5 @@ async def test_reloader_live(runargs, mode):
finally:
timeout.cancel()
terminate(proc)
- proc.wait(timeout=3)
+ with suppress(TimeoutExpired):
+ proc.wait(timeout=3)
| Nightly build fails due to websockets version not matching setup.py
on setup.py: >=0.7.0,<0.9
on tox.ini: >=0.7.0,<0.8
| 2020-05-16T18:30:30 |
|
sanic-org/sanic | 1,906 | sanic-org__sanic-1906 | [
"1904"
] | 0072fd1573d43c64a4a2b9b89b4e4f887bc07a70 | diff --git a/sanic/config.py b/sanic/config.py
--- a/sanic/config.py
+++ b/sanic/config.py
@@ -24,6 +24,8 @@
"WEBSOCKET_MAX_QUEUE": 32,
"WEBSOCKET_READ_LIMIT": 2 ** 16,
"WEBSOCKET_WRITE_LIMIT": 2 ** 16,
+ "WEBSOCKET_PING_TIMEOUT": 20,
+ "WEBSOCKET_PING_INTERVAL": 20,
"GRACEFUL_SHUTDOWN_TIMEOUT": 15.0, # 15 sec
"ACCESS_LOG": True,
"FORWARDED_SECRET": None,
diff --git a/sanic/server.py b/sanic/server.py
--- a/sanic/server.py
+++ b/sanic/server.py
@@ -14,11 +14,13 @@
from signal import SIG_IGN, SIGINT, SIGTERM, Signals
from signal import signal as signal_func
from time import time
+from typing import Type
from httptools import HttpRequestParser # type: ignore
from httptools.parser.errors import HttpParserError # type: ignore
from sanic.compat import Header, ctrlc_workaround_for_windows
+from sanic.config import Config
from sanic.exceptions import (
HeaderExpectationFailed,
InvalidUsage,
@@ -844,6 +846,7 @@ def serve(
app.asgi = False
connections = connections if connections is not None else set()
+ protocol_kwargs = _build_protocol_kwargs(protocol, app.config)
server = partial(
protocol,
loop=loop,
@@ -852,6 +855,7 @@ def serve(
app=app,
state=state,
unix=unix,
+ **protocol_kwargs,
)
asyncio_server_kwargs = (
asyncio_server_kwargs if asyncio_server_kwargs else {}
@@ -948,6 +952,21 @@ def serve(
remove_unix_socket(unix)
+def _build_protocol_kwargs(
+ protocol: Type[HttpProtocol], config: Config
+) -> dict:
+ if hasattr(protocol, "websocket_timeout"):
+ return {
+ "max_size": config.WEBSOCKET_MAX_SIZE,
+ "max_queue": config.WEBSOCKET_MAX_QUEUE,
+ "read_limit": config.WEBSOCKET_READ_LIMIT,
+ "write_limit": config.WEBSOCKET_WRITE_LIMIT,
+ "ping_timeout": config.WEBSOCKET_PING_TIMEOUT,
+ "ping_interval": config.WEBSOCKET_PING_INTERVAL,
+ }
+ return {}
+
+
def bind_socket(host: str, port: int, *, backlog=100) -> socket.socket:
"""Create TCP server socket.
:param host: IPv4, IPv6 or hostname may be specified
diff --git a/sanic/websocket.py b/sanic/websocket.py
--- a/sanic/websocket.py
+++ b/sanic/websocket.py
@@ -35,6 +35,8 @@ def __init__(
websocket_max_queue=None,
websocket_read_limit=2 ** 16,
websocket_write_limit=2 ** 16,
+ websocket_ping_interval=20,
+ websocket_ping_timeout=20,
**kwargs
):
super().__init__(*args, **kwargs)
@@ -45,6 +47,8 @@ def __init__(
self.websocket_max_queue = websocket_max_queue
self.websocket_read_limit = websocket_read_limit
self.websocket_write_limit = websocket_write_limit
+ self.websocket_ping_interval = websocket_ping_interval
+ self.websocket_ping_timeout = websocket_ping_timeout
# timeouts make no sense for websocket routes
def request_timeout_callback(self):
@@ -119,6 +123,8 @@ async def websocket_handshake(self, request, subprotocols=None):
max_queue=self.websocket_max_queue,
read_limit=self.websocket_read_limit,
write_limit=self.websocket_write_limit,
+ ping_interval=self.websocket_ping_interval,
+ ping_timeout=self.websocket_ping_timeout,
)
# Following two lines are required for websockets 8.x
self.websocket.is_client = False
| diff --git a/tests/test_app.py b/tests/test_app.py
--- a/tests/test_app.py
+++ b/tests/test_app.py
@@ -1,6 +1,7 @@
import asyncio
import logging
import sys
+from unittest.mock import patch
from inspect import isawaitable
@@ -148,6 +149,35 @@ async def handler(request, ws):
assert app.websocket_enabled == True
+@patch("sanic.app.WebSocketProtocol")
+def test_app_websocket_parameters(websocket_protocol_mock, app):
+ app.config.WEBSOCKET_MAX_SIZE = 44
+ app.config.WEBSOCKET_MAX_QUEUE = 45
+ app.config.WEBSOCKET_READ_LIMIT = 46
+ app.config.WEBSOCKET_WRITE_LIMIT = 47
+ app.config.WEBSOCKET_PING_TIMEOUT = 48
+ app.config.WEBSOCKET_PING_INTERVAL = 50
+
+ @app.websocket("/ws")
+ async def handler(request, ws):
+ await ws.send("test")
+
+ try:
+ # This will fail because WebSocketProtocol is mocked and only the call kwargs matter
+ app.test_client.get("/ws")
+ except:
+ pass
+
+ websocket_protocol_call_args = websocket_protocol_mock.call_args
+ ws_kwargs = websocket_protocol_call_args[1]
+ assert ws_kwargs["max_size"] == app.config.WEBSOCKET_MAX_SIZE
+ assert ws_kwargs["max_queue"] == app.config.WEBSOCKET_MAX_QUEUE
+ assert ws_kwargs["read_limit"] == app.config.WEBSOCKET_READ_LIMIT
+ assert ws_kwargs["write_limit"] == app.config.WEBSOCKET_WRITE_LIMIT
+ assert ws_kwargs["ping_timeout"] == app.config.WEBSOCKET_PING_TIMEOUT
+ assert ws_kwargs["ping_interval"] == app.config.WEBSOCKET_PING_INTERVAL
+
+
def test_handle_request_with_nested_exception(app, monkeypatch):
err_msg = "Mock Exception"
| Ability to set ping_interval and ping_timeout parameters for WebSocketCommonProtocol
**Is your feature request related to a problem? Please describe.**
The `ping_interval` and `ping_timeout` parameters for [WebSocketCommonProtocol](https://websockets.readthedocs.io/en/stable/api.html#websockets.protocol.WebSocketCommonProtocol) default to 20 seconds. It would be nice to be able to set them with configuration values.
**Describe the solution you'd like**
Add `WEBSOCKET_PING_INTERVAL` and `WEBSOCKET_PING_TIMEOUT` to configuration values and pass the values into the [WebSocketCommonProtocol initialization](https://github.com/huge-success/sanic/blob/master/sanic/websocket.py#L116).
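Hypothetical usage once such settings exist (the names below are the ones proposed here and used in the patch above; they were not yet released when this request was filed):

```python
from sanic import Sanic

app = Sanic("ws_app")
app.config.WEBSOCKET_PING_INTERVAL = 5   # seconds between automatic pings
app.config.WEBSOCKET_PING_TIMEOUT = 10   # seconds to wait for the pong reply
```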
**Additional context**
| 2020-08-05T22:13:27 |
|
sanic-org/sanic | 1,954 | sanic-org__sanic-1954 | [
"1953"
] | 5928c5005786b690539d3cf2c2814f696a326104 | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -676,9 +676,10 @@ def static(
:param strict_slashes: Instruct :class:`Sanic` to check if the request
URLs need to terminate with a */*
:param content_type: user defined content type for header
- :return: None
+ :return: routes registered on the router
+ :rtype: List[sanic.router.Route]
"""
- static_register(
+ return static_register(
self,
uri,
file_or_directory,
diff --git a/sanic/blueprints.py b/sanic/blueprints.py
--- a/sanic/blueprints.py
+++ b/sanic/blueprints.py
@@ -143,7 +143,18 @@ def register(self, app, options):
if _routes:
routes += _routes
+ # Static Files
+ for future in self.statics:
+ # Prepend the blueprint URI prefix if available
+ uri = url_prefix + future.uri if url_prefix else future.uri
+ _routes = app.static(
+ uri, future.file_or_directory, *future.args, **future.kwargs
+ )
+ if _routes:
+ routes += _routes
+
route_names = [route.name for route in routes if route]
+
# Middleware
for future in self.middlewares:
if future.args or future.kwargs:
@@ -160,14 +171,6 @@ def register(self, app, options):
for future in self.exceptions:
app.exception(*future.args, **future.kwargs)(future.handler)
- # Static Files
- for future in self.statics:
- # Prepend the blueprint URI prefix if available
- uri = url_prefix + future.uri if url_prefix else future.uri
- app.static(
- uri, future.file_or_directory, *future.args, **future.kwargs
- )
-
# Event listeners
for event, listeners in self.listeners.items():
for listener in listeners:
diff --git a/sanic/static.py b/sanic/static.py
--- a/sanic/static.py
+++ b/sanic/static.py
@@ -134,6 +134,8 @@ def register(
threshold size to switch to file_stream()
:param name: user defined name used for url_for
:param content_type: user defined content type for header
+ :return: registered static routes
+ :rtype: List[sanic.router.Route]
"""
# If we're not trying to match a file directly,
# serve from the folder
@@ -155,10 +157,11 @@ def register(
)
)
- app.route(
+ _routes, _ = app.route(
uri,
methods=["GET", "HEAD"],
name=name,
host=host,
strict_slashes=strict_slashes,
)(_handler)
+ return _routes
| diff --git a/tests/test_blueprints.py b/tests/test_blueprints.py
--- a/tests/test_blueprints.py
+++ b/tests/test_blueprints.py
@@ -735,6 +735,36 @@ def test_static_blueprint_name(app: Sanic, static_file_directory, file_name):
_, response = app.test_client.get("/static/test.file/")
assert response.status == 200
[email protected]("file_name", ["test.file"])
+def test_static_blueprintp_mw(app: Sanic, static_file_directory, file_name):
+ current_file = inspect.getfile(inspect.currentframe())
+ with open(current_file, "rb") as file:
+ file.read()
+
+ triggered = False
+
+ bp = Blueprint(name="test_mw", url_prefix="")
+
+ @bp.middleware('request')
+ def bp_mw1(request):
+ nonlocal triggered
+ triggered = True
+
+ bp.static(
+ "/test.file",
+ get_file_path(static_file_directory, file_name),
+ strict_slashes=True,
+ name="static"
+ )
+
+ app.blueprint(bp)
+
+ uri = app.url_for("test_mw.static")
+ assert uri == "/test.file"
+
+ _, response = app.test_client.get("/test.file")
+ assert triggered is True
+
def test_route_handler_add(app: Sanic):
view = CompositionView()
| Decorators not applied to Static Files
**Describe the bug**
Decorators cannot be applied to static file serving. My use case here is serving auto-generated docs through the application itself while requiring valid Basic Authentication for the static files.
**Code snippet**
```python
import os
from sanic import Blueprint, Sanic
from sanic.request import Request
from sanic_httpauth import HTTPBasicAuth
auth = HTTPBasicAuth()
users = {
"docs": "docs",
}
@auth.verify_password
def verify_password(username: str, password: str) -> bool:
"""
basic auth credentials validation
:param username:
:param password:
:return:
"""
if username in users:
return users.get(username) == "docs"
return False
def ApiDoc(app: Sanic):
"""
add autogenerated Api Docs to the Sanic App
:param app: the Sanic App
:return:
"""
    directory = './api_doc/'
path = '/api_doc'
bp = Blueprint('api_doc', url_prefix=path)
bp.static('/', directory)
@bp.middleware('request')
@auth.login_required
def ba_middleware(request: Request):
"""
inject Basic Authentication for the whole blueprint
:param request:
:return:
"""
pass
app.blueprint(bp)
```
this works perfectly fine for e.g. the sanic-swagger blueprint.
**Expected behavior**
the blueprint middleware is also applied when serving the static files
**Environment (please complete the following information):**
- OS: osx
- Python 3.8.6
- Sanic 20.9.0
| Hi @digitalkaoz
I was about to reply with a comment like "this is expected, because request middlewares do not apply to static file routes".
However I just did a test, and `app.middleware('request')` decorators _do_ apply to `app.static()` routes.
So this must be a bug in the way blueprints register static routes. It's likely related to the relatively new namespaced middlewares used in blueprints (i.e. `named_middleware`); it looks like the `applicable_middlewares` algorithm does not detect static routes within a blueprint as candidates to apply the middleware to.
This might be intentional, but I doubt it. I'll add a test for it in our test suite, and try to see if there is a clean way to fix it without introducing any breakage or performance decrease here.
| 2020-10-22T23:47:38 |
sanic-org/sanic | 1,965 | sanic-org__sanic-1965 | [
"1964"
] | 5961da3f571314fb95699fea404d96a7cdd93171 | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -1454,6 +1454,8 @@ async def __call__(self, scope, receive, send):
asgi_app = await ASGIApp.create(self, scope, receive, send)
await asgi_app()
+ _asgi_single_callable = True # We conform to ASGI 3.0 single-callable
+
# -------------------------------------------------------------------- #
# Configuration
# -------------------------------------------------------------------- #
diff --git a/sanic/asgi.py b/sanic/asgi.py
--- a/sanic/asgi.py
+++ b/sanic/asgi.py
@@ -312,13 +312,19 @@ async def __call__(self) -> None:
callback = None if self.ws else self.stream_callback
await handler(self.request, None, callback)
- async def stream_callback(self, response: HTTPResponse) -> None:
+ _asgi_single_callable = True # We conform to ASGI 3.0 single-callable
+
+ async def stream_callback(
+ self, response: Union[HTTPResponse, StreamingHTTPResponse]
+ ) -> None:
"""
Write the response.
"""
headers: List[Tuple[bytes, bytes]] = []
cookies: Dict[str, str] = {}
+ content_length: List[str] = []
try:
+ content_length = response.headers.popall("content-length", [])
cookies = {
v.key: v
for _, v in list(
@@ -351,12 +357,22 @@ async def stream_callback(self, response: HTTPResponse) -> None:
]
response.asgi = True
-
- if "content-length" not in response.headers and not isinstance(
- response, StreamingHTTPResponse
- ):
+ is_streaming = isinstance(response, StreamingHTTPResponse)
+ if is_streaming and getattr(response, "chunked", False):
+ # disable sanic chunking, this is done at the ASGI-server level
+ setattr(response, "chunked", False)
+ # content-length header is removed to signal to the ASGI-server
+ # to use automatic-chunking if it supports it
+ elif len(content_length) > 0:
headers += [
- (b"content-length", str(len(response.body)).encode("latin-1"))
+ (b"content-length", str(content_length[0]).encode("latin-1"))
+ ]
+ elif not is_streaming:
+ headers += [
+ (
+ b"content-length",
+ str(len(getattr(response, "body", b""))).encode("latin-1"),
+ )
]
if "content-type" not in response.headers:
diff --git a/sanic/response.py b/sanic/response.py
--- a/sanic/response.py
+++ b/sanic/response.py
@@ -100,6 +100,8 @@ async def write(self, data):
"""
data = self._encode_body(data)
+ # `chunked` will always be False in ASGI-mode, even if the underlying
+ # ASGI Transport implements Chunked transport. That does it itself.
if self.chunked:
await self.protocol.push_data(b"%x\r\n%b\r\n" % (len(data), data))
else:
| diff --git a/tests/test_response.py b/tests/test_response.py
--- a/tests/test_response.py
+++ b/tests/test_response.py
@@ -238,7 +238,7 @@ def test_chunked_streaming_returns_correct_content(streaming_app):
@pytest.mark.asyncio
async def test_chunked_streaming_returns_correct_content_asgi(streaming_app):
request, response = await streaming_app.asgi_client.get("/")
- assert response.text == "4\r\nfoo,\r\n3\r\nbar\r\n0\r\n\r\n"
+ assert response.text == "foo,bar"
def test_non_chunked_streaming_adds_correct_headers(non_chunked_streaming_app):
| Possible bug in response.py when using Sanic + ASGI (chunked stream)
**Describe the bug**
In the response.py write method, if chunked==True, the code pushes the data into the stream prefixed with the chunk length:
```python
if self.chunked:
await self.protocol.push_data(b"%x\r\n%b\r\n" % (len(data), data))
```
It looks like uvicorn and daphne already perform this chunk framing themselves (e.g. in uvicorn's protocols/http/httptools_impl.py), which changes what the client will consider the body of the response.
If Sanic is not running under ASGI we obviously need that operation, but when using ASGI the chunked framing is prepared by the corresponding gateway.
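To make the effect concrete (an assumed byte-level illustration, not taken from a real capture): for the chunk "foo," Sanic would emit `4\r\nfoo,\r\n`, and a chunking ASGI server would then roughly wrap those 9 bytes again as `9\r\n4\r\nfoo,\r\n\r\n`, so after the client strips the outer chunk framing it treats `4\r\nfoo,\r\n` as part of the body.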
**How to reproduce**
It is enough to stream a generic file and observe the response received by the client. With text files (like in the example) it's not a big issue, but if you deal with media files, the client-side decoder will just fail (I figured this out trying to stream mp4).
main.py
```python
import os
from sanic import Sanic
from sanic.handlers import ContentRangeHandler
from sanic.exceptions import NotFound, HeaderNotFound, InvalidUsage, SanicException
from sanic import Blueprint, response
from aiofiles import os as async_os
from sanic.response import file_stream
import uvicorn
app = Sanic(__name__)
@app.route("/test.txt")
async def handler_file_stream(request):
return await response.file_stream(
"./test.txt",
chunk_size=1024
)
if __name__ == "__main__":
#app.run(host="0.0.0.0", port=8000, debug=False)
uvicorn.run(app, host="0.0.0.0", port=8000, workers=1, log_level="debug")
```
test.txt
```txt
"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut
labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris
nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit
esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt
in culpa qui officia deserunt mollit anim id est laborum."
```
*Note*: setting chunked=True in response.file_stream() will make the response work, but it is not the default value.
**Environment:**
- OS: Tested with MacOS and Linux Ubuntu 20
- Version 20.9.1, uvicorn 0.12.2, daphne 3.0.0
- Python 3.8
Thanks!
| Hi @logtheta
Looks like this is very similar to, but not the same as https://github.com/huge-success/sanic/issues/1730
https://github.com/huge-success/sanic/issues/1730 was fixed by https://github.com/huge-success/sanic/pull/1957 and released in v20.9.1, but looks like it doesn't fix _this_ issue.
I'll look into whether it might be a quick fix.
Here in uvicorn, for reference: https://github.com/encode/uvicorn/blob/6468b70c85e10ae2d405165ca69f2b6a5bd55878/uvicorn/protocols/http/httptools_impl.py#L512
@ashleysommer Thanks for looking into it. Yes, it looks similar to #1730. I think it is worth fixing since the chunked feature is very useful; I personally use it for video pseudo-streaming.
Thanks again!
Looks like the way we're doing chunked-encoding is completely incompatible with the ASGI-spec here:
https://asgi.readthedocs.io/en/latest/specs/www.html#response-start-send-event
That's why our current solution works if httpx is the asgi-response-transport, but not if uvicorn or daphne are the transport.
That's why the tests pass, but we see breakage in real-world applications.
@huge-success/sanic-core-devs See the section in the spec regarding
>You may send a Transfer-Encoding header in this message, but the server must ignore it. Servers handle Transfer-Encoding themselves, and may opt to use Transfer-Encoding: chunked if the application presents a response that has no Content-Length set.
So in ASGI mode we need to _not_ set the `Transfer-Encoding: chunked` header, and to signal to the ASGI transport that we're doing chunked mode, we need to _not_ specify a "content-length" header. The ASGI transport will then do its own chunking, based on our subsequent http.response.body messages.
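A rough sketch of what that looks like at the ASGI message level (an assumed bare handler, not Sanic's actual implementation):

```python
async def asgi_app(scope, receive, send):
    # No content-length header: a server that supports it may then apply
    # Transfer-Encoding: chunked on its own.
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"foo,", "more_body": True})
    await send({"type": "http.response.body", "body": b"bar", "more_body": False})
```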
@ashleysommer I agree, it should be ASGI's responsibility to "prepare" the chunking (while from Sanic we just set the flag). At the beginning I thought it was uvicorn's issue, but then I realized that other gateways were doing the same, so I decided to open the issue.
If anything, the bug is in the `httpx` asgi-response-transport mechanism, because that's what we test against. It doesn't do _any_ chunking at the response level, so that led us to assume we still do it at the Sanic level.
But response chunking is optional for the ASGI transport, so it's not really a bug that it doesn't do it. The httpx transport just gathers up all of the async body parts and sends them when it's done, rather than doing chunked transport, which is a perfectly valid way of doing it if the ASGI transport doesn't support chunked transport.
I've got a fix made, just need to package it up into a nice PR for master, and for the 20.9.x series, and also for the 19.12.x LTS series | 2020-11-05T05:29:28 |
sanic-org/sanic | 2,001 | sanic-org__sanic-2001 | [
"1810"
] | 7028eae083b0da72d09111b9892ddcc00bce7df4 | diff --git a/sanic/cookies.py b/sanic/cookies.py
--- a/sanic/cookies.py
+++ b/sanic/cookies.py
@@ -109,7 +109,7 @@ def __setitem__(self, key, value):
if value is not False:
if key.lower() == "max-age":
if not str(value).isdigit():
- value = DEFAULT_MAX_AGE
+ raise ValueError("Cookie max-age must be an integer")
elif key.lower() == "expires":
if not isinstance(value, datetime):
raise TypeError(
| diff --git a/tests/test_cookies.py b/tests/test_cookies.py
--- a/tests/test_cookies.py
+++ b/tests/test_cookies.py
@@ -162,7 +162,7 @@ def handler(request):
assert response.cookies["test"] == "pass"
[email protected]("max_age", ["0", 30, 30.0, 30.1, "30", "test"])
[email protected]("max_age", ["0", 30, "30"])
def test_cookie_max_age(app, max_age):
cookies = {"test": "wait"}
@@ -204,6 +204,23 @@ def handler(request):
assert cookie is None
[email protected]("max_age", [30.0, 30.1, "test"])
+def test_cookie_bad_max_age(app, max_age):
+ cookies = {"test": "wait"}
+
+ @app.get("/")
+ def handler(request):
+ response = text("pass")
+ response.cookies["test"] = "pass"
+ response.cookies["test"]["max-age"] = max_age
+ return response
+
+ request, response = app.test_client.get(
+ "/", cookies=cookies, raw_cookies=True
+ )
+ assert response.status == 500
+
+
@pytest.mark.parametrize(
"expires", [datetime.utcnow() + timedelta(seconds=60)]
)
| Hard error on invalid max-age cookie
**Describe the bug**
Currently, when setting the `max-age` cookie value, it's possible for a valid value to not be set as expected, and for an invalid value to be accepted without raising a hard error. In both cases the value is replaced by a `max-age` of `0`.
**Code snippet**
```python
response.cookie["my-cookie"]["max-age"] = 10.0 # max-age is set to 0
response.cookie["my-cookie"]["max-age"] = 10.5 # max-age is set to 0
response.cookie["my-cookie"]["max-age"] = "ten" # max-age is set to 0
response.cookie["my-cookie"]["max-age"] = "10" # max-age is set to 10
response.cookie["my-cookie"]["max-age"] = 10 # max-age is set to 10
```
**Expected behavior**
Here's what I think the expected behaviour should be (akin to how the `expires` cookie attribute is handled; raising an error if not a `datetime.datetime`).
```python
response.cookie["my-cookie"]["max-age"] = 10.0 # max-age is set to 10
response.cookie["my-cookie"]["max-age"] = 10.5 # raise ValueError
response.cookie["my-cookie"]["max-age"] = "ten" # raise ValueError
response.cookie["my-cookie"]["max-age"] = "10" # max-age is set to 10
response.cookie["my-cookie"]["max-age"] = 10 # max-age is set to 10
```
**Environment (please complete the following information):**
- OS: macOS
- Version 19.12.2
**Additional context**
I've created a pull request for this here #1809. Here's the issue relating to the original implementation #1452.
Creating this issue so I can have an issue number for the changelog.
| This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. If this is incorrect, please respond with an update. Thank you for your contributions.
@ahopkins Should we close the ticket? Looks like https://github.com/huge-success/sanic/pull/1457 fixed the issue.
Let's investigate. That PR was several years ago. There was a change to cookies not that long ago, maybe it is a regression? Let's confirm this on both sanic server and ASGI. I'm not at the computer now to check if there are tests for this already or not.
Looks like for the server we have the tests, but for ASGI we don't. | 2021-01-11T12:21:48 |
sanic-org/sanic | 2,012 | sanic-org__sanic-2012 | [
"2011"
] | 6009e6d35d1185dcb00465ed92f70bf87a41689b | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -86,6 +86,7 @@ def __init__(
self.websocket_tasks: Set[Future] = set()
self.named_request_middleware: Dict[str, MiddlewareType] = {}
self.named_response_middleware: Dict[str, MiddlewareType] = {}
+ self._test_manager = None
self._test_client = None
self._asgi_client = None
# Register alternative method names
@@ -1032,18 +1033,22 @@ async def handle_request(self, request):
# -------------------------------------------------------------------- #
@property
- def test_client(self):
+ def test_client(self): # noqa
if self._test_client:
return self._test_client
+ elif self._test_manager:
+ return self._test_manager.test_client
from sanic_testing.testing import SanicTestClient # type: ignore
self._test_client = SanicTestClient(self)
return self._test_client
@property
- def asgi_client(self):
+ def asgi_client(self): # noqa
if self._asgi_client:
return self._asgi_client
+ elif self._test_manager:
+ return self._test_manager.test_client
from sanic_testing.testing import SanicASGITestClient # type: ignore
self._asgi_client = SanicASGITestClient(self)
| diff --git a/tests/test_static.py b/tests/test_static.py
--- a/tests/test_static.py
+++ b/tests/test_static.py
@@ -1,5 +1,6 @@
import inspect
import os
+
from pathlib import Path
from time import gmtime, strftime
@@ -93,8 +94,8 @@ def test_static_file_pathlib(app, static_file_directory, file_name):
[b"test.file", b"decode me.txt", b"python.png"],
)
def test_static_file_bytes(app, static_file_directory, file_name):
- bsep = os.path.sep.encode('utf-8')
- file_path = static_file_directory.encode('utf-8') + bsep + file_name
+ bsep = os.path.sep.encode("utf-8")
+ file_path = static_file_directory.encode("utf-8") + bsep + file_name
app.static("/testing.file", file_path)
request, response = app.test_client.get("/testing.file")
assert response.status == 200
| sanic-testing integration improvements
In the testing client properties, let's add a check for the `TestManager` instance.
```
elif hasattr(self, "_test_manager"):
return self._test_manager.test_client
```
| 2021-01-28T07:35:26 |
|
sanic-org/sanic | 2,053 | sanic-org__sanic-2053 | [
"2048"
] | 400f54c7ec68d319e7de26743c5531aef3802143 | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -1182,10 +1182,21 @@ def register_app(cls, app: "Sanic") -> None:
cls._app_registry[name] = app
@classmethod
- def get_app(cls, name: str, *, force_create: bool = False) -> "Sanic":
+ def get_app(
+ cls, name: Optional[str] = None, *, force_create: bool = False
+ ) -> "Sanic":
"""
Retrieve an instantiated Sanic instance
"""
+ if name is None:
+ if len(cls._app_registry) > 1:
+ raise SanicException(
+ 'Multiple Sanic apps found, use Sanic.get_app("app_name")'
+ )
+ elif len(cls._app_registry) == 0:
+ raise SanicException(f"No Sanic apps have been registered.")
+ else:
+ return list(cls._app_registry.values())[0]
try:
return cls._app_registry[name]
except KeyError:
| diff --git a/tests/test_app.py b/tests/test_app.py
--- a/tests/test_app.py
+++ b/tests/test_app.py
@@ -1,6 +1,6 @@
import asyncio
import logging
-import sys
+import re
from inspect import isawaitable
from os import environ
@@ -13,6 +13,11 @@
from sanic.response import text
[email protected](autouse=True)
+def clear_app_registry():
+ Sanic._app_registry = {}
+
+
def uvloop_installed():
try:
import uvloop # noqa
@@ -286,14 +291,18 @@ def test_app_registry():
def test_app_registry_wrong_type():
- with pytest.raises(SanicException):
+ with pytest.raises(
+ SanicException, match="Registered app must be an instance of Sanic"
+ ):
Sanic.register_app(1)
def test_app_registry_name_reuse():
Sanic("test")
Sanic.test_mode = False
- with pytest.raises(SanicException):
+ with pytest.raises(
+ SanicException, match='Sanic app name "test" already in use.'
+ ):
Sanic("test")
Sanic.test_mode = True
Sanic("test")
@@ -304,8 +313,16 @@ def test_app_registry_retrieval():
assert Sanic.get_app("test") is instance
+def test_app_registry_retrieval_from_multiple():
+ instance = Sanic("test")
+ Sanic("something_else")
+ assert Sanic.get_app("test") is instance
+
+
def test_get_app_does_not_exist():
- with pytest.raises(SanicException):
+ with pytest.raises(
+ SanicException, match='Sanic app name "does-not-exist" not found.'
+ ):
Sanic.get_app("does-not-exist")
@@ -315,15 +332,43 @@ def test_get_app_does_not_exist_force_create():
)
+def test_get_app_default():
+ instance = Sanic("test")
+ assert Sanic.get_app() is instance
+
+
+def test_get_app_no_default():
+ with pytest.raises(
+ SanicException, match="No Sanic apps have been registered."
+ ):
+ Sanic.get_app()
+
+
+def test_get_app_default_ambiguous():
+ Sanic("test1")
+ Sanic("test2")
+ with pytest.raises(
+ SanicException,
+ match=re.escape(
+ 'Multiple Sanic apps found, use Sanic.get_app("app_name")'
+ ),
+ ):
+ Sanic.get_app()
+
+
def test_app_no_registry():
Sanic("no-register", register=False)
- with pytest.raises(SanicException):
+ with pytest.raises(
+ SanicException, match='Sanic app name "no-register" not found.'
+ ):
Sanic.get_app("no-register")
def test_app_no_registry_env():
environ["SANIC_REGISTER"] = "False"
Sanic("no-register")
- with pytest.raises(SanicException):
+ with pytest.raises(
+ SanicException, match='Sanic app name "no-register" not found.'
+ ):
Sanic.get_app("no-register")
del environ["SANIC_REGISTER"]
| Sanic.get_app with no app name
**Is your feature request related to a problem? Please describe.**
Sometimes you want to get the app instance, but do not have the name of the app.
```python
app = Sanic.get_app("what was it called again 🤔")
```
**Describe the solution you'd like**
```python
@classmethod
def get_app(cls, name: Optional[str] = None, *, force_create: bool = False) -> "Sanic":
# If name is None, then return the first item in the app registry
```
**Additional context**
[See docs](https://sanicframework.org/guide/basics/app.html#app-registry)
| 2021-03-10T07:52:36 |
|
sanic-org/sanic | 2,072 | sanic-org__sanic-2072 | [
"2073"
] | 15a8b5c8946de0231ef20831e8e5ffb025f55c54 | diff --git a/sanic/base.py b/sanic/base.py
--- a/sanic/base.py
+++ b/sanic/base.py
@@ -8,38 +8,19 @@
from sanic.mixins.signals import SignalMixin
-class Base(type):
- def __new__(cls, name, bases, attrs):
- init = attrs.get("__init__")
-
- def __init__(self, *args, **kwargs):
- nonlocal init
- nonlocal name
-
- bases = [
- b for base in type(self).__bases__ for b in base.__bases__
- ]
-
- for base in bases:
- base.__init__(self, *args, **kwargs)
-
- if init:
- init(self, *args, **kwargs)
-
- attrs["__init__"] = __init__
- return type.__new__(cls, name, bases, attrs)
-
-
class BaseSanic(
RouteMixin,
MiddlewareMixin,
ListenerMixin,
ExceptionMixin,
SignalMixin,
- metaclass=Base,
):
__fake_slots__: Tuple[str, ...]
+ def __init__(self, *args, **kwargs) -> None:
+ for base in BaseSanic.__bases__:
+ base.__init__(self, *args, **kwargs) # type: ignore
+
def __str__(self) -> str:
return f"<{self.__class__.__name__} {self.name}>"
diff --git a/sanic/blueprints.py b/sanic/blueprints.py
--- a/sanic/blueprints.py
+++ b/sanic/blueprints.py
@@ -73,6 +73,7 @@ def __init__(
version: Optional[int] = None,
strict_slashes: Optional[bool] = None,
):
+ super().__init__()
self._apps: Set[Sanic] = set()
self.ctx = SimpleNamespace()
| diff --git a/tests/test_app.py b/tests/test_app.py
--- a/tests/test_app.py
+++ b/tests/test_app.py
@@ -405,3 +405,10 @@ def test_app_set_context(app):
retrieved = Sanic.get_app(app.name)
assert retrieved.ctx.foo == 1
+
+
+def test_subclass_initialisation():
+ class CustomSanic(Sanic):
+ pass
+
+ CustomSanic("test_subclass_initialisation")
| RecursionError on Sanic subclass initialisation
Discovered at https://github.com/sanic-org/sanic/issues/2071
version: latest sanic master (`8a2ea626c6d04a5eb1e28d071ffa56bf9ad98a12`)
description:
RecursionError occurs when initialising Sanic subclass
minimal code to reproduce:
```python
from sanic import Sanic
class Custom(Sanic):
pass
custom = Custom("custom")
```
Potential fix: https://github.com/sanic-org/sanic/pull/2072
| 2021-03-20T16:56:26 |
|
sanic-org/sanic | 2,076 | sanic-org__sanic-2076 | [
"2075"
] | 13630a79ad62e97cd8d62e32b61cda23bd3bcb19 | diff --git a/examples/static_assets.py b/examples/static_assets.py
new file mode 100644
--- /dev/null
+++ b/examples/static_assets.py
@@ -0,0 +1,6 @@
+from sanic import Sanic
+
+
+app = Sanic(__name__)
+
+app.static("/", "./static")
diff --git a/sanic/__version__.py b/sanic/__version__.py
--- a/sanic/__version__.py
+++ b/sanic/__version__.py
@@ -1 +1 @@
-__version__ = "21.3.0"
+__version__ = "21.3.1"
diff --git a/sanic/mixins/routes.py b/sanic/mixins/routes.py
--- a/sanic/mixins/routes.py
+++ b/sanic/mixins/routes.py
@@ -776,7 +776,7 @@ def _register_static(
# If we're not trying to match a file directly,
# serve from the folder
if not path.isfile(file_or_directory):
- uri += "/<__file_uri__>"
+ uri += "/<__file_uri__:path>"
# special prefix for static files
# if not static.name.startswith("_static_"):
| diff --git a/tests/static/nested/dir/foo.txt b/tests/static/nested/dir/foo.txt
new file mode 100644
--- /dev/null
+++ b/tests/static/nested/dir/foo.txt
@@ -0,0 +1 @@
+foo
diff --git a/tests/test_static.py b/tests/test_static.py
--- a/tests/test_static.py
+++ b/tests/test_static.py
@@ -445,3 +445,12 @@ def test_static_name(app, static_file_directory, static_name, file_name):
request, response = app.test_client.get(f"/static/{file_name}")
assert response.status == 200
+
+
+def test_nested_dir(app, static_file_directory):
+ app.static("/static", static_file_directory)
+
+ request, response = app.test_client.get("/static/nested/dir/foo.txt")
+
+ assert response.status == 200
+ assert response.text == "foo\n"
| Static files inside subfolders are not accessible (404)
**Describe the bug**
After upgrading from 20.12.3 to 21.3.0 I'm getting 404 for all my static files (those files are located inside subfolders), example:
GET http://127.0.0.1:8000/static/node_modules/jquery/dist/jquery.min.js 404
but static files that are located directly inside static folder are accessible, example:
GET http://127.0.0.1:8000/static/package.json 200
**Code snippet**
I added static in this way (it was working fine for Sanic <= 20.12.3):
`app.static("/static", "./static")`
I'm assuming the problem is in this line:
`uri += "/<__file_uri__>"`
https://github.com/sanic-org/sanic/blob/master/sanic/mixins/routes.py#L779
It looks like there is no place for subfolders in the matched pattern.
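For context, a small illustration of the routing difference this points at (an assumed minimal app, not the reporter's code): a plain route parameter stops at "/", while a `path` parameter also spans nested segments.

```python
from sanic import Sanic
from sanic.response import text

app = Sanic("demo")


@app.get("/plain/<name>")        # matches /plain/foo but not /plain/a/b
async def plain(request, name):
    return text(name)


@app.get("/deep/<name:path>")    # matches /deep/a/b/c as well
async def deep(request, name):
    return text(name)
```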
**Expected behavior**
I expect static files inside subfolders to be accessible.
**Environment (please complete the following information):**
- OS: MacOS 11.2.3
- Version 21.3.0
- Python 3.9.0
**Additional context**
This is a problem only for the local environment; in prod, static files are not served by Sanic.
| :unamused: I knew there was bound to be some bug with such sweeping changes. I will work on a patch for this and get a 21.3.1 out as soon as I can. We need to add this to test coverage as well. | 2021-03-21T12:31:17 |
sanic-org/sanic | 2,081 | sanic-org__sanic-2081 | [
"2078"
] | 7be5f0ed3d083af95d1d7ac28276054af4ecf6da | diff --git a/sanic/server.py b/sanic/server.py
--- a/sanic/server.py
+++ b/sanic/server.py
@@ -234,11 +234,16 @@ def check_timeouts(self):
if stage is Stage.IDLE and duration > self.keep_alive_timeout:
logger.debug("KeepAlive Timeout. Closing connection.")
elif stage is Stage.REQUEST and duration > self.request_timeout:
+ logger.debug("Request Timeout. Closing connection.")
self._http.exception = RequestTimeout("Request Timeout")
+ elif stage is Stage.HANDLER and self._http.upgrade_websocket:
+ logger.debug("Handling websocket. Timeouts disabled.")
+ return
elif (
stage in (Stage.HANDLER, Stage.RESPONSE, Stage.FAILED)
and duration > self.response_timeout
):
+ logger.debug("Response Timeout. Closing connection.")
self._http.exception = ServiceUnavailable("Response Timeout")
else:
interval = (
| diff --git a/tests/test_response_timeout.py b/tests/test_response_timeout.py
--- a/tests/test_response_timeout.py
+++ b/tests/test_response_timeout.py
@@ -1,7 +1,11 @@
import asyncio
+import logging
+
+from time import sleep
from sanic import Sanic
from sanic.exceptions import ServiceUnavailable
+from sanic.log import LOGGING_CONFIG_DEFAULTS
from sanic.response import text
@@ -13,6 +17,8 @@
response_timeout_default_app.config.RESPONSE_TIMEOUT = 1
response_handler_cancelled_app.config.RESPONSE_TIMEOUT = 1
+response_handler_cancelled_app.ctx.flag = False
+
@response_timeout_app.route("/1")
async def handler_1(request):
@@ -25,32 +31,17 @@ def handler_exception(request, exception):
return text("Response Timeout from error_handler.", 503)
-def test_server_error_response_timeout():
- request, response = response_timeout_app.test_client.get("/1")
- assert response.status == 503
- assert response.text == "Response Timeout from error_handler."
-
-
@response_timeout_default_app.route("/1")
async def handler_2(request):
await asyncio.sleep(2)
return text("OK")
-def test_default_server_error_response_timeout():
- request, response = response_timeout_default_app.test_client.get("/1")
- assert response.status == 503
- assert "Response Timeout" in response.text
-
-
-response_handler_cancelled_app.flag = False
-
-
@response_handler_cancelled_app.exception(asyncio.CancelledError)
def handler_cancelled(request, exception):
# If we get a CancelledError, it means sanic has already sent a response,
# we should not ever have to handle a CancelledError.
- response_handler_cancelled_app.flag = True
+ response_handler_cancelled_app.ctx.flag = True
return text("App received CancelledError!", 500)
# The client will never receive this response, because the socket
# is already closed when we get a CancelledError.
@@ -62,8 +53,44 @@ async def handler_3(request):
return text("OK")
+def test_server_error_response_timeout():
+ request, response = response_timeout_app.test_client.get("/1")
+ assert response.status == 503
+ assert response.text == "Response Timeout from error_handler."
+
+
+def test_default_server_error_response_timeout():
+ request, response = response_timeout_default_app.test_client.get("/1")
+ assert response.status == 503
+ assert "Response Timeout" in response.text
+
+
def test_response_handler_cancelled():
request, response = response_handler_cancelled_app.test_client.get("/1")
assert response.status == 503
assert "Response Timeout" in response.text
- assert response_handler_cancelled_app.flag is False
+ assert response_handler_cancelled_app.ctx.flag is False
+
+
+def test_response_timeout_not_applied(caplog):
+ modified_config = LOGGING_CONFIG_DEFAULTS
+ modified_config["loggers"]["sanic.root"]["level"] = "DEBUG"
+
+ app = Sanic("test_logging", log_config=modified_config)
+ app.config.RESPONSE_TIMEOUT = 1
+ app.ctx.event = asyncio.Event()
+
+ @app.websocket("/ws")
+ async def ws_handler(request, ws):
+ sleep(2)
+ await asyncio.sleep(0)
+ request.app.ctx.event.set()
+
+ with caplog.at_level(logging.DEBUG):
+ _ = app.test_client.websocket("/ws")
+ assert app.ctx.event.is_set()
+ assert (
+ "sanic.root",
+ 10,
+ "Handling websocket. Timeouts disabled.",
+ ) in caplog.record_tuples
| websocket disconnect every minute and raise 503 error.
**Describe the bug**
The client sends a "ping" every 25 seconds.
On the server side, when a "ping" is received, a "pong" is sent back.
On the client side, after sending a "ping", it waits 15 seconds; if no "pong" is received, it reconnects.
The websocket disconnects about every minute.
**Code snippet**
server.py:
```python
from sanic import Sanic
app = Sanic("websocket_example")
@app.websocket('/wsserver')
async def feed(request, ws):
while True:
data = 'pong'
print('Sending: ' + data)
await ws.send(data)
data = await ws.recv()
print('Received: ' + data)
if __name__ == "__main__":
app.run(host="0.0.0.0", port=8001)
```
**Expected behavior**
The connection should be kept open!
**Environment (please complete the following information):**
- OS: ubuntu20.04
- Version : sanic 21.3.1
**Additional context**
on server side log:
```
[2021-03-22 06:28:15 +0800] [3380798] [INFO] Goin' Fast @ http://0.0.0.0:8001
[2021-03-22 06:28:15 +0800] [3380798] [INFO] Starting worker [3380798]
Sending: pong
Received: ping
Sending: pong
Received: ping
Sending: pong
Received: ping
Sending: pong
[2021-03-22 06:29:35 +0800] - (sanic.access)[INFO][42.185.74.69:34866]: GET ws://42.3.27.246/wsserver 503 -1
Sending: pong
Received: ping
Sending: pong
Received: ping
Sending: pong
Received: ping
Sending: pong
[2021-03-22 06:30:52 +0800] - (sanic.access)[INFO][42.185.74.69:34911]: GET ws://42.3.27.246/wsserver 503 -1
Sending: pong
Received: ping
Sending: pong
Received: ping
Sending: pong
Received: ping
Sending: pong
[2021-03-22 06:32:09 +0800] - (sanic.access)[INFO][42.185.74.69:35021]: GET ws://42.3.27.246/wsserver 503 -1
```
| I have tested some different code and get the same error with all of it.
I have no idea now; please check whether it is a bug or whether my code is wrong.
Has anyone faced the same issue?
It is tough to say without seeing your client code as well. One of the first things that jumps out at me is that you have both the consumer (`ws.recv()`) and the producer (`ws.send()`) running consecutively. Therefore, each of those operations will block the other from happening. The handler will pause at each `await` until it can be resolved. Depending upon your client logic, that might cause the described behavior.
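One way to decouple the two directions is to run them as separate tasks inside the handler; this is only a sketch of the pattern, not a claim about the root cause of the report above:

```python
import asyncio

from sanic import Sanic

app = Sanic("ws_decoupled")


@app.websocket("/wsserver")
async def feed(request, ws):
    async def consumer():
        while True:
            print("Received:", await ws.recv())

    async def producer():
        while True:
            await ws.send("pong")
            await asyncio.sleep(25)

    await asyncio.gather(consumer(), producer())


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8001)
```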
Thanks for the reply.
I am almost certain it is a bug in the new Sanic versions 21.3.0 and 21.3.1.
On Sanic 20.12.0-20.12.3, the code works smoothly.
Please have a deeper look.
Can you provide your client code?
The client is very simple (a sketch of an equivalent client follows below):
1. every 25 seconds it sends a "ping";
2. after sending a "ping", it waits 15 seconds; if no "pong" is received, it reconnects.
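An assumed equivalent client using the `websockets` library (the reporter's actual client is not shown, so the details here are guesses):

```python
import asyncio

import websockets


async def client():
    while True:
        try:
            async with websockets.connect("ws://127.0.0.1:8001/wsserver") as ws:
                while True:
                    await ws.send("ping")
                    # Expect a "pong" within 15 seconds, otherwise reconnect.
                    await asyncio.wait_for(ws.recv(), timeout=15)
                    await asyncio.sleep(25)
        except (asyncio.TimeoutError, websockets.ConnectionClosed):
            continue


asyncio.run(client())
```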
The `ws.recv()` and `ws.send()` calls run consecutively, but that is copied from your official documentation.
Now I have added new logging around reconnection to show the bug clearly.
```python
@app.websocket("/wsserver")
async def feed(request, ws):
try:
logger.debug(f"the client re-connected!")
while True:
            #print("request:", request)
            #print("ws:", ws)
name = await ws.recv()
print(f"< {name}")
greeting = f"pong"
await ws.send(greeting)
print(f"> {greeting}")
except websockets.exceptions.ConnectionClosedOK:
logger.debug("/wsserver ConnectionClosedOK")
except websockets.exceptions.ConnectionClosedError:
logger.debug("/wsserver ConnectionClosedError")
    except sanic.websocket.ConnectionClosed as e:
logger.debug("/wsserver ConnectionClosed: " + str(e))
except websockets.exceptions.WebSocketException as e:
logger.debug("/wsserver WebSocketException: " + str(e))
except concurrent.futures.CancelledError:
logger.debug("/wsserver CancelledError")
except BaseException as e:
logger.debug("/wsserver BaseException: " + str(type(e)))
```
```
[2021-03-22 13:07:35 +0800] [3463351] [INFO] Goin' Fast @ http://0.0.0.0:8001
[2021-03-22 13:07:35 +0800] [3463351] [INFO] Starting worker [3463351]
[2021-03-22 13:07:40 +0800] [3463351] [DEBUG] the client re-connected!
< ping
> pong
< ping
> pong
< ping
> pong
[2021-03-22 13:08:42 +0800] [3463351] [DEBUG] /wsserver BaseException: <class 'asyncio.exceptions.CancelledError'>
[2021-03-22 13:08:43 +0800] - (sanic.access)[INFO][104.233.191.163:34936]: GET ws://42.3.27.246/wsserver 503 -1
[2021-03-22 13:08:43 +0800] [3463351] [ERROR] Connection lost before response written @ ('104.233.191.163', 34936) <Request: GET /wsserver>
[2021-03-22 13:08:54 +0800] [3463351] [DEBUG] the client re-connected!
[2021-03-22 13:09:05 +0800] [3463351] [DEBUG] /wsserver BaseException: <class 'asyncio.exceptions.CancelledError'>
[2021-03-22 13:09:05 +0800] [3463351] [ERROR] Connection lost before response written @ ('104.233.191.163', 35081) <Request: GET /wsserver>
[2021-03-22 13:09:06 +0800] [3463351] [DEBUG] the client re-connected!
[2021-03-22 13:09:17 +0800] [3463351] [DEBUG] /wsserver BaseException: <class 'asyncio.exceptions.CancelledError'>
[2021-03-22 13:09:17 +0800] [3463351] [ERROR] Connection lost before response written @ ('104.233.191.163', 35100) <Request: GET /wsserver>
[2021-03-22 13:09:17 +0800] [3463351] [DEBUG] the client re-connected!
[2021-03-22 13:09:28 +0800] [3463351] [DEBUG] /wsserver BaseException: <class 'asyncio.exceptions.CancelledError'>
[2021-03-22 13:09:28 +0800] [3463351] [ERROR] Connection lost before response written @ ('104.233.191.163', 35107) <Request: GET /wsserver>
```
**Notice: the new code works OK in version 20.12.3 too!** | 2021-03-22T08:34:39 |
sanic-org/sanic | 2,085 | sanic-org__sanic-2085 | [
"2080"
] | 4998fd54c00f6580fe999b0e1371a6470b5f046c | diff --git a/sanic/blueprints.py b/sanic/blueprints.py
--- a/sanic/blueprints.py
+++ b/sanic/blueprints.py
@@ -85,7 +85,11 @@ def __init__(
self.routes: List[Route] = []
self.statics: List[RouteHandler] = []
self.strict_slashes = strict_slashes
- self.url_prefix = url_prefix
+ self.url_prefix = (
+ url_prefix[:-1]
+ if url_prefix and url_prefix.endswith("/")
+ else url_prefix
+ )
self.version = version
self.websocket_routes: List[Route] = []
diff --git a/sanic/mixins/routes.py b/sanic/mixins/routes.py
--- a/sanic/mixins/routes.py
+++ b/sanic/mixins/routes.py
@@ -71,7 +71,7 @@ def route(
# Fix case where the user did not prefix the URL with a /
# and will probably get confused as to why it's not working
- if not uri.startswith("/"):
+ if not uri.startswith("/") and (uri or hasattr(self, "router")):
uri = "/" + uri
if strict_slashes is None:
| diff --git a/tests/test_routes.py b/tests/test_routes.py
--- a/tests/test_routes.py
+++ b/tests/test_routes.py
@@ -1175,3 +1175,59 @@ async def handler(request):
with pytest.raises(SanicException):
app.router.finalize()
+
+
+def test_routes_with_and_without_slash_definitions(app):
+ bar = Blueprint("bar", url_prefix="bar")
+ baz = Blueprint("baz", url_prefix="/baz")
+ fizz = Blueprint("fizz", url_prefix="fizz/")
+ buzz = Blueprint("buzz", url_prefix="/buzz/")
+
+ instances = (
+ (app, "foo"),
+ (bar, "bar"),
+ (baz, "baz"),
+ (fizz, "fizz"),
+ (buzz, "buzz"),
+ )
+
+ for instance, term in instances:
+ route = f"/{term}" if isinstance(instance, Sanic) else ""
+
+ @instance.get(route, strict_slashes=True)
+ def get_without(request):
+ return text(f"{term}_without")
+
+ @instance.get(f"{route}/", strict_slashes=True)
+ def get_with(request):
+ return text(f"{term}_with")
+
+ @instance.post(route, strict_slashes=True)
+ def post_without(request):
+ return text(f"{term}_without")
+
+ @instance.post(f"{route}/", strict_slashes=True)
+ def post_with(request):
+ return text(f"{term}_with")
+
+ app.blueprint(bar)
+ app.blueprint(baz)
+ app.blueprint(fizz)
+ app.blueprint(buzz)
+
+ for _, term in instances:
+ _, response = app.test_client.get(f"/{term}")
+ assert response.status == 200
+ assert response.text == f"{term}_without"
+
+ _, response = app.test_client.get(f"/{term}/")
+ assert response.status == 200
+ assert response.text == f"{term}_with"
+
+ _, response = app.test_client.post(f"/{term}")
+ assert response.status == 200
+ assert response.text == f"{term}_without"
+
+ _, response = app.test_client.post(f"/{term}/")
+ assert response.status == 200
+ assert response.text == f"{term}_with"
| bug: route without a trailing slash gets routed to route with a trailing slash, when the route is the top level of blueprint (maybe also app)
Version: 21.3.1
Description:
Setting two URLs in a blueprint with strict slashes, one for "" and one for "/", will raise a "route already registered" exception in Sanic 21.3.1.
In Sanic versions before 21.3 this was accepted.
Code to reproduce:
```python
from sanic import Sanic, Blueprint
from sanic.response import text
app = Sanic("test")
bp = Blueprint("test", url_prefix="test")
@bp.get("", strict_slashes=True)
def _(req):
return text("a")
@bp.get("/", strict_slashes=True) # causes exception on sanic 21.3
def _(req):
return text("b")
app.blueprint(bp)
app.run()
```
potential fix: https://github.com/sanic-org/sanic/pull/2079
| Probably needs a bugfix release since routes like `/api/example` will stop working in 21.3.1 when `strict_slashes` is set
Hi @argaen
I can see why routes `""` and `"/"` cause the bug, and I understand why you might have a niche requirement to have a route `""` with `strict_slashes=True`, and how your PR fixes that feature.
But I don't understand your second comment, how would a route like `"/api/example"` be affected by this bug?
And are you sure this is just Blueprints? Wouldn't this also affect app routes?
```python3
app = Sanic("test")
@app.get("", strict_slashes=True)
def _(req):
return text("a")
@app.get("/", strict_slashes=True) # <- does this cause exception?
def _(req):
return text("b")
```
Or is it something to do with how blueprints prepend `url_prefix`?
Like this example
```python
from sanic import Sanic, Blueprint
from sanic.response import text
app = Sanic("test")
bp = Blueprint("test", url_prefix="test")
@bp.get("", strict_slashes=True)
def _(req):
return text("a")
app.blueprint(bp)
app.run()
```
Old:
`http://localhost:8000/test` -> works
New:
`http://localhost:8000/test` --> does not work
The problem is the line illustrated in the fix, which turns "" into "/" when it's an empty string.
I'm not too sure about the logic behind the difference between the host root path with or without the slash, e.g.
localhost:8000 vs localhost:8000/
I don't know if there is a difference in networking terms, so I can't say whether it should affect the app when there is no url_prefix.
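For reference, the check being discussed is essentially the following (this is the pre-fix code shown in the `sanic/mixins/routes.py` hunk of the patch at the top of this record):
```python
# Fix case where the user did not prefix the URL with a /
if not uri.startswith("/"):
    uri = "/" + uri
```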
Ok, yep I absolutely see how this is an issue on a blueprint when the route is `""` and strict_slashes is `True`. But I'm still struggling to understand what you mean when you said it would affect a route like `"/api/example"`. Your follow-up example to demonstrate still uses route of `""`.
Putting that aside, it seems like we need a condition like:
```
IF route is on a blueprint AND strict_slashes is True AND bp has url_prefix AND route is "" THEN
leave route as ""
ELSE
route = "/"
```
I will have to do some more testing to get to a correct solution for this.
Ok sounds good, will leave it to sanic core devs to decide about how to handle it ...
Probably a poor example on my end, but basically what I meant was: if `"example"` was a blueprint and `"/api/example"` was the route, then it would be `example_blueprint.get("")`.
Feel free to close & replace my PR if it's inadequate; I just made it in case it would be a quick fix.
Thinking about it a bit more:
* If route is "" and route is on the App, change `""` to `"/"` even if `strict_slashes` is True. (because app-level route of "" is invalid)
* Or, if route is "" and route is on a Blueprint but bp `url_prefix` is (None or ""), then change `""` to `"/"`
* In all other cases, add "/" to the start of the route if it's not there, and if strict_slashes is False, make the route match both with and without a trailing slash.
@ahopkins is the guru for everything regarding the new router, I'll leave it to him.
But thanks for bringing this to our attention!
I am looking at this now. Looks like @ashleysommer has the right idea.
Basically, this should work as they should be functionally equivalent:
```python
app = Sanic(__name__)
bp = Blueprint("test", url_prefix="/bar")
@bp.get("", strict_slashes=True)
def bar(req):
return text("bar")
@bp.get("/", strict_slashes=True)
def Bar(req):
return text("Bar")
@app.get("/foo", strict_slashes=True)
def foo(request):
return text("foo")
@app.get("/foo/", strict_slashes=True)
def Foo(request):
return text("Foo")
```
Running some tests, but I think it is as simple as this:
```python
if not uri.startswith("/") and (uri or hasattr(self, "router")):
uri = "/" + uri
```
If there is a `uri`, we always want it to have a `/`. No other rules apply.
If uri is falsey (in this case it should only be `""`, I am fine with standard exceptions if you pass `None`), we always add it when the route is on the app instance, but never on the blueprint. We do not need to inspect `strict_slashes` or anything else here. All we care about is whether or not something else will be in front of this eventually or not.
---
Now, why am I using `hasattr(self, "router")`? I am not sure of the best way to handle this since the method is on the mixin because of circular imports. This seemed to me the most relevant property to test for. | 2021-03-22T21:48:20 |
sanic-org/sanic | 2,094 | sanic-org__sanic-2094 | [
"2067"
] | e21521f45c0b58bac619a9111fd47426e208bf08 | diff --git a/sanic/response.py b/sanic/response.py
--- a/sanic/response.py
+++ b/sanic/response.py
@@ -203,6 +203,9 @@ async def send(self, *args, **kwargs):
self.streaming_fn = None
await super().send(*args, **kwargs)
+ async def eof(self):
+ raise NotImplementedError
+
class HTTPResponse(BaseHTTPResponse):
"""
@@ -235,6 +238,9 @@ def __init__(
self.headers = Header(headers or {})
self._cookies = None
+ async def eof(self):
+ await self.send("", True)
+
def empty(
status=204, headers: Optional[Dict[str, str]] = None
| diff --git a/tests/test_response.py b/tests/test_response.py
--- a/tests/test_response.py
+++ b/tests/test_response.py
@@ -529,3 +529,19 @@ def handler(request):
request, response = app.test_client.get("/test")
assert response.content_type is None
assert response.body == b""
+
+
+def test_direct_response_stream(app):
+ @app.route("/")
+ async def test(request):
+ response = await request.respond(content_type="text/csv")
+ await response.send("foo,")
+ await response.send("bar")
+ await response.eof()
+ return response
+
+ _, response = app.test_client.get("/")
+ assert response.text == "foo,bar"
+ assert response.headers["Transfer-Encoding"] == "chunked"
+ assert response.headers["Content-Type"] == "text/csv"
+ assert "Content-Length" not in response.headers
| New style streaming route handler improvements
**Is your feature request related to a problem? Please describe.**
The new streaming API allows streaming responses in the route handler without having to use a callback:
```python
@app.route("/")
async def test(request):
response = await request.respond(content_type="text/csv")
await response.send("foo,")
await response.send("bar")
await response.send("", True)
return response
```
**Describe the solution you'd like**
A simpler method for closing the stream before returning, in place of `await response.send("", True)`:
```python
await response.eof()
```
Under the hood, `eof` should simply just be a convenience call to `send("", True)`.
| this should work in `sanic -> response -> BaseHTTPResponse`, right?
```
async def eof(self):
await self.send("", True)
```
@ahopkins
Yup. Eventually, we probably can merge `HTTPResponse` right into `BaseHTTPResponse`. Until we do that, the `eof()` should be in the same place as `send()` since it would simply be a shortcut.
@ajay1mg Are you working on a PR for this?
If yes, then add a check for `self.streaming_fn` inside `eof()`. If it exists, then we should raise an exception, since `eof` should only ever be called when there is no `streaming_fn`. And maybe it should be `NotImplementedError` on `StreamingHTTPResponse`.
I am working on the PR for this, but I am confused by your second comment. The `HTTPResponse` class doesn't have any attribute `streaming_fn`, nor does `BaseHTTPResponse`.
Sorry for not being clear. This is what I meant:
```python
class StreamingHTTPResponse(BaseHTTPResponse):
def eof(...):
raise NotImplementedError
```
This class should not have it. | 2021-03-29T16:32:24 |
sanic-org/sanic | 2,110 | sanic-org__sanic-2110 | [
"2106"
] | 53a571ec6c49a328f5920604d3dbfa38df57ee07 | diff --git a/sanic/request.py b/sanic/request.py
--- a/sanic/request.py
+++ b/sanic/request.py
@@ -378,9 +378,12 @@ def get_args(
:type errors: str
:return: RequestParameters
"""
- if not self.parsed_args[
- (keep_blank_values, strict_parsing, encoding, errors)
- ]:
+ if (
+ keep_blank_values,
+ strict_parsing,
+ encoding,
+ errors,
+ ) not in self.parsed_args:
if self.query_string:
self.parsed_args[
(keep_blank_values, strict_parsing, encoding, errors)
@@ -434,9 +437,12 @@ def get_query_args(
:type errors: str
:return: list
"""
- if not self.parsed_not_grouped_args[
- (keep_blank_values, strict_parsing, encoding, errors)
- ]:
+ if (
+ keep_blank_values,
+ strict_parsing,
+ encoding,
+ errors,
+ ) not in self.parsed_not_grouped_args:
if self.query_string:
self.parsed_not_grouped_args[
(keep_blank_values, strict_parsing, encoding, errors)
| diff --git a/tests/test_requests.py b/tests/test_requests.py
--- a/tests/test_requests.py
+++ b/tests/test_requests.py
@@ -291,6 +291,17 @@ async def handler(request):
assert request.args.getlist("test1") == ["1"]
assert request.args.get("test3", default="My value") == "My value"
+def test_popped_stays_popped(app):
+ @app.route("/")
+ async def handler(request):
+ return text("OK")
+
+ request, response = app.test_client.get(
+ "/", params=[("test1", "1")]
+ )
+
+ assert request.args.pop("test1") == ["1"]
+ assert "test1" not in request.args
@pytest.mark.asyncio
async def test_query_string_asgi(app):
| request.args.pop removes parameters inconsistently
Environment : 20.12.3, presumably other environments too
Code to reproduce:
send this request to this app
`GET http://localhost:8000/?hello=a&world=4`
```python
from sanic import Sanic
app = Sanic('test')
@app.get('/')
def _(request):
a = request.args.pop('hello')
b = request.args.pop('world')
c = request.args.pop('world')
    d = request.args.pop('world')  # crashes here
app.run()
```
Expected:
crashes at the second call of .pop('world'), or not at all
| I have no ideas about this since the RequestParameters seems to be a really straightforward wrapper around the `dict`, would be interested if anyone can reproduce it ...
Started to look at this a bit more; it seems that the problem is not with the implementation of RequestParameters, but rather somewhere around this:
`args = property(get_args)`
vs 18.12, where it still works as expected:
```
@property
def args(self):
if self.parsed_args is None:
if self.query_string:
self.parsed_args = RequestParameters(
parse_qs(self.query_string)
)
else:
self.parsed_args = RequestParameters()
return self.parsed_args
```
Ah I worked it out!
Because parsed_args is a defaultdict, once the request.args is exhausted, it will evaluate as False again, and then a fresh parsing of the query will occur. | 2021-04-10T20:15:12 |
sanic-org/sanic | 2,119 | sanic-org__sanic-2119 | [
"2115"
] | d16b9e5a020bf438f1da88847fa6ebbb0e672d98 | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -123,6 +123,8 @@ class Sanic(BaseSanic):
def __init__(
self,
name: str = None,
+ config: Optional[Config] = None,
+ ctx: Optional[Any] = None,
router: Optional[Router] = None,
signal_router: Optional[SignalRouter] = None,
error_handler: Optional[ErrorHandler] = None,
@@ -141,6 +143,12 @@ def __init__(
if configure_logging:
logging.config.dictConfig(log_config or LOGGING_CONFIG_DEFAULTS)
+ if config and (load_env is not True or env_prefix != SANIC_PREFIX):
+ raise SanicException(
+ "When instantiating Sanic with config, you cannot also pass "
+ "load_env or env_prefix"
+ )
+
self._asgi_client = None
self._blueprint_order: List[Blueprint] = []
self._test_client = None
@@ -148,9 +156,11 @@ def __init__(
self.asgi = False
self.auto_reload = False
self.blueprints: Dict[str, Blueprint] = {}
- self.config = Config(load_env=load_env, env_prefix=env_prefix)
+ self.config = config or Config(
+ load_env=load_env, env_prefix=env_prefix
+ )
self.configure_logging = configure_logging
- self.ctx = SimpleNamespace()
+ self.ctx = ctx or SimpleNamespace()
self.debug = None
self.error_handler = error_handler or ErrorHandler()
self.is_running = False
| diff --git a/tests/test_app.py b/tests/test_app.py
--- a/tests/test_app.py
+++ b/tests/test_app.py
@@ -9,6 +9,7 @@
import pytest
from sanic import Sanic
+from sanic.config import Config
from sanic.exceptions import SanicException
from sanic.response import text
@@ -412,3 +413,42 @@ class CustomSanic(Sanic):
pass
CustomSanic("test_subclass_initialisation")
+
+
+def test_bad_custom_config():
+ with pytest.raises(
+ SanicException,
+ match=(
+ "When instantiating Sanic with config, you cannot also pass "
+ "load_env or env_prefix"
+ ),
+ ):
+ Sanic("test", config=1, load_env=1)
+ with pytest.raises(
+ SanicException,
+ match=(
+ "When instantiating Sanic with config, you cannot also pass "
+ "load_env or env_prefix"
+ ),
+ ):
+ Sanic("test", config=1, env_prefix=1)
+
+
+def test_custom_config():
+ class CustomConfig(Config):
+ ...
+
+ config = CustomConfig()
+ app = Sanic("custom", config=config)
+
+ assert app.config == config
+
+
+def test_custom_context():
+ class CustomContext:
+ ...
+
+ ctx = CustomContext()
+ app = Sanic("custom", ctx=ctx)
+
+ assert app.ctx == ctx
| Allow an alternate configuration class or object to be passed to application objects
It is currently difficult to extend the `Config` class and have a `Sanic` instance actually use that configuration class throughout its entire lifecycle. This is because the `Sanic` class's `__init__` method is hard-coded to use `sanic.config.Config`. Anyone wishing to use a different class must do one of:
- Patch and replace `sanic.config.Config`.
- Re-implement `Sanic.__init__` in a sub-class, duplicating most of the base implementation.
- Assign a different value to `Sanic.config`.
The first solution is inelegant, and hard to do correctly, the second involves redundant code duplication, and the third means that users are only able to introduce custom behavior post-init.
A simple fix for this is to modify `Sanic.__init__` to allow passing in a custom configuration class. This is already done for the class used to represent requests, and users can also pass in custom router and error handler instances.
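A minimal sketch of the requested usage, assuming the `config` keyword argument that the patch above adds to `Sanic.__init__` (the same idea applies to the `ctx` argument for a custom context object):
```python
from sanic import Sanic
from sanic.config import Config


class MyConfig(Config):
    ...


config = MyConfig()
app = Sanic("custom-config-app", config=config)

# the instance keeps using the supplied configuration object
assert app.config == config
```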
| This was on my list of simple changes to add. Same thing for the context object. | 2021-04-17T23:45:17 |
sanic-org/sanic | 2,127 | sanic-org__sanic-2127 | [
"2070"
] | c543d19f8ae791f21bc40d049791459ea3377123 | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -585,7 +585,12 @@ def url_for(self, view_name: str, **kwargs):
# determine if the parameter supplied by the caller
# passes the test in the URL
if param_info.pattern:
- passes_pattern = param_info.pattern.match(supplied_param)
+ pattern = (
+ param_info.pattern[1]
+ if isinstance(param_info.pattern, tuple)
+ else param_info.pattern
+ )
+ passes_pattern = pattern.match(supplied_param)
if not passes_pattern:
if param_info.cast != str:
msg = (
@@ -593,13 +598,13 @@ def url_for(self, view_name: str, **kwargs):
f"for parameter `{param_info.name}` does "
"not match pattern for type "
f"`{param_info.cast.__name__}`: "
- f"{param_info.pattern.pattern}"
+ f"{pattern.pattern}"
)
else:
msg = (
f'Value "{supplied_param}" for parameter '
f"`{param_info.name}` does not satisfy "
- f"pattern {param_info.pattern.pattern}"
+ f"pattern {pattern.pattern}"
)
raise URLBuildError(msg)
@@ -740,17 +745,14 @@ async def handle_request(self, request: Request):
if response:
response = await request.respond(response)
- else:
+ elif not hasattr(handler, "is_websocket"):
response = request.stream.response # type: ignore
- # Make sure that response is finished / run StreamingHTTP callback
+ # Make sure that response is finished / run StreamingHTTP callback
if isinstance(response, BaseHTTPResponse):
await response.send(end_stream=True)
else:
- try:
- # Fastest method for checking if the property exists
- handler.is_websocket # type: ignore
- except AttributeError:
+ if not hasattr(handler, "is_websocket"):
raise ServerError(
f"Invalid response type {response!r} "
"(need HTTPResponse)"
@@ -777,6 +779,7 @@ async def _websocket_handler(
if self.asgi:
ws = request.transport.get_websocket_connection()
+ await ws.accept(subprotocols)
else:
protocol = request.transport.get_protocol()
protocol.app = self
diff --git a/sanic/asgi.py b/sanic/asgi.py
--- a/sanic/asgi.py
+++ b/sanic/asgi.py
@@ -140,7 +140,6 @@ async def create(
instance.ws = instance.transport.create_websocket_connection(
send, receive
)
- await instance.ws.accept()
else:
raise ServerError("Received unknown ASGI scope")
diff --git a/sanic/websocket.py b/sanic/websocket.py
--- a/sanic/websocket.py
+++ b/sanic/websocket.py
@@ -41,7 +41,7 @@ def __init__(
websocket_write_limit=2 ** 16,
websocket_ping_interval=20,
websocket_ping_timeout=20,
- **kwargs
+ **kwargs,
):
super().__init__(*args, **kwargs)
self.websocket = None
@@ -154,7 +154,7 @@ def __init__(
) -> None:
self._send = send
self._receive = receive
- self.subprotocols = subprotocols or []
+ self._subprotocols = subprotocols or []
async def send(self, data: Union[str, bytes], *args, **kwargs) -> None:
message: Dict[str, Union[str, bytes]] = {"type": "websocket.send"}
@@ -178,13 +178,28 @@ async def recv(self, *args, **kwargs) -> Optional[str]:
receive = recv
- async def accept(self) -> None:
+ async def accept(self, subprotocols: Optional[List[str]] = None) -> None:
+ subprotocol = None
+ if subprotocols:
+ for subp in subprotocols:
+ if subp in self.subprotocols:
+ subprotocol = subp
+ break
+
await self._send(
{
"type": "websocket.accept",
- "subprotocol": ",".join(list(self.subprotocols)),
+ "subprotocol": subprotocol,
}
)
async def close(self) -> None:
pass
+
+ @property
+ def subprotocols(self):
+ return self._subprotocols
+
+ @subprotocols.setter
+ def subprotocols(self, subprotocols: Optional[List[str]] = None):
+ self._subprotocols = subprotocols or []
| diff --git a/tests/test_asgi.py b/tests/test_asgi.py
--- a/tests/test_asgi.py
+++ b/tests/test_asgi.py
@@ -218,7 +218,7 @@ async def test_websocket_accept_with_no_subprotocols(
message = message_stack.popleft()
assert message["type"] == "websocket.accept"
- assert message["subprotocol"] == ""
+ assert message["subprotocol"] is None
assert "bytes" not in message
@@ -227,7 +227,7 @@ async def test_websocket_accept_with_subprotocol(send, receive, message_stack):
subprotocols = ["graphql-ws"]
ws = WebSocketConnection(send, receive, subprotocols)
- await ws.accept()
+ await ws.accept(subprotocols)
assert len(message_stack) == 1
@@ -244,13 +244,13 @@ async def test_websocket_accept_with_multiple_subprotocols(
subprotocols = ["graphql-ws", "hello", "world"]
ws = WebSocketConnection(send, receive, subprotocols)
- await ws.accept()
+ await ws.accept(["hello", "world"])
assert len(message_stack) == 1
message = message_stack.popleft()
assert message["type"] == "websocket.accept"
- assert message["subprotocol"] == "graphql-ws,hello,world"
+ assert message["subprotocol"] == "hello"
assert "bytes" not in message
| Sanic WebSockets not working in ASGI mode with daphne as a server
Backend code:
```
from sanic import Sanic
app = Sanic("bug")
@app.websocket("/ws/")
async def handler(request, ws):
await ws.accept()
while True:
data = "Hello"
print("Sending:", data)
await ws.send(data)
data = await ws.recv()
print("Received: ", data)
```
Ran with `daphne -e tcp:80:interface=0.0.0.0 bug:app`
Frontend code:
```
socket = new WebSocket("ws://localhost:80/ws/", []);
socket.onopen = function(e) {
console.log("Sending to server");
socket.send("My name is John");
};
socket.onmessage = function(event) {
console.log(`[message] Data received from server: ${event.data}`);
};
socket.onclose = function(event) {
if (event.wasClean) {
console.log(`[close] Connection closed cleanly, code=${event.code} reason=${event.reason}`);
} else {
// e.g. server process killed or network down
// event.code is usually 1006 in this case
console.log('[close] Connection died');
}
};
socket.onerror = function(error) {
console.log(`[error] ${error.message}`);
};
```
Expected behavior is for the sockets to connect and exchange messages.
Observed behavior is that the backend crashes with `AttributeError: 'WebSocketProtocol' object has no attribute 'handshake_deferred'`.
Removing the `await ws.accept()` line causes backend to crash with `ValueError: Socket has not been accepted, so cannot send over it`.
Switching the order of send and recv prevents the crash, but no messages are received nor sent.
In all cases the frontend disconnects with the message `[close] Connection died`.
OS: Win10
sanic==20.12.2
daphne==3.0.1
| I've reported the same bug in daphne since I can't determine whose fault it is. https://github.com/django/daphne/issues/360
I set up your test as above, eliminating the await and running directly as a sanic app with app.run()
Sanic App:
```
pipenv run python test.py
[2021-03-21 08:03:44 -0500] [4076543] [INFO] Goin' Fast @ http://127.0.0.1:8000
[2021-03-21 08:03:44 -0500] [4076543] [INFO] Starting worker [4076543]
Sending: Hello
Received: My name is John
Sending: Hello
```
DevTools:
```
Sending to server
2(index):11 [message] Data received from server: Hello
(index):16 [close] Connection closed cleanly, code=1000 reason=
```
I tested with uvicorn and it looks like this may be a problem with the ASGI response
```
[2021-03-21 08:09:51 -0500] [4086237] [ERROR] Exception occurred while handling uri: 'ws://localhost:8000/ws'
Traceback (most recent call last):
File "/opt/ssadowski/.local/share/virtualenvs/2070-fKWZDJFT/lib/python3.8/site-packages/sanic/app.py", line 729, in handle_request
response = request.stream.response # type: ignore
AttributeError: 'ASGIApp' object has no attribute 'response'
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/opt/ssadowski/.local/share/virtualenvs/2070-fKWZDJFT/lib/python3.8/site-packages/sanic/app.py", line 729, in handle_request
response = request.stream.response # type: ignore
AttributeError: 'ASGIApp' object has no attribute 'response'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/ssadowski/.local/share/virtualenvs/2070-fKWZDJFT/lib/python3.8/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 162, in run_asgi
result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
File "/opt/ssadowski/.local/share/virtualenvs/2070-fKWZDJFT/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
return await self.app(scope, receive, send)
File "/opt/ssadowski/.local/share/virtualenvs/2070-fKWZDJFT/lib/python3.8/site-packages/sanic/app.py", line 1214, in __call__
await asgi_app()
File "/opt/ssadowski/.local/share/virtualenvs/2070-fKWZDJFT/lib/python3.8/site-packages/sanic/asgi.py", line 209, in __call__
await self.sanic_app.handle_request(self.request)
File "/opt/ssadowski/.local/share/virtualenvs/2070-fKWZDJFT/lib/python3.8/site-packages/sanic/app.py", line 748, in handle_request
await self.handle_exception(request, e)
File "/opt/ssadowski/.local/share/virtualenvs/2070-fKWZDJFT/lib/python3.8/site-packages/sanic/app.py", line 655, in handle_exception
await response.send(end_stream=True)
File "/opt/ssadowski/.local/share/virtualenvs/2070-fKWZDJFT/lib/python3.8/site-packages/sanic/response.py", line 122, in send
await self.stream.send(data, end_stream=end_stream)
File "/opt/ssadowski/.local/share/virtualenvs/2070-fKWZDJFT/lib/python3.8/site-packages/sanic/asgi.py", line 185, in send
await self.transport.send(
File "/opt/ssadowski/.local/share/virtualenvs/2070-fKWZDJFT/lib/python3.8/site-packages/sanic/models/asgi.py", line 92, in send
await self._send(data)
File "/opt/ssadowski/.local/share/virtualenvs/2070-fKWZDJFT/lib/python3.8/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 235, in asgi_send
raise RuntimeError(msg % message_type)
RuntimeError: Expected ASGI message 'websocket.send' or 'websocket.close', but got 'http.response.start'.
AttributeError: 'ASGIApp' object has no attribute 'response'
```
@sjsadowski This might be an issue with 21.3, but the original ticket says 20.12 so we might have two different things going on here.
@sim1234 Out of curiosity, have you tried the same with `uvicorn` or `hypercorn`?
above was 21.3, below is 20.12.3 w/ uvicorn:
sanic:
```
$ pipenv run uvicorn --host 127.0.0.1 --port 8000 test:app
INFO: Started server process [4132344]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: ('127.0.0.1', 57078) - "WebSocket /ws" [accepted]
Sending: Hello
Received: None
Sending: Hello
```
browser:
```
WebSocket connection to 'ws://localhost:8000/ws' failed:
(anonymous) @ (index):3
(index):25 [error] undefined
(index):20 [close] Connection died
```
Hi!
I tested uvicorn, hypercorn and daphne.
# environments
1. hardware:
MacBook Pro (13-inch, M1, 2020)
2. Python:
3.9
3. PythonPackage:
- Sanic==21.3.2
- sanic-routing==0.4.0
- uvicorn==0.13.4
- Hypercorn==0.11.2
- daphne==3.0.1
- websockets==8.1
- wsproto==1.0.0
# before test
1. shut subprotocol down in websockets
It seems that in ASGI mode, the websockets `subprotocol` handling is not well supported, so I made a small change before testing.
```python
# sanic.websocket.WebSocketConnection
async def accept(self) -> None:
await self._send(
{
"type": "websocket.accept",
# "subprotocol": ",".join(list(self.subprotocols)),
}
)
```
The joined `subprotocol` string causes breakage in `uvicorn + wsproto`, `hypercorn`, and `daphne` (see the note after this list).
2. Comment out the following code
```python
# app.py line 726
if response:
response = await request.respond(response)
# else:
# response = request.stream.response # type: ignore
```
When sending a websocket request, an exception is raised: `AttributeError: 'ASGIApp' object has no attribute 'response'`. It looks like `request.stream` is a `sanic.asgi.ASGIApp()` in ASGI mode, and the ASGI object doesn't have a `response` attribute.
This code may cause exceptions.
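For reference, the merged patch at the top of this record addresses the subprotocol problem from item 1 by negotiating a single matching subprotocol instead of joining them all; roughly:
```python
async def accept(self, subprotocols=None):
    # pick the first handler-requested subprotocol that the client offered
    subprotocol = None
    if subprotocols:
        for subp in subprotocols:
            if subp in self.subprotocols:
                subprotocol = subp
                break

    await self._send(
        {
            "type": "websocket.accept",
            "subprotocol": subprotocol,
        }
    )
```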
# conclusion
## uvicorn
uvicorn uses websockets and wsproto as its websocket protocol implementations, so:
1. uvicorn + websockets: broken, can't establish a websocket connection (didn't find any solution yet; uvicorn can't get along with this version of websockets). 😟
2. uvicorn + wsproto: works fine. ๐
## daphne
1. All routes are broken (both HTTP and WS) 😟
traceback shown:
```sh
File ".../python3.9/site-packages/sanic_routing/router.py", line 63, in resolve
route, param_basket = self.find_route(
TypeError: 'NoneType' object is not callable
```
It seems `sanic-routing`'s `finalize()` is never called. After setting a breakpoint in `Sanic.__call__()` and calling `finalize()` manually in pdb, the router works.
The signal events should be checked when using daphne.
## hypercorn
1. works fine.๐
@ZinkLu This is awesome work. Thank you so much.
Glad to help. If there is anything I can PR, let me know. | 2021-04-26T18:31:15 |
sanic-org/sanic | 2,128 | sanic-org__sanic-2128 | [
"2121"
] | 5bb9aa0c2c8768b5cd7eafb3bd4dfb7cc999fff8 | diff --git a/sanic/handlers.py b/sanic/handlers.py
--- a/sanic/handlers.py
+++ b/sanic/handlers.py
@@ -25,7 +25,6 @@ class ErrorHandler:
handlers = None
cached_handlers = None
- _missing = object()
def __init__(self):
self.handlers = []
@@ -45,7 +44,9 @@ def add(self, exception, handler):
:return: None
"""
+ # self.handlers to be deprecated and removed in version 21.12
self.handlers.append((exception, handler))
+ self.cached_handlers[exception] = handler
def lookup(self, exception):
"""
@@ -61,14 +62,19 @@ def lookup(self, exception):
:return: Registered function if found ``None`` otherwise
"""
- handler = self.cached_handlers.get(type(exception), self._missing)
- if handler is self._missing:
- for exception_class, handler in self.handlers:
- if isinstance(exception, exception_class):
- self.cached_handlers[type(exception)] = handler
- return handler
- self.cached_handlers[type(exception)] = None
- handler = None
+ exception_class = type(exception)
+ if exception_class in self.cached_handlers:
+ return self.cached_handlers[exception_class]
+
+ for ancestor in type.mro(exception_class):
+ if ancestor in self.cached_handlers:
+ handler = self.cached_handlers[ancestor]
+ self.cached_handlers[exception_class] = handler
+ return handler
+ if ancestor is BaseException:
+ break
+ self.cached_handlers[exception_class] = None
+ handler = None
return handler
def response(self, request, exception):
| Unexpected behavior using a catch-all Blueprint exception handler
**Describe the bug**
Using a catch-all exception handler in a Blueprint might lead to unexpected behavior. For example:
```python
from sanic import Sanic, Blueprint, response
from sanic.exceptions import NotFound
error_handlers = Blueprint(__name__)
@error_handlers.exception(NotFound)
def not_found(request, exception):
return response.text("Not found", status=404)
@error_handlers.exception(Exception)
def unhandled_exceptions(request, exception):
return response.text("Unhandled exception", status=500)
app = Sanic("My Hello, world app")
app.blueprint(error_handlers)
@app.route("/")
async def test(request):
    return response.json({"hello": "world"})
if __name__ == '__main__':
app.run(debug=True)
```
One might think that `not_found` would handle all 404's, but that's not always the case; sometimes the `unhandled_exceptions` handler is used instead, and restarting the application will give "random" results.
From what I can see the underlying problem is this line: https://github.com/sanic-org/sanic/blob/main/sanic/handlers.py#L67.
Since all exceptions derive from `Exception`, they will return `True` here when compared against the `unhandled_exceptions` exception type `Exception`. So it's basically the order of `self.handlers` that determines which error handler is used (if there are multiple handlers registered for the same derived exception), since it returns early on the first match.
Also, the reason for "random" results between restarts seems to be that a `set` (undefined order) is used as the data structure for storing the registered exception handlers: https://github.com/sanic-org/sanic/blob/main/sanic/mixins/exceptions.py#L8 when using a Blueprint.
Previously in versions <21.x this used to be a `list` and the problem above could be "circumvented" by registering the catch-all exception handler last. This is also how the `app.error_handler` seems to be working and the workaround still works for normal application routes.
**Expected behavior**
The explicitly registered exception handler should take priority even though a catch-all handler is registered; the order in which the handlers were registered shouldn't matter. I would also expect the same behavior for both Blueprint and normal application routes.
**Environment**
- Version: 21.3.2
| Looks like the simplest solution would be to precache the handler:
```python
self.cached_handlers[exception] = handler
```
At first glance, I am not entirely sure why this was not done.
---
Digging deeper, it looks to me like a better solution would be either, at run time loop thru `mro` to find the nearest ancestor (and cache that). | 2021-04-26T19:03:15 |
|
sanic-org/sanic | 2,133 | sanic-org__sanic-2133 | [
"2122",
"2122"
] | 7c180376d64d0cdb35fd0d0b7aaee5646215d79a | diff --git a/sanic/mixins/routes.py b/sanic/mixins/routes.py
--- a/sanic/mixins/routes.py
+++ b/sanic/mixins/routes.py
@@ -160,7 +160,9 @@ def decorator(handler):
if apply:
self._apply_route(route)
- return route, handler
+ if static:
+ return route, handler
+ return handler
return decorator
diff --git a/sanic/signals.py b/sanic/signals.py
--- a/sanic/signals.py
+++ b/sanic/signals.py
@@ -48,7 +48,7 @@ def get( # type: ignore
f".{event}",
self.DEFAULT_METHOD,
self,
- {"__params__": {}},
+ {"__params__": {}, "__matches__": {}},
extra=extra,
)
except NotFound:
@@ -59,7 +59,13 @@ def get( # type: ignore
terms.append(extra)
raise NotFound(message % tuple(terms))
- params = param_basket.pop("__params__")
+ params = param_basket["__params__"]
+ if not params:
+ params = {
+ param.name: param_basket["__matches__"][idx]
+ for idx, param in group.params.items()
+ }
+
return group, [route.handler for route in group], params
async def _dispatch(
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -83,7 +83,7 @@ def open_local(paths, mode="r", encoding="utf8"):
uvloop = "uvloop>=0.5.3" + env_dependency
requirements = [
- "sanic-routing>=0.6.0",
+ "sanic-routing==0.7.0rc1",
"httptools>=0.0.10",
uvloop,
ujson,
| diff --git a/tests/test_named_routes.py b/tests/test_named_routes.py
--- a/tests/test_named_routes.py
+++ b/tests/test_named_routes.py
@@ -234,7 +234,7 @@ async def handler(request, name):
app.router.routes_all[
(
"folder",
- "<name>",
+ "<name:str>",
)
].name
== "app.route_dynamic"
@@ -369,7 +369,8 @@ async def handler(request, name):
app.add_route(handler, "/folder/<name>", name="route_dynamic")
assert (
- app.router.routes_all[("folder", "<name>")].name == "app.route_dynamic"
+ app.router.routes_all[("folder", "<name:str>")].name
+ == "app.route_dynamic"
)
assert app.url_for("route_dynamic", name="test") == "/folder/test"
with pytest.raises(URLBuildError):
diff --git a/tests/test_requests.py b/tests/test_requests.py
--- a/tests/test_requests.py
+++ b/tests/test_requests.py
@@ -2246,9 +2246,7 @@ async def delete(request, foo):
def test_handler_overload(app):
- @app.get(
- "/long/sub/route/param_a/<param_a:string>/param_b/<param_b:string>"
- )
+ @app.get("/long/sub/route/param_a/<param_a:str>/param_b/<param_b:str>")
@app.post("/long/sub/route/")
def handler(request, **kwargs):
return json(kwargs)
diff --git a/tests/test_routes.py b/tests/test_routes.py
--- a/tests/test_routes.py
+++ b/tests/test_routes.py
@@ -258,7 +258,7 @@ def handler2(request):
def test_route_invalid_parameter_syntax(app):
with pytest.raises(ValueError):
- @app.get("/get/<:string>", strict_slashes=True)
+ @app.get("/get/<:str>", strict_slashes=True)
def handler(request):
return text("OK")
@@ -478,7 +478,7 @@ async def handler(request, name):
def test_dynamic_route_string(app):
results = []
- @app.route("/folder/<name:string>")
+ @app.route("/folder/<name:str>")
async def handler(request, name):
results.append(name)
return text("OK")
@@ -513,7 +513,7 @@ async def handler(request, folder_id):
def test_dynamic_route_number(app):
results = []
- @app.route("/weight/<weight:number>")
+ @app.route("/weight/<weight:float>")
async def handler(request, weight):
results.append(weight)
return text("OK")
@@ -585,7 +585,6 @@ async def handler(request, path):
return text("OK")
app.router.finalize()
- print(app.router.find_route_src)
request, response = app.test_client.get("/path/1/info")
assert response.status == 200
@@ -824,7 +823,7 @@ async def handler(request, name):
results.append(name)
return text("OK")
- app.add_route(handler, "/folder/<name:string>")
+ app.add_route(handler, "/folder/<name:str>")
request, response = app.test_client.get("/folder/test123")
assert response.text == "OK"
@@ -860,7 +859,7 @@ async def handler(request, weight):
results.append(weight)
return text("OK")
- app.add_route(handler, "/weight/<weight:number>")
+ app.add_route(handler, "/weight/<weight:float>")
request, response = app.test_client.get("/weight/12345")
assert response.text == "OK"
@@ -1067,7 +1066,8 @@ async def ad_post(request, action):
return json({"action": action})
request, response = app.test_client.get("/ads/1234")
- assert response.status == 405
+ assert response.status == 200
+ assert response.json == {"ad_id": "1234"}
request, response = app.test_client.post("/ads/post")
assert response.status == 200
diff --git a/tests/test_url_building.py b/tests/test_url_building.py
--- a/tests/test_url_building.py
+++ b/tests/test_url_building.py
@@ -143,7 +143,7 @@ def fail(request):
COMPLEX_PARAM_URL = (
"/<foo:int>/<four_letter_string:[A-z]{4}>/"
- "<two_letter_string:[A-z]{2}>/<normal_string>/<some_number:number>"
+ "<two_letter_string:[A-z]{2}>/<normal_string>/<some_number:float>"
)
PASSING_KWARGS = {
"foo": 4,
@@ -168,7 +168,7 @@ def fail(request):
expected_error = (
r'Value "not_int" for parameter `foo` '
- r"does not match pattern for type `int`: ^-?\d+"
+ r"does not match pattern for type `int`: ^-?\d+$"
)
assert str(e.value) == expected_error
@@ -223,7 +223,7 @@ def fail(request):
@pytest.mark.parametrize("number", [3, -3, 13.123, -13.123])
def test_passes_with_negative_number_message(app, number):
- @app.route("path/<possibly_neg:number>/another-word")
+ @app.route("path/<possibly_neg:float>/another-word")
def good(request, possibly_neg):
assert isinstance(possibly_neg, (int, float))
return text(f"this should pass with `{possibly_neg}`")
| Handler gets modified after being wrapped by `app.route`
**Is your feature request related to a problem? Please describe.**
The handler gets modified after being wrapped by `app.route`:
```python
from sanic import Sanic
from sanic.response import text
app = Sanic("test")
@app.get("/")
def get(request):
"""
here is my doc
"""
return text("123")
print(get.__doc__)
print(get.__name__)
```
The original function has been changed into a tuple:
```sh
Built-in immutable sequence.
If no argument is given, the constructor returns an empty tuple.
If iterable is specified the tuple is initialized from iterable's items.
If the argument is a tuple, the return value is the same object.
Traceback (most recent call last):
File "/Users/zinklu/code/WebProject/SanicDebug/app.py", line 22, in <module>
print(get.__name__)
AttributeError: 'tuple' object has no attribute '__name__'
```
**Describe the solution you'd like**
`app.route` should not change the original handler:
```python
from sanic import Sanic
from sanic.response import text
app = Sanic("test")
@app.get("/")
def get(request):
"""
here is my doc
"""
return text("123")
print(get.__doc__)
print(get.__name__)
```
```sh
here is my doc
get
```
**Additional context**
**Additional context**
| 2021-05-12T06:15:04 |
|
sanic-org/sanic | 2,140 | sanic-org__sanic-2140 | [
"2087"
] | 16875b1f41e7665135caad5cbc542fc072af3809 | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -382,11 +382,19 @@ def dispatch(
condition=condition,
)
- def event(self, event: str, timeout: Optional[Union[int, float]] = None):
+ async def event(
+ self, event: str, timeout: Optional[Union[int, float]] = None
+ ):
signal = self.signal_router.name_index.get(event)
if not signal:
- raise NotFound("Could not find signal %s" % event)
- return wait_for(signal.ctx.event.wait(), timeout=timeout)
+ if self.config.EVENT_AUTOREGISTER:
+ self.signal_router.reset()
+ self.add_signal(None, event)
+ signal = self.signal_router.name_index.get(event)
+ self.signal_router.finalize()
+ else:
+ raise NotFound("Could not find signal %s" % event)
+ return await wait_for(signal.ctx.event.wait(), timeout=timeout)
def enable_websocket(self, enable=True):
"""Enable or disable the support for websocket.
diff --git a/sanic/config.py b/sanic/config.py
--- a/sanic/config.py
+++ b/sanic/config.py
@@ -16,28 +16,29 @@
"""
DEFAULT_CONFIG = {
- "REQUEST_MAX_SIZE": 100000000, # 100 megabytes
+ "ACCESS_LOG": True,
+ "EVENT_AUTOREGISTER": False,
+ "FALLBACK_ERROR_FORMAT": "html",
+ "FORWARDED_FOR_HEADER": "X-Forwarded-For",
+ "FORWARDED_SECRET": None,
+ "GRACEFUL_SHUTDOWN_TIMEOUT": 15.0, # 15 sec
+ "KEEP_ALIVE_TIMEOUT": 5, # 5 seconds
+ "KEEP_ALIVE": True,
+ "PROXIES_COUNT": None,
+ "REAL_IP_HEADER": None,
+ "REGISTER": True,
"REQUEST_BUFFER_QUEUE_SIZE": 100,
"REQUEST_BUFFER_SIZE": 65536, # 64 KiB
+ "REQUEST_ID_HEADER": "X-Request-ID",
+ "REQUEST_MAX_SIZE": 100000000, # 100 megabytes
"REQUEST_TIMEOUT": 60, # 60 seconds
"RESPONSE_TIMEOUT": 60, # 60 seconds
- "KEEP_ALIVE": True,
- "KEEP_ALIVE_TIMEOUT": 5, # 5 seconds
- "WEBSOCKET_MAX_SIZE": 2 ** 20, # 1 megabyte
"WEBSOCKET_MAX_QUEUE": 32,
+ "WEBSOCKET_MAX_SIZE": 2 ** 20, # 1 megabyte
+ "WEBSOCKET_PING_INTERVAL": 20,
+ "WEBSOCKET_PING_TIMEOUT": 20,
"WEBSOCKET_READ_LIMIT": 2 ** 16,
"WEBSOCKET_WRITE_LIMIT": 2 ** 16,
- "WEBSOCKET_PING_TIMEOUT": 20,
- "WEBSOCKET_PING_INTERVAL": 20,
- "GRACEFUL_SHUTDOWN_TIMEOUT": 15.0, # 15 sec
- "ACCESS_LOG": True,
- "FORWARDED_SECRET": None,
- "REAL_IP_HEADER": None,
- "PROXIES_COUNT": None,
- "FORWARDED_FOR_HEADER": "X-Forwarded-For",
- "REQUEST_ID_HEADER": "X-Request-ID",
- "FALLBACK_ERROR_FORMAT": "html",
- "REGISTER": True,
}
diff --git a/sanic/mixins/signals.py b/sanic/mixins/signals.py
--- a/sanic/mixins/signals.py
+++ b/sanic/mixins/signals.py
@@ -1,4 +1,4 @@
-from typing import Any, Callable, Dict, Set
+from typing import Any, Callable, Dict, Optional, Set
from sanic.models.futures import FutureSignal
from sanic.models.handler_types import SignalHandler
@@ -60,10 +60,16 @@ def decorator(handler: SignalHandler):
def add_signal(
self,
- handler,
+ handler: Optional[Callable[..., Any]],
event: str,
condition: Dict[str, Any] = None,
):
+ if not handler:
+
+ async def noop():
+ ...
+
+ handler = noop
self.signal(event=event, condition=condition)(handler)
return handler
| diff --git a/tests/test_signals.py b/tests/test_signals.py
--- a/tests/test_signals.py
+++ b/tests/test_signals.py
@@ -257,17 +257,60 @@ def sync_signal(amount):
assert counter == 0
-def test_event_not_exist(app):
[email protected]
+async def test_event_not_exist(app):
with pytest.raises(NotFound, match="Could not find signal does.not.exist"):
- app.event("does.not.exist")
+ await app.event("does.not.exist")
-def test_event_not_exist_on_bp(app):
[email protected]
+async def test_event_not_exist_on_bp(app):
bp = Blueprint("bp")
app.blueprint(bp)
with pytest.raises(NotFound, match="Could not find signal does.not.exist"):
- bp.event("does.not.exist")
+ await bp.event("does.not.exist")
+
+
[email protected]
+async def test_event_not_exist_with_autoregister(app):
+ app.config.EVENT_AUTOREGISTER = True
+ try:
+ await app.event("does.not.exist", timeout=0.1)
+ except asyncio.TimeoutError:
+ ...
+
+
[email protected]
+async def test_dispatch_signal_triggers_non_exist_event_with_autoregister(app):
+ @app.signal("some.stand.in")
+ async def signal_handler():
+ ...
+
+ app.config.EVENT_AUTOREGISTER = True
+ app_counter = 0
+ app.signal_router.finalize()
+
+ async def do_wait():
+ nonlocal app_counter
+ await app.event("foo.bar.baz")
+ app_counter += 1
+
+ fut = asyncio.ensure_future(do_wait())
+ await app.dispatch("foo.bar.baz")
+ await fut
+
+ assert app_counter == 1
+
+
[email protected]
+async def test_dispatch_not_exist(app):
+ @app.signal("do.something.start")
+ async def signal_handler():
+ ...
+
+ app.signal_router.finalize()
+ await app.dispatch("does.not.exist")
def test_event_on_bp_not_registered():
| Event registration
When defining a signal, an event is created:
```python
@app.signal("do.something.start")
async def signal_handler():
...
```
But, what if you want to dispatch that signal, and then wait for another event:
```python
@app.post("/trigger")
async def trigger(request):
await app.dispatch("do.something.start")
await app.event("do.something.complete")
return text("Done.")
```
Currently, this would fail because we never defined `do.something.complete`. And, I think this is correct. I do not think that `app.event` should start waiting on something that will never happen.
However, we might still want to dispatch `do.something.complete` without needing to have a signal handler. This usage would really facilitate intra-application messaging.
```python
@app.signal("do.something.start")
async def signal_handler():
await asyncio.sleep(2)
await app.dispatch("do.something.complete")
```
We need some sort of an API to register a signal with no handler. I think the solution is to allow `add_signal` to have `handler=None`.
```python
app.add_signal(None, "do.something.complete")
```
The easiest way to achieve that would likely be a change like this:
```python
def add_signal(
self,
handler: Optional[Callable[..., None]],
event: str,
condition: Dict[str, Any] = None,
):
if not handler:
handler = lambda: ...
self.signal(event=event, condition=condition)(handler)
return handler
```
| Hello. I'd like to give this a shot. Any specific place I should put this?
Awesome! ๐
Take a look in `./mixins/signals.py` | 2021-05-20T06:47:47 |
sanic-org/sanic | 2,150 | sanic-org__sanic-2150 | [
"2113"
] | 108a4a99c7e12561b35b62e26ecfc33c77db802d | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -420,7 +420,33 @@ def blueprint(self, blueprint, **options):
"""
if isinstance(blueprint, (list, tuple, BlueprintGroup)):
for item in blueprint:
- self.blueprint(item, **options)
+ params = {**options}
+ if isinstance(blueprint, BlueprintGroup):
+ if blueprint.url_prefix:
+ merge_from = [
+ options.get("url_prefix", ""),
+ blueprint.url_prefix,
+ ]
+ if not isinstance(item, BlueprintGroup):
+ merge_from.append(item.url_prefix or "")
+ merged_prefix = "/".join(
+ u.strip("/") for u in merge_from
+ ).rstrip("/")
+ params["url_prefix"] = f"/{merged_prefix}"
+
+ for _attr in ["version", "strict_slashes"]:
+ if getattr(item, _attr) is None:
+ params[_attr] = getattr(
+ blueprint, _attr
+ ) or options.get(_attr)
+ if item.version_prefix == "/v":
+ if blueprint.version_prefix == "/v":
+ params["version_prefix"] = options.get(
+ "version_prefix"
+ )
+ else:
+ params["version_prefix"] = blueprint.version_prefix
+ self.blueprint(item, **params)
return
if blueprint.name in self.blueprints:
assert self.blueprints[blueprint.name] is blueprint, (
diff --git a/sanic/blueprint_group.py b/sanic/blueprint_group.py
--- a/sanic/blueprint_group.py
+++ b/sanic/blueprint_group.py
@@ -1,8 +1,8 @@
+from __future__ import annotations
+
from collections.abc import MutableSequence
from typing import TYPE_CHECKING, List, Optional, Union
-import sanic
-
if TYPE_CHECKING:
from sanic.blueprints import Blueprint
@@ -97,7 +97,7 @@ def url_prefix(self) -> Optional[Union[int, str, float]]:
return self._url_prefix
@property
- def blueprints(self) -> List["sanic.Blueprint"]:
+ def blueprints(self) -> List[Blueprint]:
"""
Retrieve a list of all the available blueprints under this group.
@@ -187,37 +187,16 @@ def __len__(self) -> int:
"""
return len(self._blueprints)
- def _sanitize_blueprint(self, bp: "sanic.Blueprint") -> "sanic.Blueprint":
- """
- Sanitize the Blueprint Entity to override the Version and strict slash
- behaviors as required.
-
- :param bp: Sanic Blueprint entity Object
- :return: Modified Blueprint
- """
- if self._url_prefix:
- merged_prefix = "/".join(
- u.strip("/") for u in [self._url_prefix, bp.url_prefix or ""]
- ).rstrip("/")
- bp.url_prefix = f"/{merged_prefix}"
- for _attr in ["version", "strict_slashes"]:
- if getattr(bp, _attr) is None:
- setattr(bp, _attr, getattr(self, _attr))
- if bp.version_prefix == "/v":
- bp.version_prefix = self._version_prefix
-
- return bp
-
- def append(self, value: "sanic.Blueprint") -> None:
+ def append(self, value: Blueprint) -> None:
"""
The Abstract class `MutableSequence` leverages this append method to
perform the `BlueprintGroup.append` operation.
:param value: New `Blueprint` object.
:return: None
"""
- self._blueprints.append(self._sanitize_blueprint(bp=value))
+ self._blueprints.append(value)
- def insert(self, index: int, item: "sanic.Blueprint") -> None:
+ def insert(self, index: int, item: Blueprint) -> None:
"""
The Abstract class `MutableSequence` leverages this insert method to
perform the `BlueprintGroup.append` operation.
@@ -226,7 +205,7 @@ def insert(self, index: int, item: "sanic.Blueprint") -> None:
:param item: New `Blueprint` object.
:return: None
"""
- self._blueprints.insert(index, self._sanitize_blueprint(item))
+ self._blueprints.insert(index, item)
def middleware(self, *args, **kwargs):
"""
diff --git a/sanic/blueprints.py b/sanic/blueprints.py
--- a/sanic/blueprints.py
+++ b/sanic/blueprints.py
@@ -168,8 +168,6 @@ def chain(nested) -> Iterable[Blueprint]:
for i in nested:
if isinstance(i, (list, tuple)):
yield from chain(i)
- elif isinstance(i, BlueprintGroup):
- yield from i.blueprints
else:
yield i
@@ -196,6 +194,7 @@ def register(self, app, options):
self._apps.add(app)
url_prefix = options.get("url_prefix", self.url_prefix)
opt_version = options.get("version", None)
+ opt_strict_slashes = options.get("strict_slashes", None)
opt_version_prefix = options.get("version_prefix", self.version_prefix)
routes = []
@@ -220,18 +219,13 @@ def register(self, app, options):
version_prefix = prefix
break
- version = self.version
- for v in (future.version, opt_version, self.version):
- if v is not None:
- version = v
- break
-
- strict_slashes = (
- self.strict_slashes
- if future.strict_slashes is None
- and self.strict_slashes is not None
- else future.strict_slashes
+ version = self._extract_value(
+ future.version, opt_version, self.version
+ )
+ strict_slashes = self._extract_value(
+ future.strict_slashes, opt_strict_slashes, self.strict_slashes
)
+
name = app._generate_name(future.name)
apply_route = FutureRoute(
@@ -315,3 +309,12 @@ def event(self, event: str, timeout: Optional[Union[int, float]] = None):
return_when=asyncio.FIRST_COMPLETED,
timeout=timeout,
)
+
+ @staticmethod
+ def _extract_value(*values):
+ value = values[-1]
+ for v in values:
+ if v is not None:
+ value = v
+ break
+ return value
| diff --git a/tests/test_blueprint_group.py b/tests/test_blueprint_group.py
--- a/tests/test_blueprint_group.py
+++ b/tests/test_blueprint_group.py
@@ -200,7 +200,7 @@ def test_bp_group_as_nested_group():
blueprint_group_1 = Blueprint.group(
Blueprint.group(blueprint_1, blueprint_2)
)
- assert len(blueprint_group_1) == 2
+ assert len(blueprint_group_1) == 1
def test_blueprint_group_insert():
@@ -215,9 +215,29 @@ def test_blueprint_group_insert():
group.insert(0, blueprint_1)
group.insert(0, blueprint_2)
group.insert(0, blueprint_3)
- assert group.blueprints[1].strict_slashes is False
- assert group.blueprints[2].strict_slashes is True
- assert group.blueprints[0].url_prefix == "/test"
+
+ @blueprint_1.route("/")
+ def blueprint_1_default_route(request):
+ return text("BP1_OK")
+
+ @blueprint_2.route("/")
+ def blueprint_2_default_route(request):
+ return text("BP2_OK")
+
+ @blueprint_3.route("/")
+ def blueprint_3_default_route(request):
+ return text("BP3_OK")
+
+ app = Sanic("PropTest")
+ app.blueprint(group)
+ app.router.finalize()
+
+ routes = [(route.path, route.strict) for route in app.router.routes]
+
+ assert len(routes) == 3
+ assert ("v1/test/bp1/", True) in routes
+ assert ("v1.3/test/bp2", False) in routes
+ assert ("v1.3/test", False) in routes
def test_bp_group_properties():
@@ -231,19 +251,25 @@ def test_bp_group_properties():
url_prefix="/grouped",
strict_slashes=True,
)
+ primary = Blueprint.group(group, url_prefix="/primary")
- assert group.version_prefix == "/api/v"
- assert blueprint_1.version_prefix == "/api/v"
- assert blueprint_2.version_prefix == "/api/v"
+ @blueprint_1.route("/")
+ def blueprint_1_default_route(request):
+ return text("BP1_OK")
+
+ @blueprint_2.route("/")
+ def blueprint_2_default_route(request):
+ return text("BP2_OK")
- assert group.version == 1
- assert blueprint_1.version == 1
- assert blueprint_2.version == 1
+ app = Sanic("PropTest")
+ app.blueprint(group)
+ app.blueprint(primary)
+ app.router.finalize()
- assert group.strict_slashes
- assert blueprint_1.strict_slashes
- assert blueprint_2.strict_slashes
+ routes = [route.path for route in app.router.routes]
- assert group.url_prefix == "/grouped"
- assert blueprint_1.url_prefix == "/grouped/bp1"
- assert blueprint_2.url_prefix == "/grouped/bp2"
+ assert len(routes) == 4
+ assert "api/v1/grouped/bp1/" in routes
+ assert "api/v1/grouped/bp2/" in routes
+ assert "api/v1/primary/grouped/bp1" in routes
+ assert "api/v1/primary/grouped/bp2" in routes
| Reuse of blueprint groups causes duplicate route errors
**Describe the bug**
When adding a blueprint to multiple groups, the router seems to club `url_prefix`es together in an unexpected way, resulting in duplicate route errors.
```
Traceback (most recent call last):
File "/home/jraymond/Development/sanic-org/bugreport/bugreport/app.py", line 26, in <module>
app.blueprint(new)
File "/home/jraymond/Development/sanic-org/sanic/sanic/app.py", line 407, in blueprint
self.blueprint(item, **options)
File "/home/jraymond/Development/sanic-org/sanic/sanic/app.py", line 423, in blueprint
blueprint.register(self, options)
File "/home/jraymond/Development/sanic-org/sanic/sanic/blueprints.py", line 227, in register
route = app._apply_route(apply_route)
File "/home/jraymond/Development/sanic-org/sanic/sanic/app.py", line 337, in _apply_route
routes = self.router.add(**params)
File "/home/jraymond/Development/sanic-org/sanic/sanic/router.py", line 128, in add
route = super().add(**params) # type: ignore
File "/home/jraymond/.local/lib/python3.9/site-packages/sanic_routing/router.py", line 170, in add
route.add_handler(path, handler, method, requirements, overwrite)
File "/home/jraymond/.local/lib/python3.9/site-packages/sanic_routing/route.py", line 114, in add_handler
raise RouteExists(
sanic_routing.exceptions.RouteExists: Route already registered: api/new/v1/api/v1/hello [GET]
```
**Code snippet**
Minimal reproduction:
```python
from sanic import Blueprint, Sanic
from sanic.response import text
bp1 = Blueprint("bp1", url_prefix="/hello")
@bp1.get("/")
async def hello(_):
return text("hello")
bp2 = Blueprint("bp2", url_prefix="/goodbye")
@bp2.get("/")
async def goodbye(_):
return text("goodbye")
legacy = Blueprint.group([bp1, bp2], url_prefix="/api/v1")
new = Blueprint.group([bp1, bp2], url_prefix="/api/new/v1")
app = Sanic("bugreport")
app.blueprint(legacy)
app.blueprint(new)
if __name__ == "__main__":
app.run(host="0.0.0.0", port=1234, debug=True)
```
**Expected behavior**
Blueprint groups should be independent references to handlers:
```
GET: /api/new/v1/hello => "hello"
GET: /api/v1/hello => "hello"
```
**Environment (please complete the following information):**
- OS: Linux (Arch)
- Python: 3.9
- Version 21.3.2
| Problem appears [to be here](https://github.com/sanic-org/sanic/blob/e21521f45c0b58bac619a9111fd47426e208bf08/sanic/blueprint_group.py#L176). Not sure if there's an obvious fix, but the API for blueprint groups doesn't indicate that the op is destructive. Can try to come up with sth if nobody beats me to the punch
Thanks @jdraymon for the great bug report.
The 21.3 release included a new router and backend server, and along with those changes came some necessary modifications to the way url_prefixes work. Can you confirm that this code works correctly on Sanic v20.12?
I guess there is simply no test in the Sanic test suite that tests adding a blueprint to multiple bp groups.
@ashleysommer Running with version 20.12.3 produces a similar stack trace:
```
Traceback (most recent call last):
File "/home/jraymond/Development/sanic-org/bugreport/bugreport/app.py", line 26, in <module>
app.blueprint(new)
File "/home/jraymond/.cache/pypoetry/virtualenvs/bugreport-qBp1ZuVR-py3.9/lib/python3.9/site-packages/sanic/app.py", line 707, in blueprint
self.blueprint(item, **options)
File "/home/jraymond/.cache/pypoetry/virtualenvs/bugreport-qBp1ZuVR-py3.9/lib/python3.9/site-packages/sanic/app.py", line 717, in blueprint
blueprint.register(self, options)
File "/home/jraymond/.cache/pypoetry/virtualenvs/bugreport-qBp1ZuVR-py3.9/lib/python3.9/site-packages/sanic/blueprints.py", line 119, in register
_routes, _ = app.route(
File "/home/jraymond/.cache/pypoetry/virtualenvs/bugreport-qBp1ZuVR-py3.9/lib/python3.9/site-packages/sanic/app.py", line 211, in response
self.router.add(
File "/home/jraymond/.cache/pypoetry/virtualenvs/bugreport-qBp1ZuVR-py3.9/lib/python3.9/site-packages/sanic/router.py", line 158, in add
routes.append(self._add(uri, methods, handler, host, name))
File "/home/jraymond/.cache/pypoetry/virtualenvs/bugreport-qBp1ZuVR-py3.9/lib/python3.9/site-packages/sanic/router.py", line 317, in _add
route = merge_route(route, methods, handler)
File "/home/jraymond/.cache/pypoetry/virtualenvs/bugreport-qBp1ZuVR-py3.9/lib/python3.9/site-packages/sanic/router.py", line 267, in merge_route
raise RouteExists(
sanic.router.RouteExists: Route already registered: /api/new/v1/api/v1/hello/ [GET]
```
Walked the tree back a goodly long ways and it seems this has always been the behavior, looks like I'm just the first one to try and do this :slightly_smiling_face:
@jdraymon This is a good report, and I can confirm that this is a problem with the way in which we are applying the URLs. TBH, to solve this properly will require a bit of reworking of the groups. I do agree with @sjsadowski applying the `LTS` label here that we should fix this and then apply it backwards.
I propose that, as the `BlueprintGroup.append` solution, instead of making the change to the blueprint itself, we register it either on the group instance with a reference (less appealing) or on some property of the blueprint (`blueprint.alternatives`). Then, when the blueprint is added to the router, we simply register the handlers once for each alternative definition that was supplied.
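A minimal sketch of that idea, assuming a hypothetical `alternatives` list on the blueprint (none of these names are the real Sanic API):
```python
class BlueprintGroupSketch:
    """Illustrative only; not the real BlueprintGroup."""

    def __init__(self, *blueprints, url_prefix: str = ""):
        self.url_prefix = url_prefix
        self.blueprints = []
        for bp in blueprints:
            self.append(bp)

    def append(self, bp):
        # Record the composed prefix instead of rewriting bp.url_prefix,
        # which is what currently breaks reuse across groups.
        alternatives = getattr(bp, "alternatives", [])
        alternatives.append((self.url_prefix or "") + (bp.url_prefix or ""))
        bp.alternatives = alternatives
        self.blueprints.append(bp)
```
Registration would then add the handlers once per entry in `bp.alternatives` rather than once per (mutated) `url_prefix`.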
The other problem that the current methodology has (not reported in this issue) is that having blueprints with differing `version` or `strict_slashes` would result in weird and unexpected behaviors as they would override one another. | 2021-06-01T06:44:41 |
sanic-org/sanic | 2,154 | sanic-org__sanic-2154 | [
"2142"
] | 2c80571a8aba8885779a94d85032a89316f5c87b | diff --git a/sanic/websocket.py b/sanic/websocket.py
--- a/sanic/websocket.py
+++ b/sanic/websocket.py
@@ -14,9 +14,13 @@
ConnectionClosed,
InvalidHandshake,
WebSocketCommonProtocol,
- handshake,
)
+# Despite the "legacy" namespace, the primary maintainer of websockets
+# committed to maintaining backwards-compatibility until 2026 and will
+# consider extending it if sanic continues depending on this module.
+from websockets.legacy import handshake
+
from sanic.exceptions import InvalidUsage
from sanic.server import HttpProtocol
@@ -126,7 +130,9 @@ async def websocket_handshake(self, request, subprotocols=None):
ping_interval=self.websocket_ping_interval,
ping_timeout=self.websocket_ping_timeout,
)
- # Following two lines are required for websockets 8.x
+ # we use WebSocketCommonProtocol because we don't want the handshake
+ # logic from WebSocketServerProtocol; however, we must tell it that
+ # we're running on the server side
self.websocket.is_client = False
self.websocket.side = "server"
self.websocket.subprotocol = subprotocol
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -88,12 +88,12 @@ def open_local(paths, mode="r", encoding="utf8"):
uvloop,
ujson,
"aiofiles>=0.6.0",
- "websockets>=8.1,<9.0",
+ "websockets>=9.0",
"multidict>=5.0,<6.0",
]
tests_require = [
- "sanic-testing",
+ "sanic-testing>=0.6.0",
"pytest==5.2.1",
"multidict>=5.0,<6.0",
"gunicorn==20.0.4",
| Allow later websockets releases
**Describe the bug**
`websockets` is [pinned](https://github.com/sanic-org/sanic/blob/main/setup.py#L91
). The latest `websockets` is 9.1 and this release fixes an [authentication vulnerability](https://websockets.readthedocs.io/en/stable/changelog.html) which was introduced with 8.0.
**Expected behavior**
Allow to use `websockets>9`
**Environment (please complete the following information):**
- OS: probably all
- Version: current
**Additional context**
n/a
| Working on an implementation in #2000 that would resolve this.
Sanic isn't affected by the security vulnerability:
https://github.com/sanic-org/sanic/search?q=BasicAuthWebSocketServerProtocol
https://github.com/sanic-org/sanic/search?q=basic_auth_protocol_factory
Only downside of sticking with the old version: you'll get issues like this one.
Let me see if I can offer a PR to bump the dependency. | 2021-06-02T20:11:35 |
|
sanic-org/sanic | 2,155 | sanic-org__sanic-2155 | [
"2153"
] | a140c47195cf33ca691e0982abb84166b83abdcc | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -183,7 +183,6 @@ def __init__(
if register is not None:
self.config.REGISTER = register
-
if self.config.REGISTER:
self.__class__.register_app(self)
diff --git a/sanic/config.py b/sanic/config.py
--- a/sanic/config.py
+++ b/sanic/config.py
@@ -4,6 +4,8 @@
from typing import Any, Dict, Optional, Union
from warnings import warn
+from sanic.http import Http
+
from .utils import load_module_from_file_location, str_to_bool
@@ -28,6 +30,7 @@
"REAL_IP_HEADER": None,
"REGISTER": True,
"REQUEST_BUFFER_SIZE": 65536, # 64 KiB
+ "REQUEST_MAX_HEADER_SIZE": 8192, # 8 KiB, but cannot exceed 16384
"REQUEST_ID_HEADER": "X-Request-ID",
"REQUEST_MAX_SIZE": 100000000, # 100 megabytes
"REQUEST_TIMEOUT": 60, # 60 seconds
@@ -42,12 +45,36 @@
class Config(dict):
+ ACCESS_LOG: bool
+ EVENT_AUTOREGISTER: bool
+ FALLBACK_ERROR_FORMAT: str
+ FORWARDED_FOR_HEADER: str
+ FORWARDED_SECRET: Optional[str]
+ GRACEFUL_SHUTDOWN_TIMEOUT: float
+ KEEP_ALIVE_TIMEOUT: int
+ KEEP_ALIVE: bool
+ PROXIES_COUNT: Optional[int]
+ REAL_IP_HEADER: Optional[str]
+ REGISTER: bool
+ REQUEST_BUFFER_SIZE: int
+ REQUEST_MAX_HEADER_SIZE: int
+ REQUEST_ID_HEADER: str
+ REQUEST_MAX_SIZE: int
+ REQUEST_TIMEOUT: int
+ RESPONSE_TIMEOUT: int
+ WEBSOCKET_MAX_QUEUE: int
+ WEBSOCKET_MAX_SIZE: int
+ WEBSOCKET_PING_INTERVAL: int
+ WEBSOCKET_PING_TIMEOUT: int
+ WEBSOCKET_READ_LIMIT: int
+ WEBSOCKET_WRITE_LIMIT: int
+
def __init__(
self,
defaults: Dict[str, Union[str, bool, int, float, None]] = None,
load_env: Optional[Union[bool, str]] = True,
env_prefix: Optional[str] = SANIC_PREFIX,
- keep_alive: Optional[int] = None,
+ keep_alive: Optional[bool] = None,
):
defaults = defaults or {}
super().__init__({**DEFAULT_CONFIG, **defaults})
@@ -72,6 +99,8 @@ def __init__(
else:
self.load_environment_vars(SANIC_PREFIX)
+ self._configure_header_size()
+
def __getattr__(self, attr):
try:
return self[attr]
@@ -80,6 +109,19 @@ def __getattr__(self, attr):
def __setattr__(self, attr, value):
self[attr] = value
+ if attr in (
+ "REQUEST_MAX_HEADER_SIZE",
+ "REQUEST_BUFFER_SIZE",
+ "REQUEST_MAX_SIZE",
+ ):
+ self._configure_header_size()
+
+ def _configure_header_size(self):
+ Http.set_header_max_size(
+ self.REQUEST_MAX_HEADER_SIZE,
+ self.REQUEST_BUFFER_SIZE - 4096,
+ self.REQUEST_MAX_SIZE,
+ )
def load_environment_vars(self, prefix=SANIC_PREFIX):
"""
diff --git a/sanic/http.py b/sanic/http.py
--- a/sanic/http.py
+++ b/sanic/http.py
@@ -64,6 +64,9 @@ class Http:
:raises RuntimeError:
"""
+ HEADER_CEILING = 16_384
+ HEADER_MAX_SIZE = 0
+
__slots__ = [
"_send",
"_receive_more",
@@ -169,7 +172,6 @@ async def http1_request_header(self):
"""
Receive and parse request header into self.request.
"""
- HEADER_MAX_SIZE = min(8192, self.request_max_size)
# Receive until full header is in buffer
buf = self.recv_buffer
pos = 0
@@ -180,12 +182,12 @@ async def http1_request_header(self):
break
pos = max(0, len(buf) - 3)
- if pos >= HEADER_MAX_SIZE:
+ if pos >= self.HEADER_MAX_SIZE:
break
await self._receive_more()
- if pos >= HEADER_MAX_SIZE:
+ if pos >= self.HEADER_MAX_SIZE:
raise PayloadTooLarge("Request header exceeds the size limit")
# Parse header content
@@ -541,3 +543,10 @@ def respond(self, response: BaseHTTPResponse) -> BaseHTTPResponse:
@property
def send(self):
return self.response_func
+
+ @classmethod
+ def set_header_max_size(cls, *sizes: int):
+ cls.HEADER_MAX_SIZE = min(
+ *sizes,
+ cls.HEADER_CEILING,
+ )
| diff --git a/tests/test_headers.py b/tests/test_headers.py
--- a/tests/test_headers.py
+++ b/tests/test_headers.py
@@ -7,6 +7,13 @@
from sanic.http import Http
[email protected]
+def raised_ceiling():
+ Http.HEADER_CEILING = 32_768
+ yield
+ Http.HEADER_CEILING = 16_384
+
+
@pytest.mark.parametrize(
"input, expected",
[
@@ -76,15 +83,75 @@ async def _receive_more():
recv_buffer += b"123"
protocol = Mock()
+ Http.set_header_max_size(1)
http = Http(protocol)
http._receive_more = _receive_more
- http.request_max_size = 1
http.recv_buffer = recv_buffer
with pytest.raises(PayloadTooLarge):
await http.http1_request_header()
[email protected]
+async def test_header_size_increased_okay():
+ recv_buffer = bytearray()
+
+ async def _receive_more():
+ nonlocal recv_buffer
+ recv_buffer += b"123"
+
+ protocol = Mock()
+ Http.set_header_max_size(12_288)
+ http = Http(protocol)
+ http._receive_more = _receive_more
+ http.recv_buffer = recv_buffer
+
+ with pytest.raises(PayloadTooLarge):
+ await http.http1_request_header()
+
+ assert len(recv_buffer) == 12_291
+
+
[email protected]
+async def test_header_size_exceeded_maxed_out():
+ recv_buffer = bytearray()
+
+ async def _receive_more():
+ nonlocal recv_buffer
+ recv_buffer += b"123"
+
+ protocol = Mock()
+ Http.set_header_max_size(18_432)
+ http = Http(protocol)
+ http._receive_more = _receive_more
+ http.recv_buffer = recv_buffer
+
+ with pytest.raises(PayloadTooLarge):
+ await http.http1_request_header()
+
+ assert len(recv_buffer) == 16_389
+
+
[email protected]
+async def test_header_size_exceeded_raised_ceiling(raised_ceiling):
+ recv_buffer = bytearray()
+
+ async def _receive_more():
+ nonlocal recv_buffer
+ recv_buffer += b"123"
+
+ protocol = Mock()
+ http = Http(protocol)
+ Http.set_header_max_size(65_536)
+ http._receive_more = _receive_more
+ http.recv_buffer = recv_buffer
+
+ with pytest.raises(PayloadTooLarge):
+ await http.http1_request_header()
+
+ assert len(recv_buffer) == 32_772
+
+
def test_raw_headers(app):
app.route("/")(lambda _: text(""))
request, _ = app.test_client.get(
| Maximum request header size capped at 8k
**Describe the bug**
Sending a request with a large header to a Sanic server results in a 413 (Payload Too Large) response.
This happens regardless of the `request_max_size` setting.
If I understand the implementation correctly, the maximum HTTP header size is hardcoded to be 8k:
https://github.com/sanic-org/sanic/blob/main/sanic/http.py#L172
**Expected behavior**
I think maximum header size should be configurable and values > 8k should be allowed.
**Environment (please complete the following information):**
- OS: CentOS 7
- Version 21.3.4
The limit is in place to mitigate DoS attacks which may easily consume all server memory and crash the entire system (or at least Sanic processes, if the admin was careful to set suitable ulimits). If you need larger headers, you are probably doing something very wrong, and as a primary solution you should be looking into fixing your application (e.g. use a session token instead of cramming everything into cookies, or use browser localStorage). The header memory use expands a lot as Sanic parses the byte buffer into Python data structures, and with a large number of parallel requests this becomes a problem. Other software such as proxies and clients may also have their own limits that affect your application at some point.
That being said, it might still be a good idea to have it configurable for those special cases where huge headers cannot be avoided, especially in intranet services where DoS is not a concern. I believe this needs to stay at least a few kilobytes lower than `REQUEST_BUFFER_SIZE` (by default 64 KiB) to avoid deadlocks (need to check sanic/http.py).
Relevant discussion elsewhere:
* https://stackoverflow.com/questions/686217/maximum-on-http-header-values
* https://github.com/nodejs/node/issues/24692
Thanks for clarification. I believe there was no such limit in 20.12 and I was a bit surprised when I saw all those 413 errors after upgrading to 21.03. I agree that reducing the header size is the best approach and this is what I will try to do, although I believe this can sometimes be problematic, e.g. when integrating with an existing service which uses large headers.
I am okay allowing for some variance here, and in most cases am on board with letting the developer make the choice. However, I'd feel more comfortable having a max config value to guard against someone putting in too high a value. It's Python, so someone who really knew the consequences and had the use case could still monkey-patch around it. I believe some other servers out there do something similar. A default value that can be configured up to some maximum.
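With the patch above this is roughly what exists: a `REQUEST_MAX_HEADER_SIZE` setting whose effective value is capped by `Http.HEADER_CEILING` (and kept a few KiB below `REQUEST_BUFFER_SIZE`). A sketch of how it might be used; the sizes are illustrative, not recommendations:
```python
from sanic import Sanic
from sanic.http import Http

app = Sanic("BigHeaders")

# Raise the limit within the built-in 16 KiB ceiling.
app.config.REQUEST_MAX_HEADER_SIZE = 12_288

# Intranet-only escape hatch: lift the ceiling itself, then re-apply the
# config value so the new ceiling is picked up.
Http.HEADER_CEILING = 32_768
app.config.REQUEST_MAX_HEADER_SIZE = 24_576
```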
FWIW I've been using the approach of "design it so users who really want to change it can monkey-patch it" to tackle this request and it went well (no issues filed anymore): https://websockets.readthedocs.io/en/stable/security.html#other-limits | 2021-06-02T20:59:09 |
sanic-org/sanic | 2,167 | sanic-org__sanic-2167 | [
"2143"
] | 83c746ee5753e3385efecc2e03b685de216b8001 | diff --git a/sanic/__main__.py b/sanic/__main__.py
--- a/sanic/__main__.py
+++ b/sanic/__main__.py
@@ -1,7 +1,7 @@
import os
import sys
-from argparse import ArgumentParser, RawDescriptionHelpFormatter
+from argparse import ArgumentParser, RawTextHelpFormatter
from importlib import import_module
from typing import Any, Dict, Optional
@@ -17,7 +17,7 @@ class SanicArgumentParser(ArgumentParser):
def add_bool_arguments(self, *args, **kwargs):
group = self.add_mutually_exclusive_group()
group.add_argument(*args, action="store_true", **kwargs)
- kwargs["help"] = "no " + kwargs["help"]
+ kwargs["help"] = f"no {kwargs['help']}\n "
group.add_argument(
"--no-" + args[0][2:], *args[1:], action="store_false", **kwargs
)
@@ -27,7 +27,15 @@ def main():
parser = SanicArgumentParser(
prog="sanic",
description=BASE_LOGO,
- formatter_class=RawDescriptionHelpFormatter,
+ formatter_class=lambda prog: RawTextHelpFormatter(
+ prog, max_help_position=33
+ ),
+ )
+ parser.add_argument(
+ "-v",
+ "--version",
+ action="version",
+ version=f"Sanic {__version__}; Routing {__routing_version__}",
)
parser.add_argument(
"-H",
@@ -51,13 +59,24 @@ def main():
dest="unix",
type=str,
default="",
- help="location of unix socket",
+ help="location of unix socket\n ",
)
parser.add_argument(
"--cert", dest="cert", type=str, help="location of certificate for SSL"
)
parser.add_argument(
- "--key", dest="key", type=str, help="location of keyfile for SSL."
+ "--key", dest="key", type=str, help="location of keyfile for SSL\n "
+ )
+ parser.add_bool_arguments(
+ "--access-logs", dest="access_log", help="display access logs"
+ )
+ parser.add_argument(
+ "--factory",
+ action="store_true",
+ help=(
+ "Treat app as an application factory, "
+ "i.e. a () -> <Sanic app> callable\n "
+ ),
)
parser.add_argument(
"-w",
@@ -65,32 +84,23 @@ def main():
dest="workers",
type=int,
default=1,
- help="number of worker processes [default 1]",
+ help="number of worker processes [default 1]\n ",
)
parser.add_argument("-d", "--debug", dest="debug", action="store_true")
parser.add_argument(
"-r",
+ "--reload",
"--auto-reload",
dest="auto_reload",
action="store_true",
help="Watch source directory for file changes and reload on changes",
)
parser.add_argument(
- "--factory",
- action="store_true",
- help=(
- "Treat app as an application factory, "
- "i.e. a () -> <Sanic app> callable."
- ),
- )
- parser.add_argument(
- "-v",
- "--version",
- action="version",
- version=f"Sanic {__version__}; Routing {__routing_version__}",
- )
- parser.add_bool_arguments(
- "--access-logs", dest="access_log", help="display access logs"
+ "-R",
+ "--reload-dir",
+ dest="path",
+ action="append",
+ help="Extra directories to watch and reload on changes\n ",
)
parser.add_argument(
"module", help="path to your Sanic app. Example: path.to.server:app"
@@ -140,6 +150,17 @@ def main():
}
if args.auto_reload:
kwargs["auto_reload"] = True
+
+ if args.path:
+ if args.auto_reload or args.debug:
+ kwargs["reload_dir"] = args.path
+ else:
+ error_logger.warning(
+ "Ignoring '--reload-dir' since auto reloading was not "
+ "enabled. If you would like to watch directories for "
+ "changes, consider using --debug or --auto-reload."
+ )
+
app.run(**kwargs)
except ImportError as e:
if module_name.startswith(e.name):
diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -14,6 +14,7 @@
from collections import defaultdict, deque
from functools import partial
from inspect import isawaitable
+from pathlib import Path
from socket import socket
from ssl import Purpose, SSLContext, create_default_context
from traceback import format_exc
@@ -105,6 +106,7 @@ class Sanic(BaseSanic):
"name",
"named_request_middleware",
"named_response_middleware",
+ "reload_dirs",
"request_class",
"request_middleware",
"response_middleware",
@@ -168,6 +170,7 @@ def __init__(
self.listeners: Dict[str, List[ListenerType]] = defaultdict(list)
self.named_request_middleware: Dict[str, Deque[MiddlewareType]] = {}
self.named_response_middleware: Dict[str, Deque[MiddlewareType]] = {}
+ self.reload_dirs: Set[Path] = set()
self.request_class = request_class
self.request_middleware: Deque[MiddlewareType] = deque()
self.response_middleware: Deque[MiddlewareType] = deque()
@@ -389,7 +392,7 @@ async def event(
if self.config.EVENT_AUTOREGISTER:
self.signal_router.reset()
self.add_signal(None, event)
- signal = self.signal_router.name_index.get(event)
+ signal = self.signal_router.name_index[event]
self.signal_router.finalize()
else:
raise NotFound("Could not find signal %s" % event)
@@ -846,6 +849,7 @@ def run(
access_log: Optional[bool] = None,
unix: Optional[str] = None,
loop: None = None,
+ reload_dir: Optional[Union[List[str], str]] = None,
) -> None:
"""
Run the HTTP Server and listen until keyboard interrupt or term
@@ -880,6 +884,18 @@ def run(
:type unix: str
:return: Nothing
"""
+ if reload_dir:
+ if isinstance(reload_dir, str):
+ reload_dir = [reload_dir]
+
+ for directory in reload_dir:
+ direc = Path(directory)
+ if not direc.is_dir():
+ logger.warning(
+ f"Directory {directory} could not be located"
+ )
+ self.reload_dirs.add(Path(directory))
+
if loop is not None:
raise TypeError(
"loop is not a valid argument. To use an existing loop, "
diff --git a/sanic/reloader_helpers.py b/sanic/reloader_helpers.py
--- a/sanic/reloader_helpers.py
+++ b/sanic/reloader_helpers.py
@@ -1,3 +1,4 @@
+import itertools
import os
import signal
import subprocess
@@ -59,6 +60,20 @@ def restart_with_reloader():
)
+def _check_file(filename, mtimes):
+ need_reload = False
+
+ mtime = os.stat(filename).st_mtime
+ old_time = mtimes.get(filename)
+ if old_time is None:
+ mtimes[filename] = mtime
+ elif mtime > old_time:
+ mtimes[filename] = mtime
+ need_reload = True
+
+ return need_reload
+
+
def watchdog(sleep_interval, app):
"""Watch project files, restart worker process if a change happened.
@@ -85,17 +100,16 @@ def interrupt_self(*args):
while True:
need_reload = False
- for filename in _iter_module_files():
+ for filename in itertools.chain(
+ _iter_module_files(),
+ *(d.glob("**/*") for d in app.reload_dirs),
+ ):
try:
- mtime = os.stat(filename).st_mtime
+ check = _check_file(filename, mtimes)
except OSError:
continue
- old_time = mtimes.get(filename)
- if old_time is None:
- mtimes[filename] = mtime
- elif mtime > old_time:
- mtimes[filename] = mtime
+ if check:
need_reload = True
if need_reload:
| diff --git a/tests/test_cli.py b/tests/test_cli.py
--- a/tests/test_cli.py
+++ b/tests/test_cli.py
@@ -33,7 +33,7 @@ def capture(command):
"fake.server:app",
"fake.server:create_app()",
"fake.server.create_app()",
- )
+ ),
)
def test_server_run(appname):
command = ["sanic", appname]
diff --git a/tests/test_reloader.py b/tests/test_reloader.py
--- a/tests/test_reloader.py
+++ b/tests/test_reloader.py
@@ -23,6 +23,8 @@
except ImportError:
flags = 0
+TIMER_DELAY = 2
+
def terminate(proc):
if flags:
@@ -56,6 +58,40 @@ def complete(*args):
return text
+def write_json_config_app(filename, jsonfile, **runargs):
+ with open(filename, "w") as f:
+ f.write(
+ dedent(
+ f"""\
+ import os
+ from sanic import Sanic
+ import json
+
+ app = Sanic(__name__)
+ with open("{jsonfile}", "r") as f:
+ config = json.load(f)
+ app.config.update_config(config)
+
+ app.route("/")(lambda x: x)
+
+ @app.listener("after_server_start")
+ def complete(*args):
+ print("complete", os.getpid(), app.config.FOO)
+
+ if __name__ == "__main__":
+ app.run(**{runargs!r})
+ """
+ )
+ )
+
+
+def write_file(filename):
+ text = secrets.token_urlsafe()
+ with open(filename, "w") as f:
+ f.write(f"""{{"FOO": "{text}"}}""")
+ return text
+
+
def scanner(proc):
for line in proc.stdout:
line = line.decode().strip()
@@ -90,9 +126,10 @@ async def test_reloader_live(runargs, mode):
with TemporaryDirectory() as tmpdir:
filename = os.path.join(tmpdir, "reloader.py")
text = write_app(filename, **runargs)
- proc = Popen(argv[mode], cwd=tmpdir, stdout=PIPE, creationflags=flags)
+ command = argv[mode]
+ proc = Popen(command, cwd=tmpdir, stdout=PIPE, creationflags=flags)
try:
- timeout = Timer(5, terminate, [proc])
+ timeout = Timer(TIMER_DELAY, terminate, [proc])
timeout.start()
# Python apparently keeps using the old source sometimes if
# we don't sleep before rewrite (pycache timestamp problem?)
@@ -107,3 +144,40 @@ async def test_reloader_live(runargs, mode):
terminate(proc)
with suppress(TimeoutExpired):
proc.wait(timeout=3)
+
+
[email protected](
+ "runargs, mode",
+ [
+ (dict(port=42102, auto_reload=True), "script"),
+ (dict(port=42103, debug=True), "module"),
+ ({}, "sanic"),
+ ],
+)
+async def test_reloader_live_with_dir(runargs, mode):
+ with TemporaryDirectory() as tmpdir:
+ filename = os.path.join(tmpdir, "reloader.py")
+ config_file = os.path.join(tmpdir, "config.json")
+ runargs["reload_dir"] = tmpdir
+ write_json_config_app(filename, config_file, **runargs)
+ text = write_file(config_file)
+ command = argv[mode]
+ if mode == "sanic":
+ command += ["--reload-dir", tmpdir]
+ proc = Popen(command, cwd=tmpdir, stdout=PIPE, creationflags=flags)
+ try:
+ timeout = Timer(TIMER_DELAY, terminate, [proc])
+ timeout.start()
+ # Python apparently keeps using the old source sometimes if
+ # we don't sleep before rewrite (pycache timestamp problem?)
+ sleep(1)
+ line = scanner(proc)
+ assert text in next(line)
+ # Edit source code and try again
+ text = write_file(config_file)
+ assert text in next(line)
+ finally:
+ timeout.cancel()
+ terminate(proc)
+ with suppress(TimeoutExpired):
+ proc.wait(timeout=3)
| Auto reload resource files in a Sanic app
**Is your feature request related to a problem? Please describe.**
For a Sanic app in production, I use different configs for development, staging, and production. I use 3 YAML files to implement that. However, they are not tracked by auto_reload in the development environment. I would be happy if Sanic could restart when I change the config.
**Describe the solution you'd like**
This could be a helper API to register resources for auto_reload. importlib provides a semantic method [`importlib.resources.read_text`](https://docs.python.org/3/library/importlib.html#importlib.resources.read_text), which does little more than open(<file>, 'r').read(). It might be nice to hook into the readers from importlib.
**Additional context**
Example app structure:
```
awesomeapp
| - config
| - development.yaml
| - staging.yaml
| - production.yaml
| app.py
| __init__.py
```
Potential API signature:
`sanic.resources.read_text(package, resource, encoding='utf-8', errors='strict')`
| Can you explain to me the thought process on the `read_text`?
It seems like what you are really looking for is a way to tell sanic to reload for anything in `./config`.
Something like this?
```python
app.run(auto_reload=True, include_dir="./config")
```
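In the patch above this is spelled `reload_dir` (not `include_dir`) and is also exposed on the CLI; a short sketch assuming that final naming:
```python
# Run-argument form; accepts a single directory or a list of them.
app.run(port=8000, auto_reload=True, reload_dir="./config")
```
From the CLI the equivalent would be `sanic path.to.server:app --reload --reload-dir ./config` (the module path is just an example, and the flag only takes effect together with `--reload`/`--auto-reload` or `--debug`).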
You are right about this. Your solution is much simpler! | 2021-06-16T21:17:57 |
sanic-org/sanic | 2,170 | sanic-org__sanic-2170 | [
"2069"
] | 80fca9aef7ec0009f24f2257d7d75ef726fcd77a | diff --git a/sanic/views.py b/sanic/views.py
--- a/sanic/views.py
+++ b/sanic/views.py
@@ -1,9 +1,25 @@
-from typing import Any, Callable, List
+from __future__ import annotations
+
+from typing import (
+ TYPE_CHECKING,
+ Any,
+ Callable,
+ Iterable,
+ List,
+ Optional,
+ Union,
+)
+from warnings import warn
from sanic.constants import HTTP_METHODS
from sanic.exceptions import InvalidUsage
+if TYPE_CHECKING:
+ from sanic import Sanic
+ from sanic.blueprints import Blueprint
+
+
class HTTPMethodView:
"""Simple class based implementation of view for the sanic.
You should implement methods (get, post, put, patch, delete) for the class
@@ -40,6 +56,31 @@ def get(self, request, my_param_here, *args, **kwargs):
decorators: List[Callable[[Callable[..., Any]], Callable[..., Any]]] = []
+ def __init_subclass__(
+ cls,
+ attach: Optional[Union[Sanic, Blueprint]] = None,
+ uri: str = "",
+ methods: Iterable[str] = frozenset({"GET"}),
+ host: Optional[str] = None,
+ strict_slashes: Optional[bool] = None,
+ version: Optional[int] = None,
+ name: Optional[str] = None,
+ stream: bool = False,
+ version_prefix: str = "/v",
+ ) -> None:
+ if attach:
+ cls.attach(
+ attach,
+ uri=uri,
+ methods=methods,
+ host=host,
+ strict_slashes=strict_slashes,
+ version=version,
+ name=name,
+ stream=stream,
+ version_prefix=version_prefix,
+ )
+
def dispatch_request(self, request, *args, **kwargs):
handler = getattr(self, request.method.lower(), None)
return handler(request, *args, **kwargs)
@@ -65,6 +106,31 @@ def view(*args, **kwargs):
view.__name__ = cls.__name__
return view
+ @classmethod
+ def attach(
+ cls,
+ to: Union[Sanic, Blueprint],
+ uri: str,
+ methods: Iterable[str] = frozenset({"GET"}),
+ host: Optional[str] = None,
+ strict_slashes: Optional[bool] = None,
+ version: Optional[int] = None,
+ name: Optional[str] = None,
+ stream: bool = False,
+ version_prefix: str = "/v",
+ ) -> None:
+ to.add_route(
+ cls.as_view(),
+ uri=uri,
+ methods=methods,
+ host=host,
+ strict_slashes=strict_slashes,
+ version=version,
+ name=name,
+ stream=stream,
+ version_prefix=version_prefix,
+ )
+
def stream(func):
func.is_stream = True
@@ -91,6 +157,11 @@ class CompositionView:
def __init__(self):
self.handlers = {}
self.name = self.__class__.__name__
+ warn(
+ "CompositionView has been deprecated and will be removed in "
+ "v21.12. Please update your view to HTTPMethodView.",
+ DeprecationWarning,
+ )
def __name__(self):
return self.name
| diff --git a/tests/test_views.py b/tests/test_views.py
--- a/tests/test_views.py
+++ b/tests/test_views.py
@@ -77,6 +77,56 @@ def get(self, request):
assert response.text == "I am get method"
+def test_with_attach(app):
+ class DummyView(HTTPMethodView):
+ def get(self, request):
+ return text("I am get method")
+
+ DummyView.attach(app, "/")
+
+ request, response = app.test_client.get("/")
+
+ assert response.text == "I am get method"
+
+
+def test_with_sub_init(app):
+ class DummyView(HTTPMethodView, attach=app, uri="/"):
+ def get(self, request):
+ return text("I am get method")
+
+ request, response = app.test_client.get("/")
+
+ assert response.text == "I am get method"
+
+
+def test_with_attach_and_bp(app):
+ bp = Blueprint("test_text")
+
+ class DummyView(HTTPMethodView):
+ def get(self, request):
+ return text("I am get method")
+
+ DummyView.attach(bp, "/")
+
+ app.blueprint(bp)
+ request, response = app.test_client.get("/")
+
+ assert response.text == "I am get method"
+
+
+def test_with_sub_init_and_bp(app):
+ bp = Blueprint("test_text")
+
+ class DummyView(HTTPMethodView, attach=bp, uri="/"):
+ def get(self, request):
+ return text("I am get method")
+
+ app.blueprint(bp)
+ request, response = app.test_client.get("/")
+
+ assert response.text == "I am get method"
+
+
def test_with_bp_with_url_prefix(app):
bp = Blueprint("test_text", url_prefix="/test1")
@@ -218,15 +268,15 @@ def first(request):
assert response.status == 200
assert response.text == "first method"
- # response = view(request)
- # assert response.body.decode() == "first method"
+ response = view(request)
+ assert response.body.decode() == "first method"
- # if method in ["DELETE", "PATCH"]:
- # request, response = getattr(app.test_client, method.lower())("/")
- # assert response.text == "second method"
+ if method in ["DELETE", "PATCH"]:
+ request, response = getattr(app.test_client, method.lower())("/")
+ assert response.text == "second method"
- # response = view(request)
- # assert response.body.decode() == "second method"
+ response = view(request)
+ assert response.body.decode() == "second method"
@pytest.mark.parametrize("method", HTTP_METHODS)
@@ -244,3 +294,12 @@ def test_composition_view_rejects_invalid_methods(app, method):
if method in ["DELETE", "PATCH"]:
request, response = getattr(app.test_client, method.lower())("/")
assert response.status == 405
+
+
+def test_composition_view_deprecation():
+ message = (
+ "CompositionView has been deprecated and will be removed in v21.12. "
+ "Please update your view to HTTPMethodView."
+ )
+ with pytest.warns(DeprecationWarning, match=message):
+ CompositionView()
| deprecate CompositionView ?
Currently sanic offers a class called `CompositionView`
I really am struggling to find any utility in this class, since
```python
from sanic.views import CompositionView
def get_handler(request):
return text("I am a get method")
view = CompositionView()
view.add(["GET"], get_handler)
view.add(["POST", "PUT"], lambda request: text("I am a post/put method"))
# Use the new view to handle requests to the base URL
app.add_route(view, "/")
```
Seems much more confusing to me than
```python
def get_handler(request):
return text("I am a get method")
app.route("/", methods=["GET"])(get_handler)
app.route("/", methods=["POST", "PUT"])(lambda request: text("I am a post/put method"))
```
Can anyone offer a compelling use case for CompositionView?
If not, I would suggest to deprecate it
https://github.com/sanic-org/sanic/blob/master/sanic/views.py
| For HTTPMethodView I can somehow see the utility, since everything is defined within the class, so you can have class methods that are common to the request types; but for CompositionView, where you don't implement a subclass, that value is lost
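For illustration (the names here are made up), the kind of sharing meant is a helper defined once on the class and reused by several verbs:
```python
from sanic import Sanic
from sanic.response import json
from sanic.views import HTTPMethodView

app = Sanic("Example")

class ItemView(HTTPMethodView):
    def serialize(self, item):  # shared by both handlers below
        return {"id": item}

    def get(self, request, item):
        return json(self.serialize(item))

    def post(self, request, item):
        return json(self.serialize(item), status=201)

app.add_route(ItemView.as_view(), "/item/<item>")
```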
Only have time to leave a quick note. I'll respond with some thoughts more fully later this weekend.
CompositionView is a PITA to support.
I think this is another one of those features that's been in Sanic from the start, and its stayed around for a long time, and its only still there because "its always been that way".
I was looking back trough some code history and it looks like CompositionView was implemented initially as a quick and easy way do something like a MethodView class. But real MethodViews were added not long after, and seems like CompositionView should be deprecated.
Thanks @ashleysommer for taking a look through that. I agree that this is a good example of another feature that Sanic has probably outgrown.
I agree with deprecation notice from 21.6 and removal in 21.12.
At the same time as this change, it could be an idea to provide HTTPMethodView with a method like
```python
def attach_routes_to_app(self, app, route_prefix):
# probably completely wrong syntax I am just making this up
for method, handler in self.methods.items():
app.route(handler, method, route_prefix)
```
And then in the future requiring people to use
`view.attach_routes_to_app(app, "/example")`
instead of
`app.add_route(view.as_view(), "/example")`
Would take care of the maintenance frustration @ahopkins mentioned, which also carries forward to sanic plugins/ addons like e.g. sanic-openapi
Not a bad idea. I am generally a fan, however, of using concise method names inside the framework where possible. It helps keep people from having to refer back to the documentation so much (of course, type annotations will help). I would suggest just `attach(...)`.
It also feels like it should be a class method:
```python
@classmethod
def attach(cls, instance: Union[Sanic, Blueprint], url_prefix: str):
```
Then...
```python
class MyView(HTTPMethodView):
...
MyView.attach(Sanic.get_app(), "/view")
```
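For reference, a sketch of the same idea with the signature the patch above settles on (explicit `uri`, plus an `__init_subclass__` shortcut); the blueprint and paths are just examples:
```python
from sanic import Blueprint
from sanic.response import text
from sanic.views import HTTPMethodView

bp = Blueprint("views")

class MyView(HTTPMethodView):
    def get(self, request):
        return text("ok")

MyView.attach(bp, "/view")  # classmethod form

class MyOtherView(HTTPMethodView, attach=bp, uri="/other"):  # definition-time form
    def get(self, request):
        return text("ok")
```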
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. If this is incorrect, please respond with an update. Thank you for your contributions.
| 2021-06-20T20:33:52 |
sanic-org/sanic | 2,181 | sanic-org__sanic-2181 | [
"2177"
] | 08a4b3013f796fc3e184514d4a434ea815102693 | diff --git a/sanic/exceptions.py b/sanic/exceptions.py
--- a/sanic/exceptions.py
+++ b/sanic/exceptions.py
@@ -31,6 +31,7 @@ class NotFound(SanicException):
"""
status_code = 404
+ quiet = True
class InvalidUsage(SanicException):
@@ -39,6 +40,7 @@ class InvalidUsage(SanicException):
"""
status_code = 400
+ quiet = True
class MethodNotSupported(SanicException):
@@ -47,6 +49,7 @@ class MethodNotSupported(SanicException):
"""
status_code = 405
+ quiet = True
def __init__(self, message, method, allowed_methods):
super().__init__(message)
@@ -70,6 +73,7 @@ class ServiceUnavailable(SanicException):
"""
status_code = 503
+ quiet = True
class URLBuildError(ServerError):
@@ -101,6 +105,7 @@ class RequestTimeout(SanicException):
"""
status_code = 408
+ quiet = True
class PayloadTooLarge(SanicException):
@@ -109,6 +114,7 @@ class PayloadTooLarge(SanicException):
"""
status_code = 413
+ quiet = True
class HeaderNotFound(InvalidUsage):
@@ -117,6 +123,7 @@ class HeaderNotFound(InvalidUsage):
"""
status_code = 400
+ quiet = True
class ContentRangeError(SanicException):
@@ -125,6 +132,7 @@ class ContentRangeError(SanicException):
"""
status_code = 416
+ quiet = True
def __init__(self, message, content_range):
super().__init__(message)
@@ -137,6 +145,7 @@ class HeaderExpectationFailed(SanicException):
"""
status_code = 417
+ quiet = True
class Forbidden(SanicException):
@@ -145,6 +154,7 @@ class Forbidden(SanicException):
"""
status_code = 403
+ quiet = True
class InvalidRangeType(ContentRangeError):
@@ -153,6 +163,7 @@ class InvalidRangeType(ContentRangeError):
"""
status_code = 416
+ quiet = True
class PyFileError(Exception):
@@ -196,6 +207,7 @@ class Unauthorized(SanicException):
"""
status_code = 401
+ quiet = True
def __init__(self, message, status_code=None, scheme=None, **kwargs):
super().__init__(message, status_code)
| diff --git a/tests/test_static.py b/tests/test_static.py
--- a/tests/test_static.py
+++ b/tests/test_static.py
@@ -471,7 +471,7 @@ def test_stack_trace_on_not_found(app, static_file_directory, caplog):
assert response.status == 404
assert counter[logging.INFO] == 5
- assert counter[logging.ERROR] == 1
+ assert counter[logging.ERROR] == 0
def test_no_stack_trace_on_not_found(app, static_file_directory, caplog):
| asyncio.exceptions.CancelledError On Windows OS
The following exception is raised when running the hello world example on Windows with the Microsoft Store version of Python 3.9.
The WSL2 environment does not reproduce the issue.
```
from sanic import Sanic
from sanic.response import json
app = Sanic("My Hello, world app")
@app.route('/')
async def test(request):
return json({'hello': 'world'})
if __name__ == '__main__':
app.run()
```
```
[2021-06-30 23:00:05 -0700] [12184] [INFO] Goin' Fast @ http://127.0.0.1:8000
[2021-06-30 23:00:06 -0700] [12184] [INFO] Starting worker [12184]
[2021-06-30 23:00:11 -0700] - (sanic.access)[INFO][127.0.0.1:53159]: GET http://127.0.0.1:8000/ 200 17
[2021-06-30 23:01:11 -0700] [12184] [ERROR] Exception occurred while handling uri: 'http:///*'
Traceback (most recent call last):
File "C:\Users\zhiwe\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\sanic\http.py", line 126, in http1
await self.http1_request_header()
File "C:\Users\zhiwe\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\sanic\http.py", line 188, in http1_request_header
await self._receive_more()
File "C:\Users\zhiwe\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\sanic\server.py", line 222, in receive_more
await self._data_received.wait()
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.1776.0_x64__qbz5n2kfra8p0\lib\asyncio\locks.py", line 226, in wait
await fut
asyncio.exceptions.CancelledError
[2021-06-30 23:01:11 -0700] - (sanic.access)[INFO][UNKNOWN]: NONE http:///* 408 664
```
| Not able to reproduce after a while, maybe it is a random error?
Looks like request timeout after 60 seconds. Did your client send a request and get a response OK? In that case, it was probably just an idling connection being terminated (but one ugly message for doing so, that still needs to be fixed).
If it were a request timeout, there should be `RequestTimeout` error.
Yes, something goes wrong in the error handling. But it happens after 60 seconds and logs a 408 Request Timeout status.
@Tronic I used browser to visit the endpoint, and it got the correct response. It seems not a timeout issue because the exception raised immediately after the request.
I would like to look into it while trying to understand how Python sanic is working with asyncio. Any suggestion about how to fix this? After reading the code in `server.py` and `http.py`, I think I got the basic concept of the data reading workflow, but I still don't understand why, how, and where the operation is cancelled...
```python
def check_timeouts(self):
"""
Runs itself periodically to enforce any expired timeouts.
"""
try:
if not self._task:
return
duration = current_time() - self._time
stage = self._http.stage
if stage is Stage.IDLE and duration > self.keep_alive_timeout:
logger.debug("KeepAlive Timeout. Closing connection.")
elif stage is Stage.REQUEST and duration > self.request_timeout:
logger.debug("Request Timeout. Closing connection.")
self._http.exception = RequestTimeout("Request Timeout")
elif stage is Stage.HANDLER and self._http.upgrade_websocket:
logger.debug("Handling websocket. Timeouts disabled.")
return
elif (
stage in (Stage.HANDLER, Stage.RESPONSE, Stage.FAILED)
and duration > self.response_timeout
):
logger.debug("Response Timeout. Closing connection.")
self._http.exception = ServiceUnavailable("Response Timeout")
else:
interval = (
min(
self.keep_alive_timeout,
self.request_timeout,
self.response_timeout,
)
/ 2
)
self.loop.call_later(max(0.1, interval), self.check_timeouts)
return
self._task.cancel()
except Exception:
error_logger.exception("protocol.check_timeouts")
```
server.py runs this background task, which periodically checks if any timeouts have expired and then cancels `self._task`, the task that handles the connection (the server.py function `connection_task`, which calls into http.py and other functions). The `connection_task` function is supposed to catch and ignore CancelledError, so I would look into why you still get an error message. From the backtrace it looks as if `http1_request_header` is catching and logging the error, which is probably unintentional.
Happy hacking!
> @Tronic I used browser to visit the endpoint, and it got the correct response. It seems not a timeout issue because the exception raised immediately after the request.
Did you visit it twice, or just at 23:00:11 where a 200 OK response is logged? The remaining log events at 23:01:11 suggest that no request or possibly an incomplete request was received, but that no valid responses were served. If there was an incomplete request, Sanic tries to respond with 408, otherwise it just closes the socket.
It looks like that a connection was left idling by the browser after the initial and successful request in case it would do more requests, and Sanic closing the connection would cause no error in browser. If this is the case, Sanic behaved correctly but it should not be logging that exception or the 408 access log event.
> > @Tronic I used browser to visit the endpoint, and it got the correct response. It seems not a timeout issue because the exception raised immediately after the request.
>
> Did you visit it twice, or just at 23:00:11 where a 200 OK response is logged? The remaining log events at 23:01:11 suggest that no request or possibly an incomplete request was received, but that no valid responses were served. If there was an incomplete request, Sanic tries to respond with 408, otherwise it just closes the socket.
>
> It looks like that a connection was left idling by the browser after the initial and successful request in case it would do more requests, and Sanic closing the connection would cause no error in browser. If this is the case, Sanic behaved correctly but it should not be logging that exception or the 408 access log event.
I tried it again several times and the error popup after 1 min, so I think you are right. Let's see if we can catch that exception so it will not be raised.
@Tronic I found that the error was logged because there is no error handler registered in the ErrorHandler object's `cached_handlers`. Do you think this is intentional behavior?
Ah, this might be related to another new feature with error handlers that recently got merged and released in 21.6 I think. Ping @ahopkins shouldn't custom error handlers be limited to `Exception` and not used `BaseException` (like the cancellation that occurs here)?
Ahh, that might make sense. I'm going to be pushing out a patch on Sunday so I will take a closer look at this.
Turns out the issue stems from #2077. The issue is related to how `quiet` is being set. This is probably worthy of a fix and release.
This is real easy to reproduce.
```python
@app.get("/")
async def handler(request):
await asyncio.sleep(3)
```
Kill the request before the 3 seconds.
> Ping @ahopkins shouldn't custom error handlers be limited to `Exception` and not used `BaseException` (like the cancellation that occurs here)?
It is just looping through the hierarchy and stopping when it gets to `Exception` (or `CancelledError`, etc).
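Roughly what that loop looks like, as a simplified sketch (not the exact `ErrorHandler.lookup` code):
```python
def lookup(cached_handlers, exception):
    # Walk the exception's class hierarchy until a registered handler
    # is found, stopping once BaseException has been checked.
    for ancestor in type(exception).__mro__:
        if ancestor in cached_handlers:
            return cached_handlers[ancestor]
        if ancestor is BaseException:
            break
    return None
```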
> Turns out the issue stems from #2077. The issue is related to how `quiet` is being set. This is probably worthy of a fix and release.
>
> This is real easy to reproduce.
>
> ```python
> @app.get("/")
> async def handler(request):
> await asyncio.sleep(3)
> ```
>
> Kill the request before the 3 seconds.
I think there is probably no `quiet` attr on the `CancelledError` object by default since it is not inherited from `SanicException`. Maybe we can consider replacing `CancelledError` with some other exception that is a subclass of `SanicException` in the first layer (http.py). | 2021-07-05T21:20:19 |
sanic-org/sanic | 2,183 | sanic-org__sanic-2183 | [
"2182"
] | 8b7ea27a48cdc1654724701d2df174d5b4d2cc70 | diff --git a/sanic/http.py b/sanic/http.py
--- a/sanic/http.py
+++ b/sanic/http.py
@@ -490,6 +490,9 @@ async def read(self) -> Optional[bytes]:
if size <= 0:
self.request_body = None
+ # Because we are leaving one CRLF in the buffer, we manually
+ # reset the buffer here
+ self.recv_buffer = bytearray()
if size < 0:
self.keep_alive = False
| Request streaming results in a phantom 503
When streaming a request body, you end up with a phantom 503 response. To the client, everything looks fine. The data is transmitted, and a response received OK.
```
[2021-07-05 22:45:47 +0300] - (sanic.access)[INFO][127.0.0.1:34264]: POST http://localhost:9999/upload 201 4
[2021-07-05 22:45:47 +0300] - (sanic.access)[INFO][127.0.0.1:34264]: POST http://localhost:9999/upload 503 666
[2021-07-05 22:45:47 +0300] [686804] [ERROR] Connection lost before response written @ ('127.0.0.1', 34264) <Request: POST /upload>
```
But, there is an extra 503 that is caused by a task cancel while waiting on `receive_more`. This appears to be caused by leaving one extra CRLF in the buffer.
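A minimal handler that exercises this path; this is an assumed reproduction, since the original handler is not shown (the `/upload` route simply matches the access log above):
```python
from sanic import Sanic
from sanic.response import text

app = Sanic("StreamRepro")

@app.post("/upload", stream=True)
async def upload(request):
    while True:
        chunk = await request.stream.read()
        if chunk is None:  # end of the request body
            break
    return text("created", status=201)
```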
| 2021-07-05T21:52:21 |
||
sanic-org/sanic | 2,208 | sanic-org__sanic-2208 | [
"2202"
] | 945885d501f993c4a7bb3045ca162ca441aae1ff | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -334,7 +334,11 @@ def register_named_middleware(
self.named_response_middleware[_rn].appendleft(middleware)
return middleware
- def _apply_exception_handler(self, handler: FutureException):
+ def _apply_exception_handler(
+ self,
+ handler: FutureException,
+ route_names: Optional[List[str]] = None,
+ ):
"""Decorate a function to be registered as a handler for exceptions
:param exceptions: exceptions
@@ -344,9 +348,9 @@ def _apply_exception_handler(self, handler: FutureException):
for exception in handler.exceptions:
if isinstance(exception, (tuple, list)):
for e in exception:
- self.error_handler.add(e, handler.handler)
+ self.error_handler.add(e, handler.handler, route_names)
else:
- self.error_handler.add(exception, handler.handler)
+ self.error_handler.add(exception, handler.handler, route_names)
return handler.handler
def _apply_listener(self, listener: FutureListener):
diff --git a/sanic/blueprints.py b/sanic/blueprints.py
--- a/sanic/blueprints.py
+++ b/sanic/blueprints.py
@@ -338,7 +338,9 @@ def register(self, app, options):
# Exceptions
for future in self._future_exceptions:
- exception_handlers.append(app._apply_exception_handler(future))
+ exception_handlers.append(
+ app._apply_exception_handler(future, route_names)
+ )
# Event listeners
for listener in self._future_listeners:
diff --git a/sanic/handlers.py b/sanic/handlers.py
--- a/sanic/handlers.py
+++ b/sanic/handlers.py
@@ -1,3 +1,5 @@
+from typing import List, Optional
+
from sanic.errorpages import exception_response
from sanic.exceptions import (
ContentRangeError,
@@ -21,15 +23,12 @@ class ErrorHandler:
"""
- handlers = None
- cached_handlers = None
-
def __init__(self):
self.handlers = []
self.cached_handlers = {}
self.debug = False
- def add(self, exception, handler):
+ def add(self, exception, handler, route_names: Optional[List[str]] = None):
"""
Add a new exception handler to an already existing handler object.
@@ -42,11 +41,16 @@ def add(self, exception, handler):
:return: None
"""
- # self.handlers to be deprecated and removed in version 21.12
+ # self.handlers is deprecated and will be removed in version 22.3
self.handlers.append((exception, handler))
- self.cached_handlers[exception] = handler
- def lookup(self, exception):
+ if route_names:
+ for route in route_names:
+ self.cached_handlers[(exception, route)] = handler
+ else:
+ self.cached_handlers[(exception, None)] = handler
+
+ def lookup(self, exception, route_name: Optional[str]):
"""
Lookup the existing instance of :class:`ErrorHandler` and fetch the
registered handler for a specific type of exception.
@@ -61,17 +65,26 @@ def lookup(self, exception):
:return: Registered function if found ``None`` otherwise
"""
exception_class = type(exception)
- if exception_class in self.cached_handlers:
- return self.cached_handlers[exception_class]
- for ancestor in type.mro(exception_class):
- if ancestor in self.cached_handlers:
- handler = self.cached_handlers[ancestor]
- self.cached_handlers[exception_class] = handler
+ for name in (route_name, None):
+ exception_key = (exception_class, name)
+ handler = self.cached_handlers.get(exception_key)
+ if handler:
return handler
- if ancestor is BaseException:
- break
- self.cached_handlers[exception_class] = None
+
+ for name in (route_name, None):
+ for ancestor in type.mro(exception_class):
+ exception_key = (ancestor, name)
+ if exception_key in self.cached_handlers:
+ handler = self.cached_handlers[exception_key]
+ self.cached_handlers[
+ (exception_class, route_name)
+ ] = handler
+ return handler
+
+ if ancestor is BaseException:
+ break
+ self.cached_handlers[(exception_class, route_name)] = None
handler = None
return handler
@@ -89,7 +102,8 @@ def response(self, request, exception):
:return: Wrap the return value obtained from :func:`default`
or registered handler for that type of exception.
"""
- handler = self.lookup(exception)
+ route_name = request.name if request else None
+ handler = self.lookup(exception, route_name)
response = None
try:
if handler:
| diff --git a/tests/test_exceptions_handler.py b/tests/test_exceptions_handler.py
--- a/tests/test_exceptions_handler.py
+++ b/tests/test_exceptions_handler.py
@@ -189,18 +189,24 @@ class ModuleNotFoundError(ImportError):
handler.add(CustomError, custom_error_handler)
handler.add(ServerError, server_error_handler)
- assert handler.lookup(ImportError()) == import_error_handler
- assert handler.lookup(ModuleNotFoundError()) == import_error_handler
- assert handler.lookup(CustomError()) == custom_error_handler
- assert handler.lookup(ServerError("Error")) == server_error_handler
- assert handler.lookup(CustomServerError("Error")) == server_error_handler
+ assert handler.lookup(ImportError(), None) == import_error_handler
+ assert handler.lookup(ModuleNotFoundError(), None) == import_error_handler
+ assert handler.lookup(CustomError(), None) == custom_error_handler
+ assert handler.lookup(ServerError("Error"), None) == server_error_handler
+ assert (
+ handler.lookup(CustomServerError("Error"), None)
+ == server_error_handler
+ )
# once again to ensure there is no caching bug
- assert handler.lookup(ImportError()) == import_error_handler
- assert handler.lookup(ModuleNotFoundError()) == import_error_handler
- assert handler.lookup(CustomError()) == custom_error_handler
- assert handler.lookup(ServerError("Error")) == server_error_handler
- assert handler.lookup(CustomServerError("Error")) == server_error_handler
+ assert handler.lookup(ImportError(), None) == import_error_handler
+ assert handler.lookup(ModuleNotFoundError(), None) == import_error_handler
+ assert handler.lookup(CustomError(), None) == custom_error_handler
+ assert handler.lookup(ServerError("Error"), None) == server_error_handler
+ assert (
+ handler.lookup(CustomServerError("Error"), None)
+ == server_error_handler
+ )
def test_exception_handler_processed_request_middleware():
| Cannot define a blueprint-specific exception handler
**Describe the bug**
The documentation at https://sanicframework.org/en/guide/best-practices/blueprints.html#exceptions says that:
> Just like other exception handling, you can define blueprint specific handlers.
However, exception handlers defined this way don't seem to be blueprint-specific at all. Instead, they handle exceptions in other blueprints as well.
**Code snippet**
```python
#!/usr/bin/env python3
from sanic import Sanic, Blueprint, response
class Error(Exception):
pass
handled = Blueprint("handled")
@handled.exception(Error)
def handle_error(req, e):
return response.text("handled {}".format(e))
b = Blueprint("b")
@b.route("/")
def e(request):
raise Error("error in e")
app = Sanic(__name__)
app.blueprint(handled)
app.blueprint(b)
app.run()
```
**Expected behavior**
A request to http://localhost:8000 should generate 500, because `Error` should not be handled.
Instead, the request generates 200, `Error` is handled, even though the handler is registered in `handled`, while the endpoint is in `b`.
**Environment (please complete the following information):**
- OS: ubuntu 19.10
- Version: Sanic 21.6.0; Routing 0.7.0
**Additional context**
The order of blueprint registration does not seem to change anything. Not registering blueprint `handled` obviously fixes the issue.
Is it a bug or am I missing something here?
| Sorry for not responding earlier. I am looking at this. See also #2121 | 2021-08-01T22:51:43 |
sanic-org/sanic | 2,211 | sanic-org__sanic-2211 | [
"2210"
] | 54ca6a6178798515c34116765d3dda192915cfce | diff --git a/sanic/__version__.py b/sanic/__version__.py
--- a/sanic/__version__.py
+++ b/sanic/__version__.py
@@ -1 +1 @@
-__version__ = "21.6.1"
+__version__ = "21.6.2"
diff --git a/sanic/asgi.py b/sanic/asgi.py
--- a/sanic/asgi.py
+++ b/sanic/asgi.py
@@ -207,4 +207,7 @@ async def __call__(self) -> None:
"""
Handle the incoming request.
"""
- await self.sanic_app.handle_request(self.request)
+ try:
+ await self.sanic_app.handle_request(self.request)
+ except Exception as e:
+ await self.sanic_app.handle_exception(self.request, e)
| diff --git a/tests/test_asgi.py b/tests/test_asgi.py
--- a/tests/test_asgi.py
+++ b/tests/test_asgi.py
@@ -7,7 +7,7 @@
from sanic import Sanic
from sanic.asgi import MockTransport
-from sanic.exceptions import InvalidUsage
+from sanic.exceptions import Forbidden, InvalidUsage, ServiceUnavailable
from sanic.request import Request
from sanic.response import json, text
from sanic.websocket import WebSocketConnection
@@ -346,3 +346,32 @@ def send_custom(request):
_, response = await app.asgi_client.get("/custom")
assert response.headers.get("content-type") == "somethingelse"
+
+
[email protected]
+async def test_request_handle_exception(app):
+ @app.get("/error-prone")
+ def _request(request):
+ raise ServiceUnavailable(message="Service unavailable")
+
+ _, response = await app.asgi_client.get("/wrong-path")
+ assert response.status_code == 404
+
+ _, response = await app.asgi_client.get("/error-prone")
+ assert response.status_code == 503
+
[email protected]
+async def test_request_exception_suppressed_by_middleware(app):
+ @app.get("/error-prone")
+ def _request(request):
+ raise ServiceUnavailable(message="Service unavailable")
+
+ @app.on_request
+ def forbidden(request):
+ raise Forbidden(message="forbidden")
+
+ _, response = await app.asgi_client.get("/wrong-path")
+ assert response.status_code == 403
+
+ _, response = await app.asgi_client.get("/error-prone")
+ assert response.status_code == 403
\ No newline at end of file
| In ASGI mod, the response turns into 500 server error.
| In ASGI mode, the response turns into a 500 server error.
**app.py**
```python
from sanic import Request, Sanic
from sanic.exceptions import Forbidden
from sanic.response import text
app = Sanic("My Hello, world app")
@app.get("/")
async def hello_world(request):
return text("Hello, world.")
@app.middleware
async def request_middleware(request: Request) -> None:
if "Authorization" not in request.headers:
raise Forbidden(message="Authorization header not found")
```
```bash
>>> uvicorn app:app
INFO: Started server process [227205]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/cansarigol/Documents/sanic-demo/venv/lib/python3.8/site-packages/sanic_routing/router.py", line 79, in resolve
route, param_basket = self.find_route(
File "", line 9, in find_route
sanic_routing.exceptions.NotFound: Not Found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/cansarigol/Documents/sanic-demo/venv/lib/python3.8/site-packages/sanic/router.py", line 33, in _get
return self.resolve(
File "/home/cansarigol/Documents/sanic-demo/venv/lib/python3.8/site-packages/sanic_routing/router.py", line 96, in resolve
raise self.exception(str(e), path=path)
sanic_routing.exceptions.NotFound: Not Found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/cansarigol/Documents/sanic-demo/venv/lib/python3.8/site-packages/sanic/app.py", line 723, in handle_request
route, handler, kwargs = self.router.get(
File "/home/cansarigol/Documents/sanic-demo/venv/lib/python3.8/site-packages/sanic/router.py", line 61, in get
return self._get(path, method, host)
File "/home/cansarigol/Documents/sanic-demo/venv/lib/python3.8/site-packages/sanic/router.py", line 39, in _get
raise NotFound("Requested URL {} not found".format(e.path))
sanic.exceptions.NotFound: Requested URL /wrong not found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/cansarigol/Documents/sanic-demo/venv/lib/python3.8/site-packages/uvicorn/protocols/http/httptools_impl.py", line 371, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/home/cansarigol/Documents/sanic-demo/venv/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 59, in __call__
return await self.app(scope, receive, send)
File "/home/cansarigol/Documents/sanic-demo/venv/lib/python3.8/site-packages/sanic/app.py", line 1276, in __call__
await asgi_app()
File "/home/cansarigol/Documents/sanic-demo/venv/lib/python3.8/site-packages/sanic/asgi.py", line 210, in __call__
await self.sanic_app.handle_request(self.request)
File "/home/cansarigol/Documents/sanic-demo/venv/lib/python3.8/site-packages/sanic/app.py", line 791, in handle_request
await self.handle_exception(request, e)
File "/home/cansarigol/Documents/sanic-demo/venv/lib/python3.8/site-packages/sanic/app.py", line 667, in handle_exception
response = await self._run_request_middleware(
File "/home/cansarigol/Documents/sanic-demo/venv/lib/python3.8/site-packages/sanic/app.py", line 1116, in _run_request_middleware
response = await response
File "/home/cansarigol/Documents/sanic-demo/./app.py", line 16, in request_middleware
raise Forbidden(message="Authorization header not found")
sanic.exceptions.Forbidden: Authorization header not found
INFO: 127.0.0.1:52662 - "GET /wrong HTTP/1.1" 500 Internal Server Error
```
**Expected behavior**
It should have been 403 (the same behavior as WSGI).
**Environment (please complete the following information):**
- OS: ubuntu 20.04
- Version
python = "3.8"
sanic = "21.6.1"
uvicorn = "0.14.0"
| :thinking: I must be missing something. How did you get a 500?
```
$ uvicorn p:app
INFO: Started server process [306986]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: 127.0.0.1:60770 - "GET / HTTP/1.1" 403 Forbidden
```
```
$ curl localhost:8000 -i
HTTP/1.1 403 Forbidden
date: Mon, 02 Aug 2021 12:24:43 GMT
server: uvicorn
content-type: text/html; charset=utf-8
transfer-encoding: chunked
<!DOCTYPE html><html lang=en><meta charset=UTF-8><title>⚠️ 403 — Forbidden</title>
<style>
html { font-family: sans-serif }
h2 { color: #888; }
.tb-wrapper p { margin: 0 }
.frame-border { margin: 1rem }
.frame-line > * { padding: 0.3rem 0.6rem }
.frame-line { margin-bottom: 0.3rem }
.frame-code { font-size: 16px; padding-left: 4ch }
.tb-wrapper { border: 1px solid #eee }
.tb-header { background: #eee; padding: 0.3rem; font-weight: bold }
.frame-descriptor { background: #e2eafb; font-size: 14px }
</style>
<h1>⚠️ 403 — Forbidden</h1><p>Authorization header not found
```
Never mind... I was able to reproduce:
```
$ curl localhost:8000/111 -i
HTTP/1.1 500 Internal Server Error
date: Mon, 02 Aug 2021 12:25:49 GMT
server: uvicorn
content-type: text/plain; charset=utf-8
connection: close
transfer-encoding: chunked
Internal Server Error
```
@ahopkins When I put a try-except in the `__call__` dunder of the ASGI app class, the problem is solved.
```git
diff --git a/sanic/asgi.py b/sanic/asgi.py
index 5765a5c..330ced5 100644
--- a/sanic/asgi.py
+++ b/sanic/asgi.py
@@ -207,4 +207,7 @@ class ASGIApp:
"""
Handle the incoming request.
"""
- await self.sanic_app.handle_request(self.request)
+ try:
+ await self.sanic_app.handle_request(self.request)
+ except Exception as e:
+ await self.sanic_app.handle_exception(self.request, e)
```
Nice. Can you make a PR? In your PR, you can also change the value of `./sanic/__version__.py` to 21.6.2 and I can get it ready to send out a patch. We probably should add a unit test for this. I am really surprised there is not one already. | 2021-08-02T14:11:26 |
sanic-org/sanic | 2,213 | sanic-org__sanic-2213 | [
"2212"
] | 71a631237dad4a541da283422cbbd85473ad53fc | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -893,6 +893,8 @@ async def _websocket_handler(
self.websocket_tasks.add(fut)
try:
await fut
+ except Exception as e:
+ self.error_handler.log(request, e)
except (CancelledError, ConnectionClosed):
pass
finally:
diff --git a/sanic/handlers.py b/sanic/handlers.py
--- a/sanic/handlers.py
+++ b/sanic/handlers.py
@@ -1,5 +1,3 @@
-from traceback import format_exc
-
from sanic.errorpages import exception_response
from sanic.exceptions import (
ContentRangeError,
@@ -99,7 +97,6 @@ def response(self, request, exception):
if response is None:
response = self.default(request, exception)
except Exception:
- self.log(format_exc())
try:
url = repr(request.url)
except AttributeError:
@@ -115,11 +112,6 @@ def response(self, request, exception):
return text("An error occurred while handling an error", 500)
return response
- def log(self, message, level="error"):
- """
- Deprecated, do not use.
- """
-
def default(self, request, exception):
"""
Provide a default behavior for the objects of :class:`ErrorHandler`.
@@ -135,6 +127,11 @@ def default(self, request, exception):
:class:`Exception`
:return:
"""
+ self.log(request, exception)
+ return exception_response(request, exception, self.debug)
+
+ @staticmethod
+ def log(request, exception):
quiet = getattr(exception, "quiet", False)
if quiet is False:
try:
@@ -142,13 +139,10 @@ def default(self, request, exception):
except AttributeError:
url = "unknown"
- self.log(format_exc())
error_logger.exception(
"Exception occurred while handling uri: %s", url
)
- return exception_response(request, exception, self.debug)
-
class ContentRangeHandler:
"""
| diff --git a/tests/test_asgi.py b/tests/test_asgi.py
--- a/tests/test_asgi.py
+++ b/tests/test_asgi.py
@@ -360,6 +360,7 @@ def _request(request):
_, response = await app.asgi_client.get("/error-prone")
assert response.status_code == 503
+
@pytest.mark.asyncio
async def test_request_exception_suppressed_by_middleware(app):
@app.get("/error-prone")
@@ -374,4 +375,4 @@ def forbidden(request):
assert response.status_code == 403
_, response = await app.asgi_client.get("/error-prone")
- assert response.status_code == 403
\ No newline at end of file
+ assert response.status_code == 403
diff --git a/tests/test_exceptions.py b/tests/test_exceptions.py
--- a/tests/test_exceptions.py
+++ b/tests/test_exceptions.py
@@ -1,3 +1,4 @@
+import logging
import warnings
import pytest
@@ -232,3 +233,20 @@ def test_sanic_exception(exception_app):
request, response = exception_app.test_client.get("/old_abort")
assert response.status == 500
assert len(w) == 1 and "deprecated" in w[0].message.args[0]
+
+
+def test_exception_in_ws_logged(caplog):
+ app = Sanic(__file__)
+
+ @app.websocket("/feed")
+ async def feed(request, ws):
+ raise Exception("...")
+
+ with caplog.at_level(logging.INFO):
+ app.test_client.websocket("/feed")
+
+ assert caplog.record_tuples[1][0] == "sanic.error"
+ assert caplog.record_tuples[1][1] == logging.ERROR
+ assert (
+ "Exception occurred while handling uri:" in caplog.record_tuples[1][2]
+ )
| Exceptions in websocket handler not logged
```python
@app.websocket("/feed")
async def feed(request, ws):
raise Exception("...")
```
No exception or traceback is displayed in logs.
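Until the fix lands, a workaround is to log from inside the handler yourself. A minimal sketch (the `log_ws_errors` decorator name is made up for illustration):
```python
import functools

from sanic import Sanic
from sanic.log import error_logger

app = Sanic("ws_log_example")


def log_ws_errors(handler):
    # Wrap a websocket handler so uncaught exceptions reach sanic.error.
    @functools.wraps(handler)
    async def wrapper(request, ws):
        try:
            return await handler(request, ws)
        except Exception:
            error_logger.exception(
                "Exception in websocket handler for %s", request.path
            )
            raise

    return wrapper


@app.websocket("/feed")
@log_ws_errors
async def feed(request, ws):
    raise Exception("...")
```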
| 2021-08-02T19:58:42 |
|
sanic-org/sanic | 2,231 | sanic-org__sanic-2231 | [
"2190"
] | 69c5dde9bfbe449bf32f7d77f2246fba38676a3a | diff --git a/sanic/server/protocols/base_protocol.py b/sanic/server/protocols/base_protocol.py
--- a/sanic/server/protocols/base_protocol.py
+++ b/sanic/server/protocols/base_protocol.py
@@ -81,13 +81,24 @@ async def receive_more(self):
self._data_received.clear()
await self._data_received.wait()
- def close(self):
+ def close(self, timeout: Optional[float] = None):
"""
- Force close the connection.
+ Attempt close the connection.
"""
# Cause a call to connection_lost where further cleanup occurs
if self.transport:
self.transport.close()
+ if timeout is None:
+ timeout = self.app.config.GRACEFUL_SHUTDOWN_TIMEOUT
+ self.loop.call_later(timeout, self.abort)
+
+ def abort(self):
+ """
+ Force close the connection.
+ """
+ # Cause a call to connection_lost where further cleanup occurs
+ if self.transport:
+ self.transport.abort()
self.transport = None
# asyncio.Protocol API Callbacks #
diff --git a/sanic/server/runners.py b/sanic/server/runners.py
--- a/sanic/server/runners.py
+++ b/sanic/server/runners.py
@@ -180,7 +180,7 @@ def serve(
if hasattr(conn, "websocket") and conn.websocket:
coros.append(conn.websocket.close_connection())
else:
- conn.close()
+ conn.abort()
_shutdown = asyncio.gather(*coros)
loop.run_until_complete(_shutdown)
| Transport may remain open after `close()`
**Describe the bug**
After applying the workaround in #2189 Sanic hangs on shutdown if it tries to force close SSL connection after `GRACEFUL_SHUTDOWN_TIMEOUT` expires. Digging into the code shows that the transport does not close after `close()` has been called but stays open for a prolonged time (I guess it will timeout eventually), and `connection_lost` is not called.
There are other reports of the same issue with asyncio SSL transports on the web, i.e. this [stack overflow thread](https://stackoverflow.com/questions/65168635/python-asyncio-ssl-transport-not-closing). I don't think this is really a bug on Sanic's end, but more of a pitfall with the transport.
Changing `HttpProtocol::close` to call `abort()` instead of `close()` fixes the issue. As this is a non-graceful shutdown of the connection anyway I think the change would be acceptable, but of course something more elaborate (like waiting another timeout before aborting the transports) would be possible, too.
I can prepare a PR if you like.
**Code snippet**
This can be reproduced the same way as #2189 --- launch a simple server with SSL, connect using Chrome and terminate. If
```
@app.listener("after_server_stop")
async def afterServerStop(app, loop):
await asyncio.gather(*[task for task in asyncio.all_tasks() if task != asyncio.current_task()])
```
is added to the server it will hang on exit. Debugging shows that `connection_lost` is never called.
**Expected behavior**
The transport closes.
**Environment (please complete the following information):**
- OS: MacOS
- Version: 21.6.0
| PRs are always welcome.
There you go ;) | 2021-09-01T01:51:15 |
|
sanic-org/sanic | 2,236 | sanic-org__sanic-2236 | [
"2228"
] | ef4f058a6cd809a292838ae9aa4e3fa70e323877 | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -839,7 +839,7 @@ async def handle_request(self, request: Request): # no cov
if isawaitable(response):
response = await response
- if response:
+ if response is not None:
response = await request.respond(response)
elif not hasattr(handler, "is_websocket"):
response = request.stream.response # type: ignore
| diff --git a/tests/test_middleware.py b/tests/test_middleware.py
--- a/tests/test_middleware.py
+++ b/tests/test_middleware.py
@@ -5,7 +5,7 @@
from sanic.exceptions import NotFound
from sanic.request import Request
-from sanic.response import HTTPResponse, text
+from sanic.response import HTTPResponse, json, text
# ------------------------------------------------------------ #
@@ -283,3 +283,17 @@ async def handler(request):
request, response = app.test_client.get("/")
assert next(i) == 3
+
+
+def test_middleware_added_response(app):
+ @app.on_response
+ def display(_, response):
+ response["foo"] = "bar"
+ return json(response)
+
+ @app.get("/")
+ async def handler(request):
+ return {}
+
+ _, response = app.test_client.get("/")
+ assert response.json["foo"] == "bar"
| When the HTTP request handler returns an empty value (like `{}` or `[]`), the request exception path runs instead of the custom response middleware
When the HTTP request handler returns an empty value (like `{}` or `[]`), the request exception path runs before the custom response middleware ever sees the value.
**Code snippet**
The relevant excerpt from `Sanic.handle_request` (these lines sit inside its `try` block):
```python
# No middleware results
if not response:
    # -------------------------------------------- #
    # Execute Handler
    # -------------------------------------------- #

    if handler is None:
        raise ServerError(
            (
                "'None' was returned while requesting a "
                "handler from the router"
            )
        )

    # Run response handler
    response = handler(request, **kwargs)
    if isawaitable(response):
        response = await response

if response:
    response = await request.respond(response)
elif not hasattr(handler, "is_websocket"):
    response = request.stream.response  # type: ignore

# Make sure that response is finished / run StreamingHTTP callback
if isinstance(response, BaseHTTPResponse):
    await response.send(end_stream=True)
else:
    if not hasattr(handler, "is_websocket"):
        raise ServerError(
            f"Invalid response type {response!r} "
            "(need HTTPResponse)"
        )
```
The `except` clauses that close that `try` block:
```python
except CancelledError:
    raise
except Exception as e:
    # Response Generation Failed
    await self.handle_exception(request, e)
```
| <img width="918" alt="image" src="https://user-images.githubusercontent.com/55998415/131430745-6ce06f84-9abc-42fc-9b97-bb504af5fce3.png">
sanic==21.6.2
sanic-routing==0.7.1
I don't need Sanic code posted to understand your question, I need your code. Please post a snippet of your handler and what it's returning. Is it not returning an instance of HTTPResponse?
It is not clear if you are returning
```python
return empty()
```
or
```
return {}
```
API handler method's code:
```python
class BUGHandleApi(HTTPMethodView):
async def get(self, request):
logger.debug(f"debug BUGHandleApi called-in..")
return {}
```
Middleware's code, just like this:
```python
async def debug_response_middleware(request, response):
# ...
if not isinstance(response, HTTPResponse):
body = {
"resultcode": StatusCode.SUCCESS.humanize_code(),
"msg": "",
"data": response,
}
response = HTTPResponse(json_dumps(body),
headers=None,
content_type="application/json; charset=utf-8",
status=200,)
return response
```
Why does sanic==21.6.2 use `if response:` to decide whether to call `request.respond(response)`?
Agreed, one should be able to use middleware to process such responses into Sanic responses. The first `if response` check marked in red could instead check if there already is a response on the request, and only use the return value if there wasn't. Probably needs some other work too.
Also, I think this used to work, or worked at least for non-empty plain dicts returned from handlers that a middleware could turn into JSON responses.
It should work for truthy values since I think the check is: `if response`. We could probably change that to an `if response is not None` check. It is a one-line change.
@iduosi Would you be willing to add a PR for this? We can get it into the next release for sure.
Change that one line to `if response is not None` and then add a simple unit test using your example?
Just confirmed: with that change, this example works and the unit tests pass.
```python
@app.on_response
def display(_, response):
response["foo"] = "bar"
return json(response)
@app.get("/")
async def handler(request):
return {}
``` | 2021-09-11T20:36:54 |
sanic-org/sanic | 2,238 | sanic-org__sanic-2238 | [
"2235"
] | a937e08ef057f24c21c645f18b6df3688915fe1f | diff --git a/sanic/blueprint_group.py b/sanic/blueprint_group.py
--- a/sanic/blueprint_group.py
+++ b/sanic/blueprint_group.py
@@ -197,6 +197,27 @@ def append(self, value: Blueprint) -> None:
"""
self._blueprints.append(value)
+ def exception(self, *exceptions, **kwargs):
+ """
+ A decorator that can be used to implement a global exception handler
+ for all the Blueprints that belong to this Blueprint Group.
+
+ In case of nested Blueprint Groups, the same handler is applied
+ across each of the Blueprints recursively.
+
+ :param args: List of Python exceptions to be caught by the handler
+ :param kwargs: Additional optional arguments to be passed to the
+ exception handler
+ :return a decorated method to handle global exceptions for any
+ blueprint registered under this group.
+ """
+
+ def register_exception_handler_for_blueprints(fn):
+ for blueprint in self.blueprints:
+ blueprint.exception(*exceptions, **kwargs)(fn)
+
+ return register_exception_handler_for_blueprints
+
def insert(self, index: int, item: Blueprint) -> None:
"""
The Abstract class `MutableSequence` leverages this insert method to
| diff --git a/tests/test_blueprint_group.py b/tests/test_blueprint_group.py
--- a/tests/test_blueprint_group.py
+++ b/tests/test_blueprint_group.py
@@ -3,6 +3,7 @@
from sanic.app import Sanic
from sanic.blueprint_group import BlueprintGroup
from sanic.blueprints import Blueprint
+from sanic.exceptions import Forbidden, InvalidUsage, SanicException, ServerError
from sanic.request import Request
from sanic.response import HTTPResponse, text
@@ -96,16 +97,28 @@ def test_bp_group(app: Sanic):
def blueprint_1_default_route(request):
return text("BP1_OK")
+ @blueprint_1.route("/invalid")
+ def blueprint_1_error(request: Request):
+ raise InvalidUsage("Invalid")
+
@blueprint_2.route("/")
def blueprint_2_default_route(request):
return text("BP2_OK")
+ @blueprint_2.route("/error")
+ def blueprint_2_error(request: Request):
+ raise ServerError("Error")
+
blueprint_group_1 = Blueprint.group(
blueprint_1, blueprint_2, url_prefix="/bp"
)
blueprint_3 = Blueprint("blueprint_3", url_prefix="/bp3")
+ @blueprint_group_1.exception(InvalidUsage)
+ def handle_group_exception(request, exception):
+ return text("BP1_ERR_OK")
+
@blueprint_group_1.middleware("request")
def blueprint_group_1_middleware(request):
global MIDDLEWARE_INVOKE_COUNTER
@@ -130,10 +143,18 @@ def blueprint_group_1_convenience_2(request):
def blueprint_3_default_route(request):
return text("BP3_OK")
+ @blueprint_3.route("/forbidden")
+ def blueprint_3_forbidden(request: Request):
+ raise Forbidden("Forbidden")
+
blueprint_group_2 = Blueprint.group(
blueprint_group_1, blueprint_3, url_prefix="/api"
)
+ @blueprint_group_2.exception(SanicException)
+ def handle_non_handled_exception(request, exception):
+ return text("BP2_ERR_OK")
+
@blueprint_group_2.middleware("response")
def blueprint_group_2_middleware(request, response):
global MIDDLEWARE_INVOKE_COUNTER
@@ -161,14 +182,23 @@ def app_default_route(request):
_, response = app.test_client.get("/api/bp/bp1")
assert response.text == "BP1_OK"
+ _, response = app.test_client.get("/api/bp/bp1/invalid")
+ assert response.text == "BP1_ERR_OK"
+
_, response = app.test_client.get("/api/bp/bp2")
assert response.text == "BP2_OK"
+ _, response = app.test_client.get("/api/bp/bp2/error")
+ assert response.text == "BP2_ERR_OK"
+
_, response = app.test_client.get("/api/bp3")
assert response.text == "BP3_OK"
- assert MIDDLEWARE_INVOKE_COUNTER["response"] == 9
- assert MIDDLEWARE_INVOKE_COUNTER["request"] == 8
+ _, response = app.test_client.get("/api/bp3/forbidden")
+ assert response.text == "BP2_ERR_OK"
+
+ assert MIDDLEWARE_INVOKE_COUNTER["response"] == 18
+ assert MIDDLEWARE_INVOKE_COUNTER["request"] == 16
def test_bp_group_list_operations(app: Sanic):
| Exception handler decorator in blueprint groups
**Is your feature request related to a problem? Please describe.**
Currently exception handlers can be attached to app and blueprint instances, but it is not possible to do the same with blueprint groups.
**Describe the solution you'd like**
Ideally it would be possible to attach an exception handler to a blueprint group by means of a decorator. If it is not possible to attach a handler to a blueprint group directly, its decorator could simply iterate through each of its blueprints, attaching the handler to each of them.
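A usage sketch of what that could look like (this mirrors the `exception` method the diff above adds to `BlueprintGroup`):
```python
from sanic import Blueprint
from sanic.exceptions import SanicException
from sanic.response import text

bp1 = Blueprint("bp1", url_prefix="/bp1")
bp2 = Blueprint("bp2", url_prefix="/bp2")
group = Blueprint.group(bp1, bp2, url_prefix="/api")


@group.exception(SanicException)
def handle_group_exception(request, exception):
    # Registered on every blueprint in the group, recursively for nested groups.
    return text("caught by the group handler", status=500)
```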
| 2021-09-12T16:56:43 |
|
sanic-org/sanic | 2,244 | sanic-org__sanic-2244 | [
"2132"
] | 404c5f9f9eb54f06d1f30250bc2cc840c18a0eab | diff --git a/sanic/mixins/routes.py b/sanic/mixins/routes.py
--- a/sanic/mixins/routes.py
+++ b/sanic/mixins/routes.py
@@ -592,6 +592,7 @@ def static(
strict_slashes=None,
content_type=None,
apply=True,
+ resource_type=None,
):
"""
Register a root to serve files from. The input can either be a
@@ -641,6 +642,7 @@ def static(
host,
strict_slashes,
content_type,
+ resource_type,
)
self._future_statics.add(static)
@@ -836,8 +838,27 @@ def _register_static(
name = static.name
# If we're not trying to match a file directly,
# serve from the folder
- if not path.isfile(file_or_directory):
+ if not static.resource_type:
+ if not path.isfile(file_or_directory):
+ uri += "/<__file_uri__:path>"
+ elif static.resource_type == "dir":
+ if path.isfile(file_or_directory):
+ raise TypeError(
+ "Resource type improperly identified as directory. "
+ f"'{file_or_directory}'"
+ )
uri += "/<__file_uri__:path>"
+ elif static.resource_type == "file" and not path.isfile(
+ file_or_directory
+ ):
+ raise TypeError(
+ "Resource type improperly identified as file. "
+ f"'{file_or_directory}'"
+ )
+ elif static.resource_type != "file":
+ raise ValueError(
+ "The resource_type should be set to 'file' or 'dir'"
+ )
# special prefix for static files
# if not static.name.startswith("_static_"):
diff --git a/sanic/models/futures.py b/sanic/models/futures.py
--- a/sanic/models/futures.py
+++ b/sanic/models/futures.py
@@ -52,6 +52,7 @@ class FutureStatic(NamedTuple):
host: Optional[str]
strict_slashes: Optional[bool]
content_type: Optional[bool]
+ resource_type: Optional[str]
class FutureSignal(NamedTuple):
| diff --git a/tests/test_static.py b/tests/test_static.py
--- a/tests/test_static.py
+++ b/tests/test_static.py
@@ -523,3 +523,56 @@ def test_multiple_statics(app, static_file_directory):
assert response.body == get_file_content(
static_file_directory, "python.png"
)
+
+
+def test_resource_type_default(app, static_file_directory):
+ app.static("/static", static_file_directory)
+ app.static("/file", get_file_path(static_file_directory, "test.file"))
+
+ _, response = app.test_client.get("/static")
+ assert response.status == 404
+
+ _, response = app.test_client.get("/file")
+ assert response.status == 200
+ assert response.body == get_file_content(
+ static_file_directory, "test.file"
+ )
+
+
+def test_resource_type_file(app, static_file_directory):
+ app.static(
+ "/file",
+ get_file_path(static_file_directory, "test.file"),
+ resource_type="file",
+ )
+
+ _, response = app.test_client.get("/file")
+ assert response.status == 200
+ assert response.body == get_file_content(
+ static_file_directory, "test.file"
+ )
+
+ with pytest.raises(TypeError):
+ app.static("/static", static_file_directory, resource_type="file")
+
+
+def test_resource_type_dir(app, static_file_directory):
+ app.static("/static", static_file_directory, resource_type="dir")
+
+ _, response = app.test_client.get("/static/test.file")
+ assert response.status == 200
+ assert response.body == get_file_content(
+ static_file_directory, "test.file"
+ )
+
+ with pytest.raises(TypeError):
+ app.static(
+ "/file",
+ get_file_path(static_file_directory, "test.file"),
+ resource_type="dir",
+ )
+
+
+def test_resource_type_unknown(app, static_file_directory, caplog):
+ with pytest.raises(ValueError):
+ app.static("/static", static_file_directory, resource_type="unknown")
| `RouteExists` thrown when registering a directory and a missing file to the same route.
**Describe the bug**
When registering a route to a file and a directory, if the file doesn't exist a `sanic_routing.exceptions.RouteExists` error is raised. This may be misleading and unintended.
**Code snippet**
`example.html` does not exist in this example. The directory `./resources/web` does.
```python
app.static('/', './resources/web')
app.static('/', './resources/web/example.html')
```
**Expected behavior**
Currently, if a non-existent file is registered to a route, no error is thrown on startup, and any attempt to retrieve the registered resource raises a `FileNotFoundError: [Errno 2] No such file or directory` while a `404 Not Found` error is displayed in the browser. Instead of this happening, as mentioned before, a `sanic_routing.exceptions.RouteExists` is raised on startup, which may be misleading and unintended.
**Environment (please complete the following information):**
- OS: Ubuntu 20.04.2 LTS
| This is not really a bug, and is an intended outcome. Sanic is checking to see if a file like that exists. If not, it thinks you are trying to serve from a directory. Just by looking at the path, there is no way to determine if you intend to serve a file or a directory.
I think this is the correct behavior. If anything, perhaps what you really want is an explicit method to control this:
```python
app.static('/', './resources/web', as="dir")
app.static('/', './resources/web/example.html', as="file")
```
With an explicit kwarg, we would not need to run `path.isfile`. This is not really new behavior. If Sanic 20.12 is not functioning this way, I think that is more a problem on the earlier version than with the current.
---
A bit of behind the scenes info...
Let's look at what these two are doing:
```python
app.static("/", "./resources/web")
# Sanic checks this, sees that it is not a file and then converts to:
# "/<__file_uri__:path>"
# meaning that it will look for anything that matches using the `path` param type
app.static('/', './resources/web/example.html')
# If this is indeed a file, Sanic knows that you meant for an explicit path "/"
# There is no ambiguity, and the path does not need to be altered
# If the file does not exist, then like the first example, it tries to convert it to a `path` type
# But, since you already have a path expansion on that base path, there is now ambiguity
# and Sanic raises RouteExists
```
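For the record, the diff at the top of this entry spells the explicit kwarg `resource_type` rather than `as`. Usage would look roughly like:
```python
app.static("/", "./resources/web", resource_type="dir")
app.static("/", "./resources/web/example.html", resource_type="file")
```
With `resource_type` given, a mismatch (for example pointing `resource_type="file"` at a path that is not a file) raises a `TypeError` at registration time instead of silently falling back to directory serving.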
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. If this is incorrect, please respond with an update. Thank you for your contributions.
| 2021-09-25T21:48:40 |
sanic-org/sanic | 2,245 | sanic-org__sanic-2245 | [
"2186"
] | d9796e9b1e930c2ac8bdc0cf7acc34dc6555241c | diff --git a/sanic/http.py b/sanic/http.py
--- a/sanic/http.py
+++ b/sanic/http.py
@@ -148,6 +148,12 @@ async def http1(self):
await self.response.send(end_stream=True)
except CancelledError:
# Write an appropriate response before exiting
+ if not self.protocol.transport:
+ logger.info(
+ f"Request: {self.request.method} {self.request.url} "
+ "stopped. Transport is closed."
+ )
+ return
e = self.exception or ServiceUnavailable("Cancelled")
self.exception = None
self.keep_alive = False
diff --git a/sanic/server/protocols/http_protocol.py b/sanic/server/protocols/http_protocol.py
--- a/sanic/server/protocols/http_protocol.py
+++ b/sanic/server/protocols/http_protocol.py
@@ -109,7 +109,7 @@ async def connection_task(self): # no cov
except Exception:
error_logger.exception("protocol.connection_task uncaught")
finally:
- if self.app.debug and self._http:
+ if self.app.debug and self._http and self.transport:
ip = self.transport.get_extra_info("peername")
error_logger.error(
"Connection lost before response written"
| diff --git a/tests/test_graceful_shutdown.py b/tests/test_graceful_shutdown.py
new file mode 100644
--- /dev/null
+++ b/tests/test_graceful_shutdown.py
@@ -0,0 +1,46 @@
+import asyncio
+import logging
+import time
+
+from collections import Counter
+from multiprocessing import Process
+
+import httpx
+
+
+PORT = 42101
+
+
+def test_no_exceptions_when_cancel_pending_request(app, caplog):
+ app.config.GRACEFUL_SHUTDOWN_TIMEOUT = 1
+
+ @app.get("/")
+ async def handler(request):
+ await asyncio.sleep(5)
+
+ @app.after_server_start
+ def shutdown(app, _):
+ time.sleep(0.2)
+ app.stop()
+
+ def ping():
+ time.sleep(0.1)
+ response = httpx.get("http://127.0.0.1:8000")
+ print(response.status_code)
+
+ p = Process(target=ping)
+ p.start()
+
+ with caplog.at_level(logging.INFO):
+ app.run()
+
+ p.kill()
+
+ counter = Counter([r[1] for r in caplog.record_tuples])
+
+ assert counter[logging.INFO] == 5
+ assert logging.ERROR not in counter
+ assert (
+ caplog.record_tuples[3][2]
+ == "Request: GET http://127.0.0.1:8000/ stopped. Transport is closed."
+ )
diff --git a/tests/test_signal_handlers.py b/tests/test_signal_handlers.py
--- a/tests/test_signal_handlers.py
+++ b/tests/test_signal_handlers.py
@@ -7,7 +7,6 @@
import pytest
-from sanic_testing.reusable import ReusableClient
from sanic_testing.testing import HOST, PORT
from sanic.compat import ctrlc_workaround_for_windows
@@ -29,13 +28,9 @@ def set_loop(app, loop):
signal.signal = mock
else:
loop.add_signal_handler = mock
- print(">>>>>>>>>>>>>>>1", id(loop))
- print(">>>>>>>>>>>>>>>1", loop.add_signal_handler)
def after(app, loop):
- print(">>>>>>>>>>>>>>>2", id(loop))
- print(">>>>>>>>>>>>>>>2", loop.add_signal_handler)
calledq.put(mock.called)
@@ -100,7 +95,7 @@ async def atest(stop_first):
os.kill(os.getpid(), signal.SIGINT)
await asyncio.sleep(0.2)
assert app.is_stopping
- assert app.stay_active_task.result() == None
+ assert app.stay_active_task.result() is None
# Second Ctrl+C should raise
with pytest.raises(KeyboardInterrupt):
os.kill(os.getpid(), signal.SIGINT)
| Exception when terminating while receiving a request
**Describe the bug**
Chrome, Safari and probably other browsers seem to open surplus connections to the server in advance in order to improve performance (see for example [this stackoverflow thread](https://stackoverflow.com/questions/4761913/server-socket-receives-2-http-requests-when-i-send-from-chrome-and-receives-one)). These connections are kept open in the state "wait for request".
If Sanic is terminated with such a pending request, the request gets cancelled after `GRACEFUL_SHUTDOWN_TIMEOUT` and the receive is cancelled. In reaction to the `CancelledError`, Sanic will try to send an error response, but the transport has already been cleaned out, and as the request has not been received yet, a new, empty request with `transport=None` is created. The symptom is this trace:
```
File "/Users/pestix/.virtualenvs/server-25sdMkop/lib/python3.9/site-packages/sanic/http.py", line 126, in http1
await self.http1_request_header()
File "/Users/pestix/.virtualenvs/server-25sdMkop/lib/python3.9/site-packages/sanic/http.py", line 188, in http1_request_header
await self._receive_more()
File "/Users/pestix/.virtualenvs/server-25sdMkop/lib/python3.9/site-packages/sanic/server.py", line 222, in receive_more
await self._data_received.wait()
File "/usr/local/Cellar/[email protected]/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/locks.py", line 226, in wait
await fut
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/pestix/.virtualenvs/server-25sdMkop/lib/python3.9/site-packages/sanic/server.py", line 197, in connection_task
await self._http.http1()
File "/Users/pestix/.virtualenvs/server-25sdMkop/lib/python3.9/site-packages/sanic/http.py", line 142, in http1
await self.error_response(e)
File "/Users/pestix/.virtualenvs/server-25sdMkop/lib/python3.9/site-packages/sanic/http.py", line 405, in error_response
await app.handle_exception(self.request, exception)
File "/Users/pestix/.virtualenvs/server-25sdMkop/lib/python3.9/site-packages/sanic/app.py", line 704, in handle_exception
await response.send(end_stream=True)
File "/Users/pestix/.virtualenvs/server-25sdMkop/lib/python3.9/site-packages/sanic/response.py", line 122, in send
await self.stream.send(data, end_stream=end_stream)
File "/Users/pestix/.virtualenvs/server-25sdMkop/lib/python3.9/site-packages/sanic/http.py", line 335, in http1_response_header
await self._send(ret)
File "/Users/pestix/.virtualenvs/server-25sdMkop/lib/python3.9/site-packages/sanic/server.py", line 267, in send
if self.transport.is_closing():
AttributeError: 'NoneType' object has no attribute 'is_closing'
```
**Code snippet**
This can be reproduced with the hello world example. Load the page in Chrome, then terminate. Sanic will wait 15 seconds, then die with above trace. On my Mac, this happens about 75% of the time.
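An alternative way to force the same state without a browser is to have a request still pending when the server stops, along the lines of the regression test above. A rough sketch (timings are approximate and the port is arbitrary):
```python
import asyncio
import time
from multiprocessing import Process

import httpx

from sanic import Sanic

app = Sanic("shutdown_repro")
app.config.GRACEFUL_SHUTDOWN_TIMEOUT = 1


@app.get("/")
async def handler(request):
    await asyncio.sleep(5)  # still pending when the server stops


@app.after_server_start
def shutdown(app, _):
    time.sleep(0.2)
    app.stop()


def ping():
    time.sleep(0.1)
    try:
        httpx.get("http://127.0.0.1:8000", timeout=5)
    except httpx.HTTPError:
        pass


if __name__ == "__main__":
    Process(target=ping).start()
    app.run(host="127.0.0.1", port=8000)
```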
**Expected behavior**
Clean exit.
**Environment (please complete the following information):**
- OS: MacOS
- Version: 21.6.0
| I've opened a PR... ๐ | 2021-09-25T22:47:18 |
sanic-org/sanic | 2,246 | sanic-org__sanic-2246 | [
"2242"
] | 6ffc4d9756798ddb9ca18a7b1c7ca66e9ce8e441 | diff --git a/sanic/blueprints.py b/sanic/blueprints.py
--- a/sanic/blueprints.py
+++ b/sanic/blueprints.py
@@ -331,21 +331,22 @@ def register(self, app, options):
route_names = [route.name for route in routes if route]
- # Middleware
if route_names:
+ # Middleware
for future in self._future_middleware:
middleware.append(app._apply_middleware(future, route_names))
- # Exceptions
- for future in self._future_exceptions:
- exception_handlers.append(
- app._apply_exception_handler(future, route_names)
- )
+ # Exceptions
+ for future in self._future_exceptions:
+ exception_handlers.append(
+ app._apply_exception_handler(future, route_names)
+ )
# Event listeners
for listener in self._future_listeners:
listeners[listener.event].append(app._apply_listener(listener))
+ # Signals
for signal in self._future_signals:
signal.condition.update({"blueprint": self.name})
app._apply_signal(signal)
| diff --git a/tests/test_blueprints.py b/tests/test_blueprints.py
--- a/tests/test_blueprints.py
+++ b/tests/test_blueprints.py
@@ -83,7 +83,6 @@ def handler(request):
return text("OK")
else:
- print(func)
raise Exception(f"{func} is not callable")
app.blueprint(bp)
@@ -477,6 +476,58 @@ def handler_exception(request, exception):
assert response.status == 200
+def test_bp_exception_handler_applied(app):
+ class Error(Exception):
+ pass
+
+ handled = Blueprint("handled")
+ nothandled = Blueprint("nothandled")
+
+ @handled.exception(Error)
+ def handle_error(req, e):
+ return text("handled {}".format(e))
+
+ @handled.route("/ok")
+ def ok(request):
+ raise Error("uh oh")
+
+ @nothandled.route("/notok")
+ def notok(request):
+ raise Error("uh oh")
+
+ app.blueprint(handled)
+ app.blueprint(nothandled)
+
+ _, response = app.test_client.get("/ok")
+ assert response.status == 200
+ assert response.text == "handled uh oh"
+
+ _, response = app.test_client.get("/notok")
+ assert response.status == 500
+
+
+def test_bp_exception_handler_not_applied(app):
+ class Error(Exception):
+ pass
+
+ handled = Blueprint("handled")
+ nothandled = Blueprint("nothandled")
+
+ @handled.exception(Error)
+ def handle_error(req, e):
+ return text("handled {}".format(e))
+
+ @nothandled.route("/notok")
+ def notok(request):
+ raise Error("uh oh")
+
+ app.blueprint(handled)
+ app.blueprint(nothandled)
+
+ _, response = app.test_client.get("/notok")
+ assert response.status == 500
+
+
def test_bp_listeners(app):
app.route("/")(lambda x: x)
blueprint = Blueprint("test_middleware")
| Blueprint exceptions are sometimes handled by other blueprints' exception handlers
**Describe the bug**
Exceptions thrown from within one blueprint are sometimes handled by the exception handler of another, unrelated blueprint instead of by its own. I have not had time to check whether this happens all the time or only if specific conditions are met, but I have attached a code snippet with which the issue can be reproduced.
**Code snippet**
```py
from sanic import Blueprint, HTTPResponse, Request, Sanic
from sanic.exceptions import SanicException
from sanic.response import text
class BaseException(SanicException):
code: int
description: str
status_code: int
headers: dict = {}
class ExceptionA(BaseException):
error: str
class ExceptionB(BaseException):
code = 0
status_code = 400
class ExceptionBA(ExceptionB, ExceptionA):
error = "foo"
description = "Bar!"
app = Sanic("my_app")
bp1 = Blueprint("auth", url_prefix="/auth")
bp2 = Blueprint("token")
@bp1.route("/network", version=1)
async def error_bp1(_: Request):
raise ExceptionBA()
@bp2.route("/token")
async def hello_bp2(_: Request):
return text("P3rry7hePl4typu5")
bpg1 = Blueprint.group(bp1, version_prefix="/api/v")
bpg2 = Blueprint.group(bp2, url_prefix="/api/oauth2")
for bp in bpg1.blueprints:
@bp.exception(BaseException)
async def _(_, ex: BaseException) -> HTTPResponse:
return text("BPG1_BaseException")
@bp.exception(Exception)
async def _(request: Request, ex: Exception) -> HTTPResponse:
return text("BPG1_Exception")
for bp in bpg2.blueprints:
@bp.exception(ExceptionA)
async def _(_, ex: ExceptionA):
return text("BPG2_ExceptionA")
@bp.exception(Exception)
async def _(request: Request, ex: Exception):
return text("BPG2_Exception")
bpg_all = Blueprint.group(bpg1, bpg2)
app.blueprint(bpg_all)
if __name__ == "__main__":
app.run(debug=True)
```
**Expected behavior**
When accessing `/api/v1/auth/network`, an exception is raised, which is captured by the `bpg1` exception handler; rendering a final response which displays the string `BPG1_BaseException`.
**Actual behavior**
When accessing `/api/v1/auth/network`, an exception is raised, which is (somehow) captured by the `bpg2` exception handler; rendering a final response which displays the string `BPG2_ExceptionA`.
**Environment (please complete the following information):**
- OS: Windows 10 Pro; 21H1; Compilation 19043.1165.
- Versions: Sanic 21.6.2; Routing 0.7.1
| 2021-09-27T09:55:45 |
|
sanic-org/sanic | 2,247 | sanic-org__sanic-2247 | [
"2240"
] | ba2670e99c4902f331bc1eee1b4f6b1e2fe4aeba | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -1330,7 +1330,8 @@ def _helper(
logger.info(f"Goin' Fast @ {proto}://{host}:{port}")
debug_mode = "enabled" if self.debug else "disabled"
- logger.debug("Sanic auto-reload: enabled")
+ reload_mode = "enabled" if auto_reload else "disabled"
+ logger.debug(f"Sanic auto-reload: {reload_mode}")
logger.debug(f"Sanic debug mode: {debug_mode}")
return server_settings
| Sanic prints "auto-reload: enabled" even when auto_reload is False.
Kinda related to #2237
https://github.com/sanic-org/sanic/blob/404c5f9f9eb54f06d1f30250bc2cc840c18a0eab/sanic/app.py#L1340
| Message introduced in https://github.com/sanic-org/sanic/pull/2136
Can you provide an example?
```
app = Sanic("app")
if __name__ == "__main__":
app.run("127.0.0.1", 8080, debug=True, auto_reload=False)
```
Always Prints:
> "Sanic auto-reload: enabled"
Even though auto-reload is actually disabled. | 2021-09-29T01:26:29 |
|
sanic-org/sanic | 2,259 | sanic-org__sanic-2259 | [
"2258"
] | 50a606adeef0075129038821f1c744e1bf3d7d0a | diff --git a/sanic/__version__.py b/sanic/__version__.py
--- a/sanic/__version__.py
+++ b/sanic/__version__.py
@@ -1 +1 @@
-__version__ = "21.9.0"
+__version__ = "21.9.1"
diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -1474,6 +1474,7 @@ def signalize(self):
async def _startup(self):
self.signalize()
self.finalize()
+ ErrorHandler.finalize(self.error_handler)
TouchUp.run(self)
async def _server_event(
diff --git a/sanic/handlers.py b/sanic/handlers.py
--- a/sanic/handlers.py
+++ b/sanic/handlers.py
@@ -1,3 +1,4 @@
+from inspect import signature
from typing import Dict, List, Optional, Tuple, Type
from sanic.errorpages import BaseRenderer, HTMLRenderer, exception_response
@@ -25,7 +26,9 @@ class ErrorHandler:
"""
# Beginning in v22.3, the base renderer will be TextRenderer
- def __init__(self, fallback: str, base: Type[BaseRenderer] = HTMLRenderer):
+ def __init__(
+ self, fallback: str = "auto", base: Type[BaseRenderer] = HTMLRenderer
+ ):
self.handlers: List[Tuple[Type[BaseException], RouteHandler]] = []
self.cached_handlers: Dict[
Tuple[Type[BaseException], Optional[str]], Optional[RouteHandler]
@@ -34,6 +37,34 @@ def __init__(self, fallback: str, base: Type[BaseRenderer] = HTMLRenderer):
self.fallback = fallback
self.base = base
+ @classmethod
+ def finalize(cls, error_handler):
+ if not isinstance(error_handler, cls):
+ error_logger.warning(
+ f"Error handler is non-conforming: {type(error_handler)}"
+ )
+
+ sig = signature(error_handler.lookup)
+ if len(sig.parameters) == 1:
+ error_logger.warning(
+ DeprecationWarning(
+ "You are using a deprecated error handler. The lookup "
+ "method should accept two positional parameters: "
+ "(exception, route_name: Optional[str]). "
+ "Until you upgrade your ErrorHandler.lookup, Blueprint "
+ "specific exceptions will not work properly. Beginning "
+ "in v22.3, the legacy style lookup method will not "
+ "work at all."
+ ),
+ )
+ error_handler._lookup = error_handler._legacy_lookup
+
+ def _full_lookup(self, exception, route_name: Optional[str] = None):
+ return self.lookup(exception, route_name)
+
+ def _legacy_lookup(self, exception, route_name: Optional[str] = None):
+ return self.lookup(exception)
+
def add(self, exception, handler, route_names: Optional[List[str]] = None):
"""
Add a new exception handler to an already existing handler object.
@@ -56,7 +87,7 @@ def add(self, exception, handler, route_names: Optional[List[str]] = None):
else:
self.cached_handlers[(exception, None)] = handler
- def lookup(self, exception, route_name: Optional[str]):
+ def lookup(self, exception, route_name: Optional[str] = None):
"""
Lookup the existing instance of :class:`ErrorHandler` and fetch the
registered handler for a specific type of exception.
@@ -94,6 +125,8 @@ def lookup(self, exception, route_name: Optional[str]):
handler = None
return handler
+ _lookup = _full_lookup
+
def response(self, request, exception):
"""Fetches and executes an exception handler and returns a response
object
@@ -109,7 +142,7 @@ def response(self, request, exception):
or registered handler for that type of exception.
"""
route_name = request.name if request else None
- handler = self.lookup(exception, route_name)
+ handler = self._lookup(exception, route_name)
response = None
try:
if handler:
| diff --git a/tests/test_exceptions.py b/tests/test_exceptions.py
--- a/tests/test_exceptions.py
+++ b/tests/test_exceptions.py
@@ -4,6 +4,7 @@
import pytest
from bs4 import BeautifulSoup
+from websockets.version import version as websockets_version
from sanic import Sanic
from sanic.exceptions import (
@@ -16,7 +17,6 @@
abort,
)
from sanic.response import text
-from websockets.version import version as websockets_version
class SanicExceptionTestException(Exception):
diff --git a/tests/test_exceptions_handler.py b/tests/test_exceptions_handler.py
--- a/tests/test_exceptions_handler.py
+++ b/tests/test_exceptions_handler.py
@@ -1,4 +1,5 @@
import asyncio
+import logging
import pytest
@@ -206,3 +207,23 @@ def test_exception_handler_processed_request_middleware(exception_handler_app):
request, response = exception_handler_app.test_client.get("/8")
assert response.status == 200
assert response.text == "Done."
+
+
+def test_single_arg_exception_handler_notice(exception_handler_app, caplog):
+ class CustomErrorHandler(ErrorHandler):
+ def lookup(self, exception):
+ return super().lookup(exception, None)
+
+ exception_handler_app.error_handler = CustomErrorHandler()
+
+ with caplog.at_level(logging.WARNING):
+ _, response = exception_handler_app.test_client.get("/1")
+
+ assert caplog.records[0].message == (
+ "You are using a deprecated error handler. The lookup method should "
+ "accept two positional parameters: (exception, route_name: "
+ "Optional[str]). Until you upgrade your ErrorHandler.lookup, "
+ "Blueprint specific exceptions will not work properly. Beginning in "
+ "v22.3, the legacy style lookup method will not work at all."
+ )
+ assert response.status == 400
| ErrorHandler.lookup signature change
The `ErrorHandler.lookup` now **requires** two positional arguments:
```python
def lookup(self, exception, route_name: Optional[str]):
```
A non-conforming method will cause Blueprint-specific exception handlers to not properly attach.
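A custom handler that overrides `lookup` should now accept and forward the route name. A minimal sketch of a conforming subclass:
```python
from typing import Optional

from sanic.handlers import ErrorHandler


class CustomErrorHandler(ErrorHandler):
    def lookup(self, exception, route_name: Optional[str] = None):
        # Forward both arguments so Blueprint-scoped handlers still resolve.
        return super().lookup(exception, route_name)
```
A single-argument override keeps working for now through the deprecation shim in the diff above, but it logs a warning and Blueprint-specific handlers will not be found until the signature is updated.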
---
Related to #2250
| 2021-10-02T20:21:20 |
|
sanic-org/sanic | 2,260 | sanic-org__sanic-2260 | [
"2255"
] | b731a6b48c8bb6148e46df79d39a635657c9c1aa | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -72,6 +72,7 @@
FutureException,
FutureListener,
FutureMiddleware,
+ FutureRegistry,
FutureRoute,
FutureSignal,
FutureStatic,
@@ -115,6 +116,7 @@ class Sanic(BaseSanic, metaclass=TouchUpMeta):
"_future_exceptions",
"_future_listeners",
"_future_middleware",
+ "_future_registry",
"_future_routes",
"_future_signals",
"_future_statics",
@@ -187,6 +189,7 @@ def __init__(
self._test_manager: Any = None
self._blueprint_order: List[Blueprint] = []
self._delayed_tasks: List[str] = []
+ self._future_registry: FutureRegistry = FutureRegistry()
self._state: ApplicationState = ApplicationState(app=self)
self.blueprints: Dict[str, Blueprint] = {}
self.config: Config = config or Config(
@@ -1625,6 +1628,7 @@ def signalize(self):
raise e
async def _startup(self):
+ self._future_registry.clear()
self.signalize()
self.finalize()
ErrorHandler.finalize(self.error_handler)
diff --git a/sanic/blueprints.py b/sanic/blueprints.py
--- a/sanic/blueprints.py
+++ b/sanic/blueprints.py
@@ -4,7 +4,9 @@
from collections import defaultdict
from copy import deepcopy
-from enum import Enum
+from functools import wraps
+from inspect import isfunction
+from itertools import chain
from types import SimpleNamespace
from typing import (
TYPE_CHECKING,
@@ -13,7 +15,9 @@
Iterable,
List,
Optional,
+ Sequence,
Set,
+ Tuple,
Union,
)
@@ -36,6 +40,32 @@
from sanic import Sanic # noqa
+def lazy(func, as_decorator=True):
+ @wraps(func)
+ def decorator(bp, *args, **kwargs):
+ nonlocal as_decorator
+ kwargs["apply"] = False
+ pass_handler = None
+
+ if args and isfunction(args[0]):
+ as_decorator = False
+
+ def wrapper(handler):
+ future = func(bp, *args, **kwargs)
+ if as_decorator:
+ future = future(handler)
+
+ if bp.registered:
+ for app in bp.apps:
+ bp.register(app, {})
+
+ return future
+
+ return wrapper if as_decorator else wrapper(pass_handler)
+
+ return decorator
+
+
class Blueprint(BaseSanic):
"""
In *Sanic* terminology, a **Blueprint** is a logical collection of
@@ -125,29 +155,16 @@ def apps(self):
)
return self._apps
- def route(self, *args, **kwargs):
- kwargs["apply"] = False
- return super().route(*args, **kwargs)
-
- def static(self, *args, **kwargs):
- kwargs["apply"] = False
- return super().static(*args, **kwargs)
-
- def middleware(self, *args, **kwargs):
- kwargs["apply"] = False
- return super().middleware(*args, **kwargs)
-
- def listener(self, *args, **kwargs):
- kwargs["apply"] = False
- return super().listener(*args, **kwargs)
-
- def exception(self, *args, **kwargs):
- kwargs["apply"] = False
- return super().exception(*args, **kwargs)
+ @property
+ def registered(self) -> bool:
+ return bool(self._apps)
- def signal(self, event: Union[str, Enum], *args, **kwargs):
- kwargs["apply"] = False
- return super().signal(event, *args, **kwargs)
+ exception = lazy(BaseSanic.exception)
+ listener = lazy(BaseSanic.listener)
+ middleware = lazy(BaseSanic.middleware)
+ route = lazy(BaseSanic.route)
+ signal = lazy(BaseSanic.signal)
+ static = lazy(BaseSanic.static, as_decorator=False)
def reset(self):
self._apps: Set[Sanic] = set()
@@ -284,6 +301,7 @@ def register(self, app, options):
middleware = []
exception_handlers = []
listeners = defaultdict(list)
+ registered = set()
# Routes
for future in self._future_routes:
@@ -310,12 +328,15 @@ def register(self, app, options):
)
name = app._generate_name(future.name)
+ host = future.host or self.host
+ if isinstance(host, list):
+ host = tuple(host)
apply_route = FutureRoute(
future.handler,
uri[1:] if uri.startswith("//") else uri,
future.methods,
- future.host or self.host,
+ host,
strict_slashes,
future.stream,
version,
@@ -329,6 +350,10 @@ def register(self, app, options):
error_format,
)
+ if (self, apply_route) in app._future_registry:
+ continue
+
+ registered.add(apply_route)
route = app._apply_route(apply_route)
operation = (
routes.extend if isinstance(route, list) else routes.append
@@ -340,6 +365,11 @@ def register(self, app, options):
# Prepend the blueprint URI prefix if available
uri = url_prefix + future.uri if url_prefix else future.uri
apply_route = FutureStatic(uri, *future[1:])
+
+ if (self, apply_route) in app._future_registry:
+ continue
+
+ registered.add(apply_route)
route = app._apply_static(apply_route)
routes.append(route)
@@ -348,30 +378,51 @@ def register(self, app, options):
if route_names:
# Middleware
for future in self._future_middleware:
+ if (self, future) in app._future_registry:
+ continue
middleware.append(app._apply_middleware(future, route_names))
# Exceptions
for future in self._future_exceptions:
+ if (self, future) in app._future_registry:
+ continue
exception_handlers.append(
app._apply_exception_handler(future, route_names)
)
# Event listeners
- for listener in self._future_listeners:
- listeners[listener.event].append(app._apply_listener(listener))
+ for future in self._future_listeners:
+ if (self, future) in app._future_registry:
+ continue
+ listeners[future.event].append(app._apply_listener(future))
# Signals
- for signal in self._future_signals:
- signal.condition.update({"blueprint": self.name})
- app._apply_signal(signal)
-
- self.routes = [route for route in routes if isinstance(route, Route)]
- self.websocket_routes = [
+ for future in self._future_signals:
+ if (self, future) in app._future_registry:
+ continue
+ future.condition.update({"blueprint": self.name})
+ app._apply_signal(future)
+
+ self.routes += [route for route in routes if isinstance(route, Route)]
+ self.websocket_routes += [
route for route in self.routes if route.ctx.websocket
]
- self.middlewares = middleware
- self.exceptions = exception_handlers
- self.listeners = dict(listeners)
+ self.middlewares += middleware
+ self.exceptions += exception_handlers
+ self.listeners.update(dict(listeners))
+
+ if self.registered:
+ self.register_futures(
+ self.apps,
+ self,
+ chain(
+ registered,
+ self._future_middleware,
+ self._future_exceptions,
+ self._future_listeners,
+ self._future_signals,
+ ),
+ )
async def dispatch(self, *args, **kwargs):
condition = kwargs.pop("condition", {})
@@ -403,3 +454,10 @@ def _extract_value(*values):
value = v
break
return value
+
+ @staticmethod
+ def register_futures(
+ apps: Set[Sanic], bp: Blueprint, futures: Sequence[Tuple[Any, ...]]
+ ):
+ for app in apps:
+ app._future_registry.update(set((bp, item) for item in futures))
diff --git a/sanic/models/futures.py b/sanic/models/futures.py
--- a/sanic/models/futures.py
+++ b/sanic/models/futures.py
@@ -60,3 +60,7 @@ class FutureSignal(NamedTuple):
handler: SignalHandler
event: str
condition: Optional[Dict[str, str]]
+
+
+class FutureRegistry(set):
+ ...
| diff --git a/tests/test_blueprint_copy.py b/tests/test_blueprint_copy.py
--- a/tests/test_blueprint_copy.py
+++ b/tests/test_blueprint_copy.py
@@ -1,6 +1,4 @@
-from copy import deepcopy
-
-from sanic import Blueprint, Sanic, blueprints, response
+from sanic import Blueprint, Sanic
from sanic.response import text
diff --git a/tests/test_blueprints.py b/tests/test_blueprints.py
--- a/tests/test_blueprints.py
+++ b/tests/test_blueprints.py
@@ -1088,3 +1088,31 @@ def test_bp_set_attribute_warning():
"and will be removed in version 21.12. You should change your "
"Blueprint instance to use instance.ctx.foo instead."
)
+
+
+def test_early_registration(app):
+ assert len(app.router.routes) == 0
+
+ bp = Blueprint("bp")
+
+ @bp.get("/one")
+ async def one(_):
+ return text("one")
+
+ app.blueprint(bp)
+
+ assert len(app.router.routes) == 1
+
+ @bp.get("/two")
+ async def two(_):
+ return text("two")
+
+ @bp.get("/three")
+ async def three(_):
+ return text("three")
+
+ assert len(app.router.routes) == 3
+
+ for path in ("one", "two", "three"):
+ _, response = app.test_client.get(f"/{path}")
+ assert response.text == path
| log a warning or error when adding something to blueprint after adding it to application
**Is your feature request related to a problem? Please describe.**
When I add a blueprint to the Sanic app and then add routes to it, those routes don't work. I have to add the blueprint to the app after adding routes, but there is no indication that I'm doing something wrong if I do it the wrong way.
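For illustration, a minimal sketch of the ordering that silently drops routes:
```python
from sanic import Blueprint, Sanic
from sanic.response import text

app = Sanic("ordering_demo")
bp = Blueprint("bp")

app.blueprint(bp)  # blueprint registered before any routes exist


@bp.get("/later")
async def later(request):
    # Added after registration: the route 404s and nothing warns about it.
    return text("never reachable")
```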
**Describe the solution you'd like**
Log a warning or error message telling the programmer that they added routes to a blueprint after adding it to the app, and that those routes will not work; or straight up throw an error.
| :100:
This should be the case for any Blueprint addition (routes, listeners, middleware, exception handlers, signals).
I might give this one a try tomorrow or on Monday
Or better yet, make the blueprint changes work even after added to an app, if doing so is feasible.
> Or better yet, make the blueprint changes work even after added to an app, if doing so is feasible.
This could be a bit messy, and I believe would require a bit of a refactor from how the decorators work. I think it also breaks the principle of what the Blueprint is, but perhaps could be doable with something like this:
```python
def route(self, *args, **kwargs):
kwargs["apply"] = False
sup = super().route
def wrapper(handler):
retval = sup(*args, **kwargs)(handler)
for app in self.apps:
self.register(app, {})
return retval
return wrapper
```
If we are going this direction, then we need to definitely beef up the unit testing around this, and probably make all of these methods (route, static, middleware, exception, signal) perhaps a little more DRY.
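For what it's worth, the `lazy` wrapper in the diff above ends up making late additions just work; the behavior the new `test_early_registration` test asserts looks roughly like this:
```python
from sanic import Blueprint, Sanic
from sanic.response import text

app = Sanic("late_registration")
bp = Blueprint("bp")


@bp.get("/one")
async def one(request):
    return text("one")


app.blueprint(bp)  # "/one" is registered here


@bp.get("/two")
async def two(request):
    # Added after app.blueprint(bp), but now picked up as well.
    return text("two")
```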
@prryplatypus I am going to take this one myself. I hope you do not mind. | 2021-10-03T13:30:32 |
sanic-org/sanic | 2,268 | sanic-org__sanic-2268 | [
"2289"
] | cde02b5936838e7a1574ba094e44d987176848d9 | diff --git a/sanic/http.py b/sanic/http.py
--- a/sanic/http.py
+++ b/sanic/http.py
@@ -105,7 +105,6 @@ def __init__(self, protocol):
self.keep_alive = True
self.stage: Stage = Stage.IDLE
self.dispatch = self.protocol.app.dispatch
- self.init_for_request()
def init_for_request(self):
"""Init/reset all per-request variables."""
@@ -129,14 +128,20 @@ async def http1(self):
"""
HTTP 1.1 connection handler
"""
- while True: # As long as connection stays keep-alive
+ # Handle requests while the connection stays reusable
+ while self.keep_alive and self.stage is Stage.IDLE:
+ self.init_for_request()
+ # Wait for incoming bytes (in IDLE stage)
+ if not self.recv_buffer:
+ await self._receive_more()
+ self.stage = Stage.REQUEST
try:
# Receive and handle a request
- self.stage = Stage.REQUEST
self.response_func = self.http1_response_header
await self.http1_request_header()
+ self.stage = Stage.HANDLER
self.request.conn_info = self.protocol.conn_info
await self.protocol.request_handler(self.request)
@@ -187,16 +192,6 @@ async def http1(self):
if self.response:
self.response.stream = None
- # Exit and disconnect if no more requests can be taken
- if self.stage is not Stage.IDLE or not self.keep_alive:
- break
-
- self.init_for_request()
-
- # Wait for the next request
- if not self.recv_buffer:
- await self._receive_more()
-
async def http1_request_header(self): # no cov
"""
Receive and parse request header into self.request.
@@ -299,7 +294,6 @@ async def http1_request_header(self): # no cov
# Remove header and its trailing CRLF
del buf[: pos + 4]
- self.stage = Stage.HANDLER
self.request, request.stream = request, self
self.protocol.state["requests_count"] += 1
| diff --git a/tests/test_request_timeout.py b/tests/test_request_timeout.py
deleted file mode 100644
--- a/tests/test_request_timeout.py
+++ /dev/null
@@ -1,109 +0,0 @@
-import asyncio
-
-import httpcore
-import httpx
-import pytest
-
-from sanic_testing.testing import SanicTestClient
-
-from sanic import Sanic
-from sanic.response import text
-
-
-class DelayableHTTPConnection(httpcore._async.connection.AsyncHTTPConnection):
- async def arequest(self, *args, **kwargs):
- await asyncio.sleep(2)
- return await super().arequest(*args, **kwargs)
-
- async def _open_socket(self, *args, **kwargs):
- retval = await super()._open_socket(*args, **kwargs)
- if self._request_delay:
- await asyncio.sleep(self._request_delay)
- return retval
-
-
-class DelayableSanicConnectionPool(httpcore.AsyncConnectionPool):
- def __init__(self, request_delay=None, *args, **kwargs):
- self._request_delay = request_delay
- super().__init__(*args, **kwargs)
-
- async def _add_to_pool(self, connection, timeout):
- connection.__class__ = DelayableHTTPConnection
- connection._request_delay = self._request_delay
- await super()._add_to_pool(connection, timeout)
-
-
-class DelayableSanicSession(httpx.AsyncClient):
- def __init__(self, request_delay=None, *args, **kwargs) -> None:
- transport = DelayableSanicConnectionPool(request_delay=request_delay)
- super().__init__(transport=transport, *args, **kwargs)
-
-
-class DelayableSanicTestClient(SanicTestClient):
- def __init__(self, app, request_delay=None):
- super().__init__(app)
- self._request_delay = request_delay
- self._loop = None
-
- def get_new_session(self):
- return DelayableSanicSession(request_delay=self._request_delay)
-
-
[email protected]
-def request_no_timeout_app():
- app = Sanic("test_request_no_timeout")
- app.config.REQUEST_TIMEOUT = 0.6
-
- @app.route("/1")
- async def handler2(request):
- return text("OK")
-
- return app
-
-
[email protected]
-def request_timeout_default_app():
- app = Sanic("test_request_timeout_default")
- app.config.REQUEST_TIMEOUT = 0.6
-
- @app.route("/1")
- async def handler1(request):
- return text("OK")
-
- @app.websocket("/ws1")
- async def ws_handler1(request, ws):
- await ws.send("OK")
-
- return app
-
-
-def test_default_server_error_request_timeout(request_timeout_default_app):
- client = DelayableSanicTestClient(request_timeout_default_app, 2)
- _, response = client.get("/1")
- assert response.status == 408
- assert "Request Timeout" in response.text
-
-
-def test_default_server_error_request_dont_timeout(request_no_timeout_app):
- client = DelayableSanicTestClient(request_no_timeout_app, 0.2)
- _, response = client.get("/1")
- assert response.status == 200
- assert response.text == "OK"
-
-
-def test_default_server_error_websocket_request_timeout(
- request_timeout_default_app,
-):
-
- headers = {
- "Upgrade": "websocket",
- "Connection": "upgrade",
- "Sec-WebSocket-Key": "dGhlIHNhbXBsZSBub25jZQ==",
- "Sec-WebSocket-Version": "13",
- }
-
- client = DelayableSanicTestClient(request_timeout_default_app, 2)
- _, response = client.get("/ws1", headers=headers)
-
- assert response.status == 408
- assert "Request Timeout" in response.text
diff --git a/tests/test_timeout_logic.py b/tests/test_timeout_logic.py
--- a/tests/test_timeout_logic.py
+++ b/tests/test_timeout_logic.py
@@ -26,6 +26,7 @@ def protocol(app, mock_transport):
protocol = HttpProtocol(loop=loop, app=app)
protocol.connection_made(mock_transport)
protocol._setup_connection()
+ protocol._http.init_for_request()
protocol._task = Mock(spec=asyncio.Task)
protocol._task.cancel = Mock()
return protocol
| Exception when stopping the server.
**Describe the bug**
After accessing the API with a browser such as Chrome and then stopping the server, the shutdown blocks for a moment and an exception occurs. This reproduces 100% of the time.
```shell
[2021-10-25 15:21:56 +0800] [1879] [INFO] Goin' Fast @ http://127.0.0.1:8000
[2021-10-25 15:21:56 +0800] [1879] [INFO] Starting worker [1879]
[2021-10-25 15:22:00 +0800] - (sanic.access)[INFO][127.0.0.1:51319]: GET http://127.0.0.1:8000/ 200 13
[2021-10-25 15:22:00 +0800] - (sanic.access)[INFO][127.0.0.1:51319]: GET http://127.0.0.1:8000/favicon.ico 404 673
^C[2021-10-25 15:22:03 +0800] [1879] [INFO] Stopping worker [1879]
[2021-10-25 15:22:19 +0800] [1879] [ERROR] protocol.connection_task uncaught
Traceback (most recent call last):
File "/Users/zeb/miniconda3/lib/python3.9/site-packages/sanic/http.py", line 138, in http1
await self.http1_request_header()
File "http1_request_header", line 18, in http1_request_header
ServerError,
File "/Users/zeb/miniconda3/lib/python3.9/site-packages/sanic/server/protocols/base_protocol.py", line 82, in receive_more
await self._data_received.wait()
File "/Users/zeb/miniconda3/lib/python3.9/asyncio/locks.py", line 226, in wait
await fut
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "connection_task", line 15, in connection_task
from sanic.http import Http, Stage
File "/Users/zeb/miniconda3/lib/python3.9/site-packages/sanic/http.py", line 153, in http1
f"Request: {self.request.method} {self.request.url} "
AttributeError: 'NoneType' object has no attribute 'method'
[2021-10-25 15:22:19 +0800] [1879] [INFO] Server Stopped
```
**Code snippet**
```python
from sanic import Sanic
from sanic.response import text
app = Sanic("MyHelloWorldApp")
@app.get("/")
async def hello_world(request):
return text("Hello, world.")
app.run()
```
**Expected behavior**
The server should exit gracefully.
**Environment (please complete the following information):**
- OS: MacOS 10.15.7
- Version: sanic-21.9.1
| 2021-10-06T00:06:46 |
|
sanic-org/sanic | 2285 | sanic-org__sanic-2285 | [
"2284"
] | 57e98b62b30b51d83429e985f3afbd61a7fe09d4 | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -1337,7 +1337,9 @@ def _helper(
if unix:
logger.info(f"Goin' Fast @ {unix} {proto}://...")
else:
- logger.info(f"Goin' Fast @ {proto}://{host}:{port}")
+ # colon(:) is legal for a host only in an ipv6 address
+ display_host = f"[{host}]" if ":" in host else host
+ logger.info(f"Goin' Fast @ {proto}://{display_host}:{port}")
debug_mode = "enabled" if self.debug else "disabled"
reload_mode = "enabled" if auto_reload else "disabled"
| diff --git a/tests/test_cli.py b/tests/test_cli.py
--- a/tests/test_cli.py
+++ b/tests/test_cli.py
@@ -62,6 +62,57 @@ def test_host_port(cmd):
assert firstline == b"Goin' Fast @ http://localhost:9999"
[email protected](
+ "cmd",
+ (
+ ("--host=127.0.0.127", "--port=9999"),
+ ("-H", "127.0.0.127", "-p", "9999"),
+ ),
+)
+def test_host_port(cmd):
+ command = ["sanic", "fake.server.app", *cmd]
+ out, err, exitcode = capture(command)
+ lines = out.split(b"\n")
+ firstline = lines[6]
+
+ assert exitcode != 1
+ assert firstline == b"Goin' Fast @ http://127.0.0.127:9999"
+
+
[email protected](
+ "cmd",
+ (
+ ("--host=::", "--port=9999"),
+ ("-H", "::", "-p", "9999"),
+ ),
+)
+def test_host_port(cmd):
+ command = ["sanic", "fake.server.app", *cmd]
+ out, err, exitcode = capture(command)
+ lines = out.split(b"\n")
+ firstline = lines[6]
+
+ assert exitcode != 1
+ assert firstline == b"Goin' Fast @ http://[::]:9999"
+
+
[email protected](
+ "cmd",
+ (
+ ("--host=::1", "--port=9999"),
+ ("-H", "::1", "-p", "9999"),
+ ),
+)
+def test_host_port(cmd):
+ command = ["sanic", "fake.server.app", *cmd]
+ out, err, exitcode = capture(command)
+ lines = out.split(b"\n")
+ firstline = lines[6]
+
+ assert exitcode != 1
+ assert firstline == b"Goin' Fast @ http://[::1]:9999"
+
+
@pytest.mark.parametrize(
"num,cmd",
(
| Goin' Fast URL IPv6 address is not bracketed
Sanic says:
```
sanic myprogram.app -H ::
Goin' Fast @ http://:::8000
```
The correct formatting for IPv6 would be:
```
Goin' Fast @ http://[::]:8000
```
Fixing the Goin' fast banner in `sanic/app.py` would be an easy enough task for someone wishing to start hacking Sanic. Existing code from `sanic/models/server_types.py` class `ConnInfo` could be useful, as there already is handling for adding brackets to IPv6 addresses.
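For illustration, the bracketing rule boils down to a one-liner; a minimal sketch (the helper name is hypothetical, and it assumes a bare host string without a port or existing brackets):

```python
def display_host(host: str) -> str:
    # A colon is legal in a host only when it is an IPv6 literal,
    # so wrap such hosts in brackets before building the URL.
    return f"[{host}]" if ":" in host else host


assert display_host("127.0.0.1") == "127.0.0.1"
assert display_host("::1") == "[::1]"
print(f"Goin' Fast @ http://{display_host('::')}:8000")  # http://[::]:8000
```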
| 2021-10-24T13:57:15 |
|
sanic-org/sanic | 2299 | sanic-org__sanic-2299 | [
"2297"
] | 9c576c74db04754dd2907b7c7ef3f83bb29c3518 | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -775,6 +775,14 @@ async def handle_exception(
if request.stream:
response = request.stream.response
if isinstance(response, BaseHTTPResponse):
+ await self.dispatch(
+ "http.lifecycle.response",
+ inline=True,
+ context={
+ "request": request,
+ "response": response,
+ },
+ )
await response.send(end_stream=True)
else:
raise ServerError(
| http.lifecycle.response should be dispatched in exception handler
Currently the `http.lifecycle.response` is only dispatched from `handle_request`. It also should be dispatched from `handle_exception`.
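A minimal sketch of why this matters to signal consumers: a listener like the one below only fires for error responses (404s, 500s, and so on) once `handle_exception` dispatches the event as well. The handler body is illustrative.

```python
from sanic import Sanic
from sanic.log import logger

app = Sanic("SignalDemo")


@app.signal("http.lifecycle.response")
async def log_response(request, response):
    # Without the dispatch in handle_exception, this listener is skipped
    # for responses produced by error handlers.
    logger.info("%s %s -> %s", request.method, request.path, response.status)
```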
| 2021-11-05T15:30:20 |
||
sanic-org/sanic | 2304 | sanic-org__sanic-2304 | [
"2267"
] | abe062b371b971ab39c4e4e5f3f1a2f9e6d8d904 | diff --git a/sanic/app.py b/sanic/app.py
--- a/sanic/app.py
+++ b/sanic/app.py
@@ -1,5 +1,6 @@
from __future__ import annotations
+import asyncio
import logging
import logging.config
import os
@@ -11,6 +12,7 @@
AbstractEventLoop,
CancelledError,
Protocol,
+ Task,
ensure_future,
get_event_loop,
wait_for,
@@ -125,6 +127,7 @@ class Sanic(BaseSanic, metaclass=TouchUpMeta):
"_future_signals",
"_future_statics",
"_state",
+ "_task_registry",
"_test_client",
"_test_manager",
"asgi",
@@ -188,17 +191,22 @@ def __init__(
"load_env or env_prefix"
)
+ self.config: Config = config or Config(
+ load_env=load_env,
+ env_prefix=env_prefix,
+ )
+
self._asgi_client: Any = None
- self._test_client: Any = None
- self._test_manager: Any = None
self._blueprint_order: List[Blueprint] = []
self._delayed_tasks: List[str] = []
self._future_registry: FutureRegistry = FutureRegistry()
self._state: ApplicationState = ApplicationState(app=self)
+ self._task_registry: Dict[str, Task] = {}
+ self._test_client: Any = None
+ self._test_manager: Any = None
+ self.asgi = False
+ self.auto_reload = False
self.blueprints: Dict[str, Blueprint] = {}
- self.config: Config = config or Config(
- load_env=load_env, env_prefix=env_prefix
- )
self.configure_logging: bool = configure_logging
self.ctx: Any = ctx or SimpleNamespace()
self.debug = False
@@ -250,32 +258,6 @@ def loop(self):
# Registration
# -------------------------------------------------------------------- #
- def add_task(
- self,
- task: Union[Future[Any], Coroutine[Any, Any, Any], Awaitable[Any]],
- ) -> None:
- """
- Schedule a task to run later, after the loop has started.
- Different from asyncio.ensure_future in that it does not
- also return a future, and the actual ensure_future call
- is delayed until before server start.
-
- `See user guide re: background tasks
- <https://sanicframework.org/guide/basics/tasks.html#background-tasks>`__
-
- :param task: future, couroutine or awaitable
- """
- try:
- loop = self.loop # Will raise SanicError if loop is not started
- self._loop_add_task(task, self, loop)
- except SanicException:
- task_name = f"sanic.delayed_task.{hash(task)}"
- if not self._delayed_tasks:
- self.after_server_start(partial(self.dispatch_delayed_tasks))
-
- self.signal(task_name)(partial(self.run_delayed_task, task=task))
- self._delayed_tasks.append(task_name)
-
def register_listener(
self, listener: ListenerType[SanicVar], event: str
) -> ListenerType[SanicVar]:
@@ -1183,6 +1165,7 @@ def stop(self):
This kills the Sanic
"""
if not self.is_stopping:
+ self.shutdown_tasks(timeout=0)
self.is_stopping = True
get_event_loop().stop()
@@ -1456,7 +1439,29 @@ def _build_endpoint_name(self, *parts):
return ".".join(parts)
@classmethod
- def _prep_task(cls, task, app, loop):
+ def _cancel_websocket_tasks(cls, app, loop):
+ for task in app.websocket_tasks:
+ task.cancel()
+
+ @staticmethod
+ async def _listener(
+ app: Sanic, loop: AbstractEventLoop, listener: ListenerType
+ ):
+ maybe_coro = listener(app, loop)
+ if maybe_coro and isawaitable(maybe_coro):
+ await maybe_coro
+
+ # -------------------------------------------------------------------- #
+ # Task management
+ # -------------------------------------------------------------------- #
+
+ @classmethod
+ def _prep_task(
+ cls,
+ task,
+ app,
+ loop,
+ ):
if callable(task):
try:
task = task(app)
@@ -1466,14 +1471,22 @@ def _prep_task(cls, task, app, loop):
return task
@classmethod
- def _loop_add_task(cls, task, app, loop):
+ def _loop_add_task(
+ cls,
+ task,
+ app,
+ loop,
+ *,
+ name: Optional[str] = None,
+ register: bool = True,
+ ) -> Task:
prepped = cls._prep_task(task, app, loop)
- loop.create_task(prepped)
+ task = loop.create_task(prepped, name=name)
- @classmethod
- def _cancel_websocket_tasks(cls, app, loop):
- for task in app.websocket_tasks:
- task.cancel()
+ if name and register:
+ app._task_registry[name] = task
+
+ return task
@staticmethod
async def dispatch_delayed_tasks(app, loop):
@@ -1486,13 +1499,132 @@ async def run_delayed_task(app, loop, task):
prepped = app._prep_task(task, app, loop)
await prepped
- @staticmethod
- async def _listener(
- app: Sanic, loop: AbstractEventLoop, listener: ListenerType
+ def add_task(
+ self,
+ task: Union[Future[Any], Coroutine[Any, Any, Any], Awaitable[Any]],
+ *,
+ name: Optional[str] = None,
+ register: bool = True,
+ ) -> Optional[Task]:
+ """
+ Schedule a task to run later, after the loop has started.
+ Different from asyncio.ensure_future in that it does not
+ also return a future, and the actual ensure_future call
+ is delayed until before server start.
+
+ `See user guide re: background tasks
+ <https://sanicframework.org/guide/basics/tasks.html#background-tasks>`__
+
+ :param task: future, couroutine or awaitable
+ """
+ if name and sys.version_info == (3, 7):
+ name = None
+ error_logger.warning(
+ "Cannot set a name for a task when using Python 3.7. Your "
+ "task will be created without a name."
+ )
+ try:
+ loop = self.loop # Will raise SanicError if loop is not started
+ return self._loop_add_task(
+ task, self, loop, name=name, register=register
+ )
+ except SanicException:
+ task_name = f"sanic.delayed_task.{hash(task)}"
+ if not self._delayed_tasks:
+ self.after_server_start(partial(self.dispatch_delayed_tasks))
+
+ if name:
+ raise RuntimeError(
+ "Cannot name task outside of a running application"
+ )
+
+ self.signal(task_name)(partial(self.run_delayed_task, task=task))
+ self._delayed_tasks.append(task_name)
+ return None
+
+ def get_task(
+ self, name: str, *, raise_exception: bool = True
+ ) -> Optional[Task]:
+ if sys.version_info == (3, 7):
+ raise RuntimeError(
+ "This feature is only supported on using Python 3.8+."
+ )
+ try:
+ return self._task_registry[name]
+ except KeyError:
+ if raise_exception:
+ raise SanicException(
+ f'Registered task named "{name}" not found.'
+ )
+ return None
+
+ async def cancel_task(
+ self,
+ name: str,
+ msg: Optional[str] = None,
+ *,
+ raise_exception: bool = True,
+ ) -> None:
+ if sys.version_info == (3, 7):
+ raise RuntimeError(
+ "This feature is only supported on using Python 3.8+."
+ )
+ task = self.get_task(name, raise_exception=raise_exception)
+ if task and not task.cancelled():
+ args: Tuple[str, ...] = ()
+ if msg:
+ if sys.version_info >= (3, 9):
+ args = (msg,)
+ else:
+ raise RuntimeError(
+ "Cancelling a task with a message is only supported "
+ "on Python 3.9+."
+ )
+ task.cancel(*args)
+ try:
+ await task
+ except CancelledError:
+ ...
+
+ def purge_tasks(self):
+ if sys.version_info == (3, 7):
+ raise RuntimeError(
+ "This feature is only supported on using Python 3.8+."
+ )
+ for task in self.tasks:
+ if task.done() or task.cancelled():
+ name = task.get_name()
+ self._task_registry[name] = None
+
+ self._task_registry = {
+ k: v for k, v in self._task_registry.items() if v is not None
+ }
+
+ def shutdown_tasks(
+ self, timeout: Optional[float] = None, increment: float = 0.1
):
- maybe_coro = listener(app, loop)
- if maybe_coro and isawaitable(maybe_coro):
- await maybe_coro
+ if sys.version_info == (3, 7):
+ raise RuntimeError(
+ "This feature is only supported on using Python 3.8+."
+ )
+ for task in self.tasks:
+ task.cancel()
+
+ if timeout is None:
+ timeout = self.config.GRACEFUL_SHUTDOWN_TIMEOUT
+
+ while len(self._task_registry) and timeout:
+ self.loop.run_until_complete(asyncio.sleep(increment))
+ self.purge_tasks()
+ timeout -= increment
+
+ @property
+ def tasks(self):
+ if sys.version_info == (3, 7):
+ raise RuntimeError(
+ "This feature is only supported on using Python 3.8+."
+ )
+ return iter(self._task_registry.values())
# -------------------------------------------------------------------- #
# ASGI
diff --git a/sanic/server/runners.py b/sanic/server/runners.py
--- a/sanic/server/runners.py
+++ b/sanic/server/runners.py
@@ -1,5 +1,7 @@
from __future__ import annotations
+import sys
+
from ssl import SSLContext
from typing import TYPE_CHECKING, Dict, Optional, Type, Union
@@ -174,6 +176,9 @@ def serve(
loop.run_until_complete(asyncio.sleep(0.1))
start_shutdown = start_shutdown + 0.1
+ if sys.version_info > (3, 7):
+ app.shutdown_tasks(graceful - start_shutdown)
+
# Force close non-idle connection after waiting for
# graceful_shutdown_timeout
for conn in connections:
| diff --git a/tests/test_create_task.py b/tests/test_create_task.py
--- a/tests/test_create_task.py
+++ b/tests/test_create_task.py
@@ -1,7 +1,11 @@
import asyncio
+import sys
from threading import Event
+import pytest
+
+from sanic.exceptions import SanicException
from sanic.response import text
@@ -48,3 +52,41 @@ async def coro(app):
_, response = app.test_client.get("/")
assert response.text == "test_create_task_with_app_arg"
+
+
[email protected](sys.version_info < (3, 8), reason="Not supported in 3.7")
+def test_create_named_task(app):
+ async def dummy():
+ ...
+
+ @app.before_server_start
+ async def setup(app, _):
+ app.add_task(dummy, name="dummy_task")
+
+ @app.after_server_start
+ async def stop(app, _):
+ task = app.get_task("dummy_task")
+
+ assert app._task_registry
+ assert isinstance(task, asyncio.Task)
+
+ assert task.get_name() == "dummy_task"
+
+ app.stop()
+
+ app.run()
+
+
[email protected](sys.version_info < (3, 8), reason="Not supported in 3.7")
+def test_create_named_task_fails_outside_app(app):
+ async def dummy():
+ ...
+
+ message = "Cannot name task outside of a running application"
+ with pytest.raises(RuntimeError, match=message):
+ app.add_task(dummy, name="dummy_task")
+ assert not app._task_registry
+
+ message = 'Registered task named "dummy_task" not found.'
+ with pytest.raises(SanicException, match=message):
+ app.get_task("dummy_task")
diff --git a/tests/test_tasks.py b/tests/test_tasks.py
new file mode 100644
--- /dev/null
+++ b/tests/test_tasks.py
@@ -0,0 +1,91 @@
+import asyncio
+import sys
+
+from asyncio.tasks import Task
+from unittest.mock import Mock, call
+
+import pytest
+
+from sanic.app import Sanic
+from sanic.response import empty
+
+
+pytestmark = pytest.mark.asyncio
+
+
+async def dummy(n=0):
+ for _ in range(n):
+ await asyncio.sleep(1)
+ return True
+
+
[email protected](autouse=True)
+def mark_app_running(app):
+ app.is_running = True
+
+
[email protected](sys.version_info < (3, 8), reason="Not supported in 3.7")
+async def test_add_task_returns_task(app: Sanic):
+ task = app.add_task(dummy())
+
+ assert isinstance(task, Task)
+ assert len(app._task_registry) == 0
+
+
[email protected](sys.version_info < (3, 8), reason="Not supported in 3.7")
+async def test_add_task_with_name(app: Sanic):
+ task = app.add_task(dummy(), name="dummy")
+
+ assert isinstance(task, Task)
+ assert len(app._task_registry) == 1
+ assert task is app.get_task("dummy")
+
+ for task in app.tasks:
+ assert task in app._task_registry.values()
+
+
[email protected](sys.version_info < (3, 8), reason="Not supported in 3.7")
+async def test_cancel_task(app: Sanic):
+ task = app.add_task(dummy(3), name="dummy")
+
+ assert task
+ assert not task.done()
+ assert not task.cancelled()
+
+ await asyncio.sleep(0.1)
+
+ assert not task.done()
+ assert not task.cancelled()
+
+ await app.cancel_task("dummy")
+
+ assert task.cancelled()
+
+
[email protected](sys.version_info < (3, 8), reason="Not supported in 3.7")
+async def test_purge_tasks(app: Sanic):
+ app.add_task(dummy(3), name="dummy")
+
+ await app.cancel_task("dummy")
+
+ assert len(app._task_registry) == 1
+
+ app.purge_tasks()
+
+ assert len(app._task_registry) == 0
+
+
[email protected](sys.version_info < (3, 8), reason="Not supported in 3.7")
+def test_shutdown_tasks_on_app_stop(app: Sanic):
+ app.shutdown_tasks = Mock()
+
+ @app.route("/")
+ async def handler(_):
+ return empty()
+
+ app.test_client.get("/")
+
+ app.shutdown_tasks.call_args == [
+ call(timeout=0),
+ call(15.0),
+ ]
| app.stop() does not run tasks to completion
Stopping the app instantly drops all existing tasks, meaning that try-finally sections, context manager exits etc. do not get executed. This is bad. Tasks should get cancelled and then run to completion prior to stopping the loop.
I suppose one should use `loop.run_until_complete()` if doing such manual management of asyncio loops. The modern way, of course, is to remove the explicit loop passing and handling entirely, as asyncio has done in the last few Python versions, but that would be a huge undertaking for Sanic. (Deprecating and removing loop arguments on public APIs would be a start, though.)
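Independent of Sanic, the pattern being asked for is cancel-then-await, so that cleanup code still runs before the loop stops; a minimal sketch:

```python
import asyncio


async def worker():
    try:
        await asyncio.sleep(3600)
    finally:
        print("cleanup ran")  # still executes when the task is cancelled


async def main():
    task = asyncio.ensure_future(worker())
    await asyncio.sleep(0)  # let the worker start and reach its await
    task.cancel()
    # Awaiting after cancel() lets the task run its finally blocks and
    # context-manager exits before the loop is stopped.
    await asyncio.gather(task, return_exceptions=True)


asyncio.run(main())
```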
| I discovered this problem in the test environment, with finally-blocks not getting executed. It would be helpful if someone confirmed whether this is intended operation or perhaps just some hack to have tests exit faster...
There has been discussion (on my phone will look later) about adding more task tracking and providing a similar graceful shutdown experience after connection task shutdown. | 2021-11-07T14:44:14 |
sanic-org/sanic | 2317 | sanic-org__sanic-2317 | [
"2314"
] | 523db190a732177eda5a641768667173ba2e2452 | diff --git a/sanic/mixins/routes.py b/sanic/mixins/routes.py
--- a/sanic/mixins/routes.py
+++ b/sanic/mixins/routes.py
@@ -191,7 +191,7 @@ def add_route(
methods: Iterable[str] = frozenset({"GET"}),
host: Optional[str] = None,
strict_slashes: Optional[bool] = None,
- version: Optional[int] = None,
+ version: Optional[Union[int, str, float]] = None,
name: Optional[str] = None,
stream: bool = False,
version_prefix: str = "/v",
@@ -256,7 +256,7 @@ def get(
uri: str,
host: Optional[str] = None,
strict_slashes: Optional[bool] = None,
- version: Optional[int] = None,
+ version: Optional[Union[int, str, float]] = None,
name: Optional[str] = None,
ignore_body: bool = True,
version_prefix: str = "/v",
@@ -293,7 +293,7 @@ def post(
host: Optional[str] = None,
strict_slashes: Optional[bool] = None,
stream: bool = False,
- version: Optional[int] = None,
+ version: Optional[Union[int, str, float]] = None,
name: Optional[str] = None,
version_prefix: str = "/v",
error_format: Optional[str] = None,
@@ -329,7 +329,7 @@ def put(
host: Optional[str] = None,
strict_slashes: Optional[bool] = None,
stream: bool = False,
- version: Optional[int] = None,
+ version: Optional[Union[int, str, float]] = None,
name: Optional[str] = None,
version_prefix: str = "/v",
error_format: Optional[str] = None,
@@ -364,7 +364,7 @@ def head(
uri: str,
host: Optional[str] = None,
strict_slashes: Optional[bool] = None,
- version: Optional[int] = None,
+ version: Optional[Union[int, str, float]] = None,
name: Optional[str] = None,
ignore_body: bool = True,
version_prefix: str = "/v",
@@ -408,7 +408,7 @@ def options(
uri: str,
host: Optional[str] = None,
strict_slashes: Optional[bool] = None,
- version: Optional[int] = None,
+ version: Optional[Union[int, str, float]] = None,
name: Optional[str] = None,
ignore_body: bool = True,
version_prefix: str = "/v",
@@ -453,7 +453,7 @@ def patch(
host: Optional[str] = None,
strict_slashes: Optional[bool] = None,
stream=False,
- version: Optional[int] = None,
+ version: Optional[Union[int, str, float]] = None,
name: Optional[str] = None,
version_prefix: str = "/v",
error_format: Optional[str] = None,
@@ -498,7 +498,7 @@ def delete(
uri: str,
host: Optional[str] = None,
strict_slashes: Optional[bool] = None,
- version: Optional[int] = None,
+ version: Optional[Union[int, str, float]] = None,
name: Optional[str] = None,
ignore_body: bool = True,
version_prefix: str = "/v",
@@ -535,7 +535,7 @@ def websocket(
host: Optional[str] = None,
strict_slashes: Optional[bool] = None,
subprotocols: Optional[List[str]] = None,
- version: Optional[int] = None,
+ version: Optional[Union[int, str, float]] = None,
name: Optional[str] = None,
apply: bool = True,
version_prefix: str = "/v",
@@ -576,7 +576,7 @@ def add_websocket_route(
host: Optional[str] = None,
strict_slashes: Optional[bool] = None,
subprotocols=None,
- version: Optional[int] = None,
+ version: Optional[Union[int, str, float]] = None,
name: Optional[str] = None,
version_prefix: str = "/v",
error_format: Optional[str] = None,
| Incorrect typehints in route shorthand methods
**Describe the bug**
Route decorators in `sanic/mixins/routes.py` have incorrect typehints for the `version` parameter.
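All three value types are accepted at runtime, which is why the narrower `Optional[int]` annotation on the shorthands is wrong; a small sketch (app and handler names are illustrative) before the permalinks below:

```python
from sanic import Sanic
from sanic.response import text

app = Sanic("VersionTypes")


# Each of these is valid at runtime, so the shorthand decorators need the
# same Optional[Union[int, str, float]] annotation that route() already has.
@app.get("/item", version=1)  # served under /v1/item
async def v1(request):
    return text("one")


@app.get("/item", version="2.1")  # served under /v2.1/item
async def v2(request):
    return text("two point one")


@app.get("/item", version=3.5)  # served under /v3.5/item
async def v3(request):
    return text("three point five")
```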
**Expected**
https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L58
**Actual**
https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L194 https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L259 https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L296 https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L332 https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L367 https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L411 https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L456 https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L501 https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L538 https://github.com/sanic-org/sanic/blob/b731a6b48c8bb6148e46df79d39a635657c9c1aa/sanic/mixins/routes.py#L579
| Hi ! I can take this one!
All yours! | 2021-11-18T17:11:47 |
|
sanic-org/sanic | 2373 | sanic-org__sanic-2373 | [
"2371"
] | 32962d1e1c9230f5436e98bb5546dd39cb88f9e3 | diff --git a/sanic/server/protocols/websocket_protocol.py b/sanic/server/protocols/websocket_protocol.py
--- a/sanic/server/protocols/websocket_protocol.py
+++ b/sanic/server/protocols/websocket_protocol.py
@@ -5,7 +5,7 @@
from websockets.typing import Subprotocol
from sanic.exceptions import ServerError
-from sanic.log import error_logger
+from sanic.log import logger
from sanic.server import HttpProtocol
from ..websockets.impl import WebsocketImplProtocol
@@ -104,7 +104,7 @@ async def websocket_handshake(
max_size=self.websocket_max_size,
subprotocols=subprotocols,
state=OPEN,
- logger=error_logger,
+ logger=logger,
)
resp: "http11.Response" = ws_conn.accept(request)
except Exception:
| Websocket logger uses sanic.log.error_logger
Hey there,
Why do we see:
sanic.error - INFO - connection open
via stderr when getting new websocket connections. Shouldn't this go to stdout?
Also, is it possible to add "middleware" so we can properly log websocket connections and disconnects? Is it possible to get a callback on websocket disconnects?
Thanks!
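One way to get connect/disconnect logging today, without waiting on a dedicated hook, is to wrap the handler body yourself; a rough sketch (route and echo logic are illustrative):

```python
from sanic import Sanic
from sanic.log import logger

app = Sanic("WSLogDemo")


@app.websocket("/feed")
async def feed(request, ws):
    logger.info("ws connected: %s", request.ip)
    try:
        while True:
            msg = await ws.recv()
            if msg is None:  # the peer closed the connection
                break
            await ws.send(msg)
    finally:
        # Reached on client disconnect, handler error, or server shutdown.
        logger.info("ws disconnected: %s", request.ip)
```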
| :thinking: What version of Sanic and websockets are you using? I do not see that log in our codebase at all.
@vgoklani I tracked it down. It is indeed a log from `websockets`. It is because of this:
[`logger=error_logger`](https://github.com/sanic-org/sanic/blob/8dfa49b6483d4730ee8e525764b764b2ee33f897/sanic/server/protocols/websocket_protocol.py#L107)
Would you be willing to make a PR for us to switch to the regular logger?
```python
from sanic.log import logger
```
thanks @ahopkins !
I've never done a PR before, so that may end up being a disaster...
@vgoklani There's no time like now to start :sunglasses:
To be honest, this one would be simple to do from the GitHub UI. Click the "fork" in the top right. Find the file and hit "edit", then just follow the prompts. Happy to help if you have questions. | 2022-01-14T01:22:20 |
|
sanic-org/sanic | 2383 | sanic-org__sanic-2383 | [
"2377"
] | b8d991420b698ca18d18c8893a219a0d0e9e55c9 | diff --git a/sanic/server/websockets/impl.py b/sanic/server/websockets/impl.py
--- a/sanic/server/websockets/impl.py
+++ b/sanic/server/websockets/impl.py
@@ -518,8 +518,12 @@ async def recv(self, timeout: Optional[float] = None) -> Optional[Data]:
)
try:
self.recv_cancel = asyncio.Future()
+ tasks = (
+ self.recv_cancel,
+ asyncio.ensure_future(self.assembler.get(timeout)),
+ )
done, pending = await asyncio.wait(
- (self.recv_cancel, self.assembler.get(timeout)),
+ tasks,
return_when=asyncio.FIRST_COMPLETED,
)
done_task = next(iter(done))
@@ -570,8 +574,12 @@ async def recv_burst(self, max_recv=256) -> Sequence[Data]:
self.can_pause = False
self.recv_cancel = asyncio.Future()
while True:
+ tasks = (
+ self.recv_cancel,
+ asyncio.ensure_future(self.assembler.get(timeout=0)),
+ )
done, pending = await asyncio.wait(
- (self.recv_cancel, self.assembler.get(timeout=0)),
+ tasks,
return_when=asyncio.FIRST_COMPLETED,
)
done_task = next(iter(done))
| Update websockets implementation for 3.11
To make the websocket implementation compatible with Python 3.11, we need to remove the deprecated usage of `asyncio.wait` with bare coroutines here: https://github.com/sanic-org/sanic/blob/4a416e177aa5037ba9436e53f531631707e87ea7/sanic/server/websockets/impl.py#L521
---
thank you both
@ahopkins - We also get this warning in the error logs when using websockets:
DeprecationWarning: The explicit passing of coroutine objects to asyncio.wait() is deprecated since Python 3.8, and scheduled for removal in Python 3.11. done, pending = await asyncio.wait(
Is this from the websockets package?
_Originally posted by @vgoklani in https://github.com/sanic-org/sanic/issues/2371#issuecomment-1013105803_
@vgoklani
> Is this from the websockets package?
This is separate and should probably be opened as a new issue. It should be a simple fix:
```python
self.recv_cancel = asyncio.Future()
tasks = (
self.recv_cancel,
# NEXT LINE IS THE CHANGE NEEDED
# TO EXPLICITLY CREATE THE TASK
asyncio.create_task(self.assembler.get(timeout)),
)
done, pending = await asyncio.wait(
tasks,
return_when=asyncio.FIRST_COMPLETED,
)
```
_Originally posted by @ahopkins in https://github.com/sanic-org/sanic/issues/2371#issuecomment-1013822496_
| Can I have this?
Done | 2022-01-18T12:38:43 |
|
sanic-org/sanic | 2415 | sanic-org__sanic-2415 | [
"2394"
] | 030987480c8d6e40d67ddfe1a4f9669b69131359 | diff --git a/sanic/exceptions.py b/sanic/exceptions.py
--- a/sanic/exceptions.py
+++ b/sanic/exceptions.py
@@ -51,6 +51,10 @@ class InvalidUsage(SanicException):
quiet = True
+class BadURL(InvalidUsage):
+ ...
+
+
class MethodNotSupported(SanicException):
"""
**Status**: 405 Method Not Allowed
diff --git a/sanic/request.py b/sanic/request.py
--- a/sanic/request.py
+++ b/sanic/request.py
@@ -30,10 +30,11 @@
from urllib.parse import parse_qs, parse_qsl, unquote, urlunparse
from httptools import parse_url # type: ignore
+from httptools.parser.errors import HttpParserInvalidURLError # type: ignore
from sanic.compat import CancelledErrors, Header
from sanic.constants import DEFAULT_HTTP_CONTENT_TYPE
-from sanic.exceptions import InvalidUsage, ServerError
+from sanic.exceptions import BadURL, InvalidUsage, ServerError
from sanic.headers import (
AcceptContainer,
Options,
@@ -129,8 +130,10 @@ def __init__(
):
self.raw_url = url_bytes
- # TODO: Content-Encoding detection
- self._parsed_url = parse_url(url_bytes)
+ try:
+ self._parsed_url = parse_url(url_bytes)
+ except HttpParserInvalidURLError:
+ raise BadURL(f"Bad URL: {url_bytes.decode()}")
self._id: Optional[Union[uuid.UUID, str, int]] = None
self._name: Optional[str] = None
self.app = app
| diff --git a/tests/test_request.py b/tests/test_request.py
--- a/tests/test_request.py
+++ b/tests/test_request.py
@@ -4,6 +4,7 @@
import pytest
from sanic import Sanic, response
+from sanic.exceptions import BadURL
from sanic.request import Request, uuid
from sanic.server import HttpProtocol
@@ -176,3 +177,17 @@ async def get(request):
"text/x-dvi; q=0.8",
"text/plain; q=0.5",
]
+
+
+def test_bad_url_parse():
+ message = "Bad URL: my.redacted-domain.com:443"
+ with pytest.raises(BadURL, match=message):
+ Request(
+ b"my.redacted-domain.com:443",
+ Mock(),
+ Mock(),
+ Mock(),
+ Mock(),
+ Mock(),
+ Mock(),
+ )
| URL parse needs better error
A weird stacktrace popped into my log out of nowhere and I have no idea what triggered it or what it means.
A paste of the output is at https://pastebin.com/uFgw9hyk because GitHub's code blocks strip the formatting.
| As discussed with @sjsadowski on Discord, we should wrap the httptools parsing with an exception with a nicer error message.
**Is your feature request related to a problem? Please describe.**
Problem: when httptools parses a URL with an incomplete or invalid scheme, the server returns a 500 error instead of a 400 BAD REQUEST.
**Describe the solution you'd like**
properly return 400 BAD REQUEST
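Condensed, the fix in the patch above wraps the parser call so a malformed URL surfaces as a 400 instead of an unhandled 500; a sketch (the standalone helper name is illustrative):

```python
from httptools import parse_url
from httptools.parser.errors import HttpParserInvalidURLError

from sanic.exceptions import InvalidUsage  # rendered as a 400 response


class BadURL(InvalidUsage):
    ...


def parse_or_400(url_bytes: bytes):
    try:
        return parse_url(url_bytes)
    except HttpParserInvalidURLError:
        raise BadURL(f"Bad URL: {url_bytes.decode()}")
```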
| 2022-03-24T17:45:05 |