repo (string, 856 values) | pull_number (int64, 3-127k) | instance_id (string, 12-58 chars) | issue_numbers (sequence, 1-5 items) | base_commit (string, 40 chars) | patch (string, 67-1.54M chars) | test_patch (string, 0-107M chars) | problem_statement (string, 3-307k chars) | hints_text (string, 0-908k chars) | created_at (timestamp[s])
---|---|---|---|---|---|---|---|---|---|
getredash/redash | 3,908 | getredash__redash-3908 | [
"2622"
] | 99bf6d122c16e55de0f86c62fa01c124c16c52b5 | diff --git a/redash/handlers/queries.py b/redash/handlers/queries.py
--- a/redash/handlers/queries.py
+++ b/redash/handlers/queries.py
@@ -112,6 +112,7 @@ def get_queries(self, search_term):
self.current_user.group_ids,
self.current_user.id,
include_drafts=True,
+ multi_byte_search=current_org.get_setting('multi_byte_search_enabled'),
)
else:
results = models.Query.all_queries(
@@ -256,6 +257,7 @@ def get_queries(self, search_term):
self.current_user.id,
include_drafts=False,
include_archived=True,
+ multi_byte_search=current_org.get_setting('multi_byte_search_enabled'),
)
else:
return models.Query.all_queries(
diff --git a/redash/models/__init__.py b/redash/models/__init__.py
--- a/redash/models/__init__.py
+++ b/redash/models/__init__.py
@@ -578,13 +578,24 @@ def outdated_queries(cls):
@classmethod
def search(cls, term, group_ids, user_id=None, include_drafts=False,
- limit=None, include_archived=False):
+ limit=None, include_archived=False, multi_byte_search=False):
all_queries = cls.all_queries(
group_ids,
user_id=user_id,
include_drafts=include_drafts,
include_archived=include_archived,
)
+
+ if multi_byte_search:
+ # Since tsvector doesn't work well with CJK languages, use `ilike` too
+ pattern = u'%{}%'.format(term)
+ return all_queries.filter(
+ or_(
+ cls.name.ilike(pattern),
+ cls.description.ilike(pattern)
+ )
+ ).order_by(Query.id).limit(limit)
+
# sort the result using the weight as defined in the search vector column
return all_queries.search(term, sort=True).limit(limit)
diff --git a/redash/settings/organization.py b/redash/settings/organization.py
--- a/redash/settings/organization.py
+++ b/redash/settings/organization.py
@@ -20,6 +20,7 @@
TIME_FORMAT = os.environ.get("REDASH_TIME_FORMAT", "HH:mm")
INTEGER_FORMAT = os.environ.get("REDASH_INTEGER_FORMAT", "0,0")
FLOAT_FORMAT = os.environ.get("REDASH_FLOAT_FORMAT", "0,0.00")
+MULTI_BYTE_SEARCH_ENABLED = parse_boolean(os.environ.get("MULTI_BYTE_SEARCH_ENABLED", "false"))
JWT_LOGIN_ENABLED = parse_boolean(os.environ.get("REDASH_JWT_LOGIN_ENABLED", "false"))
JWT_AUTH_ISSUER = os.environ.get("REDASH_JWT_AUTH_ISSUER", "")
@@ -41,6 +42,7 @@
"time_format": TIME_FORMAT,
"integer_format": INTEGER_FORMAT,
"float_format": FLOAT_FORMAT,
+ "multi_byte_search_enabled": MULTI_BYTE_SEARCH_ENABLED,
"auth_jwt_login_enabled": JWT_LOGIN_ENABLED,
"auth_jwt_auth_issuer": JWT_AUTH_ISSUER,
"auth_jwt_auth_public_certs_url": JWT_AUTH_PUBLIC_CERTS_URL,
| diff --git a/tests/models/test_queries.py b/tests/models/test_queries.py
--- a/tests/models/test_queries.py
+++ b/tests/models/test_queries.py
@@ -52,6 +52,17 @@ def test_search_finds_in_description(self):
self.assertIn(q2, queries)
self.assertNotIn(q3, queries)
+ def test_search_finds_in_multi_byte_name_and_description(self):
+ q1 = self.factory.create_query(name="日本語の名前テスト")
+ q2 = self.factory.create_query(description=u"日本語の説明文テスト")
+ q3 = self.factory.create_query(description=u"Testing search")
+
+ queries = Query.search(u"テスト", [self.factory.default_group.id], multi_byte_search=True)
+
+ self.assertIn(q1, queries)
+ self.assertIn(q2, queries)
+ self.assertNotIn(q3, queries)
+
def test_search_by_id_returns_query(self):
q1 = self.factory.create_query(description=u"Testing search")
q2 = self.factory.create_query(description=u"Testing searching")
| Can't search query correctly with non-ASCII chars
### Issue Summary
Can't search query correctly with non-ASCII chars.
### Steps to Reproduce
1. Make query which has non-ASCII chars name or description
2. Search with non-ASCII chars
e.g.
There is a query which has non-ASCII chars `ユーザ`.
<img width="380" alt="all_queries" src="https://user-images.githubusercontent.com/3317191/41804093-925f57a8-76cb-11e8-9a9e-4e3bb068c306.png">
Search with `ユーザ`, and no queries appear in the result.
<img width="366" alt="search_query1" src="https://user-images.githubusercontent.com/3317191/41804094-92a2f710-76cb-11e8-9f63-c64a9564613f.png">
When I search with `ユー`, it hits correctly.
<img width="315" alt="search_query2" src="https://user-images.githubusercontent.com/3317191/41804095-92d418f4-76cb-11e8-9481-c6e2ff9af210.png">
I guess that `Query.search` (introduced in #2041) changed this behavior, but I have no idea how to fix it while keeping the full-text search feature.
### Technical details:
* Redash Version: master
* Browser/OS: Version 67.0.3396.87 (Official Build) (64-bit)
* How did you install Redash: Docker
| Possibly related to #2618.
Yeah, the recently updated query full-text search is based on Postgres' built-in [textsearch](https://www.postgresql.org/docs/9.5/static/textsearch-intro.html) extension, which uses the "simple" configuration (parsers, templates, dictionaries) that only lower-cases and removes stop words from the content body while searching.
Unfortunately by default it only comes with support for [a few Indo-European languages](https://www.compose.com/articles/mastering-postgresql-tools-full-text-search-and-phrase-search/#languages) and misses others such as Korean, Japanese and Chinese (and more).
To add support for this, we'd need additional support for those languages, for example via [PGroonga](https://pgroonga.github.io/), which supports all languages but requires a 3rd-party extension. The [tutorial](https://pgroonga.github.io/tutorial/) gives an idea of how this would look, including for example the ability to just keep using `ILIKE` queries.
Alternatively we could move FTS away from Postgres altogether and switch to one of the many alternative search engines such as Elasticsearch, but that comes with a non-trivial amount of architectural changes.
> Alternatively we could move FTS away from Postgres altogether and switch to one of the many alternative search engines such as Elasticsearch, but that comes with a non-trivial amount of architectural changes.
I wouldn't want to have ES as a mandatory dependency in Redash as it will make deployments harder. But maybe we can make this functionality pluggable:
1. Have a hook for "index new content" (dashboard / query / other in the future) and "index updated content".
2. Have an interface for performing a search.
By default the two will use Postgres, but will have additional implementation using ES, Algolia, other.
It would complicate the list views a bit, since they're written right now not to differentiate between searching and just fetching the list of all items. I guess the API handlers can provide the interface to cater to that and ask the search backend to provide a list of item model IDs in the order of the search ranking, and then fetch the appropriate data model items from the database.
While I don't think it will be a huge deal, there is some overhead involved that we should probably be testing. E.g. support for pagination in the search backend would seem like a good idea.
BTW, would you consider making this something to be distributed in the Redash core, or as extensions?
The list view is the least of the complications this will create :) I'm more worried about permissions and similar concerns, data sync (between search engine and database) and other.
And yes, this can be an extension.
Hi @jezdez,
Could we bring back the naive and slow `LIKE` search as an option? Maybe an ENV variable `LEGACY_FULL_TEXT_SEARCH` or something to switch between two ways of searching?
For me, being able to search in multi-byte is far more critical than having faster and more modern tsvector textsearch.
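For illustration, here is a minimal, self-contained sketch of what such an opt-in `ILIKE` fallback looks like, mirroring the filter added in the patch above. The model and column names are stand-ins rather than the real Redash schema, and SQLAlchemy 1.4+ is assumed:
```python
from sqlalchemy import Column, Integer, Text, or_
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Query(Base):
    __tablename__ = "queries"
    id = Column(Integer, primary_key=True)
    name = Column(Text)
    description = Column(Text)

def multi_byte_filter(term):
    # A plain substring match works for CJK terms that the "simple" tsvector
    # configuration cannot split into useful lexemes.
    pattern = u"%{}%".format(term)
    return or_(Query.name.ilike(pattern), Query.description.ilike(pattern))

# Compiles to something like: lower(queries.name) LIKE lower(:name_1) OR ...
print(multi_byte_filter(u"テスト"))
```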
@deecay adding an option to enable simpler search sounds good to me. Considering the global usage of Redash, I expect this to be popular enough to put in `Organization Settings` UI. | 2019-06-18T07:20:51 |
getredash/redash | 3,952 | getredash__redash-3952 | [
"3053"
] | 8ad08a566adf1ce6fd461bf7cf11b51f12b3a48f | diff --git a/redash/models/parameterized_query.py b/redash/models/parameterized_query.py
--- a/redash/models/parameterized_query.py
+++ b/redash/models/parameterized_query.py
@@ -36,6 +36,21 @@ def dropdown_values(query_id):
return map(pluck, data["rows"])
+def join_parameter_list_values(parameters, schema):
+ updated_parameters = {}
+ for (key, value) in parameters.iteritems():
+ if isinstance(value, list):
+ definition = next((definition for definition in schema if definition["name"] == key), {})
+ multi_values_options = definition.get('multiValuesOptions', {})
+ separator = str(multi_values_options.get('separator', ','))
+ prefix = str(multi_values_options.get('prefix', ''))
+ suffix = str(multi_values_options.get('suffix', ''))
+ updated_parameters[key] = separator.join(map(lambda v: prefix + v + suffix, value))
+ else:
+ updated_parameters[key] = value
+ return updated_parameters
+
+
def _collect_key_names(nodes):
keys = []
for node in nodes._parse_tree:
@@ -92,6 +107,12 @@ def _is_date_range(obj):
return False
+def _is_value_within_options(value, dropdown_options, allow_list=False):
+ if isinstance(value, list):
+ return allow_list and set(map(unicode, value)).issubset(set(dropdown_options))
+ return unicode(value) in dropdown_options
+
+
class ParameterizedQuery(object):
def __init__(self, template, schema=None):
self.schema = schema or []
@@ -105,7 +126,7 @@ def apply(self, parameters):
raise InvalidParameterError(invalid_parameter_names)
else:
self.parameters.update(parameters)
- self.query = mustache_render(self.template, self.parameters)
+ self.query = mustache_render(self.template, join_parameter_list_values(parameters, self.schema))
return self
@@ -118,11 +139,22 @@ def _valid(self, name, value):
if not definition:
return False
+ enum_options = definition.get('enumOptions')
+ query_id = definition.get('queryId')
+ allow_multiple_values = isinstance(definition.get('multiValuesOptions'), dict)
+
+ if isinstance(enum_options, basestring):
+ enum_options = enum_options.split('\n')
+
validators = {
"text": lambda value: isinstance(value, basestring),
"number": _is_number,
- "enum": lambda value: value in definition["enumOptions"],
- "query": lambda value: unicode(value) in [v["value"] for v in dropdown_values(definition["queryId"])],
+ "enum": lambda value: _is_value_within_options(value,
+ enum_options,
+ allow_multiple_values),
+ "query": lambda value: _is_value_within_options(value,
+ [v["value"] for v in dropdown_values(query_id)],
+ allow_multiple_values),
"date": _is_date,
"datetime-local": _is_date,
"datetime-with-seconds": _is_date,
diff --git a/redash/tasks/queries.py b/redash/tasks/queries.py
--- a/redash/tasks/queries.py
+++ b/redash/tasks/queries.py
@@ -193,7 +193,7 @@ def refresh_queries():
if query.options and len(query.options.get('parameters', [])) > 0:
query_params = {p['name']: p.get('value')
for p in query.options['parameters']}
- query_text = mustache_render(query.query_text, query_params)
+ query_text = query.parameterized.apply(query_params).query
else:
query_text = query.query_text
| diff --git a/tests/models/test_parameterized_query.py b/tests/models/test_parameterized_query.py
--- a/tests/models/test_parameterized_query.py
+++ b/tests/models/test_parameterized_query.py
@@ -119,6 +119,18 @@ def test_raises_on_unlisted_enum_value_parameters(self):
with pytest.raises(InvalidParameterError):
query.apply({"bar": "shlomo"})
+ def test_raises_on_unlisted_enum_list_value_parameters(self):
+ schema = [{
+ "name": "bar",
+ "type": "enum",
+ "enumOptions": ["baz", "qux"],
+ "multiValuesOptions": {"separator": ",", "prefix": "", "suffix": ""}
+ }]
+ query = ParameterizedQuery("foo", schema)
+
+ with pytest.raises(InvalidParameterError):
+ query.apply({"bar": ["shlomo", "baz"]})
+
def test_validates_enum_parameters(self):
schema = [{"name": "bar", "type": "enum", "enumOptions": ["baz", "qux"]}]
query = ParameterizedQuery("foo {{bar}}", schema)
@@ -127,6 +139,19 @@ def test_validates_enum_parameters(self):
self.assertEquals("foo baz", query.text)
+ def test_validates_enum_list_value_parameters(self):
+ schema = [{
+ "name": "bar",
+ "type": "enum",
+ "enumOptions": ["baz", "qux"],
+ "multiValuesOptions": {"separator": ",", "prefix": "'", "suffix": "'"}
+ }]
+ query = ParameterizedQuery("foo {{bar}}", schema)
+
+ query.apply({"bar": ["qux", "baz"]})
+
+ self.assertEquals("foo 'qux','baz'", query.text)
+
@patch('redash.models.parameterized_query.dropdown_values', return_value=[{"value": "1"}])
def test_validation_accepts_integer_values_for_dropdowns(self, _):
schema = [{"name": "bar", "type": "query", "queryId": 1}]
| Support for multi select in parameters
When using a Dropdown List type of parameter, the user should be able to define the parameter to take multiple values.
This will require some additional configuration of how to serialize the values. We can probably do away with adding a checkbox for "quote each value".
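For illustration, a small self-contained sketch of how such a list value could be serialized, modeled on the `join_parameter_list_values` helper in the patch above (the sample options and values are made up):
```python
def join_list_value(values, multi_values_options):
    separator = str(multi_values_options.get("separator", ","))
    prefix = str(multi_values_options.get("prefix", ""))
    suffix = str(multi_values_options.get("suffix", ""))
    return separator.join(prefix + v + suffix for v in values)

# "Quote each value" is just a prefix/suffix of a single quote:
print(join_list_value(["qux", "baz"], {"separator": ",", "prefix": "'", "suffix": "'"}))
# -> 'qux','baz'
print(join_list_value(["1", "2", "3"], {"separator": ", "}))
# -> 1, 2, 3
```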
| 2019-07-04T14:40:59 |
|
getredash/redash | 4,189 | getredash__redash-4189 | [
"3766"
] | 780fbceba5c95ae279d0b8aee7be6fd68b7cce3d | diff --git a/redash/query_runner/jql.py b/redash/query_runner/jql.py
--- a/redash/query_runner/jql.py
+++ b/redash/query_runner/jql.py
@@ -144,7 +144,7 @@ class JiraJQL(BaseHTTPQueryRunner):
requires_authentication = True
url_title = 'JIRA URL'
username_title = 'Username'
- password_title = 'Password'
+ password_title = 'API Token'
@classmethod
def name(cls):
| JIRA setup: change password field name to "API Token"
While a password can be used there, it's not recommended and eventually will be deprecated.
| #3765 | 2019-09-27T00:20:20 |
|
getredash/redash | 4,239 | getredash__redash-4239 | [
"4223"
] | 74beed80d20d858b51b5560e7984b20d5d2c874e | diff --git a/redash/destinations/pagerduty.py b/redash/destinations/pagerduty.py
--- a/redash/destinations/pagerduty.py
+++ b/redash/destinations/pagerduty.py
@@ -12,7 +12,7 @@
class PagerDuty(BaseDestination):
KEY_STRING = '{alert_id}_{query_id}'
- DESCRIPTION_STR = u'Alert - Redash Query #{query_id}: {query_name}'
+ DESCRIPTION_STR = u'Alert: {alert_name}'
@classmethod
def enabled(cls):
@@ -29,7 +29,7 @@ def configuration_schema(cls):
},
'description': {
'type': 'string',
- 'title': 'Description for the event, defaults to query',
+ 'title': 'Description for the event, defaults to alert name',
}
},
"required": ["integration_key"]
@@ -46,7 +46,7 @@ def notify(self, alert, query, user, new_state, app, host, options):
elif options.get('description'):
default_desc = options.get('description')
else:
- default_desc = self.DESCRIPTION_STR.format(query_id=query.id, query_name=query.name)
+ default_desc = self.DESCRIPTION_STR.format(alert_name=alert.name)
incident_key = self.KEY_STRING.format(alert_id=alert.id, query_id=query.id)
data = {
| Change PagerDuty's default summary text
Currently PagerDuty's Alert destination default summary text uses the query id and name. We should change it to use the alert name, as that usually explains better what the alert is.
While #4153 implements the ability to customize the summary text, it's good to have a saner default regardless.
(If #4153 is not merged before implementing, should be implemented based on its branch)
| Hi @arikfr
I believe https://github.com/getredash/redash/blob/74beed80d20d858b51b5560e7984b20d5d2c874e/redash/destinations/pagerduty.py#L15 this needs to be changed. However, I could not find any accessors of `notify()` function (maybe a problem with my IDE), so I was unable to find the `alert` object properties. Can I assume I can get the alert name by `alert.name`?
And also, I would have to change https://github.com/getredash/redash/blob/74beed80d20d858b51b5560e7984b20d5d2c874e/redash/destinations/pagerduty.py#L49 to something like
```
self.DESCRIPTION_STR.format(alert_name=alert.name)
```
So can I assume `alert.name` will always be non-null, or do I need to add checks?
Thanks!
So, I dug more and found a few examples of `alert.name` in `redash/destination/` | 2019-10-12T07:17:22 |
|
getredash/redash | 4,254 | getredash__redash-4254 | [
"2137"
] | 02d128e7aee83d08a8188ea3e9291a4665925c67 | diff --git a/migrations/versions/1038c2174f5d_make_case_insensitive_hash_of_query_text.py b/migrations/versions/1038c2174f5d_make_case_insensitive_hash_of_query_text.py
new file mode 100644
--- /dev/null
+++ b/migrations/versions/1038c2174f5d_make_case_insensitive_hash_of_query_text.py
@@ -0,0 +1,51 @@
+"""Make case insensitive hash of query text
+
+Revision ID: 1038c2174f5d
+Revises: fd4fc850d7ea
+Create Date: 2023-07-16 23:10:12.885949
+
+"""
+from alembic import op
+import sqlalchemy as sa
+from sqlalchemy.sql import table
+
+from redash.utils import gen_query_hash
+
+# revision identifiers, used by Alembic.
+revision = '1038c2174f5d'
+down_revision = 'fd4fc850d7ea'
+branch_labels = None
+depends_on = None
+
+
+
+def change_query_hash(conn, table, query_text_to):
+ for record in conn.execute(table.select()):
+ query_text = query_text_to(record.query)
+ conn.execute(
+ table
+ .update()
+ .where(table.c.id == record.id)
+ .values(query_hash=gen_query_hash(query_text)))
+
+
+def upgrade():
+ queries = table(
+ 'queries',
+ sa.Column('id', sa.Integer, primary_key=True),
+ sa.Column('query', sa.Text),
+ sa.Column('query_hash', sa.String(length=10)))
+
+ conn = op.get_bind()
+ change_query_hash(conn, queries, query_text_to=str)
+
+
+def downgrade():
+ queries = table(
+ 'queries',
+ sa.Column('id', sa.Integer, primary_key=True),
+ sa.Column('query', sa.Text),
+ sa.Column('query_hash', sa.String(length=10)))
+
+ conn = op.get_bind()
+ change_query_hash(conn, queries, query_text_to=str.lower)
diff --git a/redash/utils/__init__.py b/redash/utils/__init__.py
--- a/redash/utils/__init__.py
+++ b/redash/utils/__init__.py
@@ -51,14 +51,14 @@ def slugify(s):
def gen_query_hash(sql):
"""Return hash of the given query after stripping all comments, line breaks
- and multiple spaces, and lower casing all text.
+ and multiple spaces.
- TODO: possible issue - the following queries will get the same id:
+ The following queries will get different ids:
1. SELECT 1 FROM table WHERE column='Value';
2. SELECT 1 FROM table where column='value';
"""
sql = COMMENTS_REGEX.sub("", sql)
- sql = "".join(sql.split()).lower()
+ sql = "".join(sql.split())
return hashlib.md5(sql.encode("utf-8")).hexdigest()
| Case insensitive parameters in Redash query result cache
### Issue Summary
Redash may display wrong results for a query with a case-sensitive parameter. It looks like the results cache ignores the letter case of parameter values.
### Steps to Reproduce
1. Create a query with global text parameter.
2. Create dashboard with visualization for this query.
3. Open dashboard with parameter set to lowercase letter for example 'test'
4. Open dashboard on a new tab with parameter 'TEST'.
In the last step you will see the visualization for parameter 'test' even though the parameter value is set to 'TEST'.
I have been able to reproduce this scenario on demo.redash.io. To reproduce do:
1. Open link: http://demo.redash.io/dashboard/ma_dashboard?p_dashboard_name=test
You should see rows from dashboard table with name `test`
2. Open link: http://demo.redash.io/dashboard/ma_dashboard?p_dashboard_name=Test
You should see rows from the dashboard table with the name `test` even though the parameter `dashboard_name` is set to `Test`. After refreshing, the visualization changes to the proper result.
### Technical details:
* Redash Version: 3.0.0+b3134
* Browser/OS: Chrome/Firefox/Opera
* How did you install Redash: used demo.redash.io
| The problem is in the [`redash.utils.gen_query_hash`](https://github.com/getredash/redash/blob/073db37cfda27a33570339b6f4e34d69ff6032bf/redash/utils/__init__.py#L47-L57) method. And it's actually documented there as a TODO item 😳
I think we can stop lower casing the query in this method. We don't need to dedup that aggressively.
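For illustration, a rough self-contained sketch of the change being discussed -- the hashing pipeline stays the same, only the lower-casing is dropped. `COMMENTS_RE` here is a simplified stand-in for Redash's `COMMENTS_REGEX`:
```python
import hashlib
import re

COMMENTS_RE = re.compile(r"(--[^\n]*)|(/\*.*?\*/)", re.DOTALL)

def query_hash(sql, lowercase=False):
    sql = COMMENTS_RE.sub("", sql)
    sql = "".join(sql.split())
    if lowercase:  # old behavior
        sql = sql.lower()
    return hashlib.md5(sql.encode("utf-8")).hexdigest()

a = "SELECT 1 FROM table WHERE column='Value';"
b = "SELECT 1 FROM table where column='value';"
print(query_hash(a, lowercase=True) == query_hash(b, lowercase=True))  # True  -> cached results collide
print(query_hash(a) == query_hash(b))                                  # False -> parameter case is preserved
```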
Hi, I've looked around at the code, reproduced the error and I think this can be my first PR.
@arikfr your suggestion is to remove the `.lower()` when we're creating the hash?
@ClaudioDavi yes, but also need a migration to recalculate the query hash for all existing queries.
Ok, thanks @arikfr ! Do you have any migration examples for me to have a look and follow through? | 2019-10-16T19:38:39 |
|
getredash/redash | 4,295 | getredash__redash-4295 | [
"4279"
] | ba413c210e317f3d0b3efb43d011f91148449684 | diff --git a/redash/models/__init__.py b/redash/models/__init__.py
--- a/redash/models/__init__.py
+++ b/redash/models/__init__.py
@@ -2,6 +2,7 @@
import calendar
import logging
import time
+import numbers
import pytz
from six import text_type
@@ -775,6 +776,43 @@ def are_favorites(cls, user, objects):
return [fav.object_id for fav in cls.query.filter(cls.object_id.in_([o.id for o in objects]), cls.object_type == object_type, cls.user_id == user)]
+OPERATORS = {
+ '>': lambda v, t: v > t,
+ '>=': lambda v, t: v >= t,
+ '<': lambda v, t: v < t,
+ '<=': lambda v, t: v <= t,
+ '==': lambda v, t: v == t,
+ '!=': lambda v, t: v != t,
+
+ # backward compatibility
+ 'greater than': lambda v, t: v > t,
+ 'less than': lambda v, t: v < t,
+ 'equals': lambda v, t: v == t,
+}
+
+
+def next_state(op, value, threshold):
+ if isinstance(value, numbers.Number) and not isinstance(value, bool):
+ try:
+ threshold = float(threshold)
+ except ValueError:
+ return Alert.UNKNOWN_STATE
+ # If it's a boolean cast to string and lower case, because upper cased
+ # boolean value is Python specific and most likely will be confusing to
+ # users.
+ elif isinstance(value, bool):
+ value = str(value).lower()
+ else:
+ value = str(value)
+
+ if op(value, threshold):
+ new_state = Alert.TRIGGERED_STATE
+ else:
+ new_state = Alert.OK_STATE
+
+ return new_state
+
+
@generic_repr('id', 'name', 'query_id', 'user_id', 'state', 'last_triggered_at', 'rearm')
class Alert(TimestampMixin, BelongsToOrgMixin, db.Model):
UNKNOWN_STATE = 'unknown'
@@ -819,28 +857,12 @@ def evaluate(self):
data = self.query_rel.latest_query_data.data
if data['rows'] and self.options['column'] in data['rows'][0]:
- operators = {
- '>': lambda v, t: v > t,
- '>=': lambda v, t: v >= t,
- '<': lambda v, t: v < t,
- '<=': lambda v, t: v <= t,
- '==': lambda v, t: v == t,
- '!=': lambda v, t: v != t,
-
- # backward compatibility
- 'greater than': lambda v, t: v > t,
- 'less than': lambda v, t: v < t,
- 'equals': lambda v, t: v == t,
- }
- should_trigger = operators.get(self.options['op'], lambda v, t: False)
+ op = OPERATORS.get(self.options['op'], lambda v, t: False)
value = data['rows'][0][self.options['column']]
threshold = self.options['value']
- if should_trigger(value, threshold):
- new_state = self.TRIGGERED_STATE
- else:
- new_state = self.OK_STATE
+ new_state = next_state(op, value, threshold)
else:
new_state = self.UNKNOWN_STATE
| diff --git a/tests/models/test_alerts.py b/tests/models/test_alerts.py
--- a/tests/models/test_alerts.py
+++ b/tests/models/test_alerts.py
@@ -1,5 +1,6 @@
+from unittest import TestCase
from tests import BaseTestCase
-from redash.models import Alert, db
+from redash.models import Alert, db, next_state, OPERATORS
from redash.utils import json_dumps
@@ -44,16 +45,20 @@ def get_results(value):
class TestAlertEvaluate(BaseTestCase):
- def create_alert(self, results, column='foo'):
+ def create_alert(self, results, column='foo', value="1"):
result = self.factory.create_query_result(data=results)
query = self.factory.create_query(latest_query_data_id=result.id)
- alert = self.factory.create_alert(query_rel=query, options={'op': 'equals', 'column': column, 'value': 1})
+ alert = self.factory.create_alert(query_rel=query, options={'op': 'equals', 'column': column, 'value': value})
return alert
def test_evaluate_triggers_alert_when_equal(self):
alert = self.create_alert(get_results(1))
self.assertEqual(alert.evaluate(), Alert.TRIGGERED_STATE)
+ def test_evaluate_number_value_and_string_threshold(self):
+ alert = self.create_alert(get_results(1), value="string")
+ self.assertEqual(alert.evaluate(), Alert.UNKNOWN_STATE)
+
def test_evaluate_return_unknown_when_missing_column(self):
alert = self.create_alert(get_results(1), column='bar')
self.assertEqual(alert.evaluate(), Alert.UNKNOWN_STATE)
@@ -61,4 +66,23 @@ def test_evaluate_return_unknown_when_missing_column(self):
def test_evaluate_return_unknown_when_empty_results(self):
results = json_dumps({'rows': [], 'columns': [{'name': 'foo', 'type': 'STRING'}]})
alert = self.create_alert(results)
- self.assertEqual(alert.evaluate(), Alert.UNKNOWN_STATE)
\ No newline at end of file
+ self.assertEqual(alert.evaluate(), Alert.UNKNOWN_STATE)
+
+
+class TestNextState(TestCase):
+ def test_numeric_value(self):
+ self.assertEqual(Alert.TRIGGERED_STATE, next_state(OPERATORS.get('=='), 1, "1"))
+ self.assertEqual(Alert.TRIGGERED_STATE, next_state(OPERATORS.get('=='), 1, "1.0"))
+
+ def test_numeric_value_and_plain_string(self):
+ self.assertEqual(Alert.UNKNOWN_STATE, next_state(OPERATORS.get('=='), 1, "string"))
+
+ def test_non_numeric_value(self):
+ self.assertEqual(Alert.OK_STATE, next_state(OPERATORS.get('=='), "1", "1.0"))
+
+ def test_string_value(self):
+ self.assertEqual(Alert.TRIGGERED_STATE, next_state(OPERATORS.get('=='), "string", "string"))
+
+ def test_boolean_value(self):
+ self.assertEqual(Alert.TRIGGERED_STATE, next_state(OPERATORS.get('=='), False, 'false'))
+ self.assertEqual(Alert.TRIGGERED_STATE, next_state(OPERATORS.get('!='), False, 'true'))
\ No newline at end of file
| Number-based alerts are broken
### Issue Summary
It looks like the alert always sees a query result as a string and not a number. So an alert configured to trigger when `field = 0` never triggers because `"0"` does not equal `0`.
The alert seems to evaluate fine for targets that are obviously strings. If the alert should trigger when `field = boyhowdy`, then the alert triggers when `field` is "boyhowdy".
### Steps to Reproduce
1. Save a query that returns 0: `SELECT 0 "value"`
2. Make an alert for the query that triggers when `value = 0`
3. Run the query
4. Observe the alert does not trigger.
The converse is also true, by the way.
1. Same as above
2. Make an alert for the query that triggers when `value != 0`
3. Run the query
4. Observe the alert triggers
### Technical details:
* Redash Version: 9.0.0-alpha+b348b58b (348b58b)
* Browser/OS: Firefox on MacOS
* How did you install Redash: SaaS
| I believe this happens because the threshold input field was changed to accept text (to accommodate text comparisons), so all new alerts yield a string threshold value.
@rauchy @kravets-levko wdyt of handling this in the backend - casting both args to string for `==`/`!=` and to int for the rest?
https://github.com/getredash/redash/blob/5d585036239e40973adebceadc0dfcceb82e2250/redash/models/__init__.py#L826-L847
I think the solution could be something like: if the alert's threshold value contains a numeric string and the query result's column type is number, cast both to numbers; otherwise cast both to strings:
| Query result column type | Alert threshold value | Cast both to |
|--|--|--|
| number | string containing number | number |
| number | non-numeric string | string |
| other types | string containing number | string |
| other types | non-numeric string | string |
I think that if the query result is a number, then we should try to cast both to numbers. In other cases, use strings.
Because the operation options we show are based on the query result type, the threshold should adhere. We might want to enforce this in the UI.
Not sure I understand why casting-by-value would be preferable to casting-by-operation.
> Not sure I understand why casting-by-value would be preferable to casting-by-operation.
Because strings and numbers have different behavior, for example:
In case of a number `0.0` == `0`, but in case of strings `'0.0'` != `'0'`.
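A short worked example of those pitfalls in plain Python (the values here are illustrative):
```python
print(0.0 == 0)        # True  -- numeric comparison treats 0.0 and 0 as equal
print("0.0" == "0")    # False -- string comparison does not

value, threshold = "600", 500   # a numeric-looking string coming back from the query result
try:
    print(value > threshold)
except TypeError as error:
    # Python 3 refuses to order a str and an int, so the evaluation raises
    print("comparison failed:", error)
print(float(value) > threshold)  # True once both sides are numbers
```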
Aha. So I propose the following:
|Operator| Cast |
|--|--|
|== / !=|Threshold to result value type |
| Other | Both to number |
Not sure about this, but perhaps we can hint at string values with apostrophes.
<img width="298" alt="Screen Shot 2019-10-23 at 10 35 10" src="https://user-images.githubusercontent.com/486954/67369094-16959980-f581-11e9-86c1-02ea353603a3.png">
@susodapop reopening as the fix wasn't pushed to this repo yet. | 2019-10-27T14:25:14 |
getredash/redash | 4,354 | getredash__redash-4354 | [
"4286"
] | ef56e4e9208dbb373ed40d4c0040409e9d46977f | diff --git a/redash/app.py b/redash/app.py
--- a/redash/app.py
+++ b/redash/app.py
@@ -21,7 +21,6 @@ def __init__(self, *args, **kwargs):
def create_app():
from . import authentication, extensions, handlers, limiter, mail, migrate, security
- from .handlers import chrome_logger
from .handlers.webpack import configure_webpack
from .metrics import request as request_metrics
from .models import db, users
@@ -44,7 +43,6 @@ def create_app():
handlers.init_app(app)
configure_webpack(app)
extensions.init_app(app)
- chrome_logger.init_app(app)
users.init_app(app)
return app
diff --git a/redash/handlers/chrome_logger.py b/redash/handlers/chrome_logger.py
deleted file mode 100644
--- a/redash/handlers/chrome_logger.py
+++ /dev/null
@@ -1,54 +0,0 @@
-import time
-import chromelogger
-from flask import g, request
-from flask_sqlalchemy import get_debug_queries
-
-
-def log_queries():
- total_duration = 0.0
- queries_count = 0
-
- chromelogger.group("SQL Queries")
-
- for q in get_debug_queries():
- total_duration += q.duration
- queries_count += 1
- chromelogger.info(q.statement % q.parameters)
- chromelogger.info("Runtime: {:.2f}ms".format(1000 * q.duration))
-
- chromelogger.info("{} queries executed in {:.2f}ms.".format(queries_count, total_duration*1000))
-
- chromelogger.group_end("SQL Queries")
-
-
-def chrome_log(response):
- request_duration = (time.time() - g.start_time) * 1000
- queries_duration = g.get('queries_duration', 0.0)
- queries_count = g.get('queries_count', 0)
-
- group_name = '{} {} ({}, {:.2f}ms runtime, {} queries in {:.2f}ms)'.format(
- request.method, request.path, response.status_code, request_duration, queries_count, queries_duration)
-
- chromelogger.group_collapsed(group_name)
-
- endpoint = (request.endpoint or 'unknown').replace('.', '_')
- chromelogger.info('Endpoint: {}'.format(endpoint))
- chromelogger.info('Content Type: {}'.format(response.content_type))
- chromelogger.info('Content Length: {}'.format(response.content_length or -1))
-
- log_queries()
-
- chromelogger.group_end(group_name)
-
- header = chromelogger.get_header()
- if header is not None:
- response.headers.add(*header)
-
- return response
-
-
-def init_app(app):
- if not app.debug:
- return
-
- app.after_request(chrome_log)
| Make Cypress tests work with [email protected]
Running our tests with [email protected] doesn't work. Need to figure out what happened; until then we're pinning the version to 3.4.1 (#4284).
| 2019-11-13T21:53:31 |
||
getredash/redash | 4,359 | getredash__redash-4359 | [
"4357"
] | 56b51be64ad617e58742ce18c11322b8ec115bb1 | diff --git a/redash/handlers/authentication.py b/redash/handlers/authentication.py
--- a/redash/handlers/authentication.py
+++ b/redash/handlers/authentication.py
@@ -292,6 +292,7 @@ def client_config():
"dashboardRefreshIntervals": settings.DASHBOARD_REFRESH_INTERVALS,
"queryRefreshIntervals": settings.QUERY_REFRESH_INTERVALS,
"googleLoginEnabled": settings.GOOGLE_OAUTH_ENABLED,
+ "ldapLoginEnabled": settings.LDAP_LOGIN_ENABLED,
"pageSize": settings.PAGE_SIZE,
"pageSizeOptions": settings.PAGE_SIZE_OPTIONS,
"tableCellMaxJSONSize": settings.TABLE_CELL_MAX_JSON_SIZE,
| Password Auth enabling itself when using LDAP
### Issue Summary
When using LDAP for auth, the checkbox for "Password Login Enabled" in settings becomes greyed out. However, when changing any other setting on that page and clicking save, "Password Login Enabled" gets enabled. I can't find any way to then disable it other than doing so manually in the Postgres 'organizations' table.
| 2019-11-14T18:20:33 |
||
getredash/redash | 4,492 | getredash__redash-4492 | [
"4099",
"4099"
] | f420e02ceeec825ae4e7bad0e9054bd8104a9dd2 | diff --git a/redash/handlers/query_results.py b/redash/handlers/query_results.py
--- a/redash/handlers/query_results.py
+++ b/redash/handlers/query_results.py
@@ -11,6 +11,7 @@
not_view_only,
require_access,
require_permission,
+ require_any_of_permission,
view_only,
)
from redash.tasks import QueryTask
@@ -220,7 +221,7 @@ def add_cors_headers(headers):
settings.ACCESS_CONTROL_ALLOW_CREDENTIALS
).lower()
- @require_permission("view_query")
+ @require_any_of_permission(("view_query", "execute_query"))
def options(self, query_id=None, query_result_id=None, filetype="json"):
headers = {}
self.add_cors_headers(headers)
@@ -237,7 +238,7 @@ def options(self, query_id=None, query_result_id=None, filetype="json"):
return make_response("", 200, headers)
- @require_permission("view_query")
+ @require_any_of_permission(("view_query", "execute_query"))
def post(self, query_id):
"""
Execute a saved query.
@@ -283,7 +284,7 @@ def post(self, query_id):
else:
return error_messages["no_permission"]
- @require_permission("view_query")
+ @require_any_of_permission(("view_query", "execute_query"))
def get(self, query_id=None, query_result_id=None, filetype="json"):
"""
Retrieve query results.
diff --git a/redash/models/__init__.py b/redash/models/__init__.py
--- a/redash/models/__init__.py
+++ b/redash/models/__init__.py
@@ -1072,7 +1072,6 @@ def all(cls, org, group_ids, user_id):
(
DataSourceGroup.group_id.in_(group_ids)
| (Dashboard.user_id == user_id)
- | ((Widget.dashboard != None) & (Widget.visualization == None))
),
Dashboard.org == org,
)
diff --git a/redash/permissions.py b/redash/permissions.py
--- a/redash/permissions.py
+++ b/redash/permissions.py
@@ -55,13 +55,17 @@ def require_access(obj, user, need_view_only):
class require_permissions(object):
- def __init__(self, permissions):
+ def __init__(self, permissions, allow_one=False):
self.permissions = permissions
+ self.allow_one = allow_one
def __call__(self, fn):
@functools.wraps(fn)
def decorated(*args, **kwargs):
- has_permissions = current_user.has_permissions(self.permissions)
+ if self.allow_one:
+ has_permissions = any([current_user.has_permission(permission) for permission in self.permissions])
+ else:
+ has_permissions = current_user.has_permissions(self.permissions)
if has_permissions:
return fn(*args, **kwargs)
@@ -75,6 +79,10 @@ def require_permission(permission):
return require_permissions((permission,))
+def require_any_of_permission(permissions):
+ return require_permissions(permissions, True)
+
+
def require_admin(fn):
return require_permission("admin")(fn)
| diff --git a/tests/test_models.py b/tests/test_models.py
--- a/tests/test_models.py
+++ b/tests/test_models.py
@@ -736,24 +736,38 @@ def test_returns_dashboards_created_by_user(self):
d1, list(models.Dashboard.all(self.u2.org, self.u2.group_ids, self.u2.id))
)
- def test_returns_dashboards_with_text_widgets(self):
+ def test_returns_dashboards_with_text_widgets_to_creator(self):
w1 = self.factory.create_widget(visualization=None)
+ self.assertEqual(w1.dashboard.user, self.factory.user)
self.assertIn(
- w1.dashboard, models.Dashboard.all(self.u1.org, self.u1.group_ids, None)
+ w1.dashboard,
+ list(
+ models.Dashboard.all(
+ self.factory.user.org,
+ self.factory.user.group_ids,
+ self.factory.user.id,
+ )
+ ),
)
- self.assertIn(
- w1.dashboard, models.Dashboard.all(self.u2.org, self.u2.group_ids, None)
+ self.assertNotIn(
+ w1.dashboard,
+ list(models.Dashboard.all(self.u1.org, self.u1.group_ids, self.u1.id)),
)
def test_returns_dashboards_from_current_org_only(self):
- w1 = self.factory.create_widget(visualization=None)
+ w1 = self.factory.create_widget()
user = self.factory.create_user(org=self.factory.create_org())
self.assertIn(
- w1.dashboard, models.Dashboard.all(self.u1.org, self.u1.group_ids, None)
+ w1.dashboard,
+ list(
+ models.Dashboard.all(
+ self.factory.user.org, self.factory.user.group_ids, None
+ )
+ ),
)
self.assertNotIn(
- w1.dashboard, models.Dashboard.all(user.org, user.group_ids, None)
+ w1.dashboard, list(models.Dashboard.all(user.org, user.group_ids, user.id))
)
| Dashboards list includes dashboards with text widgets where you have no access to any other widgets
In this case, a user named A has no data source access or view permissions, but because the dashboard contains a text widget, user A can still find that dashboard on the dashboards list page. The reason is this method of the `Dashboard` class in app/redash/model/__init__.py:
```python
@classmethod
def all(cls, org, group_ids, user_id):
    query = (
        Dashboard.query
        .options(
            subqueryload(Dashboard.user).load_only('_profile_image_url', 'name'),
        )
        .outerjoin(Widget)
        .outerjoin(Visualization)
        .outerjoin(Query)
        .outerjoin(DataSourceGroup, Query.data_source_id == DataSourceGroup.data_source_id)
        .filter(
            Dashboard.is_archived == False,
            (DataSourceGroup.group_id.in_(group_ids) |
             (Dashboard.user_id == user_id) |
             ((Widget.dashboard != None) & (Widget.visualization == None))),
            Dashboard.org == org)
        .distinct())

    query = query.filter(or_(Dashboard.user_id == user_id, Dashboard.is_draft == False))

    return query
```
The `Widget.visualization == None` clause means the query will match every dashboard that has a text widget.
* Redash Version: 6.0.7.0.8.0
* Browser/OS: chrome/ios
* How did you install Redash: k8s
| It was implemented this way intentionally, but we no longer need this behavior. You're welcome to submit a Pull Request to update this behavior. | 2019-12-25T15:59:19 |
getredash/redash | 4,498 | getredash__redash-4498 | [
"4332"
] | fd46194580cf5a45a8464c16263fe56e5a23256d | diff --git a/redash/handlers/query_results.py b/redash/handlers/query_results.py
--- a/redash/handlers/query_results.py
+++ b/redash/handlers/query_results.py
@@ -1,9 +1,11 @@
import logging
import time
+import unicodedata
from flask import make_response, request
from flask_login import current_user
from flask_restful import abort
+from werkzeug.urls import url_quote
from redash import models, settings
from redash.handlers.base import BaseResource, get_object_or_404, record_event
from redash.permissions import (
@@ -128,6 +130,25 @@ def get_download_filename(query_result, query, filetype):
return "{}_{}.{}".format(filename, retrieved_at, filetype)
+def content_disposition_filenames(attachment_filename):
+ if not isinstance(attachment_filename, str):
+ attachment_filename = attachment_filename.decode("utf-8")
+
+ try:
+ attachment_filename = attachment_filename.encode("ascii")
+ except UnicodeEncodeError:
+ filenames = {
+ "filename": unicodedata.normalize("NFKD", attachment_filename).encode(
+ "ascii", "ignore"
+ ),
+ "filename*": "UTF-8''%s" % url_quote(attachment_filename, safe=b""),
+ }
+ else:
+ filenames = {"filename": attachment_filename}
+
+ return filenames
+
+
class QueryResultListResource(BaseResource):
@require_permission("execute_query")
def post(self):
@@ -381,9 +402,8 @@ def get(self, query_id=None, query_result_id=None, filetype="json"):
filename = get_download_filename(query_result, query, filetype)
- response.headers.add_header(
- "Content-Disposition", 'attachment; filename="{}"'.format(filename)
- )
+ filenames = content_disposition_filenames(filename)
+ response.headers.add("Content-Disposition", "attachment", **filenames)
return response
| diff --git a/tests/handlers/test_query_results.py b/tests/handlers/test_query_results.py
--- a/tests/handlers/test_query_results.py
+++ b/tests/handlers/test_query_results.py
@@ -29,6 +29,19 @@ def test_returns_404_if_no_cached_result_found(self):
self.assertEqual(404, rv.status_code)
+class TestQueryResultsContentDispositionHeaders(BaseTestCase):
+ def test_supports_unicode(self):
+ query_result = self.factory.create_query_result()
+ query = self.factory.create_query(name="עברית", latest_query_data=query_result)
+
+ rv = self.make_request("get", "/api/queries/{}/results.json".format(query.id))
+ # This is what gunicorn will do with it
+ try:
+ rv.headers['Content-Disposition'].encode('ascii')
+ except Exception as e:
+ self.fail(repr(e))
+
+
class TestQueryResultListAPI(BaseTestCase):
def test_get_existing_result(self):
query_result = self.factory.create_query_result()
| Queries with unicode in the query name fail when downloading the result file
This just happened on deploy preview. Sentry error:
https://sentry.io/share/issue/b8268b7c97784a30b67c512332a8d779/
Some possible solutions:
1. Implement a solution similar to pallets/flask#2223.
2. Switch to using Flask's `send_file` method instead of handling this on our own (need to make sure that we always have a file-like object at hand and that the behavior is consistent with our current one).
3. Try quoting the filename, as suggested here: https://github.com/benoitc/gunicorn/issues/1214#issuecomment-238193505.
3 is the simplest, 2 is probably the most future proof.
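For reference, a minimal stdlib-only sketch of the `filename`/`filename*` approach that the patch above ends up taking (`urllib.parse.quote` stands in for werkzeug's `url_quote`, and the helper name is made up):
```python
import unicodedata
from urllib.parse import quote

def content_disposition(filename):
    ascii_name = unicodedata.normalize("NFKD", filename).encode("ascii", "ignore").decode()
    if ascii_name == filename:
        return 'attachment; filename="{}"'.format(ascii_name)
    # Keep an ASCII-safe fallback and add an RFC 5987 encoded UTF-8 name.
    return "attachment; filename=\"{}\"; filename*=UTF-8''{}".format(
        ascii_name, quote(filename, safe="")
    )

print(content_disposition("results.csv"))
print(content_disposition("עברית_2019_12_26.csv"))
```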
| +1 on (2) | 2019-12-26T20:14:30 |
getredash/redash | 4,582 | getredash__redash-4582 | [
"3185"
] | cbc56264eadb44a6755865c8257cc3fba09c2e86 | diff --git a/redash/query_runner/treasuredata.py b/redash/query_runner/treasuredata.py
--- a/redash/query_runner/treasuredata.py
+++ b/redash/query_runner/treasuredata.py
@@ -68,7 +68,7 @@ def get_schema(self, get_stats=False):
schema = {}
if self.configuration.get("get_schema", False):
try:
- with tdclient.Client(self.configuration.get("apikey")) as client:
+ with tdclient.Client(self.configuration.get("apikey"),endpoint=self.configuration.get("endpoint")) as client:
for table in client.tables(self.configuration.get("db")):
table_name = "{}.{}".format(
self.configuration.get("db"), table.name
| TreasureData getSchema fails when setting non-default region
### Issue Summary
There are multiple regions in Treasure Data, but getSchema always fails when a non-default region is set.
### Steps to Reproduce
1. Set datasource using non-default region (e.g. Tokyo region)
2. Trigger a schema refresh; a "Schema refresh failed" error occurs
### Technical details:
* Redash Version: confirmed v5.0.2
* Browser/OS: any Browsers/OSs
* How did you install Redash: from Amazon AMI
### Details
When accessing Treasure Data to get the schema, the default region is always used because the endpoint parameter is not passed through:
https://github.com/getredash/redash/blob/6c364369bb0eb98e2191c2e502fed72abe5a74c7/redash/query_runner/treasuredata.py#L82
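For illustration, a minimal sketch of passing the configured endpoint through to tdclient so a non-default region is used (the API key, endpoint URL and database name below are placeholders, not values from this report):
```python
import tdclient

APIKEY = "your-td-api-key"                   # placeholder
ENDPOINT = "https://api.treasuredata.co.jp"  # e.g. a Tokyo-region endpoint (assumption)
DATABASE = "sample_datasets"                 # placeholder

with tdclient.Client(APIKEY, endpoint=ENDPOINT) as client:
    for table in client.tables(DATABASE):
        print(table.name)
```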
| Treasure Data team will take care of this.
> Treasure Data team will take care of this.
Thanks! | 2020-01-23T00:48:55 |
|
getredash/redash | 4,624 | getredash__redash-4624 | [
"4621"
] | 69893f03049bfeee74a69b50642b81a7cd1a050e | diff --git a/redash/query_runner/databricks.py b/redash/query_runner/databricks.py
--- a/redash/query_runner/databricks.py
+++ b/redash/query_runner/databricks.py
@@ -48,8 +48,8 @@ def _get_connection(self):
transport = THttpClient.THttpClient(http_uri)
password = self.configuration.get("http_password", "")
- auth = base64.b64encode("token:" + password)
- transport.setCustomHeaders({"Authorization": "Basic " + auth})
+ auth = base64.b64encode(b"token:" + password.encode("ascii"))
+ transport.setCustomHeaders({"Authorization": "Basic " + auth.decode()})
connection = hive.connect(thrift_transport=transport)
return connection
diff --git a/redash/query_runner/hive_ds.py b/redash/query_runner/hive_ds.py
--- a/redash/query_runner/hive_ds.py
+++ b/redash/query_runner/hive_ds.py
@@ -223,8 +223,8 @@ def _get_connection(self):
username = self.configuration.get("username", "")
password = self.configuration.get("http_password", "")
if username or password:
- auth = base64.b64encode(username + ":" + password)
- transport.setCustomHeaders({"Authorization": "Basic " + auth})
+ auth = base64.b64encode(username.encode("ascii") + b":" + password.encode("ascii"))
+ transport.setCustomHeaders({"Authorization": "Basic " + auth.decode()})
# create connection
connection = hive.connect(thrift_transport=transport)
| Databricks Data Source Broken
### Issue Summary
Databricks data source does not work. The authorization token for the Databricks data source needs to be converted into a byte string, as the current code raises `TypeError: a bytes-like object is required, not 'str'`
Calling `.encode()` to transform to a byte string makes the data source work.
https://github.com/getredash/redash/blob/b089f5f0eff9b047c093dcc7abbd0ae5bfcf643c/redash/query_runner/databricks.py#L51-L52
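For illustration, a quick self-contained reproduction of the failure and the fix from the patch above (the token value is made up):
```python
import base64

password = "dapiXXXXXXXXXXXXXXXX"  # hypothetical Databricks token

try:
    base64.b64encode("token:" + password)  # what the old code did
except TypeError as error:
    print(error)  # a bytes-like object is required, not 'str'

auth = base64.b64encode(b"token:" + password.encode("ascii"))
print({"Authorization": "Basic " + auth.decode()})
```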
### Steps to Reproduce
1. Create a Databricks data source
2. Run test connection and you will get the message: a bytes-like object is required, not 'str'
### Technical details:
* Redash Version: master
* Browser/OS: Chrome
* How did you install Redash: Docker development environment
| 2020-02-09T09:49:07 |
||
getredash/redash | 4,638 | getredash__redash-4638 | [
"4786"
] | ddb0ef15c1340e7de627e928f80486dfd3d6e1d5 | diff --git a/redash/query_runner/oracle.py b/redash/query_runner/oracle.py
--- a/redash/query_runner/oracle.py
+++ b/redash/query_runner/oracle.py
@@ -35,7 +35,11 @@ class Oracle(BaseSQLQueryRunner):
@classmethod
def get_col_type(cls, col_type, scale):
if col_type == cx_Oracle.NUMBER:
- return TYPE_FLOAT if scale > 0 else TYPE_INTEGER
+ if scale is None:
+ return TYPE_INTEGER
+ if scale > 0:
+ return TYPE_FLOAT
+ return TYPE_INTEGER
else:
return TYPES_MAP.get(col_type, None)
| error running query : ** '>' is not supported between instance of NoneType and 'int'
Issue Summary:
Database = Oracle 12c
`select count(*) from table `
throws the following error:
`error running query : ** '>' is not supported between instance of NoneType and 'int'`
Redash v9.0.0-alpha(dev)
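For illustration, a minimal reproduction of the failure and the guard added in the patch above (`cx_Oracle` is not needed for the illustration; the type constants are stand-ins):
```python
TYPE_INTEGER, TYPE_FLOAT = "integer", "float"

def get_col_type_old(scale):
    return TYPE_FLOAT if scale > 0 else TYPE_INTEGER  # scale=None -> TypeError

def get_col_type_new(scale):
    if scale is None:
        return TYPE_INTEGER
    return TYPE_FLOAT if scale > 0 else TYPE_INTEGER

try:
    get_col_type_old(None)   # e.g. a COUNT(*) column reporting no scale, per the report above
except TypeError as error:
    print(error)             # '>' not supported between instances of 'NoneType' and 'int'
print(get_col_type_new(None))  # integer
```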
| 2020-02-11T21:51:40 |
||
getredash/redash | 4,676 | getredash__redash-4676 | [
"4356"
] | d2cc2d20b6712230e0966f9259ad67b6d433db7e | diff --git a/redash/query_runner/mssql_odbc.py b/redash/query_runner/mssql_odbc.py
--- a/redash/query_runner/mssql_odbc.py
+++ b/redash/query_runner/mssql_odbc.py
@@ -25,24 +25,31 @@ def configuration_schema(cls):
return {
"type": "object",
"properties": {
+ "server": {"type": "string"},
+ "port": {"type": "number", "default": 1433},
"user": {"type": "string"},
"password": {"type": "string"},
- "server": {"type": "string", "default": "127.0.0.1"},
- "port": {"type": "number", "default": 1433},
+ "db": {"type": "string", "title": "Database Name"},
"charset": {
"type": "string",
"default": "UTF-8",
"title": "Character Set",
},
- "db": {"type": "string", "title": "Database Name"},
- "driver": {
- "type": "string",
- "title": "Driver Identifier",
- "default": "{ODBC Driver 13 for SQL Server}",
+ "use_ssl": {
+ "type": "boolean",
+ "title": "Use SSL",
+ "default": False,
+ },
+ "verify_ssl": {
+ "type": "boolean",
+ "title": "Verify SSL certificate",
+ "default": True,
},
},
- "required": ["db"],
+ "order": ["server", "port", "user", "password", "db", "charset", "use_ssl", "verify_ssl"],
+ "required": ["host", "user", "password", "db"],
"secret": ["password"],
+ "extra_options": ["verify_ssl", "use_ssl"],
}
@classmethod
@@ -91,20 +98,26 @@ def run_query(self, query, user):
connection = None
try:
- server = self.configuration.get("server", "")
+ server = self.configuration.get("server")
user = self.configuration.get("user", "")
password = self.configuration.get("password", "")
db = self.configuration["db"]
port = self.configuration.get("port", 1433)
charset = self.configuration.get("charset", "UTF-8")
- driver = self.configuration.get("driver", "{ODBC Driver 13 for SQL Server}")
connection_string_fmt = (
- "DRIVER={};PORT={};SERVER={};DATABASE={};UID={};PWD={}"
+ "DRIVER={{ODBC Driver 17 for SQL Server}};PORT={};SERVER={};DATABASE={};UID={};PWD={}"
)
connection_string = connection_string_fmt.format(
- driver, port, server, db, user, password
+ port, server, db, user, password
)
+
+ if self.configuration.get('use_ssl', False):
+ connection_string += ";Encrypt=YES"
+
+ if not self.configuration.get('verify_ssl'):
+ connection_string += ";TrustServerCertificate=YES"
+
connection = pyodbc.connect(connection_string)
cursor = connection.cursor()
logger.debug("SQLServerODBC running query: %s", query)
diff --git a/redash/settings/__init__.py b/redash/settings/__init__.py
--- a/redash/settings/__init__.py
+++ b/redash/settings/__init__.py
@@ -332,6 +332,7 @@ def email_server_is_configured():
"redash.query_runner.sqlite",
"redash.query_runner.dynamodb_sql",
"redash.query_runner.mssql",
+ "redash.query_runner.mssql_odbc",
"redash.query_runner.memsql_ds",
"redash.query_runner.mapd",
"redash.query_runner.jql",
| Azure Data Warehouse through MS SQL (ODBC) runner doesn't work on a default docker image
### Issue Summary
[Default docker image](https://github.com/getredash/redash/blob/v8.0.0/Dockerfile) has neither pyodbc nor Microsoft ODBC Driver installed, so you can't query Azure Data Warehouse with default docker image, even if you enable pyodbc.
### Steps to Reproduce
1. Have Redash running in docker
2. Enable mssql_odbc query runner through env: `REDASH_ADDITIONAL_QUERY_RUNNERS: "redash.query_runner.mssql_odbc"`
3. It doesn't get registered:
`[2019-11-13 13:29:19,823][PID:10][DEBUG][redash.query_runner] Microsoft SQL Server (ODBC) query runner enabled but not supported, not registering. Either disable or install missing dependencies.`
4. I found out why:
a) `pyodbc==4.0.27` pypi package is missing in `requirements_all_ds.txt`
b) Even if you put it in `requirements_all_ds.txt` , it can't be downloaded & built by pip because you don't have `g++ unixodbc-dev` package installed
c) Even if you get `g++ unixodbc-dev`, you still need Microsoft ODBC Driver installed: `msodbcsql17`
5. This line in the Dockerfile fixes the packages; you also have to add `pyodbc` to `requirements_all_ds.txt`:
```Dockerfile
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - && curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list && apt-get update && ACCEPT_EULA=Y apt-get install -y msodbcsql17 g++ unixodbc-dev
```
So I see the main problem as:
* You can't query Azure SQL/Data Warehouse straight out of the box
Is it even possible to package all these dependencies into standard Redash docker image and make it work without apt-get and pyodbc troubles? That means accepting Microsoft's EULA, and some native dependencies, e.g. pyodbc builds itself from source. I haven't measured image size increase but `apt-get` writes `After this operation, 15.8 MB of additional disk space will be used`
```bash
The license terms for this product can be downloaded from
https://aka.ms/odbc131eula and found in
/usr/share/doc/msodbcsql/LICENSE.TXT . By entering 'YES',
you indicate that you accept the license terms.
```
The runner itself(#1906) works. I just had to spend 30 minutes to make it run in docker.
### Technical details:
* Redash Version: 8
Thanks for the awesome v8 release!
| The EULA seems reasonable and so does the size increase. We can even offset it by dropping `FreeTDS` and `pymssql` from our default packages -- it's supposed to cover the same databases and be even better supported, right?
I fear this could break compatibility? There were two datasources, now there would be one - existing queries will fail.
Or we can do it in v9? I can make PR to master. | 2020-02-23T09:09:41 |
|
getredash/redash | 4,682 | getredash__redash-4682 | [
"4677"
] | 35250d64b9387daeb33f55b7cfb81939953c22b1 | diff --git a/redash/query_runner/clickhouse.py b/redash/query_runner/clickhouse.py
--- a/redash/query_runner/clickhouse.py
+++ b/redash/query_runner/clickhouse.py
@@ -68,7 +68,7 @@ def _send_query(self, data, stream=False):
verify = self.configuration.get("verify", True)
r = requests.post(
url,
- data=data,
+ data=data.encode("utf-8","ignore"),
stream=stream,
timeout=self.configuration.get("timeout", 30),
params={
| Clickhouse column name encoding problem
### Issue Summary
A column alias that contains non-Latin-1 characters returns an encoding error.
For example : select count(*) as 'כמות'…
**Error message**: 'latin-1' codec can't encode characters in position 285-288: Body ('כמות') is not valid Latin-1. Use body.encode('utf-8') if you want to send it encoded in UTF-8.
BTW, this works fine with other data sources like MySQL.
### Steps to Reproduce
1. Create a query with ClickHouse as the data source
2. Add a column alias in UTF-8 chars, like: select colum1 as 'ש'
I expected to see the column alias like I do with mysql data source .
No problem with the 'utf-8' data, so probably column names should support 'utf-8' charset as well.
This used to work with the older version (v8)
### Technical details:
* Redash Version: 9.0.0-alpha
* Browser/OS: Chrome
* How did you install Redash: Docker Based Developer Installation
| I think that #4627 might fix this. What commit are you on?
I'm on 9.0.0-alpha.
#4627 looks very similar, but it's more about the email than the actual data.
Probably the fix should be similar as well (but here, encode the SQL data that is going to ClickHouse).
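A minimal sketch of that idea, assuming the query text is sent as the HTTP request body the way the runner does it; the endpoint and query below are placeholders.
```python
import requests

url = "http://localhost:8123"                      # placeholder ClickHouse HTTP endpoint
sql = "SELECT count(*) AS 'כמות' FROM some_table"  # alias with non-Latin-1 characters

# Encoding the body explicitly sidesteps the Latin-1 error quoted above.
r = requests.post(url, data=sql.encode("utf-8"), timeout=30)
print(r.status_code)
```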
| 2020-02-25T06:46:06 |
|
getredash/redash | 4,705 | getredash__redash-4705 | [
"4561"
] | d687befa594670a837d1294dad673205b4087b80 | diff --git a/redash/models/__init__.py b/redash/models/__init__.py
--- a/redash/models/__init__.py
+++ b/redash/models/__init__.py
@@ -909,18 +909,25 @@ def are_favorites(cls, user, objects):
def next_state(op, value, threshold):
- if isinstance(value, numbers.Number) and not isinstance(value, bool):
- try:
- threshold = float(threshold)
- except ValueError:
- return Alert.UNKNOWN_STATE
- # If it's a boolean cast to string and lower case, because upper cased
- # boolean value is Python specific and most likely will be confusing to
- # users.
- elif isinstance(value, bool):
+ if isinstance(value, bool):
+ # If it's a boolean cast to string and lower case, because upper cased
+ # boolean value is Python specific and most likely will be confusing to
+ # users.
value = str(value).lower()
else:
- value = str(value)
+ try:
+ value = float(value)
+ value_is_number = True
+ except ValueError:
+ value_is_number = isinstance(value, numbers.Number)
+
+ if value_is_number:
+ try:
+ threshold = float(threshold)
+ except ValueError:
+ return Alert.UNKNOWN_STATE
+ else:
+ value = str(value)
if op(value, threshold):
new_state = Alert.TRIGGERED_STATE
| diff --git a/tests/models/test_alerts.py b/tests/models/test_alerts.py
--- a/tests/models/test_alerts.py
+++ b/tests/models/test_alerts.py
@@ -81,6 +81,7 @@ def test_numeric_value(self):
self.assertEqual(
Alert.TRIGGERED_STATE, next_state(OPERATORS.get("=="), 1, "1.0")
)
+ self.assertEqual(Alert.TRIGGERED_STATE, next_state(OPERATORS.get(">"), "5", 1))
def test_numeric_value_and_plain_string(self):
self.assertEqual(
@@ -88,7 +89,7 @@ def test_numeric_value_and_plain_string(self):
)
def test_non_numeric_value(self):
- self.assertEqual(Alert.OK_STATE, next_state(OPERATORS.get("=="), "1", "1.0"))
+ self.assertEqual(Alert.OK_STATE, next_state(OPERATORS.get("=="), "string", "1.0"))
def test_string_value(self):
self.assertEqual(
| Alerts: possible issue when query result value is a string number
Some data sources return only strings, even when the value is a number. One such example is Google Analytics. In this case, it's possible that your value will be `"600"` (i.e. the text representation of 600). Here are some possible scenarios:
* Newly created alerts will have the threshold as a string. Both values will be compared as strings; while it won't fail, it will return the wrong alert state.
* For alerts created before the alert page update, the threshold value would be a number. In Python 2, if you compared a number and a string it would be the same as comparing two strings. But in Python 3 it just fails (raises an exception).
I wonder what should we do? Should we cast values to numbers whenever we use the greater than/less than operators? Should we catch the error and ignore it?
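A minimal sketch of the casting idea raised above (the merged patch implements a variant of this inside `next_state`): try to treat both sides as numbers, and only fall back to string comparison when that fails.
```python
import operator

def compare(op, value, threshold):
    try:
        value, threshold = float(value), float(threshold)  # "600" -> 600.0
    except (TypeError, ValueError):
        value, threshold = str(value), str(threshold)
    return op(value, threshold)

print(compare(operator.gt, "600", 500))  # True, even though the value arrived as a string
```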
| > Should we cast values to numbers whenever we use the greater than/less than operators
If string comparisons with those operators never existed/worked before, I think this is a safe way to go. Conceptually, this will make the "<" and ">" operators be associated with number values; if at some point we decide to add new operators for strings, it's a matter of adding a toggle in the UI to disable the casting.
Anyway, I assume the majority of Alert operations are intended to happen as number operations, so this feels like the option that should allow more use cases.
For me it looks good if "<" and ">" will work only for numbers (and will cast both operands to number), and "=="/"!=" will work both for string and numbers (cast to type of results column?). | 2020-03-03T10:54:09 |
getredash/redash | 4,741 | getredash__redash-4741 | [
"4675"
] | 1e9b8f112610581ec3580231fa70fbf3d0c22f7d | diff --git a/redash/models/base.py b/redash/models/base.py
--- a/redash/models/base.py
+++ b/redash/models/base.py
@@ -12,10 +12,14 @@
class RedashSQLAlchemy(SQLAlchemy):
def apply_driver_hacks(self, app, info, options):
options.update(json_serializer=json_dumps)
+ if settings.SQLALCHEMY_ENABLE_POOL_PRE_PING:
+ options.update(pool_pre_ping=True)
super(RedashSQLAlchemy, self).apply_driver_hacks(app, info, options)
def apply_pool_defaults(self, app, options):
super(RedashSQLAlchemy, self).apply_pool_defaults(app, options)
+ if settings.SQLALCHEMY_ENABLE_POOL_PRE_PING:
+ options["pool_pre_ping"] = True
if settings.SQLALCHEMY_DISABLE_POOL:
options["poolclass"] = NullPool
# Remove options NullPool does not support:
diff --git a/redash/settings/__init__.py b/redash/settings/__init__.py
--- a/redash/settings/__init__.py
+++ b/redash/settings/__init__.py
@@ -36,6 +36,9 @@
SQLALCHEMY_DISABLE_POOL = parse_boolean(
os.environ.get("SQLALCHEMY_DISABLE_POOL", "false")
)
+SQLALCHEMY_ENABLE_POOL_PRE_PING = parse_boolean(
+ os.environ.get("SQLALCHEMY_ENABLE_POOL_PRE_PING", "false")
+)
SQLALCHEMY_TRACK_MODIFICATIONS = False
SQLALCHEMY_ECHO = False
| "OperationalError: server closed the connection unexpectedly" on SQL: ‘SELECT … FROM organizations’
From time to time the UI starts getting "internal server error"; after a couple of refreshes it starts working again normally. This happens on every screen: login, dashboard, queries, etc. I can see in the log that redash-server is trying to make the following SQL query:
`[SQL: 'SELECT organizations.updated_at AS organizations_updated_at, organizations.created_at AS organizations_created_at, organizations.id AS organizations_id, organizations.name AS organizations_name, organizations.slug AS organizations_slug, organizations.settings AS organizations_settings \nFROM organizations \nWHERE organizations.slug = %(slug_1)s \n LIMIT %(param_1)s'] [parameters: {'slug_1': 'default', 'param_1': 1}] (Background on this error at: http://sqlalche.me/e/e3q8)`
But it fails with **OperationalError: (psycopg2.OperationalError) server closed the connection unexpectedly**
I have self-hosted Redash with the following setup:
standalone PostgreSQL machine
standalone Redis machine
docker redash-server: redash/redash:8.0.2.b37747
docker redash-scheduler: redash/redash:8.0.2.b37747
docker redash-worker: redash/redash:8.0.2.b37747
docker ad-hoc-worker: redash/redash:8.0.2.b37747
The error stacktrace is:
```
Feb 22, 2020 @ 10:19:44.908 cursor.execute(statement, parameters)
Feb 22, 2020 @ 10:19:44.908 File "/usr/local/lib/python2.7/site-packages/flask_login/login_manager.py", line 317, in _load_user
Feb 22, 2020 @ 10:19:44.908 This probably means the server terminated abnormally
Feb 22, 2020 @ 10:19:44.908 before or while processing the request.
Feb 22, 2020 @ 10:19:44.908 [SQL: 'SELECT organizations.updated_at AS organizations_updated_at, organizations.created_at AS organizations_created_at, organizations.id AS organizations_id, organizations.name AS organizations_name, organizations.slug AS organizations_slug, organizations.settings AS organizations_settings \nFROM organizations \nWHERE organizations.slug = %(slug_1)s \n LIMIT %(param_1)s'] [parameters: {'slug_1': 'default', 'param_1': 1}] (Background on this error at: http://sqlalche.me/e/e3q8)
Feb 22, 2020 @ 10:19:44.908 response = self.full_dispatch_request()
Feb 22, 2020 @ 10:19:44.908 File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1413, in _handle_dbapi_exception
Feb 22, 2020 @ 10:19:44.908 exc_info
Feb 22, 2020 @ 10:19:44.908 File "/usr/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 265, in raise_from_cause
Feb 22, 2020 @ 10:19:44.908 context)
Feb 22, 2020 @ 10:19:44.908 File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1982, in wsgi_app
Feb 22, 2020 @ 10:19:44.908 reraise(type(exception), exception, tb=exc_tb, cause=cause)
Feb 22, 2020 @ 10:19:44.908 File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
Feb 22, 2020 @ 10:19:44.908 File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 509, in do_execute
Feb 22, 2020 @ 10:19:44.908 OperationalError: (psycopg2.OperationalError) server closed the connection unexpectedly
Feb 22, 2020 @ 10:19:44.907 Traceback (most recent call last):
Feb 22, 2020 @ 10:19:44.907 File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1609, in full_dispatch_request
Feb 22, 2020 @ 10:19:44.907 user = self.user_callback(user_id)
Feb 22, 2020 @ 10:19:44.907 File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1614, in full_dispatch_request
Feb 22, 2020 @ 10:19:44.907 request_started.send(self)
Feb 22, 2020 @ 10:19:44.907 File "/app/redash/authentication/__init__.py", line 48, in load_user
Feb 22, 2020 @ 10:19:44.907 org = current_org._get_current_object()
Feb 22, 2020 @ 10:19:44.907 File "/usr/local/lib/python2.7/site-packages/werkzeug/local.py", line 302, in _get_current_object
Feb 22, 2020 @ 10:19:44.907 return self.__local()
Feb 22, 2020 @ 10:19:44.907 File "/app/redash/authentication/org_resolving.py", line 18, in _get_current_org
Feb 22, 2020 @ 10:19:44.907 g.org = Organization.get_by_slug(slug)
Feb 22, 2020 @ 10:19:44.907 File "/app/redash/models/organizations.py", line 33, in get_by_slug
Feb 22, 2020 @ 10:19:44.907 return cls.query.filter(cls.slug == slug).first()
Feb 22, 2020 @ 10:19:44.907 File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2895, in first
Feb 22, 2020 @ 10:19:44.907 ret = list(self[0:1])
Feb 22, 2020 @ 10:19:44.907 File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2687, in __getitem__
Feb 22, 2020 @ 10:19:44.907 return list(res)
Feb 22, 2020 @ 10:19:44.907 File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2995, in __iter__
Feb 22, 2020 @ 10:19:44.907 return self._execute_and_instances(context)
Feb 22, 2020 @ 10:19:44.907 File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 3018, in _execute_and_instances
Feb 22, 2020 @ 10:19:44.907 result = conn.execute(querycontext.statement, self._params)
Feb 22, 2020 @ 10:19:44.907 File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 948, in execute
Feb 22, 2020 @ 10:19:44.907 return meth(self, multiparams, params)
Feb 22, 2020 @ 10:19:44.907 File "/usr/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
Feb 22, 2020 @ 10:19:44.907 return connection._execute_clauseelement(self, multiparams, params)
Feb 22, 2020 @ 10:19:44.907 File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
Feb 22, 2020 @ 10:19:44.907 compiled_sql, distilled_params
Feb 22, 2020 @ 10:19:44.907 File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
Feb 22, 2020 @ 10:19:44.907 context)
Feb 22, 2020 @ 10:19:44.907 rv = self.handle_user_exception(e)
Feb 22, 2020 @ 10:19:44.907 File "/usr/local/lib/python2.7/site-packages/blinker/base.py", line 267, in send
Feb 22, 2020 @ 10:19:44.907 for receiver in self.receivers_for(sender)]
Feb 22, 2020 @ 10:19:44.907 File "/app/redash/models/users.py", line 54, in update_user_active_at
Feb 22, 2020 @ 10:19:44.907 if current_user.is_authenticated and not current_user.is_api_user():
Feb 22, 2020 @ 10:19:44.907 File "/usr/local/lib/python2.7/site-packages/werkzeug/local.py", line 343, in __getattr__
Feb 22, 2020 @ 10:19:44.907 return getattr(self._get_current_object(), name)
Feb 22, 2020 @ 10:19:44.907 File "/usr/local/lib/python2.7/site-packages/werkzeug/local.py", line 302, in _get_current_object
Feb 22, 2020 @ 10:19:44.907 return self.__local()
Feb 22, 2020 @ 10:19:44.907 File "/usr/local/lib/python2.7/site-packages/flask_login/utils.py", line 26, in <lambda>
Feb 22, 2020 @ 10:19:44.907 current_user = LocalProxy(lambda: _get_user())
Feb 22, 2020 @ 10:19:44.907 File "/usr/local/lib/python2.7/site-packages/flask_login/utils.py", line 302, in _get_user
Feb 22, 2020 @ 10:19:44.907 current_app.login_manager._load_user()
Feb 22, 2020 @ 10:19:44.907 File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1517, in handle_user_exception
Feb 22, 2020 @ 10:19:44.907 return self.reload_user()
Feb 22, 2020 @ 10:19:44.907 File "/usr/local/lib/python2.7/site-packages/flask_restful/__init__.py", line 271, in error_router
Feb 22, 2020 @ 10:19:44.907 return original_handler(e)
Feb 22, 2020 @ 10:19:44.907 [2020-02-22 08:19:44,905] ERROR in app: Exception on /api/organization/status [GET]
Feb 22, 2020 @ 10:19:44.907 reraise(exc_type, exc_value, tb)
Feb 22, 2020 @ 10:19:44.907 File "/usr/local/lib/python2.7/site-packages/flask_login/login_manager.py", line 279, in reload_user
```
However, thank you for this great tool! Appreciate your work :)
| I'm not familiar with Python, but I guess you are missing the **ping** option for the DB connections. After a little research and looking at your code, I think this will fix the issue:
```
class SQLAlchemy(_BaseSQLAlchemy):
    def apply_pool_defaults(self, app, options):
        super(SQLAlchemy, self).apply_pool_defaults(app, options)
        options["pool_pre_ping"] = True
```
your setup is:
```
def apply_pool_defaults(self, app, options):
    super(RedashSQLAlchemy, self).apply_pool_defaults(app, options)
    if settings.SQLALCHEMY_DISABLE_POOL:
        options["poolclass"] = NullPool
        # Remove options NullPool does not support:
        options.pop("max_overflow", None)
```
Adding `options["pool_pre_ping"] = True` should resolve the issue. I have never used Python, so it will be very hard for me to test this and make a pull request, sorry :(
Documentation about this option: https://docs.sqlalchemy.org/en/13/core/pooling.html#sqlalchemy.pool.Pool.params.pre_ping
```
pre_ping –
if True, the pool will emit a “ping” (typically “SELECT 1”, but is dialect-specific) on the connection upon checkout, to test if the connection is alive or not. If not, the connection is transparently re-connected and upon success, all other pooled connections established prior to that timestamp are invalidated. Requires that a dialect is passed as well to interpret the disconnection error.
```
I think that this will resolve the issue :)
@arikfr I resolved the issue with the following code:
```
def apply_driver_hacks(self, app, info, options):
    options.update(json_serializer=json_dumps)
    options.update(pool_pre_ping=True)
    super(RedashSQLAlchemy, self).apply_driver_hacks(app, info, options)

def apply_pool_defaults(self, app, options):
    super(RedashSQLAlchemy, self).apply_pool_defaults(app, options)
    options["pool_pre_ping"] = True
    if settings.SQLALCHEMY_DISABLE_POOL:
        options["poolclass"] = NullPool
        # Remove options NullPool does not support:
        options.pop("max_overflow", None)
```
The magic is done by:
```
options.update(pool_pre_ping=True)
options["pool_pre_ping"] = True
```
I have forked your code and built it locally. So far it looks good on our dev environment. I don't have permissions to create a branch and pull request.
You can read more about the issue here:
- https://github.com/pallets/flask-sqlalchemy/issues/589#issuecomment-361075700
- https://github.com/psycopg/psycopg2/issues/829
- https://stackoverflow.com/questions/55457069/how-to-fix-operationalerror-psycopg2-operationalerror-server-closed-the-conn | 2020-03-18T09:45:18 |
|
getredash/redash | 4,792 | getredash__redash-4792 | [
"4791"
] | 6a5445b72670e928c740e6797407e8bd85ee8887 | diff --git a/redash/query_runner/pg.py b/redash/query_runner/pg.py
--- a/redash/query_runner/pg.py
+++ b/redash/query_runner/pg.py
@@ -328,6 +328,7 @@ def _get_tables(self, schema):
ordinal_position AS pos
FROM svv_columns
WHERE table_schema NOT IN ('pg_internal','pg_catalog','information_schema')
+ AND table_schema NOT LIKE 'pg_temp_%'
)
SELECT table_name, table_schema, column_name
FROM tables
| Bug: Redshift data source unable to refresh schema
Redash version: 8.0.0+b32245 (a16f551e)
Data Source: Redshift
Redshift cluster version: 1.0.14436
Error:

API: `/api/data_sources/1/schema?refresh=true`
Response: `{"error": {"message": "Error retrieving schema.", "code": 2}}`
Diagnosis:
Manually running the following query: https://github.com/getredash/redash/blob/master/redash/query_runner/pg.py#L323 returns the error
`Error running query: schema "pg_temp_16" does not exist`
Solution:
Update https://github.com/getredash/redash/blob/master/redash/query_runner/pg.py#L330
to include `and table_schema NOT LIKE 'pg_%'`
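A minimal sketch of that suggestion, shown here as a simplified version of the runner's metadata query (the merged patch narrows the filter to `pg_temp_%` inside the full CTE in `pg.py`).
```python
# Simplified schema query for the Redshift runner; only the WHERE clause changes.
schema_query = """
SELECT table_schema, table_name, column_name
FROM svv_columns
WHERE table_schema NOT IN ('pg_internal', 'pg_catalog', 'information_schema')
  AND table_schema NOT LIKE 'pg_temp_%'
"""
```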
| 2020-04-09T09:08:18 |
||
getredash/redash | 4,894 | getredash__redash-4894 | [
"4893"
] | bac15db21f4932e69f2b458db02ca84768983836 | diff --git a/redash/tasks/queries/maintenance.py b/redash/tasks/queries/maintenance.py
--- a/redash/tasks/queries/maintenance.py
+++ b/redash/tasks/queries/maintenance.py
@@ -81,6 +81,7 @@ def _apply_auto_limit(query_text, query):
def refresh_queries():
+ started_at = time.time()
logger.info("Refreshing queries...")
enqueued = []
for query in models.Query.outdated_queries():
@@ -105,6 +106,7 @@ def refresh_queries():
sentry.capture_exception(error)
status = {
+ "started_at": started_at,
"outdated_queries_count": len(enqueued),
"last_refresh_at": time.time(),
"query_ids": json_dumps([q.id for q in enqueued]),
| Admin shows n/a for started value in manager column
### Issue Summary
When going to the admin section you can see in the manager column that the "Started" value is always "n/a".
<img width="630" alt="CleanShot 2020-05-15 at 01 58 09@2x" src="https://user-images.githubusercontent.com/1610/81997320-b31fa600-964f-11ea-90c0-e0c2988fdcce.png">
### Steps to Reproduce
I **think** this is a regression: first from https://github.com/getredash/redash/commit/26f0ce0749c1f683ba3b8f0b1d2a9e70cb6ea0e2#diff-5ca0dc869c9c3218d8a95ba7f99f5f4cL460, which removed the QueryTaskTracker and with it the ability to store the `started_at` time, and then later when Celery was removed, at which point this code path became a no-op.
Originally this value was shown in v6.0.0 here: https://github.com/getredash/redash/blob/4780bd9c5ef212dd4c38bafb525abc991812d59b/client/app/pages/admin/status/status.html#L35
Now it's pulled in from the `/status.json` API in https://github.com/getredash/redash/blob/8010781f0d3d14260523e135a5abef0847714b9e/client/app/pages/admin/SystemStatus.jsx#L47, but the value behind it is not set anymore in https://github.com/getredash/redash/blob/8010781f0d3d14260523e135a5abef0847714b9e/redash/tasks/queries/maintenance.py#L95-L101.
### Technical details:
* Redash Version: 8010781f0d3d14260523e135a5abef0847714b9e
* Browser/OS: Firefox macOS
* How did you install Redash: Docker
| 2020-05-15T00:19:08 |
||
getredash/redash | 4,936 | getredash__redash-4936 | [
"4608"
] | b30622e53164ad6a70e54a9d50800a8c1fb0a7d3 | diff --git a/redash/handlers/query_results.py b/redash/handlers/query_results.py
--- a/redash/handlers/query_results.py
+++ b/redash/handlers/query_results.py
@@ -51,10 +51,14 @@ def error_response(message, http_status=400):
),
"no_permission": error_response("You do not have permission to run queries with this data source.", 403),
"select_data_source": error_response("Please select data source to run this query.", 401),
+ "no_data_source": error_response("Target data source not available.", 401),
}
def run_query(query, parameters, data_source, query_id, should_apply_auto_limit, max_age=0):
+ if not data_source:
+ return error_messages["no_data_source"]
+
if data_source.paused:
if data_source.pause_reason:
message = "{} is paused ({}). Please try later.".format(data_source.name, data_source.pause_reason)
| diff --git a/tests/handlers/test_query_results.py b/tests/handlers/test_query_results.py
--- a/tests/handlers/test_query_results.py
+++ b/tests/handlers/test_query_results.py
@@ -1,9 +1,16 @@
-from redash.handlers.query_results import error_messages
+from redash.handlers.query_results import error_messages, run_query
from redash.models import db
from redash.utils import json_dumps
from tests import BaseTestCase
+class TestRunQuery(BaseTestCase):
+ def test_run_query_with_no_data_source(self):
+ response, status = run_query(None, None, None, None, None)
+ self.assertDictEqual(response, error_messages["no_data_source"][0])
+ self.assertEqual(status, error_messages["no_data_source"][1])
+
+
class TestQueryResultsCacheHeaders(BaseTestCase):
def test_uses_cache_headers_for_specific_result(self):
query_result = self.factory.create_query_result()
| Show a helpful error message when trying to execute queries detached from a data source
When a data source gets deleted, its queries are still accessible (a detailed explanation can be found in #4336), but they error out due to [`data_source` being None](https://github.com/getredash/redash/blob/master/redash/handlers/query_results.py#L65).
We should provide a helpful error message there (perhaps by raising `QueryDetachedFromDataSourceError`).
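A minimal sketch of such a guard, assuming the flask_restful style the handler already uses elsewhere; the merged fix returns a prepared "Target data source not available." error response instead.
```python
from flask_restful import abort

def run_query(query, parameters, data_source, query_id, max_age=0):
    if data_source is None:
        # Fail fast with a readable message instead of a NoneType attribute error.
        abort(400, message="Target data source not available.")
    # ... the normal execution path would continue here ...
```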
| 2020-06-02T23:40:25 |
|
getredash/redash | 4,983 | getredash__redash-4983 | [
"4982"
] | 3a543a4ab2b22a05aa7879ccf64953199238fed1 | diff --git a/redash/query_runner/__init__.py b/redash/query_runner/__init__.py
--- a/redash/query_runner/__init__.py
+++ b/redash/query_runner/__init__.py
@@ -258,7 +258,7 @@ def get_auth(self):
return None
def get_response(self, url, auth=None, http_method="get", **kwargs):
- if is_private_address(url):
+ if is_private_address(url) and settings.ENFORCE_PRIVATE_ADDRESS_BLOCK:
raise Exception("Can't query private addresses.")
# Get authentication values if not given
| Can't query private addresses again
Hello!
I use the Redash Docker image on my own server to run queries against my API.
Since version 9.0.0 alpha I haven't had issues with this, because the REDASH_ENFORCE_PRIVATE_IP_BLOCK flag is set to false.
But after upgrading to the beta version, this issue came back! What should I do?
| 2020-06-18T18:29:34 |
||
getredash/redash | 5,354 | getredash__redash-5354 | [
"4402"
] | 4107265feb8237d1e8e0676cf972a879930ecc15 | diff --git a/redash/query_runner/salesforce.py b/redash/query_runner/salesforce.py
--- a/redash/query_runner/salesforce.py
+++ b/redash/query_runner/salesforce.py
@@ -81,7 +81,7 @@ def configuration_schema(cls):
"default": DEFAULT_API_VERSION,
},
},
- "required": ["username", "password", "token"],
+ "required": ["username", "password"],
"secret": ["password", "token"],
}
| Minor Salesforce runner fix
### Issue Summary
A Security Token isn't required in all SFDC environments - depending on configuration. See [here](https://help.salesforce.com/articleView?id=000331668&type=1&mode=1) for more information.
### Steps to Reproduce
1. Add Salesforce as a data source where a token isn't required (and cannot be generated)
2. Cannot proceed without required field
### Technical details:
https://github.com/getredash/redash/blob/be56035bd6d9856361edc6b23d30a38c8f2d2be2/redash/query_runner/salesforce.py#L81
Just remove `token` from the `required` list. Seemed like it'd be faster to create an issue than submit a PR for such a small change
| Might be worth noting that I'm running a version with the change applied and it works 👌 | 2021-01-15T16:34:56 |
|
getredash/redash | 5,394 | getredash__redash-5394 | [
"4778"
] | c865293aaae014e06ae02482931522c6d55910fd | diff --git a/redash/tasks/queries/execution.py b/redash/tasks/queries/execution.py
--- a/redash/tasks/queries/execution.py
+++ b/redash/tasks/queries/execution.py
@@ -94,7 +94,7 @@ def enqueue_query(
"data_source_id": data_source.id,
"org_id": data_source.org_id,
"scheduled": scheduled_query_id is not None,
- "query_id": metadata.get("Query ID"),
+ "query_id": metadata.get("query_id", metadata.get("Query ID")),
"user_id": user_id,
},
}
@@ -150,22 +150,28 @@ def _resolve_user(user_id, is_api_key, query_id):
class QueryExecutor(object):
def __init__(
- self, query, data_source_id, user_id, is_api_key, metadata, scheduled_query
+ self, query, data_source_id, user_id, is_api_key, metadata, is_scheduled_query
):
self.job = get_current_job()
self.query = query
self.data_source_id = data_source_id
self.metadata = metadata
self.data_source = self._load_data_source()
+ self.query_id = metadata.get("query_id")
self.user = _resolve_user(user_id, is_api_key, metadata.get("Query ID"))
+ self.query_model = (
+ models.Query.query.get(self.query_id)
+ if self.query_id and self.query_id != "adhoc"
+ else None
+ )
# Close DB connection to prevent holding a connection for a long time while the query is executing.
models.db.session.close()
self.query_hash = gen_query_hash(self.query)
- self.scheduled_query = scheduled_query
- # Load existing tracker or create a new one if the job was created before code update:
- if scheduled_query:
- models.scheduled_queries_executions.update(scheduled_query.id)
+ self.is_scheduled_query = is_scheduled_query
+ if self.is_scheduled_query:
+ # Load existing tracker or create a new one if the job was created before code update:
+ models.scheduled_queries_executions.update(self.query_model.id)
def run(self):
signal.signal(signal.SIGINT, signal_handler)
@@ -202,20 +208,16 @@ def run(self):
if error is not None and data is None:
result = QueryExecutionError(error)
- if self.scheduled_query is not None:
- self.scheduled_query = models.db.session.merge(
- self.scheduled_query, load=False
- )
- track_failure(self.scheduled_query, error)
+ if self.is_scheduled_query:
+ self.query_model = models.db.session.merge(self.query_model, load=False)
+ track_failure(self.query_model, error)
raise result
else:
- if self.scheduled_query and self.scheduled_query.schedule_failures > 0:
- self.scheduled_query = models.db.session.merge(
- self.scheduled_query, load=False
- )
- self.scheduled_query.schedule_failures = 0
- self.scheduled_query.skip_updated_at = True
- models.db.session.add(self.scheduled_query)
+ if self.query_model and self.query_model.schedule_failures > 0:
+ self.query_model = models.db.session.merge(self.query_model, load=False)
+ self.query_model.schedule_failures = 0
+ self.query_model.skip_updated_at = True
+ models.db.session.add(self.query_model)
query_result = models.QueryResult.store_result(
self.data_source.org_id,
@@ -242,7 +244,7 @@ def run(self):
def _annotate_query(self, query_runner):
self.metadata["Job ID"] = self.job.id
self.metadata["Query Hash"] = self.query_hash
- self.metadata["Scheduled"] = self.scheduled_query is not None
+ self.metadata["Scheduled"] = self.is_scheduled_query
return query_runner.annotate_query(self.query, self.metadata)
@@ -275,14 +277,14 @@ def execute_query(
scheduled_query_id=None,
is_api_key=False,
):
- if scheduled_query_id is not None:
- scheduled_query = models.Query.query.get(scheduled_query_id)
- else:
- scheduled_query = None
-
try:
return QueryExecutor(
- query, data_source_id, user_id, is_api_key, metadata, scheduled_query
+ query,
+ data_source_id,
+ user_id,
+ is_api_key,
+ metadata,
+ scheduled_query_id is not None,
).run()
except QueryExecutionError as e:
models.db.session.rollback()
diff --git a/redash/tasks/worker.py b/redash/tasks/worker.py
--- a/redash/tasks/worker.py
+++ b/redash/tasks/worker.py
@@ -75,7 +75,7 @@ class HardLimitingWorker(HerokuWorker):
"""
grace_period = 15
- queue_class = CancellableQueue
+ queue_class = RedashQueue
job_class = CancellableJob
def stop_executing_job(self, job):
| diff --git a/tests/tasks/test_queries.py b/tests/tasks/test_queries.py
--- a/tests/tasks/test_queries.py
+++ b/tests/tasks/test_queries.py
@@ -201,7 +201,10 @@ def test_success_scheduled(self, _):
with patch.object(PostgreSQL, "run_query") as qr:
qr.return_value = ([1, 2], None)
result_id = execute_query(
- "SELECT 1, 2", self.factory.data_source.id, {}, scheduled_query_id=q.id
+ "SELECT 1, 2",
+ self.factory.data_source.id,
+ {"query_id": q.id},
+ scheduled_query_id=q.id,
)
q = models.Query.get_by_id(q.id)
self.assertEqual(q.schedule_failures, 0)
@@ -219,14 +222,20 @@ def test_failure_scheduled(self, _):
qr.side_effect = ValueError("broken")
result = execute_query(
- "SELECT 1, 2", self.factory.data_source.id, {}, scheduled_query_id=q.id
+ "SELECT 1, 2",
+ self.factory.data_source.id,
+ {"query_id": q.id},
+ scheduled_query_id=q.id,
)
self.assertTrue(isinstance(result, QueryExecutionError))
q = models.Query.get_by_id(q.id)
self.assertEqual(q.schedule_failures, 1)
result = execute_query(
- "SELECT 1, 2", self.factory.data_source.id, {}, scheduled_query_id=q.id
+ "SELECT 1, 2",
+ self.factory.data_source.id,
+ {"query_id": q.id},
+ scheduled_query_id=q.id,
)
self.assertTrue(isinstance(result, QueryExecutionError))
q = models.Query.get_by_id(q.id)
@@ -242,7 +251,10 @@ def test_success_after_failure(self, _):
with patch.object(PostgreSQL, "run_query") as qr:
qr.side_effect = ValueError("broken")
result = execute_query(
- "SELECT 1, 2", self.factory.data_source.id, {}, scheduled_query_id=q.id
+ "SELECT 1, 2",
+ self.factory.data_source.id,
+ {"query_id": q.id},
+ scheduled_query_id=q.id,
)
self.assertTrue(isinstance(result, QueryExecutionError))
q = models.Query.get_by_id(q.id)
@@ -251,7 +263,41 @@ def test_success_after_failure(self, _):
with patch.object(PostgreSQL, "run_query") as qr:
qr.return_value = ([1, 2], None)
execute_query(
- "SELECT 1, 2", self.factory.data_source.id, {}, scheduled_query_id=q.id
+ "SELECT 1, 2",
+ self.factory.data_source.id,
+ {"query_id": q.id},
+ scheduled_query_id=q.id,
+ )
+ q = models.Query.get_by_id(q.id)
+ self.assertEqual(q.schedule_failures, 0)
+
+ def test_adhoc_success_after_scheduled_failure(self, _):
+ """
+ Query execution success resets the failure counter, even if it runs as an adhoc query.
+ """
+ q = self.factory.create_query(
+ query_text="SELECT 1, 2", schedule={"interval": 300}
+ )
+ with patch.object(PostgreSQL, "run_query") as qr:
+ qr.side_effect = ValueError("broken")
+ result = execute_query(
+ "SELECT 1, 2",
+ self.factory.data_source.id,
+ {"query_id": q.id},
+ scheduled_query_id=q.id,
+ user_id=self.factory.user.id,
+ )
+ self.assertTrue(isinstance(result, QueryExecutionError))
+ q = models.Query.get_by_id(q.id)
+ self.assertEqual(q.schedule_failures, 1)
+
+ with patch.object(PostgreSQL, "run_query") as qr:
+ qr.return_value = ([1, 2], None)
+ execute_query(
+ "SELECT 1, 2",
+ self.factory.data_source.id,
+ {"query_id": q.id},
+ user_id=self.factory.user.id,
)
q = models.Query.get_by_id(q.id)
self.assertEqual(q.schedule_failures, 0)
| Update schedule_failures when saving latest_query_data_id
Currently a failing scheduled query's failures counter will only reset when it's executed again from the scheduler. This means that if the query failed enough times, the next execution might be significantly delayed even though it runs properly when triggered by a user.
It might be tricky to update the counter, as we don't always pass the query object; a temporary fix is to update the counter when updating `latest_query_data_id`.
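A minimal sketch of that temporary fix, assuming a `Query` model with a `schedule_failures` counter: reset the counter whenever a fresh result is stored, regardless of whether the run was scheduled or ad-hoc (the merged patch above achieves the same effect by loading the query model from the job metadata).
```python
def store_result_for_query(query, query_result, session):
    # Hypothetical helper names; the point is where the counter reset happens.
    query.latest_query_data_id = query_result.id
    if query.schedule_failures > 0:
        query.schedule_failures = 0
    session.add(query)
    session.commit()
```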
| 2021-02-11T10:27:59 |
|
getredash/redash | 5,448 | getredash__redash-5448 | [
"5445"
] | a2c96c1e6ddb372eb3bf996b77e222c8024c4601 | diff --git a/redash/handlers/query_results.py b/redash/handlers/query_results.py
--- a/redash/handlers/query_results.py
+++ b/redash/handlers/query_results.py
@@ -60,7 +60,9 @@ def error_response(message, http_status=400):
}
-def run_query(query, parameters, data_source, query_id, should_apply_auto_limit, max_age=0):
+def run_query(
+ query, parameters, data_source, query_id, should_apply_auto_limit, max_age=0
+):
if data_source.paused:
if data_source.pause_reason:
message = "{} is paused ({}). Please try later.".format(
@@ -76,7 +78,9 @@ def run_query(query, parameters, data_source, query_id, should_apply_auto_limit,
except (InvalidParameterError, QueryDetachedFromDataSourceError) as e:
abort(400, message=str(e))
- query_text = data_source.query_runner.apply_auto_limit(query.text, should_apply_auto_limit)
+ query_text = data_source.query_runner.apply_auto_limit(
+ query.text, should_apply_auto_limit
+ )
if query.missing_params:
return error_response(
@@ -118,7 +122,7 @@ def run_query(query, parameters, data_source, query_id, should_apply_auto_limit,
"Username": repr(current_user)
if current_user.is_api_user()
else current_user.email,
- "Query ID": query_id,
+ "query_id": query_id,
},
)
return serialize_job(job)
@@ -195,7 +199,12 @@ def post(self):
return error_messages["no_permission"]
return run_query(
- parameterized_query, parameters, data_source, query_id, should_apply_auto_limit, max_age
+ parameterized_query,
+ parameters,
+ data_source,
+ query_id,
+ should_apply_auto_limit,
+ max_age,
)
@@ -392,10 +401,10 @@ def get(self, query_id=None, query_result_id=None, filetype="json"):
self.record_event(event)
response_builders = {
- 'json': self.make_json_response,
- 'xlsx': self.make_excel_response,
- 'csv': self.make_csv_response,
- 'tsv': self.make_tsv_response
+ "json": self.make_json_response,
+ "xlsx": self.make_excel_response,
+ "csv": self.make_csv_response,
+ "tsv": self.make_tsv_response,
}
response = response_builders[filetype](query_result)
@@ -426,12 +435,16 @@ def make_json_response(query_result):
@staticmethod
def make_csv_response(query_result):
headers = {"Content-Type": "text/csv; charset=UTF-8"}
- return make_response(serialize_query_result_to_dsv(query_result, ","), 200, headers)
+ return make_response(
+ serialize_query_result_to_dsv(query_result, ","), 200, headers
+ )
@staticmethod
def make_tsv_response(query_result):
headers = {"Content-Type": "text/tab-separated-values; charset=UTF-8"}
- return make_response(serialize_query_result_to_dsv(query_result, "\t"), 200, headers)
+ return make_response(
+ serialize_query_result_to_dsv(query_result, "\t"), 200, headers
+ )
@staticmethod
def make_excel_response(query_result):
diff --git a/redash/tasks/queries/execution.py b/redash/tasks/queries/execution.py
--- a/redash/tasks/queries/execution.py
+++ b/redash/tasks/queries/execution.py
@@ -95,7 +95,7 @@ def enqueue_query(
"data_source_id": data_source.id,
"org_id": data_source.org_id,
"scheduled": scheduled_query_id is not None,
- "query_id": metadata.get("query_id", metadata.get("Query ID")),
+ "query_id": metadata.get("query_id"),
"user_id": user_id,
},
}
@@ -159,7 +159,7 @@ def __init__(
self.metadata = metadata
self.data_source = self._load_data_source()
self.query_id = metadata.get("query_id")
- self.user = _resolve_user(user_id, is_api_key, metadata.get("Query ID"))
+ self.user = _resolve_user(user_id, is_api_key, metadata.get("query_id"))
self.query_model = (
models.Query.query.get(self.query_id)
if self.query_id and self.query_id != "adhoc"
@@ -259,7 +259,7 @@ def _log_progress(self, state):
self.data_source.id,
self.job.id,
self.metadata.get("Queue", "unknown"),
- self.metadata.get("Query ID", "unknown"),
+ self.metadata.get("query_id", "unknown"),
self.metadata.get("Username", "unknown"),
)
diff --git a/redash/tasks/queries/maintenance.py b/redash/tasks/queries/maintenance.py
--- a/redash/tasks/queries/maintenance.py
+++ b/redash/tasks/queries/maintenance.py
@@ -78,7 +78,9 @@ class RefreshQueriesError(Exception):
def _apply_auto_limit(query_text, query):
should_apply_auto_limit = query.options.get("apply_auto_limit", False)
- return query.data_source.query_runner.apply_auto_limit(query_text, should_apply_auto_limit)
+ return query.data_source.query_runner.apply_auto_limit(
+ query_text, should_apply_auto_limit
+ )
def refresh_queries():
@@ -96,7 +98,7 @@ def refresh_queries():
query.data_source,
query.user_id,
scheduled_query=query,
- metadata={"Query ID": query.id, "Username": "Scheduled"},
+ metadata={"query_id": query.id, "Username": "Scheduled"},
)
enqueued.append(query)
except Exception as e:
| diff --git a/tests/tasks/test_queries.py b/tests/tasks/test_queries.py
--- a/tests/tasks/test_queries.py
+++ b/tests/tasks/test_queries.py
@@ -48,7 +48,7 @@ def test_multiple_enqueue_of_same_query(self, enqueue, _):
query.user_id,
False,
query,
- {"Username": "Arik", "Query ID": query.id},
+ {"Username": "Arik", "query_id": query.id},
)
enqueue_query(
query.query_text,
@@ -56,7 +56,7 @@ def test_multiple_enqueue_of_same_query(self, enqueue, _):
query.user_id,
False,
query,
- {"Username": "Arik", "Query ID": query.id},
+ {"Username": "Arik", "query_id": query.id},
)
enqueue_query(
query.query_text,
@@ -64,7 +64,7 @@ def test_multiple_enqueue_of_same_query(self, enqueue, _):
query.user_id,
False,
query,
- {"Username": "Arik", "Query ID": query.id},
+ {"Username": "Arik", "query_id": query.id},
)
self.assertEqual(1, enqueue.call_count)
@@ -79,7 +79,7 @@ def test_multiple_enqueue_of_expired_job(self, enqueue, fetch_job):
query.user_id,
False,
query,
- {"Username": "Arik", "Query ID": query.id},
+ {"Username": "Arik", "query_id": query.id},
)
# "expire" the previous job
@@ -91,7 +91,7 @@ def test_multiple_enqueue_of_expired_job(self, enqueue, fetch_job):
query.user_id,
False,
query,
- {"Username": "Arik", "Query ID": query.id},
+ {"Username": "Arik", "query_id": query.id},
)
self.assertEqual(2, enqueue.call_count)
@@ -106,7 +106,7 @@ def test_reenqueue_during_job_cancellation(self, enqueue, my_fetch_job):
query.user_id,
False,
query,
- {"Username": "Arik", "Query ID": query.id},
+ {"Username": "Arik", "query_id": query.id},
)
# "cancel" the previous job
@@ -123,7 +123,7 @@ def cancel_job(*args, **kwargs):
query.user_id,
False,
query,
- {"Username": "Arik", "Query ID": query.id},
+ {"Username": "Arik", "query_id": query.id},
)
self.assertEqual(2, enqueue.call_count)
@@ -139,7 +139,7 @@ def test_limits_query_time(self, _, enqueue, __):
query.user_id,
False,
query,
- {"Username": "Arik", "Query ID": query.id},
+ {"Username": "Arik", "query_id": query.id},
)
_, kwargs = enqueue.call_args
@@ -155,7 +155,7 @@ def test_multiple_enqueue_of_different_query(self, enqueue, _):
query.user_id,
False,
None,
- {"Username": "Arik", "Query ID": query.id},
+ {"Username": "Arik", "query_id": query.id},
)
enqueue_query(
query.query_text + "2",
@@ -163,7 +163,7 @@ def test_multiple_enqueue_of_different_query(self, enqueue, _):
query.user_id,
False,
None,
- {"Username": "Arik", "Query ID": query.id},
+ {"Username": "Arik", "query_id": query.id},
)
enqueue_query(
query.query_text + "3",
@@ -171,7 +171,7 @@ def test_multiple_enqueue_of_different_query(self, enqueue, _):
query.user_id,
False,
None,
- {"Username": "Arik", "Query ID": query.id},
+ {"Username": "Arik", "query_id": query.id},
)
self.assertEqual(3, enqueue.call_count)
diff --git a/tests/tasks/test_refresh_queries.py b/tests/tasks/test_refresh_queries.py
--- a/tests/tasks/test_refresh_queries.py
+++ b/tests/tasks/test_refresh_queries.py
@@ -14,8 +14,9 @@ def test_enqueues_outdated_queries_for_sqlquery(self):
"""
query1 = self.factory.create_query(options={"apply_auto_limit": True})
query2 = self.factory.create_query(
- query_text="select 42;", data_source=self.factory.create_data_source(),
- options={"apply_auto_limit": True}
+ query_text="select 42;",
+ data_source=self.factory.create_data_source(),
+ options={"apply_auto_limit": True},
)
oq = staticmethod(lambda: [query1, query2])
with patch(ENQUEUE_QUERY) as add_job_mock, patch.object(
@@ -30,14 +31,14 @@ def test_enqueues_outdated_queries_for_sqlquery(self):
query1.data_source,
query1.user_id,
scheduled_query=query1,
- metadata=ANY,
+ metadata={"query_id": query1.id, "Username": "Scheduled"},
),
call(
"select 42 LIMIT 1000",
query2.data_source,
query2.user_id,
scheduled_query=query2,
- metadata=ANY,
+ metadata={"query_id": query2.id, "Username": "Scheduled"},
),
],
any_order=True,
@@ -51,7 +52,9 @@ def test_enqueues_outdated_queries_for_non_sqlquery(self):
ds = self.factory.create_data_source(
group=self.factory.org.default_group, type="prometheus"
)
- query1 = self.factory.create_query(data_source=ds, options={"apply_auto_limit": True})
+ query1 = self.factory.create_query(
+ data_source=ds, options={"apply_auto_limit": True}
+ )
query2 = self.factory.create_query(
query_text="select 42;", data_source=ds, options={"apply_auto_limit": True}
)
@@ -68,14 +71,14 @@ def test_enqueues_outdated_queries_for_non_sqlquery(self):
query1.data_source,
query1.user_id,
scheduled_query=query1,
- metadata=ANY,
- ),
+ metadata={"query_id": query1.id, "Username": "Scheduled"},
+ ),
call(
query2.query_text,
query2.data_source,
query2.user_id,
scheduled_query=query2,
- metadata=ANY,
+ metadata={"query_id": query2.id, "Username": "Scheduled"},
),
],
any_order=True,
@@ -106,7 +109,9 @@ def test_doesnt_enqueue_outdated_queries_for_paused_data_source_for_sqlquery(sel
metadata=ANY,
)
- def test_doesnt_enqueue_outdated_queries_for_paused_data_source_for_non_sqlquery(self):
+ def test_doesnt_enqueue_outdated_queries_for_paused_data_source_for_non_sqlquery(
+ self,
+ ):
"""
refresh_queries() does not launch execution tasks for queries whose
data source is paused.
@@ -114,7 +119,9 @@ def test_doesnt_enqueue_outdated_queries_for_paused_data_source_for_non_sqlquery
ds = self.factory.create_data_source(
group=self.factory.org.default_group, type="prometheus"
)
- query = self.factory.create_query(data_source=ds, options={"apply_auto_limit": True})
+ query = self.factory.create_query(
+ data_source=ds, options={"apply_auto_limit": True}
+ )
oq = staticmethod(lambda: [query])
query.data_source.pause()
with patch.object(Query, "outdated_queries", oq):
@@ -132,7 +139,7 @@ def test_doesnt_enqueue_outdated_queries_for_paused_data_source_for_non_sqlquery
query.user_id,
scheduled_query=query,
metadata=ANY,
- )
+ )
def test_enqueues_parameterized_queries_for_sqlquery(self):
"""
@@ -150,7 +157,7 @@ def test_enqueues_parameterized_queries_for_sqlquery(self):
"title": "n",
}
],
- "apply_auto_limit": True
+ "apply_auto_limit": True,
},
)
oq = staticmethod(lambda: [query])
@@ -185,8 +192,7 @@ def test_enqueues_parameterized_queries_for_non_sqlquery(self):
"title": "n",
}
],
- "apply_auto_limit": True
-
+ "apply_auto_limit": True,
},
data_source=ds,
)
@@ -219,7 +225,7 @@ def test_doesnt_enqueue_parameterized_queries_with_invalid_parameters(self):
"title": "n",
}
],
- "apply_auto_limit": True
+ "apply_auto_limit": True,
},
)
oq = staticmethod(lambda: [query])
@@ -230,7 +236,7 @@ def test_doesnt_enqueue_parameterized_queries_with_invalid_parameters(self):
add_job_mock.assert_not_called()
def test_doesnt_enqueue_parameterized_queries_with_dropdown_queries_that_are_detached_from_data_source(
- self
+ self,
):
"""
Scheduled queries with a dropdown parameter which points to a query that is detached from its data source are skipped.
@@ -247,7 +253,7 @@ def test_doesnt_enqueue_parameterized_queries_with_dropdown_queries_that_are_det
"title": "n",
}
],
- "apply_auto_limit": True
+ "apply_auto_limit": True,
},
)
diff --git a/tests/tasks/test_worker.py b/tests/tasks/test_worker.py
--- a/tests/tasks/test_worker.py
+++ b/tests/tasks/test_worker.py
@@ -29,7 +29,7 @@ def test_worker_records_success_metrics(self, incr):
query.user_id,
False,
None,
- {"Username": "Patrick", "Query ID": query.id},
+ {"Username": "Patrick", "query_id": query.id},
)
Worker(["queries"]).work(max_jobs=1)
@@ -38,7 +38,7 @@ def test_worker_records_success_metrics(self, incr):
call("rq.jobs.running.queries"),
call("rq.jobs.started.queries"),
call("rq.jobs.running.queries", -1, 1),
- call("rq.jobs.finished.queries")
+ call("rq.jobs.finished.queries"),
]
incr.assert_has_calls(calls)
@@ -56,7 +56,7 @@ def test_worker_records_failure_metrics(self, _, incr):
query.user_id,
False,
None,
- {"Username": "Patrick", "Query ID": query.id},
+ {"Username": "Patrick", "query_id": query.id},
)
job.set_status(JobStatus.FAILED)
@@ -66,7 +66,7 @@ def test_worker_records_failure_metrics(self, _, incr):
call("rq.jobs.running.queries"),
call("rq.jobs.started.queries"),
call("rq.jobs.running.queries", -1, 1),
- call("rq.jobs.failed.queries")
+ call("rq.jobs.failed.queries"),
]
incr.assert_has_calls(calls)
@@ -88,7 +88,7 @@ def test_enqueue_query_records_created_metric(self, incr):
query.user_id,
False,
None,
- {"Username": "Patrick", "Query ID": query.id},
+ {"Username": "Patrick", "query_id": query.id},
)
incr.assert_called_with("rq.jobs.created.queries")
| Scheduled query not working in the latest preview Docker Image (redash/redash:preview)
### Issue Summary
Scheduled queries are not working in the latest preview Docker image (redash/redash:preview).
### Steps to Reproduce
`redash_scheduled_worker_1` logs this error:
```
[2021-03-27 07:27:06,319][PID:520][ERROR][rq.worker] AttributeError: 'NoneType' object has no attribute 'id'
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/rq/worker.py", line 934, in perform_job
rv = job.perform()
File "/usr/local/lib/python3.7/site-packages/rq/job.py", line 686, in perform
self._result = self._execute()
File "/usr/local/lib/python3.7/site-packages/rq/job.py", line 692, in _execute
return self.func(*self.args, **self.kwargs)
File "/app/redash/tasks/queries/execution.py", line 288, in execute_query
scheduled_query_id is not None,
File "/app/redash/tasks/queries/execution.py", line 175, in __init__
models.scheduled_queries_executions.update(self.query_model.id)
AttributeError: 'NoneType' object has no attribute 'id'
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/rq/worker.py", line 934, in perform_job
rv = job.perform()
File "/usr/local/lib/python3.7/site-packages/rq/job.py", line 686, in perform
self._result = self._execute()
File "/usr/local/lib/python3.7/site-packages/rq/job.py", line 692, in _execute
return self.func(*self.args, **self.kwargs)
File "/app/redash/tasks/queries/execution.py", line 288, in execute_query
scheduled_query_id is not None,
File "/app/redash/tasks/queries/execution.py", line 175, in __init__
models.scheduled_queries_executions.update(self.query_model.id)
AttributeError: 'NoneType' object has no attribute 'id'
```
If I revert this commit (9fdf1f341d02d903f045ea65d130a3cc93884299), it works again.
### Technical details:
* Redash Version: 9.0.0-beta (44178d99)
* Browser/OS: Chrome
* How did you install Redash: Docker
| 2021-03-29T18:36:41 |
|
getredash/redash | 5,516 | getredash__redash-5516 | [
"5466"
] | 64a1d7a6cd5f8eae7e68bd18f4b8b083921010d8 | diff --git a/redash/models/__init__.py b/redash/models/__init__.py
--- a/redash/models/__init__.py
+++ b/redash/models/__init__.py
@@ -1120,7 +1120,7 @@ def all(cls, org, group_ids, user_id):
joinedload(Dashboard.user).load_only(
"id", "name", "_profile_image_url", "email"
)
- )
+ ).distinct(Dashboard.created_at, Dashboard.slug)
.outerjoin(Widget)
.outerjoin(Visualization)
.outerjoin(Query)
| diff --git a/tests/models/test_dashboards.py b/tests/models/test_dashboards.py
--- a/tests/models/test_dashboards.py
+++ b/tests/models/test_dashboards.py
@@ -50,3 +50,32 @@ def test_returns_drafts_by_the_user(self):
# not using self.assertIn/NotIn because otherwise this fails :O
self.assertTrue(d in dashboards)
self.assertFalse(d2 in dashboards)
+
+
+ def test_returns_correct_number_of_dashboards(self):
+ # Solving https://github.com/getredash/redash/issues/5466
+
+ usr = self.factory.create_user()
+
+ ds1 = self.factory.create_data_source()
+ ds2 = self.factory.create_data_source()
+
+ qry1 = self.factory.create_query(data_source=ds1, user=usr)
+ qry2 = self.factory.create_query(data_source=ds2, user=usr)
+
+ viz1 = self.factory.create_visualization(query_rel=qry1, )
+ viz2 = self.factory.create_visualization(query_rel=qry2, )
+
+ def create_dashboard():
+ dash = self.factory.create_dashboard(name="boy howdy", user=usr)
+ self.factory.create_widget(dashboard=dash, visualization=viz1)
+ self.factory.create_widget(dashboard=dash, visualization=viz2)
+
+ return dash
+
+ d1 = create_dashboard()
+ d2 = create_dashboard()
+
+ results = Dashboard.all(self.factory.org, usr.group_ids, usr.id)
+
+ self.assertEqual(2, results.count(), "The incorrect number of dashboards were returned")
| The item count DashBoards View displayed per page is not correct
### Issue Summary
The item count the Dashboards view displays per page is not correct. By default it is 20/page, but it actually shows only 12 items. If I switch to 5/page, it shows only 2 items. It can be reproduced on the deploy preview: https://redash-preview.netlify.app/dashboards?order=name&page=1&page_size=5

### Technical details:
* Redash Version: 9.0.0-beta (67263e1b)
* Browser/OS: Chrome 89.0.4389.114
* How did you install Redash: Docker
| It's because of an issue with the `all` query. Haven't debugged the query yet, but here's why:

Presumably it's because some join is leading to extra rows in the query.
Found the root cause here: https://github.com/getredash/redash/pull/5267/files#r628401694
Hi there, I can't reproduce this on the latest tip of master.
@susodapop I just tested the latest image on Docker Hub (redash:latest) and the bug is still present.
Thank you @bratao. I have reproduced it locally now. Working on a patch today. | 2021-06-15T19:55:06 |
getredash/redash | 5,623 | getredash__redash-5623 | [
"5622"
] | 143d22db04a9058966b8c7d678b06f228b937326 | diff --git a/redash/query_runner/sqlite.py b/redash/query_runner/sqlite.py
--- a/redash/query_runner/sqlite.py
+++ b/redash/query_runner/sqlite.py
@@ -29,7 +29,7 @@ def __init__(self, configuration):
def _get_tables(self, schema):
query_table = "select tbl_name from sqlite_master where type='table'"
- query_columns = "PRAGMA table_info(%s)"
+ query_columns = "PRAGMA table_info(\"%s\")"
results, error = self.run_query(query_table, None)
| Loading schema for Sqlite DB with "Order" column name fails
### Issue Summary
I added a SQLite database which has a column with the name `Order`.
When I try to create a query, the error `Schema refresh failed.` comes up.
### Steps to Reproduce
1. Add a SQLite database which has a column with the name `Order`
2. Try to create a query
3. Get the error `Schema refresh failed.`
### Technical details:
* Redash Version: cloned from master
* Browser/OS: Brave Browser & Ubuntu 18.1
* How did you install Redash: built from source
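For what it's worth, the failure can be reproduced outside Redash with the standard `sqlite3` module. A minimal sketch, using an in-memory database with a table (and column) named `Order` as a stand-in for the reporter's file: `ORDER` is a reserved keyword, so the unquoted `PRAGMA table_info` call used during schema refresh breaks, while the quoted form (what the merged patch switches to) works.
```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "Order" ("Order" TEXT)')

# conn.execute("PRAGMA table_info(Order)")   # raises sqlite3.OperationalError (reserved keyword)
rows = conn.execute('PRAGMA table_info("Order")').fetchall()
print(rows)  # [(0, 'Order', 'TEXT', 0, None, 0)]
```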
| 2021-10-15T12:31:40 |
||
getredash/redash | 5,697 | getredash__redash-5697 | [
"4469"
] | 12c475068483ba1cc2525e73ecd20843bf435922 | diff --git a/migrations/versions/fd4fc850d7ea_.py b/migrations/versions/fd4fc850d7ea_.py
new file mode 100644
--- /dev/null
+++ b/migrations/versions/fd4fc850d7ea_.py
@@ -0,0 +1,60 @@
+"""Convert user details to jsonb and move user profile image url into details column
+
+Revision ID: fd4fc850d7ea
+Revises: 89bc7873a3e0
+Create Date: 2022-01-31 15:24:16.507888
+
+"""
+from alembic import op
+import sqlalchemy as sa
+from sqlalchemy.dialects import postgresql
+
+from redash.models import db
+
+# revision identifiers, used by Alembic.
+revision = 'fd4fc850d7ea'
+down_revision = '89bc7873a3e0'
+branch_labels = None
+depends_on = None
+
+
+def upgrade():
+ connection = op.get_bind()
+
+ ### commands auto generated by Alembic - please adjust! ###
+ op.alter_column('users', 'details',
+ existing_type=postgresql.JSON(astext_type=sa.Text()),
+ type_=postgresql.JSONB(astext_type=sa.Text()),
+ existing_nullable=True,
+ existing_server_default=sa.text("'{}'::jsonb"))
+ ### end Alembic commands ###
+
+ update_query = """
+ update users
+ set details = details::jsonb || ('{"profile_image_url": "' || profile_image_url || '"}')::jsonb
+ where 1=1
+ """
+ connection.execute(update_query)
+ op.drop_column("users", "profile_image_url")
+
+
+def downgrade():
+ # ### commands auto generated by Alembic - please adjust! ###
+ connection = op.get_bind()
+ op.add_column("users", sa.Column("profile_image_url", db.String(320), nullable=True))
+
+ update_query = """
+ update users set
+ profile_image_url = details->>'profile_image_url',
+ details = details - 'profile_image_url' ;
+ """
+
+ connection.execute(update_query)
+ db.session.commit()
+ op.alter_column('users', 'details',
+ existing_type=postgresql.JSONB(astext_type=sa.Text()),
+ type_=postgresql.JSON(astext_type=sa.Text()),
+ existing_nullable=True,
+ existing_server_default=sa.text("'{}'::json"))
+
+ # ### end Alembic commands ###
diff --git a/redash/models/__init__.py b/redash/models/__init__.py
--- a/redash/models/__init__.py
+++ b/redash/models/__init__.py
@@ -1118,7 +1118,7 @@ def all(cls, org, group_ids, user_id):
query = (
Dashboard.query.options(
joinedload(Dashboard.user).load_only(
- "id", "name", "_profile_image_url", "email"
+ "id", "name", "details", "email"
)
).distinct(Dashboard.created_at, Dashboard.slug)
.outerjoin(Widget)
diff --git a/redash/models/users.py b/redash/models/users.py
--- a/redash/models/users.py
+++ b/redash/models/users.py
@@ -85,7 +85,6 @@ class User(
org = db.relationship("Organization", backref=db.backref("users", lazy="dynamic"))
name = Column(db.String(320))
email = Column(EmailType)
- _profile_image_url = Column("profile_image_url", db.String(320), nullable=True)
password_hash = Column(db.String(128), nullable=True)
group_ids = Column(
"groups", MutableList.as_mutable(postgresql.ARRAY(key_type("Group"))), nullable=True
@@ -94,7 +93,7 @@ class User(
disabled_at = Column(db.DateTime(True), default=None, nullable=True)
details = Column(
- MutableDict.as_mutable(postgresql.JSON),
+ MutableDict.as_mutable(postgresql.JSONB),
nullable=True,
server_default="{}",
default={},
@@ -102,6 +101,9 @@ class User(
active_at = json_cast_property(
db.DateTime(True), "details", "active_at", default=None
)
+ _profile_image_url = json_cast_property(
+ db.Text(), "details", "profile_image_url", default=None
+ )
is_invitation_pending = json_cast_property(
db.Boolean(True), "details", "is_invitation_pending", default=False
)
| Redash can't create user from Google with long profile_image_url
### Redash can't create user with long profile_image_url
My coworker tried to login to redash and it failed.
```
[2019-12-20 14:32:12,927] ERROR in app: Exception on /oauth/google_callback [GET]
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1982, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1614, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python2.7/site-packages/flask_restful/__init__.py", line 271, in error_router
return original_handler(e)
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1517, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1612, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1598, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/app/redash/authentication/google_oauth.py", line 101, in authorized
user = create_and_login_user(org, profile['name'], profile['email'], picture_url)
File "/app/redash/authentication/__init__.py", line 257, in create_and_login_user
models.db.session.commit()
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/scoping.py", line 153, in do
return getattr(self.registry(), name)(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 943, in commit
self.transaction.commit()
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 467, in commit
self._prepare_impl()
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 447, in _prepare_impl
self.session.flush()
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2254, in flush
self._flush(objects)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2380, in _flush
transaction.rollback(_capture_exception=True)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2344, in _flush
flush_context.execute()
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 391, in execute
rec.execute(self)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 556, in execute
uow
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 181, in save_obj
mapper, table, insert)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 866, in _emit_insert_statements
execute(statement, params)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 948, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
compiled_sql, distilled_params
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
context)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1413, in _handle_dbapi_exception
exc_info
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 265, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
context)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 509, in do_execute
cursor.execute(statement, parameters)
DataError: (psycopg2.DataError) value too long for type character varying(320)
[SQL: 'INSERT INTO users (updated_at, created_at, org_id, name, email, profile_image_url, password_hash, groups, api_key, disabled_at, details) VALUES (now(), now(), %(org_id)s, %(name)s, %(email)s, %(profile_image_url)s, %(password_hash)s, %(groups)s, %(api_key)s, %(disabled_at)s, %(details)s) RETURNING users.id'] [parameters: {'name': u'<redacted>', 'org_id': <redacted>, 'profile_image_url': u'https://lh3.googleusercontent.com/a-/AAuE7mCQC1y8a0Bew0vZ3zVr835IDK1pq8_J75Jy4YNUwe2TdaYqr8vJBF1eQB8k5u6kooonWTfrnVdpOjR3_Epvit-sKbkbjq12GgcW6qv1iva ... (517 characters truncated) ... JfmDWlN_ESNOyJu6JRgNKLqFN5pJQJQ44IcS0OEt5ozElvbEV35vX7sw-OBptVnBUPW4wy9cElsIhnw8ISHgp8zSqJhwQfrn5bII6fN42EMrq1_sv66KBAm-0NIit0QYWkocdT58V4PClb8?sz=40', 'disabled_at': None, 'details': '{"is_invitation_pending": false}', 'groups': [2], 'api_key': '<redacted>', 'email': u'<redacted>', 'password_hash': None}] (Background on this error at: http://sqlalche.me/e/9h9h)
```
It turns out he had the basic auto-generated one-letter avatar ([more info](https://gsuiteupdates.googleblog.com/2015/09/change-to-default-avatar-for-google.html)), for which Google returns a profile_image_url 815 characters long. Maybe they generate it on the fly, who knows. We changed this auto-generated avatar to a normal one and the login succeeded.
### Steps to Reproduce
1. Have a user in the identity provider (G Suite in our case) with a profile_image_url longer than 320 characters
2. Try to log in for the first time
3. Get an internal server error
### Technical details:
* Redash Version: 8.0
* Browser/OS: Chrome
* How did you install Redash: Helmchart
| Ran across this issue as well today. User uploaded the same picture again, and was given a smaller URL which worked.
Can we get the schema updated to allow for extra length in this field?
Let's move this to the `details` JSON column we have on the User model and then we won't have any such or similar issues.
Another interim quick solution to allow creating the user is to check the length of the URL before assigning it (and assign null if it's too long).
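A minimal sketch of that interim check (a hypothetical helper, not the shipped fix; the 320-character limit comes from the `varchar(320)` column in the traceback above):
```python
# Hypothetical guard: drop the avatar URL instead of failing the INSERT
# when it exceeds the users.profile_image_url column limit.
MAX_PROFILE_IMAGE_URL_LENGTH = 320  # assumed from the varchar(320) error above

def safe_picture_url(picture_url):
    if picture_url and len(picture_url) > MAX_PROFILE_IMAGE_URL_LENGTH:
        return None  # fall back to the default avatar
    return picture_url

# e.g. in the Google OAuth callback, before creating the user:
# user = create_and_login_user(org, profile['name'], profile['email'],
#                              safe_picture_url(picture_url))
```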
This is what I did (as a workaround) to address it (My redash is in docker):
```sql
docker exec redash_postgres_1 su postgres -c "psql -d postgres -c 'alter table users add column profile_image_url_new varchar null'"
docker exec redash_postgres_1 su postgres -c "psql -d postgres -c 'update users set profile_image_url_new=profile_image_url'"
docker exec redash_postgres_1 su postgres -c "psql -d postgres -c 'alter table users drop column profile_image_url'"
docker exec redash_postgres_1 su postgres -c "psql -d postgres -c 'alter table users rename column profile_image_url_new to profile_image_url'"
```
> This is what I did (as a workaround) to address it (My redash is in docker):
>
> ```sql
> docker exec redash_postgres_1 su postgres -c "psql -d postgres -c 'alter table users add column profile_image_url_new varchar null'"
> docker exec redash_postgres_1 su postgres -c "psql -d postgres -c 'update users set profile_image_url_new=profile_image_url'"
> docker exec redash_postgres_1 su postgres -c "psql -d postgres -c 'alter table users drop column profile_image_url'"
> docker exec redash_postgres_1 su postgres -c "psql -d postgres -c 'alter table users rename column profile_image_url_new to profile_image_url'"
> ```
If this is necessary through the long-term it would be best to write an alembic migration for it. | 2022-01-31T18:14:06 |
|
getredash/redash | 5,734 | getredash__redash-5734 | [
"5733"
] | e6ebef1e5ab866ce1e706eaee6260edaffdc2bd7 | diff --git a/redash/query_runner/mongodb.py b/redash/query_runner/mongodb.py
--- a/redash/query_runner/mongodb.py
+++ b/redash/query_runner/mongodb.py
@@ -221,15 +221,21 @@ def _get_collection_fields(self, db, collection_name):
# document written.
collection_is_a_view = self._is_collection_a_view(db, collection_name)
documents_sample = []
- if collection_is_a_view:
- for d in db[collection_name].find().limit(2):
- documents_sample.append(d)
- else:
- for d in db[collection_name].find().sort([("$natural", 1)]).limit(1):
- documents_sample.append(d)
-
- for d in db[collection_name].find().sort([("$natural", -1)]).limit(1):
- documents_sample.append(d)
+ try:
+ if collection_is_a_view:
+ for d in db[collection_name].find().limit(2):
+ documents_sample.append(d)
+ else:
+ for d in db[collection_name].find().sort([("$natural", 1)]).limit(1):
+ documents_sample.append(d)
+
+ for d in db[collection_name].find().sort([("$natural", -1)]).limit(1):
+ documents_sample.append(d)
+ except Exception as ex:
+ template = "An exception of type {0} occurred. Arguments:\n{1!r}"
+ message = template.format(type(ex).__name__, ex.args)
+ logger.error(message)
+ return []
columns = []
for d in documents_sample:
self._merge_property_names(columns, d)
@@ -242,10 +248,11 @@ def get_schema(self, get_stats=False):
if collection_name.startswith("system."):
continue
columns = self._get_collection_fields(db, collection_name)
- schema[collection_name] = {
- "name": collection_name,
- "columns": sorted(columns),
- }
+ if columns:
+ schema[collection_name] = {
+ "name": collection_name,
+ "columns": sorted(columns),
+ }
return list(schema.values())
| Error loading MongoDB collections
### Issue Summary
When you create a MongoDB data source using a MongoDB user which has access to a database but doesn't have privileges to find records in a specific collection under that database, Redash can't refresh the schema, because it tries to get a data sample even though the user doesn't have access to the collection. That probably happens because the list_collections command returns a list of all collections regardless of whether the user has access to their data or not.
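A small pymongo sketch of the behaviour described above (connection details and collection names are placeholders; the restricted user is assumed to have `find` on only some collections):
```python
from pymongo import MongoClient
from pymongo.errors import OperationFailure

# Placeholder URI for a user whose role only grants `find` on some collections.
client = MongoClient("mongodb://restricted_user:secret@localhost:27017/mydb")
db = client["mydb"]

# listCollections still returns every collection name...
print(db.list_collection_names())

# ...but sampling documents from a collection the role does not cover fails,
# which is what used to break the schema refresh before the try/except above.
try:
    db["forbidden_collection"].find_one()
except OperationFailure as exc:
    print("not authorized:", exc)
```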
### Steps to Reproduce
1. Create a role in mongodb and give access only to certain collections.
2. Create a user in mongodb and assign the previous role to it.
3. Create a data source in redash.
4. Try selecting the newly created data source in the query page.
### Technical details:
* Redash Version: 10.1.0
* Browser/OS: Any
* How did you install Redash: Tried with Helm chart and aws market place
| 2022-04-12T04:42:08 |
||
getredash/redash | 5,812 | getredash__redash-5812 | [
"5811"
] | 90cd27fa25889b18b28f08b36fa845682100e39c | diff --git a/redash/query_runner/mssql_odbc.py b/redash/query_runner/mssql_odbc.py
--- a/redash/query_runner/mssql_odbc.py
+++ b/redash/query_runner/mssql_odbc.py
@@ -114,9 +114,9 @@ def run_query(self, query, user):
port = self.configuration.get("port", 1433)
charset = self.configuration.get("charset", "UTF-8")
- connection_string_fmt = "DRIVER={{ODBC Driver 17 for SQL Server}};PORT={};SERVER={};DATABASE={};UID={};PWD={}"
+ connection_string_fmt = "DRIVER={{ODBC Driver 17 for SQL Server}};SERVER={},{};DATABASE={};UID={};PWD={}"
connection_string = connection_string_fmt.format(
- port, server, db, user, password
+ server, port, db, user, password
)
if self.configuration.get("use_ssl", False):
 | Timing out when connecting to an MSSQL database on a non-default port using the ODBC driver
I had to use the "Microsoft SQL Server (ODBC)" data source because the "Microsoft SQL Server" one does not currently support using SSL. However, when trying to connect to my server on a port different from 1433, the connection timed out.
After a bit of digging, I found this:
> Microsoft's ODBC drivers for SQL Server do not use a PORT= parameter. The port number, if any, is appended to the server name/IP with a comma
source: https://stackoverflow.com/a/50051708/1277401
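The patch above switches to exactly this comma form. A short pyodbc sketch for reference (server, database and credentials are placeholders):
```python
import pyodbc

# With Microsoft's ODBC driver the port is appended to the server with a comma
# instead of a separate PORT= keyword (all values here are placeholders).
connection_string = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=db.example.com,14330;"  # host,port
    "DATABASE=mydb;"
    "UID=redash;PWD=secret;"
    "Encrypt=yes;"
)
conn = pyodbc.connect(connection_string)
```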
| 2022-08-04T20:57:03 |
||
getredash/redash | 6,497 | getredash__redash-6497 | [
"6496"
] | 09ec299e6556bcd7234e935dc661d4fec05e70ae | diff --git a/redash/models/users.py b/redash/models/users.py
--- a/redash/models/users.py
+++ b/redash/models/users.py
@@ -5,8 +5,7 @@
from functools import reduce
from operator import or_
-from flask import current_app as app
-from flask import request_started, url_for
+from flask import current_app, request_started, url_for
from flask_login import AnonymousUserMixin, UserMixin, current_user
from passlib.apps import custom_app_context as pwd_context
from sqlalchemy.dialects import postgresql
@@ -129,7 +128,7 @@ def regenerate_api_key(self):
def to_dict(self, with_api_key=False):
profile_image_url = self.profile_image_url
if self.is_disabled:
- assets = app.extensions["webpack"]["assets"] or {}
+ assets = current_app.extensions["webpack"]["assets"] or {}
path = "images/avatar.svg"
profile_image_url = url_for("static", filename=assets.get(path, path))
@@ -158,7 +157,8 @@ def to_dict(self, with_api_key=False):
return d
- def is_api_user(self):
+ @staticmethod
+ def is_api_user():
return False
@property
@@ -377,7 +377,8 @@ class AnonymousUser(AnonymousUserMixin, PermissionsCheckMixin):
def permissions(self):
return []
- def is_api_user(self):
+ @staticmethod
+ def is_api_user():
return False
@@ -397,7 +398,8 @@ def __init__(self, api_key, org, groups, name=None):
def __repr__(self):
return "<{}>".format(self.name)
- def is_api_user(self):
+ @staticmethod
+ def is_api_user():
return True
@property
@@ -410,5 +412,9 @@ def org_id(self):
def permissions(self):
return ["view_query"]
- def has_access(self, obj, access_type):
+ @staticmethod
+ def has_access(obj, access_type):
return False
+
+ def get_actual_user(self):
+ return repr(self)
| diff --git a/tests/models/test_users.py b/tests/models/test_users.py
--- a/tests/models/test_users.py
+++ b/tests/models/test_users.py
@@ -1,5 +1,5 @@
from redash import redis_connection
-from redash.models import User, db
+from redash.models import ApiUser, User, db
from redash.models.users import LAST_ACTIVE_KEY, sync_last_active_at
from redash.utils import dt_from_timestamp
from tests import BaseTestCase, authenticated_user
@@ -103,3 +103,16 @@ def test_sync(self):
user_reloaded = User.query.filter(User.id == user.id).first()
self.assertIn("active_at", user_reloaded.details)
self.assertEqual(user_reloaded.active_at, timestamp)
+
+
+class TestUserGetActualUser(BaseTestCase):
+ def test_default_user(self):
+ user_email = "[email protected]"
+ user = self.factory.create_user(email=user_email)
+ self.assertEqual(user.get_actual_user(), user_email)
+
+ def test_api_user(self):
+ user_email = "[email protected]"
+ user = self.factory.create_user(email=user_email)
+ api_user = ApiUser(user.api_key, user.org, user.group_ids)
+ self.assertEqual(api_user.get_actual_user(), repr(api_user))
| Shared queries don't work
### Issue Summary
Embedded query shows an error.
### Steps to Reproduce
1. Create query
2. Click on `Embed elsewhere` -> `Public URL`
3. Open that URL
4. See `Error: Internal Server Error`
```
[2023-10-03 10:24:37,182][PID:10][ERROR][redash.app] Exception on /api/queries/1696/results [POST]
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/flask_login/utils.py", line 277, in decorated_view
return current_app.ensure_sync(func)(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/flask/views.py", line 109, in view
return current_app.ensure_sync(self.dispatch_request)(**kwargs)
File "/app/redash/handlers/base.py", line 31, in dispatch_request
return super(BaseResource, self).dispatch_request(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/flask_restful/__init__.py", line 604, in dispatch_request
resp = meth(*args, **kwargs)
File "/app/redash/permissions.py", line 71, in decorated
return fn(*args, **kwargs)
File "/app/redash/handlers/query_results.py", line 270, in post
return run_query(
File "/app/redash/handlers/query_results.py", line 109, in run_query
"Username": current_user.get_actual_user(),
AttributeError: 'ApiUser' object has no attribute 'get_actual_user'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/usr/local/lib/python3.8/site-packages/flask_restful/__init__.py", line 489, in wrapper
resp = resource(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/flask_login/utils.py", line 279, in decorated_view
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/flask/views.py", line 109, in view
return current_app.ensure_sync(self.dispatch_request)(**kwargs)
File "/app/redash/handlers/base.py", line 31, in dispatch_request
return super(BaseResource, self).dispatch_request(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/flask_restful/__init__.py", line 604, in dispatch_request
resp = meth(*args, **kwargs)
File "/app/redash/permissions.py", line 71, in decorated
return fn(*args, **kwargs)
File "/app/redash/handlers/query_results.py", line 270, in post
return run_query(
File "/app/redash/handlers/query_results.py", line 109, in run_query
"Username": current_user.get_actual_user(),
AttributeError: 'ApiUser' object has no attribute 'get_actual_user'
```
### Technical details:
* Redash Version: master branch
* How did you install Redash: helm
| 2023-10-03T11:35:45 |
|
getredash/redash | 6,505 | getredash__redash-6505 | [
"6179"
] | 138339a8a4ad8b2096eee306a60ce038956f1ee5 | diff --git a/redash/query_runner/influx_db.py b/redash/query_runner/influx_db.py
--- a/redash/query_runner/influx_db.py
+++ b/redash/query_runner/influx_db.py
@@ -1,6 +1,12 @@
import logging
-from redash.query_runner import BaseQueryRunner, register
+from redash.query_runner import (
+ TYPE_FLOAT,
+ TYPE_INTEGER,
+ TYPE_STRING,
+ BaseQueryRunner,
+ register,
+)
from redash.utils import json_dumps
logger = logging.getLogger(__name__)
@@ -14,25 +20,36 @@
enabled = False
+TYPES_MAP = {
+ str: TYPE_STRING,
+ int: TYPE_INTEGER,
+ float: TYPE_FLOAT,
+}
+
+
+def _get_type(value):
+ return TYPES_MAP.get(type(value), TYPE_STRING)
+
+
def _transform_result(results):
- result_columns = []
+ column_names = []
result_rows = []
for result in results:
for series in result.raw.get("series", []):
for column in series["columns"]:
- if column not in result_columns:
- result_columns.append(column)
+ if column not in column_names:
+ column_names.append(column)
tags = series.get("tags", {})
for key in tags.keys():
- if key not in result_columns:
- result_columns.append(key)
+ if key not in column_names:
+ column_names.append(key)
for result in results:
for series in result.raw.get("series", []):
for point in series["values"]:
result_row = {}
- for column in result_columns:
+ for column in column_names:
tags = series.get("tags", {})
if column in tags:
result_row[column] = tags[column]
@@ -42,7 +59,12 @@ def _transform_result(results):
result_row[column] = value
result_rows.append(result_row)
- return json_dumps({"columns": [{"name": c} for c in result_columns], "rows": result_rows})
+ if len(result_rows) > 0:
+ result_columns = [{"name": c, "type": _get_type(result_rows[0][c])} for c in result_rows[0].keys()]
+ else:
+ result_columns = [{"name": c, "type": TYPE_STRING} for c in column_names]
+
+ return json_dumps({"columns": result_columns, "rows": result_rows})
class InfluxDB(BaseQueryRunner):
| diff --git a/tests/query_runner/test_influx_db.py b/tests/query_runner/test_influx_db.py
new file mode 100644
--- /dev/null
+++ b/tests/query_runner/test_influx_db.py
@@ -0,0 +1,58 @@
+import json
+
+from influxdb.resultset import ResultSet
+
+from redash.query_runner import (
+ TYPE_FLOAT,
+ TYPE_INTEGER,
+ TYPE_STRING,
+)
+from redash.query_runner.influx_db import _transform_result
+
+raw = {
+ "series": [
+ {
+ "name": "typetest",
+ "columns": ["time", "k1", "v1", "v2"],
+ "values": [
+ ["2023-10-06T13:30:51.323358136Z", "foo", 0.5, 2],
+ ["2023-10-06T13:31:08.882953339Z", "bar", 0.6, 4],
+ ],
+ }
+ ]
+}
+
+raw_no_rows = {"series": [{"name": "typetest", "columns": ["time", "k1", "v1", "v2"], "values": []}]}
+
+
+def test_influxdb_result_types_with_rows():
+ result = ResultSet(raw)
+ transformed = _transform_result([result])
+ expected = {
+ "columns": [
+ {"name": "time", "type": TYPE_STRING},
+ {"name": "k1", "type": TYPE_STRING},
+ {"name": "v1", "type": TYPE_FLOAT},
+ {"name": "v2", "type": TYPE_INTEGER},
+ ],
+ "rows": [
+ {"k1": "foo", "time": "2023-10-06T13:30:51.323358136Z", "v1": 0.5, "v2": 2},
+ {"k1": "bar", "time": "2023-10-06T13:31:08.882953339Z", "v1": 0.6, "v2": 4},
+ ],
+ }
+ assert json.loads(transformed) == expected
+
+
+def test_influxdb_result_types_with_no_rows_are_string():
+ result = ResultSet(raw_no_rows)
+ transformed = _transform_result([result])
+ expected = {
+ "columns": [
+ {"name": "time", "type": TYPE_STRING},
+ {"name": "k1", "type": TYPE_STRING},
+ {"name": "v1", "type": TYPE_STRING},
+ {"name": "v2", "type": TYPE_STRING},
+ ],
+ "rows": [],
+ }
+ assert json.loads(transformed) == expected
| Error when running CSV download with InfluxDB as data source
Below is the error log for the Redash application.
Downloading CSV with InfluxDB as the data source causes an error.
The error log shows the following error.
Since the test connection succeeds, we believe this is an application issue in Redash.
```
server_1 | KeyError: 'type'
server_1 | [2023-07-07 05:01:02,196][PID:9087][ERROR][redash.app] Exception on /api/queries/71/results/2093.csv [GET]
server_1 | Traceback (most recent call last):
server_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request
server_1 | rv = self.dispatch_request()
server_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request
server_1 | return self.view_functions[rule.endpoint](**req.view_args)
server_1 | File "/usr/local/lib/python3.7/site-packages/flask_restful/__init__.py", line 458, in wrapper
server_1 | resp = resource(*args, **kwargs)
server_1 | File "/usr/local/lib/python3.7/site-packages/flask_login/utils.py", line 261, in decorated_view
server_1 | return func(*args, **kwargs)
server_1 | File "/usr/local/lib/python3.7/site-packages/flask/views.py", line 89, in view
server_1 | return self.dispatch_request(*args, **kwargs)
server_1 | File "/app/redash/handlers/base.py", line 33, in dispatch_request
server_1 | return super(BaseResource, self).dispatch_request(*args, **kwargs)
server_1 | File "/usr/local/lib/python3.7/site-packages/flask_restful/__init__.py", line 573, in dispatch_request
server_1 | resp = meth(*args, **kwargs)
server_1 | File "/app/redash/permissions.py", line 71, in decorated
server_1 | return fn(*args, **kwargs)
server_1 | File "/app/redash/handlers/query_results.py", line 409, in get
server_1 | response = response_builders[filetype](query_result)
server_1 | File "/app/redash/handlers/query_results.py", line 439, in make_csv_response
server_1 | serialize_query_result_to_dsv(query_result, ","), 200, headers
server_1 | File "/app/redash/serializers/query_result.py", line 87, in serialize_query_result_to_dsv
server_1 | fieldnames, special_columns = _get_column_lists(query_data["columns"] or [])
server_1 | File "/app/redash/serializers/query_result.py", line 68, in _get_column_lists
server_1 | if col["type"] == col_type:
server_1 | KeyError: 'type'
```
Below is the JSON data when CSV download fails.
(Some parts are excerpted due to the length of the file.)
```
{
"query_result": {
"id": 2093,
"query": "SELECT\n SUM(d3) / 60\nFROM\n charger \nWHERE\n chg_id = '1'\n AND time >= '2023-04-30T15:00:00Z'\n AND time < '2023-05-03T15:00:00Z'\nGROUP BY chg_id, time(15m) fill(linear)\nlimit 10;",
"data": {
"columns": [{ "name": "time" }, { "name": "sum" }, { "name": "chg_id" }],
"rows": [
{ "time": "2023-04-30T15:00:00Z", "sum": 0, "chg_id": "1" },
]
}
}
}
```
Below is the JSON data for a successful CSV download.
```
"columns": [
{ "name": "id", "friendly_name": "id", "type": "integer" },
{
"name": "project_id",
"friendly_name": "project_id",
"type": "integer"
},
{ "name": "name", "friendly_name": "name", "type": "string" },
{ "name": "filters", "friendly_name": "filters", "type": null },
{ "name": "user_id", "friendly_name": "user_id", "type": "integer" },
{
"name": "column_names",
"friendly_name": "column_names",
"type": null
},
{
```
In the normal case, each column entry contains separate name, friendly_name and type elements.
In the failing data, each of the three column entries contains only the name element.
Therefore, the required type element was missing, resulting in the error.
The following is a part of the program that caused the error.
```
for col in columns:
fieldnames.append(col["name"])
for col_type in special_types.keys():
if col["type"] == col_type:
special_columns[col["name"]] = special_types[col_type]
```
This is just a guess, but it seems to me that the query results returned by InfluxDB are special and are not being converted properly in Redash, which is what is causing the problem.
All text translated by Google Translate.
| That's probably a useful bug report, but it's missing which version of Redash this is happening with. (?)
Oh, sorry. The version is 10.0.0 (9c928bd1).
Thank you
Thanks, that does sound like a bug we should look at. :smile:
@justinclift It looks like that InfluxDB v1 REST API and InfluxDBClient don't return a column type. So, the Query Runner for InfluxDB v1 doesn't return a column type.
https://github.com/getredash/redash/blob/3d32c55531d0b2148a51b2ac4201feebb495d818/redash/query_runner/influx_db.py#L45
My option is to take the value from the first row of the query results returned by InfluxDBClient and derive the column type from the Python type of that value. Can I create a PR for this?
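A minimal sketch of that idea, essentially what the patch above ended up doing (the type constants here are stand-ins for the ones in `redash.query_runner`):
```python
# Derive a Redash column type from the Python type of a sample value.
TYPE_STRING, TYPE_INTEGER, TYPE_FLOAT = "string", "integer", "float"
TYPES_MAP = {str: TYPE_STRING, int: TYPE_INTEGER, float: TYPE_FLOAT}

def column_type_for(value):
    return TYPES_MAP.get(type(value), TYPE_STRING)

# Sample row shaped like the failing result in the issue above.
first_row = {"time": "2023-04-30T15:00:00Z", "sum": 0, "chg_id": "1"}
columns = [{"name": k, "type": column_type_for(v)} for k, v in first_row.items()]
print(columns)  # each column entry now carries a "type" element
```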
@masayuki038 That sounds like a decent idea, and a reasonable foundation to possibly extend later on, if we need to adjust/improve the type detection. :smile: | 2023-10-07T05:50:58 |
getredash/redash | 6,561 | getredash__redash-6561 | [
"6560",
"6560"
] | 39e4ea155c1b1929130a527f023ad67fa047a6f3 | diff --git a/redash/handlers/organization.py b/redash/handlers/organization.py
--- a/redash/handlers/organization.py
+++ b/redash/handlers/organization.py
@@ -15,7 +15,7 @@ def organization_status(org_slug=None):
"data_sources": models.DataSource.all(current_org, group_ids=current_user.group_ids).count(),
"queries": models.Query.all_queries(current_user.group_ids, current_user.id, include_drafts=True).count(),
"dashboards": models.Dashboard.query.filter(
- models.Dashboard.org == current_org, models.Dashboard.is_archived is False
+ models.Dashboard.org == current_org, models.Dashboard.is_archived.is_(False)
).count(),
}
 | The 'Create your first Dashboard' newbie link will not disappear even if I create dashboards
### Issue Summary
The 'Create your first Dashboard' newbie link will not disappear even if I create dashboards. The other newbie links work fine. I tried a completely new Redash instance, and this issue still exists. I remember there was a commit related to the newbie link recently, but I cannot find which one. This issue does not exist in the previous Docker preview image, so I assume it is related to recent commits.
### Steps to Reproduce
1. Create new dashboards.
2. The link is still there.
<img width="280" alt="image" src="https://github.com/getredash/redash/assets/8188177/19555165-b2df-4b07-89cf-7443858ca704">
### Technical details:
* Redash Version: 23.10.0-dev (dev)
* Browser/OS: Chrome 118
* How did you install Redash: Docker
| Well, that's clearly a bug. :wink:
This seems to be a bug introduced by #6167.
https://github.com/getredash/redash/commit/9b2f635692741396def5b78e0ba0d564a852de6f#diff-f7dc04ba20ba742413851922998a25301eec334c2f3836eb5d1e99b730ddddefR18
It seems to stem from SQLAlchemy, specifically: the `==` comparison operator is overloaded for `Column` but you can't overload `is`. So it's comparing a `Column` to `False`, and that's clearly not identical. [Relevant link](https://stackoverflow.com/a/48275373)
It seems elsewhere in the code that defines `Dashboard`, we use `is_`, so that should be an easy change to make.
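A tiny self-contained illustration of the difference (the `Dash` model below is a stand-in, not Redash's actual `Dashboard` class; SQLAlchemy 1.4+ assumed):
```python
from sqlalchemy import Boolean, Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Dash(Base):  # stand-in for the real Dashboard model
    __tablename__ = "dash"
    id = Column(Integer, primary_key=True)
    is_archived = Column(Boolean, default=False)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([Dash(is_archived=False), Dash(is_archived=True)])
    session.commit()

    # Plain Python `is` compares the Column object itself to False, so the
    # condition collapses to the constant False and never filters anything.
    print(Dash.is_archived is False)  # -> False

    # The overloadable form builds the intended SQL predicate.
    print(session.query(Dash).filter(Dash.is_archived.is_(False)).count())  # -> 1
```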
| 2023-10-29T13:31:51 |
|
getredash/redash | 6,652 | getredash__redash-6652 | [
"6636"
] | 2b4b1cf7e3a53a7f78ee9e1a43a44864d15cfe76 | diff --git a/redash/query_runner/google_spreadsheets.py b/redash/query_runner/google_spreadsheets.py
--- a/redash/query_runner/google_spreadsheets.py
+++ b/redash/query_runner/google_spreadsheets.py
@@ -23,6 +23,7 @@
try:
import google.auth
import gspread
+ from google.auth.exceptions import GoogleAuthError
from google.oauth2.service_account import Credentials
from gspread.exceptions import APIError
from gspread.exceptions import WorksheetNotFound as GSWorksheetNotFound
@@ -230,13 +231,17 @@ def _get_spreadsheet_service(self):
return spreadsheetservice
def test_connection(self):
- service = self._get_spreadsheet_service()
test_spreadsheet_key = "1S0mld7LMbUad8LYlo13Os9f7eNjw57MqVC0YiCd1Jis"
try:
+ service = self._get_spreadsheet_service()
service.open_by_key(test_spreadsheet_key).worksheets()
except APIError as e:
+ logger.exception(e)
message = parse_api_error(e)
raise Exception(message)
+ except GoogleAuthError as e:
+ logger.exception(e)
+ raise Exception(str(e))
def run_query(self, query, user):
logger.debug("Spreadsheet is about to execute query: %s", query)
| diff --git a/tests/query_runner/test_google_spreadsheets.py b/tests/query_runner/test_google_spreadsheets.py
--- a/tests/query_runner/test_google_spreadsheets.py
+++ b/tests/query_runner/test_google_spreadsheets.py
@@ -1,12 +1,16 @@
import datetime
from unittest import TestCase
-from mock import MagicMock
+import pytest
+from google.auth.exceptions import TransportError
+from gspread.exceptions import APIError
+from mock import MagicMock, patch
from redash.query_runner import TYPE_DATETIME, TYPE_FLOAT
from redash.query_runner.google_spreadsheets import (
TYPE_BOOLEAN,
TYPE_STRING,
+ GoogleSpreadsheet,
WorksheetNotFoundByTitleError,
WorksheetNotFoundError,
_get_columns_and_column_names,
@@ -171,3 +175,32 @@ def test_is_url_key(self):
_key = "key|0"
self.assertFalse(is_url_key(_key))
+
+
+class TestConnection(TestCase):
+ @patch("redash.query_runner.google_spreadsheets.google.auth.default")
+ @patch("redash.query_runner.google_spreadsheets.gspread.Client")
+ def test_connect_succuess(self, mock_client, _mock_auth_default):
+ try:
+ qr_gspread = GoogleSpreadsheet({})
+ qr_gspread.test_connection()
+ mock_client().login.assert_called_once_with()
+ mock_client().open_by_key.assert_called_once()
+ except Exception:
+ self.fail("test_connection failed")
+
+ @patch("redash.query_runner.google_spreadsheets.google.auth.default")
+ def test_connect_fail_with_transport_error(self, mock_auth_default):
+ mock_auth_default.side_effect = TransportError("Connection Refused")
+ qr_gspread = GoogleSpreadsheet({})
+ with pytest.raises(Exception):
+ qr_gspread.test_connection()
+
+ @patch("redash.query_runner.google_spreadsheets.google.auth.default")
+ def test_connect_fail_with_api_error(self, mock_auth_default):
+ mock_response = MagicMock()
+ mock_response.json.return_value = {"error": {"message": "Sheet API is disabled"}}
+ mock_auth_default.side_effect = APIError(mock_response)
+ qr_gspread = GoogleSpreadsheet({})
+ with pytest.raises(Exception):
+ qr_gspread.test_connection()
 | For a self-hosted instance, the Google Sheets connection test failed, but I can't find any logs. How do I troubleshoot such cases?


When the connection test times out, I can't find any logs in the worker/scheduler/server Docker logs.
Even after making sure the log level is debug, I can't find any related logs. How do I troubleshoot this?
 | I managed to reproduce this.
When clicking "Test Connection", it shows "Connection Test Failed:" at the bottom right of the screen. However, there is no error log in redash-server or redash-worker.
#### redash-server-1
```
[2023-11-30 14:12:50,676][PID:11][INFO][metrics] method=POST path=/api/data_sources/2/test endpoint=datasourcetestresource status=200 content_type=application/json content_length=28 duration=2086.25 query_count=4 query_duration=24.14
[2023-11-30 14:12:50,694][PID:11][INFO][werkzeug] 10.0.2.2 - - [30/Nov/2023 14:12:50] "POST /api/data_sources/2/test HTTP/1.1" 200 -
```
#### redash-worker-1
```
[2023-11-30 14:12:50,676][PID:7][INFO][rq.worker] default: 023f6ae5-21ae-4146-9482-2829b3d06e8e
[2023-11-30 14:12:50,798][PID:552][INFO][rq.worker] default: Job OK (023f6ae5-21ae-4146-9482-2829b3d06e8e)
[2023-11-30 14:12:50,799][PID:552][INFO][rq.worker] Result is kept for 500 seconds
```

By the way, the cause of the Connection Failure was that the Google Sheet API was not enabled. After enabling this, I ran "Test Connection" and it shows "Success".
I confirmed that the query can also be executed successfully. If you run a query with the Google Sheet API disabled, you will not get an error and will see 0 results. At this time, the following error is output in the redash-worker log.
```
File "/usr/local/lib/python3.8/site-packages/gspread/client.py", line 199, in open_by_key
spreadsheet = Spreadsheet(self, {"id": key})
File "/usr/local/lib/python3.8/site-packages/gspread/spreadsheet.py", line 37, in __init__
metadata = self.fetch_sheet_metadata()
File "/usr/local/lib/python3.8/site-packages/gspread/spreadsheet.py", line 247, in fetch_sheet_metadata
r = self.client.request("get", url, params=params)
File "/usr/local/lib/python3.8/site-packages/gspread/client.py", line 94, in request
raise APIError(response)
gspread.exceptions.APIError: {'code': 403, 'message': 'Google Sheets API has not been used in (...snip...)
```
Thanks masayuki038 for your kind reply. Actually I have two self-hosted instances: one is OK for the API test, the other is not, and from the Docker logs I can't tell any difference. I did the test with the same Google Sheets API and credentials, so I don't know how to troubleshoot the issue and fix the instance.
By the way, I found some evidence, but I am not sure what's going wrong.
For the non-working instance, in the server logs I can see "[INFO][werkzeug] 10.100.3.61 - - [30/Nov/2023 10:57:56] "GET /api/data_sources/1 HTTP/1.1" 200 -"; the IP address is not the IP of the server running the Docker containers.
For the working instance, in the server logs I can see "[INFO][werkzeug] 172.21.0.1 - - [30/Nov/2023 21:50:35] "GET /api/admin/queries/rq_status HTTP/1.1" 200 -"; the IP address is the gateway of the running Docker containers.
How did this happen?
@joeyrensh Could you try these?
1. For the not working instance, create a query with Google Sheets data source and run it even if "Test Connection" failed
2. Check **redash worker log**
You will find a traceback like the one I described in a previous comment. For security reasons, I omitted some parts in it, but they may contain information about the cause of the error. I noticed from this log that Google Sheet API is disabled.
Thanks @masayuki038 , follow your steps, I got below trace " HTTPSConnectionPool(host='oauth2.googleapis.com', port=443): Read timed out. (read timeout=120)" , do you know how to resolve it? From host, I can ping 'oauth2.googleapis.com' with success response.
@joeyrensh Thanks for your confirmation.
Since "Read timed out" was shown, I think this is a network issue. Even if ping is OK, we don't know whether TLS is OK or not. Please your network settings and gateways.
If you run this command on your Redash server and it returns 404 or any HTTP response code, you can access to googleapi server with TLS.
```
curl -XGET https://oauth2.googleapis.com -o /dev/null -w '%{http_code}\n' -s
```
@masayuki038 Following your steps, I think I have found some evidence; let me try to explain a little bit.
1. For the working instance, running the above command in the worker container gets a 404 response.
2. For the non-working instance, running the above command in the worker container gets no response; it just hangs there.
3. When I build the local image, I found that I need to add "network = host" mode to the docker compose build part, otherwise apt-get can't be updated.
So my guess is that Docker may have a network issue: it behaves differently from the host network, but I haven't found any problem in the settings or configuration, including iptables. Do you have any proposal for this?
Below is the redash_default network configuration; I found no issues in it.
[
{
"Name": "redash_default",
"Id": "1a5461c1f3b0fb298720fa6ab8221c31da4ac00f5addfee4cba6eec4147c1dc7",
"Created": "2023-12-04T13:18:06.462520271+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"464f97dad1b1cc61b33371e08c5e9fdaf4b6cc22b7ee4a393efffbc3566dded9": {
"Name": "redash-postgres-1",
"EndpointID": "20e5ccd51eee64b8eb03a97c31e27864c5910e4d536d7598ced80b620e79f6bb",
"MacAddress": "02:42:ac:12:00:04",
"IPv4Address": "172.18.0.4/16",
"IPv6Address": ""
},
"4d25c4c38c8bb5fbd4f8424f12c945b07a71088a2bb15ad2a0c54dc9b2954227": {
"Name": "redash-worker-1",
"EndpointID": "137fb29430f9abc7f8535ea7d57b036580c656267da8935b478c4596385d075d",
"MacAddress": "02:42:ac:12:00:06",
"IPv4Address": "172.18.0.6/16",
"IPv6Address": ""
},
"7d0cccd8f56db9c5674d65ed0c2da728bef9b22e9100d245617be5c091f7e55e": {
"Name": "redash-server-1",
"EndpointID": "4757013004d6c24e0a03d54abb1e5c3f9ac57ac8afc2f1c630f09d9dceacbb12",
"MacAddress": "02:42:ac:12:00:05",
"IPv4Address": "172.18.0.5/16",
"IPv6Address": ""
},
"e50c1fca6ed508f001200b3d370492f6f483a936f1c1c14449138f8f38c70e17": {
"Name": "redash-redis-1",
"EndpointID": "371c7aa28a10f0ad87ff4caa1e401642449103b34d13d0d0ae7d64acc842199c",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"ed217029feb3da964c0d064ec6dc4634c16035fa07463ff215db011b274609df": {
"Name": "redash-email-1",
"EndpointID": "25699a86bcc9ee597afbf489a9a15050487ea797d4714f7322bde3badbb1ba92",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"eeb9b3321513628a5ad2fbbbeda168f3a3e81fd6430649d4be596f08b54f0be0": {
"Name": "redash-scheduler-1",
"EndpointID": "e5f8e0c4dbfa6d680f173eea3016cddb391055cfc8552bace26c8bc096f6220a",
"MacAddress": "02:42:ac:12:00:07",
"IPv4Address": "172.18.0.7/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "redash",
"com.docker.compose.version": "2.20.2"
}
}
]
@joeyrensh Thanks for your reply. However, sorry, but I don't think we should discuss Docker's network settings here. This is the place to discuss Redash issues.
> 3、when I build the local image, I found that, I need to add "network = host" mode to the docker compose build part, otherwise APT-get can't be updated.
I think it's best to contact the administrator of your Linux Server hosting Docker.
thanks a lot for your help , I have resolve all the issues, can close this ticket
@joeyrensh Thanks for reporting the result. I'm happy to hear it!
Please keep this issue since I think Redash should log an error on "Test Connection" failure. I will fix this.
Thanks for your contribution! | 2023-12-09T06:33:39 |
getredash/redash | 6,918 | getredash__redash-6918 | [
"6917"
] | e2a39de7d1f6ac8ccea74aa1a81b8ebefed7c908 | diff --git a/redash/query_runner/elasticsearch.py b/redash/query_runner/elasticsearch.py
--- a/redash/query_runner/elasticsearch.py
+++ b/redash/query_runner/elasticsearch.py
@@ -129,6 +129,8 @@ def _get_query_mappings(self, url):
for index_name in mappings_data:
index_mappings = mappings_data[index_name]
for m in index_mappings.get("mappings", {}):
+ if not isinstance(index_mappings["mappings"][m], dict):
+ continue
if "properties" not in index_mappings["mappings"][m]:
continue
for property_name in index_mappings["mappings"][m]["properties"]:
| Boolean field in elasticsearch index mapping causes query to fail
### Issue Summary
This bug occurs when querying an Elasticsearch index whose mapping contains a boolean field. Here is a mapping example:
```json
{
"my_index":{
"mappings":{
"dynamic_templates":[
{
...
}
],
"date_detection":false,
"properties":{
...
}
}
}
}
```
The field that causes an error in this example is `date_detection`.
### Steps to Reproduce
1. Add index mapping that has boolean field as a value (for example date_detection).
2. Run any search query on index with mapping in question.
3. Get this error: `Error running query: argument of type 'bool' is not iterable `
### Technical details:
* Redash Version: v10.1.0 and preview version
* Browser/OS: Firefox, Ubuntu v22.04
* How did you install Redash: running setup.sh script
Here is also a log retrieved from docker compose logs:
```
adhoc_worker-1 | [2024-04-22 13:14:19,403][PID:119][INFO][rq.job.redash.tasks.queries.execution] job.func_name=redash.tasks.queries.execution.execute_query job.id=a2b1c322-fcb9-4923-ba0c-6b186013562e job=execute_query state=executing_query query_hash=12a4edf85528ecb9a65594f625d13ee3 type=elasticsearch ds_id=1 job_id=a2b1c322-fcb9-4923-ba0c-6b186013562e queue=queries query_id=1 username=*
adhoc_worker-1 | [2024-04-22 13:14:19,414][PID:119][WARNING][rq.job.redash.tasks.queries.execution] job.func_name=redash.tasks.queries.execution.execute_query job.id=a2b1c322-fcb9-4923-ba0c-6b186013562e Unexpected error while running query:
adhoc_worker-1 | Traceback (most recent call last):
adhoc_worker-1 | File "/app/redash/tasks/queries/execution.py", line 182, in run
adhoc_worker-1 | data, error = query_runner.run_query(annotated_query, self.user)
adhoc_worker-1 | File "/app/redash/query_runner/elasticsearch.py", line 449, in run_query
adhoc_worker-1 | mappings, error = self._get_query_mappings(mapping_url)
adhoc_worker-1 | File "/app/redash/query_runner/elasticsearch.py", line 132, in _get_query_mappings
adhoc_worker-1 | if "properties" not in index_mappings["mappings"][m]:
adhoc_worker-1 | TypeError: argument of type 'bool' is not iterable
adhoc_worker-1 | [2024-04-22 13:14:19,415][PID:119][INFO][rq.job.redash.tasks.queries.execution] job.func_name=redash.tasks.queries.execution.execute_query job.id=a2b1c322-fcb9-4923-ba0c-6b186013562e job=execute_query query_hash=12a4edf85528ecb9a65594f625d13ee3 ds_id=1 data_length=None error=[argument of type 'bool' is not iterable]
adhoc_worker-1 | [2024-04-22 13:14:19,418][PID:119][INFO][rq.worker] queries: Job OK (a2b1c322-fcb9-4923-ba0c-6b186013562e)
adhoc_worker-1 | [2024-04-22 13:14:19,418][PID:119][INFO][rq.worker] Result is kept for 43200 seconds
```
Here is where the error occurs:
https://github.com/getredash/redash/blob/6c68b489170270774a9cdec25d3bb8d3dc846c15/redash/query_runner/elasticsearch.py#L129-L143
### Possible solution
```python
mappings = {}
for index_name in mappings_data:
index_mappings = mappings_data[index_name]
for m in index_mappings.get("mappings", {}):
if not isinstance(index_mappings["mappings"][m], dict):
continue
if "properties" not in index_mappings["mappings"][m]:
continue
for property_name in index_mappings["mappings"][m]["properties"]:
property_data = index_mappings["mappings"][m]["properties"][property_name]
if property_name not in mappings:
property_type = property_data.get("type", None)
if property_type:
if property_type in ELASTICSEARCH_TYPES_MAPPING:
mappings[property_name] = ELASTICSEARCH_TYPES_MAPPING[property_type]
else:
mappings[property_name] = TYPE_STRING
```
| Hi @stankovic-marko
Could you please submit a PR? | 2024-04-22T17:42:49 |
|
google/flax | 107 | google__flax-107 | [
"101"
] | 15bcf6cc9a18af53af92eb340998544dea1bac4a | diff --git a/examples/vae/main.py b/examples/vae/main.py
--- a/examples/vae/main.py
+++ b/examples/vae/main.py
@@ -120,7 +120,9 @@ def loss_fn(model):
kld_loss = kl_divergence(mean, logvar)
loss = jnp.mean(bce_loss + kld_loss)
return loss, recon_x
- optimizer, _, _ = optimizer.optimize(loss_fn)
+ grad_fn = jax.value_and_grad(loss_fn, has_aux=True)
+ _, grad = grad_fn(optimizer.target)
+ optimizer = optimizer.apply_gradient(grad)
return optimizer
| VAE example uses deprecated `optimizer.optimize()`
| @makora9143 if you look at the console output when you run your example you'll see a warning. Can you please replace with `jax.grad()` or `jax.value_and_grad()` then `optimizer.apply_gradient()`?
@avital Thank you for your comment.
Unfortunately, I didn't find the warning in my console when I execute the VAE example:
```bash
03/23/20 22:01:54 $ python main.py
~/.pyenv/versions/miniconda3-latest/envs/jax/lib/python3.7/site-packages/jax/lib/xla_bridge.py:123: UserWarning: No GPU/TPU found, falling back to CPU.
warnings.warn('No GPU/TPU found, falling back to CPU.')
I0323 22:01:59.797530 4402519488 dataset_builder.py:193] Overwrite dataset info from restored data version.
I0323 22:01:59.799996 4402519488 dataset_builder.py:273] Reusing dataset mnist (~/tensorflow_datasets/mnist/1.0.0)
I0323 22:01:59.800137 4402519488 dataset_builder.py:434] Constructing tf.data.Dataset for split train, from ~/tensorflow_datasets/mnist/1.0.0
I0323 22:01:59.974323 4402519488 dataset_builder.py:193] Overwrite dataset info from restored data version.
I0323 22:01:59.975799 4402519488 dataset_builder.py:273] Reusing dataset mnist (~/tensorflow_datasets/mnist/1.0.0)
I0323 22:01:59.975924 4402519488 dataset_builder.py:434] Constructing tf.data.Dataset for split test, from ~/tensorflow_datasets/mnist/1.0.0
eval epoch: 1, loss: 121.4550, BCE: 98.3277, KLD: 23.1273
```
I use :
- `jax=0.1.62`
- `flax (pip upgrade at a few minutes ago)`
on macOS.
Which version outputs the deprecation warning?
By the way, I have confirmed that using `jax.value_and_grad()` and `optimizer.apply_gradient()` is no problem.
Do I need to create a new PR?
Thank you for your support!
Hmm, does the latest push to pip not have this change?
https://github.com/google/flax/blob/prerelease/flax/optim.py#L289
Yes, please file a new PR. Thanks for /your/ support! | 2020-03-24T05:01:17 |
|
google/flax | 177 | google__flax-177 | [
"175"
] | b24c2d0fa79d0db1a35b9cade171186dc957cbac | diff --git a/flax/nn/base.py b/flax/nn/base.py
--- a/flax/nn/base.py
+++ b/flax/nn/base.py
@@ -919,7 +919,7 @@ def truncate_at(self, module_path):
def __getattr__(self, name):
value = getattr(self.module, name)
- if issubclass(value, Module):
+ if inspect.isclass(value) and issubclass(value, Module):
def wrapper(*args, **kwargs):
return value.call(self.params, *args, **kwargs)
return wrapper
| diff --git a/tests/nn_test.py b/tests/nn_test.py
--- a/tests/nn_test.py
+++ b/tests/nn_test.py
@@ -102,6 +102,16 @@ def test_init_by_shape_module(self):
self.assertEqual(y2, jnp.array([2.]))
self.assertEqual(params, {'bias': jnp.array([1.])})
+ def test_model(self):
+ rng = random.PRNGKey(0)
+ x = jnp.array([1.])
+ _, params = DummyModule.init(rng, x)
+ model = nn.Model(DummyModule, params)
+ y = model(x)
+ self.assertEqual(y, jnp.array([2.]))
+ y2 = jax.jit(model)(x)
+ self.assertEqual(y2, jnp.array([2.]))
+
def test_shared_module(self):
rng = random.PRNGKey(0)
x = jnp.array([1.])
@@ -272,6 +282,11 @@ def apply(self, x):
MultiMethod.__qualname__ + '.l2')
x = jnp.array([1., 2.])
+
+ _, params = MultiMethod.init(random.PRNGKey(0), x)
+ model = nn.Model(MultiMethod, params)
+ self.assertEqual(model.l2(), 2.)
+
y, _ = MultiMethodModel.init(random.PRNGKey(0), x)
self.assertEqual(y, 2.)
| Error when JITting `Model.__call__`
eg
```python
import jax
from flax import nn
layer=nn.Dense.partial(features=1)
key=jax.random.PRNGKey(0)
x=jax.random.normal(key, (20, 2))
_,params=layer.init(key, x)
layer_m=nn.Model(layer, params)
jax.jit(layer_m)(x)
```
errors with
```
TypeError Traceback (most recent call last)
<ipython-input-2-2e4e0581e3f5> in <module>
6 _,params=layer.init(key, x[0,...])
7 layer_m=nn.Model(layer, params)
----> 8 jax.jit(layer_m)(x)
~/opt/anaconda3/lib/python3.7/site-packages/jax/api.py in f_jitted(*args, **kwargs)
148 flat_fun, out_tree = flatten_fun(f, in_tree)
149 out = xla.xla_call(flat_fun, *args_flat, device=device, backend=backend,
--> 150 name=flat_fun.__name__)
151 return tree_unflatten(out_tree(), out)
152
~/opt/anaconda3/lib/python3.7/site-packages/jax/linear_util.py in __name__(self)
121 @property
122 def __name__(self):
--> 123 return getattr(self.f, '__name__', '<unnamed wrapped function>')
124
125 def wrap(self, gen, gen_static_args, out_store) -> 'WrappedFun':
~/opt/anaconda3/lib/python3.7/site-packages/flax/nn/base.py in __getattr__(self, name)
897 def __getattr__(self, name):
898 value = getattr(self.module, name)
--> 899 if issubclass(value, Module):
900 def wrapper(*args, **kwargs):
901 return value.call(self.params, *args, **kwargs)
~/opt/anaconda3/lib/python3.7/abc.py in __subclasscheck__(cls, subclass)
141 def __subclasscheck__(cls, subclass):
142 """Override for issubclass(subclass, cls)."""
--> 143 return _abc_subclasscheck(cls, subclass)
144
145 def _dump_registry(cls, file=None):
TypeError: issubclass() arg 1 must be a class
```
| Sorry, it took me a bit to figure out what was going on.
A Model should be pmap'able - what's happening here is a bit of a subtle bug:
First, a short-term "fix" is just wrapping it in a lambda passthrough:
```python
import jax
from flax import nn
layer=nn.Dense.partial(features=1)
key=jax.random.PRNGKey(0)
x=jax.random.normal(key, (4, 20, 2))
_,params=layer.init(key, x[0,...])
layer_m=nn.Model(layer, params)
jax.pmap(lambda z: layer_m(z))(x)
```
Now, what's going on:
- in a great change https://github.com/google/jax/pull/2073 made ~2 months ago to improve XLA call stack metadata JAX tries to get the `__name__` attribute from the pmap'd function, which in this case is our callable Model instance.
- the problem is that in another refactoring of the base flax code a month ago https://github.com/google/flax/commit/baf43e73cb0088a607c4da26be981a83bfaf6a52 we override `__getattr__` on Model to passthrough and grab the requested attr from Module, but inside that we are trying to eval `issubclass(fetched_attr, flax.nn.Module)` and `issubclass(<string object>, flax.nn.Module)` throws an error in python since it's nonsense.
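A tiny repro of that failure mode and the guard used in the fix (toy stand-in class, not the real flax internals):
```python
import inspect

class Module:  # stand-in for flax.nn.Module
    pass

attr = "some_string_attribute"  # e.g. a __name__ value fetched via getattr

try:
    issubclass(attr, Module)  # the old unguarded check
except TypeError as e:
    print(e)  # issubclass() arg 1 must be a class

# The patch guards the check first:
print(inspect.isclass(attr) and issubclass(attr, Module))  # -> False
```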
We almost always use a Model inside an optimizer or indirectly in another function, and I think we must not have a unit test of a direct jit/pmap on a Model - my apologies for letting this slip through, we'll try to get a fix in asap. | 2020-04-07T08:16:10 |
google/flax | 217 | google__flax-217 | [
"212"
] | fe94c075d3c996dc1e1faa1e8682a63c278f444a | diff --git a/flax/optim/base.py b/flax/optim/base.py
--- a/flax/optim/base.py
+++ b/flax/optim/base.py
@@ -134,7 +134,7 @@ def apply_gradient(self, hyper_params, params, state, grads):
out = [self.apply_param_gradient(step, hyper_params, param, state, grad)
for param, state, grad in zip(params_flat, states_flat, grads_flat)]
- new_params_flat, new_states_flat = list(zip(*out))
+ new_params_flat, new_states_flat = list(zip(*out)) if out else ((), ())
new_params = jax.tree_unflatten(treedef, new_params_flat)
new_param_states = jax.tree_unflatten(treedef, new_states_flat)
new_state = OptimizerState(step + 1, new_param_states)
| diff --git a/tests/optim_test.py b/tests/optim_test.py
--- a/tests/optim_test.py
+++ b/tests/optim_test.py
@@ -79,6 +79,14 @@ def test_optimizer_with_focus(self):
self.assertEqual(new_optimizer.state, expected_state)
self.assertEqual(new_optimizer.target, expected_params)
+ def test_empty_optimizer(self):
+ params = {}
+ optimizer_def = optim.Momentum(learning_rate=0.1)
+ optimizer = optimizer_def.create(params)
+ new_optimizer = optimizer.apply_gradient({})
+ expected_state = optim.OptimizerState(1, {})
+ self.assertEqual(new_optimizer.state, expected_state)
+
class ModelParamTraversalTest(absltest.TestCase):
| apply_gradient with no parameters gives ValueError
This issue is admittedly a corner case, but one we've run into. If we consider the following `flax.nn.Module`:
```python
class Identity(flax.nn.Module):
def apply(self, x):
return x
```
We won't be able to call `apply_gradient` since the output from [this line](https://github.com/google/flax/blob/master/flax/optim/base.py#L134) will be an empty list.
This should probably (?) be addressed since it's exceptional behavior that may surprise, but could see arguments for different ways of resolving. One simple answer is to just no-op, but there might be some higher-level concerns I'm not thinking about which say we don't even want parameterless modules (in which case, raise on construction).
Anyway, we've resolved this for now by just adding a dummy parameter. Here's the full minimal example and the resulting ValueError:
```python
import flax
import jax
import jax.numpy as jnp
class Identity(flax.nn.Module):
def apply(self, x):
return x
model_def = Identity.partial()
_, params = model_def.init_by_shape(jax.random.PRNGKey(0), [(1,)])
model = flax.nn.Model(model_def, params)
def loss_fn(model, x, y):
y_hat = model(x)
return jnp.square(y - y_hat).mean(), y_hat
optim_def = flax.optim.Adam(learning_rate=1.0)
optimizer = optim_def.create(model)
(loss, y_hat), grad = jax.value_and_grad(loss_fn, has_aux=True)(optimizer.target, 1.0, 2.0)
optimizer.apply_gradient(grad)
```
```python
~/src/flax/flax/optim/base.py in apply_gradient(self, hyper_params, params, state, grads)
135 for param, state, grad in zip(params_flat, states_flat, grads_flat)]
136
--> 137 new_params_flat, new_states_flat = list(zip(*out))
138 new_params = jax.tree_unflatten(treedef, new_params_flat)
139 new_param_states = jax.tree_unflatten(treedef, new_states_flat)
ValueError: not enough values to unpack (expected 2, got 0)
```
| 2020-04-20T08:22:32 |
|
google/flax | 236 | google__flax-236 | [
"232"
] | 95a773e36f43c254d739caeca449ca745562fe9c | diff --git a/examples/lm1b/input_pipeline.py b/examples/lm1b/input_pipeline.py
--- a/examples/lm1b/input_pipeline.py
+++ b/examples/lm1b/input_pipeline.py
@@ -129,7 +129,8 @@ def bin_and_batch(dataset,
if not training:
max_eval_length = max_eval_length or target_bucket_length * 32
bucket_boundaries[-1] = max_eval_length
- bucket_batch_sizes[-1] = target_batch_size // max_eval_length
+ bucket_batch_sizes[-1] = (target_batch_size //
+ (max_eval_length // target_bucket_length))
# We will pad to boundaries which pads to bucket_boundary-1: add 1 here.
bucket_boundaries = [b + 1 for b in bucket_boundaries]
# Make batch sizes divisible by n_devices.
| Clarification regarding LM1B input pipeline
Hi, I am looking for two clarifications regarding the [input_pipeline](https://github.com/google/flax/blob/master/examples/lm1b/input_pipeline.py) in Flax LM1B example.
1. I think there might be a bug at
https://github.com/google/flax/blob/master/examples/lm1b/input_pipeline.py#L132.
```
max_eval_length = max_eval_length or target_bucket_length * 32
bucket_boundaries[-1] = max_eval_length
bucket_batch_sizes[-1] = target_batch_size // max_eval_length
```
The last statement might result in 0 batch size for the last bucket.
If `max_eval_length == target_bucket_length * 32`, the `bucket_batch_size[-1]` should be `target_batch_size // 32` instead of `target_batch_size // (32 * target_bucket_length)` (which is what the current implementation does). In general, `max_eval_length >> target_batch_size`, hence this might result in a batch size of 0 for the last bucket (see the quick arithmetic check after this list).
2. The documentation [here](https://github.com/google/flax/blob/master/examples/lm1b/input_pipeline.py#L241) mentions that
dynamic batching is currently not compatible with multiple hosts, although the bucketing function handles the case when `n_devices > 1`. Currently, if I understand the control flow correctly, the binning (and batching) happens first, then followed by distribution across the hosts through pmap. If this is the case and the effective batch size is ensured to be a multiple of `n_devices`, why should dynamic batching be any different from the static batching control flow?
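For concreteness, a quick arithmetic check of the first point with assumed illustrative values (not taken from the actual lm1b config):
```python
# Assumed illustrative values only.
target_batch_size = 256
target_bucket_length = 32
max_eval_length = target_bucket_length * 32  # 1024

buggy = target_batch_size // max_eval_length                            # 0
fixed = target_batch_size // (max_eval_length // target_bucket_length)  # 8

print(buggy, fixed)  # 0 8
# The fixed value keeps roughly target_batch_size * target_bucket_length
# (= 8192) tokens per batch for the long eval bucket.
print(fixed * max_eval_length)  # 8192
```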
| 1. This particular function is more than a bit convoluted! My apologies for the unclear logic, we'll probably just remove this special case altogether - we're just trying to deal w. the more general case of evaluating on eval-set examples much longer than those in the training set, which I've needed to do occasionally but haven't used in ages!
This does look incorrect, I'll make a fix, I believe the correct expression to attempt to maintain the same total per-batch token count is:
`bucket_batch_sizes[-1] = target_batch_size // (max_eval_length // target_bucket_length)`
However, this doesn't cause much trouble, this never divides by zero, since that's forced to be 1 or more at: https://github.com/google/flax/blob/master/examples/lm1b/input_pipeline.py#L137 for long eval examples this usually ends up having a batch size of 1 anyway, which is why we didn't notice this earlier. Thanks for pointing it out!
2. multiple __hosts__ are not the same thing as multiple __devices__ : this function works fine for multiple devices, but in the case of multiple hosts each with their own set of devices, we would need to synchronize the "bucket" being sharded and fed to the devices on each host - in JAX the multihost programming model requires each host to feed its own devices with exactly the same input shape at each synchronized pmap step. | 2020-05-04T11:29:23 |
|
google/flax | 270 | google__flax-270 | [
"269"
] | aff10f032e892e28a1acf4dd4ee9dcc6cd39a606 | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -30,7 +30,7 @@
"numpy>=1.12",
"jax>=0.1.59",
"matplotlib", # only needed for tensorboard export
- "dataclasses", # will only install on py3.6
+ "dataclasses;python_version<'3.7'", # will only install on py3.6
"msgpack",
]
| `typing._ClassVar` cannot be accessed in the iPython shell – dataclasses package seems to mess up on Python 3.7
### Problem you have encountered:
I just installed flax and tried to import it from the IPython shell, but it raises an `AttributeError`.
```
In [1]: import flax
*snip*
~/.virtualenvs/flax2/lib/python3.7/site-packages/dataclasses.py in _is_classvar(a_type, typing)
548 # This test uses a typing internal class, but it's the best way to
549 # test if this is a ClassVar.
--> 550 return type(a_type) is typing._ClassVar
551
552
AttributeError: module 'typing' has no attribute '_ClassVar'
```
This does not happen in the normal interpreter, where everything goes fine.
### What you expected to happen:
I expected the import to work the same in IPython and the normal Python shell.
### Logs, error messages, etc:
Full traceback in this gist: https://gist.github.com/bayerj/96f096c7fb09a7c9b758dabdbca32671
### Steps to reproduce:
On Mac OS X with Python 3.7.6 (not Anaconda), with virtualenvwrapper installed.
```
❯❯❯ mkvirtualenv flax2
❯❯❯ pip install jaxlib
*snip*
❯❯❯ pip install flax
*snip*
❯❯❯ ipython
*snip*
In [1]: import flax
```
### Workaround
The problem seems to be in the `dataclasses` package from PyPI, not Python's own module. If I uninstall it...
```
❯❯❯ pip uninstall dataclasses
Found existing installation: dataclasses 0.6
Uninstalling dataclasses-0.6:
Would remove:
/Users/bayerj/.virtualenvs/debug2/lib/python3.7/site-packages/dataclasses-0.6.dist-info/*
/Users/bayerj/.virtualenvs/debug2/lib/python3.7/site-packages/dataclasses.py
Proceed (y/n)? y
Successfully uninstalled dataclasses-0.6
❯❯❯ ipython
/usr/local/lib/python3.7/site-packages/IPython/core/interactiveshell.py:931: UserWarning: Attempting to work in a virtualenv. If you encounter problems, please install IPython inside the virtualenv.
warn("Attempting to work in a virtualenv. If you encounter problems, please "
Python 3.7.6 (default, Dec 30 2019, 19:38:28)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.9.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import flax
```
... this goes fine.
| This is my fault, I thought that the `requires_python` directive in the backported dataclasses pypi package would prevent installation on >=3.7, but this is clearly not the case. I believe the correct approach is using the pep508 `python_version` environment marker in our setup.py file. | 2020-05-18T15:32:31 |
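For illustration, a minimal `setup.py` using the PEP 508 environment marker mentioned above; the package name and version here are placeholders, not the project's metadata.
```python
from setuptools import setup

setup(
    name="example-package",      # placeholder
    version="0.0.1",             # placeholder
    install_requires=[
        # pip only installs the backported dataclasses on interpreters that need it:
        "dataclasses; python_version < '3.7'",
    ],
)
```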
|
google/flax | 362 | google__flax-362 | [
"356"
] | 6b58fc4e4fdabb523c7aebd97d9d7567a457552d | diff --git a/flax/nn/pooling.py b/flax/nn/pooling.py
--- a/flax/nn/pooling.py
+++ b/flax/nn/pooling.py
@@ -44,6 +44,14 @@ def pool(inputs, init, reduce_fn, window_shape, strides, padding):
strides = strides or (1,) * len(window_shape)
strides = (1,) + strides + (1,)
dims = (1,) + window_shape + (1,)
+ if not isinstance(padding, str):
+ padding = tuple(map(tuple, padding))
+ assert(len(padding) == len(window_shape)), (
+ f"padding {padding} must specify pads for same number of dims as "
+ f"window_shape {window_shape}")
+ assert(all([len(x) == 2 for x in padding])), (
+ f"each entry in padding {padding} must be length 2")
+ padding = ((0,0),) + padding + ((0,0),)
return lax.reduce_window(inputs, init, reduce_fn, dims, strides, padding)
| diff --git a/tests/nn_test.py b/tests/nn_test.py
--- a/tests/nn_test.py
+++ b/tests/nn_test.py
@@ -545,6 +545,24 @@ def test_max_pool(self):
]).reshape((1, 3, 3, 1))
onp.testing.assert_allclose(y_grad, expected_grad)
+ def test_max_pool_explicit_pads(self):
+ x = jnp.arange(9).reshape((1, 3, 3, 1)).astype(jnp.float32)
+ pool = lambda x: nn.max_pool(x, (2, 2), padding=((1,1),(1,1)))
+ expected_y = jnp.array([
+ [0.,1.,2.,2.],
+ [3.,4.,5.,5.],
+ [6.,7.,8.,8.],
+ [6.,7.,8.,8.],
+ ]).reshape((1, 4, 4, 1))
+ y = pool(x)
+ onp.testing.assert_allclose(y, expected_y)
+ y_grad = jax.grad(lambda x: pool(x).sum())(x)
+ expected_grad = jnp.array([
+ [1., 1., 2.],
+ [1., 1., 2.],
+ [2., 2., 4.],
+ ]).reshape((1, 3, 3, 1))
+ onp.testing.assert_allclose(y_grad, expected_grad)
class NormalizationTest(absltest.TestCase):
| Pooling: passing "sequence of `n` `(low, high)` integer pairs" resulting in TypeError
Trying to pass a tuple or list of tuples to a pool operation's padding parameter gives out the following errors:
`TypeError: Unknown padding type: (1, 1).`
`TypeError : unhashable type: 'list' `
Sample code for reproducing the bug:
```python3
from flax import nn
from jax import random
class FlaxModel(nn.Module):
def apply(self, x):
x = nn.max_pool(x, (3, 3), strides=(2, 2), padding=[(1, 1), (1, 1)])
return x
rng = random.PRNGKey(0)
model, _ = FlaxModel.init_by_shape(rng, [(1, 100, 100, 1)])
```
 | Indeed, it looks like our code doesn't support padding that's a sequence of pairs. @hawkinsp has said that the version of JAX on HEAD added support for this, so we should add a test and plumb it through correctly. (Or, in the meantime if that's impossible, support this by manually padding before calling into `lax.reduce_window`.)
Yes, JAX at head supports a sequence of `(low, high)` padding pairs.
Flax probably still needs to do some work to add batch and feature dimensions to what the user provides. JAX and XLA don't have opinions about which dimensions are batch and which are feature, but Flax is documented to only accept padding for the spatial dimensions. | 2020-07-18T13:18:34 |
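A standalone sketch of the normalization idea from the diff above, assuming a JAX version whose `lax.reduce_window` accepts explicit `(low, high)` pad pairs; the helper name and shapes are made up for illustration.
```python
import jax.numpy as jnp
from jax import lax

def max_pool_2d(x, window=(2, 2), strides=(1, 1), padding=((1, 1), (1, 1))):
    """x: (batch, height, width, features); padding covers spatial dims only."""
    if not isinstance(padding, str):
        # pad out the spatial (low, high) pairs with (0, 0) for batch/feature dims
        padding = ((0, 0),) + tuple(map(tuple, padding)) + ((0, 0),)
    dims = (1,) + window + (1,)
    strides = (1,) + strides + (1,)
    return lax.reduce_window(x, -jnp.inf, lax.max, dims, strides, padding)

y = max_pool_2d(jnp.arange(9.0).reshape((1, 3, 3, 1)))
print(y.shape)   # (1, 4, 4, 1)
```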
google/flax | 365 | google__flax-365 | [
"364"
] | a5dfa2900fd7d014c2f48a8f69dae5ce291a0d8a | diff --git a/flax/optim/weight_norm.py b/flax/optim/weight_norm.py
--- a/flax/optim/weight_norm.py
+++ b/flax/optim/weight_norm.py
@@ -147,7 +147,7 @@ def _split_grad(self, param, state, grad, decay):
scale_grad = jnp.sum(
grad * direction, axis=red_dims, keepdims=True)
direction_grad = state.mult * (grad - scale_grad * direction)
- if decay is not 0:
+ if decay != 0:
direction_grad = direction_grad + decay * direction
direction_info = direction, state.direction_state, direction_grad
scale_info = scale, state.scale_state, scale_grad
| Syntax warning due to comparison of literals using is in Python 3.8
### Problem you have encountered:
Python 3.8 emits a syntax warning for comparing against literals with `is`; the comparison should use `!=` instead.
### Steps to reproduce:
```
find . -iname '*.py' | grep -v example | grep -v doc | xargs -P4 -I{} python3.8 -Wall -m py_compile {}
./flax/optim/weight_norm.py:150: SyntaxWarning: "is not" with a literal. Did you mean "!="?
if decay is not 0:
```
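A tiny illustration (not from the repository) of why the patch replaces `is not` with `!=` here:
```python
decay = 0.0
print(decay != 0)       # False: the float zero is treated as "no decay", as intended
print(decay is not 0)   # True: identity comparison against the int literal 0,
                        # and a SyntaxWarning on Python 3.8+
```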
| 2020-07-18T18:37:39 |
||
google/flax | 541 | google__flax-541 | [
"539"
] | ae2e446328d7eaeee56007ca1ede735508812668 | diff --git a/examples/ppo/agent.py b/examples/ppo/agent.py
--- a/examples/ppo/agent.py
+++ b/examples/ppo/agent.py
@@ -43,6 +43,7 @@ def __init__(self, game: str):
parent_conn, child_conn = multiprocessing.Pipe()
self.proc = multiprocessing.Process(
target=rcv_action_send_exp, args=(child_conn, game))
+ self.proc.daemon = True
self.conn = parent_conn
self.proc.start()
diff --git a/examples/ppo/ppo_main.py b/examples/ppo/ppo_main.py
--- a/examples/ppo/ppo_main.py
+++ b/examples/ppo/ppo_main.py
@@ -19,6 +19,8 @@
import jax.random
from ml_collections import config_flags
+import tensorflow as tf
+
import ppo_lib
import models
import env_utils
@@ -34,6 +36,9 @@
'File path to the default configuration file.')
def main(argv):
+ # Make sure tf does not allocate gpu memory.
+ tf.config.experimental.set_visible_devices([], 'GPU')
+
config = FLAGS.config
game = config.game + 'NoFrameskip-v4'
num_actions = env_utils.get_num_actions(game)
| PPO example does not terminate properly
### Configuration
Running the PPO example for a small number of frames in order to reproduce as fast as possible, on a cloud VM with a V100 GPU. Configuration: Python 3.7, flax 0.2.2, jax 0.2.1, jaxlib 0.1.55.
Command run:
`python ppo_main.py --config.game=Qbert --config.total_frames=4000`
### Problem you have encountered:
The program does not exit. One can `print('Done')` after `ppo_lib.train` in `ppo_main`, but there is an open thread and the program can't exit (even after adding `raise SystemExit`).
### Extra comments
Added an extra line in `main`, `tf.config.experimental.set_visible_devices([], 'GPU')`, in order for the program to run properly with `tensorflow-gpu`; this is common in other `flax/examples`.
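A small self-contained sketch (not the PPO example itself) of the two fixes in the diff above: marking worker processes as daemons so they cannot keep the parent alive, and hiding GPUs from TensorFlow so it does not grab device memory.
```python
import multiprocessing
import time

def worker(conn):
    while True:          # a worker that never returns, like a game-environment loop
        time.sleep(1)

if __name__ == '__main__':
    # If TensorFlow is installed, keep it off the GPU (a no-op on CPU-only setups):
    try:
        import tensorflow as tf
        tf.config.experimental.set_visible_devices([], 'GPU')
    except ImportError:
        pass

    parent_conn, child_conn = multiprocessing.Pipe()
    proc = multiprocessing.Process(target=worker, args=(child_conn,))
    proc.daemon = True   # without this, the never-ending worker blocks interpreter exit
    proc.start()
    print('main is done')  # the process now exits despite the still-running worker
```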
| 2020-10-19T09:44:21 |
||
google/flax | 551 | google__flax-551 | [
"547"
] | 7cb7c33e0712908e979864d525f00f5f15b164fe | diff --git a/flax/linen/linear.py b/flax/linen/linear.py
--- a/flax/linen/linear.py
+++ b/flax/linen/linear.py
@@ -195,7 +195,9 @@ class Conv(Module):
Args:
features: number of convolution filters.
- kernel_size: shape of the convolutional kernel.
+ kernel_size: shape of the convolutional kernel. For 1D convolution,
+ the kernel size can be passed as an integer. For all other cases, it must
+ be a sequence of integers.
strides: a sequence of `n` integers, representing the inter-window
strides.
padding: either the string `'SAME'`, the string `'VALID'`, or a sequence
@@ -219,7 +221,7 @@ class Conv(Module):
bias_init: initializer for the bias.
"""
features: int
- kernel_size: Sequence[int]
+ kernel_size: Union[int, Sequence[int]]
strides: Optional[Sequence[int]] = None
padding: Union[str, Sequence[Tuple[int, int]]] = 'SAME'
input_dilation: Optional[Sequence[int]] = None
@@ -244,8 +246,13 @@ def __call__(self, inputs: Array) -> Array:
inputs = jnp.asarray(inputs, self.dtype)
+ if isinstance(self.kernel_size, int):
+ kernel_size = (self.kernel_size,)
+ else:
+ kernel_size = self.kernel_size
+
is_single_input = False
- if inputs.ndim == len(self.kernel_size) + 1:
+ if inputs.ndim == len(kernel_size) + 1:
is_single_input = True
inputs = jnp.expand_dims(inputs, axis=0)
@@ -254,7 +261,7 @@ def __call__(self, inputs: Array) -> Array:
in_features = inputs.shape[-1]
assert in_features % self.feature_group_count == 0
- kernel_shape = self.kernel_size + (
+ kernel_shape = kernel_size + (
in_features // self.feature_group_count, self.features)
kernel = self.param('kernel', self.kernel_init, kernel_shape)
kernel = jnp.asarray(kernel, self.dtype)
@@ -285,7 +292,9 @@ class ConvTranspose(Module):
Args:
features: number of convolution filters.
- kernel_size: shape of the convolutional kernel.
+ kernel_size: shape of the convolutional kernel. For 1D convolution,
+ the kernel size can be passed as an integer. For all other cases, it must
+ be a sequence of integers.
strides: a sequence of `n` integers, representing the inter-window
strides.
padding: either the string `'SAME'`, the string `'VALID'`, or a sequence
@@ -303,7 +312,7 @@ class ConvTranspose(Module):
bias_init: initializer for the bias.
"""
features: int
- kernel_size: Sequence[int]
+ kernel_size: Union[int, Sequence[int]]
strides: Optional[Sequence[int]] = None
padding: Union[str, Sequence[Tuple[int, int]]] = 'SAME'
kernel_dilation: Optional[Sequence[int]] = None
@@ -325,15 +334,21 @@ def __call__(self, inputs: Array) -> Array:
The convolved data.
"""
inputs = jnp.asarray(inputs, self.dtype)
+
+ if isinstance(self.kernel_size, int):
+ kernel_size = (self.kernel_size,)
+ else:
+ kernel_size = self.kernel_size
+
is_single_input = False
- if inputs.ndim == len(self.kernel_size) + 1:
+ if inputs.ndim == len(kernel_size) + 1:
is_single_input = True
inputs = jnp.expand_dims(inputs, axis=0)
strides = self.strides or (1,) * (inputs.ndim - 2)
in_features = inputs.shape[-1]
- kernel_shape = self.kernel_size + (in_features, self.features)
+ kernel_shape = kernel_size + (in_features, self.features)
kernel = self.param('kernel', self.kernel_init, kernel_shape)
kernel = jnp.asarray(kernel, self.dtype)
diff --git a/flax/nn/linear.py b/flax/nn/linear.py
--- a/flax/nn/linear.py
+++ b/flax/nn/linear.py
@@ -192,7 +192,9 @@ def apply(self,
Args:
inputs: input data with dimensions (batch, spatial_dims..., features).
features: number of convolution filters.
- kernel_size: shape of the convolutional kernel.
+ kernel_size: shape of the convolutional kernel. For 1D convolution,
+ the kernel size can be passed as an integer. For all other cases, it must
+ be a sequence of integers.
strides: a sequence of `n` integers, representing the inter-window
strides.
padding: either the string `'SAME'`, the string `'VALID'`, or a sequence
@@ -219,12 +221,14 @@ def apply(self,
"""
inputs = jnp.asarray(inputs, dtype)
+ if isinstance(kernel_size, int):
+ kernel_size = (kernel_size,)
is_single_input = False
if inputs.ndim == len(kernel_size) + 1:
is_single_input = True
inputs = jnp.expand_dims(inputs, axis=0)
-
+
if strides is None:
strides = (1,) * (inputs.ndim - 2)
@@ -276,7 +280,9 @@ def apply(self,
Args:
inputs: input data with dimensions (batch, spatial_dims..., features).
features: number of convolution filters.
- kernel_size: shape of the convolutional kernel.
+ kernel_size: shape of the convolutional kernel. For 1D convolution,
+ the kernel size can be passed as an integer. For all other cases, it must
+ be a sequence of integers.
strides: a sequence of `n` integers, representing the inter-window
strides.
padding: either the string `'SAME'`, the string `'VALID'`, or a sequence
@@ -296,12 +302,14 @@ def apply(self,
The convolved data.
"""
inputs = jnp.asarray(inputs, dtype)
+ if isinstance(kernel_size, int):
+ kernel_size = (kernel_size,)
is_single_input = False
if inputs.ndim == len(kernel_size) + 1:
is_single_input = True
inputs = jnp.expand_dims(inputs, axis=0)
-
+
strides = strides or (1,) * (inputs.ndim - 2)
in_features = inputs.shape[-1]
| diff --git a/tests/linen/linen_linear_test.py b/tests/linen/linen_linear_test.py
--- a/tests/linen/linen_linear_test.py
+++ b/tests/linen/linen_linear_test.py
@@ -163,12 +163,13 @@ def test_dense_general_vs_numpy(self, axis, batch_dims, einsum_expr):
target = np.einsum(einsum_expr, x, initial_params['params']['kernel']) + 1.
np.testing.assert_allclose(y, target, atol=1e-6)
- def test_conv(self):
+ @parameterized.parameters([((3,),), (3,)])
+ def test_conv(self, kernel_size):
rng = dict(params=random.PRNGKey(0))
x = jnp.ones((1, 8, 3))
conv_module = nn.Conv(
features=4,
- kernel_size=(3,),
+ kernel_size=kernel_size,
padding='VALID',
kernel_init=initializers.ones,
bias_init=initializers.ones,
@@ -177,12 +178,13 @@ def test_conv(self):
self.assertEqual(initial_params['params']['kernel'].shape, (3, 3, 4))
np.testing.assert_allclose(y, np.full((1, 6, 4), 10.))
- def test_single_input_conv(self):
+ @parameterized.parameters([((3,),), (3,)])
+ def test_single_input_conv(self, kernel_size):
rng = dict(params=random.PRNGKey(0))
x = jnp.ones((8, 3))
conv_module = nn.Conv(
features=4,
- kernel_size=(3,),
+ kernel_size=kernel_size,
padding='VALID',
kernel_init=initializers.ones,
bias_init=initializers.ones,
@@ -191,12 +193,13 @@ def test_single_input_conv(self):
self.assertEqual(initial_params['params']['kernel'].shape, (3, 3, 4))
np.testing.assert_allclose(y, np.full((6, 4), 10.))
- def test_group_conv(self):
+ @parameterized.parameters([((3,),), (3,)])
+ def test_group_conv(self, kernel_size):
rng = dict(params=random.PRNGKey(0))
x = jnp.ones((1, 8, 4))
conv_module = nn.Conv(
features=4,
- kernel_size=(3,),
+ kernel_size=kernel_size,
feature_group_count=2,
padding='VALID',
kernel_init=initializers.ones,
@@ -206,12 +209,13 @@ def test_group_conv(self):
self.assertEqual(initial_params['params']['kernel'].shape, (3, 2, 4))
np.testing.assert_allclose(y, np.full((1, 6, 4), 7.))
- def test_conv_transpose(self):
+ @parameterized.parameters([((3,),), (3,)])
+ def test_conv_transpose(self, kernel_size):
rng = dict(params=random.PRNGKey(0))
x = jnp.ones((1, 8, 3))
conv_transpose_module = nn.ConvTranspose(
features=4,
- kernel_size=(3,),
+ kernel_size=kernel_size,
padding='VALID',
kernel_init=initializers.ones,
bias_init=initializers.ones,
@@ -230,12 +234,13 @@ def test_conv_transpose(self):
[ 4., 4., 4., 4.]]])
np.testing.assert_allclose(y, correct_ans)
- def test_single_input_conv_transpose(self):
+ @parameterized.parameters([((3,),), (3,)])
+ def test_single_input_conv_transpose(self, kernel_size):
rng = dict(params=random.PRNGKey(0))
x = jnp.ones((8, 3))
conv_transpose_module = nn.ConvTranspose(
features=4,
- kernel_size=(3,),
+ kernel_size=kernel_size,
padding='VALID',
kernel_init=initializers.ones,
bias_init=initializers.ones,
diff --git a/tests/nn_linear_test.py b/tests/nn_linear_test.py
--- a/tests/nn_linear_test.py
+++ b/tests/nn_linear_test.py
@@ -162,12 +162,13 @@ def test_dense_general_vs_numpy(self, axis, batch_dims, einsum_expr):
target = onp.einsum(einsum_expr, x, dg_module.params['kernel']) + 1.
onp.testing.assert_allclose(y, target, atol=1e-6)
- def test_conv(self):
+ @parameterized.parameters([((3,),), (3,)])
+ def test_conv(self, kernel_size):
rng = random.PRNGKey(0)
x = jnp.ones((1, 8, 3))
conv_module = nn.Conv.partial(
features=4,
- kernel_size=(3,),
+ kernel_size=kernel_size,
padding='VALID',
kernel_init=initializers.ones,
bias_init=initializers.ones,
@@ -176,13 +177,14 @@ def test_conv(self):
model = nn.Model(conv_module, initial_params)
self.assertEqual(model.params['kernel'].shape, (3, 3, 4))
onp.testing.assert_allclose(y, onp.full((1, 6, 4), 10.))
-
- def test_single_input_conv(self):
+
+ @parameterized.parameters([((3,),), (3,)])
+ def test_single_input_conv(self, kernel_size):
rng = random.PRNGKey(0)
x = jnp.ones((8, 3))
conv_module = nn.Conv.partial(
features=4,
- kernel_size=(3,),
+ kernel_size=kernel_size,
padding='VALID',
kernel_init=initializers.ones,
bias_init=initializers.ones,
@@ -192,12 +194,13 @@ def test_single_input_conv(self):
self.assertEqual(model.params['kernel'].shape, (3, 3, 4))
onp.testing.assert_allclose(y, onp.full((6, 4), 10.))
- def test_group_conv(self):
+ @parameterized.parameters([((3,),), (3,)])
+ def test_group_conv(self, kernel_size):
rng = random.PRNGKey(0)
x = jnp.ones((1, 8, 4))
conv_module = nn.Conv.partial(
features=4,
- kernel_size=(3,),
+ kernel_size=kernel_size,
feature_group_count=2,
padding='VALID',
kernel_init=initializers.ones,
@@ -208,12 +211,13 @@ def test_group_conv(self):
self.assertEqual(model.params['kernel'].shape, (3, 2, 4))
onp.testing.assert_allclose(y, onp.full((1, 6, 4), 7.))
- def test_conv_transpose(self):
+ @parameterized.parameters([((3,),), (3,)])
+ def test_conv_transpose(self, kernel_size):
rng = random.PRNGKey(0)
x = jnp.ones((1, 8, 3))
conv_transpose_module = nn.ConvTranspose.partial(
features=4,
- kernel_size=(3,),
+ kernel_size=kernel_size,
padding='VALID',
kernel_init=initializers.ones,
bias_init=initializers.ones,
@@ -232,13 +236,14 @@ def test_conv_transpose(self):
[ 7., 7., 7., 7.],
[ 4., 4., 4., 4.]]])
onp.testing.assert_allclose(y, correct_ans)
-
- def test_single_input_conv_transpose(self):
+
+ @parameterized.parameters([((3,),), (3,)])
+ def test_single_input_conv_transpose(self, kernel_size):
rng = random.PRNGKey(0)
x = jnp.ones((8, 3))
conv_transpose_module = nn.ConvTranspose.partial(
features=4,
- kernel_size=(3,),
+ kernel_size=kernel_size,
padding='VALID',
kernel_init=initializers.ones,
bias_init=initializers.ones,
| Helpful error when kernel_size is not an array in Conv
### Problem you have encountered:
If you try to init a Conv module by setting `kernel_size` to an int, you get an unhelpful error message.
### What you expected to happen:
Helpful error message that explains I should set `kernel_size` to an array.
### Logs, error messages, etc:
`TypeError: object of type 'int' has no len()`
### Steps to reproduce:
```python
from flax import nn
from jax import numpy as jnp, random
class CNN(nn.Module):
def apply(self, x):
x = nn.Conv(x, features=32, kernel_size=3)
x = nn.relu(x)
return x
cnn = CNN.init(random.PRNGKey(0), jnp.ones((1, 28, 28, 1)))
```
| Good point! We should improve our error message here. (Or maybe we should allow simply passing in a single int? What do other frameworks do here?)
I think both [PyTorch](https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html) and [Tensorflow](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) allow passing a single int which means the same value will be used for all dimensions.
Sure, then by all means let's also do that. We'll take a pull request for this (with tests) if anyone is interested. | 2020-10-22T17:01:43 |
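A standalone sketch of the normalization the patch applies inside `Conv`; the helper below is illustrative and not part of flax's API. As in the patch, an integer is interpreted as a 1D kernel shape.
```python
from typing import Sequence, Tuple, Union

def normalize_kernel_size(kernel_size: Union[int, Sequence[int]]) -> Tuple[int, ...]:
    # Accept either an int (1D convolution) or a sequence of ints.
    if isinstance(kernel_size, int):
        return (kernel_size,)
    return tuple(kernel_size)

assert normalize_kernel_size(3) == (3,)
assert normalize_kernel_size((3, 3)) == (3, 3)
```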
google/flax | 560 | google__flax-560 | [
"511"
] | b5a6aebebe2dec15217a8cc5967b8e5fdc6f4410 | diff --git a/flax/core/lift.py b/flax/core/lift.py
--- a/flax/core/lift.py
+++ b/flax/core/lift.py
@@ -70,7 +70,8 @@ def _dup_scopes(orig_scopes, scopes, paths):
def pack(fn: Callable[..., Any],
in_variable_filters: Sequence[CollectionFilter],
out_variable_filters: Sequence[CollectionFilter],
- rng_filters: Sequence[PRNGSequenceFilter]) -> Callable[..., Any]:
+ rng_filters: Sequence[PRNGSequenceFilter],
+ name=None) -> Callable[..., Any]:
"""Pack variables and rngs for functional transformations.
The pack function is the building block for all other lifted transformations.
@@ -123,9 +124,16 @@ def scope_fn(variable_groups_xs, rng_groups_xs):
# make sure variable dicts are cloned and can't be manipulated by ref sharing.
variables = jax.tree_map(lambda x: x, variables)
scope_mutable = intersect_filters(scope.root.mutable, mutable)
+ new_path = scope.path
+ if name:
+ if new_path:
+ new_path = new_path[:-1] + (f'{name}({new_path[-1]})',)
+ else:
+ new_path = (f'{name}()',)
inner_scope = Scope(
variables, name=scope.name, rngs=rngs,
- mutable=scope_mutable, parent=None)
+ mutable=scope_mutable, parent=None,
+ path=new_path)
inner_scopes.append(inner_scope)
inner_scopes = _dup_scopes(scopes, inner_scopes, paths)
return treedef.unflatten(inner_scopes)
@@ -158,8 +166,8 @@ def repack(inner_scope_tree):
for scope, out_variable_groups in zip(scopes, out_variable_groups_xs):
for out_variable_group in out_variable_groups:
for col_name, collection in out_variable_group.items():
- for name, value in collection.items():
- scope.put_variable(col_name, name, value)
+ for var_name, value in collection.items():
+ scope.put_variable(col_name, var_name, value)
return y
return wrapper
@@ -205,7 +213,7 @@ def wrapper(scope_fn, repack, variable_groups_xs, rng_groups_xs, fn, *args):
is_target_out = mutable or init
in_vars = (target, variables)
out_vars = (target, variables) if is_target_out else ((), variables)
- wrapper = pack(wrapper, in_vars, out_vars, (rngs,))
+ wrapper = pack(wrapper, in_vars, out_vars, (rngs,), name='transform')
return wrapper
@@ -350,7 +358,8 @@ def mapped(variable_groups_xs, rng_groups_xs, args):
return mapped(variable_groups_xs, rng_groups_xs, args)
return pack(
- inner, variable_in_groups, variable_out_groups, rng_groups)
+ inner, variable_in_groups, variable_out_groups, rng_groups,
+ name='vmap')
ScanAxis = int
@@ -491,7 +500,8 @@ def scanned(broadcast_vars, carry, variable_groups_xs, rng_groups_xs, args):
inner,
(variable_broadcast, variable_carry) + variable_in_groups,
(variable_broadcast, variable_carry) + variable_out_groups,
- rng_groups)
+ rng_groups,
+ name='scan')
def custom_vjp(fn: Callable[..., Any], backward_fn: Callable[..., Any],
@@ -560,7 +570,8 @@ def f_bwd(*args):
variable_out_groups = (grad_kind, True,)
rng_groups = (True,)
return pack(
- inner, variable_in_groups, variable_out_groups, rng_groups)
+ inner, variable_in_groups, variable_out_groups, rng_groups,
+ name='custom_vjp')
def remat(fn: Callable[..., Any],
@@ -576,7 +587,7 @@ def rematted(variable_groups_xs, rng_groups_xs, *args):
return y, repack_fn(scope)
return rematted(variable_groups, rng_groups, *args)
- return pack(inner, (variables,), (variables,), (rngs,))
+ return pack(inner, (variables,), (variables,), (rngs,), name='remat')
def jit(fn: Callable[..., Any],
@@ -601,7 +612,7 @@ def jitted(variable_groups_xs, rng_groups_xs, *args):
return jitted(variable_groups_xs, rng_groups_xs, *args)
- return pack(inner, (variables,), (variables,), (rngs,))
+ return pack(inner, (variables,), (variables,), (rngs,), name='jit')
def remat_scan(body_fn: Callable[..., Any], scope: Scope, carry: Any,
diff --git a/flax/core/scope.py b/flax/core/scope.py
--- a/flax/core/scope.py
+++ b/flax/core/scope.py
@@ -45,6 +45,8 @@
PRNGKey = Any
Array = Any
+RNGSequences = Dict[str, PRNGKey]
+
Filter = Union[bool, str, Sequence[str]]
CollectionFilter = Filter
PRNGSequenceFilter = Filter
@@ -54,6 +56,7 @@
MaybeFrozenCollection = Union[MutableCollection, FrozenCollection]
Variables = Dict[str, MaybeFrozenCollection]
+FrozenVariables = Dict[str, FrozenCollection]
def _fold_in_str(rng: PRNGKey, data: str) -> PRNGKey:
@@ -204,7 +207,8 @@ def __init__(self,
rngs: Optional[Dict[str, PRNGKey]] = None,
name: Optional[str] = None,
mutable: CollectionFilter = False,
- parent: Optional['Scope'] = None):
+ parent: Optional['Scope'] = None,
+ path: Tuple[str] = ()):
"""Initializes a Scope.
Args:
@@ -216,6 +220,7 @@ def __init__(self,
self._variables = variables
self.parent = parent
self.name = name
+ self.path = path
self.rngs = rngs if rngs else {}
self.mutable = mutable
@@ -229,6 +234,12 @@ def __init__(self,
self._invalid = False
+
+ @property
+ def path_text(self) -> str:
+ """Returns the path as a human readable string with slashes between parts."""
+ return '/' + '/'.join(self.path)
+
@property
def invalid(self) -> bool:
"""Returns true if this scope is invalidated as a result of `Scope.temporary`."""
@@ -279,6 +290,8 @@ def reserve(self, name: str):
Args:
name: The name to reserve.
"""
+ if not isinstance(name, str):
+ raise ValueError('Variable and child scopes should have a string name.')
if name in self.reservations:
raise ValueError(f'Duplicate use of name: "{name}"')
self.reservations.add(name)
@@ -315,7 +328,7 @@ def push(self, name: Optional[str] = None, prefix: str = '', reuse=False) -> 'Sc
return self._children[name]
self.reserve(name)
rngs = {key: _fold_in_str(rng, name) for key, rng in self.rngs.items()}
- scope = Scope({}, name=name, rngs=rngs, parent=self)
+ scope = Scope({}, name=name, rngs=rngs, parent=self, path=self.path + (name,))
self._children[name] = scope
return scope
@@ -358,7 +371,6 @@ def is_mutable_collection(self, col: str) -> bool:
"""Check whether a collection is mutable."""
return in_filter(self.root.mutable, col)
-
def _mutable_collection(self, col: str) -> MutableCollection:
if not self.is_mutable_collection(col):
raise ValueError(f'Collection is not mutable: "{col}"')
@@ -413,6 +425,10 @@ def put_variable(self, col: str, name: str, value: Any):
"""Update the value of a Variable."""
self._check_valid()
self._validate_trace_level()
+ if not self.is_mutable_collection(col):
+ raise ValueError(
+ f'Trying to update variable "{name}" in "{self.path_text}" '
+ f'but collection "{col}" is immutable.')
variables = self._mutable_collection(col)
variables[name] = value
@@ -421,6 +437,8 @@ def variable(self, col: str, name: str, init_fn: Callable[..., T],
"""Create a Variable."""
self.reserve(name)
if not self.has_variable(col, name):
+ if not self.is_mutable_collection('params'):
+ raise ValueError(f'No paramater named "{name}" exists in "{self.path_text}".')
init_value = init_fn(*init_args)
self.put_variable(col, name, init_value)
return Variable(self, col, name)
@@ -441,9 +459,11 @@ def param(self, name: str, init_fn: Callable[..., T], *init_args) -> T:
# we might intentionally change the dtype for inference to a half float type for example.
if jnp.shape(val) != jnp.shape(abs_val):
raise ValueError('Inconsistent shapes between value and initializer '
- f'for parameter "{name}": {jnp.shape(val)}, {jnp.shape(abs_val)}')
+ f'for parameter "{name}" in "{self.path_text}": {jnp.shape(val)}, {jnp.shape(abs_val)}')
return value
else:
+ if not self.is_mutable_collection('params'):
+ raise ValueError(f'No paramater named "{name}" exists in "{self.path_text}".')
value = init_fn(self.make_rng('params'), *init_args)
self.put_variable('params', name, value)
return value
@@ -474,7 +494,15 @@ def apply(fn: Callable[..., Any],
`fn` with the scope partially applied.
"""
@functools.wraps(fn)
- def wrapper(variables, *args, rngs=None, **kwargs):
+ def wrapper(variables: FrozenVariables, *args,
+ rngs: Optional[RNGSequences] = None, **kwargs) -> (Any, FrozenVariables):
+
+ if not _is_valid_variables(variables):
+ raise ValueError('The first argument passed to an apply function '
+ 'should be a dictionary of collections. '
+ 'Each collection should be a `FrozenDict` with string keys.')
+ if rngs is not None and not _is_valid_rngs(rngs):
+ raise ValueError('rngs should be a dictionary mapping strings to `jax.PRNGKey`.')
new_variables = _unfreeze_variables(variables, mutable)
with Scope(new_variables, rngs=rngs, mutable=mutable).temporary() as root:
y = fn(root, *args, **kwargs)
@@ -498,9 +526,52 @@ def init(fn: Callable[..., Any], mutable: CollectionFilter = True) -> Callable[.
`fn` with the scope partially applied.
"""
@functools.wraps(fn)
- def wrapper(rngs, *args, **kwargs):
+ def wrapper(rngs, *args, **kwargs) -> (Any, FrozenVariables):
+ if not _is_valid_rng(rngs) and not _is_valid_rngs(rngs):
+ raise ValueError('First argument passed to an init function should be a `jax.PRNGKey` '
+ 'or a dictionary mapping strings to `jax.PRNGKey`.')
if not isinstance(rngs, dict):
- assert rngs.shape == (2,)
rngs = {'params': rngs}
return apply(fn, mutable=mutable)({}, *args, rngs=rngs, **kwargs)
return wrapper
+
+
+def _is_valid_collection(col: FrozenCollection):
+ if not isinstance(col, FrozenDict):
+ return False
+ for name in col.keys():
+ # any value can be stored in a collection so
+ # only keys can be verified.
+ if not isinstance(name, str):
+ return False
+ return True
+
+
+def _is_valid_variables(variables: FrozenVariables):
+ if not isinstance(variables, (dict, FrozenDict)):
+ return False
+ for name, col in variables.items():
+ if not isinstance(name, str):
+ return False
+ if not _is_valid_collection(col):
+ return False
+ return True
+
+
+def _is_valid_rng(rng: Array):
+ if not isinstance(rng, jnp.ndarray):
+ return False
+ if rng.shape != (2,) or rng.dtype != jnp.uint32:
+ return False
+ return True
+
+
+def _is_valid_rngs(rngs: RNGSequences):
+ if not isinstance(rngs, dict):
+ return False
+ for key, val in rngs.items():
+ if not isinstance(key, str):
+ return False
+ if not _is_valid_rng(val):
+ return False
+ return True
| diff --git a/tests/core/lift_test.py b/tests/core/lift_test.py
--- a/tests/core/lift_test.py
+++ b/tests/core/lift_test.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from flax.core import Scope, init, apply, lift
+from flax.core import Scope, init, apply, lift, nn
from jax import random
from jax import numpy as jnp
@@ -36,6 +36,18 @@ def g(scopes, _):
init(f)(random.PRNGKey(0))
+ def test_undefined_param(self):
+ def f(scope):
+ dense = lift.vmap(nn.dense,
+ in_axes=(0, None), out_axes=0,
+ variable_axes={'params': 0},
+ split_rngs={'params': True})
+ dense(scope.push('dense'), np.ones((3, 2)), 2)
+
+ with self.assertRaisesWithLiteralMatch(ValueError, 'No paramater named "kernel" exists in "/vmap(dense)".'):
+ apply(f)({})
+
+
if __name__ == '__main__':
absltest.main()
diff --git a/tests/core/scope_test.py b/tests/core/scope_test.py
--- a/tests/core/scope_test.py
+++ b/tests/core/scope_test.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from flax.core import Scope, scope, init, apply, nn
+from flax.core import Scope, scope, freeze, init, apply, nn
from jax import random
@@ -64,17 +64,25 @@ def test_inconsistent_param_shapes(self):
def f(scope):
scope.param('test', nn.initializers.ones, (4,))
- msg = 'Inconsistent shapes between value and initializer for parameter "test": (2,), (4,)'
+ msg = 'Inconsistent shapes between value and initializer for parameter "test" in "/": (2,), (4,)'
with self.assertRaisesWithLiteralMatch(ValueError, msg):
- apply(f)({'params': {'test': np.ones((2,))}})
+ apply(f)(freeze({'params': {'test': np.ones((2,))}}))
def test_mutate_undefined_collection(self):
def f(scope):
- scope.put_variable('test', 'test', 123)
+ scope.put_variable('state', 'test', 123)
- with self.assertRaisesWithLiteralMatch(ValueError, 'Collection is not mutable: "test"'):
+ msg = 'Trying to update variable "test" in "/" but collection "state" is immutable.'
+ with self.assertRaisesWithLiteralMatch(ValueError, msg):
init(f, mutable='params')(random.PRNGKey(0))
+ def test_undefined_param(self):
+ def f(scope):
+ nn.dense(scope.push('dense'), np.ones((1, 2)), 2)
+
+ with self.assertRaisesWithLiteralMatch(ValueError, 'No paramater named "kernel" exists in "/dense".'):
+ apply(f)({})
+
if __name__ == '__main__':
absltest.main()
| Linen: cryptic error message when feeding with incorrect rngs keys
I want to mention a problem I encountered recently; it cost me a lot of time since the error message is cryptic.
### Problem you have encountered:
I made a typing mistake (shame on me) :
``` python
# a good key
key1, key2,key3 = random.split(random.PRNGKey(0), 3)
# mistake while typing
bad_key = random.split(random.PRNGKey(0), 2)
```
And then cryptic message in `init` or `apply`:
```python
m = MyModule()
p = m.init({'params':key1,'dropout':bad_key},x)
```
### Steps to reproduce:
https://colab.research.google.com/drive/1Ijr74leHGN8ZrvipgpQnVo9Ql8SI03-Y?usp=sharing
### Logs, error messages, etc:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-e5d297b6aa21> in <module>()
----> 1 p = m.init({'params':key1,'dropout':bad_key},x)
34 frames
/usr/local/lib/python3.6/dist-packages/flax/linen/module.py in init(self, rngs, method, *args, **kwargs)
474 def init(self, rngs, *args, method=None, **kwargs):
475 """Create and return initialized data for module with rngs."""
--> 476 _, v_out = self.init_with_output(rngs, *args, method=method, **kwargs)
477 return v_out
478
/usr/local/lib/python3.6/dist-packages/flax/linen/module.py in init_with_output(self, rngs, method, *args, **kwargs)
470 rngs = {'params': rngs}
471 return self.apply(
--> 472 {}, *args, rngs=rngs, method=method, mutable=True, **kwargs)
473
474 def init(self, rngs, *args, method=None, **kwargs):
/usr/local/lib/python3.6/dist-packages/flax/linen/module.py in apply(self, variables, rngs, method, mutable, *args, **kwargs)
462 fn = lambda scope: method(self.clone(parent=scope),
463 *args, **kwargs)
--> 464 return apply(fn, mutable=mutable)(variables, rngs=rngs)
465
466 def init_with_output(self, rngs, *args, method=None, **kwargs):
/usr/local/lib/python3.6/dist-packages/flax/core/scope.py in wrapper(variables, rngs, *args, **kwargs)
338 new_variables = _unfreeze_variables(variables, mutable)
339 with Scope(new_variables, rngs=rngs).temporary() as root:
--> 340 y = fn(root, *args, **kwargs)
341 if mutable:
342 return y, freeze(new_variables)
/usr/local/lib/python3.6/dist-packages/flax/linen/module.py in <lambda>(scope)
461 method = get_unbound_fn(method)
462 fn = lambda scope: method(self.clone(parent=scope),
--> 463 *args, **kwargs)
464 return apply(fn, mutable=mutable)(variables, rngs=rngs)
465
/usr/local/lib/python3.6/dist-packages/flax/linen/module.py in wrapped_module_method(self, *args, **kwargs)
154 _context.module_stack.append(self)
155 try:
--> 156 return fun(self, *args, **kwargs)
157 finally:
158 _context.module_stack.pop()
<ipython-input-3-efadaf5263bf> in __call__(self, x)
3 @nn.compact
4 def __call__(self, x):
----> 5 self.make_rng('dropout')
6 return x
/usr/local/lib/python3.6/dist-packages/flax/linen/module.py in make_rng(self, kind)
451 def make_rng(self, kind: str) -> PRNGKey:
452 """Get a new rng key of a given kind from this Module."""
--> 453 return self.scope.make_rng(kind)
454
455 def apply(self, variables, *args, rngs=None,
/usr/local/lib/python3.6/dist-packages/flax/core/scope.py in make_rng(self, name)
272 self._validate_trace_level()
273 self.rng_counters[name] += 1
--> 274 return random.fold_in(self.rngs[name], self.rng_counters[name])
275
276 def get_variable(self, col: str, name: str, default: T = None) -> T:
/usr/local/lib/python3.6/dist-packages/jax/random.py in fold_in(key, data)
294 statistically safe for producing a stream of new pseudo-random values.
295 """
--> 296 return _fold_in(key, data)
297
298 @jit
/usr/local/lib/python3.6/dist-packages/jax/api.py in f_jitted(*args, **kwargs)
213 backend=backend,
214 name=flat_fun.__name__,
--> 215 donated_invars=donated_invars)
216 return tree_unflatten(out_tree(), out)
217
/usr/local/lib/python3.6/dist-packages/jax/core.py in bind(self, fun, *args, **params)
1142
1143 def bind(self, fun, *args, **params):
-> 1144 return call_bind(self, fun, *args, **params)
1145
1146 def process(self, trace, fun, tracers, params):
/usr/local/lib/python3.6/dist-packages/jax/core.py in call_bind(primitive, fun, *args, **params)
1133 tracers = map(top_trace.full_raise, args)
1134 with maybe_new_sublevel(top_trace):
-> 1135 outs = primitive.process(top_trace, fun, tracers, params)
1136 return map(full_lower, apply_todos(env_trace_todo(), outs))
1137
/usr/local/lib/python3.6/dist-packages/jax/core.py in process(self, trace, fun, tracers, params)
1145
1146 def process(self, trace, fun, tracers, params):
-> 1147 return trace.process_call(self, fun, tracers, params)
1148
1149 def post_process(self, trace, out_tracers, params):
/usr/local/lib/python3.6/dist-packages/jax/core.py in process_call(self, primitive, f, tracers, params)
575
576 def process_call(self, primitive, f, tracers, params):
--> 577 return primitive.impl(f, *tracers, **params)
578 process_map = process_call
579
/usr/local/lib/python3.6/dist-packages/jax/interpreters/xla.py in _xla_call_impl(fun, device, backend, name, donated_invars, *args)
528 def _xla_call_impl(fun: lu.WrappedFun, *args, device, backend, name, donated_invars):
529 compiled_fun = _xla_callable(fun, device, backend, name, donated_invars,
--> 530 *unsafe_map(arg_spec, args))
531 try:
532 return compiled_fun(*args)
/usr/local/lib/python3.6/dist-packages/jax/linear_util.py in memoized_fun(fun, *args)
232 fun.populate_stores(stores)
233 else:
--> 234 ans = call(fun, *args)
235 cache[key] = (ans, fun.stores)
236 return ans
/usr/local/lib/python3.6/dist-packages/jax/interpreters/xla.py in _xla_callable(fun, device, backend, name, donated_invars, *arg_specs)
593 abstract_args, arg_devices = unzip2(arg_specs)
594 if config.omnistaging_enabled:
--> 595 jaxpr, out_avals, consts = pe.trace_to_jaxpr_final(fun, abstract_args)
596 if any(isinstance(c, core.Tracer) for c in consts):
597 raise core.UnexpectedTracerError("Encountered an unexpected tracer.")
/usr/local/lib/python3.6/dist-packages/jax/interpreters/partial_eval.py in trace_to_jaxpr_final(fun, in_avals)
1021 main.source_info = fun_sourceinfo(fun.f) # type: ignore
1022 main.jaxpr_stack = () # type: ignore
-> 1023 jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic(fun, main, in_avals)
1024 del main
1025 return jaxpr, out_avals, consts
/usr/local/lib/python3.6/dist-packages/jax/interpreters/partial_eval.py in trace_to_subjaxpr_dynamic(fun, main, in_avals)
1002 trace = DynamicJaxprTrace(main, core.cur_sublevel())
1003 in_tracers = map(trace.new_arg, in_avals)
-> 1004 ans = fun.call_wrapped(*in_tracers)
1005 out_tracers = map(trace.full_raise, ans)
1006 jaxpr, out_avals, consts = frame.to_jaxpr(in_tracers, out_tracers)
/usr/local/lib/python3.6/dist-packages/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
149
150 try:
--> 151 ans = self.f(*args, **dict(self.params, **kwargs))
152 except:
153 # Some transformations yield from inside context managers, so we have to
/usr/local/lib/python3.6/dist-packages/jax/random.py in _fold_in(key, data)
298 @jit
299 def _fold_in(key, data):
--> 300 return threefry_2x32(key, PRNGKey(data))
301
302
/usr/local/lib/python3.6/dist-packages/jax/api.py in f_jitted(*args, **kwargs)
213 backend=backend,
214 name=flat_fun.__name__,
--> 215 donated_invars=donated_invars)
216 return tree_unflatten(out_tree(), out)
217
/usr/local/lib/python3.6/dist-packages/jax/core.py in bind(self, fun, *args, **params)
1142
1143 def bind(self, fun, *args, **params):
-> 1144 return call_bind(self, fun, *args, **params)
1145
1146 def process(self, trace, fun, tracers, params):
/usr/local/lib/python3.6/dist-packages/jax/core.py in call_bind(primitive, fun, *args, **params)
1133 tracers = map(top_trace.full_raise, args)
1134 with maybe_new_sublevel(top_trace):
-> 1135 outs = primitive.process(top_trace, fun, tracers, params)
1136 return map(full_lower, apply_todos(env_trace_todo(), outs))
1137
/usr/local/lib/python3.6/dist-packages/jax/core.py in process(self, trace, fun, tracers, params)
1145
1146 def process(self, trace, fun, tracers, params):
-> 1147 return trace.process_call(self, fun, tracers, params)
1148
1149 def post_process(self, trace, out_tracers, params):
/usr/local/lib/python3.6/dist-packages/jax/interpreters/partial_eval.py in process_call(self, call_primitive, f, tracers, params)
938 def process_call(self, call_primitive, f, tracers, params):
939 in_avals = [t.aval for t in tracers]
--> 940 jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic(f, self.main, in_avals)
941 if not jaxpr.eqns:
942 return core.eval_jaxpr(jaxpr, consts, *tracers)
/usr/local/lib/python3.6/dist-packages/jax/interpreters/partial_eval.py in trace_to_subjaxpr_dynamic(fun, main, in_avals)
1002 trace = DynamicJaxprTrace(main, core.cur_sublevel())
1003 in_tracers = map(trace.new_arg, in_avals)
-> 1004 ans = fun.call_wrapped(*in_tracers)
1005 out_tracers = map(trace.full_raise, ans)
1006 jaxpr, out_avals, consts = frame.to_jaxpr(in_tracers, out_tracers)
/usr/local/lib/python3.6/dist-packages/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
149
150 try:
--> 151 ans = self.f(*args, **dict(self.params, **kwargs))
152 except:
153 # Some transformations yield from inside context managers, so we have to
/usr/local/lib/python3.6/dist-packages/jax/random.py in threefry_2x32(keypair, count)
261 out = jnp.concatenate(x)
262 assert out.dtype == np.uint32
--> 263 return lax.reshape(out[:-1] if odd_size else out, count.shape)
264
265
/usr/local/lib/python3.6/dist-packages/jax/lax/lax.py in reshape(operand, new_sizes, dimensions)
688 return reshape_p.bind(
689 operand, new_sizes=new_sizes,
--> 690 dimensions=None if dimensions is None or same_dims else tuple(dimensions))
691
692 def pad(operand: Array, padding_value: Array,
/usr/local/lib/python3.6/dist-packages/jax/core.py in bind(self, *args, **params)
264 top_trace = find_top_trace(args)
265 tracers = map(top_trace.full_raise, args)
--> 266 out = top_trace.process_primitive(self, tracers, params)
267 return map(full_lower, out) if self.multiple_results else full_lower(out)
268
/usr/local/lib/python3.6/dist-packages/jax/interpreters/partial_eval.py in process_primitive(self, primitive, tracers, params)
926 def process_primitive(self, primitive, tracers, params):
927 avals = [t.aval for t in tracers]
--> 928 out_avals = primitive.abstract_eval(*avals, **params)
929 out_avals = [out_avals] if not primitive.multiple_results else out_avals
930 source_info = source_info_util.current()
/usr/local/lib/python3.6/dist-packages/jax/lax/lax.py in standard_abstract_eval(prim, shape_rule, dtype_rule, *args, **kwargs)
1909 return ConcreteArray(prim.impl(*[x.val for x in args], **kwargs))
1910 elif least_specialized is ShapedArray:
-> 1911 return ShapedArray(shape_rule(*args, **kwargs), dtype_rule(*args, **kwargs))
1912 elif least_specialized is UnshapedArray:
1913 return UnshapedArray(dtype_rule(*args, **kwargs))
/usr/local/lib/python3.6/dist-packages/jax/lax/lax.py in _reshape_shape_rule(operand, new_sizes, dimensions)
3365 if prod(np.shape(operand)) != prod(new_sizes):
3366 msg = 'reshape total size must be unchanged, got new_sizes {} for shape {}.'
-> 3367 raise TypeError(msg.format(new_sizes, np.shape(operand)))
3368 if dimensions is not None:
3369 if set(dimensions) != set(range(np.ndim(operand))):
TypeError: reshape total size must be unchanged, got new_sizes (2,) for shape (4,).
```
### Steps to reproduce:
https://colab.research.google.com/drive/1Ijr74leHGN8ZrvipgpQnVo9Ql8SI03-Y?usp=sharing
| Sure, it would be good to add assertions that arguments that we expect to be RNGs are indeed RNGs. (I see that `jax.random` has `_is_prng_key` but that's a private method, so we should either ask the JAX core folks to make this public or, as a first step, replicate it in Flax).
I'm looking into hardening init/apply arg validation (also for Frozen vs normal dict). I'll make sure the RNGs are validated as well. | 2020-10-27T15:39:37 |
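A sketch of the kind of early validation this PR adds (simplified; the real checks live in `flax/core/scope.py`), assuming the raw uint32 key representation used at the time: a valid PRNG key is a uint32 array of shape `(2,)`, so a batch of keys from `random.split` is caught immediately.
```python
import jax.numpy as jnp
from jax import random

def looks_like_prng_key(rng) -> bool:
    return isinstance(rng, jnp.ndarray) and rng.shape == (2,) and rng.dtype == jnp.uint32

good_key = random.PRNGKey(0)
bad_key = random.split(random.PRNGKey(0), 2)   # shape (2, 2): a batch of keys, not one key
assert looks_like_prng_key(good_key)
assert not looks_like_prng_key(bad_key)
```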
google/flax | 596 | google__flax-596 | [
"595"
] | 397d63c49e90f4907c70f3ac3947bfc3d9495d7b | diff --git a/flax/core/frozen_dict.py b/flax/core/frozen_dict.py
--- a/flax/core/frozen_dict.py
+++ b/flax/core/frozen_dict.py
@@ -24,6 +24,14 @@
V = TypeVar('V')
+def _indent(x, num_spaces):
+ indent_str = ' ' * num_spaces
+ lines = x.split('\n')
+ assert lines[-1] == ''
+ # skip the final line because it's empty and should not be indented.
+ return '\n'.join(indent_str + line for line in lines[:-1]) + '\n'
+
+
@jax.tree_util.register_pytree_node_class
class FrozenDict(Mapping[K, V]):
"""An immutable variant of the Python dict."""
@@ -55,7 +63,21 @@ def __len__(self):
return len(self._dict)
def __repr__(self):
- return 'FrozenDict(%r)' % self._dict
+ return self.pretty_repr()
+
+ def pretty_repr(self, num_spaces=4):
+ """Returns an indented representation of the nested dictionary."""
+ def pretty_dict(x):
+ if not isinstance(x, dict):
+ return repr(x)
+ rep = ''
+ for key, val in x.items():
+ rep += f'{key}: {pretty_dict(val)},\n'
+ if rep:
+ return '{\n' + _indent(rep, num_spaces) + '}'
+ else:
+ return '{}'
+ return f'FrozenDict({pretty_dict(self._dict)})'
def __hash__(self):
if self._hash is None:
| diff --git a/tests/core/frozen_dict_test.py b/tests/core/frozen_dict_test.py
--- a/tests/core/frozen_dict_test.py
+++ b/tests/core/frozen_dict_test.py
@@ -59,5 +59,20 @@ def test_frozen_items(self):
self.assertEqual(items, [('a', 1), ('b', freeze(xs['b']))])
+ def test_frozen_dict_repr(self):
+ expected = (
+"""FrozenDict({
+ a: 1,
+ b: {
+ c: 2,
+ d: {},
+ },
+})""")
+
+ xs = FrozenDict({'a': 1, 'b': {'c': 2, 'd': {}}})
+ self.assertEqual(repr(xs), expected)
+ self.assertEqual(repr(FrozenDict()), 'FrozenDict({})')
+
+
if __name__ == '__main__':
absltest.main()
| QoL: better print for FrozenDict
The best way I'm aware of to get an overview of model shape is via `jax.tree_map(jnp.shape, params)`. FrozenDicts have no concept of pretty printing the way dicts do, so large models are unwieldy to parse at a glance.
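For reference, a quick illustrative helper (not flax's implementation) showing the kind of indented output the PR adds via `FrozenDict.pretty_repr`:
```python
def pretty_dict(d, indent=0):
    # Recursively render a nested dict with one key per line and 4-space indents.
    pad = ' ' * indent
    lines = []
    for key, val in d.items():
        if isinstance(val, dict):
            lines.append(f'{pad}{key}:')
            lines.append(pretty_dict(val, indent + 4))
        else:
            lines.append(f'{pad}{key}: {val!r}')
    return '\n'.join(lines)

print(pretty_dict({'Dense_0': {'kernel': (784, 256), 'bias': (256,)}}))
```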
 | Yes, I noticed the output ends up without indents and newlines. Let's try to fix that. | 2020-11-04T14:35:42 |
google/flax | 628 | google__flax-628 | [
"627"
] | 8ce8e5cdb693db891d86b18618a329139968454a | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -26,7 +26,7 @@
install_requires = [
"numpy>=1.12",
- "jax>=0.1.59",
+ "jax>=0.1.77",
"matplotlib", # only needed for tensorboard export
"dataclasses;python_version<'3.7'", # will only install on py3.6
"msgpack",
| After update from 0.2.0: AttributeError: module 'jax.core' has no attribute 'eval_context'
After updating from flax 0.2.0 to flax 0.2.2 I get the above error message. Downgrading to 0.2.0 solves this, so the error source is located. I'm working with the now deprecated flax.nn package, in case backward compatibility might be the reason for this issue.
The issue is encountered in a custom RNN when using the init_by_shape function in conjunction with jax.lax.scan.
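A quick illustrative check of the environment (not from the repository); `eval_context` is the attribute the error message complains about, and the PR above raises the minimum jax version so that it is present.
```python
import jax
import jax.core

print("jax version:", jax.__version__)
print("has eval_context:", hasattr(jax.core, "eval_context"))  # False on the older jax that triggered this error
```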
| Hi @mr128254 -- we are about to officially deprecate `flax.nn` but if you have a minimal repro we can perhaps take a look. (Also have you upgraded your version of JAX in parallel to upgrading the Flax version?)
I am pretty sure this has something to do with the Jax version. We should raise the minimal version in `setup.py` | 2020-11-12T14:52:07 |
|
google/flax | 823 | google__flax-823 | [
"674"
] | 809221154d41b3ac53eb36e3147543b19b575556 | diff --git a/flax/linen/linear.py b/flax/linen/linear.py
--- a/flax/linen/linear.py
+++ b/flax/linen/linear.py
@@ -257,8 +257,7 @@ def __call__(self, inputs: Array) -> Array:
is_single_input = True
inputs = jnp.expand_dims(inputs, axis=0)
- if self.strides is None:
- self.strides = (1,) * (inputs.ndim - 2)
+ strides = self.strides or (1,) * (inputs.ndim - 2)
in_features = inputs.shape[-1]
assert in_features % self.feature_group_count == 0
@@ -271,7 +270,7 @@ def __call__(self, inputs: Array) -> Array:
y = lax.conv_general_dilated(
inputs,
kernel,
- self.strides,
+ strides,
self.padding,
lhs_dilation=self.input_dilation,
rhs_dilation=self.kernel_dilation,
diff --git a/flax/linen/module.py b/flax/linen/module.py
--- a/flax/linen/module.py
+++ b/flax/linen/module.py
@@ -31,7 +31,7 @@
from flax import serialization
from flax.core import Scope, apply
from flax.core.scope import CollectionFilter, Variable, VariableDict
-from flax.core.frozen_dict import freeze
+from flax.core.frozen_dict import FrozenDict, freeze
# from .dotgetter import DotGetter
@@ -72,7 +72,7 @@ def _module_repr(module: 'Module', num_spaces: int = 4):
rep = ''
attributes = {k: v for k, v in cls.__annotations__.items()
if k not in ('parent', 'name')}
- child_modules = {k: v for k, v in module.children.items() # pytype: disable=attribute-error
+ child_modules = {k: v for k, v in module._state.children.items() # pytype: disable=attribute-error
if isinstance(v, Module)}
if attributes:
rep += '# attributes\n'
@@ -126,7 +126,7 @@ def disable_named_call():
_use_named_call = False
-# Utilities for autonaming pytrees of Modules defined inside setup()
+# Utilities for pytrees of Modules defined inside setup()
# -----------------------------------------------------------------------------
def _get_suffix_value_pairs(
tree_or_leaf: Any) -> List[Tuple[str, Type["Module"]]]:
@@ -153,6 +153,15 @@ def _all_names_on_object(obj: Any) -> Set[str]:
return nameset
+def _freeze_attr(val: Any) -> Any:
+ if isinstance(val, (dict, FrozenDict)):
+ return FrozenDict({k: _freeze_attr(v) for k, v in val.items()})
+ elif isinstance(val, (list, tuple)):
+ return tuple(_freeze_attr(v) for v in val)
+ else:
+ return val
+
+
# Method wrapping of "compact methods" and setup()
# -----------------------------------------------------------------------------
def compact(fun: Callable) -> Callable:
@@ -268,6 +277,8 @@ class _ModuleInternalState:
in_setup: bool = False
last_varname: Optional[str] = None
autoname_cursor: Optional[dict] = dataclasses.field(default_factory=dict)
+ frozen: bool = False
+ children: Dict[str, Union[str, 'Module']] = dataclasses.field(default_factory=dict)
def reset(self):
self.in_compact_method = False
@@ -408,6 +419,10 @@ def __setattr__(self, name: str, val: Any):
name: Attribute to set.
val: Value of the attribute.
"""
+ if name != '_state' and self._state.frozen:
+ # raises a TypeError just like frozen python dataclasses
+ raise TypeError("Module instance is frozen outside of setup method.")
+
# We don't mess with the parent module.
if name == 'parent':
pass
@@ -416,6 +431,7 @@ def __setattr__(self, name: str, val: Any):
pass
# Submodules are being defined and attached in setup()
else:
+ val = _freeze_attr(val)
for suffix, subvalue in _get_suffix_value_pairs(val):
if isinstance(subvalue, Module):
if not self._state.in_setup:
@@ -454,7 +470,6 @@ def __post_init__(self):
# this Module at the top-level to variables and rngs.
self._state = _ModuleInternalState()
- self.children = dict() # tracks child modules
# Typically we set the parent based on the dynamic module context.
if self.parent is _unspecified_parent: # pytype: disable=attribute-error
@@ -488,7 +503,7 @@ def __post_init__(self):
f"trying to share submodule {self.__class__.__name__} by name "
f"{self.name}. To share submodules, store module instances as a"
f" Python object or as an attribute on self and reuse.")
- self.parent.children[self.name] = self
+ self.parent._state.children[self.name] = self
self.scope = self.parent.scope.push(self.name)
# Top-level invocation with a functional Scope.
@@ -500,6 +515,7 @@ def __post_init__(self):
# Call the user-defined initialization setup() function.
self.setup()
+ self._state.frozen = True
def __repr__(self):
return _module_repr(self)
@@ -590,7 +606,7 @@ def variable(self, col: str, name: str, init_fn, *init_args) -> Variable:
# ephemeral state for setattr name-equality-check
self._state.last_varname = name
v = self.scope.variable(col, name, init_fn, *init_args)
- self.children[name] = col
+ self._state.children[name] = col
return v
def param(self, name: str, init_fn: Callable[..., T], *init_args) -> T:
@@ -619,7 +635,7 @@ def param(self, name: str, init_fn: Callable[..., T], *init_args) -> T:
# ephemeral state for setattr name-equality-check
self._state.last_varname = name
v = self.scope.param(name, init_fn, *init_args)
- self.children[name] = 'params'
+ self._state.children[name] = 'params'
return v
def has_variable(self, col: str, name: str) -> bool:
diff --git a/flax/linen/transforms.py b/flax/linen/transforms.py
--- a/flax/linen/transforms.py
+++ b/flax/linen/transforms.py
@@ -139,8 +139,6 @@ def core_fn(scopes, *args, **kwargs):
cloned = set_module_scopes(cloned, scopes)
cloned._state = copy.deepcopy(self._state) # pylint: disable=protected-access
res = fn(cloned, *args, **kwargs)
- # preserve submodule-tree stripped of scopes/tracers for introspection
- object.__setattr__(self, 'children', clean_clone(cloned).children)
self._state = copy.deepcopy(cloned._state) # pylint: disable=protected-access
return res
# here we apply the given lifting transform to the scope-ingesting fn
@@ -172,8 +170,6 @@ def core_fn(scopes, *args, **kwargs):
cloned = set_module_scopes(self, scopes)
cloned._state = copy.deepcopy(self._state) # pylint: disable=protected-access
res = rewrapped_fn(cloned, *args, **kwargs)
- # preserve submodule-tree stripped of scopes/tracers for introspection
- object.__setattr__(self, 'children', clean_clone(cloned).children)
self._state = copy.deepcopy(cloned._state) # pylint: disable=protected-access
return res
# here we apply the given lifting transform to the scope-ingesting fn
@@ -224,8 +220,6 @@ def core_fn(scopes, *args, **kwargs):
cloned = set_module_scopes(self, scopes)
cloned._state = copy.deepcopy(self._state) # pylint: disable=protected-access
res = rewrapped_fn(cloned, *args, **kwargs)
- # preserve submodule-tree stripped of scopes/tracers for introspection
- object.__setattr__(self, 'children', clean_clone(cloned).children)
self._state = copy.deepcopy(cloned._state) # pylint: disable=protected-access
return res
# here we apply the given lifting transform to the scope-ingesting fn
| diff --git a/tests/linen/module_test.py b/tests/linen/module_test.py
--- a/tests/linen/module_test.py
+++ b/tests/linen/module_test.py
@@ -721,6 +721,20 @@ def __call__(self, x):
variables = foo.init(random.PRNGKey(0), x)
self.assertEqual(variables['params']['bar']['kernel'].shape, (2, 3))
+ def test_module_frozen(self):
+ class Foo(nn.Module):
+ bar: nn.Dense = dataclasses.field(init=False)
+
+ def setup(self):
+ self.i = 1
+
+ def __call__(self):
+ self.i = 2
+
+ foo = Foo()
+ with self.assertRaisesWithLiteralMatch(TypeError, "Module instance is frozen outside of setup method."):
+ foo.init(random.PRNGKey(0))
+
if __name__ == '__main__':
absltest.main()
| Linen modules should be frozen
Currently we don't enforce linen Modules to be frozen after setup. However, this should be the case because Module instances need to be clone-able to work correctly. `__setattr__` should refuse to set attributes after setup is finished.
Update:
Actually there are more sharp edges that can be fixed by freezing correctly.
Currently we accept lists and dicts of sub modules which are registered on assignment. But we can actually freeze them to avoid this common trap:
```
def setup(self):
self.sub_modules = [Dense()]
self.sub_modules.append(Dense())
```
We could avoid this by making sub_modules is stored as a tuple and similarly we can avoid the same issue with dicts by transforming them into a FrozenDict
| Marking as "pull requests welcome" if anyone wants to take a look at it. This change will help avoid possible footguns for users. | 2021-01-11T13:58:46 |
google/flax | 845 | google__flax-845 | [
"844"
] | 87276132fad29a13c400a0ec261b32e753b98ce8 | diff --git a/flax/core/scope.py b/flax/core/scope.py
--- a/flax/core/scope.py
+++ b/flax/core/scope.py
@@ -223,6 +223,10 @@ def value(self, value: T):
"""Updates the value of this Variable."""
self.scope.put_variable(self.collection, self.name, value)
+ def is_mutable(self) -> bool:
+ """Checks if this Variable is mutable."""
+ return self.scope.is_mutable_collection(self.collection)
+
class Scope:
"""A Scope allows easy access to variables and manages RNGS of a neural network layer.
diff --git a/flax/linen/module.py b/flax/linen/module.py
--- a/flax/linen/module.py
+++ b/flax/linen/module.py
@@ -654,6 +654,12 @@ def has_variable(self, col: str, name: str) -> bool:
raise ValueError("Can't access variables on unbound modules")
return self.scope.has_variable(col, name)
+ def is_mutable_collection(self, col: str) -> bool:
+ """Returns true if the collection `col` is mutable."""
+ if self.scope is None:
+ raise ValueError("Can't check mutability on unbound modules")
+ return self.scope.is_mutable_collection(col)
+
def make_rng(self, name: str) -> PRNGKey:
"""Returns a new RNG key from a given RNG sequence for this Module.
| diff --git a/tests/core/scope_test.py b/tests/core/scope_test.py
--- a/tests/core/scope_test.py
+++ b/tests/core/scope_test.py
@@ -83,6 +83,14 @@ def f(scope):
with self.assertRaisesWithLiteralMatch(ValueError, 'No paramater named "kernel" exists in "/dense".'):
apply(f)({})
+ def test_variable_is_mutable(self):
+ def f(scope, should_be_mutable):
+ test = scope.variable('state', 'test', lambda: 1)
+ self.assertEqual(test.is_mutable(), should_be_mutable)
+
+ _, variables = apply(f, mutable='state')({}, True)
+ apply(f, mutable=False)(variables, False)
+
if __name__ == '__main__':
absltest.main()
diff --git a/tests/linen/module_test.py b/tests/linen/module_test.py
--- a/tests/linen/module_test.py
+++ b/tests/linen/module_test.py
@@ -734,6 +734,15 @@ def __call__(self):
foo = Foo()
with self.assertRaisesWithLiteralMatch(TypeError, "Module instance is frozen outside of setup method."):
foo.init(random.PRNGKey(0))
+
+ def test_is_mutable_collection(self):
+ class EmptyModule(nn.Module):
+ def __call__(self):
+ return self.is_mutable_collection('test')
+
+ empty = EmptyModule()
+ self.assertTrue(empty.apply({}, mutable=['test'])[0])
+ self.assertFalse(empty.apply({}, mutable=False))
if __name__ == '__main__':
| Mutable / Immutable state when training
Hi Flax team,
I'm working on a model with an internal state which gets updated during training. When calling the model during validation, I do not want to update these variables. I could technically introduce a training variable, but I feel it could be done more elegantly (and much more simply!) by checking whether a state is mutable:
1 ) Is there an easy way to check if a variable is mutable, so that my code only updates when it is? e.g.
`if is_initialized and var.is_mutable(): var.value = f(....)`
2 ) If I set mutable=False, I only get back the output. Is there a way to get back the state regardless? e.g.
`output, updated_state = model.apply(inputs, mutable=False, return_state=True)`
My use case is that for my validation metrics I call my loss function with the test data and extract the metrics, so that I can use the same code for both training and validation.
Thanks!
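A minimal sketch of how the mutability check exposed by this PR could cover the use case above (the module and the 'state' collection are illustrative):
```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class Counter(nn.Module):
  @nn.compact
  def __call__(self, x):
    count = self.variable('state', 'count', lambda: jnp.zeros((), jnp.int32))
    # Only touch the state when the 'state' collection was passed as mutable,
    # e.g. during training but not during validation.
    if self.is_mutable_collection('state'):
      count.value = count.value + 1
    return x

x = jnp.ones((3,))
variables = Counter().init(jax.random.PRNGKey(0), x)
# Training: 'state' is mutable and the updated collection is returned.
y, updated_state = Counter().apply(variables, x, mutable=['state'])
# Validation: nothing is mutated and only the output comes back.
y = Counter().apply(variables, x)
```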
| The code for this is already there; it is just not exposed as an API. This will be pretty easy.
We decided not to return state that isn't updated. The reasoning is that it becomes easy to accidentally store the same variables twice, or to return a copy of variables from a compiled function that you don't need.
I'd be happy to give it a try and implement / write an example if you could give me some pointers; I couldn't find anything in the source code when I looked at it though...
Alright, I understand, but still a shame. Adding a keyword also goes against the design? | 2021-01-15T13:25:54 |
google/flax | 910 | google__flax-910 | [
"879"
] | e2cb2844ed15a01541c34ae940d572d1007cd24a | diff --git a/flax/linen/module.py b/flax/linen/module.py
--- a/flax/linen/module.py
+++ b/flax/linen/module.py
@@ -725,6 +725,13 @@ def init(self, rngs: Union[PRNGKey, RNGSequences], *args,
**kwargs) -> VariableDict:
"""Initializes a module method with variables and returns modified variables.
+ Jitting `init` initializes a model lazily using only the shapes of the
+ provided arguments, and avoids computing the forward pass with actual
+ values. Example::
+
+ jit_init = jax.jit(SomeModule.init)
+ jit_init(rng, jnp.ones(input_shape, jnp.float32))
+
Args:
rngs: The rngs for the variable collections.
method: An optional method. If provided, applies this method. If not
| Improve Documentation: Jitting init()
In some of our examples we `jax.jit` the `init()`, for instance in the [WMT example](https://github.com/google/flax/blob/master/linen_examples/wmt/train.py#L472), and in other examples we don't.
@Marvin182 mentioned in a chat: "Jitting the model.init() of the PixelCNN example takes forever (>5min) but runs without jitting in <20s." [on TPU]
@jheek replied: "Jitting init is a difficult thing. On one hand we save time because we are lazy and potentially avoid lots of small compiles. On the other hand we have a lot of duplicate HLOs when a model has many parameters with the same shape & dtype."
It thus seems there are some best practices on how and when to `jit` the `init()` of Flax modules, and it would be useful to document this since it can make a big difference in practice, especially on TPU.
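A minimal sketch of the pattern this PR documents; the module and input shape below are placeholders:
```python
import jax
import jax.numpy as jnp
import flax.linen as nn

model = nn.Dense(features=16)
# Jitting init makes initialization lazy: the forward pass is only traced,
# and (as noted in the comment below) XLA can dead-code-eliminate it so that
# effectively only the parameter initializers run.
jit_init = jax.jit(model.init)
variables = jit_init(jax.random.PRNGKey(0), jnp.ones((1, 8), jnp.float32))
```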
| Another comment from Daniel Johnson:
"if you expect to create some expensive value and then immediately throw it away. In the case of flax models, if you call model.init it actually runs a forward pass through the model, which can be expensive. Putting it in jit means jax will trace through it, then dead-code-eliminate the forward pass and only keep the parameters." | 2021-01-20T15:37:43 |
|
google/flax | 965 | google__flax-965 | [
"924"
] | 5f4b50801712bc6aa8660566ccea4e2a419d28fb | diff --git a/flax/optim/adam.py b/flax/optim/adam.py
--- a/flax/optim/adam.py
+++ b/flax/optim/adam.py
@@ -98,7 +98,7 @@ def apply_param_gradient(self, step, hyper_params, param, state, grad):
grad_sq_ema = beta2 * state.grad_sq_ema + (1. - beta2) * grad_sq
# bias correction
- t = step + 1.
+ t = jnp.array(step + 1, lax.dtype(param.dtype))
grad_ema_corr = grad_ema / (1 - beta1 ** t)
grad_sq_ema_corr = grad_sq_ema / (1 - beta2 ** t)
diff --git a/flax/optim/lamb.py b/flax/optim/lamb.py
--- a/flax/optim/lamb.py
+++ b/flax/optim/lamb.py
@@ -74,7 +74,7 @@ def apply_param_gradient(self, step, hyper_params, param, state, grad):
grad_ema = beta1 * state.grad_ema + (1. - beta1) * grad
grad_sq_ema = beta2 * state.grad_sq_ema + (1. - beta2) * grad_sq
- t = step + 1.
+ t = jnp.array(step + 1, lax.dtype(param.dtype))
grad_ema_corr = grad_ema / (1. - beta1 ** t)
grad_sq_ema_corr = grad_sq_ema / (1. - beta2 ** t)
| When jax_enable_x64 is set Adam promotes everything to float64
### Problem you have encountered:
When `jax_enable_x64` is set, Adam's `apply_gradient` method will promote all float32 arrays to float64, potentially unexpectedly degrading performance.
This is due to jax's wonky type promotion semantics. The offending line is:
https://github.com/google/flax/blob/3e36db3e5e3b8e6e1777d612f270e7948238aa9c/flax/optim/adam.py#L82
which promotes like:
```python
jnp.array([0], dtype=jnp.int32) + 1. # == DeviceArray([1.], dtype=float64)
```
and then cascades from there, promoting everything to float64.
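A minimal sketch of one way to stop the cascade, mirroring the fix in the patch above: cast the bias-correction step to the parameter dtype instead of relying on promotion against a Python float (`lax.dtype` is what the patch uses):
```python
import jax.numpy as jnp
from jax import lax

def bias_correction_step(step, param):
  # `step + 1.` mixes an int32 step with a Python float, which (as reported
  # here) comes out as float64 once jax_enable_x64 is set. Casting to the
  # parameter dtype keeps the rest of the update in float32.
  return jnp.array(step + 1, lax.dtype(param.dtype))
```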
### What you expected to happen:
Arrays should retain their dtypes on optimizer updates.
### Logs, error messages, etc:
### Steps to reproduce:
```python
from jax.config import config
config.update("jax_enable_x64", True)
import jax.numpy as jnp
import flax
opt = flax.optim.Adam(1e-3).create(
{"x": jnp.zeros(10, dtype=jnp.float32)})
assert opt.target["x"].dtype == jnp.float32
opt = opt.apply_gradient({"x": jnp.zeros(10, dtype=jnp.float32)})
# This fails, since dtype was promoted to float64
assert opt.target["x"].dtype == jnp.float32
```
| 2021-02-01T06:35:47 |
||
google/flax | 985 | google__flax-985 | [
"785"
] | 947923ec0f39282d0c6c3a0c369ebe17e5358051 | diff --git a/docs/_ext/codediff.py b/docs/_ext/codediff.py
--- a/docs/_ext/codediff.py
+++ b/docs/_ext/codediff.py
@@ -26,14 +26,14 @@
Use directive as follows:
.. codediff::
- :title-left: <LEFT_CODE_BLOCK_TITLE>
- :title-right: <RIGHT_CODE_BLOCK_TITLE>
- :highlight-left: <LINES_TO_HIGHLIGHT_LEFT>
- :highlight-right: <LINES_TO_HIGHLIGHT_RIGHT>
+ :title_left: <LEFT_CODE_BLOCK_TITLE>
+ :title_right: <RIGHT_CODE_BLOCK_TITLE>
<CODE_BLOCK_LEFT>
---
<CODE_BLOCK_RIGHT>
+
+In order to highlight a line of code, prepend it with "#!".
"""
class CodeDiffParser:
@@ -94,7 +94,7 @@ class CodeDiffDirective(SphinxDirective):
'code_sep': directives.unchanged,
}
- def run(self):
+ def run(self):
new_content = CodeDiffParser().parse(list(self.content), **self.options)
node = nodes.paragraph()
| Port ensembling HOWTO from old diff-based system
And instead, use a standalone doc with tests like in #771
Here is the old (pre-Linen) HOWTO diff, for reference:
https://github.com/google/flax/blob/master/howtos/diffs/ensembling.diff
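For reference, the patch above renames the directive options to `:title_left:`/`:title_right:` and marks highlighted lines with a `#!` prefix; a hypothetical helper illustrating that marker convention (not the actual `CodeDiffParser` logic):
```python
def split_highlights(code_lines):
  # Hypothetical: strip the "#!" highlight marker and record which
  # (1-based) line numbers should be emphasized when rendering.
  cleaned, highlighted = [], []
  for lineno, line in enumerate(code_lines, start=1):
    if line.lstrip().startswith('#!'):
      highlighted.append(lineno)
      line = line.replace('#!', '', 1)
    cleaned.append(line)
  return cleaned, highlighted
```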
| 2021-02-04T15:33:26 |
||
google/flax | 1,072 | google__flax-1072 | [
"847"
] | aaf512bb42e94b8aad8d38478179dc7ad65f2554 | diff --git a/flax/core/scope.py b/flax/core/scope.py
--- a/flax/core/scope.py
+++ b/flax/core/scope.py
@@ -377,9 +377,10 @@ def reserve(self, name: str):
name: the name to reserve.
"""
if not isinstance(name, str):
- raise errors.ScopeNameTypeError(name)
+ raise TypeError('The type of scope "{name}" should be string but '
+ f'it is {type(name)}')
if name in self.reservations:
- raise errors.ScopeNameInUseError(name)
+ raise ValueError(f'Duplicate use of scope name: "{name}"')
self.reservations.add(name)
def default_name(self, prefix: str) -> str:
@@ -502,7 +503,8 @@ def has_rng(self, name: str) -> bool:
def make_rng(self, name: str) -> PRNGKey:
"""Generates A PRNGKey from a PRNGSequence with name `name`."""
- assert self.has_rng(name), f'Need PRNG for "{name}"'
+ if not self.has_rng(name):
+ raise errors.InvalidRngError(f'{self.name} needs PRNG for "{name}"')
self._check_valid()
self._validate_trace_level()
self.rng_counters[name] += 1
@@ -649,7 +651,8 @@ def bind(variables: VariableDict,
if not _is_valid_variables(variables):
raise errors.ApplyScopeInvalidVariablesError()
if rngs is not None and not _is_valid_rngs(rngs):
- raise errors.ApplyScopeInvalidRngsError()
+ raise errors.InvalidRngError(
+ 'rngs should be a dictionary mapping strings to `jax.PRNGKey`.')
new_variables = _unfreeze_variables(variables, mutable)
return Scope(new_variables, rngs=rngs, mutable=mutable)
@@ -696,7 +699,9 @@ def init(fn: Callable[..., Any],
@functools.wraps(fn)
def wrapper(rngs, *args, **kwargs) -> Tuple[Any, VariableDict]:
if not _is_valid_rng(rngs) and not _is_valid_rngs(rngs):
- raise errors.InitScopeInvalidRngsError()
+ raise ValueError('First argument passed to an init function should be a '
+ '`jax.PRNGKey` or a dictionary mapping strings to '
+ '`jax.PRNGKey`.')
if not isinstance(rngs, dict):
rngs = {'params': rngs}
return apply(fn, mutable=mutable)({}, *args, rngs=rngs, **kwargs)
diff --git a/flax/errors.py b/flax/errors.py
--- a/flax/errors.py
+++ b/flax/errors.py
@@ -69,77 +69,76 @@ def __init__(self, message):
error_msg = f'{message} ({error_page}#{module_name}.{class_name})'
super().__init__(error_msg)
+
#################################################
# scope.py errors #
#################################################
-class InitScopeInvalidRngsError(FlaxError):
- """
- When initializing a Module with
- :meth:`Module.init() <flax.linen.Module.init>`, the first argument can be of
- two forms:
- 1. A single PRNGKey. This is in case only one PRNGKey is needed to initialize
- the ``params`` collection. Note that this::
+class InvalidRngError(FlaxError):
+ """
+ All rngs used in a Module should be passed to
+ :meth:`Module.init() <flax.linen.Module.init>` and
+ :meth:`Module.apply() <flax.linen.Module.apply>` appropriately. We explain
+ both separately using the following example::
- SomeModule(...).init(jax.random.PRNGKey(0), ...)
+ class Bar(nn.Module):
+ @nn.compact
+ def __call__(self, x):
+ some_param = self.param('some_param', nn.initializers.zeros, (1, ))
+ dropout_rng = self.make_rng('dropout')
+ x = nn.Dense(features=4)(x)
+ ...
- Is shorthand for::
+ class Foo(nn.Module):
+ @nn.compact
+ def __call__(self, x):
+ x = Bar()(x)
+ ...
- SomeModule(...).init({'params': jax.random.PRNGKey(0)}, ...)
+ **PRNGs for Module.init()**
+
+ In this example, two rngs are used:
- 2. A directionary mapping collections to the PRNGKey to initialize them with.
- This is useful if the Module has more rngs than one for ``params``.
-
- For instance, suppose an ``EncoderDecoder`` Module that requires an RNG for
- decoding tokens based on a categorical probability distribution. Then a
- typical call looks as follows::
+ * ``params`` is used for initializing the parameters of the model. This rng
+ is used to initialize the ``some_params`` parameter, and for initializing
+ the weights of the ``Dense`` Module used in ``Bar``.
+
+ * ``dropout`` is used for the dropout rng that is used in ``Bar``.
- EncoderDecoder(...).init({'params': rng1, 'decode': rng2}, ...)
+ So, ``Foo`` is initialized as follows::
+
+ init_rngs = {'params': random.PRNGKey(0), 'dropout': random.PRNGKey(1)}
+ variables = Foo().init(init_rngs, init_inputs)
- Note that even though they may be used inside submodules, the rngs for the
- collections should be defined at the top-level. So the ``EncoderDecoder``
- module above may contain a submodule ``Decoder``, which then uses the
- ``decode`` collection. The RNGs will be passed down to submodules
- automatically.
- """
- def __init__(self):
- super().__init__('First argument passed to an init function should be a '
- '`jax.PRNGKey` or a dictionary mapping strings to '
- '`jax.PRNGKey`.')
+ If a Module only requires an rng for ``params``, you can use::
+ SomeModule().init(rng, ...) # Shorthand for {'params': rng}
-class ApplyScopeInvalidRngsError(FlaxError):
- """
- When applying a Module, the `rng` argument should be a dictionary mapping
- collections to the PRNGKeys that are used when computing their new values.
- For instance, suppose an ``EncoderDecoder`` Module that requires an RNG for
- decoding tokens based on a categorical probability distribution. Then a
- typical call to :meth:`Module.apply() <flax.linen.Module.apply>` looks as
- follows::
+ **PRNGs for Module.apply()**
+
+ When applying ``Foo``, only the rng for ``dropout`` is needed, because
+ ``params`` is only used for initializing the Module parameters::
- EncoderDecoder(...).apply(params, ... {'decode': rng2}, ...)
+ Foo().apply(variables, inputs, rngs={'dropout': random.PRNGKey(2)})
- Remarks:
+ If a Module only requires an rng for ``params``, you don't have to provide
+ rngs for apply at all::
- * While :meth:`Module.init() <flax.linen.Module.init>` requires a rngs for
- the collection ``params``, this is not necessary when applying the module,
- because this collection is only use to initialize the model with.
- * Even though they may be used inside submodules, the rngs for the collections
- should be defined at the top-level. So the ``EncoderDecoder`` module above
- may contain a submodule ``Decoder``, which then uses the ``decode``
- collection. The RNGs will be passed down to submodules automatically.
+ SomeModule().apply(variables, inputs) # rngs=None
"""
- def __init__(self):
- super().__init__('rngs should be a dictionary mapping strings to '
- '`jax.PRNGKey`.')
-
+ def __init__(self, msg):
+ # For this error message we pass the entire message, since there are various
+ # different kinds of RNG errors and we want to be able to be more specific
+ # in the error message, while always linking to the same documentation.
+ super().__init__(msg)
+
class ApplyScopeInvalidVariablesError(FlaxError):
"""
When calling :meth:`Module.apply() <flax.linen.Module.apply>`, the first
- argument should be a variable dict. For more explanation on variable direct,
+ argument should be a variable dict. For more explanation on variable dicts,
please see :mod:`flax.core.variables`.
"""
def __init__(self):
@@ -166,11 +165,8 @@ def __call__(self, inputs, embed_name='embedding'):
(self.num_embeddings, self.features))
return embedding[inputs]
- vars = Embed(4, 8).init(random.PRNGKey(0), jnp.ones((5, 5, 1)))
- print(jax.tree_map(lambda x : x.shape, vars))
- _ = NoBiasDense().apply(vars, jnp.ones((5, 5, 1)), 'embed')
-
-
+ variables = Embed(4, 8).init(random.PRNGKey(0), jnp.ones((5, 5, 1)))
+ _ = NoBiasDense().apply(variables, jnp.ones((5, 5, 1)), 'embed')
"""
def __init__(self, param_name, scope_path):
super().__init__(f'No parameter named "{param_name}" exists in '
@@ -201,8 +197,8 @@ def __call__(self, x):
(((x.ndim - 1,), (0,)), ((), ())))
return y
- vars = NoBiasDense().init(random.PRNGKey(0), jnp.ones((5, 5, 1)))
- _ = NoBiasDense().apply(vars, jnp.ones((5, 5)))
+ variables = NoBiasDense().init(random.PRNGKey(0), jnp.ones((5, 5, 1)))
+ _ = NoBiasDense().apply(variables, jnp.ones((5, 5)))
"""
def __init__(self, param_name, scope_path, value_shape, init_shape):
super().__init__('Inconsistent shapes between value and initializer '
@@ -214,7 +210,7 @@ class ScopeVariableNotFoundError(FlaxError):
"""
This error is thrown when trying to use a variable in a Scope in a collection
that is immutable. In order to create this variable, mark the collection as
- mutable explicitly using the `mutable` keyword in
+ mutable explicitly using the ``mutable`` keyword in
:meth:`Module.apply() <flax.linen.Module.apply>`.
"""
def __init__(self, name, col, scope_path):
@@ -257,35 +253,38 @@ def __call__(self, x):
var.value = ...
...
- vars = MyModule.init(...)
+ v = MyModule.init(...)
...
- logits = MyModule.apply(vars, batch) # This throws an error.
- logits = MyModule.apply(vars, batch, mutable=['batch_stats']) # This works.
+ logits = MyModule.apply(v, batch) # This throws an error.
+ logits = MyModule.apply(v, batch, mutable=['batch_stats']) # This works.
"""
def __init__(self, col, variable_name, scope_path):
super().__init__(f'Cannot update variable "{variable_name}" in '
f'"{scope_path}" because collection "{col}" is immutable.')
-class ScopeNameTypeError(FlaxError):
- """
- Scope names should be strings.
- """
- def __init__(self, scope_name):
- super().__init__(f'The type of scope "{scope_name}" should be string but '
- f'it is {type(scope_name)}')
+#################################################
+# module.py errors #
+#################################################
-class ScopeNameInUseError(FlaxError):
+class NameInUseError(FlaxError):
"""
- Module names are unique within a subscope::
+ This error is raised when trying to create a submodule, param, or variable
+ with an existing name. They are all considered to be in the same namespace.
- class MyModule(nn.Module):
- @nn.compact
- def __call__(self, x):
- x = MySubModule(name='m1')(x)
- x = MySubModule(name='m1')(x) # This is not allowed.
- return x
+ **Sharing Submodules**
+
+ This is the wrong pattern for sharing submodules::
+
+ y = nn.Dense(feature=3, name='bar')(x)
+ z = nn.Dense(feature=3, name='bar')(x+epsilon)
+
+ Instead, modules should be shared by instance::
+
+ dense = nn.Dense(feature=3, name='bar')
+ y = dense(x)
+ z = dense(x+epsilon)
If submodules are not provided with a name, a unique name will be given to
them automatically::
@@ -296,9 +295,226 @@ def __call__(self, x):
x = MySubModule()(x)
x = MySubModule()(x) # This is fine.
return x
+
+ **Parameters and Variables**
+
+ A parameter name can collide with a submodule or variable, since they are all
+ stored in the same variable dict::
+
+ class Foo(nn.Module):
+ @nn.compact
+ def __call__(self, x):
+ bar = self.param('bar', nn.initializers.zeros, (1, ))
+ embed = nn.Embed(num_embeddings=2, features=5, name='bar') # <-- ERROR!
+
+ Variables should also have unique names, even if they have their own
+ collection::
+
+ class Foo(nn.Module):
+ @nn.compact
+ def __call__(self, inputs):
+ _ = self.param('mean', initializers.lecun_normal(), (2, 2))
+ _ = self.variable('stats', 'mean', initializers.zeros, (2, 2))
"""
- def __init__(self, scope_name):
- super().__init__(f'Duplicate use of scope name: "{scope_name}"')
+ def __init__(self, key_type, value, module_name):
+ # key_type is in {param, variable, submodule}.
+ super().__init__(f'Could not create {key_type} "{value}" in Module '
+ f'{module_name}: Name in use.')
+
+
+class AssignSubModuleError(FlaxError):
+ """
+ You are only allowed to create submodules in two places:
+
+ 1. If your Module is noncompact: inside
+ :meth:`Module.setup() <flax.linen.Module.setup>`.
+ 2. If your Module is compact: inside the method wrapped in
+ :meth:`nn.compact() <flax.linen.compact>`.
+
+ For instance, the following code throws this error, because ``nn.Conv`` is
+ created in ``__call__``, which is not marked as compact::
+
+ class Foo(nn.Module):
+ def setup(self):
+ pass
+
+ def __call__(self, x):
+ conv = nn.Conv(features=3, kernel_size=3)
+
+ Foo().init(random.PRNGKey(0), jnp.zeros((1,)))
+
+ Note that this error is also thrown if you partially defined a Module inside
+ setup::
+
+ class Foo(nn.Module):
+ def setup(self):
+ self.conv = functools.partial(nn.Conv, features=3)
+
+ def __call__(self, x):
+ x = self.conv(kernel_size=4)(x)
+ return x
+
+ Foo().init(random.PRNGKey(0), jnp.zeros((1,)))
+
+ In this case, ``self.conv(kernel_size=4)`` is called from ``__call__``, which
+ is disallowed beause it's neither within ``setup`` nor a method wrapped in
+ x``nn.compact``.
+ """
+ def __init__(self, cls):
+ super().__init__(f'Submodule {cls} must be defined in `setup()` or in a '
+ 'method wrapped in `@compact`')
+
+
+class SetAttributeInModuleSetupError(FlaxError):
+ """
+ You are not allowed to modify Module class attributes in
+ :meth:`Module.setup() <flax.linen.Module.setup>`::
+
+ class Foo(nn.Module):
+ features: int = 6
+
+ def setup(self):
+ self.features = 3 # <-- ERROR
+
+ def __call__(self, x):
+ return nn.Dense(self.features)(x)
+
+ variables = SomeModule().init(random.PRNGKey(0), jnp.ones((1, )))
+
+ Instead, these attributes should be set when initializing the Module::
+
+ class Foo(nn.Module):
+ features: int = 6
+
+ @nn.compact
+ def __call__(self, x):
+ return nn.Dense(self.features)(x)
+
+ variables = SomeModule(features=3).init(random.PRNGKey(0), jnp.ones((1, )))
+
+ TODO(marcvanzee): Link to a design note explaining why it's necessary for
+ modules to stay frozen (otherwise we can't safely clone them, which we use for
+ lifted transformations).
+ """
+ def __init__(self):
+ super().__init__(f'Module construction attributes are frozen.')
+
+
+class SetAttributeFrozenModuleError(FlaxError):
+ """
+ You can only assign Module attributes to ``self`` inside
+ :meth:`Module.setup() <flax.linen.Module.setup>`. Outside of that method, the
+ Module instance is frozen (i.e., immutable). This behavior is similar to
+ frozen Python dataclasses.
+
+ For instance, this error is raised in the following case::
+
+ class SomeModule(nn.Module):
+ @nn.compact
+ def __call__(self, x, num_features=10):
+ self.num_features = num_features # <-- ERROR!
+ x = nn.Dense(self.num_features)(x)
+ return x
+
+ s = SomeModule().init(random.PRNGKey(0), jnp.ones((5, 5)))
+
+ Similarly, the error is raised when trying to modify a submodule's attributes
+ after constructing it, even if this is done in the ``setup()`` method of the
+ parent module::
+
+ class Foo(nn.Module):
+ def setup(self):
+ self.dense = nn.Dense(features=10)
+ self.dense.features = 20 # <--- This is not allowed
+
+ def __call__(self, x):
+ return self.dense(x)
+ """
+ def __init__(self, module_cls, attr_name, attr_val):
+ super().__init__(f'Can\'t set {attr_name}={attr_val} for Module of type '
+ f'{module_cls}: Module instance is frozen outside of '
+ 'setup method.')
+
+
+class MultipleMethodsCompactError(FlaxError):
+ """
+ The ``@compact`` decorator may only be added to at most one method in a Flax
+ module. In order to resolve this, you can:
+
+ * remove ``@compact`` and define submodules and variables using
+ :meth:`Module.setup() <flax.linen.Module.setup>`.
+ * Use two separate modules that both have a unique ``@compact`` method.
+
+ TODO(marcvanzee): Link to a design note explaining the motivation behind this.
+ There is no need for an equivalent to `hk.transparent` and it makes submodules
+ much more sane because there is no need to prefix the method names.
+ """
+ def __init__(self):
+ super().__init__(f'Only one method per class can be @compact')
+
+class ReservedModuleAttributeError(FlaxError):
+ """
+ This error is thrown when creating a Module that is using reserved attributes.
+ The following attributes are reserved:
+
+ * ``parent``: The parent Module of this Module.
+ * ``name``: The name of this Module.
+ """
+ def __init__(self, annotations):
+ super().__init__(f'properties `parent` and `name` are reserved: '
+ f'{annotations}')
+
+
+class ApplyModuleInvalidMethodError(FlaxError):
+ """
+ When calling :meth:`Module.apply() <flax.linen.Module.apply>`, you can specify
+ the method to apply using parameter ``method``. This error is thrown if the
+ provided parameter is not a method in the Module and not a function with at
+ least one argument.
+
+ Learn more on the reference docs for
+ :meth:`Module.apply() <flax.linen.Module.apply>`.
+ """
+ def __init__(self, method):
+ super().__init__(f'Cannot call apply(): {method} is not a valid function '
+ 'for apply().')
+
+
+class CallCompactUnboundModuleError(FlaxError):
+ """
+ This error occurs when you are trying to call a Module directly, rather than
+ through :meth:`Module.apply() <flax.linen.Module.apply>`. For instance, the
+ error will be raised when trying to run this code::
+
+ from flax import linen as nn
+ import jax.numpy as jnp
+
+ test_dense = nn.Dense(10)
+ test_dense(jnp.ones((5,5)))
+
+ Instead, you should pass the variables (parameters and other state) via
+ :meth:`Module.apply() <flax.linen.Module.apply>` (or use
+ :meth:`Module.init() <flax.linen.Module.init>` to get initial variables)::
+
+ from jax import random
+ variables = test_dense.init(random.PRNGKey(0), jnp.ones((5,5)))
+
+ y = test_dense.apply(variables, jnp.ones((5,5)))
+ """
+ def __init__(self):
+ super().__init__('Can\'t call compact methods on unbound modules')
+
+
+class JaxOmnistagingError(FlaxError):
+ """
+ The Flax linen API requires JAX omnistaging to be enabled. In order to enable
+ this, add this to your imports::
+
+ from jax.config import config
+ config.enable_omnistaging()
+ """
+ def __init__(self):
+ super().__init__(f'Flax Linen requires Omnistaging to be enabled')
class InvalidCheckpointError(FlaxError):
@@ -310,4 +526,4 @@ class InvalidCheckpointError(FlaxError):
overwrite existing checkpoints in the target directory.
"""
def __init__(self, path, step):
- super().__init__(f'Trying to save an outdated checkpoint at step: "{step}" and path: "{path}".')
\ No newline at end of file
+ super().__init__(f'Trying to save an outdated checkpoint at step: "{step}" and path: "{path}".')
diff --git a/flax/linen/module.py b/flax/linen/module.py
--- a/flax/linen/module.py
+++ b/flax/linen/module.py
@@ -30,6 +30,7 @@
import numpy as np
import flax
+from flax import errors
from flax import traverse_util
from flax import serialization
from flax import core
@@ -53,10 +54,7 @@
def _check_omnistaging():
if not jax.config.omnistaging_enabled:
- raise RuntimeError(
- "Flax linen API requires JAX omnistaging to be enabled:\n"
- " from jax.config import config\n"
- " config.enable_omnistaging()")
+ raise errors.JaxOmnistagingError()
def _indent(x: str, num_spaces: int):
@@ -271,7 +269,7 @@ def wrapped_module_method(*args, **kwargs):
if is_compact_method:
if self.scope is None:
- raise ValueError("Can't call compact methods on unbound modules")
+ raise errors.CallCompactUnboundModuleError()
self._state.in_compact_method = True
_context.module_stack.append(self)
try:
@@ -303,8 +301,9 @@ def wrapped(self):
def _get_unbound_fn(method_or_fn: Callable[..., Any]) -> Callable[..., Any]:
"""Returns an unbound function from a method that is possibly bound.
- This means that the returned function does no longer depend on the instance
- of the class, which is passed as it first argument.
+ This means that if the passed function belongs of an instance of a class, then
+ the returned function does no longer depend on the instance, which is passed
+ as the first argument to the function.
Args:
method_or_fn: A class method or function.
@@ -312,12 +311,16 @@ def _get_unbound_fn(method_or_fn: Callable[..., Any]) -> Callable[..., Any]:
An unbound version of input function.
"""
if inspect.ismethod(method_or_fn):
- return method_or_fn.__func__ # pytype: disable=attribute-error
- elif callable(method_or_fn):
- return method_or_fn
- else:
- raise ValueError('Expect a function or method.')
+ method_or_fn = method_or_fn.__func__ # pytype: disable=attribute-error
+ # The method should be callable, and it should have at least one argument
+ # representing the class that is passed in.
+ if (not callable(method_or_fn) or
+ len(inspect.signature(method_or_fn).parameters) < 1):
+ raise errors.ApplyModuleInvalidMethodError(method_or_fn)
+
+ return method_or_fn
+
@dataclasses.dataclass
class _ModuleInternalState:
@@ -443,8 +446,7 @@ def _customized_dataclass_transform(cls):
# Use cls.__dict__ to get annotations of cls itself (no parent class).
annotations = dict(cls.__dict__.get('__annotations__', {}))
if 'parent' in annotations or 'name' in annotations:
- raise ValueError(
- f'properties `parent` and `name` are reserved: {annotations}')
+ raise errors.ReservedModuleAttributeError(annotations)
# Add `parent` and `name` default fields at end.
# We temporarily modify base class __dataclass_fields__ to force desired
# argument behavior and ordering from dataclass class-transform.
@@ -475,10 +477,7 @@ def _verify_single_or_no_compact(cls):
n_compact_fns = len([method_name for method_name in methods
if hasattr(getattr(cls, method_name), 'compact')])
if n_compact_fns > 1:
- raise RuntimeError(
- 'Only one method per class can be @compact. You can remove @compact '
- 'and define submodules and variables in setup(), or use two '
- 'separate modules.')
+ raise errors.MultipleMethodsCompactError()
@classmethod
def _wrap_module_methods(cls):
@@ -499,7 +498,7 @@ def __setattr__(self, name: str, val: Any):
"""Sets an attribute on this Module.
We overload setattr solely to support pythonic naming via assignment of
- submodules in the special setup() function::
+ submodules in the special :meth:`setup` function::
self.submodule_name = MyModule(...)
@@ -515,10 +514,11 @@ def __setattr__(self, name: str, val: Any):
if not self._state.in_setup and self._state.is_initialized:
# Raises a TypeError just like frozen python dataclasses.
- raise TypeError("Module instance is frozen outside of setup method.")
+ raise errors.SetAttributeFrozenModuleError(self.__class__.__name__, name,
+ val)
if is_dataclass_attr:
if self._state.in_setup:
- raise TypeError("Module construction attributes are frozen.")
+ raise errors.SetAttributeInModuleSetupError()
object.__setattr__(self, name, val)
# Submodules are being defined and attached in setup()
else:
@@ -534,7 +534,7 @@ def __getattr__(self, name: str) -> Any:
return self.__dict__[name]
else:
raise AttributeError(
- f"'{self.__class__.__name__}' object has no attribute '{name}'")
+ f'"{self.__class__.__name__}" object has no attribute "{name}"')
def __dir__(self) -> List[str]:
"""Call setup() before listing attributes."""
@@ -568,9 +568,7 @@ def __post_init__(self):
if self.parent._state.in_setup and self.name is None: # pytype: disable=attribute-error
return
if not self.parent._initialization_allowed:
- raise ValueError(
- 'Submodules must be defined in `setup()` or in a method wrapped '
- 'in `@compact`')
+ raise errors.AssignSubModuleError(self.__class__.__name__)
# Autonaming of submodules.
if self.name is None: # pytype: disable=attribute-error
prefix = f"{self.__class__.__name__}"
@@ -578,11 +576,8 @@ def __post_init__(self):
self.name = f"{prefix}_{cursor}"
self.parent._state.autoname_cursor[prefix] = cursor + 1
if self.parent._name_taken(self.name, self):
- raise ValueError(
- f"A variable of name {self.name} exists already, or "
- f"trying to share submodule {self.__class__.__name__} by name "
- f"{self.name}. To share submodules, store module instances as a"
- f" Python object or as an attribute on self and reuse.")
+ parent_class = self.parent.__class__.__name__
+ raise errors.NameInUseError('submodule', self.name, parent_class)
self.parent._state.children[self.name] = self
object.__setattr__(self, 'scope', self.parent.scope.push(self.name))
@@ -737,8 +732,7 @@ def variable(self, col: str, name: str, init_fn, *init_args) -> Variable:
'Variables must be initialized in `setup()` or in a method '
'wrapped in `@compact`')
if self._name_taken(name):
- raise ValueError(
- f'Name {name} already in use in {self.__class__.__name__}.')
+ raise errors.NameInUseError('variable', name, self.__class__.__name__)
v = self.scope.variable(col, name, init_fn, *init_args)
self._state.children[name] = col
return v
@@ -774,8 +768,7 @@ def param(self, name: str, init_fn: Callable[..., T], *init_args) -> T:
'Parameters must be initialized in `setup()` or in a method '
'wrapped in `@compact`')
if self._name_taken(name):
- raise ValueError(
- f'Name {name} already in use in {self.__class__.__name__}.')
+ raise errors.NameInUseError('param', name, self.__class__.__name__)
v = self.scope.param(name, init_fn, *init_args)
self._state.children[name] = 'params'
return v
@@ -871,12 +864,27 @@ def apply(self, variables: VariableDict, *args, rngs: RNGSequences = None,
"""Applies a module method to variables and returns output and modified variables.
Note that `method` should be set if one would like to call `apply` on a
- different class method than ``__call__``. For instance, suppose a Transformer
- modules has a method called `encode`, then the following calls `apply` on
- that method::
+ different class method than ``__call__``. For instance, suppose a
+ Transformer modules has a method called `encode`, then the following calls
+ `apply` on that method::
+
+ model = Transformer()
+ encoded = model.apply({'params': params}, x, method=Transformer.encode)
+
+ If a function instance is provided, the unbound function is used. For
+ instance, the example below is equivalent to the one above::
+
+ encoded = model.apply({'params': params}, x, method=model.encode)
+
+ Note ``method`` can also be a function that is not defined in
+ ``Transformer``. In that case, the function should have at least one
+ argument representing an instance of the Module class::
+
+ def other_fn(instance, ...):
+ instance.some_module_attr(...)
+ ...
- model = models.Transformer(config)
- encoded = model.apply({'params': params}, inputs, method=model.encode)
+ model.apply({'params': params}, x, method=other_fn)
Args:
variables: A dictionary containing variables keyed by variable
@@ -884,8 +892,9 @@ def apply(self, variables: VariableDict, *args, rngs: RNGSequences = None,
about variables.
rngs: a dict of PRNGKeys to initialize the PRNG sequences.
The "params" PRNG sequence is used to initialize parameters.
- method: The literal name of a method in this class. If provided, applies
- this method. If not provided, applies the ``__call__`` method.
+ method: A function to call apply on. This is generally a function in the
+ module. If provided, applies this method. If not provided, applies the
+ ``__call__`` method of the module.
mutable: Can be bool, str, or list. Specifies which collections should be
treated as mutable: ``bool``: all/no collections are mutable.
``str``: The name of a single mutable collection. ``list``: A
@@ -924,7 +933,10 @@ def init_with_output(self, rngs: Union[PRNGKey, RNGSequences], *args,
collections.
"""
if not isinstance(rngs, dict):
- assert rngs.shape == (2,)
+ if rngs.shape != (2,):
+ raise errors.InvalidRngError(
+ 'RNGs should be of shape (2,) in Module '
+ f'{self.__class__.__name__}, but rngs are: {rngs}')
rngs = {'params': rngs}
return self.apply(
{}, *args, rngs=rngs, method=method, mutable=True, **kwargs)
| diff --git a/tests/linen/module_test.py b/tests/linen/module_test.py
--- a/tests/linen/module_test.py
+++ b/tests/linen/module_test.py
@@ -253,35 +253,38 @@ def __call__(self, x):
return x + self.bias
x = jnp.array([1.])
scope = Scope({}, {'params': rngkey}, mutable=['params'])
- with self.assertRaisesRegex(ValueError, 'bias already in use'):
+ msg = 'Could not create param "bias" in Module Dummy: Name in use'
+ with self.assertRaisesRegex(errors.NameInUseError, msg):
y = Dummy(x.shape, parent=scope)(x)
- def test_setup_var_collision(self):
+ def test_call_var_collision(self):
rngkey = jax.random.PRNGKey(0)
class Dummy(nn.Module):
xshape: Tuple[int]
- def setup(self):
- self.bias = self.param('bias', initializers.ones, self.xshape)
- self.bias = self.param('bias', initializers.ones, self.xshape)
+ @compact
def __call__(self, x):
- return x + self.bias
+ bias = self.param('bias', initializers.ones, self.xshape)
+ bias = self.param('bias', initializers.ones, self.xshape)
+ return x + bias
x = jnp.array([1.])
scope = Scope({}, {'params': rngkey}, mutable=['params'])
- with self.assertRaisesRegex(ValueError, 'bias already in use'):
+ msg = 'Could not create param "bias" in Module Dummy: Name in use'
+ with self.assertRaisesRegex(errors.NameInUseError, msg):
y = Dummy(x.shape, parent=scope)(x)
- def test_call_var_collision(self):
+ def test_setup_var_collision(self):
rngkey = jax.random.PRNGKey(0)
class Dummy(nn.Module):
xshape: Tuple[int]
- @compact
+ def setup(self):
+ self.bias = self.param('bias', initializers.ones, self.xshape)
+ self.bias = self.param('bias', initializers.ones, self.xshape)
def __call__(self, x):
- bias = self.param('bias', initializers.ones, self.xshape)
- bias = self.param('bias', initializers.ones, self.xshape)
- return x + bias
+ return x + self.bias
x = jnp.array([1.])
scope = Scope({}, {'params': rngkey}, mutable=['params'])
- with self.assertRaisesRegex(ValueError, 'bias already in use'):
+ msg = 'Could not create param "bias" in Module Dummy: Name in use'
+ with self.assertRaisesRegex(errors.NameInUseError, msg):
y = Dummy(x.shape, parent=scope)(x)
def test_setattr_name_var_disagreement_allowed_in_lists(self):
@@ -320,43 +323,66 @@ def __call__(self, x):
y = Dummy(x.shape, parent=scope)(x)
self.assertEqual(y, jnp.array([2.]))
- def test_submodule_var_collision(self):
+ def test_submodule_var_collision_with_scope(self):
rngkey = jax.random.PRNGKey(0)
+
class Dummy(nn.Module):
xshape: Tuple[int]
+
def setup(self):
self.bias = self.param('bias', initializers.ones, self.xshape)
self.bias = DummyModule()
+
def __call__(self, x):
return x + self.bias
+
x = jnp.array([1.])
scope = Scope({}, {'params': rngkey}, mutable=['params'])
- msg = r'Duplicate use of scope name: "bias"'
- with self.assertRaisesRegex(errors.ScopeNameInUseError, msg):
+
+ msg = 'Duplicate use of scope name: "bias"'
+ with self.assertRaisesWithLiteralMatch(ValueError, msg):
y = Dummy(x.shape, parent=scope)(x)
+
+ def test_submodule_var_collision_with_submodule(self):
+ rngkey = jax.random.PRNGKey(0)
+
class Dummy(nn.Module):
xshape: Tuple[int]
+
def setup(self):
self.bias = self.param('bias', initializers.ones, self.xshape)
+
@compact
def __call__(self, x):
bias = DummyModule(name='bias')
return x + self.bias
+
x = jnp.array([1.])
scope = Scope({}, {'params': rngkey}, mutable=['params'])
- with self.assertRaisesRegex(ValueError, 'name bias exists already'):
+
+ msg = 'Could not create submodule "bias" in Module Dummy: Name in use'
+ with self.assertRaisesRegex(errors.NameInUseError, msg):
y = Dummy(x.shape, parent=scope)(x)
+
+ def test_submodule_var_collision_with_params(self):
+ rngkey = jax.random.PRNGKey(0)
+
class Dummy(nn.Module):
xshape: Tuple[int]
+
def setup(self):
self.bias = DummyModule()
+
@compact
def __call__(self, x):
bias = self.param('bias', initializers.ones, self.xshape)
return x + self.bias
+
x = jnp.array([1.])
scope = Scope({}, {'params': rngkey}, mutable=['params'])
- with self.assertRaisesRegex(ValueError, 'bias already'):
+
+ msg = 'Could not create param "bias" in Module Dummy: Name in use'
+ with self.assertRaisesRegex(errors.NameInUseError, msg):
y = Dummy(x.shape, parent=scope)(x)
def test_attr_param_name_collision(self):
@@ -369,7 +395,8 @@ def __call__(self, x):
return x + self.bias
x = jnp.array([1.])
scope = Scope({}, {'params': rngkey}, mutable=['params'])
- with self.assertRaisesRegex(ValueError, 'Name bias already in use'):
+ msg = 'Could not create param "bias" in Module Dummy: Name in use'
+ with self.assertRaisesRegex(errors.NameInUseError, msg):
y = Dummy(x.shape, parent=scope)(x)
def test_attr_submodule_name_collision(self):
@@ -382,11 +409,13 @@ def __call__(self, x):
return self.bias(x)
x = jnp.array([1.])
scope = Scope({}, {'params': rngkey}, mutable=['params'])
- with self.assertRaisesRegex(ValueError, 'bias exists already'):
+ msg = 'Could not create submodule "bias" in Module Dummy: Name in use'
+ with self.assertRaisesRegex(errors.NameInUseError, msg):
y = Dummy(x.shape, parent=scope)(x)
def test_only_one_compact_method(self):
- with self.assertRaisesRegex(RuntimeError, '@compact'):
+ msg = 'Only one method per class can be @compact'
+ with self.assertRaisesRegex(errors.MultipleMethodsCompactError, msg):
class Dummy(nn.Module):
@compact
def call1(self):
@@ -424,7 +453,9 @@ def __call__(self, x):
x = bar(x)
x = bar(x)
return x
- with self.assertRaisesRegex(ValueError, '@compact'):
+ msg = (r'Submodule Dense must be defined in `setup\(\)` or in a method '
+ 'wrapped in `@compact`')
+ with self.assertRaisesRegex(errors.AssignSubModuleError, msg):
Foo().init(random.PRNGKey(0), jnp.ones((1, 3)))
def test_forgotten_compact_annotation_with_explicit_parent(self):
@@ -440,7 +471,9 @@ def __call__(self, x):
x = bar(x)
return x
- with self.assertRaisesRegex(ValueError, '@compact'):
+ msg = (r'Submodule Dense must be defined in `setup\(\)` or in a method '
+ 'wrapped in `@compact`')
+ with self.assertRaisesRegex(errors.AssignSubModuleError, msg):
Foo().init(random.PRNGKey(0), jnp.ones((1, 3)))
def test_numpy_array_shape_class_args(self):
@@ -568,7 +601,8 @@ def test_module_is_hashable(self):
def test_module_with_scope_is_not_hashable(self):
module_a = nn.Dense(10, parent=Scope({}))
- with self.assertRaisesWithLiteralMatch(ValueError, 'Can\'t call __hash__ on modules that hold variables.'):
+ msg = 'Can\'t call __hash__ on modules that hold variables.'
+ with self.assertRaisesWithLiteralMatch(ValueError, msg):
hash(module_a)
def test_module_trace(self):
@@ -615,9 +649,38 @@ def __call__(self, x):
self.assertEqual(trace, expected_trace)
+ def test_module_apply_method(self):
+ class Foo(nn.Module):
+ @nn.compact
+ def __call__(self):
+ pass
+
+ def test(self):
+ pass
+
+ # We can use both instance and class methods in apply.
+ Foo().apply({}, method=Foo.test)
+ Foo().apply({}, method=Foo().test)
+
+ # We also use a function that is not in the provided Module, although it
+ # should have a first argument representing an instance of the Module (Foo
+ # in this case).
+ x = Foo().apply({}, method=lambda foo_instance: foo_instance)
+ self.assertEqual(type(x), type(Foo()))
+
+ # This is not allowed.
+ msg = 'Cannot call apply()'
+ with self.assertRaisesRegex(errors.ApplyModuleInvalidMethodError, msg):
+ Foo().apply({}, method=lambda: True)
+
+ with self.assertRaisesRegex(errors.ApplyModuleInvalidMethodError, msg):
+ Foo().apply({}, method='allowed_apply_fn')
+
+
def test_call_unbound_compact_module_methods(self):
dense = Dense(3)
- with self.assertRaisesRegex(ValueError, "compact.*unbound module"):
+ msg = r'Can\'t call compact methods on unbound modules'
+ with self.assertRaisesRegex(errors.CallCompactUnboundModuleError, msg):
dense(jnp.ones((1, )))
@@ -660,22 +723,23 @@ def bar(self):
empty = EmptyModule()
# It's fine to call methods of unbound methods that don't depend on
- # attributes defined during `setup`
+ # attributes defined during `setup`.
self.assertEqual(empty.bar(), 3)
- def test_call_unbound_noncompact_module_methods(self):
+ def test_call_unbound_noncompact_module_methods_depending_on_setup(self):
class EmptyModule(nn.Module):
- foo: int = 3
+ def setup(self):
+ self.foo = 2
def bar(self):
return self.foo
empty = EmptyModule()
- # It's fine to call methods of unbound methods that don't depend on
- # attributes defined during `setup`
- self.assertEqual(empty.bar(), 3)
-
+ msg = r'"EmptyModule" object has no attribute "foo"'
+ with self.assertRaisesRegex(AttributeError, msg):
+ empty.bar()
+
def test_module_with_attrs(self):
class Foo(nn.Module):
@@ -700,18 +764,24 @@ def setup(self):
def __call__(self):
self.i = 2 # This is not allowed.
- with self.assertRaisesWithLiteralMatch(TypeError, "Module instance is frozen outside of setup method."):
+ msg = ('Can\'t set i=2 for Module of type Foo: Module instance is frozen '
+ 'outside of setup method.')
+ with self.assertRaisesRegex(errors.SetAttributeFrozenModuleError, msg):
Foo().init(random.PRNGKey(0))
+
def test_compact_module_frozen(self):
class Foo(nn.Module):
@nn.compact
def __call__(self):
self.i = 2
- with self.assertRaisesWithLiteralMatch(TypeError, "Module instance is frozen outside of setup method."):
+ msg = ('Can\'t set i=2 for Module of type Foo: Module instance is frozen '
+ 'outside of setup method.')
+ with self.assertRaisesRegex(errors.SetAttributeFrozenModuleError, msg):
Foo().init(random.PRNGKey(0))
+
def test_submodule_frozen(self):
class Foo(nn.Module):
@nn.compact
@@ -719,7 +789,9 @@ def __call__(self):
dense = nn.Dense(10)
dense.features = 20 # <--- This is not allowed
- with self.assertRaisesWithLiteralMatch(TypeError, "Module instance is frozen outside of setup method."):
+ msg = ('Can\'t set features=20 for Module of type Dense: Module instance '
+ 'is frozen outside of setup method.')
+ with self.assertRaisesRegex(errors.SetAttributeFrozenModuleError, msg):
Foo().init(random.PRNGKey(0))
@@ -727,10 +799,11 @@ def test_module_call_not_implemented(self):
class Foo(nn.Module):
pass
- foo = Foo()
- with self.assertRaisesWithLiteralMatch(AttributeError, "'Foo' object has no attribute '__call__'"):
- foo.init(random.PRNGKey(0))
-
+ msg = '"Foo" object has no attribute "__call__"'
+ with self.assertRaisesRegex(AttributeError, msg):
+ Foo().init(random.PRNGKey(0))
+
+
def test_is_mutable_collection(self):
class EmptyModule(nn.Module):
def __call__(self):
@@ -795,7 +868,8 @@ class B(nn.Module):
def setup(self):
self.c = nn.Dense(2)
- with self.assertRaisesWithLiteralMatch(AttributeError, "'B' object has no attribute 'c'"):
+ msg = '"B" object has no attribute "c"'
+ with self.assertRaisesRegex(AttributeError, msg):
A().init(random.PRNGKey(0))
def test_unbound_setup_call(self):
| Improve Error Message: Naming a module in setup
The error `TypeError: Module instance is frozen outside of setup method.` is thrown any time you assign module attributes somewhere other than within `setup`. It is not always clear to users how to resolve this.
-- Example 1
This code
```
def setup(self):
  self.layer = nn.Dense(...)
  self.layer.name = 'dense'
```
Throws the error `TypeError: Module instance is frozen outside of setup method.`, which confuses users.
-- Example 2
See: #936
| Similarly, users can be confused about the error message `ValueError: In setup, assign names of Modules via self.<name> and not using keyword argument name="<name>"`. We should provide a more elaborate error message with an example.
A bit more color here:
For submodules defined inline within a `@nn.compact` method, you either explicitly pass a name to the submodule via the `name` argument of its constructor, or a name is generated automatically if you don't, e.g.:
```py
# ... inside a module
@nn.compact
def func(self, x):
  dense1 = Dense(features=16)  # submodule name autoassigned to "Dense1"
  dense2 = Dense(features=16, name='final')  # submodule name is "final"
```
For submodules defined inside `setup`, names are always explicit and are derived from the name of the attribute on which they are assigned (via `__setattr__`, following a very similar logic to that of PyTorch):
```py
# ... inside a module
def setup(self):
  self.final = Dense(features=16)  # submodule name is "final"
```
This issue is very closely related to #524.
In short, as summarized by @salayatana66, "really in `setup` the attribute name is the name of the module." | 2021-03-02T14:43:05 |
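A minimal sketch contrasting the failing patterns from the examples above with the convention described in these comments (feature sizes are illustrative):
```python
import flax.linen as nn

class Wrong(nn.Module):
  def setup(self):
    self.layer = nn.Dense(features=16)
    self.layer.name = 'dense'  # Example 1: "Module instance is frozen outside of setup method."
    # self.layer = nn.Dense(features=16, name='dense')  # also rejected: in setup,
    # the name must come from the attribute, not from the `name=` keyword.

class Right(nn.Module):
  def setup(self):
    # The attribute name *is* the submodule name: parameters live under 'dense'.
    self.dense = nn.Dense(features=16)

  def __call__(self, x):
    return self.dense(x)
```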
google/flax | 1,075 | google__flax-1075 | [
"1074"
] | d82de14a674d8356b1c310abd6ca365086dfa6f1 | diff --git a/flax/linen/module.py b/flax/linen/module.py
--- a/flax/linen/module.py
+++ b/flax/linen/module.py
@@ -329,11 +329,16 @@ class _ModuleInternalState:
in_compact_method: bool = False
in_setup: bool = False
setup_called: bool = False
+ is_initialized: bool = False
autoname_cursor: Optional[dict] = dataclasses.field(default_factory=dict)
children: Dict[str, Union[str, 'Module']] = dataclasses.field(default_factory=dict)
def reset(self):
- """Resets transient state."""
+ """Resets transient state.
+
+ This function is called after each module method, so only attributes that
+ are method-dependent are reset.
+ """
self.in_compact_method = False
self.in_setup = False
self.autoname_cursor = dict()
@@ -344,6 +349,7 @@ def export(self):
in_compact_method=self.in_compact_method,
in_setup=self.in_setup,
setup_called=False, # setup_called is object local, not shared.
+ is_initialized=self.is_initialized,
autoname_cursor=dict(self.autoname_cursor))
return cloned
@@ -351,6 +357,7 @@ def reimport(self, other):
"""Re-imports transform-preserved state from across transform boundary."""
self.in_compact_method = other.in_compact_method
self.in_setup = other.in_setup
+ self.is_initialized = other.is_initialized
self.autoname_cursor = dict(other.autoname_cursor)
_uninitialized_module_internal_state = _ModuleInternalState()
@@ -504,8 +511,8 @@ def __setattr__(self, name: str, val: Any):
val: Value of the attribute.
"""
is_dataclass_attr = name in self.__dataclass_fields__ and self.__dataclass_fields__[name].init # pytype: disable=attribute-error
-
- if not self._state.in_setup and not is_dataclass_attr:
+
+ if not self._state.in_setup and self._state.is_initialized:
# Raises a TypeError just like frozen python dataclasses.
raise TypeError("Module instance is frozen outside of setup method.")
if is_dataclass_attr:
@@ -584,6 +591,8 @@ def __post_init__(self):
else:
raise ValueError("parent must be None, Module or Scope")
+ self._state.is_initialized = True
+
def __repr__(self):
return _module_repr(self)
| diff --git a/tests/linen/module_test.py b/tests/linen/module_test.py
--- a/tests/linen/module_test.py
+++ b/tests/linen/module_test.py
@@ -691,19 +691,35 @@ def __call__(self, x):
variables = foo.init(random.PRNGKey(0), x)
self.assertEqual(variables['params']['bar']['kernel'].shape, (2, 3))
- def test_module_frozen(self):
+ def test_noncompact_module_frozen(self):
class Foo(nn.Module):
- bar: nn.Dense = dataclasses.field(init=False)
-
def setup(self):
- self.i = 1
+ self.i = 1 # This is allowed (for assigning submodules).
+
+ def __call__(self):
+ self.i = 2 # This is not allowed.
+ with self.assertRaisesWithLiteralMatch(TypeError, "Module instance is frozen outside of setup method."):
+ Foo().init(random.PRNGKey(0))
+
+ def test_compact_module_frozen(self):
+ class Foo(nn.Module):
+ @nn.compact
def __call__(self):
self.i = 2
- foo = Foo()
with self.assertRaisesWithLiteralMatch(TypeError, "Module instance is frozen outside of setup method."):
- foo.init(random.PRNGKey(0))
+ Foo().init(random.PRNGKey(0))
+
+ def test_submodule_frozen(self):
+ class Foo(nn.Module):
+ @nn.compact
+ def __call__(self):
+ dense = nn.Dense(10)
+ dense.features = 20 # <--- This is not allowed
+
+ with self.assertRaisesWithLiteralMatch(TypeError, "Module instance is frozen outside of setup method."):
+ Foo().init(random.PRNGKey(0))
def test_is_mutable_collection(self):
class EmptyModule(nn.Module):
| Bug in error catching
The following code throws an error at the indicated line:
```
class SomeModule(nn.Module):
  @nn.compact
  def __call__(self, x):
    dense = nn.Dense(10)
    dense.features = 20
    dense.new_attr = 20  # <--- ERROR!
    return dense(x)

SomeModule().init(random.PRNGKey(0), jnp.ones((5, 5)))
```
The error is: `Module instance is frozen outside of setup method.` This seems odd: if `dense` were frozen, why are we allowed to modify the existing attribute `features`, but we aren't allowed to add a new one called `new_attr`? It seems we should not be allowed to modify `dense.features` at all, and we should already throw an error on the line before.
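A minimal sketch of the behavior the fix in this PR enforces (see `test_submodule_frozen` in the test patch above): any assignment on an already-constructed submodule is rejected, not just new attributes:
```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class SomeModule(nn.Module):
  @nn.compact
  def __call__(self, x):
    dense = nn.Dense(10)
    dense.features = 20  # now raises TypeError: Module instance is frozen outside of setup method.
    return dense(x)

# SomeModule().init(jax.random.PRNGKey(0), jnp.ones((5, 5)))  # raises on the line above
```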
| 2021-03-03T12:07:25 |
|
google/flax | 1,120 | google__flax-1120 | [
"1091"
] | 15b6229d1a55d81e0b5ae6ee38642e1b5f160f6c | diff --git a/flax/errors.py b/flax/errors.py
--- a/flax/errors.py
+++ b/flax/errors.py
@@ -298,4 +298,16 @@ def __call__(self, x):
return x
"""
def __init__(self, scope_name):
- super().__init__(f'Duplicate use of scope name: "{scope_name}"')
\ No newline at end of file
+ super().__init__(f'Duplicate use of scope name: "{scope_name}"')
+
+
+class InvalidCheckpointError(FlaxError):
+ """
+ A checkpoint cannot be stored in a directory that already has
+ a checkpoint at the current or a later step.
+
+ You can pass `overwrite=True` to disable this behavior and
+ overwrite existing checkpoints in the target directory.
+ """
+ def __init__(self, path, step):
+ super().__init__(f'Trying to save an outdated checkpoint at step: "{step}" and path: "{path}".')
\ No newline at end of file
diff --git a/flax/training/checkpoints.py b/flax/training/checkpoints.py
--- a/flax/training/checkpoints.py
+++ b/flax/training/checkpoints.py
@@ -25,6 +25,7 @@
from absl import logging
from flax import core
+from flax import errors
from flax import serialization
from tensorflow.io import gfile
@@ -73,7 +74,8 @@ def save_checkpoint(ckpt_dir,
target,
step,
prefix='checkpoint_',
- keep=1):
+ keep=1,
+ overwrite=False):
"""Save a checkpoint of the model.
Attempts to be pre-emption safe by writing to temporary before
@@ -85,7 +87,8 @@ def save_checkpoint(ckpt_dir,
step: int or float: training step number or other metric number.
prefix: str: checkpoint file name prefix.
keep: number of past checkpoint files to keep.
-
+ overwrite: overwrite existing checkpoint files if a checkpoint
+ at the current or a later step already exits (default: False).
Returns:
Filename of saved checkpoint.
"""
@@ -94,16 +97,38 @@ def save_checkpoint(ckpt_dir,
ckpt_tmp_path = _checkpoint_path(ckpt_dir, 'tmp', prefix)
ckpt_path = _checkpoint_path(ckpt_dir, step, prefix)
gfile.makedirs(os.path.dirname(ckpt_path))
+ base_path = os.path.join(ckpt_dir, prefix)
+ checkpoint_files = gfile.glob(base_path + '*')
+
+ if ckpt_path in checkpoint_files:
+ if not overwrite:
+ raise errors.InvalidCheckpointError(ckpt_path, step)
+ else:
+ checkpoint_files.append(ckpt_path)
+
+ checkpoint_files = natural_sort(checkpoint_files)
+ if ckpt_path != checkpoint_files[-1]:
+ if not overwrite:
+ raise errors.InvalidCheckpointError(ckpt_path, step)
+
with gfile.GFile(ckpt_tmp_path, 'wb') as fp:
fp.write(serialization.to_bytes(target))
# Rename once serialization and writing finished.
- gfile.rename(ckpt_tmp_path, ckpt_path)
+ gfile.rename(ckpt_tmp_path, ckpt_path, overwrite=overwrite)
logging.info('Saved checkpoint at %s', ckpt_path)
+ print(ckpt_path)
+
+ # Remove newer checkpoints
+ if overwrite:
+ ind = checkpoint_files.index(ckpt_path) + 1
+ newer_ckpts = checkpoint_files[ind:]
+ checkpoint_files = checkpoint_files[:ind]
+ for path in newer_ckpts:
+ logging.info('Removing checkpoint at %s', path)
+ gfile.remove(path)
# Remove old checkpoint files.
- base_path = os.path.join(ckpt_dir, f'{prefix}')
- checkpoint_files = natural_sort(gfile.glob(base_path + '*'))
if len(checkpoint_files) > keep:
old_ckpts = checkpoint_files[:-keep]
for path in old_ckpts:
| diff --git a/tests/checkpoints_test.py b/tests/checkpoints_test.py
--- a/tests/checkpoints_test.py
+++ b/tests/checkpoints_test.py
@@ -21,6 +21,7 @@
from absl.testing import absltest
import flax
from flax import core
+from flax import errors
from flax.training import checkpoints
import jax
from jax import numpy as jnp
@@ -156,6 +157,32 @@ def test_save_restore_checkpoints(self):
checkpoints.restore_checkpoint(
tmp_dir, test_object0, step=5, prefix='test_')
+ def test_overwrite_checkpoints(self):
+ tmp_dir = self.create_tempdir().full_path
+ test_object0 = {'a': np.array([0, 0, 0], np.int32)}
+ test_object = {'a': np.array([1, 2, 3], np.int32)}
+
+ checkpoints.save_checkpoint(
+ tmp_dir, test_object0, 0, keep=1)
+ with self.assertRaises(errors.InvalidCheckpointError):
+ checkpoints.save_checkpoint(
+ tmp_dir, test_object, 0, keep=1)
+ checkpoints.save_checkpoint(
+ tmp_dir, test_object, 0, keep=1, overwrite=True)
+ new_object = checkpoints.restore_checkpoint(tmp_dir, test_object0)
+ jtu.check_eq(new_object, test_object)
+ checkpoints.save_checkpoint(
+ tmp_dir, test_object0, 2, keep=1, overwrite=True)
+ new_object = checkpoints.restore_checkpoint(tmp_dir, test_object)
+ jtu.check_eq(new_object, test_object0)
+ with self.assertRaises(errors.InvalidCheckpointError):
+ checkpoints.save_checkpoint(
+ tmp_dir, test_object, 1, keep=1)
+ checkpoints.save_checkpoint(
+ tmp_dir, test_object, 1, keep=1, overwrite=True)
+ new_object = checkpoints.restore_checkpoint(tmp_dir, test_object0)
+ jtu.check_eq(new_object, test_object)
+
def test_save_restore_checkpoints_w_float_steps(self):
tmp_dir = self.create_tempdir().full_path
test_object0 = {'a': np.array([0, 0, 0], np.int32),
@@ -174,20 +201,14 @@ def test_save_restore_checkpoints_w_float_steps(self):
jtu.check_eq(new_object, test_object1)
checkpoints.save_checkpoint(
tmp_dir, test_object1, 2.0, prefix='test_', keep=1)
- checkpoints.save_checkpoint(
- tmp_dir, test_object2, 1.0, prefix='test_', keep=1)
- new_object = checkpoints.restore_checkpoint(
- tmp_dir, test_object0, prefix='test_')
- jtu.check_eq(new_object, test_object1)
+ with self.assertRaises(errors.InvalidCheckpointError):
+ checkpoints.save_checkpoint(
+ tmp_dir, test_object2, 1.0, prefix='test_', keep=1)
checkpoints.save_checkpoint(
tmp_dir, test_object2, 3.0, prefix='test_', keep=2)
- checkpoints.save_checkpoint(
- tmp_dir, test_object1, -1.0, prefix='test_', keep=2)
- new_object = checkpoints.restore_checkpoint(
- tmp_dir, test_object0, prefix='test_')
self.assertIn('test_3.0', os.listdir(tmp_dir))
self.assertIn('test_2.0', os.listdir(tmp_dir))
- jtu.check_eq(new_object, test_object2)
+ jtu.check_eq(new_object, test_object1)
def test_save_restore_checkpoints_target_none(self):
tmp_dir = self.create_tempdir().full_path
| flax.training.checkpoint.save_checkpoint with keep=1 leads to "file already exists" error
I'm using Jax latest, Tensorflow latest, Jaxlib 0.1.59.
Let's say I run a Python script that calls save_checkpoint with keep=1 once. Then, when I rerun the same script, it gives me a "tensorflow.python.framework.errors_impl.AlreadyExistsError: file already exists". This is really annoying because it means I have to manually delete the checkpoints if I want to rerun the same script, which happens a lot when debugging.
I think that this happens because the extra files are only deleted at the end of save_checkpoint.
> File "/localscratch/jolicoea.63359842.0/1/ScoreSDEMore/run_lib.py", line 469, in evaluate
> checkpoints.save_checkpoint(
> File "/localscratch/jolicoea.63359842.0/1/env/lib/python3.8/site-packages/flax/training/checkpoints.py", line 99, in save_checkpoint
> gfile.rename(ckpt_tmp_path, ckpt_path)
> File "/localscratch/jolicoea.63359842.0/1/env/lib/python3.8/site-packages/tensorflow/python/lib/io/file_io.py", line 548, in rename_v2
> _pywrap_file_io.RenameFile(
> tensorflow.python.framework.errors_impl.AlreadyExistsError: file already exists
| I'm not quite sure how to resolve this. We definitely want to avoid the case where we accidentally overwrite a checkpoint.
For debugging purposes wouldn't it be better to not checkpoint at all if you later want to discard these files anyway?
There could be an option called "overwrite" which defaults to False.
We could add an overwrite option. But I think the current implementation has more issues when you reuse the same checkpoint directory. It might also store a checkpoint at, say, step 10 while you already have a checkpoint at step 100, leading to the new checkpoint at step 10 being removed immediately.
I'll try to add an overwrite option while also fixing the issue of storing checkpoints at steps older than the ones already stored | 2021-03-12T14:05:22 |
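For reference, a minimal usage sketch of the `overwrite` flag introduced by the patch above (condensed from the new `test_overwrite_checkpoints`); the temporary directory and dummy states are illustrative:
```python
import tempfile

import numpy as np
from flax import errors
from flax.training import checkpoints

ckpt_dir = tempfile.mkdtemp()
state_v0 = {'a': np.array([0, 0, 0], np.int32)}
state_v1 = {'a': np.array([1, 2, 3], np.int32)}

checkpoints.save_checkpoint(ckpt_dir, state_v0, step=0, keep=1)

# Re-saving at the same (or an older) step now raises a clear error
# instead of failing inside gfile.rename with AlreadyExistsError.
try:
  checkpoints.save_checkpoint(ckpt_dir, state_v1, step=0, keep=1)
except errors.InvalidCheckpointError:
  pass

# With overwrite=True the existing checkpoint is replaced and any newer
# checkpoints are removed, so rerunning the same script just works.
checkpoints.save_checkpoint(ckpt_dir, state_v1, step=0, keep=1, overwrite=True)
restored = checkpoints.restore_checkpoint(ckpt_dir, state_v0)  # == state_v1
```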
google/flax | 1,180 | google__flax-1180 | [
"1177"
] | bc9ee1aac5e155af2526f22889563fa697dc981d | diff --git a/flax/optim/base.py b/flax/optim/base.py
--- a/flax/optim/base.py
+++ b/flax/optim/base.py
@@ -431,13 +431,14 @@ def _get_params_dict(inputs):
class _ShapeDtype:
shape: Any
dtype: Any
+ _value: Any
_indices: List[int]
@classmethod
def create(cls, value):
if not isinstance(value, jnp.ndarray):
value = jnp.array(value)
- return cls(shape=value.shape, dtype=value.dtype, _indices=[])
+ return cls(shape=value.shape, dtype=value.dtype, _value=value, _indices=[])
class MultiOptimizer(OptimizerDef):
@@ -491,37 +492,45 @@ def __init__(
self.sub_optimizers = sub_optimizers
def init_state(self, params):
- sub_states = []
- matches = jax.tree_map(_ShapeDtype.create, params)
+ param_states = jax.tree_map(_ShapeDtype.create, params)
overlap = False
for idx, (traversal,
opt) in enumerate(zip(self.traversals, self.sub_optimizers)):
- for match in traversal.iterate(matches):
+ for match in traversal.iterate(param_states):
match._indices.append(idx)
overlap |= len(match._indices) > 1
- params_t = tuple(traversal.iterate(params))
- state = opt.init_state(params_t)
- sub_states.append(state)
-
if overlap:
raise ValueError(
'Multiple optimizers match the same leaves : ' +
- str(jax.tree_map(lambda match: match._indices, matches)))
- return tuple(sub_states)
+ str(jax.tree_map(lambda match: match._indices, param_states)))
+ for traversal, opt in zip(self.traversals, self.sub_optimizers):
+ param_states = traversal.update(lambda x: opt.init_param_state(x._value), param_states)
+ # Use None as initial state for params that are not optimized by any sub optimizer.
+ param_states = jax.tree_map(lambda x: None if isinstance(x, _ShapeDtype) else x, param_states)
+
+ return OptimizerState(jnp.asarray(0, dtype=jnp.int32), param_states)
- def apply_gradient(self, hyper_params, params, states, grads):
+ def apply_gradient(self, hyper_params, params, state, grads):
new_params = params
- new_states = []
- it = zip(self.traversals, self.sub_optimizers, hyper_params, states)
- for focus, opt, hp, s in it:
- p = tuple(focus.iterate(params))
- g = tuple(focus.iterate(grads))
- new_p, new_s = opt.apply_gradient(hp, p, s, g)
- new_params = focus.set(list(new_p), new_params)
- new_states.append(new_s)
- return new_params, tuple(new_states)
+ it = zip(self.traversals, self.sub_optimizers, hyper_params)
+ new_param_states = jax.tree_map(_ShapeDtype.create, params)
+ for focus, opt, hp in it:
+ ps = tuple(focus.iterate(params))
+ gs = tuple(focus.iterate(grads))
+ ss = tuple(focus.iterate(state.param_states))
+ new_ps = []
+ new_ss = []
+ for p, g, s in zip(ps, gs, ss):
+ new_p, new_s = opt.apply_param_gradient(state.step, hp, p, s, g)
+ new_ps.append(new_p)
+ new_ss.append(new_s)
+ new_params = focus.set(new_ps, new_params)
+ new_param_states = focus.set(new_ss, new_param_states)
+ # Update state to None when param is not optimized by any sub optimizer.
+ new_param_states = jax.tree_map(lambda x: None if isinstance(x, _ShapeDtype) else x, new_param_states)
+ return new_params, OptimizerState(state.step + 1, new_param_states)
def update_hyper_params(self, **hyper_param_overrides):
"""Updates the hyper parameters with a set of overrides.
| diff --git a/tests/optim_test.py b/tests/optim_test.py
--- a/tests/optim_test.py
+++ b/tests/optim_test.py
@@ -94,12 +94,12 @@ def test_optimizer_with_focus(self):
opt_def = optim.GradientDescent(learning_rate=1.)
t_a = traverse_util.t_identity['a']
optimizer = opt_def.create(params, focus=t_a)
- expected_state = (optim.OptimizerState(0, ((),)),)
+ expected_state = optim.OptimizerState(0, {'a': (), 'b': None})
self.assertEqual(optimizer.state, expected_state)
grads = {'a': -1., 'b': -2.}
new_optimizer = optimizer.apply_gradient(grads)
expected_params = {'a': 1., 'b': 0.}
- expected_state = (optim.OptimizerState(1, ((),)),)
+ expected_state = optim.OptimizerState(1, {'a': (), 'b': None})
self.assertEqual(new_optimizer.state, expected_state)
self.assertEqual(new_optimizer.target, expected_params)
@@ -179,13 +179,13 @@ def test_multi_optimizer(self):
_GradientDescentHyperParams(10.)
]
self.assertEqual(optimizer_def.hyper_params, expected_hyper_params)
- expected_state = (optim.OptimizerState(0, ((),)),) * 2
+ expected_state = optim.OptimizerState(0, {'a': (), 'b': (), 'c': {}})
self.assertEqual(state, expected_state)
grads = {'a': -1., 'b': -2., 'c': {}}
new_params, new_state = optimizer_def.apply_gradient(
optimizer_def.hyper_params, params, state, grads)
expected_params = {'a': 1., 'b': 20., 'c': {}}
- expected_state = (optim.OptimizerState(1, ((),)),) * 2
+ expected_state = optim.OptimizerState(1, {'a': (), 'b': (), 'c': {}})
self.assertEqual(new_state, expected_state)
self.assertEqual(new_params, expected_params)
# override learning_rate
| Invariant state for MultiOptimizer
It would be more user-friendly to keep the parameter structure in optimizer.state instead of flattening the parameters for each sub-optimizer. This is especially useful for sharded_jit and friends, which are often used with a fine-grained partitioning of the model params and optimizer state.
| 2021-03-24T14:21:53 |
|
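To make the requested invariant concrete, here is a small sketch based on the updated expectations in `optim_test.py` above; the traversal and optimizer choices are only illustrative:
```python
from flax import optim, traverse_util

params = {'a': 0., 'b': 0.}
# Only optimize the 'a' leaf; 'b' is left untouched.
t_a = traverse_util.t_identity['a']
optimizer = optim.GradientDescent(learning_rate=1.).create(params, focus=t_a)

# Before this change: optimizer.state was a tuple of flat per-sub-optimizer
# states, e.g. (OptimizerState(0, ((),)),).
# After this change: optimizer.state.param_states mirrors the params pytree,
# with None for leaves that no sub-optimizer touches:
#   OptimizerState(step=0, param_states={'a': (), 'b': None})
print(optimizer.state)
```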
google/flax | 1,182 | google__flax-1182 | [
"969"
] | 767a3e94991759ec5f4f85e0fb00fd0eb27e3275 | diff --git a/flax/core/frozen_dict.py b/flax/core/frozen_dict.py
--- a/flax/core/frozen_dict.py
+++ b/flax/core/frozen_dict.py
@@ -120,9 +120,19 @@ def pop(self, key: K) -> Tuple['FrozenDict[K, V]', V]:
return new_self, value
def unfreeze(self) -> Dict[K, V]:
+ """Unfreeze this FrozenDict.
+
+ Returns:
+ An unfrozen version of this FrozenDict instance.
+ """
return unfreeze(self)
- def tree_flatten(self):
+ def tree_flatten(self) -> Tuple[Tuple[Dict[Any, Any]], Tuple[()]]:
+ """Flattens this FrozenDict.
+
+ Returns:
+ A flattened version of this FrozenDict instance.
+ """
return (self._dict,), ()
@classmethod
diff --git a/flax/training/checkpoints.py b/flax/training/checkpoints.py
--- a/flax/training/checkpoints.py
+++ b/flax/training/checkpoints.py
@@ -175,9 +175,12 @@ def restore_checkpoint(ckpt_dir,
Sorts the checkpoint files naturally, returning the highest-valued
file, e.g.:
- ckpt_1, ckpt_2, ckpt_3 --> ckpt_3
- ckpt_0.01, ckpt_0.1, ckpt_0.001 --> ckpt_0.1
- ckpt_-1.0, ckpt_1.0, ckpt_1e5 --> ckpt_1e5
+
+ * ``ckpt_1, ckpt_2, ckpt_3 --> ckpt_3``
+
+ * ``ckpt_0.01, ckpt_0.1, ckpt_0.001 --> ckpt_0.1``
+
+ * ``ckpt_-1.0, ckpt_1.0, ckpt_1e5 --> ckpt_1e5``
Args:
ckpt_dir: str: checkpoint file or directory of checkpoints to restore from.
@@ -252,7 +255,7 @@ def convert_pre_linen(params):
submodule class. With Linen this behavior has changed to keep separate
submodule counts per module class.
- Consider the following module:
+ Consider the following module::
class Model(nn.Module):
@nn.compact
@@ -262,26 +265,28 @@ def __call__(self, x):
return x
In pre-Linen the resulting params would have had the structure:
- {'Conv_0': { ... }, 'Dense_1': { ... } }
+
+ ``{'Conv_0': { ... }, 'Dense_1': { ... } }``
With Linen the resulting params would instead have had the structure:
- {'Conv_0': { ... }, 'Dense_0': { ... } }
- To convert from pre-Linen format to Linen simply call:
+ ``{'Conv_0': { ... }, 'Dense_0': { ... } }``
+
+ To convert from pre-Linen format to Linen simply call::
params = convert_pre_linen(pre_linen_params)
Note that you can also use this utility to convert pre-Linen collections
because they're following the same module naming. Note though that collections
were "flat" in pre-Linen and first need to be unflattened before they can be
- used with this function:
+ used with this function::
batch_stats = convert_pre_linen(flax.traverse_util.unflatten_dict({
tuple(k.split('/')[1:]): v
for k, v in pre_linen_model_state.as_dict().items()
}))
- Then Linen variables can be defined from these converted collections:
+ Then Linen variables can be defined from these converted collections::
variables = {'params': params, 'batch_stats': batch_stats}
diff --git a/flax/training/lr_schedule.py b/flax/training/lr_schedule.py
--- a/flax/training/lr_schedule.py
+++ b/flax/training/lr_schedule.py
@@ -58,12 +58,13 @@ def create_stepped_learning_rate_schedule(base_learning_rate, steps_per_epoch,
by specified amounts at specified epochs. The steps are given as
the `lr_sched_steps` parameter. A common ImageNet schedule decays the
learning rate by a factor of 0.1 at epochs 30, 60 and 80. This would be
- specified as:
- [
- [30, 0.1],
- [60, 0.01],
- [80, 0.001]
- ]
+ specified as::
+
+ [
+ [30, 0.1],
+ [60, 0.01],
+ [80, 0.001]
+ ]
This function also offers a learing rate warmup as per
https://arxiv.org/abs/1706.02677, for the purpose of training with large
| Add reference documentation for FrozenDict on ReadTheDocs
| 2021-03-25T10:49:42 |
||
google/flax | 1,203 | google__flax-1203 | [
"1192"
] | 82ce38b202013c3b1b121ac379b97a0a37350927 | diff --git a/flax/linen/module.py b/flax/linen/module.py
--- a/flax/linen/module.py
+++ b/flax/linen/module.py
@@ -259,12 +259,16 @@ def wrapped_module_method(*args, **kwargs):
is_compact_method = hasattr(fun, 'compact')
is_setup_method = fun.__name__ == 'setup'
# We lazily call setup() only when needed.
- if not is_setup_method:
+ if is_setup_method:
+ is_recurrent = self._state.in_setup
+ self._state.in_setup = True
+ else:
self._try_setup()
if is_compact_method:
if self.scope is None:
raise errors.CallCompactUnboundModuleError()
+ is_recurrent = self._state.in_compact_method
self._state.in_compact_method = True
_context.module_stack.append(self)
try:
@@ -278,7 +282,10 @@ def wrapped_module_method(*args, **kwargs):
_context.module_stack.pop()
if is_compact_method:
object.__setattr__(self, 'scope', self.scope.rewound())
- if is_compact_method or is_setup_method:
+ # setup or compact calls can be recurrent for example due to super calls
+ # resetting the state would cause is compact/setup method
+ # to be set to False prematurely.
+ if (is_compact_method or is_setup_method) and not is_recurrent:
self._state.reset()
wrapped_module_method.method_handler_wrapped = True
return wrapped_module_method
| diff --git a/tests/linen/module_test.py b/tests/linen/module_test.py
--- a/tests/linen/module_test.py
+++ b/tests/linen/module_test.py
@@ -1237,6 +1237,51 @@ def __call__(self, x):
y = Foo().apply(variables, x)
self.assertEqual(y.shape, (2,))
+ def test_super_compact(self):
+ class Foo(nn.Module):
+ @nn.compact
+ def __call__(self, x):
+ return nn.Dense(4)(x)
+
+ class Bar(Foo):
+ @nn.compact
+ def __call__(self, x):
+ y = super().__call__(x)
+ return nn.Dense(3)(y)
+
+ k = random.PRNGKey(0)
+ x = jnp.ones((4, 7))
+
+ variables = Bar().init(k, x)
+ shapes = jax.tree_map(np.shape, variables['params'])
+ self.assertEqual(shapes, {
+ 'Dense_0': {'kernel': (7, 4), 'bias': (4,)},
+ 'Dense_1': {'kernel': (4, 3), 'bias': (3,)},
+ })
+ y = Bar().apply(variables, x)
+ self.assertEqual(y.shape, (4, 3))
+
+ def test_super_setup(self):
+ class Foo(nn.Module):
+ def setup(self):
+ self.a = nn.Dense(4)
+
+ class Bar(Foo):
+
+ def setup(self):
+ super().setup()
+ self.b = nn.Dense(3)
+
+ def __call__(self, x):
+ y = self.a(x)
+ return self.b(y)
+
+ k = random.PRNGKey(0)
+ x = jnp.ones((4, 7))
+
+ variables = Bar().init(k, x)
+ y = Bar().apply(variables, x)
+ self.assertEqual(y.shape, (4, 3))
if __name__ == '__main__':
absltest.main()
| super().__call__ not generally safe to call in subclass __call__
```
class Foo(nn.Module):
@nn.compact
def __call__(self, x):
return nn.Dense(4)(x)
class Bar(Foo):
@nn.compact
def __call__(self, x):
y = super().__call__(x)
return nn.Dense(4)(y)
k = random.PRNGKey(0)
x = random.randint(k, (4, 7), 0, 256)
variables = Bar().init(k, x)
y = Bar().apply(variables, x)
```
returns
```
AssignSubModuleError: Submodule Dense must be defined in `setup()` or in a method wrapped in `@compact` (https://flax.readthedocs.io/en/improve-error/flax.errors.html#flax.errors.AssignSubModuleError)
```
This happens because the parent `super().__call__` is wrapped to mark itself as "compact" upon entry and clears the "compact" state upon exit, so any remaining variable/submodule instantiations in the subclass `__call__` cause an error because the function no longer "looks compact".
I think we could fix this by adding a bit of logic to the `wrap_method_once` function and by passing in the class to `wrap_method_once` during subclass initialization to detect when `self` isn't an instance of the class it's "supposed to be", and having the wrapper acting as a passthrough.
| (copying from an offline discussion -- if we "formally" support module inheritance and test it rigorously, I think it may be fine to disallow the base class from having optional attributes. Otherwise we'd have to reorder the base class optional attributes and put them at the end -- but that would be very confusing for folks who use positional args as opposed to kwargs)
An alternative fix would be to allow nested compact calls | 2021-04-06T14:33:42 |
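Putting the fix in user terms, both inheritance patterns covered by the new tests now work; a condensed version of `test_super_setup`:
```python
import jax.numpy as jnp
from jax import random
import flax.linen as nn


class Foo(nn.Module):
  def setup(self):
    self.a = nn.Dense(4)


class Bar(Foo):
  def setup(self):
    super().setup()  # the recurrent setup call no longer resets module state
    self.b = nn.Dense(3)

  def __call__(self, x):
    return self.b(self.a(x))


x = jnp.ones((4, 7))
variables = Bar().init(random.PRNGKey(0), x)
y = Bar().apply(variables, x)  # shape (4, 3)
```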
google/flax | 1,254 | google__flax-1254 | [
"1250"
] | 65061e6128f6695eed441acf2bfffc3b1badd318 | diff --git a/flax/linen/normalization.py b/flax/linen/normalization.py
--- a/flax/linen/normalization.py
+++ b/flax/linen/normalization.py
@@ -76,6 +76,13 @@ class BatchNorm(Module):
def __call__(self, x, use_running_average: Optional[bool] = None):
"""Normalizes the input using batch statistics.
+ NOTE:
+ During initialization (when parameters are mutable) the running average
+ of the batch statistics will not be updated. Therefore, the inputs
+ fed during initialization don't need to match that of the actual input
+ distribution and the reduction axis (set with `axis_name`) does not have
+ to exist.
+
Args:
x: the input to be normalized.
use_running_average: if true, the statistics stored in batch_stats
@@ -93,8 +100,8 @@ def __call__(self, x, use_running_average: Optional[bool] = None):
reduced_feature_shape = tuple(d for i, d in enumerate(x.shape) if i in axis)
reduction_axis = tuple(i for i in range(x.ndim) if i not in axis)
- # we detect if we're in initialization via empty variable tree.
- initializing = not self.has_variable('batch_stats', 'mean')
+ # see NOTE above on initialization behavior
+ initializing = self.is_mutable_collection('params')
ra_mean = self.variable('batch_stats', 'mean',
lambda s: jnp.zeros(s, jnp.float32),
| diff --git a/tests/linen/linen_test.py b/tests/linen/linen_test.py
--- a/tests/linen/linen_test.py
+++ b/tests/linen/linen_test.py
@@ -146,6 +146,23 @@ def test_group_norm_raises(self):
with self.assertRaises(ValueError):
model_cls.init_with_output(key2, x)
+ def test_batch_norm_multi_init(self):
+ class Foo(nn.Module):
+ @nn.compact
+ def __call__(self, x):
+ norm = nn.BatchNorm(
+ name="norm",
+ use_running_average=False,
+ axis_name="batch",
+ )
+ x = norm(x)
+ return x, norm(x)
+
+ key = random.PRNGKey(0)
+ model = Foo()
+ x = random.normal(random.PRNGKey(1), (2, 4))
+ (y1, y2), variables = model.init_with_output(key, x)
+ np.testing.assert_allclose(y1, y2, rtol=1e-4)
class StochasticTest(absltest.TestCase):
| Re-used BatchNorm layer with named axis can't be initialised in train mode
### Problem you have encountered:
When trying to initialise a model with a re-used `BatchNorm` layer a failure occurs when `use_running_average=False` and I've set a named axis (e.g. `axis_name="batch"`). Here is a minimal example which will fail:
```
class TestNet(nn.Module):
@nn.compact
def __call__(self, x, train: bool = True):
norm = nn.BatchNorm(
name="norm",
use_running_average=not train,
momentum=0.9,
epsilon=1e-5,
axis_name="batch"
)
for _ in range(2):
x = norm(x)
return x
key = random.PRNGKey(0)
model = TestNet()
variables = model.init(key, jnp.ones((10,)))
```
### What you expected to happen:
I'd expect the initialization to be successful since this works if any of these three conditions are not met:
1. `use_running_average=False`,
2. there is a named axis, and
3. the `BatchNorm` layer is reused.
Instead, I get the following error...
### Logs, error messages, etc:
```
NameError: unbound axis name: batch. The following axis names (e.g. defined by pmap) are available to collective operations: []
```
### Steps to reproduce:
Here is a Colab to reproduce the failure as well as successful cases when any of the conditions above are not met: https://colab.research.google.com/drive/1N7Wk6eUdW4UO6Ckj_tUscDhOlO0DtY0P?usp=sharing&forceEdit=true&sandboxMode=true
Please let me know if I can clarify anything!
| This is indeed a known bug and it has existed for a long time. I'm working on a fix now.
The reason why it hasn't been fixed before is that re-using a BatchNorm layer is rarely the correct behaviour, because the two inputs then share batch statistics even if they aren't i.i.d.
| 2021-04-16T08:44:03 |
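For reference, a condensed version of the new regression test: the previously failing pattern now initialises, because during `init` (when 'params' is mutable) the running statistics are not updated and the reduction axis set with `axis_name` does not have to exist:
```python
from jax import random
import flax.linen as nn


class Foo(nn.Module):
  @nn.compact
  def __call__(self, x):
    # The same BatchNorm instance applied twice, in train mode, with a
    # named axis: exactly the combination reported as failing above.
    norm = nn.BatchNorm(use_running_average=False, axis_name='batch')
    return norm(norm(x))


x = random.normal(random.PRNGKey(1), (2, 4))
y, variables = Foo().init_with_output(random.PRNGKey(0), x)
```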
google/flax | 1,262 | google__flax-1262 | [
"1157"
] | 279f80be8793ed2d3932292ca9fc315f533683d2 | diff --git a/flax/linen/linear.py b/flax/linen/linear.py
--- a/flax/linen/linear.py
+++ b/flax/linen/linear.py
@@ -407,7 +407,8 @@ def __call__(self, inputs):
"""
if not jnp.issubdtype(inputs.dtype, jnp.integer):
raise ValueError('Input type must be an integer or unsigned integer.')
- return self.embedding[inputs]
+ # Use take because fancy indexing numpy arrays with JAX indices does not work correctly.
+ return jnp.take(self.embedding, inputs, axis=0)
def attend(self, query):
"""Attend over the embedding using a query array.
| diff --git a/tests/linen/linen_linear_test.py b/tests/linen/linen_linear_test.py
--- a/tests/linen/linen_linear_test.py
+++ b/tests/linen/linen_linear_test.py
@@ -272,6 +272,21 @@ def test_embed(self):
z = embed_module.apply(initial_params, jnp.ones((3,)), method=embed_module.attend)
np.testing.assert_allclose(z, 3. * jnp.arange(4))
+ def test_embed_numpy(self):
+ rng = dict(params=random.PRNGKey(0))
+ x = jnp.arange(4)[None]
+ dummy_embedding = np.broadcast_to(
+ np.arange(4)[..., None], (4, 3)).astype(np.float32)
+ embed_module = nn.Embed(
+ num_embeddings=4,
+ features=3,
+ embedding_init=lambda rng, shape, dtype: dummy_embedding,
+ )
+ y, initial_params = embed_module.init_with_output(rng, x)
+ np.testing.assert_allclose(y, dummy_embedding[None])
+ z = embed_module.apply(initial_params, jnp.ones((3,)), method=embed_module.attend)
+ np.testing.assert_allclose(z, 3. * jnp.arange(4))
+
def test_non_final_axis(self):
class Foo(nn.Module):
@nn.compact
| np.array parameters may lead to a silent failure
Passing np.array parameters (instead of jnp.array) to a linen module may lead to a silent failure; see the following example:
```
import flax.linen as nn
import jax
import jax.numpy as jnp
import numpy as np
t = jnp.zeros([2, 196], jnp.int32)
print(f'Input shape: {t.shape}')
m = nn.Embed(32, 10)
rng = jax.random.PRNGKey(0)
vars = m.init(rng, t)
o1 = m.apply(vars, t)
print(f'Expected output shape: {o1.shape}')
o2 = m.apply(jax.tree_map(np.array, vars), t)
print(f'Numpy output shape: {o2.shape}')
```
Output:
```
Input shape: (2, 196)
Expected output shape: (2, 196, 10)
"Numpy params" output shape: (196,) <-- Different output shape
```
| Thanks for catching this!
When you map the embedding to an `np.array`, what will happen when applying the `Embed` module is that the embedding (which is now a Numpy array) is indexed with a `jax.numpy` array. This causes Numpy to treat the `jnp.array` as a tuple, which is not what we want:
```python
embedding = jnp.array([[1], [2]])
idx = jnp.array([0], jnp.int32)
embedding[idx] # Similar to your o1 -- correct
>>> [[1]]
np.array(embedding)[idx] # Similar to o2 -- wrong
>>> [1]
```
We can verify the `jnp` is cast incorrectly by changing its type:
```python
np.array(embedding)[tuple(idx)]
>>> [1]
np.array(embedding)[np.array(idx)]
>>> [[1]]
```
Actually this also throws a DeprecationWarning, suggesting we should explicitly cast the index to either np.array or tuple.
So I think we can do two things:
1. Force the embedding to be a jnp array (and not a numpy array)
2. Check whether the embedding is a np array, and if so, explicitly cast the indexer to np.array.
I think we want to use `jnp.asarray` here to avoid unnecessary copies but guarantee that the embedding is a jax numpy array.
@jheek I think that may still break if the params aren't jax numpy arrays, which can happen if you load weights from a file. See also #1261. | 2021-04-22T09:48:09 |
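The patch resolves this by switching `Embed` to `jnp.take`; a standalone sketch of why that is robust (the table and index shapes below are illustrative):
```python
import jax.numpy as jnp
import numpy as np

table = np.arange(8, dtype=np.float32).reshape(4, 2)  # (num_embeddings, features)
idx = jnp.zeros((2, 3), jnp.int32)                    # token ids

# Fancy-indexing a NumPy table with a JAX index is what silently broke,
# while jnp.take performs the intended row gather for either array type:
out = jnp.take(table, idx, axis=0)
print(out.shape)  # (2, 3, 2), one embedding row per token
```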
google/flax | 1,295 | google__flax-1295 | [
"1294"
] | 63bd13391d2112a82ee14adef9dca0f5699cb6b6 | diff --git a/flax/linen/transforms.py b/flax/linen/transforms.py
--- a/flax/linen/transforms.py
+++ b/flax/linen/transforms.py
@@ -411,20 +411,31 @@ def scan(target: Target,
Example::
+ import flax
+ import flax.linen as nn
+ from jax import random
+
class SimpleScan(nn.Module):
@nn.compact
def __call__(self, c, xs):
LSTM = nn.scan(nn.LSTMCell,
variable_broadcast="params",
- split_rngs={"params": False})
+ split_rngs={"params": False},
+ in_axes=1,
+ out_axes=1)
return LSTM()(c, xs)
- xs = random.uniform(rng_1, (batch_size, features))
- carry_0 = nn.LSTMCell.initialize_carry(
- random.PRNGKey(0), (batch_size,), features)
+ seq_len, batch_size, in_feat, out_feat = 20, 16, 3, 5
+ key_1, key_2, key_3 = random.split(random.PRNGKey(0), 3)
+
+ xs = random.uniform(key_1, (batch_size, seq_len, in_feat))
+ init_carry = nn.LSTMCell.initialize_carry(key_2, (batch_size,), out_feat)
+
model = SimpleScan()
- variables = model.init(key_2, carry_0, xs)
- out_state, out_val = model.apply(variables, carry_0, xs)
+ variables = model.init(key_3, init_carry, xs)
+ out_carry, out_val = model.apply(variables, init_carry, xs)
+
+ assert out_val.shape == (batch_size, seq_len, out_feat)
Args:
| Misleading flax.linen.scan example
Below is the example provided for [`flax.linen.scan`](https://flax.readthedocs.io/en/latest/_autosummary/flax.linen.scan.html#flax.linen.scan):
```python
class SimpleScan(nn.Module):
@nn.compact
def __call__(self, c, xs):
LSTM = nn.scan(nn.LSTMCell,
variable_broadcast="params",
split_rngs={"params": False})
return LSTM()(c, xs)
xs = random.uniform(rng_1, (batch_size, features))
carry_0 = nn.LSTMCell.initialize_carry(
random.PRNGKey(0), (batch_size,), features)
model = SimpleScan()
variables = model.init(key_2, carry_0, xs)
out_state, out_val = model.apply(variables, carry_0, xs)
```
The default `in_axes` for `nn.scan` is `0`, so it seems like this example is scanning over the batch dimension instead of over a sequence.
I believe `xs` needs a sequence dimension in axis 0, as below:
```python
import flax
import flax.linen as nn
from jax import random
class SimpleScan(nn.Module):
@nn.compact
def __call__(self, c, xs):
LSTM = nn.scan(nn.LSTMCell,
variable_broadcast="params",
split_rngs={"params": False})
return LSTM()(c, xs)
key_1, key_2, key_3 = random.split(random.PRNGKey(0), 3)
seq_len, batch_size, features = 7, 11, 13
xs = random.uniform(key_1, (seq_len, batch_size, features))
carry_0 = nn.LSTMCell.initialize_carry(key_2, (batch_size,), features)
model = SimpleScan()
variables = model.init(key_3, carry_0, xs)
out_state, out_val = model.apply(variables, carry_0, xs)
```
Is this correct? I've read https://github.com/google/flax/discussions/1283 to verify. Thanks in advance!
| Yes that looks good. Want to make a PR?
Nit: it's customary to put the batch dim first and putting the sequence length second demonstrates that we can scan over non-leading axes :)
Will do! | 2021-05-03T13:53:49 |
|
google/flax | 1,306 | google__flax-1306 | [
"1053"
] | d969e64d08d0a17671f813d9ea29cc6062158810 | diff --git a/examples/sst2/configs/default.py b/examples/sst2/configs/default.py
--- a/examples/sst2/configs/default.py
+++ b/examples/sst2/configs/default.py
@@ -42,6 +42,5 @@ def get_config():
config.num_epochs = 10
config.seed = 0
- config.deterministic = False
return config
diff --git a/examples/sst2/train.py b/examples/sst2/train.py
--- a/examples/sst2/train.py
+++ b/examples/sst2/train.py
@@ -13,23 +13,31 @@
# limitations under the License.
"""Trains an SST2 text classifier."""
-import copy
-import functools
-from typing import Any, Callable, Dict, Iterable, Mapping, Optional, Sequence, Tuple, Union
+from typing import Any, Callable, Dict, Iterable, Optional, Sequence, Tuple, Union
from absl import logging
-from flax import optim
+from flax import struct
from flax.metrics import tensorboard
+from flax.training import train_state
import input_pipeline
import jax
import jax.numpy as jnp
import ml_collections
import models
import numpy as np
+import optax
import tensorflow as tf
Array = jnp.ndarray
Example = Dict[str, Array]
+TrainState = train_state.TrainState
+
+
+class Metrics(struct.PyTreeNode):
+ """Computed metrics."""
+ loss: float
+ accuracy: float
+ count: Optional[int] = None
@jax.vmap
@@ -42,38 +50,35 @@ def sigmoid_cross_entropy_with_logits(*, labels: Array, logits: Array) -> Array:
return relu_logits - logits * labels + jnp.log1p(jnp.exp(neg_abs_logits))
-def get_initial_params_and_state(key, model):
- """Returns randomly initialized parameters and a fresh model state."""
+def get_initial_params(rng, model):
+ """Returns randomly initialized parameters."""
token_ids = jnp.ones((2, 3), jnp.int32)
lengths = jnp.ones((2,), dtype=jnp.int32)
- variables = model.init(key, token_ids, lengths)
- state, params = variables.pop('params')
- return params, state
+ variables = model.init(rng, token_ids, lengths, deterministic=True)
+ return variables['params']
-def create_optimizer(params, learning_rate, beta, weight_decay):
- """Returns a momentum optimizer."""
- optimizer_def = optim.Momentum(
- learning_rate=learning_rate,
- beta=beta,
- weight_decay=weight_decay)
- optimizer = optimizer_def.create(params)
- return optimizer
+def create_train_state(rng, config: ml_collections.ConfigDict, model):
+ """Create initial training state."""
+ params = get_initial_params(rng, model)
+ tx = optax.chain(
+ optax.sgd(learning_rate=config.learning_rate, momentum=config.momentum),
+ optax.additive_weight_decay(weight_decay=config.weight_decay))
+ state = TrainState.create(apply_fn=model.apply, params=params, tx=tx)
+ return state
-def compute_metrics(*, labels: Array, logits: Array) -> Dict[str, Array]:
+def compute_metrics(*, labels: Array, logits: Array) -> Metrics:
"""Computes the metrics, summed across the batch if a batch is provided."""
if labels.ndim == 1: # Prevent the labels from broadcasting over the logits.
labels = jnp.expand_dims(labels, axis=1)
loss = sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
binary_predictions = (logits >= 0.)
binary_accuracy = jnp.equal(binary_predictions, labels)
- metrics = {
- 'loss': jnp.sum(loss),
- 'accuracy': jnp.sum(binary_accuracy),
- 'count': logits.shape[0]
- }
- return metrics
+ return Metrics(
+ loss=jnp.sum(loss),
+ accuracy=jnp.sum(binary_accuracy),
+ count=logits.shape[0])
def model_from_config(config: ml_collections.ConfigDict):
@@ -85,110 +90,109 @@ def model_from_config(config: ml_collections.ConfigDict):
output_size=config.output_size,
dropout_rate=config.dropout_rate,
word_dropout_rate=config.word_dropout_rate,
- unk_idx=config.unk_idx,
- deterministic=config.deterministic)
+ unk_idx=config.unk_idx)
return model
def train_step(
- config: Any,
- optimizer: optim.Optimizer,
- model_state: Mapping[str, Any],
+ state: TrainState,
batch: Dict[str, Array],
rngs: Dict[str, Any],
-) -> Tuple[optim.Optimizer, Dict[str, Any], Dict[str, Any]]:
+) -> Tuple[TrainState, Metrics]:
"""Train for a single step."""
# Make sure to get a new RNG at every step.
- model = model_from_config(config)
- step = optimizer.state.step
+ step = state.step
rngs = {name: jax.random.fold_in(rng, step) for name, rng in rngs.items()}
def loss_fn(params):
- variables = {'params': params, **model_state}
- logits, new_model_state = model.apply(
+ variables = {'params': params}
+ logits = state.apply_fn(
variables, batch['token_ids'], batch['length'],
- rngs=rngs, mutable=list(model_state.keys()))
+ deterministic=False,
+ rngs=rngs)
labels = batch['label']
if labels.ndim == 1:
labels = jnp.expand_dims(labels, 1)
loss = jnp.mean(
sigmoid_cross_entropy_with_logits(labels=labels, logits=logits))
- return loss, (logits, new_model_state)
+ return loss, logits
grad_fn = jax.value_and_grad(loss_fn, has_aux=True)
- value, grad = grad_fn(optimizer.target)
- (_, (logits, new_model_state)) = value
- optimizer = optimizer.apply_gradient(grad)
+ value, grads = grad_fn(state.params)
+ (_, logits) = value
+ new_state = state.apply_gradients(grads=grads)
metrics = compute_metrics(labels=batch['label'], logits=logits)
- return optimizer, metrics, new_model_state
+ return new_state, metrics
-def eval_step(config: Any, params: Dict[str, Any],
- model_state: Mapping[str, Any], batch: Dict[str, Array],
- rngs: Dict[str, Any]) -> Tuple[Dict[str, Any], Dict[str, Any]]:
+def eval_step(state: TrainState, batch: Dict[str, Array],
+ rngs: Dict[str, Any]) -> Metrics:
"""Evaluate for a single step. Model should be in deterministic mode."""
- model = model_from_config(config)
- variables = {'params': params, **model_state}
- logits, new_model_state = model.apply(
+ variables = {'params': state.params}
+ logits = state.apply_fn(
variables, batch['token_ids'], batch['length'],
- rngs=rngs,
- mutable=list(model_state.keys()))
+ deterministic=True,
+ rngs=rngs)
metrics = compute_metrics(labels=batch['label'], logits=logits)
- return metrics, new_model_state
+ return metrics
def normalize_batch_metrics(
- batch_metrics: Sequence[Dict[str, Any]]) -> Dict[str, Any]:
+ batch_metrics: Sequence[Metrics]) -> Metrics:
"""Consolidates and normalizes a list of per-batch metrics dicts."""
# Here we sum the metrics that were already summed per batch.
- metric_names = batch_metrics[0].keys()
- summed_metrics = {
- k: np.sum([metrics[k] for metrics in batch_metrics]) for k in metric_names
- }
+ total_loss = np.sum([metrics.loss for metrics in batch_metrics])
+ total_accuracy = np.sum([metrics.accuracy for metrics in batch_metrics])
+ total = np.sum([metrics.count for metrics in batch_metrics])
# Divide each metric by the total number of items in the data set.
- total = np.float(summed_metrics.pop('count'))
- metrics = jax.tree_map(lambda x: x.item() / total, summed_metrics)
- return metrics
+ return Metrics(
+ loss=total_loss.item() / total, accuracy=total_accuracy.item() / total)
+
+
+def batch_to_numpy(batch: Dict[str, tf.Tensor]) -> Dict[str, Array]:
+ """Converts a batch with TF tensors to a batch of NumPy arrays."""
+ # _numpy() reuses memory, does not make a copy.
+ # pylint: disable=protected-access
+ return jax.tree_map(lambda x: x._numpy(), batch)
def evaluate_model(
- eval_step_fn: Callable[..., Tuple[Dict[str, Any], Dict[str, Any]]],
- params: Dict[str, Any],
- model_state: Mapping[str, Any],
+ eval_step_fn: Callable[..., Any],
+ state: TrainState,
batches: Union[Iterable[Example], tf.data.Dataset],
epoch: int,
rngs: Optional[Dict[str, Any]] = None
-) -> Tuple[Dict[str, Any], Mapping[str, Any]]:
+) -> Metrics:
"""Evaluate a model on a dataset."""
batch_metrics = []
for i, batch in enumerate(batches):
- batch = jax.tree_map(lambda x: x._numpy(), batch) # pylint: disable=protected-access
+ batch = batch_to_numpy(batch)
if rngs is not None: # New RNG for each step.
rngs = {name: jax.random.fold_in(rng, i) for name, rng in rngs.items()}
- metrics, model_state = eval_step_fn(params, model_state, batch, rngs)
+ metrics = eval_step_fn(state, batch, rngs)
batch_metrics.append(metrics)
batch_metrics = jax.device_get(batch_metrics)
metrics = normalize_batch_metrics(batch_metrics)
logging.info('eval epoch %03d loss %.4f accuracy %.2f', epoch,
- metrics['loss'], metrics['accuracy'] * 100)
- return metrics, model_state
+ metrics.loss, metrics.accuracy * 100)
+ return metrics
-def train_epoch(train_step_fn: Callable[..., Tuple[optim.Optimizer,
- Dict[str, Any], Any]],
- optimizer: optim.Optimizer,
- model_state: Mapping[str, Any], train_batches: tf.data.Dataset,
- epoch: int, rngs: Optional[Dict[str, Any]] = None):
+def train_epoch(train_step_fn: Callable[..., Tuple[TrainState, Metrics]],
+ state: TrainState,
+ train_batches: tf.data.Dataset,
+ epoch: int,
+ rngs: Optional[Dict[str, Any]] = None
+ ) -> Tuple[TrainState, Metrics]:
"""Train for a single epoch."""
batch_metrics = []
for batch in train_batches:
- batch = jax.tree_map(lambda x: x._numpy(), batch) # pylint: disable=protected-access
- optimizer, metrics, model_state = train_step_fn(
- optimizer, model_state, batch, rngs)
+ batch = batch_to_numpy(batch)
+ state, metrics = train_step_fn(state, batch, rngs)
batch_metrics.append(metrics)
# Compute the metrics for this epoch.
@@ -196,20 +200,20 @@ def train_epoch(train_step_fn: Callable[..., Tuple[optim.Optimizer,
metrics = normalize_batch_metrics(batch_metrics)
logging.info('train epoch %03d loss %.4f accuracy %.2f', epoch,
- metrics['loss'], metrics['accuracy'] * 100)
+ metrics.loss, metrics.accuracy * 100)
- return optimizer, metrics, model_state
+ return state, metrics
def train_and_evaluate(config: ml_collections.ConfigDict,
- workdir: str) -> optim.Optimizer:
+ workdir: str) -> TrainState:
"""Execute model training and evaluation loop.
Args:
config: Hyperparameter configuration for training and evaluation.
workdir: Directory where the tensorboard summaries are written to.
Returns:
- The trained optimizer.
+ The final train state that includes the trained parameters.
"""
# Prepare datasets.
train_dataset = input_pipeline.TextDataset(
@@ -225,28 +229,17 @@ def train_and_evaluate(config: ml_collections.ConfigDict,
shuffle_seed=config.seed)
eval_batches = eval_dataset.get_batches(batch_size=config.batch_size)
- # Prepare configs.
+ # Keep track of vocab size in the config so that the embedder knows it.
config.vocab_size = len(train_dataset.vocab)
- eval_config = copy.deepcopy(config)
- eval_config.deterministic = True
# Compile step functions.
- train_step_fn = jax.jit(functools.partial(train_step, config))
- eval_step_fn = jax.jit(functools.partial(eval_step, eval_config))
+ train_step_fn = jax.jit(train_step)
+ eval_step_fn = jax.jit(eval_step)
- # Initialize parameters.
+ # Create model and a state that contains the parameters.
rng = jax.random.PRNGKey(config.seed)
- init_model = model_from_config(eval_config)
- params, model_state = get_initial_params_and_state(rng, init_model)
- del init_model
-
- # Remove intermediates for training. Otherwise our model state will fill up
- # with intermediate outputs (exported using self.sow() commands). This will
- # cause model_state to have a new shape on each step, triggering a new trace.
- model_state, _ = model_state.pop('intermediates')
-
- optimizer = create_optimizer(
- params, config.learning_rate, config.momentum, config.weight_decay)
+ model = model_from_config(config)
+ state = create_train_state(rng, config, model)
summary_writer = tensorboard.SummaryWriter(workdir)
summary_writer.hparams(dict(config))
@@ -258,24 +251,23 @@ def train_and_evaluate(config: ml_collections.ConfigDict,
# Train for one epoch.
rng, epoch_rng = jax.random.split(rng)
rngs = {'dropout': epoch_rng}
- optimizer, train_metrics, model_state = train_epoch(
- train_step_fn, optimizer, model_state, train_batches, epoch, rngs)
+ state, train_metrics = train_epoch(
+ train_step_fn, state, train_batches, epoch, rngs)
# Evaluate current model on the validation data.
- eval_metrics, _ = evaluate_model(
- eval_step_fn, optimizer.target, model_state, eval_batches, epoch)
+ eval_metrics = evaluate_model(eval_step_fn, state, eval_batches, epoch)
# Write metrics to TensorBoard.
- summary_writer.scalar('train_loss', train_metrics['loss'], epoch)
+ summary_writer.scalar('train_loss', train_metrics.loss, epoch)
summary_writer.scalar(
'train_accuracy',
- train_metrics['accuracy'] * 100,
+ train_metrics.accuracy * 100,
epoch)
- summary_writer.scalar('eval_loss', eval_metrics['loss'], epoch)
+ summary_writer.scalar('eval_loss', eval_metrics.loss, epoch)
summary_writer.scalar(
'eval_accuracy',
- eval_metrics['accuracy'] * 100,
+ eval_metrics.accuracy * 100,
epoch)
summary_writer.flush()
- return optimizer
+ return state
| diff --git a/examples/sst2/train_test.py b/examples/sst2/train_test.py
new file mode 100644
--- /dev/null
+++ b/examples/sst2/train_test.py
@@ -0,0 +1,56 @@
+# Copyright 2021 The Flax Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Tests for sst2.train."""
+from absl.testing import absltest
+from absl.testing import parameterized
+from configs import default as default_config
+import jax
+import jax.test_util
+import numpy as np
+import train
+
+# Parse absl flags test_srcdir and test_tmpdir.
+jax.config.parse_flags_with_absl()
+
+
+class TrainTest(parameterized.TestCase):
+
+ def test_train_step_updates_parameters(self):
+ """Tests if the train step updates the parameters in train state."""
+ # Create model and a state that contains the parameters.
+ config = default_config.get_config()
+ config.vocab_size = 13
+ rng = jax.random.PRNGKey(config.seed)
+ model = train.model_from_config(config)
+ state = train.create_train_state(rng, config, model)
+
+ token_ids = np.array([[2, 4, 3], [2, 6, 3]], dtype=np.int32)
+ lengths = np.array([2, 3], dtype=np.int32)
+ labels = np.zeros_like(lengths)
+ batch = {'token_ids': token_ids, 'length': lengths, 'label': labels}
+ rngs = {'dropout': rng}
+ train_step_fn = jax.jit(train.train_step)
+ new_state, metrics = train_step_fn(state, batch, rngs)
+ self.assertIsInstance(new_state, train.TrainState)
+ self.assertIsInstance(metrics, train.Metrics)
+ old_param_values = jax.tree_leaves(state.params)
+ new_param_values = jax.tree_leaves(new_state.params)
+ for old_array, new_array in zip(old_param_values, new_param_values):
+ # Make sure parameters were updated.
+ self.assertFalse(np.allclose(old_array, new_array))
+
+
+if __name__ == '__main__':
+ absltest.main()
| Replace flax.optim with Optax in examples
See https://github.com/google/flax/blob/master/docs/flip/1009-optimizer-api.md#update-plan
The following examples need to be updated
- [x] imagenet #1251
- [x] mnist #1302
- [x] nlp_seq #1916
- [x] pixelcnn #1413
- [x] ppo #1404
- [x] seq2seq #1396
- [x] vae #1361
- [x] wmt #1476
- [x] lm1b #1479
| Run finished:
https://tensorboard.dev/experiment/w4PcKXloQMG7gXJhjskVjw/
- final test accuracy 0.7640
- total run time 5h4m
Compared to previous run using `flax.optim.Adam` (from `examples/imagenet/README.md`):
https://tensorboard.dev/experiment/iJzNKovmS0q6k5t6k5wvOw/#scalars&_smoothingWeight=0®exInput=v100_x8_mixed_precision
- final test accuracy 0.7647
- total run time 4h51m
Note that the final test accuracy of the imagenet example fluctuates between 0.7625 and 0.7650, so the result is comparable.
Nice! Where is the branch of the code that you ran that used Optax?
Sure, it's https://github.com/andsteing/flax/tree/andsteing/issue1053
Another finished run with the Optax code:
https://tensorboard.dev/experiment/xOpycRYnT7m3inYbEuNMxw/
- final test accuracy 0.766
- total run time 5h1m | 2021-05-06T12:41:10 |
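Abstracting the sst2 diff above, the `flax.optim` to Optax migration reduces to the pattern below. This is only a sketch: the model, dummy input `x`, and hyperparameter values are placeholders, and the Optax names follow those used in the patch.
```python
import optax
from flax.training import train_state


def create_train_state(rng, model, x, learning_rate=0.01, momentum=0.9,
                       weight_decay=3e-6):
  # `model` is any Linen module; `x` is a dummy batch used for shape inference.
  params = model.init(rng, x)['params']
  tx = optax.chain(
      optax.sgd(learning_rate=learning_rate, momentum=momentum),
      optax.additive_weight_decay(weight_decay=weight_decay))
  return train_state.TrainState.create(
      apply_fn=model.apply, params=params, tx=tx)


# Inside the train step, flax.optim's
#   optimizer = optimizer.apply_gradient(grad)
# becomes
#   state = state.apply_gradients(grads=grads)
```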
google/flax | 1,311 | google__flax-1311 | [
"1310"
] | 48b5707aac4d795d65ec7ae775c6349ffc45cca5 | diff --git a/flax/training/prefetch_iterator.py b/flax/training/prefetch_iterator.py
--- a/flax/training/prefetch_iterator.py
+++ b/flax/training/prefetch_iterator.py
@@ -55,7 +55,7 @@ def __next__(self):
self._cond.wait_for(lambda: self._buffer or not self._active)
if self._buffer:
item = self._buffer.pop(0)
- self._cond.notifyAll()
+ self._cond.notify_all()
return item
if self._error:
raise self._error # pylint: disable=raising-bad-type
@@ -65,7 +65,7 @@ def __next__(self):
def close(self):
with self._cond:
self._active = False
- self._cond.notifyAll()
+ self._cond.notify_all()
def _prefetch_loop(self):
"""Prefetch loop that prefetches a tf dataset."""
@@ -77,7 +77,7 @@ def _predicate():
item = next(self._data_iter)
with self._cond:
self._buffer.append(item)
- self._cond.notifyAll()
+ self._cond.notify_all()
self._cond.wait_for(_predicate)
if not self._active:
return
@@ -85,5 +85,5 @@ def _predicate():
with self._cond:
self._error = e
self._active = False
- self._cond.notifyAll()
+ self._cond.notify_all()
return
| threading.Condition.notifyAll has been deprecated in favour of notify_all in Python 3.10
### Problem you have encountered:
`threading.Condition.notifyAll` has been deprecated in favour of `notify_all` in Python 3.10. Ref : python/cpython#25174
### What you expected to happen:
Use `notify_all` in the places below.
```
rg -t py -w 'currentThread|notifyAll|activeCount|isDaemon|setDaemon'
flax/training/prefetch_iterator.py
58: self._cond.notifyAll()
68: self._cond.notifyAll()
80: self._cond.notifyAll()
88: self._cond.notifyAll()
```
| 2021-05-08T06:35:31 |
||
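For context, `notify_all` has long been the PEP 8 spelling and `notifyAll` is only a camelCase alias; a minimal producer/consumer sketch using the pattern the patch switches to:
```python
import threading

buffer, cond = [], threading.Condition()


def producer():
  with cond:
    buffer.append(42)
    cond.notify_all()  # instead of the deprecated cond.notifyAll()


def consumer():
  with cond:
    # wait_for checks the predicate first, so it also works if the
    # producer has already run.
    cond.wait_for(lambda: buffer)
    return buffer.pop(0)


t = threading.Thread(target=producer)
t.start()
print(consumer())  # 42
t.join()
```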
google/flax | 1,324 | google__flax-1324 | [
"1319"
] | c53c1c5383f91416478ce504e9d61020dd8be07c | diff --git a/flax/linen/__init__.py b/flax/linen/__init__.py
--- a/flax/linen/__init__.py
+++ b/flax/linen/__init__.py
@@ -25,7 +25,8 @@
make_causal_mask, combine_masks)
from ..core import broadcast, DenyList
from .linear import Conv, ConvTranspose, Dense, DenseGeneral, Embed
-from .module import Module, compact, enable_named_call, disable_named_call, Variable, init, init_with_output, apply
+from .module import (Module, compact, enable_named_call, disable_named_call,
+ Variable, init, init_with_output, apply, merge_param)
from .normalization import BatchNorm, GroupNorm, LayerNorm
from .pooling import avg_pool, max_pool
from .recurrent import GRUCell, LSTMCell, ConvLSTM, OptimizedLSTMCell
| AttributeError: module 'flax.linen' has no attribute 'merge_param'
[This guide](https://flax.readthedocs.io/en/latest/design_notes/arguments.html) suggests using `nn.merge_param` to combine arguments, but `merge_param` is only available through `nn.module.merge_param`. I believe it needs to be added to the import line [here](https://github.com/google/flax/blob/4ae9143f7ef46ffab6d9123ba1b2e4f3303e68d1/flax/linen/__init__.py#L28). I can open a PR if this is the case.
| Good catch! Please do open that PR | 2021-05-17T22:12:43 |
|
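As background, this is the pattern the linked design note describes and which motivates exporting the symbol at the top level; the module below is purely illustrative and not part of Flax:
```python
from typing import Optional

import jax.numpy as jnp
from jax import random
import flax.linen as nn


class Scale(nn.Module):
  # The flag can be fixed as a module attribute or passed per call;
  # merge_param picks whichever one was provided (and raises if neither is).
  deterministic: Optional[bool] = None

  @nn.compact
  def __call__(self, x, deterministic: Optional[bool] = None):
    deterministic = nn.merge_param(
        'deterministic', self.deterministic, deterministic)
    return x if deterministic else x * 2.0


x = jnp.ones((3,))
module = Scale(deterministic=True)
variables = module.init(random.PRNGKey(0), x)
y = module.apply(variables, x)
```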
google/flax | 1,423 | google__flax-1423 | [
"1420"
] | a1a73eb9799d5954e4b723c031b2f42e07f0e2d0 | diff --git a/flax/core/frozen_dict.py b/flax/core/frozen_dict.py
--- a/flax/core/frozen_dict.py
+++ b/flax/core/frozen_dict.py
@@ -95,7 +95,7 @@ def __hash__(self):
def copy(self, add_or_replace: Mapping[K, V]) -> 'FrozenDict[K, V]':
"""Create a new FrozenDict with additional or replaced entries."""
- return type(self)(self, **unfreeze(add_or_replace))
+ return type(self)({**self, **unfreeze(add_or_replace)})
def items(self):
for key in self._dict:
| diff --git a/tests/core/frozen_dict_test.py b/tests/core/frozen_dict_test.py
--- a/tests/core/frozen_dict_test.py
+++ b/tests/core/frozen_dict_test.py
@@ -80,6 +80,10 @@ def test_frozen_dict_reduce(self):
self.assertEqual(before, after)
self.assertEqual(after, {'a': {'b': 1, 'c': 2}})
+ def test_frozen_dict_copy_reserved_name(self):
+ result = FrozenDict({'a': 1}).copy({'cls': 2})
+ self.assertEqual(result, {'a': 1, 'cls': 2})
+
if __name__ == '__main__':
absltest.main()
| flax.core.FrozenDict copy broken when the new dictionary contains certain reserved names
### Problem you have encountered:
Adding a dictionary that contains a 'cls' key fails:

### What you expected to happen:
expected to update the value of 'cls' key.
### Logs, error messages, etc:
### Steps to reproduce:
```
flax.core.FrozenDict({}).copy({'cls': 'abc'})
```
One way to work around this is to manually create a concatenated FrozenDict instead of using `copy`.
```
flax.core.FrozenDict({**flax.core.FrozenDict({'def': '123', 'cls': 22}), **{'cls': 'abc'}})
```
| Thanks for catching this bug!
Your code snippet on line 98 resolves to: `return FrozenDict(self, cls='abc')`. This will invoke `__new__` of a superclass in the `Typing` library, which has `cls` as its first argument. So since you also pass it as a kwarg, the interpreter complains that you passed the same argument twice.
It seems dangerous to me that the current code just expands all key-value pairs in `add_or_replace` to kwargs to the constructor, since any reserved word could introduce bugs. The safest way seems to me to explicitly wrap the two dicts in a new dict, i.e. replace line 98 with:
```python
return type(self)({**self, **unfreeze(add_or_replace)})
```
@jheek WDYT?
Oh my Python! :)
@marcvanzee your solution looks like the easiest workaround | 2021-07-12T08:13:44 |
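A quick check of the fixed behaviour, mirroring the new test above:
```python
from flax.core import FrozenDict

d = FrozenDict({'a': 1})
print(d.copy({'cls': 2}))   # FrozenDict({'a': 1, 'cls': 2})
print(d.copy({'self': 3}))  # reserved-looking keys no longer collide
```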
google/flax | 1,432 | google__flax-1432 | [
"1429"
] | b1ebdc8764b4dcdf4a2b960653c015b3429165db | diff --git a/flax/serialization.py b/flax/serialization.py
--- a/flax/serialization.py
+++ b/flax/serialization.py
@@ -22,6 +22,7 @@
import jax
import msgpack
import numpy as np
+from numpy.lib.arraysetops import isin
_STATE_DICT_REGISTRY = {}
@@ -125,27 +126,24 @@ def _restore_dict(xs, states):
def _namedtuple_state_dict(nt):
- return {'name': nt.__class__.__name__,
- 'fields': {str(i): to_state_dict(x)
- for i, x in enumerate(nt._fields)},
- 'values': {str(i): to_state_dict(x)
- for i, x in enumerate(nt)}
- }
+ return {key: to_state_dict(getattr(nt, key)) for key in nt._fields}
def _restore_namedtuple(xs, state_dict):
"""Rebuild namedtuple from serialized dict."""
- if len(state_dict['values']) != len(xs):
- raise ValueError('The size of the list and the state dict do not match,'
- f' got {len(xs)} and {len(state_dict["values"])}.')
- fields = [state_dict['fields'][str(i)] for i in range(len(xs))]
- namedtuple_class = collections.namedtuple(
- state_dict['name'], fields)
- ys = []
- for i in range(len(state_dict['values'])):
- y = from_state_dict(xs[i], state_dict['values'][str(i)])
- ys.append(y)
- return namedtuple_class(*ys)
+ if set(state_dict.keys()) == {'name', 'fields', 'values'}:
+ # TODO(jheek): remove backward compatible named tuple restoration early 2022
+ state_dict = {state_dict['fields'][str(i)]: state_dict['values'][str(i)]
+ for i in range(len(state_dict['fields']))}
+
+ sd_keys = set(state_dict.keys())
+ nt_keys = set(xs._fields)
+
+ if sd_keys != nt_keys:
+ raise ValueError('The field names of the state dict and the named tuple do not match,'
+ f' got {sd_keys} and {nt_keys}.')
+ fields = {k: from_state_dict(getattr(xs, k), v) for k, v in state_dict.items()}
+ return type(xs)(**fields)
register_serialization_state(dict, _dict_state_dict, _restore_dict)
| diff --git a/tests/serialization_test.py b/tests/serialization_test.py
--- a/tests/serialization_test.py
+++ b/tests/serialization_test.py
@@ -212,6 +212,20 @@ def test_namedtuple_serialization(self):
x1_serialized = serialization.to_bytes(x1)
x2 = foo_class(a=0, b=0, c=0)
restored_x1 = serialization.from_bytes(x2, x1_serialized)
+ self.assertEqual(type(x1), type(restored_x1))
+ self.assertEqual(x1, restored_x1)
+
+ def test_namedtuple_restore_legacy(self):
+ foo_class = collections.namedtuple('Foo', 'a b c')
+ x1 = foo_class(a=1, b=2, c=3)
+ legacy_encoding = {
+ 'name': 'Foo',
+ 'fields': {'0': 'a', '1': 'b', '2': 'c'},
+ 'values': {'0': 1, '1': 2, '2': 3},
+ }
+ x2 = foo_class(a=0, b=0, c=0)
+ restored_x1 = serialization.from_state_dict(x2, legacy_encoding)
+ self.assertEqual(type(x1), type(restored_x1))
self.assertEqual(x1, restored_x1)
def test_model_serialization_to_bytes(self):
| Deserialized TrainState dosn't pass `_check_tree_and_avals` check triggered by jax control flow
### Problem you have encountered:
Training fails to run with restored `TrainState` when `jax.lax.cond` is in the loop. Specifically the `true_fun` and `false_fun` return type check fails because of having `optax._src...<TypeName>` vs `flax.serialization.<TypeName>` for updated and not-yet-updated branches respectively.
### What you expected to happen:
Resume training without having any issues.
### Error message:
```
UnfilteredStackTrace: TypeError: true_fun and false_fun output must have same type structure, got PyTreeDef(CustomNode(<class 'flax.training.train_state.TrainState'>[(<bound method Module.apply of MLP(
# attributes
features = [4, 1]
)>, GradientTransformation(init=<function chain.<locals>.init_fn at 0x7f43f8875950>, update=<function chain.<locals>.update_fn at 0x7f43f8875a70>))], [*, CustomNode(<class 'flax.core.frozen_dict.FrozenDict'>[()], [{'params': {'Dense_0': {'bias': *, 'kernel': *}, 'Dense_1': {'bias': *, 'kernel': *}}}]), [CustomNode(namedtuple[<class 'optax._src.transform.ScaleByAdamState'>], [*, CustomNode(<class 'flax.core.frozen_dict.FrozenDict'>[()], [{'params': {'Dense_0': {'bias': *, 'kernel': *}, 'Dense_1': {'bias': *, 'kernel': *}}}]), CustomNode(<class 'flax.core.frozen_dict.FrozenDict'>[()], [{'params': {'Dense_0': {'bias': *, 'kernel': *}, 'Dense_1': {'bias': *, 'kernel': *}}}])]), CustomNode(namedtuple[<class 'flax.serialization.EmptyState'>], []), CustomNode(namedtuple[<class 'flax.serialization.EmptyState'>], [])]])) and PyTreeDef(CustomNode(<class 'flax.training.train_state.TrainState'>[(<bound method Module.apply of MLP(
# attributes
features = [4, 1]
)>, GradientTransformation(init=<function chain.<locals>.init_fn at 0x7f43f8875950>, update=<function chain.<locals>.update_fn at 0x7f43f8875a70>))], [*, CustomNode(<class 'flax.core.frozen_dict.FrozenDict'>[()], [{'params': {'Dense_0': {'bias': *, 'kernel': *}, 'Dense_1': {'bias': *, 'kernel': *}}}]), [CustomNode(namedtuple[<class 'flax.serialization.ScaleByAdamState'>], [*, CustomNode(<class 'flax.core.frozen_dict.FrozenDict'>[()], [{'params': {'Dense_0': {'bias': *, 'kernel': *}, 'Dense_1': {'bias': *, 'kernel': *}}}]), CustomNode(<class 'flax.core.frozen_dict.FrozenDict'>[()], [{'params': {'Dense_0': {'bias': *, 'kernel': *}, 'Dense_1': {'bias': *, 'kernel': *}}}])]), CustomNode(namedtuple[<class 'flax.serialization.EmptyState'>], []), CustomNode(namedtuple[<class 'flax.serialization.EmptyState'>], [])]])).
The stack trace below excludes JAX-internal frames.
The preceding is the original exception that occurred, unmodified.
--------------------
The above exception was the direct cause of the following exception:
TypeError Traceback (most recent call last)
<ipython-input-4-add6ba3b684b> in train_step(state, batch)
23 return state
24
---> 25 new_state = jax.lax.cond(True, upd_step, no_upd_step, None)
26 metrics = {"loss": loss}
27
TypeError: true_fun and false_fun output must have same type structure, got PyTreeDef(CustomNode(<class 'flax.training.train_state.TrainState'>[(<bound method Module.apply of MLP(
# attributes
features = [4, 1]
)>, GradientTransformation(init=<function chain.<locals>.init_fn at 0x7f43f8875950>, update=<function chain.<locals>.update_fn at 0x7f43f8875a70>))], [*, CustomNode(<class 'flax.core.frozen_dict.FrozenDict'>[()], [{'params': {'Dense_0': {'bias': *, 'kernel': *}, 'Dense_1': {'bias': *, 'kernel': *}}}]), [CustomNode(namedtuple[<class 'optax._src.transform.ScaleByAdamState'>], [*, CustomNode(<class 'flax.core.frozen_dict.FrozenDict'>[()], [{'params': {'Dense_0': {'bias': *, 'kernel': *}, 'Dense_1': {'bias': *, 'kernel': *}}}]), CustomNode(<class 'flax.core.frozen_dict.FrozenDict'>[()], [{'params': {'Dense_0': {'bias': *, 'kernel': *}, 'Dense_1': {'bias': *, 'kernel': *}}}])]), CustomNode(namedtuple[<class 'flax.serialization.EmptyState'>], []), CustomNode(namedtuple[<class 'flax.serialization.EmptyState'>], [])]])) and PyTreeDef(CustomNode(<class 'flax.training.train_state.TrainState'>[(<bound method Module.apply of MLP(
# attributes
features = [4, 1]
)>, GradientTransformation(init=<function chain.<locals>.init_fn at 0x7f43f8875950>, update=<function chain.<locals>.update_fn at 0x7f43f8875a70>))], [*, CustomNode(<class 'flax.core.frozen_dict.FrozenDict'>[()], [{'params': {'Dense_0': {'bias': *, 'kernel': *}, 'Dense_1': {'bias': *, 'kernel': *}}}]), [CustomNode(namedtuple[<class 'flax.serialization.ScaleByAdamState'>], [*, CustomNode(<class 'flax.core.frozen_dict.FrozenDict'>[()], [{'params': {'Dense_0': {'bias': *, 'kernel': *}, 'Dense_1': {'bias': *, 'kernel': *}}}]), CustomNode(<class 'flax.core.frozen_dict.FrozenDict'>[()], [{'params': {'Dense_0': {'bias': *, 'kernel': *}, 'Dense_1': {'bias': *, 'kernel': *}}}])]), CustomNode(namedtuple[<class 'flax.serialization.EmptyState'>], []), CustomNode(namedtuple[<class 'flax.serialization.EmptyState'>], [])]])).
```
### Steps to reproduce:
Linked [colab notebook](https://colab.research.google.com/drive/1J-mK1cWSunKCO9NgBA_Pb90PZcOTWgCi?usp=sharing) includes some workarounds. Below is code to reproduce:
```py
import jax
import jax.numpy as jnp
import flax
import flax.linen as nn
import optax
from typing import Sequence
from flax.training.train_state import TrainState
from flax.training.checkpoints import save_checkpoint, restore_checkpoint
rng = jax.random.PRNGKey(842)
rng, data_rng = jax.random.split(rng)
x = jnp.array([[x, x] for x in range(64)], dtype=jnp.float32)
y = jnp.sum(2*x + 1, axis=-1, keepdims=True)
x = x + jax.random.normal(data_rng, x.shape)
def data_gen():
yield x, y
class MLP(nn.Module):
features: Sequence[int]
@nn.compact
def __call__(self, x):
for feat in self.features[:-1]:
x = nn.relu(nn.Dense(feat)(x))
x = nn.Dense(self.features[-1])(x)
return x
model = MLP([4, 1])
params = model.init(jax.random.PRNGKey(0), x)
optimizer = optax.adamw(0.01)
optimizer = optax.MultiSteps(optimizer, 4)
state = TrainState.create(apply_fn=model.apply, params=params, tx=optimizer)
def compute_loss(params, batch):
preds = state.apply_fn(params, batch[0])
targs = batch[1]
return jnp.mean((preds - targs)**2)
grad_fn = jax.value_and_grad(compute_loss)
def train_step(state, batch):
def compute_loss(params):
preds = state.apply_fn(params, batch[0])
targs = batch[1]
return jnp.mean((preds - targs)**2)
grad_fn = jax.value_and_grad(compute_loss)
loss, grad = grad_fn(state.params)
new_state = state.apply_gradients(grads=grad)
metrics = {"loss": loss}
return new_state, metrics
train_step = jax.jit(train_step)
# train model, save checkpoint
for i in range(8):
batch = next(data_gen())
state, metrics = train_step(state, batch)
print(metrics["loss"])
save_checkpoint('./_tmp/', state, 8, overwrite=True)
# restore checkopint, resume training - fails
state = restore_checkpoint('./_tmp/', state)
for i in range(8):
batch = next(data_gen())
state, metrics = train_step(state, batch)
print(metrics["loss"])
```
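The type-structure mismatch in the traceback above appears to come down to pytree treedefs recording the concrete namedtuple class (`optax._src.transform.ScaleByAdamState` vs. the recreated `flax.serialization.ScaleByAdamState`). A minimal illustration of that mechanism, not taken from the original report:
```py
import collections
import jax

# Two structurally identical namedtuples defined as distinct classes.
StateA = collections.namedtuple('ScaleByAdamState', ['count'])
StateB = collections.namedtuple('ScaleByAdamState', ['count'])

# The treedef records the class itself, so the structures do not match,
# which is exactly what jax.lax.cond complains about above.
print(jax.tree_util.tree_structure(StateA(0)) ==
      jax.tree_util.tree_structure(StateB(0)))  # False
```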
| This is definitely a bug on our side. We are essentially recreating named tuples but I'm surprised that didn't result in trouble before. I will fix this. | 2021-07-14T11:32:55 |
google/flax | 1,451 | google__flax-1451 | [
"1234"
] | 4748dbeaed34464daff85b9e4ef1b1c7a5abe89f | diff --git a/flax/linen/linear.py b/flax/linen/linear.py
--- a/flax/linen/linear.py
+++ b/flax/linen/linear.py
@@ -199,19 +199,19 @@ class Conv(Module):
kernel_size: shape of the convolutional kernel. For 1D convolution,
the kernel size can be passed as an integer. For all other cases, it must
be a sequence of integers.
- strides: a sequence of `n` integers, representing the inter-window
- strides.
+ strides: an integer or a sequence of `n` integers, representing the
+ inter-window strides (default: 1).
padding: either the string `'SAME'`, the string `'VALID'`, or a sequence
of `n` `(low, high)` integer pairs that give the padding to apply before
and after each spatial dimension.
- input_dilation: `None`, or a sequence of `n` integers, giving the
- dilation factor to apply in each spatial dimension of `inputs`.
+ input_dilation: an integer or a sequence of `n` integers, giving the
+ dilation factor to apply in each spatial dimension of `inputs` (default: 1).
Convolution with input dilation `d` is equivalent to transposed
convolution with stride `d`.
- kernel_dilation: `None`, or a sequence of `n` integers, giving the
+ kernel_dilation: an integer or a sequence of `n` integers, giving the
dilation factor to apply in each spatial dimension of the convolution
- kernel. Convolution with kernel dilation is also known as 'atrous
- convolution'.
+ kernel (default: 1). Convolution with kernel dilation
+ is also known as 'atrous convolution'.
feature_group_count: integer, default 1. If specified divides the input
features into groups.
use_bias: whether to add a bias to the output (default: True).
@@ -222,11 +222,11 @@ class Conv(Module):
bias_init: initializer for the bias.
"""
features: int
- kernel_size: Union[int, Iterable[int]]
- strides: Optional[Iterable[int]] = None
+ kernel_size: Iterable[int]
+ strides: Union[None, int, Iterable[int]] = 1
padding: Union[str, Iterable[Tuple[int, int]]] = 'SAME'
- input_dilation: Optional[Iterable[int]] = None
- kernel_dilation: Optional[Iterable[int]] = None
+ input_dilation: Union[None, int, Iterable[int]] = 1
+ kernel_dilation: Union[None, int, Iterable[int]] = 1
feature_group_count: int = 1
use_bias: bool = True
dtype: Dtype = jnp.float32
@@ -248,16 +248,28 @@ def __call__(self, inputs: Array) -> Array:
inputs = jnp.asarray(inputs, self.dtype)
if isinstance(self.kernel_size, int):
- kernel_size = (self.kernel_size,)
+ raise TypeError('The kernel size must be specified as a'
+ ' tuple/list of integers (eg.: [3, 3]).')
else:
- kernel_size = self.kernel_size
+ kernel_size = tuple(self.kernel_size)
+
+ def maybe_broadcast(x):
+ if x is None:
+ # backward compatibility with using None as sentinel for
+ # broadcast 1
+ x = 1
+ if isinstance(x, int):
+ return (x,) * len(kernel_size)
+ return x
is_single_input = False
if inputs.ndim == len(kernel_size) + 1:
is_single_input = True
inputs = jnp.expand_dims(inputs, axis=0)
- strides = self.strides or (1,) * (inputs.ndim - 2)
+ strides = maybe_broadcast(self.strides) # self.strides or (1,) * (inputs.ndim - 2)
+ input_dilation = maybe_broadcast(self.input_dilation)
+ kernel_dilation = maybe_broadcast(self.kernel_dilation)
in_features = inputs.shape[-1]
assert in_features % self.feature_group_count == 0
@@ -272,8 +284,8 @@ def __call__(self, inputs: Array) -> Array:
kernel,
strides,
self.padding,
- lhs_dilation=self.input_dilation,
- rhs_dilation=self.kernel_dilation,
+ lhs_dilation=input_dilation,
+ rhs_dilation=kernel_dilation,
dimension_numbers=dimension_numbers,
feature_group_count=self.feature_group_count,
precision=self.precision)
| diff --git a/tests/linen/linen_linear_test.py b/tests/linen/linen_linear_test.py
--- a/tests/linen/linen_linear_test.py
+++ b/tests/linen/linen_linear_test.py
@@ -161,13 +161,12 @@ def test_dense_general_vs_numpy(self, axis, batch_dims, einsum_expr):
target = np.einsum(einsum_expr, x, initial_params['params']['kernel']) + 1.
np.testing.assert_allclose(y, target, atol=1e-6)
- @parameterized.parameters([((3,),), (3,)])
- def test_conv(self, kernel_size):
+ def test_conv(self):
rng = dict(params=random.PRNGKey(0))
x = jnp.ones((1, 8, 3))
conv_module = nn.Conv(
features=4,
- kernel_size=kernel_size,
+ kernel_size=(3,),
padding='VALID',
kernel_init=initializers.ones,
bias_init=initializers.ones,
@@ -176,13 +175,12 @@ def test_conv(self, kernel_size):
self.assertEqual(initial_params['params']['kernel'].shape, (3, 3, 4))
np.testing.assert_allclose(y, np.full((1, 6, 4), 10.))
- @parameterized.parameters([((3,),), (3,)])
- def test_single_input_conv(self, kernel_size):
+ def test_single_input_conv(self):
rng = dict(params=random.PRNGKey(0))
x = jnp.ones((8, 3))
conv_module = nn.Conv(
features=4,
- kernel_size=kernel_size,
+ kernel_size=(3,),
padding='VALID',
kernel_init=initializers.ones,
bias_init=initializers.ones,
@@ -191,13 +189,12 @@ def test_single_input_conv(self, kernel_size):
self.assertEqual(initial_params['params']['kernel'].shape, (3, 3, 4))
np.testing.assert_allclose(y, np.full((6, 4), 10.))
- @parameterized.parameters([((3,),), (3,)])
- def test_group_conv(self, kernel_size):
+ def test_group_conv(self):
rng = dict(params=random.PRNGKey(0))
x = jnp.ones((1, 8, 4))
conv_module = nn.Conv(
features=4,
- kernel_size=kernel_size,
+ kernel_size=(3,),
feature_group_count=2,
padding='VALID',
kernel_init=initializers.ones,
@@ -207,13 +204,12 @@ def test_group_conv(self, kernel_size):
self.assertEqual(initial_params['params']['kernel'].shape, (3, 2, 4))
np.testing.assert_allclose(y, np.full((1, 6, 4), 7.))
- @parameterized.parameters([((3,),), (3,)])
- def test_conv_transpose(self, kernel_size):
+ def test_conv_transpose(self):
rng = dict(params=random.PRNGKey(0))
x = jnp.ones((1, 8, 3))
conv_transpose_module = nn.ConvTranspose(
features=4,
- kernel_size=kernel_size,
+ kernel_size=(3,),
padding='VALID',
kernel_init=initializers.ones,
bias_init=initializers.ones,
@@ -232,13 +228,12 @@ def test_conv_transpose(self, kernel_size):
[ 4., 4., 4., 4.]]])
np.testing.assert_allclose(y, correct_ans)
- @parameterized.parameters([((3,),), (3,)])
- def test_single_input_conv_transpose(self, kernel_size):
+ def test_single_input_conv_transpose(self):
rng = dict(params=random.PRNGKey(0))
x = jnp.ones((8, 3))
conv_transpose_module = nn.ConvTranspose(
features=4,
- kernel_size=kernel_size,
+ kernel_size=(3,),
padding='VALID',
kernel_init=initializers.ones,
bias_init=initializers.ones,
@@ -257,6 +252,12 @@ def test_single_input_conv_transpose(self, kernel_size):
[ 4., 4., 4., 4.]])
np.testing.assert_allclose(y, correct_ans)
+ def test_int_kernel_size(self):
+ conv = nn.Conv(features=4, kernel_size=3)
+ x = jnp.ones((8, 3))
+ with self.assertRaises(TypeError):
+ conv.init(random.PRNGKey(0), x)
+
def test_embed(self):
rng = dict(params=random.PRNGKey(0))
x = jnp.arange(4)[None]
| Surprising behaviour for integer kernel_size in linen.Conv
I was quite surprised by how `linen.Conv` treats an `int` argument for `kernel_size`
```
key1, key2 = jax.random.split(jax.random.PRNGKey(0), 2)
image = jax.random.normal(key1, (8, 256, 256, 3))
conv = flax.linen.Conv(features=48, kernel_size=5)
params = conv.init(key2, image)
```
This errors on the last line with
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
google3/third_party/py/numpy/core/fromnumeric.py in _wrapfunc(obj, method, *args, **kwds)
55 try:
---> 56 return getattr(obj, method)(*args, **kwds)
57
AttributeError: 'tuple' object has no attribute 'take'
During handling of the above exception, another exception occurred:
IndexError Traceback (most recent call last)
11 frames
google3/third_party/py/numpy/core/fromnumeric.py in _wrapit(obj, method, *args, **kwds)
44 except AttributeError:
45 wrap = None
---> 46 result = getattr(asarray(obj), method)(*args, **kwds)
47 if wrap:
48 if not isinstance(result, mu.ndarray):
IndexError: index 3 is out of bounds for size 3
```
And I can see it creates a kernel for only 1 spatial dimension.
So it seems to treat `5` as `(5,)`.
Changing to `conv = flax.linen.Conv(features=48, kernel_size=(5,5))` fixes the error.
Overall I find:
a) the error is cryptic
b) if this is the desired behaviour, perhaps it should only accept kernel_size as a sequence of ints?
| Your input is of shape `(8, 256, 256, 3)`, and when you specify a 1D kernel you are applying a 1D convolution. This doesn't work on your input shape, which expects 2D convolutions (you have 2 spatial dimensions). So you can fix this by reducing your input to one spatial dimension:
```python
from flax import linen as nn
import jax
key1, key2 = jax.random.split(jax.random.PRNGKey(0), 2)
image = jax.random.normal(key1, (8, 256, 4))
conv = nn.Conv(features=48, kernel_size=3)
params = conv.init(key2, image)
```
Closing this for now but please re-open if you think I missed something!
I think we want to keep this open as we want to get rid of flax's magic integer --> tuple conversion here | 2021-07-22T08:12:41 |
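Based on the patch above, an integer `kernel_size` now raises a `TypeError`, while integer `strides`/dilations are broadcast over the kernel dimensions. A small illustrative sketch (not from the original thread):
```python
import jax
import jax.numpy as jnp
import flax.linen as nn

x = jnp.ones((8, 256, 256, 3))
# kernel_size must be given as a tuple/list; strides=2 broadcasts to (2, 2).
conv = nn.Conv(features=48, kernel_size=(5, 5), strides=2)
params = conv.init(jax.random.PRNGKey(0), x)

# nn.Conv(features=48, kernel_size=5) would raise a TypeError instead of
# silently building a 1D kernel.
```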
google/flax | 1,457 | google__flax-1457 | [
"1455"
] | 1a117d0aa0b9491d0abbd77e003ffe7de508cc49 | diff --git a/flax/core/lift.py b/flax/core/lift.py
--- a/flax/core/lift.py
+++ b/flax/core/lift.py
@@ -17,6 +17,7 @@
import collections
from dataclasses import dataclass
import functools
+import warnings
import jax
@@ -74,13 +75,17 @@ def pack(fn: Callable[..., Any],
in_variable_filters: Sequence[CollectionFilter],
out_variable_filters: Sequence[CollectionFilter],
rng_filters: Sequence[PRNGSequenceFilter],
- name=None) -> Callable[..., Any]:
+ name=None,
+ enable_kwargs=False) -> Callable[..., Any]:
"""Pack variables and rngs for functional transformations.
The pack function is the building block for all other lifted transformations.
"""
@functools.wraps(fn)
- def wrapper(scope_tree: Scope, *args):
+ def wrapper(scope_tree: Scope, *args, **kwargs):
+ if not enable_kwargs and kwargs:
+ msg = 'kwargs are not supported in {}, so \"{}\" is(are) ignored'
+ warnings.warn(msg.format(name, ', '.join(kwargs.keys())), RuntimeWarning)
# pylint: disable=protected-access
scopes, treedef = jax.tree_flatten(scope_tree)
scopes, paths = _dedup_scopes(scopes)
@@ -174,10 +179,16 @@ def repack(inner_scope_tree):
return _transpose(out_variable_groups_xs)
try:
- y, out_variable_groups_xs_t = fn(
- scope_fn, repack,
- variable_groups_xs_t, rng_groups_xs_t,
- *args)
+ if enable_kwargs:
+ y, out_variable_groups_xs_t = fn(
+ scope_fn, repack,
+ variable_groups_xs_t, rng_groups_xs_t,
+ *args, **kwargs)
+ else:
+ y, out_variable_groups_xs_t = fn(
+ scope_fn, repack,
+ variable_groups_xs_t, rng_groups_xs_t,
+ *args)
finally:
for inner_scope in inner_scopes:
inner_scope.invalidate()
@@ -672,16 +683,16 @@ def checkpoint(fn: Callable[..., Any],
A wrapped version of ``fn``. When computing gradients intermediate
computations will be re-computed when computing gradients.
"""
- def inner(scope_fn, repack_fn, variable_groups, rng_groups, *args):
+ def inner(scope_fn, repack_fn, variable_groups, rng_groups, *args, **kwargs):
@functools.partial(jax.remat, concrete=concrete, prevent_cse=prevent_cse)
@functools.wraps(fn)
- def rematted(variable_groups, rng_groups, *args):
+ def rematted(variable_groups, rng_groups, *args, **kwargs):
scope = scope_fn(variable_groups, rng_groups)
- y = fn(scope, *args)
+ y = fn(scope, *args, **kwargs)
return y, repack_fn(scope)
- return rematted(variable_groups, rng_groups, *args)
- return pack(inner, (variables,), (variables,), (rngs,), name='remat')
+ return rematted(variable_groups, rng_groups, *args, **kwargs)
+ return pack(inner, (variables,), (variables,), (rngs,), name='remat', enable_kwargs=True)
remat = checkpoint
| diff --git a/tests/linen/linen_transforms_test.py b/tests/linen/linen_transforms_test.py
--- a/tests/linen/linen_transforms_test.py
+++ b/tests/linen/linen_transforms_test.py
@@ -121,6 +121,19 @@ def test_remat_decorated(self):
self.assertTrue(np.all(y1 == y2))
+ def test_remat_kwargs(self):
+ class ConditionalReLU(nn.Module):
+ @nn.compact
+ def __call__(self, input, apply_relu : bool = False):
+ return nn.relu(input) if apply_relu else input
+ key = random.PRNGKey(0)
+ x = jnp.ones((4, 4)) * -1
+ remat_model = nn.remat(ConditionalReLU)()
+ p = remat_model.init(key, x)
+ y = remat_model.apply(p, x, apply_relu=True)
+
+ self.assertTrue(np.all(y == jnp.zeros_like(x)))
+
def test_vmap(self):
key1, key2 = random.split(random.PRNGKey(3), 2)
x = random.uniform(key1, (4, 4))
| remat: wrapper() got an unexpected keyword argument 'use_running_average'
### Problem you have encountered:
The transformed module returned by `remat` does not accept the same keyword arguments as the original module when used as a submodule in a `@compact`-decorated `__call__` method.
### What you expected to happen:
The transformed module accepts the same keyword arguments as the original.
### Logs, error messages, etc:
```
TypeError: wrapper() got an unexpected keyword argument 'use_running_average'
```
### Steps to reproduce:
Whenever possible, please provide a *minimal example*. Please consider submitting it as a Colab link.
```
import jax
import flax.linen as linen
from jax import numpy as jnp
from typing import Optional
class MyModule(linen.Module):
@linen.compact
def __call__(self, x, train: Optional[bool]=True):
return linen.remat(linen.BatchNorm)()(x, use_running_average=not train)
model = MyModule()
key = jax.random.PRNGKey(0)
variables = model.init(key, jnp.ones((10,), jnp.float32))
```
https://colab.research.google.com/drive/1JsmxSn4Msor5D6G5XHpokfHOFlUcsXIJ?usp=sharing
| Same. This seems to be because the wrapper function that pack [here](https://github.com/google/flax/blob/095517e679d1687b13e106354e966e418756e535/flax/core/lift.py#L73) returns (see L83 in the same file) does not accept keyword arguments. When I manually add **kwargs to the parameter list of the wrapper (L180, 666, 671, and 674), the problem is addressed | 2021-07-27T03:39:49 |
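The diagnosis above is ordinary Python decorator behaviour: a wrapper that only takes `*args` drops keyword arguments, while forwarding `**kwargs` (as the patch does for `remat` via `enable_kwargs`) makes the call work. A simplified sketch, not the actual Flax code:
```python
import functools

def lift(fn):
  @functools.wraps(fn)
  def wrapper(*args):            # keyword arguments are not accepted here
    return fn(*args)
  return wrapper

def lift_with_kwargs(fn):
  @functools.wraps(fn)
  def wrapper(*args, **kwargs):  # forwarding **kwargs avoids the TypeError
    return fn(*args, **kwargs)
  return wrapper

def apply(x, use_running_average=False):
  return x if use_running_average else x * 2

# lift(apply)(1, use_running_average=True)            # TypeError: unexpected keyword argument
lift_with_kwargs(apply)(1, use_running_average=True)  # works
```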
google/flax | 1,475 | google__flax-1475 | [
"1467"
] | 1a24c4d5d8facc9c42275fea31fd64f679149915 | diff --git a/flax/linen/module.py b/flax/linen/module.py
--- a/flax/linen/module.py
+++ b/flax/linen/module.py
@@ -262,39 +262,9 @@ def wrapped_module_method(*args, **kwargs):
# otherwise call the wrapped function as is.
if args and isinstance(args[0], Module):
self, args = args[0], args[1:]
+ return self._call_wrapped_method(fun, args, kwargs)
else:
return fun(*args, **kwargs)
- is_compact_method = hasattr(fun, 'compact')
- is_setup_method = fun.__name__ == 'setup'
- # We lazily call setup() only when needed.
- if is_setup_method:
- is_recurrent = self._state.in_setup
- self._state.in_setup = True
- else:
- self._try_setup()
-
- if is_compact_method:
- if self.scope is None:
- raise errors.CallCompactUnboundModuleError()
- is_recurrent = self._state.in_compact_method
- self._state.in_compact_method = True
- _context.module_stack.append(self)
- try:
- y = fun(self, *args, **kwargs)
- if _context.capture_stack:
- filter_fn = _context.capture_stack[-1]
- if filter_fn and filter_fn(self, fun.__name__):
- self.sow('intermediates', fun.__name__, y)
- return y
- finally:
- _context.module_stack.pop()
- if is_compact_method:
- object.__setattr__(self, 'scope', self.scope.rewound())
- # setup or compact calls can be recurrent for example due to super calls
- # resetting the state would cause is compact/setup method
- # to be set to False prematurely.
- if (is_compact_method or is_setup_method) and not is_recurrent:
- self._state.reset()
wrapped_module_method.method_handler_wrapped = True
return wrapped_module_method
@@ -523,6 +493,46 @@ def _wrap_module_methods(cls):
setattr(cls, key, wrapped_method)
return cls
+ def _call_wrapped_method(self, fun, args, kwargs):
+ """"Calls a wrapped method.
+
+ This function is responsible for setting up the thread local state
+ correctly before calling the method and cleaning up afterwards.
+ This includes storing intermediates, setup of the compact scope,
+ and making sure setup is called before any other method.
+ """
+ is_compact_method = hasattr(fun, 'compact')
+ is_setup_method = fun.__name__ == 'setup'
+ # We lazily call setup() only when needed.
+ if is_setup_method:
+ is_recurrent = self._state.in_setup
+ self._state.in_setup = True
+ else:
+ self._try_setup()
+
+ if is_compact_method:
+ if self.scope is None:
+ raise errors.CallCompactUnboundModuleError()
+ is_recurrent = self._state.in_compact_method
+ self._state.in_compact_method = True
+ _context.module_stack.append(self)
+ try:
+ y = fun(self, *args, **kwargs)
+ if _context.capture_stack:
+ filter_fn = _context.capture_stack[-1]
+ if filter_fn and filter_fn(self, fun.__name__):
+ self.sow('intermediates', fun.__name__, y)
+ return y
+ finally:
+ _context.module_stack.pop()
+ if is_compact_method:
+ object.__setattr__(self, 'scope', self.scope.rewound())
+ # setup or compact calls can be recurrent for example due to super calls
+ # resetting the state would cause is compact/setup method
+ # to be set to False prematurely.
+ if (is_compact_method or is_setup_method) and not is_recurrent:
+ self._state.reset()
+
def __setattr__(self, name: str, val: Any):
"""Sets an attribute on this Module.
| Cannot pickle linen Modules
I am using `0.3.4` and I am getting an error when trying to pickle flax modules; specifically, `Dense` seems to be the problem, but others might have similar issues.
### Problem you have encountered:
```python
from flax import linen
import pickle
with open("model.pkl", "wb") as f:
model = pickle.dump(linen.Dense(10), f)
```
> Traceback (most recent call last):
File "test.py", line 8, in <module>
model = pickle.dump(linen.Dense(10), f)
AttributeError: Can't pickle local object 'variance_scaling.<locals>.init'
While the previous is solved with `cloudpickle`, this other code doesn't work:
```python
import cloudpickle
from flax import linen
import pickle
class IndentityFlax(linen.Module):
def __call__(self, x):
return x
with open("mlp.pkl", "wb") as f:
cloudpickle.dump(IndentityFlax(), f)
```
> Traceback (most recent call last):
File "test.py", line 25, in <module>
cloudpickle.dump(IndentityFlax(), f)
File "/data/cristian/elegy/.venv/lib/python3.8/site-packages/cloudpickle/cloudpickle_fast.py", line 55, in dump
CloudPickler(
File "/data/cristian/elegy/.venv/lib/python3.8/site-packages/cloudpickle/cloudpickle_fast.py", line 563, in dump
return Pickler.dump(self, obj)
TypeError: cannot pickle '_thread._local' object
| Ok, this exact example seems to work with `cloudpickle` but I am getting another error serializing a `flax.linen.Module` object. I will try to get a reproducible example.
So I found a minimal example that doesn't work with `cloudpickle`, which seems to be what is affecting me in my actual problem, see updated issue.
(cloud)pickle issues are a little tricky. For some reason cloudpickle tries to serialize Flax internals. I spent some time looking into it a while back, but my main issue with cloudpickle is that there doesn't seem to be a specification of their algorithm and of course the implementation is black-magic Python. I think the minimal thing we need to officially support a library like cloudpickle is a guide that explains what constraints we should adhere to in order to support cloudpickle. Perhaps something like this does exist but I couldn't find anything last time I looked for it.
You could of course also raise an issue with the cloudpickle team to see if this is even expected behavior from their side in the first place.
@jheek Do you happen to know which Flax internal object it's trying to serialize? I am a bit hesitant to ping cloudpickle without a reproducible example that doesn't involve a whole library (flax) as part of it.
If flax users are not using (cloud)pickle, what is the current recommended way to serialize flax models?
Yeah I agree we should try to minimize the repro. I tried out your pickle example and I was able to remove Flax from the equation:
```
init_fn = jax.nn.initializers.lecun_normal()
with open("model.pkl", "wb") as f:
model = pickle.dump(init_fn, f)
```
So here it's really JAX that is producing a partial function that cannot be pickled.
For cloudpickle I'm not so sure what's going on but it essentially finds an internal ThreadLocal object and decides that it needs to serialize it. This I think doesn't make sense. After all, only library methods touch this object (which it shouldn't serialize) and the ThreadLocal object itself is defined top-level in the module, so again it shouldn't try to serialize this object. This pattern of having state in a ThreadLocal object is quite common in Python, so I think this should really be fixed in cloudpickle, but perhaps I'm overlooking some edge case in how we implemented this in Flax.
@jheek thanks for looking into this!
It's very weird, as you say, since both `elegy` and `haiku` Modules use `ThreadLocal` but only `flax` is having issues with `cloudpickle`. I am more interested in `cloudpickle` than `pickle` since it's generally more robust, and pickle doesn't work for `haiku` and `elegy` either, so it's not really an option.
I will send a PR to `flax` with a test using `cloudpickle` to make this effort a little more formal and maybe others can try to give it a shot if that is OK with the flax team.
I am curious indeed why other libraries that use `ThreadLocal` don't have this problem...
@cgarciae I found the issue. cloudpickle will not serialize functions that are part of a library but it does serialize other globals. I guess this is a python limitation (missing __qualname__ I suppose?). We use the threadlocal inside a decorator function which will get serialized. All we have to do is factor out the body of the decorator into a method so cloudpickle doesn't serialize its closure variables. I'm working on a PR | 2021-08-04T12:51:21 |
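A rough sketch of the refactor idea described here (hypothetical names, not the actual Flax code): the decorator's wrapper only forwards to a method on the instance instead of doing the work in its own closure, which keeps the function that has to be serialized small and free of module internals:
```python
import functools

def wrap_method(fun):
  # Before: the wrapper body touched module-level state directly, so pickling
  # a wrapper dragged that state into its closure.
  # After: the wrapper only delegates to a method on the instance.
  @functools.wraps(fun)
  def wrapper(self, *args, **kwargs):
    return self._call_wrapped_method(fun, args, kwargs)
  return wrapper

class Module:
  def _call_wrapped_method(self, fun, args, kwargs):
    # state handling (thread-locals, scopes, ...) lives here instead of
    # inside the decorator's closure
    return fun(self, *args, **kwargs)
```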
|
google/flax | 1,511 | google__flax-1511 | [
"1495"
] | 68ce7afea8bf4f07715dad6e3551409da84e4e41 | diff --git a/flax/linen/linear.py b/flax/linen/linear.py
--- a/flax/linen/linear.py
+++ b/flax/linen/linear.py
@@ -399,7 +399,7 @@ class Embed(Module):
dtype: Dtype = jnp.float32
embedding_init: Callable[[PRNGKey, Shape, Dtype], Array] = default_embed_init
- embedding: Array = field(init=False)
+ embedding: Array = field(init=False, compare=False, repr=False)
def setup(self):
self.embedding = self.param('embedding',
| diff --git a/tests/linen/linen_linear_test.py b/tests/linen/linen_linear_test.py
--- a/tests/linen/linen_linear_test.py
+++ b/tests/linen/linen_linear_test.py
@@ -287,6 +287,10 @@ def test_embed_numpy(self):
np.testing.assert_allclose(y, dummy_embedding[None])
z = embed_module.apply(initial_params, jnp.ones((3,)), method=embed_module.attend)
np.testing.assert_allclose(z, 3. * jnp.arange(4))
+
+ def test_embed_hash(self):
+ self.assertEqual(hash(nn.Embed(2, 3)), hash(nn.Embed(2, 3)))
+ self.assertNotEqual(hash(nn.Embed(3, 4)), hash(nn.Embed(2, 3)))
def test_non_final_axis(self):
class Foo(nn.Module):
| nn.Embed cannot be hashed -> doesn't work with jax.jit static_argnums
### Problem you have encountered:
There is an issue with hashing of `nn.Embed`, which means it cannot be passed as a static argument to functions decorated with `jax.jit`. An example situation is when one wishes to have a `train_step` function which is generic over the actual network executed: when you try to pass the model as a static argument, it works with modules like `nn.Dense` but not `nn.Embed`.
### What you expected to happen:
`jax.jit` to work with static arguments including `nn.Embed`.
### Steps to reproduce:
[This](https://colab.research.google.com/drive/1njsRFfwOM7bdm15zE7jS_73YpQ-jdqCv?usp=sharing) may contain some superfluous code (`optax` and stuff) but I hope it conveys the idea clearly enough.
| In Flax, we would not usually pass around function references as static argnums, but instead pass them in as part of a PyTree with the annotation that they should not be transformed.
In your case, the simplest solution would be to extend `TrainState` and add the `embed_apply_fn` attribute with that annotation:
```python
from typing import Callable
from flax import struct
class TrainState(train_state.TrainState):
embed_apply_fn: Callable = struct.field(pytree_node=False)
```
Then you can initialize the state like this:
```python
state = TrainState.create(
apply_fn=model.apply,
embed_apply_fn=embed.apply,
params=params,
tx=optax.adam(1e-3),
)
```
Which will reduce the parameter count for your `train_step()` that now simply becomes
```python
@jax.jit
def train_step(state, i):
def loss_fn(params):
y = state.embed_apply_fn(params['embed'], i)
x = state.apply_fn(params['model'], y)
# ...
```
As for a minimal repro we could say
```python
import flax
hash(flax.linen.Dense(10)) # Works
hash(flax.linen.Embed(2, 3)) # Fails
```
The difference is due to a field that is not initialized and then the `dataclass`-generated `__hash__` function fails...
https://github.com/google/flax/blob/e30b7f5fff03df0840e7da40a9f8923aee6fb42b/flax/linen/linear.py#L402
As shown by
```python
embed = flax.linen.Embed(2, 3)
object.__setattr__(embed, 'embedding', None)
hash(embed) # Works
```
Tagging @jheek here, who introduced the above `embedding: Array = field(init=False)` in #643
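For reference, the fix in the patch above relies on standard `dataclasses` behaviour: a field with `compare=False` is excluded from the generated `__eq__`/`__hash__`, so an unset `init=False` field no longer breaks hashing. A generic illustration (plain dataclasses, not Flax):
```python
import dataclasses

@dataclasses.dataclass(frozen=True)
class Broken:
  n: int
  cache: int = dataclasses.field(init=False)  # never set -> __hash__ raises AttributeError

@dataclasses.dataclass(frozen=True)
class Fixed:
  n: int
  cache: int = dataclasses.field(init=False, compare=False, repr=False)

# hash(Broken(1))  # AttributeError: 'Broken' object has no attribute 'cache'
hash(Fixed(1))     # works: `cache` is ignored by the generated __eq__/__hash__
```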
@andsteing thanks, that looks like a solution. May I ask for the rationale behind adopting this pattern though? I'm thinking of pytrees as a way to store the state of computation and while it may be convenient to be able to have non-transformed fields for some edge cases, the approach above feels to me like a hack. After all, if we put both the state and implementation in pytrees, what is the purpose of nn.Modules? Should I think of them as just a factory function, used to generate the pytree which then contains the entire API of my model?
Secondly, how does the non-transformed property play with jax.jit? After all, these apply_xyz functions are what we are looking to transform with jit. The approach you're proposing requires jax to figure out the code is static even though it's passed through a field we don't annotate as such. Are functions special-cased as always static? After all, they may have closed over arbitrary mutable state.
I'm sorry if I sound critical, I'm just trying to align my intuition about how to use flax with that of its creators. Thank you very much.
Yes, it's a convenience way of passing a mix of parameters and functions through transformations like `jit()` and `pmap()` - note that even though you don't specify `apply_fn` you're already making use of this pattern when calling `state.apply_gradients()` which uses `state.tx` internally:
https://github.com/google/flax/blob/e30b7f5fff03df0840e7da40a9f8923aee6fb42b/flax/training/train_state.py#L55
There is some discussion about this pattern in [FLIP 1009](https://github.com/google/flax/blob/main/docs/flip/1009-optimizer-api.md), where you can also see alternatives.
There is nothing wrong about passing in all the functions as static argnums (or referring to them through an outer scope), but it can become quite verbose and that's why we prefer this dataclass-transform/notransform pattern in our projects (e.g. [our examples](https://flax.readthedocs.io/en/latest/examples.html)).
As for the purpose of `nn.Module`, after having things set up and initialized, most modules are really only used through `.apply_fn()` - not a factory pattern in the classic sense, but for many modules (like `Dense` and `Embed`) you could see the whole `nn.Module` machinery (that allows to nest modules, sets up and tracks scope, updates RNG key chains, stores parameters etc) "producing" a single function in the end (or two in the case of `Embed`).
As for your second question, your function can indeed close over arbitrary mutable state, and that's a bad idea regardless of whether you pass it via `static_argnums` or via a pytree dataclass field that has `pytree_node=False`. JAX *expects you* to transform pure functions, and that includes all functions you call from inside those transformed functions, regardless of how they're passed into the function - if you're not transforming pure functions you're breaking the contract and there are no guarantees as to what your transformed functions will actually do (in some cases you might get an error transforming such a function, but in many cases JAX will silently comply).
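A minimal illustration of that last point (not from the thread): a jitted function that closes over a Python value bakes it in at trace time, so later mutations are silently ignored as long as the cached trace is reused:
```python
import jax

scale = 2.0

@jax.jit
def f(x):
  return scale * x   # `scale` is captured as a constant when f is traced

print(f(1.0))  # 2.0
scale = 10.0
print(f(1.0))  # still 2.0 -- the cached, compiled trace never sees the new value
```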
Thanks once again. I suppose I leave this issue open in case @jhee decides there's something to be changed about nn.Embed but on my side the issue is resolved.
@jheek - see above request for comment from jatentaki (your handle was mis-spelled) | 2021-08-31T09:25:59 |
google/flax | 1,525 | google__flax-1525 | [
"62"
] | f75286649161a2318ab468f31116ac450da85d4c | diff --git a/flax/optim/base.py b/flax/optim/base.py
--- a/flax/optim/base.py
+++ b/flax/optim/base.py
@@ -30,6 +30,9 @@
from ..core import FrozenDict, unfreeze
+# Backwards compatibility symbol import.
+ModelParamTraversal = traverse_util.ModelParamTraversal
+
@struct.dataclass
class OptimizerState:
@@ -416,17 +419,6 @@ def restore_state(self, target, opt_state, state_dict):
return self.optimizer_def.restore_state(target, opt_state, state_dict)
-def _get_params_dict(inputs):
- if isinstance(inputs, base.Model):
- return inputs.params
- elif isinstance(inputs, (dict, FrozenDict)):
- return unfreeze(inputs)
- else:
- raise ValueError(
- 'Can only traverse a flax Model instance or a nested dict, not '
- f'{type(inputs)}')
-
-
@dataclasses.dataclass
class _ShapeDtype:
shape: Any
@@ -442,23 +434,24 @@ def create(cls, value):
class MultiOptimizer(OptimizerDef):
- """
- A MultiOptimizer is subclass of :class:`OptimizerDef` and useful for applying
- separate optimizer algorithms to various subsets of the model parameters.
-
- The example below creates two optimizers using :class:`ModelParamTraversal`:
+ """
+ A MultiOptimizer is subclass of :class:`OptimizerDef` and useful for applying
+ separate optimizer algorithms to various subsets of the model parameters.
+
+ The example below creates two optimizers using
+ :class:`flax.traverse_util.ModelParamTraversal`:
one to optimize ``kernel`` parameters and to optimize ``bias`` parameters.
Note each optimizer is created with a different learning rate::
- kernels = optim.ModelParamTraversal(lambda path, _: 'kernel' in path)
- biases = optim.ModelParamTraversal(lambda path, _: 'bias' in path)
+ kernels = traverse_util.ModelParamTraversal(lambda path, _: 'kernel' in path)
+ biases = traverse_util.ModelParamTraversal(lambda path, _: 'bias' in path)
kernel_opt = optim.Momentum(learning_rate=0.01)
bias_opt = optim.Momentum(learning_rate=0.1)
opt_def = MultiOptimizer((kernels, kernel_opt), (biases, bias_opt))
optimizer = opt_def.create(model)
In order to train only a subset of the parameters, you can simply use a single
- :class:`ModelParamTraversal` instance.
+ :class:`flax.traverse_util.ModelParamTraversal` instance.
If you want to update the learning rates of both optimizers online with
different learning rate schedules, you should update the learning rates when
@@ -467,9 +460,9 @@ class MultiOptimizer(OptimizerDef):
hparams = optimizer.optimizer_def.hyper_params
new_optimizer = optimizer.apply_gradient(
- grads,
+ grads,
hyper_params=[
- hparams[0].replace(learning_rate=0.2),
+ hparams[0].replace(learning_rate=0.2),
hparams[1].replace(learning_rate=jnp.where(step < 1000, 0., lr)),
])
"""
@@ -546,63 +539,3 @@ def update_hyper_params(self, **hyper_param_overrides):
if hyper_param_overrides:
hps = [hp.replace(**hyper_param_overrides) for hp in hps]
return hps
-
-
-def _sorted_items(x):
- """Returns items of a dict ordered by keys."""
- return sorted(x.items(), key=lambda x: x[0])
-
-
-class ModelParamTraversal(traverse_util.Traversal):
- """Select model parameters using a name filter.
-
- This traversal operates on a nested dictionary of parameters and selects a
- subset based on the `filter_fn` argument.
-
- See :class:`MultiOptimizer` for an example of how to use
- :class:`ModelParamTraversal` to update subsets of the parameter tree with a
- specific optimizer.
-
- Backward compatibility:
- When using the old api the parameters can be encapsulated in a
- :class:`flax.nn.Model` instance.
- """
-
- def __init__(self, filter_fn):
- """Constructor a new ModelParamTraversal.
-
- Args:
- filter_fn: a function that takes a parameter's full name and its value and
- returns whether this parameter should be selected or not. The name of a
- parameter is determined by the module hierarchy and the parameter name
- (for example: '/module/sub_module/parameter_name').
- """
- self._filter_fn = filter_fn
-
- def iterate(self, inputs):
- params = _get_params_dict(inputs)
- flat_dict = traverse_util.flatten_dict(params)
- for key, value in _sorted_items(flat_dict):
- path = '/' + '/'.join(key)
- if self._filter_fn(path, value):
- yield value
-
- def update(self, fn, inputs):
- params = _get_params_dict(inputs)
- flat_dict = traverse_util.flatten_dict(params, keep_empty_nodes=True)
- new_dict = {}
- for key, value in _sorted_items(flat_dict):
- # empty_node is not an actual leave. It's just a stub for empty nodes
- # in the nested dict.
- if value is not traverse_util.empty_node:
- path = '/' + '/'.join(key)
- if self._filter_fn(path, value):
- value = fn(value)
- new_dict[key] = value
- new_params = traverse_util.unflatten_dict(new_dict)
- if isinstance(inputs, base.Model):
- return inputs.replace(params=new_params)
- elif isinstance(inputs, FrozenDict):
- return FrozenDict(new_params)
- else:
- return new_params
diff --git a/flax/struct.py b/flax/struct.py
--- a/flax/struct.py
+++ b/flax/struct.py
@@ -12,21 +12,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-
-# Copyright 2020 The Flax Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
"""Utilities for defining custom classes that can be used with jax transformations.
"""
diff --git a/flax/traverse_util.py b/flax/traverse_util.py
--- a/flax/traverse_util.py
+++ b/flax/traverse_util.py
@@ -12,21 +12,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-
-# Copyright 2020 The Flax Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
"""A utility for traversing immutable datastructures.
A Traversal can be used to iterate and update complex data structures.
@@ -60,12 +45,12 @@
import dataclasses
import jax
+import flax
from . import struct
-# the empty node is a struct.dataclass to
-# be compatible with JAX.
+# the empty node is a struct.dataclass to be compatible with JAX.
@struct.dataclass
class _EmptyNode:
pass
@@ -381,3 +366,74 @@ def update(self, fn, inputs):
def iterate(self, inputs):
yield from jax.tree_leaves(inputs)
+
+
+def _get_params_dict(inputs):
+ if isinstance(inputs, flax.nn.Model):
+ return inputs.params
+ elif isinstance(inputs, (dict, flax.core.FrozenDict)):
+ return flax.core.unfreeze(inputs)
+ else:
+ raise ValueError(
+ 'Can only traverse a flax Model instance or a nested dict, not '
+ f'{type(inputs)}')
+
+
+def _sorted_items(x):
+ """Returns items of a dict ordered by keys."""
+ return sorted(x.items(), key=lambda x: x[0])
+
+
+class ModelParamTraversal(Traversal):
+ """Select model parameters using a name filter.
+
+ This traversal operates on a nested dictionary of parameters and selects a
+ subset based on the `filter_fn` argument.
+
+ See :class:`flax.optim.MultiOptimizer` for an example of how to use
+ :class:`ModelParamTraversal` to update subsets of the parameter tree with a
+ specific optimizer.
+
+ Backward compatibility:
+ When using the old api the parameters can be encapsulated in a
+ :class:`flax.nn.Model` instance.
+ """
+
+ def __init__(self, filter_fn):
+ """Constructor a new ModelParamTraversal.
+
+ Args:
+ filter_fn: a function that takes a parameter's full name and its value and
+ returns whether this parameter should be selected or not. The name of a
+ parameter is determined by the module hierarchy and the parameter name
+ (for example: '/module/sub_module/parameter_name').
+ """
+ self._filter_fn = filter_fn
+
+ def iterate(self, inputs):
+ params = _get_params_dict(inputs)
+ flat_dict = flatten_dict(params)
+ for key, value in _sorted_items(flat_dict):
+ path = '/' + '/'.join(key)
+ if self._filter_fn(path, value):
+ yield value
+
+ def update(self, fn, inputs):
+ params = _get_params_dict(inputs)
+ flat_dict = flatten_dict(params, keep_empty_nodes=True)
+ new_dict = {}
+ for key, value in _sorted_items(flat_dict):
+ # empty_node is not an actual leave. It's just a stub for empty nodes
+ # in the nested dict.
+ if value is not empty_node:
+ path = '/' + '/'.join(key)
+ if self._filter_fn(path, value):
+ value = fn(value)
+ new_dict[key] = value
+ new_params = unflatten_dict(new_dict)
+ if isinstance(inputs, flax.nn.base.Model):
+ return inputs.replace(params=new_params)
+ elif isinstance(inputs, flax.core.FrozenDict):
+ return flax.core.FrozenDict(new_params)
+ else:
+ return new_params
| diff --git a/tests/optim_test.py b/tests/optim_test.py
--- a/tests/optim_test.py
+++ b/tests/optim_test.py
@@ -113,58 +113,6 @@ def test_empty_optimizer(self):
self.assertEqual(new_optimizer.state, expected_state)
-class ModelParamTraversalTest(absltest.TestCase):
-
- def test_only_works_on_model_params(self):
- traversal = optim.ModelParamTraversal(lambda *_: True)
- with self.assertRaises(ValueError):
- list(traversal.iterate([]))
-
- def test_param_selection(self):
- params = {
- 'x': {
- 'kernel': 1,
- 'bias': 2,
- 'y': {
- 'kernel': 3,
- 'bias': 4,
- },
- 'z': {},
- },
- }
- expected_params = {
- 'x': {
- 'kernel': 2,
- 'bias': 2,
- 'y': {
- 'kernel': 6,
- 'bias': 4,
- },
- 'z': {}
- },
- }
- names = []
- def filter_fn(name, _):
- names.append(name) # track names passed to filter_fn for testing
- return 'kernel' in name
- traversal = optim.ModelParamTraversal(filter_fn)
-
- # Model
- model = nn.Model(None, params)
- values = list(traversal.iterate(model))
- configs = [
- (nn.Model(None, params), nn.Model(None, expected_params)),
- (params, expected_params),
- (FrozenDict(params), FrozenDict(expected_params)),
- ]
- for model, expected_model in configs:
- self.assertEqual(values, [1, 3])
- self.assertEqual(set(names), set([
- '/x/kernel', '/x/bias', '/x/y/kernel', '/x/y/bias']))
- new_model = traversal.update(lambda x: x + x, model)
- self.assertEqual(new_model, expected_model)
-
-
class MultiOptimizerTest(absltest.TestCase):
def test_multi_optimizer(self):
@@ -200,10 +148,10 @@ def test_multi_optimizer_multiple_matches(self):
params = {'a': {'x': 0., 'y': 0.}, 'b': {'y': 0, 'z': 0.}}
opt_a = optim.GradientDescent(learning_rate=1.)
opt_b = optim.GradientDescent(learning_rate=10.)
- t_a = optim.ModelParamTraversal(
+ t_a = traverse_util.ModelParamTraversal(
lambda path, _: path.endswith('/x') or path.endswith('/y')
)
- t_b = optim.ModelParamTraversal(
+ t_b = traverse_util.ModelParamTraversal(
lambda path, value: value.dtype == jnp.int32 or path.endswith('/z')
)
optimizer_def = optim.MultiOptimizer((t_a, opt_a), (t_b, opt_b))
diff --git a/tests/traverse_util_test.py b/tests/traverse_util_test.py
--- a/tests/traverse_util_test.py
+++ b/tests/traverse_util_test.py
@@ -16,11 +16,9 @@
import collections
-
from absl.testing import absltest
-
+import flax
from flax import traverse_util
-
import jax
# Parse absl flags test_srcdir and test_tmpdir.
@@ -187,5 +185,58 @@ def test_flatten_dict_is_leaf(self):
xs_restore = traverse_util.unflatten_dict(flat_xs)
self.assertEqual(xs, xs_restore)
+
+class ModelParamTraversalTest(absltest.TestCase):
+
+ def test_only_works_on_model_params(self):
+ traversal = traverse_util.ModelParamTraversal(lambda *_: True)
+ with self.assertRaises(ValueError):
+ list(traversal.iterate([]))
+
+ def test_param_selection(self):
+ params = {
+ 'x': {
+ 'kernel': 1,
+ 'bias': 2,
+ 'y': {
+ 'kernel': 3,
+ 'bias': 4,
+ },
+ 'z': {},
+ },
+ }
+ expected_params = {
+ 'x': {
+ 'kernel': 2,
+ 'bias': 2,
+ 'y': {
+ 'kernel': 6,
+ 'bias': 4,
+ },
+ 'z': {}
+ },
+ }
+ names = []
+ def filter_fn(name, _):
+ names.append(name) # track names passed to filter_fn for testing
+ return 'kernel' in name
+ traversal = traverse_util.ModelParamTraversal(filter_fn)
+
+ # Model
+ model = flax.nn.Model(None, params)
+ values = list(traversal.iterate(model))
+ configs = [
+ (flax.nn.Model(None, params), flax.nn.Model(None, expected_params)),
+ (params, expected_params),
+ (flax.core.FrozenDict(params), flax.core.FrozenDict(expected_params)),
+ ]
+ for model, expected_model in configs:
+ self.assertEqual(values, [1, 3])
+ self.assertEqual(set(names), set([
+ '/x/kernel', '/x/bias', '/x/y/kernel', '/x/y/bias']))
+ new_model = traversal.update(lambda x: x + x, model)
+ self.assertEqual(new_model, expected_model)
+
+
if __name__ == '__main__':
absltest.main()
| Make `ModelParamTraversal` more public?
`ModelParamTraversal` is currently somewhat hidden within `optim`. But it is much more generally useful, for example for implementing weight-decay (not as a loss) or weight standardization or spectral norm (I think).
So it seems like putting it in `traverse_util.py` (where I'd look for it) would make sense.
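As a concrete sketch of the weight-decay use case mentioned above, based on the `ModelParamTraversal` API shown in the patch (illustrative only):
```python
import jax.numpy as jnp
from flax import traverse_util

# Select every parameter whose path contains 'kernel'.
kernels = traverse_util.ModelParamTraversal(lambda path, _: 'kernel' in path)

params = {'dense': {'kernel': jnp.ones((2, 2)), 'bias': jnp.zeros((2,))}}
# Apply weight decay to the selected parameters only.
decayed = kernels.update(lambda w: w * (1.0 - 1e-4), params)
```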
| Sorry for the late reply @lucasb-eyer. This is a good proposal. Would you like to give it a shot and submit a PR? Perhaps we could move it out of optim, but keep backwards compatibility by re-exporting it in `optim`?
Currently super stretched on finishing multiple projects, so won't be able to get to it anytime in the next few weeks, sorry.
I'll give this one a try.
I'm removing myself from this issue again since I didn't find time to work on it, so if anyone would like to give this a try, please go ahead! | 2021-09-07T07:45:06 |
google/flax | 1,570 | google__flax-1570 | [
"1419"
] | 136f41a65c545f204d61db781e6629d3680397c4 | diff --git a/flax/linen/__init__.py b/flax/linen/__init__.py
--- a/flax/linen/__init__.py
+++ b/flax/linen/__init__.py
@@ -19,7 +19,7 @@
# re-export commonly used modules and functions
from .activation import (celu, elu, gelu, glu, leaky_relu, log_sigmoid,
log_softmax, relu, sigmoid, soft_sign, softmax,
- softplus, swish, silu, tanh)
+ softplus, swish, silu, tanh, PReLU)
from .attention import (MultiHeadDotProductAttention, SelfAttention,
dot_product_attention, make_attention_mask,
make_causal_mask, combine_masks)
diff --git a/flax/linen/activation.py b/flax/linen/activation.py
--- a/flax/linen/activation.py
+++ b/flax/linen/activation.py
@@ -40,3 +40,35 @@
from jax.numpy import tanh
# pylint: enable=unused-import
+
+from typing import Any
+
+from flax.linen.module import Module, compact
+import jax.numpy as jnp
+
+
+Array = Any
+
+
+class PReLU(Module):
+ """Parametric Rectified Linear Unit (PReLU) activation function.
+
+ Attributes:
+ negative_slope_init: the value to initialize the negative slope.
+ """
+ negative_slope_init: float = 0.01
+ @compact
+ def __call__(self, inputs: Array) -> Array:
+ """Applies an activation to the inputs.
+
+ Args:
+ inputs: the nd-array to apply the activation function to.
+
+ Returns:
+ The transformed input.
+ """
+ negative_slope = self.param(
+ 'negative_slope',
+ lambda k: jnp.asarray(self.negative_slope_init, jnp.float32)
+ )
+ return jnp.where(inputs >= 0, inputs, jnp.asarray(negative_slope, inputs.dtype) * inputs)
| diff --git a/tests/linen/linen_activation_test.py b/tests/linen/linen_activation_test.py
new file mode 100644
--- /dev/null
+++ b/tests/linen/linen_activation_test.py
@@ -0,0 +1,42 @@
+# Copyright 2021 The Flax Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Tests for flax.nn.activation."""
+
+from absl.testing import absltest
+from absl.testing import parameterized
+
+from flax import linen as nn
+
+import jax
+from jax import random
+import jax.numpy as jnp
+
+
+# Parse absl flags test_srcdir and test_tmpdir.
+jax.config.parse_flags_with_absl()
+
+
+class ActivationTest(parameterized.TestCase):
+
+ def test_prelu(self):
+ rng = random.PRNGKey(0)
+ x = jnp.ones((4, 6, 5))
+ act = nn.PReLU()
+ y, _ = act.init_with_output(rng, x)
+ self.assertEqual(y.shape, x.shape)
+
+
+if __name__ == '__main__':
+ absltest.main()
| PReLU activation implementation
I wanted to gauge interest in adding a PReLU activation. I noticed that `flax.linen.activations` simply alias the `jax.nn` activation functions, which also don't have a PReLU implementation.
To add some background, PReLU is simply Leaky ReLU where the alpha (slope) parameter is trainable and not fixed. This makes it simple to implement as a Module if desired.
Here's an example implementation from another [project](https://github.com/isaaccorley/jax-enhance) of mine.
```python
from functools import partial
from typing import Any, Sequence
import jax.numpy as jnp
import flax.linen as nn
# This is nearly identical to jnp.ones however multiplies the output of jnp.ones by the constant value
def constant(key, shape: Sequence[int], value: Any, dtype: Any = jnp.float32) -> jnp.ndarray:
value = jnp.asarray(value, dtype)
return jnp.ones(shape, dtype) * value
class PReLU(nn.Module):
negative_slope_init: float = 0.01
dtype: Any = jnp.float32
@nn.compact
def __call__(self, x: jnp.ndarray) -> jnp.ndarray:
x = jnp.asarray(x, self.dtype)
negative_slope = self.param(
"negative_slope",
partial(constant, value=self.negative_slope_init, dtype=self.dtype),
(1,)
)
return jnp.where(x >= 0, x, negative_slope * x)
```
| Given that all current activation functions reside in JAX, it seems more fitting to add this to JAX. Do you want to file an issue against their repo?
Thanks for the suggestion. The main reason I filed the issue here was because it seems like PReLU is a special case where it has a trainable param and, if I'm not mistaken, all other jax activations do not.
I'm not sure if this changes your suggestion, but it's something to consider.
@isaaccorley - hey so sorry for the slow feedback on your suggestion here.
2 points:
- instead of defining a constant init func, we can just declare a jnp scalar array of the correct dtype.
- I think an -activation- "function" should strictly follow the dtype of its argument, so no dtype attribute, just derive it from `x`
So what if we added something like this?
```python
class PReLU(nn.Module):
negative_slope_init: float = 0.01
@nn.compact
def __call__(self, x: jnp.ndarray) -> jnp.ndarray:
negative_slope = self.param(
"negative_slope",
lambda k: jnp.array(self.negative_slope_init, x.dtype)
)
return jnp.where(x >= 0, x, negative_slope * x)
```
I'm indifferent on the implementation. I think the only thing to point out would be since we are inheriting from Module and other Modules have a dtype param, should we stray from that standard even though it is an activation function?
I created a constant init func because jax itself seemed to be lacking one, however I haven't received a response to the issue I posted in the jax repo requesting to add it so I'm fine with just using a lambda.
- Other Modules have a dtype param to control the precision of their -intermediate- values, and a simple activation function like this doesn't have intermediates. We don't require modules to surface a `dtype=` attribute - it's just convention for the core layers to do so to give users the ability to control the floating-point types of the "insides"
- The "constant" functions you're looking for already exist: `jnp.full` and `jnp.full_like`
1. Makes sense thanks for clarifying that.
2. Thanks for pointing me to jnp.full. I wasn't aware of that.
Shall I make a PR then?
Yeah if you'd like to make a PR we could add the above to `activations.py` I think (after all the passthrough function imports). (but no pressure - if you don't have time we can add it soon ourselves.)
I'll try to take a first stab at it since it will be my first time contributing to flax. | 2021-09-27T03:20:41 |
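Based on the patch above, usage of the merged `nn.PReLU` would look roughly like this (illustrative sketch):
```python
import jax
import jax.numpy as jnp
import flax.linen as nn

act = nn.PReLU()  # negative_slope_init defaults to 0.01 and becomes a trainable param
x = jnp.array([-1.0, 2.0])
variables = act.init(jax.random.PRNGKey(0), x)
y = act.apply(variables, x)  # negative inputs are scaled by the learned slope
```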
google/flax | 1,661 | google__flax-1661 | [
"971"
] | 6da4a003eae5c6c5c891da0a51fdfd8141a3c3ef | diff --git a/flax/linen/linear.py b/flax/linen/linear.py
--- a/flax/linen/linear.py
+++ b/flax/linen/linear.py
@@ -201,9 +201,9 @@ class Conv(Module):
be a sequence of integers.
strides: an integer or a sequence of `n` integers, representing the
inter-window strides (default: 1).
- padding: either the string `'SAME'`, the string `'VALID'`, or a sequence
- of `n` `(low, high)` integer pairs that give the padding to apply before
- and after each spatial dimension.
+ padding: either the string `'SAME'`, the string `'VALID'`, the string 'CIRCULAR'` (periodic boundary conditions),
+ or a sequence of `n` `(low, high)` integer pairs that give the padding to apply
+ before and after each spatial dimension.
input_dilation: an integer or a sequence of `n` integers, giving the
dilation factor to apply in each spatial dimension of `inputs` (default: 1).
Convolution with input dilation `d` is equivalent to transposed
@@ -282,12 +282,20 @@ def maybe_broadcast(x):
kernel = self.param('kernel', self.kernel_init, kernel_shape)
kernel = jnp.asarray(kernel, self.dtype)
+ if self.padding == 'CIRCULAR':
+ kernel_size_dilated = [(k - 1) * d + 1 for k, d in zip(kernel_size, kernel_dilation)]
+ pads = [(0, 0)] + [((k - 1) // 2, k // 2) for k in kernel_size_dilated] + [(0, 0)]
+ inputs = jnp.pad(inputs, pads, mode='wrap')
+ padding_lax = 'VALID'
+ else:
+ padding_lax = self.padding
+
dimension_numbers = _conv_dimension_numbers(inputs.shape)
y = lax.conv_general_dilated(
inputs,
kernel,
strides,
- self.padding,
+ padding_lax,
lhs_dilation=input_dilation,
rhs_dilation=kernel_dilation,
dimension_numbers=dimension_numbers,
@@ -313,8 +321,8 @@ class ConvTranspose(Module):
be a sequence of integers.
strides: a sequence of `n` integers, representing the inter-window
strides.
- padding: either the string `'SAME'`, the string `'VALID'`, or a sequence
- of `n` `(low, high)` integer pairs that give the padding to apply before
+ padding: either the string `'SAME'`, the string `'VALID'`, the string 'CIRCULAR'` (periodic boundary conditions),
+ or a sequence of `n` `(low, high)` integer pairs that give the padding to apply before
and after each spatial dimension.
kernel_dilation: `None`, or a sequence of `n` integers, giving the
dilation factor to apply in each spatial dimension of the convolution
@@ -372,13 +380,49 @@ def __call__(self, inputs: Array) -> Array:
kernel = self.param('kernel', self.kernel_init, kernel_shape)
kernel = jnp.asarray(kernel, self.dtype)
+ if self.padding == 'CIRCULAR':
+ padding_lax = 'VALID'
+ else:
+ padding_lax = self.padding
+
y = lax.conv_transpose(inputs,
kernel,
strides,
- self.padding,
+ padding_lax,
rhs_dilation=self.kernel_dilation,
precision=self.precision)
+ if self.padding == "CIRCULAR":
+ # For circular padding, we need to identify the size of the final output
+ # ("period") along each spatial dimension, pad each dimension to an
+ # integer number of periods, and wrap the array periodically around each
+ # dimension. Padding should be done in such a way that the start of the
+ # original input data inside the padded array is located at integer
+ # number of periods - otherwise the result would be circularly shifted.
+
+ # Compute period along each spatial dimension - it's input size scaled
+ # by the stride.
+ scaled_x_dims = [
+ x_dim * stride for x_dim, stride in zip(inputs.shape[1:-1], strides)
+ ]
+ # Compute difference between the current size of y and the final output
+ # size, and complement this difference to 2 * period - that gives how
+ # much we need to pad.
+ size_diffs = [
+ -(y_dim - x_dim) % (2 * x_dim)
+ for y_dim, x_dim in zip(y.shape[1:-1], scaled_x_dims)
+ ]
+ # Divide the padding equaly between left and right. The choice to put
+ # "+1" on the left (and not on the right) represents a convention for
+ # aligning even-sized kernels.
+ total_pad = [((size_diff + 1) // 2, size_diff // 2) for size_diff in size_diffs]
+ y = np.pad(y, [(0, 0)] + total_pad + [(0, 0)])
+ # Wrap the result periodically around each spatial dimension,
+ # one by one.
+ for i in range(1, y.ndim - 1):
+ y = y.reshape(y.shape[:i] + (-1, scaled_x_dims[i - 1]) + y.shape[i + 1:])
+ y = y.sum(axis=i)
+
if is_single_input:
y = jnp.squeeze(y, axis=0)
if self.use_bias:
| diff --git a/tests/linen/linen_linear_test.py b/tests/linen/linen_linear_test.py
--- a/tests/linen/linen_linear_test.py
+++ b/tests/linen/linen_linear_test.py
@@ -204,6 +204,174 @@ def test_group_conv(self):
self.assertEqual(initial_params['params']['kernel'].shape, (3, 2, 4))
np.testing.assert_allclose(y, np.full((1, 6, 4), 7.))
+ @parameterized.product(
+ n_batch=(1, 3),
+ n_features=(1, 2),
+ kernel_size=(1, 2, 3, 9),
+ n_input_features=(1, 3),
+ input_size=(1, 8, 16),
+ )
+ def test_circular_conv_1d_constant(
+ self, n_batch, n_features, kernel_size, n_input_features, input_size
+ ):
+ """
+ Test 1D convolution with circular padding: filter with all elements equal to 1
+ applied on an input with all elements equal to 1.
+ Result should have the same shape as input (except for the feature dimension) and
+ have all elements equal to n_input_features * kernel_lin_size
+ """
+ rng = dict(params=random.PRNGKey(0))
+ x = jnp.ones((n_batch, input_size, n_input_features))
+ conv_module = nn.Conv(
+ features=n_features,
+ kernel_size=(kernel_size,),
+ padding='CIRCULAR',
+ kernel_init=initializers.ones,
+ bias_init=initializers.zeros,
+ )
+ y, initial_params = conv_module.init_with_output(rng, x)
+
+ self.assertEqual(
+ initial_params['params']['kernel'].shape,
+ (kernel_size, n_input_features, n_features),
+ )
+ correct_ans = np.full(
+ (n_batch, input_size, n_features), kernel_size * n_input_features
+ )
+ np.testing.assert_allclose(y, correct_ans)
+
+ @parameterized.product(
+ n_batch=(1, 3),
+ n_features=(1, 2, 10),
+ kernel_lin_size=(1, 2, 3, 9),
+ n_input_features=(1, 5),
+ input_x_size=(14,),
+ input_y_size=(5, 10),
+ )
+ def test_circular_conv_2d_constant(
+ self,
+ n_batch,
+ n_features,
+ kernel_lin_size,
+ n_input_features,
+ input_x_size,
+ input_y_size,
+ ):
+ """
+ Test 2D convolution with circular padding: square filter with all elements equal to 1
+ applied on an input with all elements equal to 1.
+ Result should have the same shape as input (except for the feature dimension) and
+ have all elements equal to n_input_features * kernel_lin_size^2
+ """
+ rng = dict(params=random.PRNGKey(0))
+ x = jnp.ones((n_batch, input_x_size, input_y_size, n_input_features))
+ conv_module = nn.Conv(
+ features=n_features,
+ kernel_size=(kernel_lin_size, kernel_lin_size),
+ padding='CIRCULAR',
+ kernel_init=initializers.ones,
+ bias_init=initializers.zeros,
+ )
+ y, initial_params = conv_module.init_with_output(rng, x)
+
+ self.assertEqual(
+ initial_params['params']['kernel'].shape,
+ (kernel_lin_size, kernel_lin_size, n_input_features, n_features),
+ )
+ correct_ans = np.full(
+ (n_batch, input_x_size, input_y_size, n_features),
+ kernel_lin_size * kernel_lin_size * n_input_features,
+ )
+ np.testing.assert_allclose(y, correct_ans)
+
+ def test_circular_conv_1d_custom(self):
+ """
+ Test 1d convolution with circular padding and a stride
+ """
+ rng = dict(params=random.PRNGKey(0))
+ x = np.arange(1, 6)
+ x = np.expand_dims(x, (0, 2))
+ kernel = np.array((1, 2, 1))
+ kernel = np.expand_dims(kernel, (1, 2))
+
+ conv_module = nn.Conv(
+ features=1,
+ kernel_size=(3,),
+ strides=(3,),
+ padding='CIRCULAR',
+ kernel_init=lambda *_: kernel,
+ bias_init=initializers.zeros,
+ )
+ y, initial_params = conv_module.init_with_output(rng, x)
+
+ self.assertEqual(initial_params['params']['kernel'].shape, (3, 1, 1))
+ # Compare with manually computed convolution
+ correct_ans = np.array((5 + 2 * 1 + 2, 3 + 2 * 4 + 5))
+ correct_ans = np.expand_dims(correct_ans, (0, 2))
+ np.testing.assert_allclose(y, correct_ans)
+
+
+ def test_circular_conv_1d_dilation(self):
+ """
+ Test 1d convolution with circular padding and kernel dilation
+ """
+ rng = dict(params=random.PRNGKey(0))
+ x = np.arange(1, 6)
+ x = np.expand_dims(x, (0, 2))
+ kernel = np.array((1, 2, 1))
+ kernel = np.expand_dims(kernel, (1, 2))
+
+ conv_module = nn.Conv(
+ features=1,
+ kernel_size=(3,),
+ padding='CIRCULAR',
+ kernel_init=lambda *_: kernel,
+ bias_init=initializers.zeros,
+ kernel_dilation=(3,)
+ )
+ y, initial_params = conv_module.init_with_output(rng, x)
+
+ self.assertEqual(initial_params['params']['kernel'].shape, (3, 1, 1))
+ # Compare with manually computed convolution
+ correct_ans = np.array((3 + 2 * 1 + 4, 4 + 2 * 2 + 5, 5 + 2 * 3 + 1, 1 + 2 * 4 + 2, 2 + 2 * 5 + 3))
+ correct_ans = np.expand_dims(correct_ans, (0, 2))
+ np.testing.assert_allclose(y, correct_ans)
+
+ def test_circular_conv_2d_custom(self):
+ """
+ Test 2d convolution with circular padding on a 3x3 example
+ """
+ rng = dict(params=random.PRNGKey(0))
+ x = np.array(((1, 2, 3),
+ (4, 5, 6),
+ (7, 8, 9)))
+ x = np.expand_dims(x, (0, 3))
+ kernel = np.array(((0, 1, 0),
+ (1, 2, 1),
+ (0, 1, 0)))
+ kernel = np.expand_dims(kernel, (2, 3))
+
+ conv_module = nn.Conv(
+ features=1,
+ kernel_size=(3, 3),
+ padding='CIRCULAR',
+ kernel_init=lambda *_: kernel,
+ bias_init=initializers.zeros,
+ )
+ y, initial_params = conv_module.init_with_output(rng, x)
+
+ self.assertEqual(initial_params['params']['kernel'].shape, (3, 3, 1, 1))
+ # Compare with manually computed convolution
+ correct_ans = np.array(
+ (
+ (2 * 1 + 7 + 2 + 4 + 3, 2 * 2 + 8 + 3 + 5 + 1, 2 * 3 + 9 + 1 + 6 + 2),
+ (2 * 4 + 1 + 5 + 7 + 6, 2 * 5 + 2 + 6 + 8 + 4, 2 * 6 + 3 + 4 + 9 + 5),
+ (2 * 7 + 4 + 8 + 1 + 9, 2 * 8 + 5 + 9 + 2 + 7, 2 * 9 + 6 + 7 + 3 + 8),
+ )
+ )
+ correct_ans = np.expand_dims(correct_ans, (0, 3))
+ np.testing.assert_allclose(y, correct_ans)
+
def test_conv_transpose(self):
rng = dict(params=random.PRNGKey(0))
x = jnp.ones((1, 8, 3))
@@ -252,6 +420,202 @@ def test_single_input_conv_transpose(self):
[ 4., 4., 4., 4.]])
np.testing.assert_allclose(y, correct_ans)
+ @parameterized.product(
+ n_batch=(1, 3),
+ n_features=(1, 2),
+ kernel_size=(1, 2, 3, 9),
+ n_input_features=(1, 3),
+ input_size=(1, 8, 16),
+ )
+ def test_circular_conv_transpose_1d_constant(
+ self, n_batch, n_features, kernel_size, n_input_features, input_size
+ ):
+ """
+ Test 1D transposed convolution with circular padding: filter with all elements equal to 1
+ applied on an input with all elements equal to 1.
+ Result should have the same shape as input (except for the feature dimension) and
+    have all elements equal to n_input_features * kernel_size
+ """
+ rng = dict(params=random.PRNGKey(0))
+ x = jnp.ones((n_batch, input_size, n_input_features))
+ conv_module = nn.ConvTranspose(
+ features=n_features,
+ kernel_size=(kernel_size,),
+ padding="CIRCULAR",
+ kernel_init=initializers.ones,
+ bias_init=initializers.zeros,
+ )
+ y, initial_params = conv_module.init_with_output(rng, x)
+
+ self.assertEqual(
+ initial_params["params"]["kernel"].shape,
+ (kernel_size, n_input_features, n_features),
+ )
+ correct_ans = np.full(
+ (n_batch, input_size, n_features), kernel_size * n_input_features
+ )
+ np.testing.assert_allclose(y, correct_ans)
+
+ @parameterized.product(
+ n_batch=(1, 3),
+ n_features=(1, 2, 10),
+ kernel_lin_size=(1, 2, 3, 9),
+ n_input_features=(1, 5),
+ input_x_size=(14,),
+ input_y_size=(5, 10),
+ )
+ def test_circular_conv_transpose_2d_constant(
+ self,
+ n_batch,
+ n_features,
+ kernel_lin_size,
+ n_input_features,
+ input_x_size,
+ input_y_size,
+ ):
+ """
+ Test 2D transposed convolution with circular padding: square filter with all elements equal to 1
+ applied on an input with all elements equal to 1.
+ Result should have the same shape as input (except for the feature dimension) and
+ have all elements equal to n_input_features * kernel_lin_size^2
+ """
+ rng = dict(params=random.PRNGKey(0))
+ x = jnp.ones((n_batch, input_x_size, input_y_size, n_input_features))
+ conv_module = nn.ConvTranspose(
+ features=n_features,
+ kernel_size=(kernel_lin_size, kernel_lin_size),
+ padding="CIRCULAR",
+ kernel_init=initializers.ones,
+ bias_init=initializers.zeros,
+ )
+ y, initial_params = conv_module.init_with_output(rng, x)
+
+ self.assertEqual(
+ initial_params["params"]["kernel"].shape,
+ (kernel_lin_size, kernel_lin_size, n_input_features, n_features),
+ )
+ correct_ans = np.full(
+ (n_batch, input_x_size, input_y_size, n_features),
+ kernel_lin_size * kernel_lin_size * n_input_features,
+ )
+ np.testing.assert_allclose(y, correct_ans)
+
+ def test_circular_conv_transpose_1d_custom(self):
+ """
+ Test 1d transposed convolution with circular padding and a stride
+ """
+ rng = dict(params=random.PRNGKey(0))
+ x = np.arange(1, 6)
+ x = np.expand_dims(x, (0, 2))
+ kernel = np.array((1, 2, 1))
+ kernel = np.expand_dims(kernel, (1, 2))
+
+ conv_module = nn.ConvTranspose(
+ features=1,
+ kernel_size=(3,),
+ strides=(3,),
+ padding="CIRCULAR",
+ kernel_init=lambda *_: kernel,
+ bias_init=initializers.zeros,
+ )
+ y, initial_params = conv_module.init_with_output(rng, x)
+
+ self.assertEqual(initial_params["params"]["kernel"].shape, (3, 1, 1))
+ # Compare with manually computed convolution
+ correct_ans = np.array(
+ (1 * 1, 1 * 2, 1 * 1,
+ 2 * 1, 2 * 2, 2 * 1,
+ 3 * 1, 3 * 2, 3 * 1,
+ 4 * 1, 4 * 2, 4 * 1,
+ 5 * 1, 5 * 2, 5 * 1,
+ )
+ )
+ correct_ans = np.expand_dims(correct_ans, (0, 2))
+ np.testing.assert_allclose(y, correct_ans)
+
+ def test_circular_conv_transpose_2d_custom(self):
+ """
+ Test 2d transposed convolution with circular padding on a 3x3 example
+ """
+ rng = dict(params=random.PRNGKey(0))
+ x = np.array(
+ (
+ (1, 2, 3),
+ (4, 5, 6),
+ (7, 8, 9),
+ )
+ )
+ x = np.expand_dims(x, (0, 3))
+ kernel = np.array(
+ (
+ (0, 1, 0),
+ (1, 2, 1),
+ (0, 1, 0)
+ )
+ )
+ kernel = np.expand_dims(kernel, (2, 3))
+
+ conv_module = nn.ConvTranspose(
+ features=1,
+ kernel_size=(3, 3),
+ padding="CIRCULAR",
+ kernel_init=lambda *_: kernel,
+ bias_init=initializers.zeros,
+ )
+ y, initial_params = conv_module.init_with_output(rng, x)
+
+ self.assertEqual(initial_params["params"]["kernel"].shape, (3, 3, 1, 1))
+ # Compare with manually computed convolution
+ correct_ans = np.array(
+ (
+ (18, 21, 24),
+ (27, 30, 33),
+ (36, 39, 42),
+ )
+ )
+ correct_ans = np.expand_dims(correct_ans, (0, 3))
+ np.testing.assert_allclose(y, correct_ans)
+
+ def test_circular_conv_transpose_2d_custom_bias(self):
+ """
+ Test 2d transposed convolution with circular padding on a 2x2 example with bias
+ """
+ rng = dict(params=random.PRNGKey(0))
+ x = np.array(
+ (
+ (1, 2),
+ (3, 4)
+ )
+ )
+ x = np.expand_dims(x, (0, 3))
+ kernel = np.array(
+ (
+ (1, 2),
+ (3, 4),
+ )
+ )
+ kernel = np.expand_dims(kernel, (2, 3))
+
+ conv_module = nn.ConvTranspose(
+ features=1,
+ kernel_size=(2, 2),
+ padding="CIRCULAR",
+ kernel_init=lambda *_: kernel,
+ bias_init=initializers.ones,
+ )
+ y, initial_params = conv_module.init_with_output(rng, x)
+
+ self.assertEqual(initial_params["params"]["kernel"].shape, (2, 2, 1, 1))
+ # Compare with manually computed convolution
+ correct_ans = np.array(
+ (
+ (21, 23),
+ (29, 31),
+ )
+ )
+ correct_ans = np.expand_dims(correct_ans, (0, 3))
+ np.testing.assert_allclose(y, correct_ans)
+
def test_int_kernel_size(self):
conv = nn.Conv(features=4, kernel_size=3)
x = jnp.ones((8, 3))
| Circular padding in convolutional neural networks
### Description of the model to be implemented
In many areas such as physics, it is convenient to have convolutional layers with periodic boundary conditions (e.g. see [netket](https://github.com/netket/netket))
Therefore, it would be nice to add a "CIRCULAR" padding option to convolutional layers, just as they do in [neural-tangents](https://neural-tangents.readthedocs.io/en/latest/_modules/neural_tangents/stax.html#Conv).
### Dataset the model could be trained on
1D or 2D data. Maybe MNIST images.
### Specific points to consider
None in particular. Just as an example, suppose that one has the 1D data [1,2,3,4,5] and one has filters of size 3, and a stride of 3. The idea is then that two filter operations are carried out. The first one will use [1,2,3], and the second one will use [4,5,1].
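For concreteness, the wrap-around windows from this example can be spelled out with plain NumPy (a small illustrative sketch, not tied to any particular framework):

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5])
kernel_size, stride = 3, 3

# Pad periodically so a window that runs past the end wraps back to the start.
x_padded = np.pad(x, (0, kernel_size - 1), mode='wrap')   # [1 2 3 4 5 1 2]
windows = [x_padded[i:i + kernel_size] for i in range(0, len(x), stride)]
# windows -> [array([1, 2, 3]), array([4, 5, 1])]
```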
### Reference implementations in other frameworks
neural-tangents has replaced stax's GeneralConv by a Conv layer, which has this padding option, and further does not require to provide directly the XLA's `dimension_numbers`.
| I think it would be quite nice to add this, since it doesn't seem to complicate the API much (no additional parameters etc). @levskaya what do you think of this proposal? I recall you were involved in a discussion around this before, and I'm curious whether you think it makes sense to add this.
It would be even nicer if the jax conv op would support this out of the box. They already have 'same' and 'valid'.
If this is still relevant, I'd be happy to raise a PR, reusing the code from https://github.com/google/flax/issues/903#issue-789095219 and adding some tests
I'd love it if you do that Grisha!
| 2021-11-05T08:57:47 |
google/flax | 1,691 | google__flax-1691 | [
"1687"
] | 6520a1a6ed2c056222e8d92ccedd3dd0d407a45f | diff --git a/flax/jax_utils.py b/flax/jax_utils.py
--- a/flax/jax_utils.py
+++ b/flax/jax_utils.py
@@ -159,7 +159,7 @@ def enqueue(n): # Enqueues *up to* `n` elements from the iterator.
enqueue(1)
-def _scan_nd(body_fn, init, xs, n=1):
+def _scan_nd(body_fn, init, xs, n=1, unroll=(1,)):
"""Utility for performing an n-dimensional `lax.scan`.
The n-d scan is simply recursive call of 1-d scan.
@@ -172,11 +172,11 @@ def _scan_nd(body_fn, init, xs, n=1):
A tuple of the final carry and the values returned by the body.
"""
if n == 1:
- return lax.scan(body_fn, init, xs)
+ return lax.scan(body_fn, init, xs, unroll=unroll[0])
else:
def scan_body(c, x):
- return _scan_nd(body_fn, c, x, n=n-1)
- return lax.scan(scan_body, init, xs)
+ return _scan_nd(body_fn, c, x, n=n-1, unroll=unroll[1:])
+ return lax.scan(scan_body, init, xs, unroll=unroll[0])
def _invert_perm(perm):
@@ -186,22 +186,38 @@ def _invert_perm(perm):
return tuple(perm_inv)
-def scan_in_dim(body_fn, init, xs, axis=(0,), keepdims=False):
+def scan_in_dim(body_fn, init, xs, axis=(0,), unroll=(1,), keepdims=False):
"""utility for doing a scan along arbitrary dimensions.
- see `lax.scan` for details on how the scan operation works.
+ See `lax.scan` for details on how the scan operation works.
+
+ Note on `unroll`: This argument gets left padded with ones to match the size
+  of `axis`. Doing so allows unrolls to be performed from the innermost loop first.
+ For example, `scan_in_dim(..., axis=(1, 2, 3), unroll=5)` is equivalent to
+ `scan_in_dim(..., axis=(1, 2, 3), unroll=(1, 1, 5))`.
+
Args:
body_fn: the body of the loop of type (c, x) -> (c, y).
init: initial value for the carry.
xs: a pytree of tensors to scan over.
axis: the axis to scan over.
keepdims: keep the dimensions that are scanned over.
+ unroll: an optional positive integer, or tuple of positive integers
+      showing how many iterations of the loop to be unrolled into a single
+ iteration for each axis.
Returns:
A tuple of the final carry and the values returned by the body.
"""
if not isinstance(axis, Iterable):
axis = (axis,)
+ if not isinstance(unroll, Iterable):
+ unroll = (unroll,)
+
+ # Pad unroll with ones so we start unrolling from the innermost loop
+ len_diff = len(axis) - len(unroll)
+ unroll = (1,) * len_diff + unroll
+
def transpose_in(x):
perm = axis + tuple(np.delete(np.arange(x.ndim), axis))
return x.transpose(perm)
@@ -220,6 +236,6 @@ def body_wrapper(c, xs):
return c, ys
xs = jax.tree_map(transpose_in, xs)
- c, ys = _scan_nd(body_wrapper, init, xs, n=len(axis))
+ c, ys = _scan_nd(body_wrapper, init, xs, n=len(axis), unroll=unroll)
ys = jax.tree_map(transpose_out, ys)
return c, ys
| Support `unrolled` steps in `jax_utils.scan_in_dim`
Motivated by [jax#3094](https://github.com/google/jax/issues/3094), [jax#3738](https://github.com/google/jax/pull/3738) and [jax#3076](https://github.com/google/jax/pull/3076), `jax.lax.scan` currently supports specifying the number of scan iterations to unroll into a single iteration of the loop using the argument `unrolls`.
It would be nice to be able to control this from `jax_utils.scan_in_dim`.
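A hypothetical usage sketch of what this could look like (the `unroll` argument mirrors the one in `lax.scan`; the shapes here are made up):

```python
import jax.numpy as jnp
from flax import jax_utils

def body_fn(carry, x):
  return carry + x.sum(), 2. * x

xs = jnp.ones((8, 16, 4))
# Scan over the first two axes, unrolling 4 iterations of the innermost loop
# (equivalent to unroll=(1, 4) here).
carry, ys = jax_utils.scan_in_dim(body_fn, 0., xs, axis=(0, 1), unroll=4)
```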
| 2021-11-30T00:52:21 |
||
google/flax | 1,693 | google__flax-1693 | [
"1671"
] | 6520a1a6ed2c056222e8d92ccedd3dd0d407a45f | diff --git a/flax/optim/weight_norm.py b/flax/optim/weight_norm.py
--- a/flax/optim/weight_norm.py
+++ b/flax/optim/weight_norm.py
@@ -18,24 +18,28 @@
import jax
import jax.numpy as jnp
+from jax import lax
import numpy as np
from .base import OptimizerDef
+Array = Any
+
@struct.dataclass
class _WeightNormHyperParams:
inner: Any
- wn_decay: np.ndarray
- wn_eps: np.ndarray
+ wn_decay: Array
+ wn_eps: Array
@struct.dataclass
class _WeightNormParamState:
direction_state: Any
scale_state: Any
- mult: np.ndarray
+ direction: Array
+ scale: Array
class WeightNorm(OptimizerDef):
@@ -75,8 +79,18 @@ def update_hyper_params(self, **hyper_param_overrides):
return self.hyper_params.replace(inner=inner, wn_decay=decay, wn_eps=eps)
def init_state(self, params):
+ def split_param(param):
+ if param.size > param.shape[-1]:
+ norms = jnp.sqrt(jnp.square(param).sum(
+ tuple(range(param.ndim-1)), keepdims=True) + eps)
+ direction = param / norms
+ return direction, norms
+ else:
+ return param, ()
+
leaves, treedef = jax.tree_flatten(params)
- directions, scales = zip(*(self._split_param(p) for p in leaves))
+ eps = self.hyper_params.wn_eps
+ directions, scales = zip(*(split_param(p) for p in leaves))
directions = treedef.unflatten(directions)
scales = treedef.unflatten(scales)
wn_params = {'direction': directions, 'scale': scales}
@@ -85,71 +99,49 @@ def init_state(self, params):
scale_state = state.param_states['scale']
param_states = jax.tree_multimap(
lambda _, *args: _WeightNormParamState(*args),
- params, direction_state, scale_state, scales)
+ params, direction_state, scale_state, directions, scales)
return state.replace(param_states=param_states)
def apply_gradient(self, hyper_params, params, state, grads):
- p_leaves, treedef = jax.tree_flatten(params)
+ treedef = jax.tree_structure(params)
s_leaves = treedef.flatten_up_to(state.param_states)
- g_leaves = treedef.flatten_up_to(grads)
- split_grads = zip(*(self._split_grad(p, s, g, hyper_params.wn_decay)
- for p, s, g in zip(p_leaves, s_leaves, g_leaves)))
- d_p, d_s, d_g, s_p, s_s, s_g = [
- jax.tree_unflatten(treedef, x) for x in split_grads]
- wn_params = {'direction': d_p, 'scale': s_p}
- wn_state = {'direction': d_s, 'scale': s_s}
- wn_grads = {'direction': d_g, 'scale': s_g}
+ direction = treedef.unflatten(x.direction for x in s_leaves)
+ scale = treedef.unflatten(x.scale for x in s_leaves)
+ dir_state = treedef.unflatten(x.direction_state for x in s_leaves)
+ scale_state = treedef.unflatten(x.scale_state for x in s_leaves)
+ eps = hyper_params.wn_eps
+ decay = hyper_params.wn_decay
+
+ def merge_param(direction, scale):
+ if direction.size > direction.shape[-1]:
+ norm = jnp.square(direction).sum(
+ tuple(range(direction.ndim - 1)), keepdims=True) + eps
+ mult = scale * lax.rsqrt(norm)
+ return direction * mult
+ else:
+ return direction
+ merge_params = lambda d, s: jax.tree_multimap(merge_param, d, s)
+ _, vjp_fn = jax.vjp(merge_params, direction, scale)
+ dir_grad, scale_grad = vjp_fn(grads)
+ def add_decay(direction, dir_grad):
+ if direction.size > direction.shape[-1]:
+ return dir_grad + decay * direction
+ return dir_grad
+ dir_grad = jax.tree_multimap(add_decay, direction, dir_grad)
+
+ wn_params = {'direction': direction, 'scale': scale}
+ wn_state = {'direction': dir_state, 'scale': scale_state}
+ wn_grads = {'direction': dir_grad, 'scale': scale_grad}
new_wn_params, new_state = self.wrapped_optimizer.apply_gradient(
hyper_params.inner, wn_params,
state.replace(param_states=wn_state), wn_grads)
-
- directions = treedef.flatten_up_to(new_wn_params['direction'])
- scales = treedef.flatten_up_to(new_wn_params['scale'])
- new_params, mults = zip(*(self._merge_param(d, s, hyper_params.wn_eps)
- for d, s in zip(directions, scales)))
- new_params = jax.tree_unflatten(treedef, new_params)
- mults = jax.tree_unflatten(treedef, mults)
+ direction = new_wn_params['direction']
+ scale = new_wn_params['scale']
+ new_params = merge_params(direction, scale)
direction_state = new_state.param_states['direction']
scale_state = new_state.param_states['scale']
param_states = jax.tree_multimap(
lambda _, *args: _WeightNormParamState(*args),
- params, direction_state, scale_state, mults)
+ params, direction_state, scale_state, direction, scale)
return new_params, new_state.replace(param_states=param_states)
-
- def _split_param(self, param):
- if param.size > param.shape[-1]:
- scale = jnp.sqrt(jnp.square(param).sum(
- tuple(range(param.ndim-1)), keepdims=True))
- direction = param / scale
- return direction, scale
- else:
- return param, ()
-
- def _merge_param(self, direction, scale, eps):
- if direction.size > direction.shape[-1]:
- norm = jnp.sqrt(jnp.square(direction).sum(
- tuple(range(direction.ndim - 1)), keepdims=True))
- mult = scale / (eps + norm)
- param = direction * mult
- return param, mult
- else:
- return direction, ()
-
- def _split_grad(self, param, state, grad, decay):
- """Split the gradient for the direction and scale."""
- if param.size > param.shape[-1]:
- red_dims = tuple(range(param.ndim-1))
- direction = param / state.mult
- norm = jnp.sqrt(jnp.square(param).sum(red_dims, keepdims=True))
- scale = norm * jnp.sign(state.mult)
- scale_grad = jnp.sum(
- grad * direction, axis=red_dims, keepdims=True)
- direction_grad = state.mult * (grad - scale_grad * direction)
- if decay != 0:
- direction_grad = direction_grad + decay * direction
- direction_info = direction, state.direction_state, direction_grad
- scale_info = scale, state.scale_state, scale_grad
- return direction_info + scale_info
- else:
- return (param, state.direction_state, grad, (), (), ())
| diff --git a/tests/optim_test.py b/tests/optim_test.py
--- a/tests/optim_test.py
+++ b/tests/optim_test.py
@@ -525,14 +525,16 @@ def test_momentum_with_weight_norm(self):
param_states=_WeightNormParamState(
direction_state=_MomentumParamState(momentum=(2, 2)),
scale_state=_MomentumParamState(momentum=(1, 2)),
- mult=(1, 2)
+ direction=(2, 2),
+ scale=(1, 2),
)
))
grads = np.ones((2, 2))
new_params, new_state = optimizer_def.apply_gradient(
optimizer_def.hyper_params, params, state, grads)
np.testing.assert_allclose(new_params, np.full_like(params, 1.9))
- np.testing.assert_allclose(new_state.param_states.mult, 1.9 * 2 ** 0.5)
+ np.testing.assert_allclose(new_state.param_states.direction, np.full_like(params, 2 ** -0.5))
+ np.testing.assert_allclose(new_state.param_states.scale, np.full((1, 2), (2 * 1.9 ** 2) ** 0.5))
class DynamicScaleTest(absltest.TestCase):
| Weight Norm wrapped optimizer returns nan gradients when a row of weights has zero norm
### Problem you have encountered:
WeightNorm wrapped optimizer returns nan gradients when a row of weights has zero norm
### What you expected to happen:
optimizer should return a non-nan number
### Logs, error messages, etc:
These two lines may cause a division-by-zero error:
https://github.com/google/flax/blob/d6a219433ab7a946aa18b416148d7381d65dc5b4/flax/optim/weight_norm.py#L124
https://github.com/google/flax/blob/d6a219433ab7a946aa18b416148d7381d65dc5b4/flax/optim/weight_norm.py#L143
### Steps to reproduce:
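A minimal sketch that triggers the NaNs (the shapes and the wrapped `GradientDescent` optimizer are illustrative assumptions):

```python
import jax.numpy as jnp
from flax import optim

# Kernel whose first column has zero norm along the reduction axes.
params = jnp.zeros((2, 2)).at[:, 1].set(1.)
grads = jnp.ones((2, 2))

optimizer = optim.WeightNorm(optim.GradientDescent(learning_rate=0.1)).create(params)
new_optimizer = optimizer.apply_gradient(grads)
print(jnp.isnan(new_optimizer.target).any())  # True: the zero-norm column produces NaN updates
```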
| 2021-11-30T14:19:32 |
|
google/flax | 1,703 | google__flax-1703 | [
"1702"
] | d10eda85791d5cb9029cf431aae10c7032c2ea8b | diff --git a/flax/linen/partitioning.py b/flax/linen/partitioning.py
--- a/flax/linen/partitioning.py
+++ b/flax/linen/partitioning.py
@@ -164,7 +164,8 @@ def logical_to_mesh_axes(array_dim_names: Sequence[str],
if rule_model_name in array_dim_names:
pos = array_dim_names.index(rule_model_name)
if rule_mesh_name is None or rule_mesh_name in result:
- result[pos] = None
+ if result[pos] == _unassigned_axis:
+ result[pos] = None
else:
result[pos] = result[pos] or rule_mesh_name
if _unassigned_axis in result:
| diff --git a/tests/linen/partitioning_test.py b/tests/linen/partitioning_test.py
--- a/tests/linen/partitioning_test.py
+++ b/tests/linen/partitioning_test.py
@@ -72,6 +72,16 @@ def test_logical_to_mesh_axes(self):
with partitioning.axis_rules(AXIS_RULES_1):
with self.assertRaises(ValueError):
partitioning.logical_to_mesh_axes(('foo', 'foo', 'baz'))
+ def test_logical_to_mesh_axes_overrides(self):
+ p_rules = (
+ ('baz', 'data'),
+ ('bar', None),
+ ('foo', 'model'),
+ ('foo', 'data'))
+ with partitioning.axis_rules(p_rules):
+ self.assertEqual(
+ partitioning.logical_to_mesh_axes(('baz', 'bar', 'foo')),
+ ('data', None, 'model'))
def test_logical_to_mesh_axes_priorities(self):
p_rules = (
| logical_to_mesh_axes does not process rules with repeated array dim names correctly.
### Problem you have encountered:
The current implementation of the logical_to_mesh_axes function results in incorrect annotation propagation in case logical_axis_rules has more than one entry for a logical axis. For example:
logical_axis_rules = (('batch', 'data'),
('vocab', 'model'),
('mlp', 'model'),
('heads', 'model'),
('joined_kv', None),
('kv', None),
('embed', 'model'),
('embed', 'data'),
('relpos_buckets', None),
('length', None),
('layers', None),
('stack', None),
)
should annotate the following tensor:
y = with_sharding_constraint(y, ('batch', 'length', 'embed'))
to
axis_resources=<partitions=((\'data\',), (), (\'model\',)) in the resulting pre optimization HLO.
However with the current fuction it results in:
axis_resources=<partitions=((\'data\',), (), ()).
### Steps to reproduce:
logical_axis_rules, sharding constraints mentioned above should suffice to repro the issue.
The issue seems to be at:
https://github.com/google/flax/blob/d10eda85791d5cb9029cf431aae10c7032c2ea8b/flax/linen/partitioning.py#L166
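A reduced sketch of the mismatch (mirroring the unit test added in the patch above; the rule names are simplified):

```python
from flax.linen import partitioning

rules = (('batch', 'data'),
         ('embed', 'model'),
         ('embed', 'data'),
         ('length', None))
with partitioning.axis_rules(rules):
  mesh_axes = partitioning.logical_to_mesh_axes(('batch', 'length', 'embed'))
# Expected ('data', None, 'model'); before the fix the second 'embed' rule
# clobbered the already-assigned 'model' axis, leaving 'embed' unpartitioned.
```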
| 2021-12-08T06:44:13 |
|
google/flax | 1,738 | google__flax-1738 | [
"1738"
] | 3e9c8f5f40bec345710b0549298c8dbf10127d42 | diff --git a/flax/core/lift.py b/flax/core/lift.py
--- a/flax/core/lift.py
+++ b/flax/core/lift.py
@@ -719,70 +719,100 @@ def scanned(broadcast_vars, carry, scan_variable_groups, rng_groups, args):
name='scan')
-def custom_vjp(fn: Callable[..., Any], backward_fn: Callable[..., Any],
- grad_kind: CollectionFilter = 'params',
+def custom_vjp(fn: Callable[..., Any],
+ forward_fn: Callable[..., Any],
+ backward_fn: Callable[..., Any],
+ grad_vars: CollectionFilter = 'params',
nondiff_argnums=()):
- """"Lifted version of `jax.custom_vjp`.
+ """Lifted version of `jax.custom_vjp`.
- `backward_fn` defines a custom vjp (backward gradient) for `fn`.
+ `forward_fn` and `backward_fn` together define a custom vjp for `fn`.
+ The original `fn` will run in case a vjp (backward gradient) is not computed.
+
+ The `forward_fn` receives the same arguments as `fn` but is expected to return
+ a tuple containing the output of `fn(scope, *args)` and the residuals that are
+ passed to `backward_fn`.
+
+ The `backward_fn` receives the nondiff arguments, residuals, and the output tangents.
+ It should return a tuple containing the input and variable tangents.
+
+ Note that the vjp function returned by `lift.vjp` can be passed as residual and
+ used in the `backward_fn`. The scope is unavailable during the backward pass.
+ If the scope is required in `backward_fn`, a snapshot of the variables can be
+ taken and returned as a residual in the `forward_fn`.
Example::
+ f = nn.dense
+
def fwd(scope, x, features):
- y = nn.dense(scope, x, features)
- return y, x
+ y, vjp_fn = lift.vjp(partial(f, features=features), scope, x)
+ return y, vjp_fn
- def bwd(features, scope_fn, params, res, g):
- x = res
- fn = lambda params, x: nn.dense(scope_fn(params), x, features)
- _, pullback = jax.vjp(fn, params, x)
- g_param, g_x = pullback(g)
- g_param = jax.tree_map(jnp.sign, g_param)
- return g_param, g_x
+ def bwd(features, vjp_fn, y_t):
+ input_t, params_t = vjp_fn(y_t)
+ params_t = jax.tree_map(jnp.sign, params_t)
+ return input_t, params_t
- dense_sign_grad = lift.custom_vjp(fwd, backward_fn=bwd, nondiff_argnums=(2,))
+ dense_sign_grad = lift.custom_vjp(
+ f, forward_fn=fwd, backward_fn=bwd, nondiff_argnums=(2,))
Args:
- fn: should return a tuple of output and auxiliary data for the backward pass.
- backward_fn: arguments are passed as (*nondiff_args, scope_fn, grad_variables, aux, g_y)
- where scope_fn takes grad_variables to create the scope,
- aux is the auxiliary data returned by `fn`,
- and g_y is the tangent of y.
+ fn: The function to define a custom_vjp for. The first argument
+ should be a ``Module`` instance.
+    forward_fn: A function with the same arguments as `fn` returning a tuple
+      with the original output and the residuals that will be passed to
+ `backward_fn`.
+ backward_fn: arguments are passed as (*nondiff_args, residuals, tangents)
+ The function should return a tuple containing the tangents for the
+ input arguments (except the scope and nondiff args) and the variable
+ tangents for the collections specified by `grad_vars`.
+ grad_vars: The collections for which a vjp will be computed
+ (default: "params").
+ nondiff_argnums: arguments for which no vjp is computed.
+ Returns:
+ A function with the same signature as `fn` with the custom vjp.
"""
- # TODO(jheek) is this transform general/flexible enough?
def inner(scope_fn, repack_fn, variable_groups, rng_groups, *args):
grad_variables, other_variables = variable_groups
-
- def simple_scope_fn(grad_variables):
- grad_variables = tuple(freeze(x) for x in grad_variables)
- return scope_fn((grad_variables, other_variables), rng_groups)
+ scopes_treedef = None
def f(grad_variables, *args):
scope = scope_fn((grad_variables, other_variables), rng_groups)
- y, _ = fn(scope, *args)
+ y = fn(scope, *args)
vars_out = repack_fn(scope)
return y, vars_out
f = jax.custom_vjp(f, nondiff_argnums=nondiff_argnums)
def f_fwd(grad_variables, *args):
- scope = simple_scope_fn(grad_variables)
- y, res = fn(scope, *args)
- vars_out = repack_fn(scope)
- return (y, vars_out), (res, grad_variables)
+ nonlocal scopes_treedef
+ scopes = scope_fn((grad_variables, other_variables), rng_groups)
+ scopes_treedef = jax.tree_structure(scopes)
+ y, res = forward_fn(scopes, *args)
+ vars_out = repack_fn(scopes)
+ return (y, vars_out), res
def f_bwd(*args):
+ # the backward function does not pass a lifted scope
+ # to the user. Currently, there is no way to have
+      # side effects flow out of the backward pass.
+      # Even without mutation, variables would be ill-defined.
+ # For example, would we take a snapshot of the variables
+ # before or after calling `forward_fn`?
nondiff_args = args[:-2]
res, g = args[-2:]
g_y, _ = g
- user_res, grad_variables = res
- return backward_fn(*nondiff_args, simple_scope_fn, grad_variables, user_res, g_y)
+ input_t, var_t = backward_fn(*nondiff_args, res, g_y)
+ assert scopes_treedef is not None, 'backward called before forward?!'
+ var_t = tuple(scopes_treedef.flatten_up_to(var_t))
+ return var_t, input_t
f.defvjp(f_fwd, f_bwd)
return f(grad_variables, *args)
- variable_in_groups = (grad_kind, True,)
- variable_out_groups = (grad_kind, True,)
+ variable_in_groups = (grad_vars, True)
+ variable_out_groups = (grad_vars, True)
rng_groups = (True,)
return pack(
inner, variable_in_groups, variable_out_groups, rng_groups,
diff --git a/flax/linen/__init__.py b/flax/linen/__init__.py
--- a/flax/linen/__init__.py
+++ b/flax/linen/__init__.py
@@ -32,7 +32,7 @@
from .pooling import avg_pool, max_pool
from .recurrent import GRUCell, LSTMCell, ConvLSTM, OptimizedLSTMCell
from .stochastic import Dropout
-from .transforms import jit, named_call, checkpoint, remat, remat_scan, scan, vmap, map_variables, vjp, jvp
+from .transforms import jit, named_call, checkpoint, remat, remat_scan, scan, vmap, map_variables, vjp, jvp, custom_vjp
from .initializers import zeros, ones
# pylint: enable=g-multiple-import
diff --git a/flax/linen/transforms.py b/flax/linen/transforms.py
--- a/flax/linen/transforms.py
+++ b/flax/linen/transforms.py
@@ -865,6 +865,93 @@ def f(scope, x):
rngs=rngs)
+# a version of lift.custom_vjp with a single scope function
+# this avoids having to lift multiple functions in
+# lift_transform.
+def _custom_vjp_single_scope_fn(
+ fn: Callable[..., Any],
+ backward_fn: Callable[..., Any],
+ grad_vars: lift.CollectionFilter = 'params',
+ nondiff_argnums=()):
+ nodiff_fn = functools.partial(fn, needs_residual=False)
+ forward_fn = functools.partial(fn, needs_residual=True)
+ return lift.custom_vjp(
+ nodiff_fn, forward_fn, backward_fn,
+ grad_vars, nondiff_argnums)
+
+
+def custom_vjp(fn: Callable[..., Any],
+ forward_fn: Callable[..., Any],
+ backward_fn: Callable[..., Any],
+ grad_vars: lift.CollectionFilter = 'params',
+ nondiff_argnums=()):
+ """Lifted version of `jax.custom_vjp`.
+
+ `forward_fn` and `backward_fn` together define a custom vjp for `fn`.
+ The original `fn` will run in case a vjp (backward gradient) is not computed.
+
+ The `forward_fn` receives the same arguments as `fn` but is expected to return
+ a tuple containing the output of `fn(mdl, *args)` and the residuals that are
+ passed to `backward_fn`.
+
+ The `backward_fn` receives the nondiff arguments, residuals, and the output
+ tangents. It should return a tuple containing the input and variable tangents.
+
+ Note that the vjp function returned by `nn.vjp` can be passed as residual and
+ used in the `backward_fn`. The scope is unavailable during the backward pass.
+ If the module is required in `backward_fn`, a snapshot of the variables can
+ be taken and returned as a residual in the `forward_fn`.
+
+ Example::
+
+ class Foo(nn.Module):
+ @nn.compact
+ def __call__(self, x):
+ def f(mdl, x):
+ return mdl(x)
+
+ def fwd(mdl, x):
+ return nn.vjp(f, mdl, x)
+
+ def bwd(vjp_fn, y_t):
+ input_t, params_t = vjp_fn(y_t)
+ params_t = jax.tree_map(jnp.sign, params_t)
+ return input_t, params_t
+
+ sign_grad = nn.custom_vjp(
+ f, forward_fn=fwd, backward_fn=bwd)
+ return sign_grad(nn.Dense(1), x).reshape(())
+
+ x = jnp.ones((2,))
+ variables = Foo().init(random.PRNGKey(0), x)
+ grad = jax.grad(Foo().apply)(variables, x)
+
+ Args:
+ fn: The function to define a custom_vjp for.
+    forward_fn: A function with the same arguments as `fn` returning a tuple
+      with the original output and the residuals that will be passed to
+ `backward_fn`.
+ backward_fn: arguments are passed as (*nondiff_args, residuals, tangents)
+ The function should return a tuple containing the tangents for the
+ input arguments (except the module and nondiff args) and the variable
+ tangents for the collections specified by `grad_vars`.
+ grad_vars: The collections for which a vjp will be computed
+ (default: "params").
+ nondiff_argnums: arguments for which no vjp is computed.
+ Returns:
+ A function with the same signature as `fn` with the custom vjp.
+ """
+ def shared_forward_fn(*args, needs_residual, **kwargs):
+ if needs_residual:
+ return forward_fn(*args, **kwargs)
+ else:
+ return fn(*args, ** kwargs)
+ return decorator_lift_transform(
+ _custom_vjp_single_scope_fn, shared_forward_fn,
+ backward_fn=backward_fn, grad_vars=grad_vars,
+ nondiff_argnums=nondiff_argnums,
+ multi_scope=False)
+
# Special case of decorator_lift_transform to handle named calls for profiling.
def named_call(class_fn, force=True):
| diff --git a/tests/core/design/core_custom_vjp_test.py b/tests/core/design/core_custom_vjp_test.py
--- a/tests/core/design/core_custom_vjp_test.py
+++ b/tests/core/design/core_custom_vjp_test.py
@@ -14,6 +14,7 @@
from typing import Sequence, Callable
+from functools import partial
from absl.testing import absltest
@@ -29,20 +30,21 @@ def mlp_custom_grad(scope: Scope, x: Array,
sizes: Sequence[int] = (8, 1),
act_fn: Callable[[Array], Array] = nn.relu):
+ f = nn.dense
+
def fwd(scope, x, features):
- y = nn.dense(scope, x, features)
- return y, x
+ y, vjp_fn = lift.vjp(partial(f, features=features), scope, x)
+ return y, vjp_fn
- def bwd(features, scope_fn, params, res, g):
- x = res
- fn = lambda params, x: nn.dense(scope_fn(params), x, features)
- _, pullback = jax.vjp(fn, params, x)
- g_param, g_x = pullback(g)
- g_param = jax.tree_map(jnp.sign, g_param)
- return g_param, g_x
+ def bwd(features, res, y_t):
+ del features
+ vjp_fn = res
+ input_t, params_t = vjp_fn(y_t)
+ params_t = jax.tree_map(jnp.sign, params_t)
+ return input_t, params_t
dense_custom_grad = lift.custom_vjp(
- fwd, backward_fn=bwd, nondiff_argnums=(2,))
+ f, forward_fn=fwd, backward_fn=bwd, nondiff_argnums=(2,))
# hidden layers
for size in sizes[:-1]:
diff --git a/tests/linen/linen_transforms_test.py b/tests/linen/linen_transforms_test.py
--- a/tests/linen/linen_transforms_test.py
+++ b/tests/linen/linen_transforms_test.py
@@ -1124,6 +1124,33 @@ def __call__(self, x):
np.testing.assert_array_equal(vs_new['muts']['b']['outer_c']['v'],
jnp.array([1.], jnp.float32))
+ def test_custom_vjp(self):
+
+ class Foo(nn.Module):
+ @nn.compact
+ def __call__(self, x):
+ def f(mdl, x):
+ return mdl(x)
+
+ def fwd(mdl, x):
+ return nn.vjp(f, mdl, x)
+
+ def bwd(vjp_fn, y_t):
+ input_t, params_t = vjp_fn(y_t)
+ params_t = jax.tree_map(jnp.sign, params_t)
+ return input_t, params_t
+
+ sign_grad = nn.custom_vjp(
+ f, forward_fn=fwd, backward_fn=bwd)
+ return sign_grad(nn.Dense(1), x).reshape(())
+ x = jnp.ones((2,))
+ variables = Foo().init(random.PRNGKey(0), x)
+ grad = jax.grad(Foo().apply)(variables, x)
+ for grad_leaf in jax.tree_leaves(grad):
+ self.assertTrue(jnp.all(jnp.abs(grad_leaf) == 1.))
+
+
+
if __name__ == '__main__':
absltest.main()
| Implement custom vjp
1. refactor lift.custom_vjp so the backward pass is well-defined
2. add custom_vjp to linen transforms
Fixes #1738
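A usage sketch of the lifted transform, mirroring the docstring example and unit test in the patch above:

```python
import jax
import jax.numpy as jnp
from jax import random
import flax.linen as nn

class Foo(nn.Module):
  @nn.compact
  def __call__(self, x):
    def f(mdl, x):
      return mdl(x)

    def fwd(mdl, x):
      # Run the forward pass and keep the vjp function as the residual.
      return nn.vjp(f, mdl, x)

    def bwd(vjp_fn, y_t):
      input_t, params_t = vjp_fn(y_t)
      # Replace the parameter cotangents by their sign.
      params_t = jax.tree_map(jnp.sign, params_t)
      return input_t, params_t

    sign_grad = nn.custom_vjp(f, forward_fn=fwd, backward_fn=bwd)
    return sign_grad(nn.Dense(1), x).reshape(())

x = jnp.ones((2,))
variables = Foo().init(random.PRNGKey(0), x)
grads = jax.grad(Foo().apply)(variables, x)  # every parameter gradient leaf is ±1
```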
| 2021-12-21T12:47:19 |
|
google/flax | 1,878 | google__flax-1878 | [
"1768"
] | aad0be1e9b20e3a571c9a6d7814bda7a9951ba5c | diff --git a/flax/core/scope.py b/flax/core/scope.py
--- a/flax/core/scope.py
+++ b/flax/core/scope.py
@@ -769,7 +769,7 @@ def bind(variables: VariableDict,
across the JAX software ecosystem.
"""
if not _is_valid_variables(variables):
- raise errors.ApplyScopeInvalidVariablesError()
+ raise errors.ApplyScopeInvalidVariablesTypeError()
if rngs is not None and not _is_valid_rngs(rngs):
raise errors.InvalidRngError(
'rngs should be a dictionary mapping strings to `jax.PRNGKey`.')
@@ -794,6 +794,12 @@ def wrapper(variables: VariableDict,
*args,
rngs: Optional[RNGSequences] = None,
**kwargs) -> Union[Any, Tuple[Any, VariableDict]]:
+ # Try to detect if user accidentally passed {'params': {'params': ...}.
+ if 'params' in variables and isinstance(
+ variables['params'],
+ (dict, FrozenDict)) and 'params' in variables['params']:
+ raise errors.ApplyScopeInvalidVariablesStructureError(variables)
+
with bind(variables, rngs=rngs, mutable=mutable).temporary() as root:
y = fn(root, *args, **kwargs)
if mutable is not False:
diff --git a/flax/errors.py b/flax/errors.py
--- a/flax/errors.py
+++ b/flax/errors.py
@@ -122,7 +122,7 @@ def __init__(self, msg):
super().__init__(msg)
-class ApplyScopeInvalidVariablesError(FlaxError):
+class ApplyScopeInvalidVariablesTypeError(FlaxError):
"""
When calling :meth:`Module.apply() <flax.linen.Module.apply>`, the first
argument should be a variable dict. For more explanation on variable dicts,
@@ -134,6 +134,18 @@ def __init__(self):
'dictionary with string keys.')
+class ApplyScopeInvalidVariablesStructureError(FlaxError):
+ """
+ This error is thrown when the dict passed as `variables` to apply() has an
+ extra 'params' layer, i.e. {'params': {'params': ...}}.
+ For more explanation on variable dicts, please see :mod:`flax.core.variables`.
+ """
+ def __init__(self, variables):
+    super().__init__('Expected the first argument passed to an apply function '
+                     'to be a dictionary containing a \'params\' key at the '
+                     f'root level, but got "{variables}".')
+
+
class ScopeParamNotFoundError(FlaxError):
"""
This error is thrown when trying to access a parameter that does not exist.
@@ -176,7 +188,7 @@ class ScopeCollectionNotFound(FlaxError):
def __init__(self, col_name, var_name, scope_path):
super().__init__(
f'Tried to access "{var_name}" from collection "{col_name}"" in '
- f'"{scope_path}" but the collection is emtpy.')
+ f'"{scope_path}" but the collection is empty.')
class ScopeParamShapeError(FlaxError):
| diff --git a/tests/core/core_scope_test.py b/tests/core/core_scope_test.py
--- a/tests/core/core_scope_test.py
+++ b/tests/core/core_scope_test.py
@@ -111,6 +111,21 @@ def f(scope):
with self.assertRaisesRegex(errors.ScopeParamShapeError, msg):
apply(f)(freeze({'params': {'test': np.ones((2,))}}))
+ def test_apply_variables_bad_pytree(self):
+ def f(scope):
+ scope.param('kernel', nn.initializers.ones, (4,))
+
+ params = freeze({
+ 'params': {
+ 'kernel': np.ones((4,)),
+ },
+ })
+ apply(f)(params) # Valid.
+ msg = 'dictionary containing a \'params\' key at the root level'
+ with self.assertRaisesRegex(errors.ApplyScopeInvalidVariablesStructureError,
+ msg):
+ apply(f)({'params': params})
+
def test_mutate_undefined_collection(self):
def f(scope):
scope.put_variable('state', 'test', 123)
| flax.errors.ScopeParamNotFoundError: No parameter named "kernel" exists in "/MLP_0/Dense_0" when attempting to use Jax2TF with a pre-trained JAX NeRF Model
**Redirected from the JAX repo (https://github.com/google/jax/issues/9139#issue-1096888310)**
Tensorflow vers: 2.7; JAX vers: 0.2.24; jaxlib vers: 0.1.72+cuda111; FLAX vers: 0.3.6
The following code is based on the MNIST FLAX jax2tf example, which I adapted for JAX NeRF:
```python
import collections
from os import path
from absl import app
from absl import flags
from flax.training import checkpoints
from jax import random
from jax.experimental.jax2tf.examples import saved_model_lib
from nerf import models
from nerf import utils
import tensorflow as tf
FLAGS = flags.FLAGS
utils.define_flags()
def main(unused_argv):
rng = random.PRNGKey(20200823)
rng, key = random.split(rng)
utils.update_flags(FLAGS)
utils.check_flags(FLAGS)
model, state = models.get_model_state(key, FLAGS, restore=False)
print('Loading model')
state = checkpoints.restore_checkpoint(FLAGS.train_dir, state)
params = state.optimizer.target
predict_fn = lambda params, input: model.apply({"params": params}, input)
Rays = collections.namedtuple("Rays", ("origins", "directions", "viewdirs"))
input_signatures = [Rays(origins=tf.TensorSpec((3,),tf.float32),directions=tf.TensorSpec((3,),tf.float32),viewdirs=tf.TensorSpec((3,),tf.float32))]
saved_model_lib.convert_and_save_model(
predict_fn,
params,
'/any/path/',
input_signatures=input_signatures)
if __name__ == "__main__":
app.run(main)
```
In order to simplify the inputs to the network, and since I am only interested in running inference in TF, I initialize the RNG keys and `randomized` NeRF model inputs to `None` and `False` respectively, so that only the `rays` are inputted. This is the only change over the original JAX NeRF code:
```python
def __call__(self, rays, rng_0 = None, rng_1=None, randomized=False, depth_gt = None, rgb_only = False,depth_sampling = False):
"""Nerf Model.
Args:
rng_0: jnp.ndarray, random number generator for coarse model sampling.
rng_1: jnp.ndarray, random number generator for fine model sampling.
rays: util.Rays, a namedtuple of ray origins, directions, and viewdirs.
randomized: bool, use randomized stratified sampling.
rgb_only: bool, return only rgb
Returns:
ret: list, [(rgb_coarse, disp_coarse, acc_coarse), (rgb, disp, acc)]
"""
# Stratified sampling along rays
if (randomized):
key, rng_0 = random.split(rng_0)
else:
key = None
```
(also, every call to `model.apply()` has its args order inverted to match this)
The error is raised when attempting to compute the TF graph in this line of 'saved_model_lib.py':
```python
tf_graph = tf.function(lambda inputs: tf_fn(param_vars, inputs),
autograph=False,
experimental_compile=compile_model)
```
Full error stack:
```
Traceback (most recent call last):
File "/home/jorge/anaconda3/envs/jaxnerf/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/jorge/anaconda3/envs/jaxnerf/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/jorge/jaxnerf/nerf/save_jax_as_tf.py", line 45, in <module>
app.run(main)
File "/home/jorge/anaconda3/envs/jaxnerf/lib/python3.8/site-packages/absl/app.py", line 312, in run
_run_main(main, args)
File "/home/jorge/anaconda3/envs/jaxnerf/lib/python3.8/site-packages/absl/app.py", line 258, in _run_main
sys.exit(main(argv))
File "/home/jorge/jaxnerf/nerf/save_jax_as_tf.py", line 38, in main
saved_model_lib.convert_and_save_model(
File "/home/jorge/anaconda3/envs/jaxnerf/lib/python3.8/site-packages/jax/experimental/jax2tf/examples/saved_model_lib.py", line 114, in convert_and_save_model
tf_graph.get_concrete_function(input_signatures[0])
File "/home/jorge/anaconda3/envs/jaxnerf/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 1259, in get_concrete_function
concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
File "/home/jorge/anaconda3/envs/jaxnerf/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 1239, in _get_concrete_function_garbage_collected
self._initialize(args, kwargs, add_initializers_to=initializers)
File "/home/jorge/anaconda3/envs/jaxnerf/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 780, in _initialize
self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
File "/home/jorge/anaconda3/envs/jaxnerf/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3157, in _get_concrete_function_internal_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
File "/home/jorge/anaconda3/envs/jaxnerf/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3557, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/home/jorge/anaconda3/envs/jaxnerf/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3392, in _create_graph_function
func_graph_module.func_graph_from_py_func(
File "/home/jorge/anaconda3/envs/jaxnerf/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 1143, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/home/jorge/anaconda3/envs/jaxnerf/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 672, in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/home/jorge/anaconda3/envs/jaxnerf/lib/python3.8/site-packages/jax/experimental/jax2tf/examples/saved_model_lib.py", line 107, in <lambda>
tf_graph = tf.function(lambda inputs: tf_fn(param_vars, inputs),
File "/home/jorge/anaconda3/envs/jaxnerf/lib/python3.8/site-packages/jax/experimental/jax2tf/jax2tf.py", line 418, in converted_fun
out_with_avals = _interpret_fun(flat_fun, args_flat, args_avals_flat,
File "/home/jorge/anaconda3/envs/jaxnerf/lib/python3.8/site-packages/jax/experimental/jax2tf/jax2tf.py", line 486, in _interpret_fun
fun.call_wrapped(*in_vals)
File "/home/jorge/anaconda3/envs/jaxnerf/lib/python3.8/site-packages/jax/linear_util.py", line 166, in call_wrapped
ans = self.f(*args, **dict(self.params, **kwargs))
File "/home/jorge/anaconda3/envs/jaxnerf/lib/python3.8/site-packages/jax/experimental/jax2tf/jax2tf.py", line 272, in fun_no_kwargs
return fun(*args, **kwargs)
File "/home/jorge/jaxnerf/nerf/save_jax_as_tf.py", line 35, in <lambda>
predict_fn = lambda params, input: model.apply({"params": params}, input)
File "/home/jorge/jaxnerf/nerf/nerf/models.py", line 268, in __call__
raw_rgb, raw_sigma = self.MLP_0(samples_enc)
File "/home/jorge/jaxnerf/nerf_sh/nerf/model_utils.py", line 70, in __call__
x = dense_layer(self.net_width)(x)
File "/home/jorge/anaconda3/envs/jaxnerf/lib/python3.8/site-packages/flax/linen/linear.py", line 171, in __call__
kernel = self.param('kernel',
flax.errors.ScopeParamNotFoundError: No parameter named "kernel" exists in "/MLP_0/Dense_0". (https://flax.readthedocs.io/en/latest/flax.errors.html#flax.errors.ScopeParamNotFoundError)
```
Has anyone else attempted to save a JAX NeRF model using jax2tf and encountered any such issue?
This looks like the error that happens when the top-level "packaging" of the `params` pytree is off slightly - e.g. passing in the bare `params` tree or `{'params': {'params': params}}` rather than `{'params': params}`... (I say this as we're getting an error here at what looks like the very first parameter lookup into the pytree.)
Can you add these imports
```python
import jax
from jax import numpy as jnp
```
and before the final call to `saved_model_lib.convert_and_save_model` can you add a line:
```python
print(jax.tree_map(jnp.shape, params))
```
so we can check the pytree structure of what you're passing in?
I tried setting up a repro quickly, but I'm seeing `nerf.models.get_model_state` which is a function that doesn't exist in JAX Nerf at https://github.com/google-research/google-research/blob/master/jaxnerf/nerf/models.py -- what code are you actually using here? More info needed for me to look into this.
Hi, @levskaya, thanks for your prompt response! I did as you said and this is what I am getting:
```
FrozenDict({
params: {
MLP_0: {
Dense_0: {
bias: (256,),
kernel: (63, 256),
},
Dense_1: {
bias: (256,),
kernel: (256, 256),
},
Dense_2: {
bias: (256,),
kernel: (256, 256),
},
Dense_3: {
bias: (256,),
kernel: (256, 256),
},
Dense_4: {
bias: (256,),
kernel: (256, 256),
},
Dense_5: {
bias: (256,),
kernel: (319, 256),
},
Dense_6: {
bias: (256,),
kernel: (256, 256),
},
Dense_7: {
bias: (256,),
kernel: (256, 256),
},
Dense_8: {
bias: (1,),
kernel: (256, 1),
},
Dense_9: {
bias: (3,),
kernel: (256, 3),
},
},
MLP_1: {
Dense_0: {
bias: (256,),
kernel: (63, 256),
},
Dense_1: {
bias: (256,),
kernel: (256, 256),
},
Dense_2: {
bias: (256,),
kernel: (256, 256),
},
Dense_3: {
bias: (256,),
kernel: (256, 256),
},
Dense_4: {
bias: (256,),
kernel: (256, 256),
},
Dense_5: {
bias: (256,),
kernel: (319, 256),
},
Dense_6: {
bias: (256,),
kernel: (256, 256),
},
Dense_7: {
bias: (256,),
kernel: (256, 256),
},
Dense_8: {
bias: (1,),
kernel: (256, 1),
},
Dense_9: {
bias: (3,),
kernel: (256, 3),
},
},
},
})
```
Also, as you pointed out `nerf.models.get_model_state` is not an existing function in JAX NeRF; it's just a small helper, sorry I didn't include it in the first place. It's this function:
```python
def get_model_state(key, args, restore=True):
"""
Helper for loading model with get_model & creating optimizer &
optionally restoring checkpoint to reduce boilerplate
"""
model, variables = get_model(key, args)
optimizer = flax.optim.Adam(args.lr_init).create(variables)
state = utils.TrainState(optimizer=optimizer)
if restore:
from flax.training import checkpoints
state = checkpoints.restore_checkpoint(args.train_dir, state)
return model, state
```
You should be able to reproduce with this. I am using a slightly different code to JAX NeRF but I was able to reproduce with this and their code.
Just in case it helps, I will add some more info about my setup:
Ubuntu 20.04 on WSL2
RTX 3080 (CUDA 11.2 CUDDN 8.1.1 NVIDIA driver 510.06)
Tensorflow 2.7
Jaxlib 0.1.74+cuda11.cudnn805 (upgraded a few hours ago but same result)
Jax 0.2.26
Flax 0.3.6
**EDIT:** I was also able to reproduce it in the following setup:
Native Ubuntu 18.04
RTX 2080ti (CUDA 10.1 CUDDN 7.6.5 NVIDIA driver 418.87.00)
Tensorflow 2.3.1
Jaxlib 0.1.72+cuda111
Jax 0.2.26
Flax 0.3.6
And what about `utils.check_flags(FLAGS)` (also doesn't exist in original repo) and `utils.update_flags(FLAGS)`? I have no idea what config you're actually running here? Is there a link to your code and whatever FLAGS are actually being used?
If `get_model(key, args)` is `jaxnerf.nerf.models.get_model` it doesn't have the right signature.
Did you define this helper as well? How are you specifying the example_batch that `jaxnerf.nerf.models.get_model` needs, which is ultimately calling `jaxnerf.nerf.models.construct_nerf(key, example_batch, args) --> model, init_variables`
Sorry, one more question - what do you mean you "initialize the RNG keys" to `None`?? you can't just set things like `key` `rng_0` and `rng_1` to `None` in the original JAX Nerf code... those are JAX deterministic PRNG keys that have to be provided.
If you have something running at all you must have heavily altered the original `__call__` function - I really need to see your code to have any idea about what's going on here. Please just dump all your changes somewhere so I can see what's actually being run.
A quick guess is that you accidentally changed the nested module structure which is causing a mismatch between the provided parameters and the model structure.
Hi, a few things regarding your comments:
1) For reproducibility, I used the lego config from the jaxnerf code. This is what `utils.update_flags(FLAGS)` loads. This gets me the following params map:
```
FrozenDict({
params: {
MLP_0: {
Dense_0: {
bias: (256,),
kernel: (63, 256),
},
Dense_1: {
bias: (256,),
kernel: (256, 256),
},
Dense_10: {
bias: (128,),
kernel: (283, 128),
},
Dense_11: {
bias: (3,),
kernel: (128, 3),
},
Dense_2: {
bias: (256,),
kernel: (256, 256),
},
Dense_3: {
bias: (256,),
kernel: (256, 256),
},
Dense_4: {
bias: (256,),
kernel: (256, 256),
},
Dense_5: {
bias: (256,),
kernel: (319, 256),
},
Dense_6: {
bias: (256,),
kernel: (256, 256),
},
Dense_7: {
bias: (256,),
kernel: (256, 256),
},
Dense_8: {
bias: (1,),
kernel: (256, 1),
},
Dense_9: {
bias: (256,),
kernel: (256, 256),
},
},
MLP_1: {
Dense_0: {
bias: (256,),
kernel: (63, 256),
},
Dense_1: {
bias: (256,),
kernel: (256, 256),
},
Dense_10: {
bias: (128,),
kernel: (283, 128),
},
Dense_11: {
bias: (3,),
kernel: (128, 3),
},
Dense_2: {
bias: (256,),
kernel: (256, 256),
},
Dense_3: {
bias: (256,),
kernel: (256, 256),
},
Dense_4: {
bias: (256,),
kernel: (256, 256),
},
Dense_5: {
bias: (256,),
kernel: (319, 256),
},
Dense_6: {
bias: (256,),
kernel: (256, 256),
},
Dense_7: {
bias: (256,),
kernel: (256, 256),
},
Dense_8: {
bias: (1,),
kernel: (256, 1),
},
Dense_9: {
bias: (256,),
kernel: (256, 256),
},
},
},
})
```
2) `utils.check_flags(FLAGS)` does indeed not exist in the original jaxnerf, sorry about that. It's just a helper to check whether the user has set training and data dirs. Can be removed without issue; this code also reproduces the error:
```python
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
os.environ["XLA_PYTHON_CLIENT_PREALLOCATE"] = "false"
import collections
from os import path
from absl import app
from absl import flags
import jax
from jax import numpy as jnp
from jax import random
from jax.experimental.jax2tf.examples import saved_model_lib
from nerf_sh.nerf import models
from nerf_sh.nerf import utils
import tensorflow as tf
FLAGS = flags.FLAGS
utils.define_flags()
def main(unused_argv):
rng = random.PRNGKey(20200823)
rng, key = random.split(rng)
utils.update_flags(FLAGS)
model, state = models.get_model_state(key, FLAGS, restore=True)
params = state.optimizer.target
predict_fn = lambda params, input: model.apply({"params": params}, input)
Rays = collections.namedtuple("Rays", ("origins", "directions", "viewdirs"))
input_signatures = [Rays(origins=tf.TensorSpec((3,),tf.float32),directions=tf.TensorSpec((3,),tf.float32),viewdirs=tf.TensorSpec((3,),tf.float32))]
print(jax.tree_map(jnp.shape, params))
saved_model_lib.convert_and_save_model(
predict_fn,
params,
'/some/path',
input_signatures=input_signatures)
if __name__ == "__main__":
app.run(main)
```
3) `get_model(key,args)` is slightly changed from JAX NeRF, as I didn't need the dataset peeking func. However if you call the function as `model, variables = models.get_model(key, dataset.peek(), FLAGS)` after loading the dataset with `dataset = datasets.get_dataset("test", FLAGS)` you should be able to reproduce as well. In any case, this is the function I am using:
```python
def get_model(key, args):
"""A helper function that wraps around a 'model zoo'."""
model_dict = {
"nerf": construct_nerf,
}
return model_dict[args.model](key, args)
```
4) The JAX deterministic PRNG keys can be initialized to None as long as you use the model for test inference: these keys are only used in the randomized stratified sampling scheme which is only performed during training; for test rendering, sampling becomes deterministic. Thus, we can alter the order of the `__call__` parameters and input a single tuple of raydirs, origins and viewdirs. The new call function looks like this:
```python
def __call__(self, rays, rng_0 = None, rng_1=None, randomized=False):
"""Nerf Model.
Args:
rng_0: jnp.ndarray, random number generator for coarse model sampling.
rng_1: jnp.ndarray, random number generator for fine model sampling.
rays: util.Rays, a namedtuple of ray origins, directions, and viewdirs.
randomized: bool, use randomized stratified sampling.
Returns:
ret: list, [(rgb_coarse, disp_coarse, acc_coarse), (rgb, disp, acc)]
"""
# Stratified sampling along rays
if (randomized):
key, rng_0 = random.split(rng_0)
else:
key = None
z_vals, samples = model_utils.sample_along_rays(
key,
rays.origins,
rays.directions,
self.num_coarse_samples,
self.near,
self.far,
randomized,
self.lindisp
)
samples_enc = model_utils.posenc(
samples,
self.min_deg_point,
self.max_deg_point,
self.legacy_posenc_order,
)
# Point attribute predictions
if self.use_viewdirs:
viewdirs_enc = model_utils.posenc(
rays.viewdirs,
0,
self.deg_view,
self.legacy_posenc_order,
)
raw_rgb, raw_sigma = self.MLP_0(samples_enc, viewdirs_enc)
else:
raw_rgb, raw_sigma = self.MLP_0(samples_enc)
# Add noises to regularize the density predictions if needed
key, rng_0 = random.split(rng_0)
raw_sigma = model_utils.add_gaussian_noise(
key,
raw_sigma,
self.noise_std,
randomized,
)
rgb = self.rgb_activation(raw_rgb)
sigma = self.sigma_activation(raw_sigma)
comp_rgb, disp, acc, weights,depth = model_utils.volumetric_rendering(
rgb,
sigma,
z_vals,
rays.directions,
white_bkgd=self.white_bkgd,
)
ret = [
(comp_rgb, disp, acc,depth),
]
# Hierarchical sampling based on coarse predictions
if self.num_fine_samples > 0:
z_vals_mid = 0.5 * (z_vals[Ellipsis, 1:] + z_vals[Ellipsis, :-1])
if (randomized): key, rng_1 = random.split(rng_1)
z_vals, samples = model_utils.sample_pdf(
key,
z_vals_mid,
weights[Ellipsis, 1:-1],
rays.origins,
rays.directions,
z_vals,
self.num_fine_samples,
randomized,
)
samples_enc = model_utils.posenc(
samples,
self.min_deg_point,
self.max_deg_point,
self.legacy_posenc_order,
)
if self.use_viewdirs:
raw_rgb, raw_sigma = self.MLP_1(samples_enc, viewdirs_enc)
else:
raw_rgb, raw_sigma = self.MLP_1(samples_enc)
if (randomized): key, rng_1 = random.split(rng_1)
raw_sigma = model_utils.add_gaussian_noise(
key,
raw_sigma,
self.noise_std,
randomized,
)
rgb = self.rgb_activation(raw_rgb)
sigma = self.sigma_activation(raw_sigma)
comp_rgb, disp, acc, unused_weights, depth = model_utils.volumetric_rendering(
rgb,
sigma,
z_vals,
rays.directions,
white_bkgd=self.white_bkgd,
)
ret.append((comp_rgb, disp, acc,depth))
return ret
```
Then the only change you need to make is in the args order of the `model.apply()` calls. In `train.py`, function `loss_fn()`, you replace `ret = model.apply(variables, key_0, key_1, rays, FLAGS.randomized)` with `ret = model.apply(variables, rays, key_0, key_1, FLAGS.randomized)`. You do the same in `train.py`, `render_fn()`.
That should be it. Lemme know if I missed something!
@Arcanous98 - thanks for providing the extra info!
Actually I just noticed something from your first response that I should have noticed immediately:
if the output of the inserted printout:
```python
input_signatures = [Rays(origin....
print(jax.tree_map(jnp.shape, params))
saved_model_lib.convert_and_save_model(...
```
has this structure:
```python
FrozenDict({
params: {
MLP_0: { ... }
...
}
})
```
The `params` object shouldn't itself have an extra `params:` layer inside it, since in your `predict_fn` function you write:
```python
predict_fn = lambda params, input: model.apply({"params": params}, input)
```
which adds an extra nesting layer under another `"params"` key and would lead to precisely the error that you're seeing.
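For illustration, a minimal sketch of the corrected wrapper (this assumes `params` is already the full variable dict shown in your printout, i.e. it carries the top-level `"params"` key, and it reuses `model` from the snippet above):
```python
# Sketch only: pass the variable dict through unchanged instead of
# re-wrapping it under a second "params" key.
predict_fn = lambda params, input: model.apply(params, input)
```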
In Flax, the `init` function returns (and the `apply` function takes) a variable (frozen) dictionary structured at the top level like
```python
{
"params": nested_param_dict,
"some_stateful_collection": some_stateful_collection_dict,
"some_other_stateful_collection": some_other_stateful_collection_dict
...
}
```
where each of those nested_dicts share the same module-defined nesting structure.
If you try to remove the extra `{"params": ...}` nesting, does your code run correctly?
It does! Thanks for the help @levskaya. I'm closing the issue now 👍
Great! Happy to Help! Happy NeRFing. ;) | 2022-02-09T10:28:27 |
google/flax | 1,937 | google__flax-1937 | [
"1936"
] | 94b081325d66c3d3ea04a54d8f5c88e230a07938 | diff --git a/flax/metrics/tensorboard.py b/flax/metrics/tensorboard.py
--- a/flax/metrics/tensorboard.py
+++ b/flax/metrics/tensorboard.py
@@ -39,18 +39,17 @@ def _flatten_dict(input_dict, parent_key='', sep='.'):
for k, v in input_dict.items():
new_key = parent_key + sep + k if parent_key else k
- # Take special care of things hparams cannot handle.
- if v is None:
- v = 'None'
- elif isinstance(v, list):
- v = str(v)
- elif isinstance(v, tuple):
- v = str(v)
- elif isinstance(v, dict):
+ # Valid types according to https://github.com/tensorflow/tensorboard/blob/1204566da5437af55109f7a4af18f9f8b7c4f864/tensorboard/plugins/hparams/summary_v2.py
+ valid_types = (bool, int, float, str, np.bool_, np.integer, np.floating, np.character)
+
+ if isinstance(v, dict):
# Recursively flatten the dict.
items.extend(_flatten_dict(v, new_key, sep=sep).items())
- else:
- items.append((new_key, v))
+ continue
+ elif not isinstance(v, valid_types):
+ # Cast any incompatible values as strings such that they can be handled by hparams
+ v = str(v)
+ items.append((new_key, v))
return dict(items)
| diff --git a/tests/tensorboard_test.py b/tests/tensorboard_test.py
--- a/tests/tensorboard_test.py
+++ b/tests/tensorboard_test.py
@@ -24,7 +24,7 @@
from tensorboard.util import tensor_util
import tensorflow.compat.v2 as tf
-from flax.metrics.tensorboard import SummaryWriter
+from flax.metrics.tensorboard import SummaryWriter, _flatten_dict
def _process_event(event):
for value in event.summary.value:
@@ -262,5 +262,58 @@ def test_summarywriter_histogram_2bins(self):
self.assertTrue(
np.allclose(actual_histogram[1], (499.5, 999.0, 500.0), atol=1e-01))
+ def test_flatten_dict(self):
+ # Valid types according to https://github.com/tensorflow/tensorboard/blob/1204566da5437af55109f7a4af18f9f8b7c4f864/tensorboard/plugins/hparams/summary_v2.py
+ input_hparams={
+ # Example Invalid Types
+ "None": None, "List": [1, 2, 3], "Tuple": (1, 2, 3), "Complex": complex("1+1j"), "np.complex_": np.complex_("1+1j"),
+ # Valid Python Types
+ "Bool": True, "Int": 1, "Float": 1.0, "Str": "test",
+ # Valid Numpy Types
+ "np.bool_": np.bool_(1), "np.integer": np.int_(1), "np.floating": np.float_(1.0), "np.character": np.str_("test"),
+ # Nested dict to flatten
+ "Nested_Dict": {
+ "None": None,
+ "List": [1, 2, 3],
+ "Tuple": (1, 2, 3),
+ "Complex": complex("1+1j"),
+ "np.complex_": np.complex_("1+1j"),
+ "Bool": True,
+ "Int": 1,
+ "Float": 1.0,
+ "Str": "test",
+ "np.bool_": np.bool_(1),
+ "np.integer": np.int_(1),
+ "np.floating": np.float_(1.0),
+ "np.character": np.str_("test")
+ }
+ }
+
+ result_hparams = _flatten_dict(input_hparams)
+
+ expected_hparams={
+ "None": "None", "List": "[1, 2, 3]", "Tuple": "(1, 2, 3)", "Complex": "(1+1j)", "np.complex_": "(1+1j)",
+ # Valid Python Types
+ "Bool": True, "Int": 1, "Float": 1.0, "Str": "test",
+ # Valid Numpy Types
+ "np.bool_": np.bool_(1), "np.integer": np.int_(1), "np.floating": np.float_(1.0), "np.character": np.str_("test"),
+ # Nested Dict
+ "Nested_Dict.None": "None",
+ "Nested_Dict.List": "[1, 2, 3]",
+ "Nested_Dict.Tuple": "(1, 2, 3)",
+ "Nested_Dict.Complex": "(1+1j)",
+ "Nested_Dict.np.complex_": "(1+1j)",
+ "Nested_Dict.Bool": True,
+ "Nested_Dict.Int": 1,
+ "Nested_Dict.Float": 1.0,
+ "Nested_Dict.Str": "test",
+ "Nested_Dict.np.bool_": np.bool_(1),
+ "Nested_Dict.np.integer": np.int_(1),
+ "Nested_Dict.np.floating": np.float_(1.0),
+ "Nested_Dict.np.character": np.str_("test")
+ }
+
+ self.assertDictEqual(result_hparams, expected_hparams)
+
if __name__ == '__main__':
absltest.main()
| Incompatible variables for Tensorboard hparams are recast to strings but never returned
### Core Problem
Tensorboard hparams only supports a subset of Python and Numpy variable types ([see hparams docstrings](https://github.com/tensorflow/tensorboard/blob/1204566da5437af55109f7a4af18f9f8b7c4f864/tensorboard/plugins/hparams/summary_v2.py)). The `flax.metrics.tensorboard.SummaryWriter` class's method `SummaryWriter.hparams()` should handle this behavior via the `flax.metrics.tensorboard._flatten_dict()` function, casting incompatible types to strings (which hparams supports). However, despite performing the casting operation, the `_flatten_dict` function does not append the recast variables to the dictionary it returns.
The result, for the below example, is that the "hidden_layers" parameters are silently excluded and do not appear in Tensorboard's hparams.
```Python
from flax.metrics import tensorboard
experiment_dir = "./Example"
network_hyperparameters = {
"hidden_layers_list": [12,12],
"hidden_layers_tuple": (12,12),
"dropout_rate": 1.0,
}
summary_writer = tensorboard.SummaryWriter(experiment_dir)
summary_writer.hparams(network_hyperparameters)
summary_writer.scalar('Training loss', 0.1, 1)
summary_writer.flush()
```
### Colab Example:
[Example notebook](https://colab.research.google.com/gist/tttc3/8dd7ef04c4222bc18fb03b043d370120/falx_tensorboard_issue_demo.ipynb)
### Proposed fix
Modify `_flatten_dict` to explicitly check whether a dictionary value is one of the types supported by Tensorboard's hparams API, as defined [here](https://github.com/tensorflow/tensorboard/blob/1204566da5437af55109f7a4af18f9f8b7c4f864/tensorboard/plugins/hparams/summary_v2.py). If the value is not supported, cast it to a string and append it to the dictionary that `_flatten_dict` normally returns.
**Current _flatten_dict code**
```Python
def _flatten_dict(input_dict, parent_key='', sep='.'):
"""Flattens and simplifies dict such that it can be used by hparams.
Args:
input_dict: Input dict, e.g., from ConfigDict.
parent_key: String used in recursion.
sep: String used to separate parent and child keys.
Returns:
Flattened dict.
"""
items = []
for k, v in input_dict.items():
new_key = parent_key + sep + k if parent_key else k
# Take special care of things hparams cannot handle.
if v is None:
v = 'None'
elif isinstance(v, list):
v = str(v)
elif isinstance(v, tuple):
v = str(v)
elif isinstance(v, dict):
# Recursively flatten the dict.
items.extend(_flatten_dict(v, new_key, sep=sep).items())
else:
items.append((new_key, v))
return dict(items)
```
**Proposed _flatten_dict code modification**
```Python
def _flatten_dict(input_dict, parent_key='', sep='.'):
"""Flattens and simplifies dict such that it can be used by hparams.
Args:
input_dict: Input dict, e.g., from ConfigDict.
parent_key: String used in recursion.
sep: String used to separate parent and child keys.
Returns:
Flattened dict.
"""
items = []
for k, v in input_dict.items():
new_key = parent_key + sep + k if parent_key else k
# Valid types according to https://github.com/tensorflow/tensorboard/blob/1204566da5437af55109f7a4af18f9f8b7c4f864/tensorboard/plugins/hparams/summary_v2.py
valid_types = (bool, int, float, str, np.bool_, np.integer, np.floating, np.character)
if isinstance(v, dict):
# Recursively flatten the dict.
items.extend(_flatten_dict(v, new_key, sep=sep).items())
continue
elif not isinstance(v, valid_types):
# Cast any incompatible values as strings such that they can be handled by hparams
v = str(v)
items.append((new_key, v))
return dict(items)
```
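As a quick illustrative check (hypothetical; it assumes `numpy` is imported as `np` and uses the proposed `_flatten_dict` above):
```Python
import numpy as np  # required by the valid_types check in the proposed code

hparams = {"hidden_layers_list": [12, 12], "hidden_layers_tuple": (12, 12), "dropout_rate": 1.0}
print(_flatten_dict(hparams))
# With the proposed change this prints:
# {'hidden_layers_list': '[12, 12]', 'hidden_layers_tuple': '(12, 12)', 'dropout_rate': 1.0}
```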
I am happy to submit a pull request with the modifications.
| Thanks for noticing this. Indeed there seems to be a bug in our code, and we actually do nothing with `v` if it is `None`, `list` or `tuple`! Yes, it would be great if you could file this as a PR and I think your suggested change using `valid_types` is an improvement.
We should also run internal tests on this to make sure your change doesn't break anything.
google/flax | 1,948 | google__flax-1948 | [
"1947"
] | 96c78cd1bb43dfacfb8a999f3155facec00ecb3b | diff --git a/flax/linen/module.py b/flax/linen/module.py
--- a/flax/linen/module.py
+++ b/flax/linen/module.py
@@ -530,8 +530,8 @@ def _customized_dataclass_transform(cls):
"""Handles final optional dataclass attributes: `parent` and `name`."""
# Use cls.__dict__ to get annotations of cls itself (no parent class).
annotations = dict(cls.__dict__.get('__annotations__', {}))
- parent_annotation = Union[Type["Module"], Type["Scope"],
- Type["_Sentinel"], None]
+ parent_annotation = Union[Type[Module], Type[Scope],
+ Type[_Sentinel], None]
if ('parent' in annotations
and annotations['parent'] != parent_annotation):
raise errors.ReservedModuleAttributeError(annotations)
| `typing.get_type_hints()` is broken for linen modules
I have some serialization code that involves a recursive call to `get_type_hints()`, which breaks for flax modules:
```python
from typing import get_type_hints
from flax import linen as nn
class Network(nn.Module):
layers: int
# Fails!
# NameError: name 'Module' is not defined
print(get_type_hints(Network))
```
The reason for this seems to be that forward references are (seemingly unnecessarily) used when fields are being dynamically added to the module dataclass, but the typing module tries to resolve these names in the wrong local namespace:
https://github.com/google/flax/blob/96c78cd1bb43dfacfb8a999f3155facec00ecb3b/flax/linen/module.py#L533-L534
This can be confirmed because adding one extra line fixes the error:
```python
from typing import get_type_hints
from flax import linen as nn
from flax.linen.module import Module, Scope, _Sentinel # New
class Network(nn.Module):
layers: int
# Works!
# {'layers': <class 'int'>, 'parent': typing.Union[typing.Type[flax.linen.module.Module], typing.Type[flax.core.scope.Scope], typing.Type[flax.linen.module._Sentinel], NoneType], 'name': <class 'str'>}
print(get_type_hints(Network))
```
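For completeness, a sketch of what the fix above should make possible (stated as an expectation, not verified output here): once the `parent` annotation uses the real classes instead of string forward references, `get_type_hints` no longer needs those names in the caller's namespace, so the first snippet works unmodified:
```python
from typing import get_type_hints
from flax import linen as nn

class Network(nn.Module):
  layers: int

# With the patched annotation there are no string forward references left
# to resolve, so this should no longer raise NameError.
print(get_type_hints(Network))
```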
| 2022-02-27T23:21:14 |
||
google/flax | 1,955 | google__flax-1955 | [
"1155"
] | c8cccec9f035a1339136d22ab40dc5966e894f44 | diff --git a/flax/linen/__init__.py b/flax/linen/__init__.py
--- a/flax/linen/__init__.py
+++ b/flax/linen/__init__.py
@@ -17,24 +17,26 @@
# pylint: disable=g-multiple-import
# re-export commonly used modules and functions
-from .activation import (celu, elu, gelu, glu, leaky_relu, log_sigmoid,
- log_softmax, relu, sigmoid, soft_sign, softmax,
- softplus, swish, silu, tanh, PReLU)
+from .activation import (PReLU, celu, elu, gelu, glu, leaky_relu, log_sigmoid,
+ log_softmax, relu, sigmoid, silu, soft_sign, softmax,
+ softplus, swish, tanh)
from .attention import (MultiHeadDotProductAttention, SelfAttention,
- dot_product_attention, dot_product_attention_weights,
- make_attention_mask, make_causal_mask, combine_masks)
-from ..core import broadcast, DenyList, FrozenDict
+ combine_masks, dot_product_attention,
+ dot_product_attention_weights, make_attention_mask,
+ make_causal_mask)
+from .combinators import Sequential
+from ..core import DenyList, FrozenDict, broadcast
+from .initializers import ones, zeros
from .linear import Conv, ConvLocal, ConvTranspose, Dense, DenseGeneral, Embed
-from .module import (Module, compact, nowrap, enable_named_call,
- disable_named_call, override_named_call, Variable, init,
- init_with_output, apply, merge_param)
+from .module import (Module, Variable, apply, compact,
+ disable_named_call, enable_named_call, init,
+ init_with_output, merge_param, nowrap, override_named_call)
from .normalization import BatchNorm, GroupNorm, LayerNorm
from .pooling import avg_pool, max_pool, pool
-from .recurrent import GRUCell, LSTMCell, ConvLSTM, OptimizedLSTMCell
+from .recurrent import ConvLSTM, GRUCell, LSTMCell, OptimizedLSTMCell
from .stochastic import Dropout
-from .transforms import (jit, named_call, checkpoint, remat, remat_scan,
- scan, vmap, map_variables, vjp, jvp, custom_vjp,
+from .transforms import (checkpoint, custom_vjp, jit, jvp, map_variables,
+ named_call, remat, remat_scan, scan, vjp, vmap,
while_loop)
-from .initializers import zeros, ones
# pylint: enable=g-multiple-import
diff --git a/flax/linen/combinators.py b/flax/linen/combinators.py
new file mode 100644
--- /dev/null
+++ b/flax/linen/combinators.py
@@ -0,0 +1,39 @@
+"""Combinators of modules, such as a Sequential."""
+
+from typing import Callable, Sequence
+
+from flax.linen.module import Module
+
+class Sequential(Module):
+ """Applies a linear chain of Modules.
+
+ Meant to be used only for the simple case of fusing together callables where
+ the input of a particular module/op is the output of the previous one.
+
+ Modules will be applied in the order that they are passed in the constructor.
+
+ The apply() method of Sequential accepts any input and forwards it to the
+ first module it contains. It chains the output sequentially to the input of
+ the next module and returns the output of the final module.
+
+ Example usage::
+
+ class Foo(nn.Module):
+ feature_sizes: Sequence[int]
+
+ @nn.compact
+ def __call__(self, x):
+ return nn.Sequential([nn.Dense(layer_size, name=f'layers_{idx}')
+ for idx, layer_size
+ in enumerate(self.feature_sizes)])(x)
+ """
+ layers: Sequence[Callable]
+
+ def __call__(self, *args, **kwargs):
+ if not self.layers:
+ raise ValueError(f'Empty Sequential module {self.name}.')
+
+ outputs = self.layers[0](*args, **kwargs)
+ for layer in self.layers[1:]:
+ outputs = layer(outputs)
+ return outputs
| diff --git a/tests/linen/linen_combinators_test.py b/tests/linen/linen_combinators_test.py
new file mode 100644
--- /dev/null
+++ b/tests/linen/linen_combinators_test.py
@@ -0,0 +1,93 @@
+"""Tests for flax.linen.combinators."""
+
+from typing import Any, Optional, Sequence
+
+from absl.testing import absltest
+
+from flax import linen as nn
+import jax
+from jax import numpy as jnp
+from jax import random
+import numpy as np
+
+# Parse absl flags test_srcdir and test_tmpdir.
+jax.config.parse_flags_with_absl()
+
+
+class MLP(nn.Module):
+ layer_sizes: Sequence[int]
+ activation: Optional[Any] = None
+ activation_final: Optional[Any] = None
+
+ @nn.compact
+ def __call__(self, inputs):
+ x = inputs
+ for layer_size in self.layer_sizes[:-1]:
+ x = nn.Dense(features=layer_size, kernel_init=nn.initializers.ones)(x)
+ if self.activation is not None:
+ x = self.activation(x)
+ x = nn.Dense(
+ features=self.layer_sizes[-1], kernel_init=nn.initializers.ones)(
+ x)
+ if self.activation_final is None:
+ return x
+ return self.activation_final(x)
+
+
+class SequentialTest(absltest.TestCase):
+
+ def test_construction(self):
+ sequential = nn.Sequential([nn.Dense(4), nn.Dense(2)])
+ key1, key2 = random.split(random.PRNGKey(0), 2)
+ x = random.uniform(key1, (3, 1, 5))
+ params = sequential.init(key2, x)
+ output = sequential.apply(params, x)
+ self.assertEqual(output.shape, (3, 1, 2))
+
+ def test_fails_if_layers_empty(self):
+ sequential = nn.Sequential([])
+ with self.assertRaisesRegex(ValueError,
+ 'Empty Sequential module'):
+ sequential.init(random.PRNGKey(42), jnp.ones((3, 5)))
+
+ def test_same_output_as_mlp(self):
+ sequential = nn.Sequential([
+ nn.Dense(4, kernel_init=nn.initializers.ones),
+ nn.Dense(8, kernel_init=nn.initializers.ones),
+ nn.Dense(2, kernel_init=nn.initializers.ones)
+ ])
+ mlp = MLP(layer_sizes=[4, 8, 2])
+
+ key1, key2 = random.split(random.PRNGKey(0), 2)
+ x = random.uniform(key1, (3, 5))
+ params_1 = sequential.init(key2, x)
+ params_2 = mlp.init(key2, x)
+
+ output_1 = sequential.apply(params_1, x)
+ output_2 = mlp.apply(params_2, x)
+ np.testing.assert_array_equal(output_1, output_2)
+
+ def test_same_output_as_mlp_with_activation(self):
+ sequential = nn.Sequential([
+ nn.Dense(4, kernel_init=nn.initializers.ones), nn.relu,
+ nn.Dense(8, kernel_init=nn.initializers.ones), nn.relu,
+ nn.Dense(2, kernel_init=nn.initializers.ones), nn.log_softmax
+ ])
+
+ mlp = MLP(
+ layer_sizes=[4, 8, 2],
+ activation=nn.relu,
+ activation_final=nn.log_softmax)
+
+ key1, key2 = random.split(random.PRNGKey(0), 2)
+ x = random.uniform(key1, (3, 5))
+ params_1 = sequential.init(key2, x)
+ params_2 = mlp.init(key2, x)
+
+ output_1 = sequential.apply(params_1, x)
+ output_2 = mlp.apply(params_2, x)
+ np.testing.assert_array_equal(output_1, output_2)
+
+
+if __name__ == '__main__':
+ absltest.main()
| Implement a Sequential Module
Users often ask for this, so it would be good to just add it. It can be as simple as this:
```python
class Sequential(nn.Module):
layers: Sequence[nn.Module]
def __call__(self, x):
for layer in self.layers:
x = layer(x)
return x
```
Example usage:
```
class Foo(nn.Module):
feature_sizes: List[int]
@nn.compact
def __call__(self, x):
return Sequential([nn.Dense(sz, name=f'layers_{idx}')
for idx,sz in enumerate(self.feature_sizes)])(x)
```
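The accompanying tests also exercise plain callables (e.g. activations) as layers; a minimal sketch of that usage:
```python
import jax
import jax.numpy as jnp
import flax.linen as nn

# Sequential composes Modules and plain callables alike.
model = nn.Sequential([nn.Dense(4), nn.relu, nn.Dense(2)])
x = jnp.ones((1, 3))
variables = model.init(jax.random.PRNGKey(0), x)
y = model.apply(variables, x)  # shape (1, 2)
```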
| Hi @marcvanzee,
I am interested in working on this issue. I submitted a PR #1156 if that is possible. | 2022-03-02T19:42:03 |
google/flax | 2,007 | google__flax-2007 | [
"1925"
] | 08f4c53d21d9d86bff4e8f78d3b18f56ccfbaeef | diff --git a/flax/serialization.py b/flax/serialization.py
--- a/flax/serialization.py
+++ b/flax/serialization.py
@@ -117,11 +117,14 @@ def _restore_list(xs, state_dict: Dict[str, Any]) -> List[Any]:
def _dict_state_dict(xs: Dict[str, Any]) -> Dict[str, Any]:
- return {key: to_state_dict(value) for key, value in xs.items()}
+ str_keys = set(str(k) for k in xs.keys())
+ if len(str_keys) != len(xs):
+ raise ValueError(f'Dict keys do not have a unique string representation: {str_keys}')
+ return {str(key): to_state_dict(value) for key, value in xs.items()}
def _restore_dict(xs, states: Dict[str, Any]) -> Dict[str, Any]:
- return {key: from_state_dict(value, states[key])
+ return {key: from_state_dict(value, states[str(key)])
for key, value in xs.items()}
| diff --git a/tests/linen/linen_module_test.py b/tests/linen/linen_module_test.py
--- a/tests/linen/linen_module_test.py
+++ b/tests/linen/linen_module_test.py
@@ -159,6 +159,22 @@ def __call__(self, x):
{'lyrs1_a': {'kernel': (10, 3)},
'lyrs1_b': {'kernel': (3, 3)}})
+ def test_setup_dict_nonstring_keys(self):
+ class Foo(nn.Module):
+ def setup(self):
+ self.a = {(1, 2): nn.Dense(2)} # here the dict using tuple as key
+
+ @nn.compact
+ def __call__(self, x):
+ return self.a[(1, 2)](x)
+
+ foo = Foo()
+ x = jnp.ones(shape=(1, 3))
+ params = foo.init(random.PRNGKey(0), x)['params']
+ param_shape = jax.tree_map(jnp.shape, params)
+ self.assertEqual(param_shape,
+ {'a_(1, 2)': {'kernel': (3, 2), 'bias': (2,)}})
+
def test_setup_cloning(self):
class MLP(nn.Module):
def setup(self):
| Can not assign dict whos key is not string as module attribute
Hi,
It seems the current flax.linen does not allow assigning a dict with non-string keys as a module attribute.
See the simple example below;
it will trigger the error:
`AssertionError: A state dict must only have string keys.`
Questions:
1. Is this intended behavior? Why?
2. If it is intended, is there any workaround? It is quite possible that we need to assign information contained in a dict to the module, and the keys of the dict may not be strings.
```python
import flax.linen as nn
import jax
import jax.numpy as jnp
class Foo(nn.Module):
def setup(self):
self.a = {(1, 2): 3} # here the dict using tuple as key
@nn.compact
def __call__(self, x):
return x
foo = Foo()
rng = jax.random.PRNGKey(0)
x = jnp.ones(shape=(3, 3))
vars = foo.init({"params": rng}, x)
out = foo.apply(vars, x)
print(out)
```
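One possible workaround (an illustrative sketch, not from the original report) is to stringify the keys before assignment, since the attribute traversal only accepts string-keyed dicts:
```python
import flax.linen as nn
import jax
import jax.numpy as jnp

class Foo(nn.Module):
  def setup(self):
    # Workaround sketch: use string keys so the state-dict traversal
    # only ever sees string-keyed dicts.
    self.a = {str((1, 2)): 3}

  @nn.compact
  def __call__(self, x):
    return x * self.a[str((1, 2))]

x = jnp.ones(shape=(3, 3))
variables = Foo().init({"params": jax.random.PRNGKey(0)}, x)
print(Foo().apply(variables, x))
```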
| This should be fixed
@jheek just tried this in a public Colab and installed flax from main, but the problem still seems to be there.
It's because we traverse any assignment looking for Module leaves and have overly strict requirements on the structure of the tree (e.g. string keys) for any leaf Module and that's spilling over as a constraint on any leaf type. | 2022-03-22T09:49:37 |
google/flax | 2,009 | google__flax-2009 | [
"2000"
] | e16cf72b49734f2f32820cd4bee3ee8a894a5a55 | diff --git a/flax/linen/linear.py b/flax/linen/linear.py
--- a/flax/linen/linear.py
+++ b/flax/linen/linear.py
@@ -205,6 +205,33 @@ def _conv_dimension_numbers(input_shape):
return lax.ConvDimensionNumbers(lhs_spec, rhs_spec, out_spec)
+PaddingLike = Union[str, int, Sequence[Union[int, Tuple[int, int]]]]
+LaxPadding = Union[str, Sequence[Tuple[int, int]]]
+
+
+def canonicalize_padding(padding: PaddingLike, rank: int) -> LaxPadding:
+ """"Canonicalizes conv padding to a jax.lax supported format."""
+ if isinstance(padding, str):
+ return padding
+ if isinstance(padding, int):
+ return [(padding, padding)] * rank
+ if isinstance(padding, Sequence) and len(padding) == rank:
+ new_pad = []
+ for p in padding:
+ if isinstance(p, int):
+ new_pad.append((p, p))
+ elif isinstance(p, tuple) and len(p) == 2:
+ new_pad.append(p)
+ else:
+ break
+ if len(new_pad) == rank:
+ return new_pad
+ raise ValueError(
+ f'Invalid padding format: {padding}, should be str, int,'
+ f' or a sequence of len {rank} where each element is an'
+ f' int or pair of ints.')
+
+
class _Conv(Module):
"""Convolution Module wrapping `lax.conv_general_dilated[_local]`.
@@ -218,7 +245,9 @@ class _Conv(Module):
padding: either the string `'SAME'`, the string `'VALID'`, the string
`'CIRCULAR'` (periodic boundary conditions), or a sequence of `n` `(low,
high)` integer pairs that give the padding to apply before and after each
- spatial dimension.
+ spatial dimension. A single int is interpeted as applying the same padding
+ in all dims and passign a single int in a sequence causes the same padding
+ to be used on both sides.
input_dilation: an integer or a sequence of `n` integers, giving the
dilation factor to apply in each spatial dimension of `inputs`
(default: 1). Convolution with input dilation `d` is equivalent to
@@ -240,7 +269,7 @@ class _Conv(Module):
features: int
kernel_size: Sequence[int]
strides: Union[None, int, Sequence[int]] = 1
- padding: Union[str, Sequence[Tuple[int, int]]] = 'SAME'
+ padding: PaddingLike = 'SAME'
input_dilation: Union[None, int, Sequence[int]] = 1
kernel_dilation: Union[None, int, Sequence[int]] = 1
feature_group_count: int = 1
@@ -307,8 +336,8 @@ def maybe_broadcast(x: Optional[Union[int, Sequence[int]]]) -> (
input_dilation = maybe_broadcast(self.input_dilation)
kernel_dilation = maybe_broadcast(self.kernel_dilation)
- padding_lax: Union[str, Sequence[Tuple[int, int]]]
- if self.padding == 'CIRCULAR':
+ padding_lax = canonicalize_padding(self.padding, len(kernel_size))
+ if padding_lax == 'CIRCULAR':
kernel_size_dilated = [
(k - 1) * d + 1 for k, d in zip(kernel_size, kernel_dilation)
]
@@ -317,8 +346,6 @@ def maybe_broadcast(x: Optional[Union[int, Sequence[int]]]) -> (
[(0, 0)])
inputs = jnp.pad(inputs, pads, mode='wrap')
padding_lax = 'VALID'
- else:
- padding_lax = self.padding
dimension_numbers = _conv_dimension_numbers(inputs.shape)
in_features = inputs.shape[-1]
@@ -429,7 +456,9 @@ class ConvTranspose(Module):
padding: either the string `'SAME'`, the string `'VALID'`, the string
`'CIRCULAR'` (periodic boundary conditions), or a sequence of `n` `(low,
high)` integer pairs that give the padding to apply before and after each
- spatial dimension.
+ spatial dimension. A single int is interpeted as applying the same padding
+ in all dims and passign a single int in a sequence causes the same padding
+ to be used on both sides.
kernel_dilation: `None`, or a sequence of `n` integers, giving the
dilation factor to apply in each spatial dimension of the convolution
kernel. Convolution with kernel dilation is also known as 'atrous
@@ -445,7 +474,7 @@ class ConvTranspose(Module):
features: int
kernel_size: Union[int, Tuple[int, ...]]
strides: Optional[Tuple[int, ...]] = None
- padding: Union[str, Sequence[Tuple[int, int]]] = 'SAME'
+ padding: PaddingLike = 'SAME'
kernel_dilation: Optional[Sequence[int]] = None
use_bias: bool = True
dtype: Dtype = jnp.float32
@@ -492,11 +521,9 @@ def __call__(self, inputs: Array) -> Array:
self.param_dtype)
kernel = jnp.asarray(kernel, self.dtype)
- padding_lax: Union[str, Sequence[Tuple[int, int]]]
- if self.padding == 'CIRCULAR':
+ padding_lax = canonicalize_padding(self.padding, len(kernel_size))
+ if padding_lax == 'CIRCULAR':
padding_lax = 'VALID'
- else:
- padding_lax = self.padding
y = lax.conv_transpose(
inputs,
| diff --git a/tests/linen/linen_linear_test.py b/tests/linen/linen_linear_test.py
--- a/tests/linen/linen_linear_test.py
+++ b/tests/linen/linen_linear_test.py
@@ -15,6 +15,7 @@
"""Tests for flax.deprecated.nn.linear."""
import functools
+from multiprocessing.sharedctypes import Value
from absl.testing import absltest
from absl.testing import parameterized
@@ -842,6 +843,19 @@ def __call__(self, x):
}})
self.assertEqual(y.shape, (8, 6))
+ def test_canonicalize_padding(self):
+ def test_pad(pad, rank, expected=None):
+ if expected is None:
+ with self.assertRaises(ValueError):
+ nn.linear.canonicalize_padding(pad, rank)
+ else:
+ self.assertEqual(nn.linear.canonicalize_padding(pad, rank), expected)
+ test_pad("SAME", 2, "SAME")
+ test_pad(2, 3, [(2, 2), (2, 2), (2, 2)])
+ test_pad((2, 2), 3)
+ test_pad((2, 2), 1)
+ test_pad([1, (2, 3)], 2, [(1, 1), (2, 3)])
+ test_pad([None, (1, 2)], 2)
if __name__ == '__main__':
absltest.main()
| flax.linen.Conv needs better error checking of 'padding' argument.
Hi!
The following code leads to the mysterious error message `RuntimeError: UNKNOWN: -:4:130: error: expected '['`:
```
x = np.random.normal(size=(7, 48, 48, 96)).astype(np.float32)
model_def = nn.Conv(
features=96, kernel_size=(7, 7),
strides=(4, 4),
padding=(3, 3))
model_state, conv_params = model_def.init({'params': jax.random.PRNGKey(42)}, x).pop('params')
out = model_def.apply({"params": conv_params}, x)
```
The mistake here is that I was using `padding=(3, 3)` instead of `padding=((3, 3), (3, 3))`, but the error message is not informative. It would be great if that could be improved. Ideally, a simpler padding spec like `padding=(3, 3)` or even `padding=3` could directly be supported.
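As a follow-up sketch of what the canonicalization in this patch accepts (illustrative only, reusing the shapes from the snippet above):
```python
import jax
import jax.numpy as jnp
import flax.linen as nn

x = jnp.ones((7, 48, 48, 96))
# With canonicalize_padding, all of these request symmetric padding of 3
# on each spatial dimension:
for pad in (3, (3, 3), ((3, 3), (3, 3))):
  conv = nn.Conv(features=96, kernel_size=(7, 7), strides=(4, 4), padding=pad)
  y, _ = conv.init_with_output(jax.random.PRNGKey(0), x)
  print(pad, y.shape)  # (7, 12, 12, 96) in each case
```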
| 2022-03-22T13:17:13 |
|
google/flax | 2,013 | google__flax-2013 | [
"1303"
] | 390383830bd2de784994d4d961e1ffc42a249962 | diff --git a/flax/errors.py b/flax/errors.py
--- a/flax/errors.py
+++ b/flax/errors.py
@@ -141,9 +141,10 @@ class ApplyScopeInvalidVariablesStructureError(FlaxError):
For more explanation on variable dicts, please see :mod:`flax.core.variables`.
"""
def __init__(self, variables):
- super().__init__('Expected the first argument passed to an apply function '
- 'to be a dictionary containing a \'params\' key at the '
- f'root level, but got "{variables}".')
+ super().__init__('Expect the `variables` (first argument) passed to apply() '
+ 'to be a dict with the structure {"params": ...}, but got a dict '
+ 'with an extra params layer, i.e. {"params": {"params": ... } }. '
+ f'You should instead pass in your dict\'s ["params"].')
class ScopeParamNotFoundError(FlaxError):
@@ -160,16 +161,18 @@ class Embed(nn.Module):
def __call__(self, inputs, embed_name='embedding'):
inputs = inputs.astype('int32')
embedding = self.param(embed_name,
- lecun_normal(),
+ jax.nn.initializers.lecun_normal(),
(self.num_embeddings, self.features))
return embedding[inputs]
- variables = Embed(4, 8).init(random.PRNGKey(0), jnp.ones((5, 5, 1)))
- _ = Embed().apply(variables, jnp.ones((5, 5, 1)), 'embed')
+ model = Embed(4, 8)
+ variables = model.init(random.PRNGKey(0), jnp.ones((5, 5, 1)))
+ _ = model.apply(variables, jnp.ones((5, 5, 1)), 'embed')
"""
def __init__(self, param_name, scope_path):
- super().__init__(f'No parameter named "{param_name}" exists in '
- f'"{scope_path}".')
+ super().__init__(
+ f'Could not find parameter named "{param_name}" in scope '
+ f'"{scope_path}".')
class ScopeCollectionNotFound(FlaxError):
diff --git a/flax/linen/linear.py b/flax/linen/linear.py
--- a/flax/linen/linear.py
+++ b/flax/linen/linear.py
@@ -282,8 +282,9 @@ def __call__(self, inputs: Array) -> Array:
inputs = jnp.asarray(inputs, self.dtype)
if isinstance(self.kernel_size, int):
- raise TypeError('The kernel size must be specified as a'
- ' tuple/list of integers (eg.: [3, 3]).')
+ raise TypeError('Expected Conv kernel_size to be a'
+ ' tuple/list of integers (eg.: [3, 3]) but got'
+ f' {self.kernel_size}.')
else:
kernel_size = tuple(self.kernel_size)
| diff --git a/tests/core/core_lift_test.py b/tests/core/core_lift_test.py
--- a/tests/core/core_lift_test.py
+++ b/tests/core/core_lift_test.py
@@ -47,7 +47,7 @@ def f(scope):
split_rngs={'params': True})
dense(scope.push('dense'), np.ones((3, 2)), 2)
- msg = r'No parameter named "kernel" exists in "/vmap\(dense\)".'
+ msg = r'Could not find parameter named "kernel" in scope "/vmap\(dense\)".'
with self.assertRaisesRegex(errors.ScopeParamNotFoundError, msg):
apply(f)({'params': {'dense': {'abc': np.ones((3, 3))}}})
diff --git a/tests/core/core_scope_test.py b/tests/core/core_scope_test.py
--- a/tests/core/core_scope_test.py
+++ b/tests/core/core_scope_test.py
@@ -121,11 +121,11 @@ def f(scope):
},
})
apply(f)(params) # Valid.
- msg = 'dictionary containing a \'params\' key at the root level'
+ msg = 'but got a dict with an extra params layer'
with self.assertRaisesRegex(errors.ApplyScopeInvalidVariablesStructureError,
msg):
apply(f)({'params': params})
-
+
def test_mutate_undefined_collection(self):
def f(scope):
scope.put_variable('state', 'test', 123)
@@ -138,7 +138,7 @@ def test_undefined_param(self):
def f(scope):
nn.dense(scope.push('dense'), np.ones((1, 2)), 2)
- msg = r'No parameter named "kernel" exists in "/dense".'
+ msg = r'Could not find parameter named "kernel" in scope "/dense".'
with self.assertRaisesRegex(errors.ScopeParamNotFoundError, msg):
apply(f)({'params': {'abc': 1}})
| flax.errors.ScopeParamNotFoundError: No parameter named "kernel" exists in "/Conv_0".
`Model.apply({'params':params}, batch)` in the loss function seems to throw the error above. I pretty much followed the examples in the docs line-by-line with no luck.
Here is a minimal example of the issue reproduced in google colab - https://colab.research.google.com/drive/12mRim_N4cWmv4nmeuknq8RT2VWUA5egB
| you wrote
```
parameters = SimpleCNN6Layer(n=16).init({'params':jax.random.PRNGKey(0)}, jax.numpy.ones((16, 4000, 1)))
optimizer = optim.Adam(learning_rate=3e-4).create(parameters)
```
but you probably meant
```
variables = SimpleCNN6Layer(n=16).init({'params':jax.random.PRNGKey(0)}, jax.numpy.ones((16, 4000, 1)))
optimizer = optim.Adam(learning_rate=3e-4).create(variables['params'])
```
You have to make sure you take the `params` of the variable dict returned by `init`. So when you create your optimizer you should do:
```
variables = SimpleCNN6Layer(n=16).init({'params':jax.random.PRNGKey(0)}, jax.numpy.ones((16, 4000, 1)))
optimizer = optim.Adam(learning_rate=3e-4).create(variables['params'])
```
The error message is not very clear, so thank you for bringing this up! 👍
We should consider improving the error message to something like "Maybe you're passing in an incorrect variable dict"?
Wow, @andsteing beat me to it with an identical code snippet 😄
hah you clearly spent that 1 minute on a much better explanation!
👍 👍 👍 | 2022-03-24T12:05:25 |
google/flax | 2,064 | google__flax-2064 | [
"2029"
] | 18be4d4dbf8ad18fda099355f1a698dfe94c8989 | diff --git a/flax/linen/pooling.py b/flax/linen/pooling.py
--- a/flax/linen/pooling.py
+++ b/flax/linen/pooling.py
@@ -25,8 +25,8 @@ def pool(inputs, init, reduce_fn, window_shape, strides, padding):
Pooling functions are implemented using the ReduceWindow XLA op.
NOTE: Be aware that pooling is not generally differentiable.
- That means providing a reduce_fn that is differentiable does not imply
- that pool is differentiable.
+ That means providing a reduce_fn that is differentiable does not imply that
+ pool is differentiable.
Args:
inputs: input data with dimensions (batch, window dims..., features).
@@ -34,7 +34,7 @@ def pool(inputs, init, reduce_fn, window_shape, strides, padding):
reduce_fn: a reduce function of the form `(T, T) -> T`.
window_shape: a shape tuple defining the window to reduce over.
strides: a sequence of `n` integers, representing the inter-window
- strides.
+ strides (default: `(1, ..., 1)`).
padding: either the string `'SAME'`, the string `'VALID'`, or a sequence
of `n` `(low, high)` integer pairs that give the padding to apply before
and after each spatial dimension.
@@ -76,7 +76,7 @@ def avg_pool(inputs, window_shape, strides=None, padding="VALID"):
inputs: input data with dimensions (batch, window dims..., features).
window_shape: a shape tuple defining the window to reduce over.
strides: a sequence of `n` integers, representing the inter-window
- strides (default: `(1, ..., 1)`).
+ strides (default: `(1, ..., 1)`).
padding: either the string `'SAME'`, the string `'VALID'`, or a sequence
of `n` `(low, high)` integer pairs that give the padding to apply before
and after each spatial dimension (default: `'VALID'`).
@@ -95,7 +95,7 @@ def max_pool(inputs, window_shape, strides=None, padding="VALID"):
inputs: input data with dimensions (batch, window dims..., features).
window_shape: a shape tuple defining the window to reduce over.
strides: a sequence of `n` integers, representing the inter-window
- strides (default: `(1, ..., 1)`).
+ strides (default: `(1, ..., 1)`).
padding: either the string `'SAME'`, the string `'VALID'`, or a sequence
of `n` `(low, high)` integer pairs that give the padding to apply before
and after each spatial dimension (default: `'VALID'`).
@@ -113,7 +113,7 @@ def min_pool(inputs, window_shape, strides=None, padding="VALID"):
inputs: Input data with dimensions (batch, window dims..., features).
window_shape: A shape tuple defining the window to reduce over.
strides: A sequence of `n` integers, representing the inter-window strides
- (default: `(1, ..., 1)`).
+ (default: `(1, ..., 1)`).
padding: Either the string `'SAME'`, the string `'VALID'`, or a sequence of
`n` `(low, high)` integer pairs that give the padding to apply before and
after each spatial dimension (default: `'VALID'`).
| Document default stride for pooling functions
### Discussed in https://github.com/google/flax/discussions/2023
<div type='discussions-op-text'>
<sup>Originally posted by **dogeplusplus** April 3, 2022</sup>
A bit of a nitpick but I was wondering why the default behavior of pooling functions is to have stride 1 instead of the `window_shape`? I feel that for most use cases the stride would be the dimension of the kernel size as in other frameworks.</div>
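(For reference, a small sketch of passing the strides explicitly to get the "stride equals window" behaviour familiar from other frameworks; illustrative only:)
```python
import jax.numpy as jnp
import flax.linen as nn

x = jnp.ones((1, 4, 4, 3))
# Default strides are (1, 1); pass them explicitly for non-overlapping windows.
y = nn.max_pool(x, window_shape=(2, 2), strides=(2, 2))  # shape (1, 2, 2, 3)
```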
| 2022-04-25T08:12:41 |
||
google/flax | 2,113 | google__flax-2113 | [
"2108"
] | 93dff18007abdc75d39d7e8d5fb8356f7d8a25c2 | diff --git a/.github/analytics/get_repo_metrics.py b/.github/analytics/get_repo_metrics.py
new file mode 100644
--- /dev/null
+++ b/.github/analytics/get_repo_metrics.py
@@ -0,0 +1,367 @@
+import json
+import os
+from datetime import datetime
+from pathlib import Path
+from typing import Callable, List
+
+import pandas as pd
+import requests
+import matplotlib.pyplot as plt
+import matplotlib.dates as mdates
+
+
+token = os.environ["GITHUB_TOKEN"]
+endpoint = r"https://api.github.com/graphql"
+headers = {"Authorization": f"bearer {token}"}
+
+#------------------------------------------------------------------------------
+# GraphQL
+#------------------------------------------------------------------------------
+# NOTE: This GraphQL logic was ported and adapted from this script:
+# https://github.com/scientific-python/devstats-data/blob/4c022961abc4ca6061f8719d9c3387e98734b90c/query.py
+# It contains style differences from Google's style guide.
+
+def load_query_from_file(fname, repo_owner, repo_name) -> str:
+ with open(fname) as fh:
+ query = fh.read()
+ # Set target repo from template
+ query = query.replace('_REPO_OWNER_', repo_owner)
+ query = query.replace('_REPO_NAME_', repo_name)
+ return query
+
+
+def send_query(query, query_type, cursor=None):
+ """
+ Sends a GraphQL to the GitHub API.
+
+ No validation is done on the query before sending. GitHub GraphQL is
+ supported with the `cursor` argument.
+
+ Parameters
+ ----------
+ query : str
+ The GraphQL query to be sent
+ query_type : {"issues", "pullRequests"}
+ The object being queried according to the GitHub GraphQL schema.
+ Currently only issues and pullRequests are supported
+ cursor : str, optional
+ If given, then the cursor is injected into the query to support
+ GitHub's GraphQL pagination.
+
+ Returns
+ -------
+ dict
+ The result of the query (json) parsed by `json.loads`
+
+ Notes
+ -----
+ This is intended mostly for internal use within `get_all_responses`.
+ """
+ # TODO: Expand this, either by parsing the query type from the query
+ # directly or manually adding more query_types to the set
+ if query_type not in {'issues', 'pullRequests'}:
+ raise ValueError(
+ 'Only \'issues\' and \'pullRequests\' queries are currently supported'
+ )
+ # TODO: Generalize this
+ # WARNING: The cursor injection depends on the specific structure of the
+ # query, this is the main reason why query types are limited to issues/PRs
+ if cursor is not None:
+ cursor_insertion_key = query_type + '('
+ cursor_ind = query.find(cursor_insertion_key) + len(cursor_insertion_key)
+ query = query[:cursor_ind] + f'after:"{cursor}", ' + query[cursor_ind:]
+ # Build request payload
+ payload = {'query' : query}
+ response = requests.post(endpoint, json=payload, headers=headers)
+ return json.loads(response.content)
+
+def get_all_responses(query, query_type):
+ "Helper function to bypass GitHub GraphQL API node limit."
+ # Get data from a single response
+ initial_data = send_query(query, query_type)
+ data, last_cursor, total_count = parse_single_query(initial_data, query_type)
+ print(f'Retrieving {len(data)} out of {total_count} values...')
+ # Continue requesting data (with pagination) until all are acquired
+ while len(data) < total_count:
+ rdata = send_query(query, query_type, cursor=last_cursor)
+ pdata, last_cursor, _ = parse_single_query(rdata, query_type)
+ data.extend(pdata)
+ print(f'Retrieving {len(data)} out of {total_count} values...')
+ print('Done.')
+ return data
+
+def parse_single_query(data, query_type):
+ """
+ Parses the data returned by `send_query`
+
+ .. warning::
+
+ Like `send_query`, the logic here depends on the specific structure
+ of the query (e.g. it must be an issue or PR query, and must have a
+ total count).
+ """
+ try:
+ total_count = data['data']['repository'][query_type]['totalCount']
+ data = data['data']['repository'][query_type]['edges']
+ last_cursor = data[-1]['cursor']
+ except KeyError as e:
+ print(data)
+ raise e
+ return data, last_cursor, total_count
+
+
+class GithubGrabber:
+ """
+ Pulls down data via the GitHub APIv.4 given a valid GraphQL query.
+ """
+
+ def __init__(self, query_fname, query_type, repo_owner, repo_name):
+ """
+ Create an object to send/recv queries related to the issue tracker
+ for the given repository via the GitHub API v.4.
+
+ The repository to query against is given by:
+ https://github.com/<repo_owner>/<repo_name>
+
+ Parameters
+ ----------
+ query_fname : str
+ Path to a valid GraphQL query conforming to the GitHub GraphQL
+ schema
+ query_type : {"issues", "pullRequests"}
+ Type of object that is being queried according to the GitHub GraphQL
+ schema. Currently only "issues" and "pullRequests" are supported.
+ repo_owner : str
+ Repository owner.
+ repo_name : str
+ Repository name.
+ """
+ self.query_fname = query_fname
+ self.query_type = query_type # TODO: Parse this directly from query
+ self.repo_owner = repo_owner
+ self.repo_name = repo_name
+ self.raw_data = None
+ self.load_query()
+
+ def load_query(self):
+ self.query = load_query_from_file(
+ self.query_fname, self.repo_owner, self.repo_name
+ )
+
+ def get(self):
+ self.raw_data = get_all_responses(self.query, self.query_type)
+
+#------------------------------------------------------------------------------
+# metrics helpers
+#------------------------------------------------------------------------------
+
+def _to_datetime(date_str: str) -> datetime:
+ return datetime.fromisoformat(date_str.replace('Z', ''))
+
+def _get_issues_features(issues):
+ for issue in issues:
+ issue = issue['node']
+
+ created_at = _to_datetime(issue['createdAt'])
+ time_labeled_or_converted = None
+ time_issue_closed = None
+
+ for event in issue['timelineItems']['edges']:
+ event = event['node']
+
+ if event['__typename'] in {'LabeledEvent', 'ConvertedToDiscussionEvent'}:
+ time_labeled_or_converted = _to_datetime(event['createdAt'])
+
+ if event['__typename'] == 'ClosedEvent':
+ time_issue_closed = _to_datetime(event['createdAt'])
+
+ yield {
+ 'created_at': created_at,
+ 'time_labeled_or_converted': time_labeled_or_converted,
+ 'time_issue_closed': time_issue_closed,
+ 'issue_closed': issue['state'] == 'CLOSED',
+ }
+
+def _get_pr_features(prs):
+ for pr in prs:
+ pr = pr['node']
+
+ created_at = _to_datetime(pr['createdAt'])
+ ready_for_review_at = _to_datetime(pr['createdAt'])
+ time_labeled_or_assigned = None
+ time_merged_or_closed = None
+ time_review = None
+
+ if pr["reviews"]["nodes"]:
+ review = pr["reviews"]["nodes"][0]
+ time_review = _to_datetime(review["createdAt"])
+
+ for event in pr['timelineItems']['edges']:
+ event = event['node']
+
+ if (
+ time_labeled_or_assigned is None
+ and event['__typename'] == 'LabeledEvent'
+ and 'cla:' not in event['label']['name']
+ ):
+ time_labeled_or_assigned = _to_datetime(event['createdAt'])
+
+ if (
+ time_labeled_or_assigned is None
+ and event['__typename'] == 'AssignedEvent'
+ ):
+ time_labeled_or_assigned = _to_datetime(event['createdAt'])
+
+ if event['__typename'] in {'ClosedEvent', 'MergedEvent'}:
+ time_merged_or_closed = _to_datetime(event['createdAt'])
+
+ if event['__typename'] == 'ReadyForReviewEvent':
+ ready_for_review_at = _to_datetime(event['createdAt'])
+
+ yield {
+ 'created_at': created_at,
+ 'ready_for_review_at': ready_for_review_at,
+ 'time_labeled_or_assigned': time_labeled_or_assigned,
+ 'time_merged_or_closed': time_merged_or_closed,
+ 'time_review': time_review,
+ 'pr_closed': pr['state'] != 'OPEN',
+ }
+
+def _start_of_month(date: datetime) -> datetime:
+ return date.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
+
+def _shift_n_months(date: datetime, n: int) -> datetime:
+ month = ((date.month + n - 1) % 12) + 1
+
+ # shift to next year if necessary
+ if date.month > month:
+ date = date.replace(year=date.year + 1)
+
+ date = date.replace(month=month)
+
+ return date
+
+
+def _rolling_window(
+ df: pd.DataFrame,
+ f: Callable[[pd.DataFrame], pd.Series],
+ window_size: int = 6,
+ step: int = 1,
+) -> pd.DataFrame:
+ # start of month of the first issue
+ start: datetime = df.iloc[0]['created_at'].replace(
+ day=1, hour=0, minute=0, second=0, microsecond=0
+ )
+ end = _shift_n_months(start, window_size)
+
+ last_month = _start_of_month(df.iloc[-1]['created_at'])
+ last_month = _shift_n_months(last_month, 1)
+
+ rows: List[pd.Series] = []
+ while end < last_month:
+ row = f(df[(df['created_at'] >= start) & (df['created_at'] < end)])
+ row['period_start'] = start
+ row['period_end'] = end
+ rows.append(row)
+ start = _shift_n_months(start, step)
+ end = _shift_n_months(end, step)
+
+ df = pd.DataFrame(rows)
+ df = df[['period_start', 'period_end'] + list(df.columns[:-2])]
+
+ return df
+
+def _process_prs(df: pd.DataFrame) -> pd.Series:
+ return pd.Series({
+ 'pr_response_time': df['pr_response_time'].dt.days.mean(),
+ 'pr_resolution_time': df['pr_resolution_time'].dt.days.mean(),
+ })
+
+def _process_issues(df: pd.DataFrame) -> pd.Series:
+ return pd.Series({
+ 'issue_response_time': df['issue_response_time'].dt.days.mean(),
+ 'issue_resolution_time': df['issue_resolution_time'].dt.days.mean(),
+ })
+
+#-----------------------------------------------------------------------------
+# main
+#-----------------------------------------------------------------------------
+def main(
+ repo_owner: str = 'google',
+ repo_name: str = 'flax',
+):
+ # Download issue data
+ issues = GithubGrabber(
+ '.github/analytics/issue_activity_since_date.gql',
+ 'issues',
+ repo_owner=repo_owner,
+ repo_name=repo_name,
+ )
+ issues.get()
+
+ df_issues = pd.DataFrame(list(_get_issues_features(issues.raw_data)))
+ df_issues['issue_response_time'] = df_issues['time_labeled_or_converted'] - df_issues['created_at']
+ df_issues['issue_resolution_time'] = df_issues['time_issue_closed'] - df_issues['created_at']
+
+ df_issues = _rolling_window(df_issues, _process_issues)
+
+ prs = GithubGrabber(
+ '.github/analytics/pr_data_query.gql',
+ 'pullRequests',
+ repo_owner=repo_owner,
+ repo_name=repo_name,
+ )
+ prs.get()
+
+ df_prs = pd.DataFrame(list(_get_pr_features(prs.raw_data)))
+ time_response = df_prs[['time_labeled_or_assigned', 'time_review']].min(axis=1)
+ df_prs['pr_response_time'] = time_response - df_prs['ready_for_review_at']
+ df_prs['pr_resolution_time'] = df_prs['time_merged_or_closed'] - df_prs['ready_for_review_at']
+
+ df_prs = _rolling_window(df_prs, _process_prs)
+
+ # plot for isssue_response_time
+ plt.figure()
+ plt.plot(df_issues['period_end'], df_issues['issue_response_time'])
+ plt.xlabel('Date')
+ plt.ylabel('Issue Response Time (days)')
+ plt.title('Issue Response Time')
+ plt.gca().xaxis.set_major_locator(plt.MaxNLocator(5))
+ plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%b %Y'))
+ plt.ylim(0)
+
+ # plot for issue_resolution_time
+ plt.figure()
+ plt.plot(df_issues['period_end'], df_issues['issue_resolution_time'])
+ plt.xlabel('Date')
+ plt.ylabel('Issue Resolution Time (days)')
+ plt.title('Issue Resolution Time')
+ plt.gca().xaxis.set_major_locator(plt.MaxNLocator(5))
+ plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%b %Y'))
+ plt.ylim(0)
+
+ # plot for pr_response_time
+ plt.figure()
+ plt.plot(df_prs['period_end'], df_prs['pr_response_time'])
+ plt.xlabel('Date')
+ plt.ylabel('Pull Request Response Time (days)')
+ plt.title('Pull Request Response Time')
+ plt.gca().xaxis.set_major_locator(plt.MaxNLocator(5))
+ plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%b %Y'))
+ plt.ylim(0)
+
+ # plot for pr_resolution_time
+ plt.figure()
+ plt.plot(df_prs['period_end'], df_prs['pr_resolution_time'])
+ plt.xlabel('Date')
+ plt.ylabel('Pull Request Resolution Time (days)')
+ plt.title('Pull Request Resolution Time')
+ plt.gca().xaxis.set_major_locator(plt.MaxNLocator(5))
+ plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%b %Y'))
+ plt.ylim(0)
+
+ # show plots
+ plt.show()
+
+if __name__ == '__main__':
+ main()
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -51,6 +51,7 @@
"tensorflow_datasets",
"tensorflow",
"torch",
+ "pandas", # get_repo_metrics script
]
__version__ = None
| Track Github metrics over time
We currently have little insight into how well we are maintaining our Github page.
It would be useful to have some way of tracking some metrics over time, to see whether we are improving / getting worse.
Some things we could track:
* Issue resolution time (how long does it take before we close an issue) (e.g., as in isitmaintained.com)
* Number of open issues (isitmaintained.com)
* Issue response time (how long does it take before we reply to an issue)
As a motivation: when querying isitmaintained.com in April 2022, we get the following scores for "issue resolution time":
* Flax: 21d
* JAX: 4d
* Tensorflow: 8d
* Pytorch: 6d
Clearly we can improve here as Flax!
| Some suggestions from @cgarciae:
* We could write a script that gets statistics per month using the Github API.
* It could save the results in a CSV.
* We could then run a Github action as a cronjob and retrieve these numbers automatically every week/month.
Assigning this to @cgarciae since he would like to look into this and ask some other folks who have experience with this.
Someone from the Numpy team recommended us to look at this script:
https://github.com/scientific-python/devstats-data/blob/4c022961abc4ca6061f8719d9c3387e98734b90c/query.py
It feeds this page where they have some stats about various packages:
https://devstats.scientific-python.org/
Adapting that script I could get the following info.
**Issues**
```json
[
{
"cursor": "Y3Vyc29yOnYyOpHOIPZ9Dw==",
"node": {
"number": 5,
"title": "Flattening parameters",
"createdAt": "2020-01-21T17:31:37Z",
"state": "CLOSED",
"closedAt": "2020-03-27T07:47:35Z",
"updatedAt": "2020-03-27T07:47:35Z",
"url": "https://github.com/google/flax/issues/5",
"labels": {
"edges": []
},
"timelineItems": {
"totalCount": 4,
"edges": [
{
"node": {
"__typename": "IssueComment",
"author": {
"login": "avital"
},
"createdAt": "2020-01-22T09:42:42Z"
}
},
{
"node": {
"__typename": "IssueComment",
"author": {
"login": "avital"
},
"createdAt": "2020-03-06T09:16:43Z"
}
},
{
"node": {
"__typename": "IssueComment",
"author": {
"login": "marcvanzee"
},
"createdAt": "2020-03-27T07:47:35Z"
}
},
{
"node": {
"__typename": "ClosedEvent",
"actor": {
"login": "marcvanzee"
}
}
}
]
}
}
},
...
]
```
**PRs**
```json
[
{
"cursor": "Y3Vyc29yOnYyOpHOFYqJWQ==",
"node": {
"number": 1,
"state": "CLOSED",
"title": "Project directory restructure.",
"createdAt": "2020-01-10T11:11:17Z",
"baseRefName": "prerelease",
"mergeable": "CONFLICTING",
"author": {
"login": "Britefury"
},
"authorAssociation": "CONTRIBUTOR",
"mergedBy": null,
"mergedAt": null,
"reviews": {
"totalCount": 0
},
"participants": {
"totalCount": 4
}
}
},
...
}
```
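(As a rough illustration of turning such records into metrics; this helper is hypothetical and only relies on the `createdAt`/`closedAt` fields shown above:)
```python
from datetime import datetime

def _parse(ts: str) -> datetime:
  # GitHub timestamps look like "2020-01-21T17:31:37Z"
  return datetime.fromisoformat(ts.replace("Z", ""))

def resolution_days(issue_node: dict):
  # Days from creation to close; None for issues that are still open.
  created = _parse(issue_node["createdAt"])
  closed = issue_node.get("closedAt")
  return (_parse(closed) - created).days if closed else None
```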
This is a very good start. We need to properly define what metrics we want to report. I'll create a couple of suggestions next.
## Metrics
During the last N (6?) months:
* `issue-response-time`: Time between creation and the first label assignment or conversion to a discussion. This means that if a regular user responds it doesn't count. (Can users select labels?)
* `issue-resolution-time`: Time between creation and close. Not sure what happens to issues that are converted to discussions; maybe just ignore those and have a separate metric for discussions.
* `pr-response-time`: Time between creation and a reviewer being assigned.
* `discussion-response-time`: Time between creation and first comment.
* `discussion-resolution-time`: Time between creation and marked answered. | 2022-05-11T15:53:35 |
|
google/flax | 2,136 | google__flax-2136 | [
"2135"
] | ef6bf4054c30271a58bfabb58f3d0049ef5d851a | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -26,7 +26,7 @@
install_requires = [
"numpy>=1.12",
- "jax>=0.3",
+ "jax>=0.3.2",
"matplotlib", # only needed for tensorboard export
"msgpack",
"optax",
| Flax actually requires jax 0.3.2
https://github.com/google/flax/blob/ef6bf4054c30271a58bfabb58f3d0049ef5d851a/flax/linen/initializers.py#L19
the constant initialiser was added in this commit https://github.com/google/jax/commit/86e8928e709ac07cc51c10e815db6284507c320e, which was first included in jax 0.3.2.
This came up in NetKet's automated oldest-version-dependencies testing.
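For context, a minimal way to see the incompatibility (an illustrative sketch; the symbol in question is the `constant` initializer referenced above):
```python
# Fails with ImportError on jax < 0.3.2 and succeeds on jax >= 0.3.2,
# which is why flax's jax pin needs to be raised.
from jax.nn.initializers import constant
```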
| 2022-05-23T14:30:02 |
||
google/flax | 2,171 | google__flax-2171 | [
"2153"
] | 0a5a187e63f9e5287444b1686494eb3875c38743 | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -84,8 +84,8 @@
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
-html_theme = 'sphinx_rtd_theme'
-html_style = 'css/flax_theme.css'
+html_theme = 'sphinx_book_theme'
+# html_style = 'css/flax_theme.css'
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
| New Sphinx Theme
The idea is to get a new and shiny theme that makes Flax's RTD page stand out a little more.
I've gathered a couple of options:
### [JAX's Theme](https://jax.readthedocs.io/en/latest/)

### [Pydata Sphinx Theme](https://pydata-sphinx-theme.readthedocs.io/en/latest/user_guide/index.html)

### [Furo](https://pradyunsg.me/furo/quickstart/)

| +1 to JAX
+1 to Furo
@marcvanzee Here are some samples from JAX's theme (sphinx_book_theme) and furo.
My 2 cents: I like furo a little better but they both look good.
## sphinx_book_theme
#### landing page

#### module

## furo
#### landing page

#### module

| 2022-06-03T15:29:02 |
|
google/flax | 2,204 | google__flax-2204 | [
"2202"
] | 2e0428835655dc4f898ad119f98949e104d6fa52 | diff --git a/docs/_ext/codediff.py b/docs/_ext/codediff.py
--- a/docs/_ext/codediff.py
+++ b/docs/_ext/codediff.py
@@ -24,9 +24,10 @@
---
<CODE_BLOCK_RIGHT>
-In order to highlight a line of code, prepend it with "#!".
+In order to highlight a line of code, append "#!" to it.
"""
import itertools
+from typing import List, Tuple
from docutils import nodes
from docutils.parsers.rst import directives
@@ -35,10 +36,14 @@
import sphinx
from sphinx.util.docutils import SphinxDirective
+MISSING = object()
class CodeDiffParser:
- def parse(self, lines, title_left='Base', title_right='Diff', code_sep='---'):
+ def parse(
+ self, lines, title_left='Base', title_right='Diff', code_sep='---', sync=MISSING):
+ sync = sync is not MISSING
+
if code_sep not in lines:
raise ValueError('Code separator not found! Code snippets should be '
f'separated by {code_sep}.')
@@ -47,19 +52,10 @@ def parse(self, lines, title_left='Base', title_right='Diff', code_sep='---'):
test_code = lines[idx+1:]
code_right = self._code_block(test_code)
- self.max_left = max(len(x) for x in code_left + [title_left])
- self.max_right = max(len(x) for x in code_right + [title_right])
-
- output = [
- self._hline(),
- self._table_row(title_left, title_right),
- self._hline(),
- ]
+ output = self._tabs(
+ (title_left, code_left), (title_right, code_right), sync=sync)
- for l, r in itertools.zip_longest(code_left, code_right, fillvalue=''):
- output += [self._table_row(l, r)]
-
- return output + [self._hline()], test_code
+ return output, test_code
def _code_block(self, lines):
"""Creates a codeblock."""
@@ -77,17 +73,20 @@ def _code_block(self, lines):
# Indent code and add empty line so the code is picked up by the directive.
return directive + [''] + list(map(lambda x: ' ' + x, code))
- def _hline(self):
- return '+' + '-'*(self.max_left+2) + '+' + '-'*(self.max_right+2) + '+'
-
- def _rfill(self, text, max_len):
- return text + ' ' * (max_len-len(text))
+ def _tabs(self, *contents: Tuple[str, List[str]], sync):
+ output = ['.. tab-set::'] + [' ']
+
+ for title, content in contents:
+ output += [f' .. tab-item:: {title}']
+
+ if sync:
+ key = title.strip()
+ output += [f' :sync: {key}']
- def _table_row(self, left, right):
- text_left = self._rfill(left, self.max_left)
- text_right = self._rfill(right, self.max_right)
- return '| ' + text_left + ' | ' + text_right + ' |'
+ output += [' ']
+ output += [' ' + line for line in content]
+ return output
class CodeDiffDirective(SphinxDirective):
has_content = True
@@ -95,6 +94,7 @@ class CodeDiffDirective(SphinxDirective):
'title_left': directives.unchanged,
'title_right': directives.unchanged,
'code_sep': directives.unchanged,
+ 'sync': directives.flag,
}
def run(self):
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -60,7 +60,7 @@
'sphinx.ext.viewcode',
'myst_nb',
'codediff',
- 'sphinx_markdown_tables'
+ 'sphinx_design',
]
# Add any paths that contain templates here, relative to this directory.
@@ -91,7 +91,7 @@
# a list of builtin themes.
#
html_theme = 'sphinx_book_theme'
-# html_style = 'css/flax_theme.css'
+html_css_files = ["css/flax_theme.css"]
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
| codediff section not respecting html container
In the new `sphinx_book_theme` our custom `codediff` directive is rendering its content outside the container, overflowing until the end of the page. As shown in this screenshot, it appears to be rendered underneath the `contents` section because it's not respecting its section boundaries:

| 2022-06-17T01:43:45 |
||
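To make the fix above more concrete, here is a standalone re-implementation of the new `_tabs` helper (illustration only, not the actual Sphinx extension), showing the `sphinx_design` tab-set RST it emits in place of the old hand-drawn table:

```python
from typing import List, Tuple

def render_tabs(*contents: Tuple[str, List[str]], sync: bool = True) -> str:
    """Mirrors the `_tabs` method added in this PR; a sketch, not the real directive."""
    output = ['.. tab-set::', '  ']
    for title, content in contents:
        output += [f'  .. tab-item:: {title}']
        if sync:
            # Syncing keys keeps the same tab selected across all codediff blocks on a page.
            output += [f'    :sync: {title.strip()}']
        output += ['  ']
        output += ['    ' + line for line in content]
    return '\n'.join(output)

print(render_tabs(('Base', ['.. code-block:: python', '', '  x = 1']),
                  ('Diff', ['.. code-block:: python', '', '  x = 2'])))
```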
google/flax | 2,296 | google__flax-2296 | [
"2022"
] | 9eddf8666b61f2a647531f3501174f1c802f9e72 | diff --git a/flax/core/scope.py b/flax/core/scope.py
--- a/flax/core/scope.py
+++ b/flax/core/scope.py
@@ -693,7 +693,15 @@ def put_variable(self, col: str, name: str, value: Any):
if not self.is_mutable_collection(col):
raise errors.ModifyScopeVariableError(col, name, self.path_text)
variables = self._mutable_collection(col)
- variables[name] = value
+ # Make sure reference sharing of child variable dictionaries isn't broken
+ def put(target, key, val):
+ if key in target and isinstance(target[key], dict) and isinstance(val, Mapping):
+ for k, v in val.items():
+ put(target[key], k, v)
+ else:
+ target[key] = val
+
+ put(variables, name, value)
def variable(self, col: str, name: str, # pylint: disable=keyword-arg-before-vararg
init_fn: Optional[Callable[..., T]] = None,
| diff --git a/tests/core/core_lift_test.py b/tests/core/core_lift_test.py
--- a/tests/core/core_lift_test.py
+++ b/tests/core/core_lift_test.py
@@ -190,8 +190,24 @@ def c_fn(scope, x):
vars = vars.copy(updates)
self.assertEqual(vars['state'].unfreeze(), {'a_count': 1, 'b_count': 1, 'c_count': 1})
np.testing.assert_allclose(y1, y3)
-
-
+
+ def test_subscope_var_aliasing(self):
+ def test(scope, x):
+ subscope = scope.push(name="a")
+ subscope.put_variable('state', 'x', 0.)
+ _ = lift.while_loop(
+ lambda scope, x: False,
+ lambda scope, x: x,
+ scope,
+ jnp.array(0, jnp.int32),
+ carry_variables=['state'],
+ )
+ subscope.put_variable('state', 'x', 1.)
+ val0 = scope.variables()['state']['a']['x']
+ val1 = subscope.variables()['state']['x']
+ self.assertEqual(val0, val1)
+ return x
+ init(test)( random.PRNGKey(0), 1.)
if __name__ == '__main__':
diff --git a/tests/core/core_scope_test.py b/tests/core/core_scope_test.py
--- a/tests/core/core_scope_test.py
+++ b/tests/core/core_scope_test.py
@@ -209,6 +209,13 @@ def test_variable_no_init(self):
self.assertEqual(abc.value, 1)
with self.assertRaises(errors.ScopeVariableNotFoundError):
root.variable('state', 'test')
+
+ def test_variable_alias(self):
+ scope = Scope({}, mutable='state')
+ subscope = scope.push(name="a")
+ subscope.put_variable('state', 'x', 0.)
+ scope.put_variable('state', 'a', {'x': jnp.array(1., jnp.float32)})
+ self.assertEqual(scope.variables()['state']['a']['x'], subscope.variables()['state']['x'])
if __name__ == '__main__':
| Updating subtree with `put_variable` doesn't update sub-scopes' references.
There are rare cases where we want to manually mess with the tree of variables at some point in a model.
If we try to use `get_variable` and `put_variable` to directly modify the variables in a collection, this works _locally_ for variables within a module; however, if we try to mess with a sub-module's variables from a parent module, the mutation applied to the outer scope doesn't propagate into the sub-scope's references.
This can be illustrated by the example:
```python
from flax import linen as nn
from jax import random, numpy as jnp
import jax  # needed for jax.tree_map below
class A(nn.Module):
def setup(self):
self.foo = self.param('foo', nn.initializers.zeros, x.shape)
def dummy(self):
return None
def __call__(self, x):
print(self.foo) # == [0.] !!
return x + self.foo
class B(nn.Module):
@nn.compact
def __call__(self, x):
a = A(name="a")
# trigger setup
a.dummy()
# fetch variables under 'a' in params collection
vs = self.get_variable('params', 'a')
# update this subtree
new_vs = jax.tree_map(lambda x: jnp.ones_like(x), vs)
self.put_variable('params', 'a', new_vs)
# now run call and return
return a(x)
k = random.PRNGKey(0)
x = jnp.zeros((1,))
y, vs = B().init_with_output(k, x)
y # DeviceArray([0.], dtype=float32) # <-- "wrong"
vs # FrozenDict({'params': {'a': {'foo': DeviceArray([1.], dtype=float32),}}})
```
| minimal repro
```python
import flax
import jax.numpy as jnp
from jax import random

def test(scope):
subscope = scope.push(name="a")
subscope.put_variable('cache', 'x', jnp.array(0.0, jnp.float32))
# doesn't update subscope._variables but overwrites ref, leaving a "dangling" subscope
scope.put_variable('cache', 'a', {'x': jnp.array(1.0, jnp.float32)})
assert scope.variables()['cache']['a']['x'] == subscope.variables()['cache']['x']
k = random.PRNGKey(0)
_, vs = flax.core.init(test)(k)
``` | 2022-07-15T13:43:19 |
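A plain-Python sketch of the merge rule the patch above introduces in `Scope.put_variable` (illustrative only; the real method also performs mutability checks before mutating):

```python
from collections.abc import Mapping

def put(target: dict, key, val):
    # Recursively merge mappings into an existing dict instead of replacing it,
    # so any other holder of a reference to the nested dict sees the update.
    if key in target and isinstance(target[key], dict) and isinstance(val, Mapping):
        for k, v in val.items():
            put(target[key], k, v)
    else:
        target[key] = val

outer = {'a': {'x': 0.0}}
child_view = outer['a']        # stands in for the sub-scope's variable dict
put(outer, 'a', {'x': 1.0})
print(child_view['x'])         # 1.0 -- the shared reference is kept intact
```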
google/flax | 2,316 | google__flax-2316 | [
"2274"
] | f75454111ce2a12eee196d31fa64ee37e2be9509 | diff --git a/flax/linen/module.py b/flax/linen/module.py
--- a/flax/linen/module.py
+++ b/flax/linen/module.py
@@ -18,17 +18,20 @@
import enum
import functools
import inspect
+import re
import threading
import typing
import weakref
-from typing import (Any, Callable, Dict, Generic, Iterable, List, Optional,
- Sequence, Set, Tuple, Type, TypeVar, Union, overload)
+from typing import (Any, Callable, Dict, Iterable, List, NamedTuple, Mapping,
+ Optional, Set, Tuple, Type, TypeVar, Union, overload)
import jax
import numpy as np
+import jax.numpy as jnp
from typing_extensions import \
dataclass_transform # pytype: disable=not-supported-yet
+import flax
from flax import (config, core, errors, serialization, traceback_util,
traverse_util)
from flax.core import Scope
@@ -37,8 +40,6 @@
CollectionFilter, DenyList, FrozenVariableDict, Variable, VariableDict,
union_filters)
from flax.ids import uuid
-from flax.linen import summary
-
traceback_util.register_exclusion(__file__)
@@ -61,6 +62,16 @@
# pylint: disable=protected-access,attribute-defined-outside-init
+def _get_value_representation(x: Any) -> 'flax.linen.summary._ValueRepresentation':
+ from flax.linen import summary
+
+ if isinstance(x, (int, float, bool, type(None))) or (
+ isinstance(x, np.ndarray) and np.isscalar(x)):
+ return summary._ObjectRepresentation(x)
+ try:
+ return summary._ArrayRepresentation(jnp.shape(x), jnp.result_type(x))
+ except:
+ return summary._ObjectRepresentation(x)
def _indent(x: str, num_spaces: int):
indent_str = ' ' * num_spaces
@@ -104,6 +115,46 @@ def _module_repr(module: 'Module', num_spaces: int = 4):
else:
return f'{cls_name}()'
+#
+# -----------------------------------------------------------------------------
+
+_find_non_lifted_module = re.compile(r'.*\((.*)\)')
+
+def _fix_path_part(part: str):
+ """Fixes a path part by removing transformation name and parenthesis sometimes
+ inserted by lifted transformations"""
+ match = _find_non_lifted_module.match(part)
+ if match:
+ return match.group(1)
+ return part
+
[email protected]
+class _CallInfo:
+ index: int
+ path: Tuple[str, ...]
+ module_type: Type['Module']
+ method: str
+ args: Tuple[Any, ...]
+ kwargs: Dict[str, Any]
+ outputs: Any
+
[email protected]
+class _CallInfoContext(threading.local):
+ index: int
+ calls: List[_CallInfo]
+
+ def get_call_index(self, module: 'Module') -> int:
+ index = self.index
+ self.index += 1
+ return index
+
[email protected]
+def _tabulate_context():
+ _context.call_info_stack.append(_CallInfoContext(0, []))
+ try:
+ yield
+ finally:
+ _context.call_info_stack.pop()
# Track parent relationship across Modules.
# -----------------------------------------------------------------------------
@@ -128,6 +179,13 @@ def capture_stack(self):
self._thread_data.capture_stack = []
return self._thread_data.capture_stack
+ @property
+ def call_info_stack(self) -> List[_CallInfoContext]:
+ """Keeps track of the active call_info_context."""
+ if not hasattr(self._thread_data, 'call_info_stack'):
+ self._thread_data.call_info_stack = []
+ return self._thread_data.call_info_stack
+
# The global context
_context = _DynamicContext()
@@ -638,6 +696,7 @@ def _call_wrapped_method(self, fun, args, kwargs):
is_compact_method = hasattr(fun, 'compact')
fun_name = getattr(fun, '__name__', 'unnamed_function')
is_setup_method = fun_name == 'setup'
+ add_call_info = not is_setup_method and len(_context.call_info_stack) > 0
# We lazily call setup() only when needed.
if is_setup_method:
is_recurrent = self._state.in_setup
@@ -652,15 +711,27 @@ def _call_wrapped_method(self, fun, args, kwargs):
self._state.in_compact_method = True
_context.module_stack.append(self)
try:
+ # get call info
+ if add_call_info:
+ call_index = _context.call_info_stack[-1].get_call_index(self)
+ scope_path = jax.tree_util.tree_map(_fix_path_part, self.scope.path)
+
+ # call method
if _use_named_call:
with jax.named_scope(_derive_profiling_name(self, fun)):
y = fun(self, *args, **kwargs)
else:
y = fun(self, *args, **kwargs)
+
if _context.capture_stack:
filter_fn = _context.capture_stack[-1]
if filter_fn and filter_fn(self, fun_name):
self.sow('intermediates', fun_name, y)
+ if add_call_info:
+ _args, _kwargs, _y = jax.tree_util.tree_map(
+ _get_value_representation, (args, kwargs, y), is_leaf=lambda x: x is None)
+ _context.call_info_stack[-1].calls.append(
+ _CallInfo(call_index, scope_path, type(self), fun.__name__, _args, _kwargs, _y))
return y
finally:
_context.module_stack.pop()
@@ -1410,17 +1481,17 @@ def tabulate(
self,
rngs: Union[PRNGKey, RNGSequences],
*args,
- method: Optional[Callable[..., Any]] = None,
- mutable: CollectionFilter = True,
depth: Optional[int] = None,
- exclude_methods: Sequence[str] = (),
+ show_repeated: bool = False,
+ mutable: CollectionFilter = True,
+ console_kwargs: Optional[Mapping[str, Any]] = None,
**kwargs) -> str:
"""Creates a summary of the Module represented as a table.
- This method has the same signature as `init`, but instead of returning
- the variables, it returns the string summarizing the Module in a table.
- `tabulate` uses `jax.eval_shape` to run the forward computation without
- consuming any FLOPs or allocating memory.
+ This method has the same signature and internally calls `Module.init`,
+ but instead of returning the variables, it returns the string summarizing
+ the Module in a table. `tabulate` uses `jax.eval_shape` to run the forward
+ computation without consuming any FLOPs or allocating memory.
Example::
@@ -1441,61 +1512,60 @@ def __call__(self, x):
This gives the following output::
- Foo Summary
- ┏━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┓
- ┃ path ┃ outputs ┃ params ┃
- ┡━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━┩
- │ Inputs │ float32[16,9] │ │
- ├─────────┼───────────────┼──────────────────────┤
- │ Dense_0 │ float32[16,4] │ bias: float32[4] │
- │ │ │ kernel: float32[9,4] │
- │ │ │ │
- │ │ │ 40 (160 B) │
- ├─────────┼───────────────┼──────────────────────┤
- │ Dense_1 │ float32[16,2] │ bias: float32[2] │
- │ │ │ kernel: float32[4,2] │
- │ │ │ │
- │ │ │ 10 (40 B) │
- ├─────────┼───────────────┼──────────────────────┤
- │ Foo │ float32[16,2] │ │
- ├─────────┼───────────────┼──────────────────────┤
- │ │ Total │ 50 (200 B) │
- └─────────┴───────────────┴──────────────────────┘
-
- Total Parameters: 50 (200 B)
+ Foo Summary
+ ┏━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┓
+ ┃ path ┃ module ┃ inputs ┃ outputs ┃ params ┃
+ ┡━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━┩
+ │ │ Foo │ float32[16,9] │ float32[16,2] │ │
+ ├─────────┼────────┼───────────────┼───────────────┼──────────────────────┤
+ │ Dense_0 │ Dense │ float32[16,9] │ float32[16,4] │ bias: float32[4] │
+ │ │ │ │ │ kernel: float32[9,4] │
+ │ │ │ │ │ │
+ │ │ │ │ │ 40 (160 B) │
+ ├─────────┼────────┼───────────────┼───────────────┼──────────────────────┤
+ │ Dense_1 │ Dense │ float32[16,4] │ float32[16,2] │ bias: float32[2] │
+ │ │ │ │ │ kernel: float32[4,2] │
+ │ │ │ │ │ │
+ │ │ │ │ │ 10 (40 B) │
+ ├─────────┼────────┼───────────────┼───────────────┼──────────────────────┤
+ │ │ │ │ Total │ 50 (200 B) │
+ └─────────┴────────┴───────────────┴───────────────┴──────────────────────┘
+
+ Total Parameters: 50 (200 B)
**Note**: rows order in the table does not represent execution order,
instead it aligns with the order of keys in `variables` which are sorted
alphabetically.
Args:
- rngs: The rngs for the variable collections.
+ rngs: The rngs for the variable collections as passed to `Module.init`.
*args: The arguments to the forward computation.
- method: An optional method. If provided, applies this method. If not
- provided, applies the ``__call__`` method.
- mutable: Can be bool, str, or list. Specifies which collections should be
- treated as mutable: ``bool``: all/no collections are mutable.
- ``str``: The name of a single mutable collection. ``list``: A
- list of names of mutable collections. By default all collections
- except 'intermediates' are mutable.
depth: controls how many submodule deep the summary can go. By default its
`None` which means no limit. If a submodule is not shown because of the
- depth limit, its parameter count and bytes will be added to the row of
- its first shown ancestor such that the sum of all rows always adds up to
- the total number of parameters of the Module.
- exclude_methods: A sequence of strings that specifies which methods should
- be ignored. In case a module calls a helper method from its main method,
- use this argument to exclude the helper method from the summary to avoid
- ambiguity.
+ depth limit, its parameter count and bytes will be added to the row of its
+ first shown ancestor such that the sum of all rows always adds up to the
+ total number of parameters of the Module.
+ show_repeated: If `True`, repeated calls to the same module will be shown
+ in the table, otherwise only the first call will be shown. Default is
+ `False`.
+ mutable: Can be bool, str, or list. Specifies which collections should be
+ treated as mutable: ``bool``: all/no collections are mutable. ``str``: The
+ name of a single mutable collection. ``list``: A list of names of mutable
+ collections. By default all collections except 'intermediates' are
+ mutable.
+ console_kwargs: An optional dictionary with additional keyword arguments that
+ are passed to `rich.console.Console` when rendering the table. Default arguments
+ are `{'force_terminal': True, 'force_jupyter': False}`.
**kwargs: keyword arguments to pass to the forward computation.
Returns:
A string summarizing the Module.
"""
-
- tabulate_fn = summary.tabulate(self, rngs, method=method,
- mutable=mutable, depth=depth,
- exclude_methods=exclude_methods)
+ from flax.linen import summary
+
+ tabulate_fn = summary.tabulate(self, rngs, depth=depth,
+ show_repeated=show_repeated, mutable=mutable,
+ console_kwargs=console_kwargs)
return tabulate_fn(*args, **kwargs)
diff --git a/flax/linen/summary.py b/flax/linen/summary.py
--- a/flax/linen/summary.py
+++ b/flax/linen/summary.py
@@ -13,13 +13,15 @@
# limitations under the License.
"""Flax Module summary library."""
+from abc import ABC, abstractmethod
import dataclasses
import io
-from typing import Any, Callable, Dict, Iterable, List, Mapping, Optional, Sequence, Set, Tuple, Union
+from typing import Any, Callable, Dict, Iterable, List, Mapping, Optional, Sequence, Set, Tuple, Type, Union
-import flax
-from flax.core.scope import CollectionFilter, DenyList
+import flax.linen.module as module_lib
+from flax.core.scope import CollectionFilter, FrozenVariableDict, MutableVariableDict
import jax
+import jax.numpy as jnp
import rich.console
import rich.table
import rich.text
@@ -29,6 +31,42 @@
RNGSequences = Dict[str, PRNGKey]
Array = Any # pylint: disable=invalid-name
+class _ValueRepresentation(ABC):
+ """A class that represents a value in the summary table."""
+
+ @abstractmethod
+ def render(self) -> str:
+ ...
+
+ @abstractmethod
+ def value(self) -> Any:
+ ...
+
[email protected]
+class _ArrayRepresentation(_ValueRepresentation):
+ shape: Tuple[int, ...]
+ dtype: Any
+
+ @classmethod
+ def render_array(cls, x) -> str:
+ return cls(jnp.shape(x), jnp.result_type(x)).render()
+
+ def render(self):
+ shape_repr = ','.join(str(x) for x in self.shape)
+ return f'[dim]{self.dtype}[/dim][{shape_repr}]'
+
+ def value(self):
+ return self
+
[email protected]
+class _ObjectRepresentation(_ValueRepresentation):
+ obj: Any
+
+ def render(self):
+ return repr(self.obj)
+
+ def value(self):
+ return self.obj
@dataclasses.dataclass
class Row:
@@ -46,12 +84,18 @@ class Row:
from submodules depending on the depth of the Module in question.
"""
path: Tuple[str, ...]
+ module_type: Type[module_lib.Module]
+ method: str
+ inputs: Any
outputs: Any
- module_variables: Dict[str, Dict[str, Array]]
- counted_variables: Dict[str, Dict[str, Array]]
+ module_variables: Dict[str, Dict[str, Any]]
+ counted_variables: Dict[str, Dict[str, Any]]
+
+ def __post_init__(self):
+ self.inputs = _normalize_structure(self.inputs)
+ self.outputs = _normalize_structure(self.outputs)
- def size_and_bytes(self,
- collections: Iterable[str]) -> Dict[str, Tuple[int, int]]:
+ def size_and_bytes(self, collections: Iterable[str]) -> Dict[str, Tuple[int, int]]:
return {
col: _size_and_bytes(self.counted_variables[col])
if col in self.counted_variables else (0, 0) for col in collections
@@ -68,7 +112,7 @@ class Table(List[Row]):
* `collections`: a list containing the parameter collections (e.g. 'params', 'batch_stats', etc)
"""
- def __init__(self, module: 'flax.linen.Module', collections: List[str],
+ def __init__(self, module: module_lib.Module, collections: Sequence[str],
rows: Iterable[Row]):
super().__init__(rows)
self.module = module
@@ -76,22 +120,21 @@ def __init__(self, module: 'flax.linen.Module', collections: List[str],
def tabulate(
- module: 'flax.linen.Module',
- rngs: Union[PRNGKey, RNGSequences],
- method: Optional[Callable[..., Any]] = None,
- mutable: CollectionFilter = True,
- depth: Optional[int] = None,
- exclude_methods: Sequence[str] = (),
+ module: module_lib.Module,
+ rngs: Union[PRNGKey, RNGSequences],
+ depth: Optional[int] = None,
+ show_repeated: bool = False,
+ mutable: CollectionFilter = True,
+ console_kwargs: Optional[Mapping[str, Any]] = None,
+ **kwargs,
) -> Callable[..., str]:
"""Returns a function that creates a summary of the Module represented as a table.
- This function accepts most of the same arguments as `Module.init`, except that
- it returns a function of the form `(*args, **kwargs) -> str` where `*args` and
- `**kwargs`
- are passed to `method` (e.g. `__call__`) during the forward pass.
+ This function accepts most of the same arguments and internally calls `Module.init`,
+ except that it returns a function of the form `(*args, **kwargs) -> str` where `*args`
+ and `**kwargs` are passed to `method` (e.g. `__call__`) during the forward pass.
- `tabulate` uses `jax.eval_shape` under the hood to run the forward computation
- without
+ `tabulate` uses `jax.eval_shape` under the hood to run the forward computation without
consuming any FLOPs or allocating memory.
Example::
@@ -101,10 +144,10 @@ def tabulate(
import flax.linen as nn
class Foo(nn.Module):
- @nn.compact
- def __call__(self, x):
- h = nn.Dense(4)(x)
- return nn.Dense(2)(h)
+ @nn.compact
+ def __call__(self, x):
+ h = nn.Dense(4)(x)
+ return nn.Dense(2)(h)
x = jnp.ones((16, 9))
tabulate_fn = nn.tabulate(Foo(), jax.random.PRNGKey(0))
@@ -114,28 +157,27 @@ def __call__(self, x):
This gives the following output::
- Foo Summary
- ┏━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┓
- ┃ path ┃ outputs ┃ params ┃
- ┡━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━┩
- │ Inputs │ float32[16,9] │ │
- ├─────────┼───────────────┼──────────────────────┤
- │ Dense_0 │ float32[16,4] │ bias: float32[4] │
- │ │ │ kernel: float32[9,4] │
- │ │ │ │
- │ │ │ 40 (160 B) │
- ├─────────┼───────────────┼──────────────────────┤
- │ Dense_1 │ float32[16,2] │ bias: float32[2] │
- │ │ │ kernel: float32[4,2] │
- │ │ │ │
- │ │ │ 10 (40 B) │
- ├─────────┼───────────────┼──────────────────────┤
- │ Foo │ float32[16,2] │ │
- ├─────────┼───────────────┼──────────────────────┤
- │ │ Total │ 50 (200 B) │
- └─────────┴───────────────┴──────────────────────┘
-
- Total Parameters: 50 (200 B)
+ Foo Summary
+ ┏━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┓
+ ┃ path ┃ module ┃ inputs ┃ outputs ┃ params ┃
+ ┡━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━┩
+ │ │ Foo │ float32[16,9] │ float32[16,2] │ │
+ ├─────────┼────────┼───────────────┼───────────────┼──────────────────────┤
+ │ Dense_0 │ Dense │ float32[16,9] │ float32[16,4] │ bias: float32[4] │
+ │ │ │ │ │ kernel: float32[9,4] │
+ │ │ │ │ │ │
+ │ │ │ │ │ 40 (160 B) │
+ ├─────────┼────────┼───────────────┼───────────────┼──────────────────────┤
+ │ Dense_1 │ Dense │ float32[16,4] │ float32[16,2] │ bias: float32[2] │
+ │ │ │ │ │ kernel: float32[4,2] │
+ │ │ │ │ │ │
+ │ │ │ │ │ 10 (40 B) │
+ ├─────────┼────────┼───────────────┼───────────────┼──────────────────────┤
+ │ │ │ │ Total │ 50 (200 B) │
+ └─────────┴────────┴───────────────┴───────────────┴──────────────────────┘
+
+ Total Parameters: 50 (200 B)
+
**Note**: rows order in the table does not represent execution order,
instead it aligns with the order of keys in `variables` which are sorted
@@ -143,22 +185,24 @@ def __call__(self, x):
Args:
module: The module to tabulate.
- method: An optional method. If provided, applies this method. If not
- provided, applies the ``__call__`` method.
- mutable: Can be bool, str, or list. Specifies which collections should be
- treated as mutable: ``bool``: all/no collections are mutable. ``str``: The
- name of a single mutable collection. ``list``: A list of names of mutable
- collections. By default all collections except 'intermediates' are
- mutable.
+ rngs: The rngs for the variable collections as passed to `Module.init`.
depth: controls how many submodule deep the summary can go. By default its
`None` which means no limit. If a submodule is not shown because of the
depth limit, its parameter count and bytes will be added to the row of its
first shown ancestor such that the sum of all rows always adds up to the
total number of parameters of the Module.
- exclude_methods: A sequence of strings that specifies which methods should
- be ignored. In case a module calls a helper method from its main method,
- use this argument to exclude the helper method from the summary to avoid
- ambiguity.
+ mutable: Can be bool, str, or list. Specifies which collections should be
+ treated as mutable: ``bool``: all/no collections are mutable. ``str``: The
+ name of a single mutable collection. ``list``: A list of names of mutable
+ collections. By default all collections except 'intermediates' are
+ mutable.
+ show_repeated: If `True`, repeated calls to the same module will be shown
+ in the table, otherwise only the first call will be shown. Default is
+ `False`.
+ console_kwargs: An optional dictionary with additional keyword arguments that
+ are passed to `rich.console.Console` when rendering the table. Default arguments
+ are `{'force_terminal': True, 'force_jupyter': False}`.
+ **kwargs: Additional arguments passed to `Module.init`.
Returns:
A function that accepts the same `*args` and `**kwargs` of the forward pass
@@ -166,170 +210,125 @@ def __call__(self, x):
Modules.
"""
- def _tabulate_fn(*args, **kwargs):
- table_fn = _get_module_table(module, rngs, method=method,
- mutable=mutable, depth=depth,
- exclude_methods=set(exclude_methods))
- table = table_fn(*args, **kwargs)
- return _render_table(table)
+ def _tabulate_fn(*fn_args, **fn_kwargs):
+ table_fn = _get_module_table(module, depth=depth, show_repeated=show_repeated)
+ table = table_fn(rngs, *fn_args, mutable=mutable, **fn_kwargs, **kwargs)
+ return _render_table(table, console_kwargs)
return _tabulate_fn
-
def _get_module_table(
- module: 'flax.linen.Module',
- rngs: Union[PRNGKey, RNGSequences],
- method: Optional[Callable[..., Any]],
- mutable: CollectionFilter,
+ module: module_lib.Module,
depth: Optional[int],
- exclude_methods: Set[str],
+ show_repeated: bool,
) -> Callable[..., Table]:
-
- exclude_methods.add("setup")
+ """A function that takes a Module and returns function with the same signature as `init`
+ but returns the Table representation of the Module."""
def _get_table_fn(*args, **kwargs):
- output_methods: Set[str] = set()
-
- def capture_intermediates(_module, method_name: str):
- if method_name in exclude_methods:
- return False
+
+ with module_lib._tabulate_context():
+
+ def _get_variables():
+ return module.init(*args, **kwargs)
+
+ variables = jax.eval_shape(_get_variables)
+ calls = module_lib._context.call_info_stack[-1].calls
+ calls.sort(key=lambda c: c.index)
+
+ collections: Set[str] = set(variables.keys())
+ rows = []
+ all_paths: Set[Tuple[str, ...]] = set(call.path for call in calls)
+ visited_paths: Set[Tuple[str, ...]] = set()
+
+ for c in calls:
+ call_depth = len(c.path)
+ inputs = _process_inputs(c.args, c.kwargs)
+
+ if c.path in visited_paths:
+ if not show_repeated:
+ continue
+ module_vars = {}
+ counted_vars = {}
+ elif depth is not None:
+ if call_depth > depth:
+ continue
+ module_vars, _ = _get_module_variables(c.path, variables, all_paths)
+ if call_depth == depth:
+ counted_vars = _get_path_variables(c.path, variables)
+ else:
+ counted_vars = module_vars
else:
- output_methods.add(method_name)
- return True
-
- shape_variables = jax.eval_shape(lambda: module.init(
- rngs,
- *args,
- method=method,
- mutable=mutable,
- capture_intermediates=capture_intermediates,
- **kwargs,
- ))
-
- collections: List[str] = [
- col for col in shape_variables.keys() if col != 'intermediates'
- ]
- shape_variables = shape_variables.unfreeze()
- rows = list(
- _flatten_to_rows(
- path=(),
- variables=shape_variables,
- depth=depth,
- output_methods=output_methods))
-
- if args and kwargs:
- input_values = (*args, kwargs)
- elif args and not kwargs:
- input_values = args[0] if len(args) == 1 else args
- elif kwargs and not args:
- input_values = kwargs
- else:
- input_values = ''
-
- inputs_row = Row(('Inputs',), input_values, {}, {})
- rows.insert(0, inputs_row)
-
- return Table(module, collections, rows)
+ module_vars, _ = _get_module_variables(c.path, variables, all_paths)
+ counted_vars = module_vars
+
+ visited_paths.add(c.path)
+ rows.append(
+ Row(c.path, c.module_type, c.method, inputs, c.outputs, module_vars, counted_vars))
+
+ return Table(module, tuple(collections), rows)
return _get_table_fn
-
-def _flatten_to_rows(
- path: Tuple[str, ...],
- variables: Dict[str, Any],
- depth: Optional[int],
- output_methods: Set[str],
-) -> Iterable[Row]:
-
- # get variables only for this Module
- module_variables = _get_module_variables(variables)
- module_outputs = {
- key: value
- for key, value in variables['intermediates'].items()
- if key in output_methods
- }
-
- if len(module_outputs) == 0:
- output = None
- elif len(module_outputs) > 1:
- raise ValueError(
- f"Cannot infer output, module '{'/'.join(path)}' has multiple "
- f"intermediates: {list(module_outputs.keys())}. Use the `exclude_methods` "
- f"argument to make sure each module only reports one output.")
- else:
- output = list(module_outputs.values())[0][0]
-
- if depth is not None and depth == 0:
- # don't recurse, yield current level
- # count_variables contains all variables that are not intermediates
- variables = variables.copy()
- del variables['intermediates']
- module_variables.pop('intermediates')
- yield Row(
- path=path,
- outputs=output,
- module_variables=module_variables,
- counted_variables=variables,
- )
+def _get_module_variables(
+ path: Tuple[str, ...], variables: FrozenVariableDict, all_paths: Set[Tuple[str, ...]]
+) -> Tuple[MutableVariableDict, Any]:
+ """A function that takes a path and variables structure and returns a
+ (module_variables, submodule_variables) tuple for that path. _get_module_variables
+ uses the `all_paths` set to determine if a variable belongs to a submodule or not."""
+ module_variables = _get_path_variables(path, variables)
+ submodule_variables = {collection: {} for collection in module_variables}
+ all_keys = set(key for collection in module_variables.values() for key in collection)
+
+ for key in all_keys:
+ submodule_path = path + (key,)
+ if submodule_path in all_paths:
+
+ for collection in module_variables:
+ if key in module_variables[collection]:
+ submodule_variables[collection][key] = module_variables[collection].pop(key)
+
+ return module_variables, submodule_variables
+
+def _get_path_variables(path: Tuple[str, ...], variables: FrozenVariableDict) -> MutableVariableDict:
+ """A function that takes a path and a variables structure and returns the variable structure at
+ that path."""
+ path_variables = {}
+
+ for collection in variables:
+ collection_variables = variables[collection]
+ for name in path:
+ if name not in collection_variables:
+ collection_variables = None
+ break
+ collection_variables = collection_variables[name]
+
+ if collection_variables is not None:
+ path_variables[collection] = collection_variables.unfreeze()
+
+ return path_variables
+
+def _process_inputs(args, kwargs) -> Any:
+ """A function that normalizes the representation of the ``args`` and ``kwargs``
+ for the ``inputs`` column."""
+ if args and kwargs:
+ input_values = (*args, kwargs)
+ elif args and not kwargs:
+ input_values = args[0] if len(args) == 1 else args
+ elif kwargs and not args:
+ input_values = kwargs
else:
- # recurse into lower levels
- keys = list(key for key in variables['intermediates'].keys()
- if key not in module_variables['intermediates'])
-
- # add keys from other collections
- # dont use set here because we want to preserve order
- for collection in variables:
- if collection != 'intermediates':
- for key in variables[collection]:
- if key not in keys and key not in module_variables.get(
- collection, {}):
- keys.append(key)
-
- for key in keys:
- next_path = path + (key,)
- next_variables = _step_into(variables, key)
- yield from _flatten_to_rows(
- path=next_path,
- variables=next_variables,
- depth=depth - 1 if depth is not None else None,
- output_methods=output_methods,
- )
-
- # current row
- yield Row(
- path=path,
- outputs=output,
- module_variables=module_variables,
- counted_variables=module_variables,
- )
-
-
-def _step_into(variables: Dict[str, Any], key: str):
- return {
- col: params[key] for col, params in variables.items() if key in params
- }
+ input_values = ()
+ return input_values
-def _get_module_variables(variables: Dict[str, Any]) -> Dict[str, Any]:
-
- module_variables: Dict[str, Dict[str, Any]] = {
- collection: {
- name: value
- for name, value in params.items()
- if not isinstance(value, Mapping) # is this robust?
- } for collection, params in variables.items()
- }
- # filter empty collectionswhen
- module_variables = {
- collection: params
- for collection, params in module_variables.items()
- if len(params) > 0
- }
-
- return module_variables
-
-
-def _render_table(table: Table) -> str:
+def _render_table(table: Table, console_extras: Optional[Mapping[str, Any]]) -> str:
+ """A function that renders a Table to a string representation using rich."""
+ console_kwargs = {'force_terminal': True, 'force_jupyter': False}
+ if console_extras is not None:
+ console_kwargs.update(console_extras)
+
+ non_params_cols = 4
rich_table = rich.table.Table(
show_header=True,
show_lines=True,
@@ -338,6 +337,8 @@ def _render_table(table: Table) -> str:
)
rich_table.add_column('path')
+ rich_table.add_column('module')
+ rich_table.add_column('inputs')
rich_table.add_column('outputs')
for col in table.collections:
@@ -351,20 +352,25 @@ def _render_table(table: Table) -> str:
if collection in row.module_variables:
col_repr += _as_yaml_str(
- jax.tree_util.tree_map(_format_value,
- row.module_variables[collection]))
- col_repr += '\n\n'
+ _summary_tree_map(_ArrayRepresentation.render_array, row.module_variables[collection]))
+ if col_repr:
+ col_repr += '\n\n'
col_repr += f'[bold]{_size_and_bytes_repr(*size_bytes)}[/bold]'
collections_size_repr.append(col_repr)
+ no_show_methods = {'__call__', '<lambda>'}
+ path_repr = '/'.join(row.path)
+ method_repr = f' [dim]({row.method})[/dim]' if row.method not in no_show_methods else ''
rich_table.add_row(
- '/'.join(row.path) if row.path else table.module.__class__.__name__,
- _as_yaml_str(jax.tree_util.tree_map(_format_value, row.outputs)),
+ path_repr,
+ row.module_type.__name__ + method_repr,
+ _as_yaml_str(_summary_tree_map(lambda x: x.render(), row.inputs)),
+ _as_yaml_str(_summary_tree_map(lambda x: x.render(), row.outputs)),
*collections_size_repr)
# add footer with totals
- rich_table.columns[1].footer = rich.text.Text.from_markup(
+ rich_table.columns[non_params_cols - 1].footer = rich.text.Text.from_markup(
'Total', justify='right')
# get collection totals
@@ -378,8 +384,8 @@ def _render_table(table: Table) -> str:
# add totals to footer
for i, col in enumerate(table.collections):
- rich_table.columns[2 +
- i].footer = _size_and_bytes_repr(*collection_total[col])
+ rich_table.columns[non_params_cols + i].footer = \
+ _size_and_bytes_repr(*collection_total[col])
# add final totals to caption
caption_totals = (0, 0)
@@ -392,8 +398,10 @@ def _render_table(table: Table) -> str:
rich_table.caption_style = 'bold'
rich_table.caption = f'\nTotal Parameters: {_size_and_bytes_repr(*caption_totals)}'
- return '\n' + _get_rich_repr(rich_table) + '\n'
+ return '\n' + _get_rich_repr(rich_table, console_kwargs) + '\n'
+def _summary_tree_map(f, tree, *rest):
+ return jax.tree_util.tree_map(f, tree, *rest, is_leaf=lambda x: x is None)
def _size_and_bytes_repr(size: int, num_bytes: int) -> str:
if not size:
@@ -409,9 +417,9 @@ def _size_and_bytes(pytree: Any) -> Tuple[int, int]:
return size, num_bytes
-def _get_rich_repr(obj):
+def _get_rich_repr(obj, console_kwargs):
f = io.StringIO()
- console = rich.console.Console(file=f, force_terminal=True)
+ console = rich.console.Console(file=f, **console_kwargs)
console.print(obj)
return f.getvalue()
@@ -432,13 +440,13 @@ def _as_yaml_str(value) -> str:
return file.getvalue().replace('\n...', '').replace('\'', '').strip()
-def _format_value(value):
- if hasattr(value, 'shape') and hasattr(value, 'dtype'):
- shape_repr = ','.join(map(str, value.shape))
- return f'[dim]{value.dtype}[/dim][{shape_repr}]'
+def _normalize_structure(obj):
+ if isinstance(obj, (tuple, list)):
+ return tuple(map(_normalize_structure, obj))
+ elif isinstance(obj, Mapping):
+ return {k: _normalize_structure(v) for k, v in obj.items()}
else:
- return str(value)
-
+ return obj
def _bytes_repr(num_bytes):
count, units = ((f'{num_bytes / 1e9 :,.1f}', 'GB') if num_bytes > 1e9 else
| diff --git a/tests/linen/summary_test.py b/tests/linen/summary_test.py
--- a/tests/linen/summary_test.py
+++ b/tests/linen/summary_test.py
@@ -12,23 +12,23 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-import dataclasses
-from typing import List, Type
+from typing import List
import jax
import jax.numpy as jnp
-import numpy as np
from absl.testing import absltest
-from jax import lax, random
-from jax.nn import initializers
+from jax import random
+import numpy as np
from flax import linen as nn
from flax.core.scope import Array
-from flax.linen.summary import _get_module_table
+from flax.linen import summary
# Parse absl flags test_srcdir and test_tmpdir.
jax.config.parse_flags_with_absl()
+CONSOLE_TEST_KWARGS = dict(force_terminal=False, no_color=True, width=10_000)
+
def _get_shapes(pytree):
return jax.tree_util.tree_map(lambda x: x.shape if hasattr(x, 'shape') else x, pytree)
@@ -96,9 +96,7 @@ def __call__(self, x: Array, training: bool) -> Array:
return x, dict(a=x, b=x+1.0)
-
-
-class ModuleTest(absltest.TestCase):
+class SummaryTest(absltest.TestCase):
def test_module_summary(self):
"""
@@ -111,55 +109,63 @@ def test_module_summary(self):
x = jnp.ones((batch_size, 28, 28, 1))
module = CNN(test_sow=False)
- table = _get_module_table(
- module,
+ table = summary._get_module_table(module, depth=None, show_repeated=True)(
{"dropout":random.PRNGKey(0), "params": random.PRNGKey(1)},
- method=None, mutable=True, depth=None,
- exclude_methods=set(),
- )(
- x, training=True
+ x, training=True, mutable=True,
)
+ # get values for inputs and outputs from their _ValueRepresentation
+ for row in table:
+ row.inputs = jax.tree_util.tree_map(lambda x: x.value(), row.inputs)
+ row.outputs = jax.tree_util.tree_map(lambda x: x.value(), row.outputs)
- # 11 rows = 1 Inputs + 4 ConvBlock_0 + 4 ConvBlock_1 + 1 Dense_0 + 1 Module output
- self.assertEqual(len(table), 11)
+ # 10 rows = 1 CNN + 4 ConvBlock_0 + 4 ConvBlock_1 + 1 Dense_0
+ self.assertEqual(len(table), 10)
# check paths
- self.assertEqual(table[0].path, ("Inputs",))
+ self.assertEqual(table[0].path, ())
- self.assertEqual(table[1].path, ("block1", "bn"))
+ self.assertEqual(table[1].path, ("block1",))
self.assertEqual(table[2].path, ("block1", "conv"))
- self.assertEqual(table[3].path, ("block1", "dropout"))
- self.assertEqual(table[4].path, ("block1",))
+ self.assertEqual(table[3].path, ("block1", "bn"))
+ self.assertEqual(table[4].path, ("block1", "dropout"))
- self.assertEqual(table[5].path, ("block2", "bn"))
+ self.assertEqual(table[5].path, ("block2",))
self.assertEqual(table[6].path, ("block2", "conv"))
- self.assertEqual(table[7].path, ("block2", "dropout"))
- self.assertEqual(table[8].path, ("block2",))
+ self.assertEqual(table[7].path, ("block2", "bn"))
+ self.assertEqual(table[8].path, ("block2", "dropout"))
self.assertEqual(table[9].path, ("dense",))
- self.assertEqual(table[10].path, ())
# check outputs shapes
self.assertEqual(
- (table[0].outputs[0].shape, table[0].outputs[1]),
+ (table[0].inputs[0].shape, table[0].inputs[1]),
(x.shape, dict(training=True)),
)
+ self.assertEqual(
+ _get_shapes(table[0].outputs),
+ ((batch_size, 10), dict(a=(batch_size, 10), b=(batch_size, 10))),
+ )
+ self.assertEqual(_get_shapes(table[1].inputs), ((batch_size, 28, 28, 1), {'training': True}))
self.assertEqual(table[1].outputs.shape, (batch_size, 28, 28, 32))
+ self.assertEqual(table[2].inputs.shape, (batch_size, 28, 28, 1))
self.assertEqual(table[2].outputs.shape, (batch_size, 28, 28, 32))
+ self.assertEqual(_get_shapes(table[3].inputs), ((batch_size, 28, 28, 32), {'use_running_average': False}))
self.assertEqual(table[3].outputs.shape, (batch_size, 28, 28, 32))
+ self.assertEqual(_get_shapes(table[4].inputs), ((batch_size, 28, 28, 32), {'deterministic': False}))
self.assertEqual(table[4].outputs.shape, (batch_size, 28, 28, 32))
+ self.assertEqual(_get_shapes(table[5].inputs), ((batch_size, 28, 28, 32), {'training': True}))
self.assertEqual(table[5].outputs.shape, (batch_size, 28, 28, 64))
+ self.assertEqual(table[6].inputs.shape, (batch_size, 28, 28, 32))
self.assertEqual(table[6].outputs.shape, (batch_size, 28, 28, 64))
+ self.assertEqual(_get_shapes(table[7].inputs), ((batch_size, 28, 28, 64), {'use_running_average': False}))
self.assertEqual(table[7].outputs.shape, (batch_size, 28, 28, 64))
+ self.assertEqual(_get_shapes(table[8].inputs), ((batch_size, 28, 28, 64), {'deterministic': False}))
self.assertEqual(table[8].outputs.shape, (batch_size, 28, 28, 64))
+ self.assertEqual(table[9].inputs.shape, (batch_size, 64))
self.assertEqual(table[9].outputs.shape, (batch_size, 10))
- self.assertEqual(
- _get_shapes(table[10].outputs),
- ((batch_size, 10), dict(a=(batch_size, 10), b=(batch_size, 10))),
- )
# check no summary is performed
for row in table:
@@ -178,45 +184,51 @@ def test_module_summary_with_depth(self):
x = jnp.ones((batch_size, 28, 28, 1))
module = CNN(test_sow=False)
- table = _get_module_table(
- module,
+ table = summary._get_module_table(module, depth=1, show_repeated=True)(
{"dropout":random.PRNGKey(0), "params": random.PRNGKey(1)},
- method=None, mutable=True, depth=1,
- exclude_methods=set(),
- )(
- x, training=True
+ x, training=True, mutable=True,
)
+ # get values for inputs and outputs from their _ValueRepresentation
+ for row in table:
+ row.inputs = jax.tree_util.tree_map(lambda x: x.value(), row.inputs)
+ row.outputs = jax.tree_util.tree_map(lambda x: x.value(), row.outputs)
- # 5 rows = 1 Inputs + 1 ConvBlock_0 + 1 ConvBlock_1 + 1 Dense_0 + 1 Module output
- self.assertEqual(len(table), 5)
+ # 4 rows = 1 CNN + 1 ConvBlock_0 + 1 ConvBlock_1 + 1 Dense_0
+ self.assertEqual(len(table), 4)
# check paths
- self.assertEqual(table[0].path, ("Inputs",))
+ self.assertEqual(table[0].path, ())
+
self.assertEqual(table[1].path, ("block1",))
self.assertEqual(table[2].path, ("block2",))
self.assertEqual(table[3].path, ("dense",))
- self.assertEqual(table[4].path, ())
# check outputs shapes
self.assertEqual(
- (table[0].outputs[0].shape, table[0].outputs[1]),
+ (table[0].inputs[0].shape, table[0].inputs[1]),
(x.shape, dict(training=True)),
)
- self.assertEqual(table[1].outputs.shape, (batch_size, 28, 28, 32))
- self.assertEqual(table[2].outputs.shape, (batch_size, 28, 28, 64))
- self.assertEqual(table[3].outputs.shape, (batch_size, 10))
self.assertEqual(
- _get_shapes(table[4].outputs),
+ _get_shapes(table[0].outputs),
((batch_size, 10), dict(a=(batch_size, 10), b=(batch_size, 10))),
)
+ self.assertEqual(_get_shapes(table[1].inputs), ((batch_size, 28, 28, 1), {'training': True}))
+ self.assertEqual(table[1].outputs.shape, (batch_size, 28, 28, 32))
+
+ self.assertEqual(_get_shapes(table[2].inputs), ((batch_size, 28, 28, 32), {'training': True}))
+ self.assertEqual(table[2].outputs.shape, (batch_size, 28, 28, 64))
+
+ self.assertEqual(table[3].inputs.shape, (batch_size, 64))
+ self.assertEqual(table[3].outputs.shape, (batch_size, 10))
+
# check ConvBlock_0 and ConvBlock_1 are summarized
self.assertNotEqual(table[1].module_variables, table[1].counted_variables)
self.assertNotEqual(table[2].module_variables, table[2].counted_variables)
- # check Dense_0 and Module output are not summarized
+ # check CNN and Dense_0 output are not summarized
+ self.assertEqual(table[0].module_variables, table[0].counted_variables)
self.assertEqual(table[3].module_variables, table[3].counted_variables)
- self.assertEqual(table[4].module_variables, table[4].counted_variables)
def test_tabulate(self):
@@ -233,6 +245,7 @@ def test_tabulate(self):
{"dropout":random.PRNGKey(0), "params": random.PRNGKey(1)},
x,
training=True,
+ console_kwargs=CONSOLE_TEST_KWARGS,
)
# NOTE: its tricky to validate the content of lines
@@ -246,9 +259,11 @@ def test_tabulate(self):
# check headers are correct
self.assertIn("path", lines[3])
+ self.assertIn("module", lines[3])
+ self.assertIn("inputs", lines[3])
self.assertIn("outputs", lines[3])
self.assertIn("params", lines[3])
- self.assertIn("batch_stats", lines[3])
+ self.assertIn("batch_stats", lines[3])
# collection counts
self.assertIn("Total", lines[-6])
@@ -274,9 +289,11 @@ def test_tabulate_with_sow(self):
{"dropout":random.PRNGKey(0), "params": random.PRNGKey(1)},
x,
training=True,
+ console_kwargs=CONSOLE_TEST_KWARGS,
)
- self.assertNotIn("INTERM", module_repr)
+ self.assertIn("intermediates", module_repr)
+ self.assertIn("INTERM", module_repr)
def test_tabulate_with_method(self):
@@ -290,9 +307,11 @@ def test_tabulate_with_method(self):
x,
training=True,
method=CNN.cnn_method,
+ console_kwargs=CONSOLE_TEST_KWARGS,
)
- self.assertNotIn("INTERM", module_repr)
+ self.assertIn("(block_method)", module_repr)
+ self.assertIn("(cnn_method)", module_repr)
def test_tabulate_function(self):
"""
@@ -307,14 +326,12 @@ def test_tabulate_function(self):
module_repr = nn.tabulate(
module,
{"dropout":random.PRNGKey(0), "params": random.PRNGKey(1)},
+ console_kwargs=CONSOLE_TEST_KWARGS,
)(
x,
training=True,
)
- # NOTE: its tricky to validate the content of lines
- # because it seems to be shell-dependent, so we will
- # just check lines that wont change between environments
lines = module_repr.split("\n")
# check title
@@ -323,6 +340,8 @@ def test_tabulate_function(self):
# check headers are correct
self.assertIn("path", lines[3])
+ self.assertIn("module", lines[3])
+ self.assertIn("inputs", lines[3])
self.assertIn("outputs", lines[3])
self.assertIn("params", lines[3])
self.assertIn("batch_stats", lines[3])
@@ -337,4 +356,158 @@ def test_tabulate_function(self):
# total counts
self.assertIn("Total Parameters", lines[-3])
self.assertIn("19,850", lines[-3])
- self.assertIn("79.4 KB", lines[-3])
\ No newline at end of file
+ self.assertIn("79.4 KB", lines[-3])
+
+
+ def test_lifted_transform(self):
+ class LSTM(nn.Module):
+ batch_size: int
+ out_feat: int
+
+ @nn.compact
+ def __call__(self, x):
+ carry = nn.LSTMCell.initialize_carry(
+ random.PRNGKey(0), (self.batch_size,), self.out_feat
+ )
+ Cell = nn.scan(
+ nn.LSTMCell,
+ variable_broadcast="params",
+ split_rngs={"params": False},
+ in_axes=1,
+ out_axes=1,
+ )
+ return Cell(name="ScanLSTM")(carry, x)
+
+
+ lstm = LSTM(batch_size=32, out_feat=128)
+
+ with jax.check_tracer_leaks(True):
+ module_repr = lstm.tabulate(
+ random.PRNGKey(0),
+ x=jnp.ones((32, 128, 64)),
+ console_kwargs=CONSOLE_TEST_KWARGS)
+
+ lines = module_repr.splitlines()
+
+ self.assertIn("LSTM", lines[5])
+ self.assertIn("ScanLSTM", lines[9])
+ self.assertIn("LSTMCell", lines[9])
+ self.assertIn("ScanLSTM/ii", lines[13])
+ self.assertIn("Dense", lines[13])
+
+ def test_lifted_transform_no_rename(self):
+ class LSTM(nn.Module):
+ batch_size: int
+ out_feat: int
+
+ @nn.compact
+ def __call__(self, x):
+ carry = nn.LSTMCell.initialize_carry(
+ random.PRNGKey(0), (self.batch_size,), self.out_feat
+ )
+ Cell = nn.scan(
+ nn.LSTMCell,
+ variable_broadcast="params",
+ split_rngs={"params": False},
+ in_axes=1,
+ out_axes=1,
+ )
+ return Cell()(carry, x)
+
+
+ lstm = LSTM(batch_size=32, out_feat=128)
+
+ with jax.check_tracer_leaks(True):
+ module_repr = lstm.tabulate(
+ random.PRNGKey(0),
+ x=jnp.ones((32, 128, 64)),
+ console_kwargs=CONSOLE_TEST_KWARGS)
+
+ lines = module_repr.splitlines()
+
+ self.assertIn("LSTM", lines[5])
+ self.assertIn("ScanLSTMCell_0", lines[9])
+ self.assertIn("LSTMCell", lines[9])
+ self.assertIn("ScanLSTMCell_0/ii", lines[13])
+ self.assertIn("Dense", lines[13])
+
+ def test_module_reuse(self):
+ class ConvBlock(nn.Module):
+ @nn.compact
+ def __call__(self, x):
+ x = nn.Conv(32, [3, 3])(x)
+ x = nn.BatchNorm(use_running_average=True)(x)
+ x = nn.Dropout(0.5, deterministic=True)(x)
+ x = nn.relu(x)
+ return x
+
+ class CNN(nn.Module):
+ @nn.compact
+ def __call__(self, x):
+ block = ConvBlock()
+ x = block(x)
+ x = block(x)
+ x = block(x)
+ return x
+
+ x = jnp.ones((4, 28, 28, 32))
+ module_repr = CNN().tabulate(
+ jax.random.PRNGKey(0),
+ x=x,
+ show_repeated=True,
+ console_kwargs=CONSOLE_TEST_KWARGS)
+ lines = module_repr.splitlines()
+
+ # first call
+ self.assertIn("ConvBlock_0/Conv_0", lines[9])
+ self.assertIn("bias", lines[9])
+ self.assertIn("ConvBlock_0/BatchNorm_0", lines[14])
+ self.assertIn("mean", lines[14])
+ self.assertIn("bias", lines[14])
+ self.assertIn("ConvBlock_0/Dropout_0", lines[19])
+
+ # second call
+ self.assertIn("ConvBlock_0/Conv_0", lines[23])
+ self.assertNotIn("bias", lines[23])
+ self.assertIn("ConvBlock_0/BatchNorm_0", lines[25])
+ self.assertNotIn("mean", lines[25])
+ self.assertNotIn("bias", lines[25])
+ self.assertIn("ConvBlock_0/Dropout_0", lines[27])
+
+ # third call
+ self.assertIn("ConvBlock_0/Conv_0", lines[31])
+ self.assertNotIn("bias", lines[31])
+ self.assertIn("ConvBlock_0/BatchNorm_0", lines[33])
+ self.assertNotIn("mean", lines[33])
+ self.assertNotIn("bias", lines[33])
+ self.assertIn("ConvBlock_0/Dropout_0", lines[35])
+
+ def test_empty_input(self):
+ class EmptyInput(nn.Module):
+ @nn.compact
+ def __call__(self):
+ return 1
+
+ module = EmptyInput()
+ module_repr = module.tabulate({}, console_kwargs=CONSOLE_TEST_KWARGS)
+ lines = module_repr.splitlines()
+
+ self.assertRegex(lines[5], r'|\s*|\s*EmptyInput\s*|\s*|\s*1\s*|')
+
+ def test_numpy_scalar(self):
+ class Submodule(nn.Module):
+ def __call__(self, x):
+ return x + 1
+
+ class EmptyInput(nn.Module):
+ @nn.compact
+ def __call__(self):
+ return Submodule()(x=np.pi)
+
+ module = EmptyInput()
+ module_repr = module.tabulate({}, console_kwargs=CONSOLE_TEST_KWARGS)
+ lines = module_repr.splitlines()
+
+ self.assertIn('4.141592', lines[5])
+ self.assertIn('x: 3.141592', lines[7])
+ self.assertIn('4.141592', lines[7])
\ No newline at end of file
| `nn.tabulate` results in `KeyError: 'intermediates'` with methods that include transformations
### System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Arch Linux x64
- Flax, jax, jaxlib versions (obtain with `pip show flax jax jaxlib`: `flax=0.5.1`, `jax=0.3.13`, `jaxlib=0.3.10`
- Python version: 3.10.5
- GPU/TPU model and memory: RTX 3080 12GB
- CUDA version (if applicable): 11.7
### Problem you have encountered:
`nn.tabulate` results in KeyError: 'intermediates' when used with methods that include transformations. Tested with `vmap` and `scan` (included repro below).
### Logs, error messages, etc:
```
File ~/.local/lib/python3.10/site-packages/flax/linen/summary.py:173, in tabulate.<locals>._tabulate_fn(*args, **kwargs)
169 def _tabulate_fn(*args, **kwargs):
170 table_fn = _get_module_table(module, rngs, method=method,
171 mutable=mutable, depth=depth,
172 exclude_methods=set(exclude_methods))
--> 173 table = table_fn(*args, **kwargs)
174 return _render_table(table)
File ~/.local/lib/python3.10/site-packages/flax/linen/summary.py:213, in _get_module_table.<locals>._get_table_fn(*args, **kwargs)
209 collections: List[str] = [
210 col for col in shape_variables.keys() if col != 'intermediates'
211 ]
212 shape_variables = shape_variables.unfreeze()
--> 213 rows = list(
214 _flatten_to_rows(
215 path=(),
216 variables=shape_variables,
217 depth=depth,
...
250 }
252 if len(module_outputs) == 0:
253 output = None
KeyError: 'intermediates'
```
### Steps to reproduce:
Minimal repro:
```python
import jax.numpy as jnp
from jax import random
import flax.linen as nn
class LSTM(nn.Module):
batch_size: int
out_feat: int
@nn.compact
def __call__(self, x):
carry = nn.LSTMCell.initialize_carry(
random.PRNGKey(0), (self.batch_size,), self.out_feat
)
Cell = nn.scan(
nn.LSTMCell,
variable_broadcast="params",
split_rngs={"params": False},
in_axes=1,
out_axes=1,
)
return Cell()(carry, x)
if __name__ == "__main__":
lstm = LSTM(batch_size=128, out_feat=128)
# KeyError: 'intermediates'
print(lstm.tabulate(random.PRNGKey(0), jnp.ones((128, 128))))
```
| Hey @RocketLL, thanks for the minimal repro.
@jheek @marcvanzee the `Cell` Module (`ScanLSTMCell_0`) is neither reporting its outputs nor the output of its submodules as shown here:
```
{
intermediates: {
__call__: (((ShapeDtypeStruct(shape=(128, 128), dtype=float32), ShapeDtypeStruct(shape=(128, 128), dtype=float32)), ShapeDtypeStruct(shape=(128, 128, 128), dtype=float32)),),
},
params: {
ScanLSTMCell_0: {
hf: {
bias: ShapeDtypeStruct(shape=(128,), dtype=float32),
kernel: ShapeDtypeStruct(shape=(128, 128), dtype=float32),
},
hg: {
bias: ShapeDtypeStruct(shape=(128,), dtype=float32),
kernel: ShapeDtypeStruct(shape=(128, 128), dtype=float32),
},
hi: {
bias: ShapeDtypeStruct(shape=(128,), dtype=float32),
kernel: ShapeDtypeStruct(shape=(128, 128), dtype=float32),
},
ho: {
bias: ShapeDtypeStruct(shape=(128,), dtype=float32),
kernel: ShapeDtypeStruct(shape=(128, 128), dtype=float32),
},
if: {
kernel: ShapeDtypeStruct(shape=(128, 128), dtype=float32),
},
ig: {
kernel: ShapeDtypeStruct(shape=(128, 128), dtype=float32),
},
ii: {
kernel: ShapeDtypeStruct(shape=(128, 128), dtype=float32),
},
io: {
kernel: ShapeDtypeStruct(shape=(128, 128), dtype=float32),
},
},
},
}
```
Is this expected?
For now I'll have to rethink the policy of how to detect submodules; there are two options:
1. Treat `ScanLSTMCell_0`, `hf`, etc, as submodules with no outputs.
2. Treat the whole `ScanLSTMCell_0` structure as just parameters.
@jheek points out that this can be fixed by adding `intermediates` to `variable_axes`:
```python
Cell = nn.scan(
nn.LSTMCell,
variable_broadcast="params",
split_rngs={"params": False},
variable_axes={"intermediates": 1},
in_axes=1,
out_axes=1,
)
```
However the internal use of `capture_intermediates` is an implementation detail and the users shouldn't be required to be aware of this. I'll create a new issue to track this broader problem with `tabulate` and propose a more general solution. | 2022-07-21T15:49:59 |
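For readers landing here after the refactor, a small usage sketch of the reworked `tabulate` API (assuming a Flax version that already contains this change); `show_repeated` and `console_kwargs` are the new knobs added by the PR:

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
    @nn.compact
    def __call__(self, x):
        # Two Dense layers are enough to populate a couple of table rows.
        return nn.Dense(2)(nn.Dense(4)(x))

print(MLP().tabulate(
    jax.random.PRNGKey(0), jnp.ones((1, 3)),
    show_repeated=True,
    console_kwargs={'force_terminal': False, 'width': 120}))
```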
google/flax | 2,343 | google__flax-2343 | [
"2342",
"2342"
] | 0740ef63c4eae05de58d80f85a05fc23bb8b3261 | diff --git a/flax/training/checkpoints.py b/flax/training/checkpoints.py
--- a/flax/training/checkpoints.py
+++ b/flax/training/checkpoints.py
@@ -58,7 +58,7 @@
def _checkpoint_path(ckpt_dir: str,
- step: Union[int, str],
+ step: Union[int, float, str],
prefix: str = 'checkpoint_') -> str:
return os.path.join(ckpt_dir, f'{prefix}{step}')
@@ -113,7 +113,7 @@ def _save_gdas(gda_manager: GlobalAsyncCheckpointManager,
def _restore_gdas(state_dict,
target: Optional[Any],
ckpt_path: str,
- step: Optional[int] = None,
+ step: Optional[Union[int, float]] = None,
gda_manager: Optional[GlobalAsyncCheckpointManager] = None):
# When target is a single leaf instead of a pytree dict.
@@ -222,7 +222,7 @@ def save_async(self, task: Callable[[], Any]):
def save_checkpoint(ckpt_dir: Union[str, os.PathLike],
target: PyTree,
- step: int,
+ step: Union[int, float],
prefix: str = 'checkpoint_',
keep: int = 1,
overwrite: bool = False,
@@ -381,7 +381,7 @@ def latest_checkpoint(ckpt_dir: Union[str, os.PathLike],
def restore_checkpoint(
ckpt_dir: Union[str, os.PathLike],
target: Optional[Any],
- step: Optional[int] = None,
+ step: Optional[Union[int, float]] = None,
prefix: str = 'checkpoint_',
parallel: bool = True,
gda_manager: Optional[GlobalAsyncCheckpointManager] = None) -> PyTree:
@@ -400,7 +400,7 @@ def restore_checkpoint(
ckpt_dir: str: checkpoint file or directory of checkpoints to restore from.
target: matching object to rebuild via deserialized state-dict. If None, the
deserialized state-dict is returned as-is.
- step: int: step number to load or None to load latest. If specified,
+ step: int or float: step number to load or None to load latest. If specified,
ckpt_dir must be a directory.
prefix: str: name prefix of checkpoint files.
parallel: bool: whether to load seekable checkpoints in parallel, for speed.
| Inconsistent type annotation of `step` in `training.checkpoints`
### System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Any
- Flax, jax, jaxlib versions (obtain with `pip show flax jax jaxlib`:
```
Name: flax
Version: 0.5.2
Summary: Flax: A neural network library for JAX designed for flexibility
Home-page: https://github.com/google/flax
Author: Flax team
Author-email: [email protected]
License: UNKNOWN
Location: /home/qys/Research/generative-distribution-shift/.venv/lib/python3.8/site-packages
Requires: jax, matplotlib, msgpack, numpy, optax, PyYAML, rich, typing-extensions
Required-by:
---
Name: jax
Version: 0.3.14
Summary: Differentiate, compile, and transform Numpy code.
Home-page: https://github.com/google/jax
Author: JAX team
Author-email: [email protected]
License: Apache-2.0
Location: /home/qys/Research/generative-distribution-shift/.venv/lib/python3.8/site-packages
Requires: absl-py, etils, numpy, opt-einsum, scipy, typing-extensions
Required-by: chex, flax, optax
---
Name: jaxlib
Version: 0.3.14
Summary: XLA library for JAX
Home-page: https://github.com/google/jax
Author: JAX team
Author-email: [email protected]
License: Apache-2.0
Location: /home/qys/Research/generative-distribution-shift/.venv/lib/python3.8/site-packages
Requires: absl-py, flatbuffers, numpy, scipy
Required-by: chex, optax
```
- Python version: Python 3.8.10
- GPU/TPU model and memory: Any
- CUDA version (if applicable): N/A
### Problem you have encountered:
The argument `step` has an inconsistent type annotation. For example, the docstring of `save_checkpoint` says it can be either an integer or a float
https://github.com/google/flax/blob/0740ef63c4eae05de58d80f85a05fc23bb8b3261/flax/training/checkpoints.py#L241
but the docstring of `restore_checkpoint` says it must be an integer
https://github.com/google/flax/blob/0740ef63c4eae05de58d80f85a05fc23bb8b3261/flax/training/checkpoints.py#L403-L404
However, the example given by `restore_checkpoint` hints that a float is fine
https://github.com/google/flax/blob/0740ef63c4eae05de58d80f85a05fc23bb8b3261/flax/training/checkpoints.py#L390-L397
The documentation is also inconsistent with the actual type annotation. This makes linters like mypy and pyright unhappy.
https://github.com/google/flax/blob/0740ef63c4eae05de58d80f85a05fc23bb8b3261/flax/training/checkpoints.py#L225
https://github.com/google/flax/blob/0740ef63c4eae05de58d80f85a05fc23bb8b3261/flax/training/checkpoints.py#L384
### What you expected to happen:
The correct type annotation should be `step: Union[int, float]`.
### Logs, error messages, etc:
N/A
### Steps to reproduce:
N/A
| 2022-07-27T23:24:35 |
||
google/flax | 2,364 | google__flax-2364 | [
"2362"
] | d0e1459183b3b818058a951fe96294c00f276333 | diff --git a/flax/linen/linear.py b/flax/linen/linear.py
--- a/flax/linen/linear.py
+++ b/flax/linen/linear.py
@@ -603,7 +603,7 @@ def __call__(self, inputs: Array) -> Array:
total_pad = [
((size_diff + 1) // 2, size_diff // 2) for size_diff in size_diffs
]
- y = np.pad(y, [(0, 0)] + total_pad + [(0, 0)])
+ y = jnp.pad(y, [(0, 0)] + total_pad + [(0, 0)])
# Wrap the result periodically around each spatial dimension,
# one by one.
for i in range(1, y.ndim - 1):
| diff --git a/tests/linen/linen_linear_test.py b/tests/linen/linen_linear_test.py
--- a/tests/linen/linen_linear_test.py
+++ b/tests/linen/linen_linear_test.py
@@ -773,6 +773,21 @@ def test_circular_conv_transpose_2d_constant(
)
np.testing.assert_allclose(y, correct_ans)
+ def test_circular_conv_transpose_2d_with_vmap(self):
+ layer = nn.ConvTranspose(features=5, kernel_size=(3,), padding="CIRCULAR")
+
+ # this is ok
+ sample_input = jnp.ones((1, 32, 2))
+ out, vars = layer.init_with_output(jax.random.PRNGKey(0), sample_input)
+ self.assertEqual(out.shape, (1, 32, 5))
+
+ batch_input = jnp.ones((8, 32, 2))
+ batch_apply = jax.vmap(layer.apply, in_axes=(None, 0))
+
+ # this breaks with the error provided
+ batch_out = batch_apply(vars, batch_input)
+ self.assertEqual(batch_out.shape, (8, 32, 5))
+
def test_circular_conv_transpose_1d_custom(self):
"""Test 1d transposed convolution with circular padding and a stride."""
rng = dict(params=random.PRNGKey(0))
| Transpose Convolution module issue when used with circular padding and vmap
### Problem you have encountered:
I'm simply trying to `vmap` a `ConvTranspose` layer with circular padding, and it results in a `jax._src.errors.TracerArrayConversionError`. I'm running things on GPU.
### Steps to reproduce:
Here is a minimum example that reproduces the error.
```
import jax
import jax.numpy as jnp
import flax.linen as nn
layer = nn.ConvTranspose(features=5, kernel_size=(3,), padding="CIRCULAR")
# this is ok
sample_input = jnp.ones((1, 32, 2))
out, vars = layer.init_with_output(jax.random.PRNGKey(0), sample_input)
print(out.shape)
batch_input = jnp.ones((8, 4, 32, 2))
batch_apply = jax.vmap(layer.apply, in_axes=(None, 0))
# this breaks with the error provided
batch_out = batch_apply(vars, batch_input)
print(batch_out.shape)
```
I'm moderately confident that this is a bug that is specific to the transpose convolution because I verified that the code works ok if `nn.ConvTranspose` is replaced with `nn.Conv`. Things are also ok when `vmap` is not used.
### Logs, error messages, etc:
```
UnfilteredStackTrace: jax._src.errors.TracerArrayConversionError: The numpy.ndarray conversion method __array__() was called on the JAX Tracer object Traced<ShapedArray(float32[4,34,5])>with<BatchTrace(level=1/0)> with
val = DeviceArray([[[[ 0.19632685, 0.56257343, 0.6327205 , 0.278047 ,
...
```
### My guess at what's happening:
I suspect that since padding needs to be added, the shape information is no longer static at some point?
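A minimal, standalone sketch of this suspicion (consistent with the one-line `np.pad` → `jnp.pad` change in the patch above; the shapes below are arbitrary):
```python
import jax
import jax.numpy as jnp
import numpy as np

def pad_with_numpy(y):
  # numpy.pad materializes its input via __array__(), which fails on a vmap tracer
  return np.pad(y, [(0, 0), (1, 1), (0, 0)])

def pad_with_jnp(y):
  # jnp.pad stays inside JAX tracing, so it works under vmap
  return jnp.pad(y, [(0, 0), (1, 1), (0, 0)])

x = jnp.ones((8, 4, 32, 2))
jax.vmap(pad_with_jnp)(x)      # fine
# jax.vmap(pad_with_numpy)(x)  # raises TracerArrayConversionError, as in the log above
```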
| 2022-08-03T15:58:29 |
|
google/flax | 2,407 | google__flax-2407 | [
"2406"
] | cda7a4c85bbce744e412ab82e298ddf76d4770d2 | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -30,7 +30,7 @@
"matplotlib", # only needed for tensorboard export
"msgpack",
"optax",
- "rich~=11.1",
+ "rich>=11.1",
"typing_extensions>=4.1.1",
"PyYAML>=5.4.1",
]
| Outdated `rich` dependency version
The version of `rich` is currently limited to `rich~=11.1`, causing problems with `pip` dependency resolution when installing with other packages.
https://github.com/google/flax/blob/cda7a4c85bbce744e412ab82e298ddf76d4770d2/setup.py#L33
Should be a trivial fix since `flax.linen.summary` doesn't seem to need any changes, I'll open a PR.
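To illustrate why the `~=` pin is the culprit, here is a small self-contained check using the `packaging` library (the same specifier semantics pip relies on):
```python
from packaging.specifiers import SpecifierSet

assert not SpecifierSet('~=11.1').contains('12.0.0')  # compatible-release pin excludes rich 12.x
assert SpecifierSet('>=11.1').contains('12.0.0')      # the relaxed pin in the patch above allows it
```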
| 2022-08-18T01:07:37 |
||
google/flax | 2,425 | google__flax-2425 | [
"2156"
] | 0451a55be575095a07be13490f68b8d1b1687177 | diff --git a/flax/linen/linear.py b/flax/linen/linear.py
--- a/flax/linen/linear.py
+++ b/flax/linen/linear.py
@@ -467,7 +467,42 @@ def maybe_broadcast(x: Optional[Union[int, Sequence[int]]]) -> (
class Conv(_Conv):
- """Convolution Module wrapping `lax.conv_general_dilated`."""
+ """Convolution Module wrapping `lax.conv_general_dilated`.
+
+ Attributes:
+ features: number of convolution filters.
+ kernel_size: shape of the convolutional kernel. For 1D convolution,
+ the kernel size can be passed as an integer. For all other cases, it must
+ be a sequence of integers.
+ strides: an integer or a sequence of `n` integers, representing the
+ inter-window strides (default: 1).
+ padding: either the string `'SAME'`, the string `'VALID'`, the string
+ `'CIRCULAR'` (periodic boundary conditions), or a sequence of `n` `(low,
+ high)` integer pairs that give the padding to apply before and after each
+ spatial dimension. A single int is interpeted as applying the same padding
+ in all dims and passign a single int in a sequence causes the same padding
+ to be used on both sides. `'CAUSAL'` padding for a 1D convolution will
+ left-pad the convolution axis, resulting in same-sized output.
+ input_dilation: an integer or a sequence of `n` integers, giving the
+ dilation factor to apply in each spatial dimension of `inputs`
+ (default: 1). Convolution with input dilation `d` is equivalent to
+ transposed convolution with stride `d`.
+ kernel_dilation: an integer or a sequence of `n` integers, giving the
+ dilation factor to apply in each spatial dimension of the convolution
+ kernel (default: 1). Convolution with kernel dilation
+ is also known as 'atrous convolution'.
+ feature_group_count: integer, default 1. If specified divides the input
+ features into groups.
+ use_bias: whether to add a bias to the output (default: True).
+ mask: Optional mask for the weights during masked convolution. The mask must
+ be the same shape as the convolution weight matrix.
+ dtype: the dtype of the computation (default: infer from input and params).
+ param_dtype: the dtype passed to parameter initializers (default: float32).
+ precision: numerical precision of the computation see `jax.lax.Precision`
+ for details.
+ kernel_init: initializer for the convolutional kernel.
+ bias_init: initializer for the bias.
+ """
@property
def shared_weights(self) -> bool:
@@ -475,7 +510,42 @@ def shared_weights(self) -> bool:
class ConvLocal(_Conv):
- """Local convolution Module wrapping `lax.conv_general_dilated_local`."""
+ """Local convolution Module wrapping `lax.conv_general_dilated_local`.
+
+ Attributes:
+ features: number of convolution filters.
+ kernel_size: shape of the convolutional kernel. For 1D convolution,
+ the kernel size can be passed as an integer. For all other cases, it must
+ be a sequence of integers.
+ strides: an integer or a sequence of `n` integers, representing the
+ inter-window strides (default: 1).
+ padding: either the string `'SAME'`, the string `'VALID'`, the string
+ `'CIRCULAR'` (periodic boundary conditions), or a sequence of `n` `(low,
+ high)` integer pairs that give the padding to apply before and after each
+ spatial dimension. A single int is interpeted as applying the same padding
+ in all dims and passign a single int in a sequence causes the same padding
+ to be used on both sides. `'CAUSAL'` padding for a 1D convolution will
+ left-pad the convolution axis, resulting in same-sized output.
+ input_dilation: an integer or a sequence of `n` integers, giving the
+ dilation factor to apply in each spatial dimension of `inputs`
+ (default: 1). Convolution with input dilation `d` is equivalent to
+ transposed convolution with stride `d`.
+ kernel_dilation: an integer or a sequence of `n` integers, giving the
+ dilation factor to apply in each spatial dimension of the convolution
+ kernel (default: 1). Convolution with kernel dilation
+ is also known as 'atrous convolution'.
+ feature_group_count: integer, default 1. If specified divides the input
+ features into groups.
+ use_bias: whether to add a bias to the output (default: True).
+ mask: Optional mask for the weights during masked convolution. The mask must
+ be the same shape as the convolution weight matrix.
+ dtype: the dtype of the computation (default: infer from input and params).
+ param_dtype: the dtype passed to parameter initializers (default: float32).
+ precision: numerical precision of the computation see `jax.lax.Precision`
+ for details.
+ kernel_init: initializer for the convolutional kernel.
+ bias_init: initializer for the bias.
+ """
@property
def shared_weights(self) -> bool:
| Conv docs page doesn't show attribute/argument's description
`Conv` and `ConvLocal` inherit from `_Conv` but their docstrings don't "re-expose" the `Attributes` section, so Sphinx doesn't show the description of each attribute to users. An easy solution would be to just duplicate these sections; alternatively, dynamically modifying `{Conv, ConvLocal}.__doc__` to add the common attributes section *might* work.

| This situation got worse with the recent template change:

| 2022-08-30T17:07:04 |
|
google/flax | 2,440 | google__flax-2440 | [
"1014"
] | fb8b640b0fedb4a771caf7b4b2d9ec85e0cb2d85 | diff --git a/flax/linen/module.py b/flax/linen/module.py
--- a/flax/linen/module.py
+++ b/flax/linen/module.py
@@ -775,8 +775,11 @@ def __getattr__(self, name: str) -> Any:
if name in self.__dict__:
return self.__dict__[name]
else:
- raise AttributeError(
- f'"{self.__class__.__name__}" object has no attribute "{name}"')
+ msg = f'"{self.__class__.__name__}" object has no attribute "{name}".'
+ if self.scope is None:
+ msg += (f' If "{name}" is defined in \'.setup()\', remember these fields '
+ 'are only accessible from inside \'init\' or \'apply\'.')
+ raise AttributeError(msg)
def __dir__(self) -> List[str]:
"""Call setup() before listing attributes."""
| Raise clearer Exception when calling method of unbound module
Using this minimal example
```
import jax.numpy as np
from jax.numpy import log, exp
import jax.random as rand
import flax.linen as ln
class MultipleForw(ln.Module):
def setup(self):
self.s1 = self.param("s1", ln.initializers.ones, (1,))
def __call__(self, X, ):
return X * log(1 + exp(self.s1 - 1))
mf = MultipleForw()
X = np.arange(5)
mf.init(rand.PRNGKey(0), X)
mf(X)
```
### Problem you have encountered:
The last line raised the rather opaque error message `AttributeError: 'MultipleForw' object has no attribute 's1'`
### What you expected to happen:
The raised Exception should contain a hint making it clear that the correct way to call a linen Module is by using `mf.apply(parameters, input)`. See Discussion #1013
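For reference, a minimal sketch of the intended call pattern, continuing the snippet above (so `mf`, `X`, and the imports are as defined there); this is the usage the hint should point to:
```python
variables = mf.init(rand.PRNGKey(0), X)  # returns {'params': {'s1': ...}}
out = mf.apply(variables, X)             # works: the module is bound while apply runs setup()
```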
| In #1072, I tried fixing this by creating a custom error class for Module AttributeError, but after a discussion with @avital we found that this is not a very natural solution because users expect a normal `AttributeError` when they are trying to access an unknown attribute in a Module.
Solving this issue is a bit more work, and probably not our highest priority. For now I'm lowering the priority of this issue because it seems we won't fix it soon, and we can raise it again when it turns out that more users run into this problem.
Also unassigning myself since I don't plan to work on this soon.
I'd like to take this issue. A simple solution would be to customize the current error message to suggest calling `apply` if `self.scope is None`. I'll create a PR as this is a rather simple fix, if we want to tackle it a different way we can discuss there.
This already sounds an order of magnitude better than the current situation | 2022-09-06T19:04:40 |
|
google/flax | 2,446 | google__flax-2446 | [
"656"
] | fdd1d6fef0dfea785a10b1f5ebd1635cc2509c2e | diff --git a/flax/core/lift.py b/flax/core/lift.py
--- a/flax/core/lift.py
+++ b/flax/core/lift.py
@@ -315,13 +315,13 @@ def swap(target):
@dataclasses.dataclass(frozen=True)
class In(Generic[T]):
"""Specifies a variable collection should only be lifted as input."""
- axis: Any # pytype does not support generic variable annotation
+ axis: T
@dataclasses.dataclass(frozen=True)
class Out(Generic[T]):
"""Specifies a variable collection should only be lifted as output."""
- axis: Any # pytype does not support generic variable annotation
+ axis: T
def _split_in_out_axes(xs: Mapping[CollectionFilter, Any]):
| Pytype attribute generics tracker
Pytype currently doesn't support generic types in class attribute annotations:
```
class Foo:
bar: T
```
As a workaround we use `Any` for the attribute instead. This workaround should be reverted once the functionality is implemented
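A standalone illustration of the workaround (generic names, not the actual flax classes):
```python
from typing import Any, Generic, TypeVar

T = TypeVar('T')

class Foo(Generic[T]):
  bar: Any  # intended annotation is `T`; `Any` is the pytype-friendly placeholder for now
```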
| @jheek -- Is there a public bug we can reference tracking the current limitation in pytype?
@jheek is this still relevant?
Yes we are still using this workaround | 2022-09-07T10:48:15 |
|
google/flax | 2,457 | google__flax-2457 | [
"2452"
] | e320e11c6c20d8692ae2292107fefcd2aa5f20d8 | diff --git a/flax/core/lift.py b/flax/core/lift.py
--- a/flax/core/lift.py
+++ b/flax/core/lift.py
@@ -1138,6 +1138,7 @@ def checkpoint(fn: Callable[..., Any],
rngs: PRNGSequenceFilter = True,
concrete: bool = False,
prevent_cse: bool = True,
+ static_argnums: Union[int, Tuple[int, ...]] = (),
policy: Optional[Callable[..., bool]] = None,
) -> Callable[..., Any]:
"""Lifted version of ``jax.checkpoint``.
@@ -1164,15 +1165,21 @@ def checkpoint(fn: Callable[..., Any],
``pmap``, CSE can defeat the purpose of this decorator. But in some
settings, like when used inside a ``scan``, this CSE prevention mechanism
is unnecessary, in which case ``prevent_cse`` can be set to False.
+ static_argnums: Optional, int or sequence of ints, indicates which argument
+ values on which to specialize for tracing and caching purposes. Specifying
+ arguments as static can avoid ConcretizationTypeErrors when tracing, but
+ at the cost of more retracing overheads.
policy: Experimental checkpoint policy, see ``jax.checkpoint``.
Returns:
A wrapped version of ``fn``. When computing gradients intermediate
computations will be re-computed when computing gradients.
"""
def inner(scope_fn, repack_fn, variable_groups, rng_groups, *args, **kwargs):
+ # add 2 to each static_argnums because we add two initial arguments to rematted
+ static_argnums_ = jax.tree_util.tree_map(lambda x: x + 2, static_argnums)
@functools.partial(jax.remat,
- concrete=concrete, prevent_cse=prevent_cse,
- policy=policy)
+ concrete=concrete, static_argnums=static_argnums_,
+ prevent_cse=prevent_cse, policy=policy)
@functools.wraps(fn)
def rematted(variable_groups, rng_groups, *args, **kwargs):
scope = scope_fn(variable_groups, rng_groups)
diff --git a/flax/linen/transforms.py b/flax/linen/transforms.py
--- a/flax/linen/transforms.py
+++ b/flax/linen/transforms.py
@@ -572,6 +572,7 @@ def checkpoint(target: Target,
rngs: lift.PRNGSequenceFilter = True,
concrete: bool = False,
prevent_cse: bool = True,
+ static_argnums: Union[int, Tuple[int, ...]] = (),
policy: Optional[Callable[..., bool]] = None,
methods=None) -> Target:
"""Lifted version of ``jax.checkpoint``.
@@ -599,6 +600,10 @@ def checkpoint(target: Target,
``pmap``, CSE can defeat the purpose of this decorator. But in some
settings, like when used inside a ``scan``, this CSE prevention mechanism
is unnecessary, in which case ``prevent_cse`` should be set to False.
+ static_argnums: Optional, int or sequence of ints, indicates which argument
+ values on which to specialize for tracing and caching purposes. Specifying
+ arguments as static can avoid ConcretizationTypeErrors when tracing, but
+ at the cost of more retracing overheads.
policy: Experimental checkpoint policy, see ``jax.checkpoint``.
methods: If `target` is a `Module`, the methods of `Module` to checkpoint.
@@ -606,9 +611,13 @@ def checkpoint(target: Target,
A wrapped version of ``target``. When computing gradients intermediate
computations will be re-computed on the backward pass.
"""
+ # subtract 1 from each static_argnums because 'self' is not passed to the
+ # lifted function
+ static_argnums = jax.tree_util.tree_map(lambda x: x - 1, static_argnums)
return lift_transform(
lift.checkpoint, target,
variables=variables, rngs=rngs, concrete=concrete,
+ static_argnums=static_argnums,
prevent_cse=prevent_cse, policy=policy,
methods=methods)
| diff --git a/tests/linen/linen_transforms_test.py b/tests/linen/linen_transforms_test.py
--- a/tests/linen/linen_transforms_test.py
+++ b/tests/linen/linen_transforms_test.py
@@ -145,6 +145,73 @@ def __call__(self, input, apply_relu : bool = False):
# This next line crashes with a concretization error
_ = jax.grad(lambda x: remat_model.apply(p, x, apply_relu=True))(x)
+ def test_remat_static_argnums(self):
+ test = self
+
+ class Foo(nn.Module):
+ train_is_static: bool
+
+ @nn.compact
+ def __call__(self, inputs, train: bool):
+ if self.train_is_static:
+ test.assertTrue(isinstance(train, bool))
+ else:
+ test.assertTrue(isinstance(train, jnp.ndarray))
+
+ return nn.Dense(3, use_bias=False)(inputs)
+
+ # set train as a static argument
+ FooRemat = nn.remat(Foo, static_argnums=(2,))
+ foo = FooRemat(train_is_static=True)
+
+ x = jnp.empty((1, 2))
+ variables = foo.init(random.PRNGKey(0), x, True)
+ y = foo.apply(variables, x, False)
+ self.assertEqual(y.shape, (1, 3))
+
+ # set train as a non-static arguments
+ FooRemat = nn.remat(Foo, static_argnums=())
+ foo = FooRemat(train_is_static=False)
+
+ variables = foo.init(random.PRNGKey(0), x, True)
+ y = foo.apply(variables, x, False)
+ self.assertEqual(y.shape, (1, 3))
+
+ def test_remat_decorator_static_argnums(self):
+ test = self
+
+ class FooTrainStatic(nn.Module):
+ @partial(nn.remat, static_argnums=(2,))
+ @nn.compact
+ def __call__(self, inputs, train: bool):
+ test.assertTrue(isinstance(train, bool))
+
+ return nn.Dense(3, use_bias=False)(inputs)
+
+ # set train as a static argument
+ foo = FooTrainStatic()
+
+ x = jnp.empty((1, 2))
+ variables = foo.init(random.PRNGKey(0), x, True)
+ y = foo.apply(variables, x, False)
+ self.assertEqual(y.shape, (1, 3))
+
+ class FooTrainDynamic(nn.Module):
+ @partial(nn.remat, static_argnums=())
+ @nn.compact
+ def __call__(self, inputs, train: bool):
+ test.assertTrue(isinstance(train, jnp.ndarray))
+
+ return nn.Dense(3, use_bias=False)(inputs)
+
+ # set train as a non-static arguments
+ foo = FooTrainDynamic()
+
+ variables = foo.init(random.PRNGKey(0), x, True)
+ y = foo.apply(variables, x, False)
+ self.assertEqual(y.shape, (1, 3))
+
+
def test_vmap(self):
key1, key2 = random.split(random.PRNGKey(3), 2)
x = random.uniform(key1, (4, 4))
| flax.linen.remat with concrete=True doesn't work with jax 0.3.17
### Problem you have encountered:
This may already be on the Flax team's radar, but I noticed that when using flax.linen.remat, setting concrete=True doesn't work with Jax 0.3.17, for the reasons discussed [here](https://jax.readthedocs.io/en/latest/jep/11830-new-remat-checkpoint.html).
As of version 0.6.0: flax.linen.remat
(1) passes the argument ```concrete=True``` to ```jax.remat```, which leads to an error message.
(2) does not accept an argument ```static_argnums```, as used in the latest ```jax.remat```.
Interestingly, pip's constraint solver did not seem to be aware of this incompatibility; running ```pip install jax, flax``` allowed me to install flax==0.6.0 with jax==0.3.17, leading to the observed problem.
As a workaround, I've downgraded to jax==0.3.16, and am running ```jax.config.update("jax_new_checkpoint", False)``` at the top of my scripts, as suggested by the link above.
### What you expected to happen:
To ensure compatibility with Jax's remat functionality, future versions of flax.linen.remat would ideally accept an argument ```static_argnums```, which can be passed to the jax.remat implementation.
In the traceback triggered by Flax passing ```concrete=True```, the Jax developers also remark that
> If jax.numpy operations need to be performed on static arguments, we can use the `jax.ensure_compile_time_eval()` context manager.
which may also be relevant to the future design of flax.linen.remat.
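For concreteness, a minimal sketch of how such a `static_argnums` argument might look in use (it mirrors the usage exercised in the test added with the fix above; the module and layer sizes here are made up):
```python
import functools
import flax.linen as nn

class Block(nn.Module):
  # index 2 refers to `train`, counting `self` as 0 and `x` as 1
  @functools.partial(nn.remat, static_argnums=(2,))
  @nn.compact
  def __call__(self, x, train: bool):
    x = nn.Dense(100, use_bias=False)(x)
    if train:  # plain Python branching is fine because `train` stays a static bool
      x = nn.Dropout(rate=0.1, deterministic=False)(x)  # needs a 'dropout' rng at apply time
    return x
```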
### Steps to reproduce:
The problem can be reproduced by running the script
```
import flax.linen as nn
import jax
class Foo(nn.Module):
def setup(self):
self.linear = nn.remat(nn.Dense, concrete=True)(100, use_bias=False)
def __call__(self, inputs):
return self.linear(inputs)
if __name__ == '__main__':
rng = jax.random.PRNGKey(0)
rng, sk1, sk2 = jax.random.split(rng, 3)
foo = Foo()
input = jax.random.normal(sk1, [1, 10])
params = foo.init({"params": sk2}, input)["params"]
out = foo.apply({"params": params}, input)
```
### Logs, error messages, etc:
When I run the above script, I obtain the following traceback:
<details>
<summary>toggle to show</summary>
```
Traceback (most recent call last):
File "/Users/lucaslingle/PycharmProjects/project123/src/project123/nn/generic_module.py", line 17, in <module>
params = foo.init({"params": sk2}, input)["params"]
File "/Users/lucaslingle/opt/miniconda3/envs/project123/lib/python3.10/site-packages/jax/_src/traceback_util.py", line 162, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/Users/lucaslingle/opt/miniconda3/envs/project123/lib/python3.10/site-packages/flax/linen/module.py", line 1273, in init
_, v_out = self.init_with_output(
File "/Users/lucaslingle/opt/miniconda3/envs/project123/lib/python3.10/site-packages/jax/_src/traceback_util.py", line 162, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/Users/lucaslingle/opt/miniconda3/envs/project123/lib/python3.10/site-packages/flax/linen/module.py", line 1229, in init_with_output
return init_with_output(
File "/Users/lucaslingle/opt/miniconda3/envs/project123/lib/python3.10/site-packages/flax/core/scope.py", line 897, in wrapper
return apply(fn, mutable=mutable, flags=init_flags)({}, *args, rngs=rngs,
File "/Users/lucaslingle/opt/miniconda3/envs/project123/lib/python3.10/site-packages/flax/core/scope.py", line 865, in wrapper
y = fn(root, *args, **kwargs)
File "/Users/lucaslingle/opt/miniconda3/envs/project123/lib/python3.10/site-packages/flax/linen/module.py", line 1647, in scope_fn
return fn(module.clone(parent=scope), *args, **kwargs)
File "/Users/lucaslingle/opt/miniconda3/envs/project123/lib/python3.10/site-packages/flax/linen/module.py", line 361, in wrapped_module_method
return self._call_wrapped_method(fun, args, kwargs)
File "/Users/lucaslingle/opt/miniconda3/envs/project123/lib/python3.10/site-packages/flax/linen/module.py", line 657, in _call_wrapped_method
y = fun(self, *args, **kwargs)
File "/Users/lucaslingle/PycharmProjects/project123/src/project123/nn/generic_module.py", line 9, in __call__
return self.linear(inputs)
File "/Users/lucaslingle/opt/miniconda3/envs/project123/lib/python3.10/site-packages/flax/linen/transforms.py", line 316, in wrapped_fn
ret = trafo_fn(module_scopes, *args, **kwargs)
File "/Users/lucaslingle/opt/miniconda3/envs/project123/lib/python3.10/site-packages/flax/core/lift.py", line 213, in wrapper
y, out_variable_groups_xs_t = fn(
File "/Users/lucaslingle/opt/miniconda3/envs/project123/lib/python3.10/site-packages/flax/core/lift.py", line 1177, in inner
def rematted(variable_groups, rng_groups, *args, **kwargs):
File "/Users/lucaslingle/opt/miniconda3/envs/project123/lib/python3.10/site-packages/jax/_src/api.py", line 3084, in checkpoint
raise NotImplementedError(msg)
jax._src.traceback_util.UnfilteredStackTrace: NotImplementedError: The 'concrete' option to jax.checkpoint / jax.remat is deprecated; in its place, you can use its `static_argnums` option, and if necessary the `jax.ensure_compile_time_eval()` context manager.
For example, if using `concrete=True` for an `is_training` flag:
from functools import partial
@partial(jax.checkpoint, concrete=True)
def foo(x, is_training):
if is_training:
return f(x)
else:
return g(x)
replace it with a use of `static_argnums`:
@partial(jax.checkpoint, static_argnums=(1,))
def foo(x, is_training):
...
If jax.numpy operations need to be performed on static arguments, we can use the `jax.ensure_compile_time_eval()` context manager. For example, we can replace this use of `concrete=True`
:
@partial(jax.checkpoint, concrete=True)
def foo(x, y):
if y > 0:
return f(x)
else:
return g(x)
with this combination of `static_argnums` and `jax.ensure_compile_time_eval()`:
@partial(jax.checkpoint, static_argnums=(1,))
def foo(x, y):
with jax.ensure_compile_time_eval():
y_pos = y > 0
if y_pos:
return f(x)
else:
return g(x)
See https://jax.readthedocs.io/en/latest/jep/11830-new-remat-checkpoint.html
The stack trace below excludes JAX-internal frames.
The preceding is the original exception that occurred, unmodified.
--------------------
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/lucaslingle/PycharmProjects/project123/src/project123/nn/generic_module.py", line 17, in <module>
params = foo.init({"params": sk2}, input)["params"]
File "/Users/lucaslingle/PycharmProjects/project123/src/project123/nn/generic_module.py", line 9, in __call__
return self.linear(inputs)
NotImplementedError: The 'concrete' option to jax.checkpoint / jax.remat is deprecated; in its place, you can use its `static_argnums` option, and if necessary the `jax.ensure_compile_time_eval()` context manager.
For example, if using `concrete=True` for an `is_training` flag:
from functools import partial
@partial(jax.checkpoint, concrete=True)
def foo(x, is_training):
if is_training:
return f(x)
else:
return g(x)
replace it with a use of `static_argnums`:
@partial(jax.checkpoint, static_argnums=(1,))
def foo(x, is_training):
...
If jax.numpy operations need to be performed on static arguments, we can use the `jax.ensure_compile_time_eval()` context manager. For example, we can replace this use of `concrete=True`
:
@partial(jax.checkpoint, concrete=True)
def foo(x, y):
if y > 0:
return f(x)
else:
return g(x)
with this combination of `static_argnums` and `jax.ensure_compile_time_eval()`:
@partial(jax.checkpoint, static_argnums=(1,))
def foo(x, y):
with jax.ensure_compile_time_eval():
y_pos = y > 0
if y_pos:
return f(x)
else:
return g(x)
See https://jax.readthedocs.io/en/latest/jep/11830-new-remat-checkpoint.html
```
</details>
### System information
- OS Platform and Distribution: ```MacOS Catalina 10.15.7```
- Flax, jax, jaxlib versions: ```flax==0.6.0, jax==0.3.17, jaxlib==0.3.15```
- Python version: ```3.10```
- GPU/TPU model and memory: ```N/A```
- CUDA version (if applicable): ```N/A```
| Hey @lucaslingle, thanks for bringing this up! I've opened #2457 with a fix for this. | 2022-09-12T18:58:41 |
google/flax | 2,492 | google__flax-2492 | [
"1004"
] | ad331b92c2c258bc6190275b70050e505318d862 | diff --git a/flax/linen/stochastic.py b/flax/linen/stochastic.py
--- a/flax/linen/stochastic.py
+++ b/flax/linen/stochastic.py
@@ -27,6 +27,11 @@
class Dropout(Module):
"""Create a dropout layer.
+ Note: When using :meth:`Module.apply() <flax.linen.Module.apply>`, make sure
+ to include an RNG seed named `'dropout'`. For example::
+
+ model.apply({'params': params}, inputs=inputs, train=True, rngs={'dropout': dropout_rng})`
+
Attributes:
rate: the dropout probability. (_not_ the keep rate!)
broadcast_dims: dimensions that will share the same dropout mask
| Improve documentation for `Dropout` and `rngs` argument in `linen.Module.apply()`
Here is an example of `Dropout` in a model definition:
https://github.com/google/flax/blob/d068512a932da3e05b822790a591bac391aeab36/examples/nlp_seq/models.py#L211
Here is the `apply()`, where `rngs` is passed in
https://github.com/google/flax/blob/d068512a932da3e05b822790a591bac391aeab36/examples/nlp_seq/train.py#L206-L207
However the `rng` is not very clearly explained in `apply()`
https://github.com/google/flax/blob/615f40be774e7ed66fd344e8291ac0d48ebcef7d/flax/linen/module.py#L749
The `rngs` argument seems to be passed on to `flax/core/scope.py`
Here is the code for `Dropout` (linen)
https://github.com/google/flax/blob/9b4807840c5cb26ef5e29028e3558d404aee00a0/flax/linen/stochastic.py#L56-L57
Here is the code for `make_rng()`
https://github.com/google/flax/blob/615f40be774e7ed66fd344e8291ac0d48ebcef7d/flax/core/scope.py#L441-L447
The documentation for `rngs` in `apply()` should have a (pointer to a) list of names of possible rngs.
And the documentation for `Dropout` should mention how to pass in the rng using `apply()`, without passing it in directly like `Dropout()(x, rng=rng)`.
It should probably also mention that `make_rng()` folds in the rng, so each dropout layer will use a different rng if there are multiple dropout layers.
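For concreteness, a minimal end-to-end sketch of the pattern the docs should spell out (layer sizes are arbitrary; the `apply` call mirrors the one-liner added in the patch above):
```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
  @nn.compact
  def __call__(self, x, train: bool):
    x = nn.Dense(16)(x)
    # each Dropout layer derives its own stream from the 'dropout' rng via make_rng/fold_in
    x = nn.Dropout(rate=0.5, deterministic=not train)(x)
    return nn.Dense(1)(x)

model = MLP()
x = jnp.ones((4, 8))
params_rng, dropout_rng = jax.random.split(jax.random.PRNGKey(0))
variables = model.init({'params': params_rng, 'dropout': dropout_rng}, x, train=True)
y = model.apply(variables, x, train=True, rngs={'dropout': dropout_rng})
```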
| We could mention that `Dropout()` requires an `rng` with the name `dropout` in its module documentation. The code is currently very short and it's easily visible, but I agree it would be more discoverable if it was mentioned in the class pydoc as well.
I also think that the `Module.apply()` documentation could be extended with something like
```
rngs: The rngs for the variable collections. For example :class:`flax.linen.stochastic.Dropout`
requires an additional rng named `dropout`.
```
would be reasonable, since this is quite common and illustrative for newcomers.
WDYT @marcvanzee, who has been thinking about the right level of verbosity in our docs?
@cccntu why don't you give it a try updating the documentation and sending a PR?
@andsteing Thanks for the suggestion. I will give it a try in a few days. :) | 2022-09-29T02:31:16 |
|
google/flax | 2,496 | google__flax-2496 | [
"667"
] | 69163b90c5f40e86055a0e87766360dc7ae9b8fd | diff --git a/flax/errors.py b/flax/errors.py
--- a/flax/errors.py
+++ b/flax/errors.py
@@ -538,6 +538,37 @@ class CallCompactUnboundModuleError(FlaxError):
def __init__(self):
super().__init__('Can\'t call compact methods on unbound modules')
+class CallSetupUnboundModuleError(FlaxError):
+ """
+ This error occurs when you are trying to call `.setup()` directly. For instance, the
+ error will be raised when trying to run this code::
+
+ from flax import linen as nn
+ import jax.numpy as jnp
+
+ class MyModule(nn.Module):
+ def setup(self):
+ self.submodule = MySubModule()
+
+ module = MyModule()
+ module.setup() # <-- ERROR!
+ submodule = module.submodule
+
+ In general you shouldn't call `.setup()` yourself, if you need to get access
+ to a field or submodule defined inside `setup` you can instead create a function
+ to extract it and pass it to `nn.apply`::
+
+ # setup() will be called automatically by `nn.apply`
+ def get_submodule(module):
+ return module.submodule.clone() # avoid leaking the Scope
+
+ empty_variables = {} # you can also use the real variables
+ submodule = nn.apply(get_submodule, module)(empty_variables)
+
+ """
+ def __init__(self):
+ super().__init__('Can\'t call compact methods on unbound modules')
+
class InvalidCheckpointError(FlaxError):
"""
diff --git a/flax/linen/module.py b/flax/linen/module.py
--- a/flax/linen/module.py
+++ b/flax/linen/module.py
@@ -681,6 +681,8 @@ def _call_wrapped_method(self, fun, args, kwargs):
add_call_info = not is_setup_method and len(_context.call_info_stack) > 0
# We lazily call setup() only when needed.
if is_setup_method:
+ if self.scope is None:
+ raise errors.CallSetupUnboundModuleError()
is_recurrent = self._state.in_setup
self._state.in_setup = True
else:
| Directly calling `module.setup()` should raise an exception
`flax.linen.Module.setup()` should not be called directly because it needs a `flax.linen.Scope` to be set up properly.
Since #653 there is no more exception risen when a user inadvertently calls `flax.linen.Module.setup()` (though there probably will be error messages if the user tries to access the scope...)
| An incomplete thought below:
Curiously enough, this recent [discussion](https://github.com/google/flax/discussions/665#discussioncomment-136656) made me think about this and wonder... For most Flax modules (that ultimately define parameters in a compact method), if you define submodules in `setup` but not parameters, I think it may be safe to call `setup` before a module is bound.
In fact, right now we have weird behavior where in Roland's example you can't run `ResNet().backbone`, even though it should probably work just fine (and still be an unbound module).
I'm not yet sure what the best answer is, but perhaps a workaround for Roland for now is actually to explicitly do:
```
resnet = ResNet()
resnet.setup()
resnet.backbone # does this work?
```
### Update
Currently @avital's example does not work:
```python
resnet = ResNet()
resnet.setup()
resnet.backbone # does this work?
```
You now get the following error:
```
AssertionError: Trying to register submodules on unbound scope.
```
This behaviour is documented in [The Module lifecycle guide](https://flax.readthedocs.io/en/latest/advanced_topics/module_lifecycle.html#setup). We could go ahead and raise an error if `setup` is called on an unbound Module and add a message with some pointers on how to correctly extract submodules defined in `setup`.
### Proposal
We could recommend the use of `bind` for this use-case, even a bind with empty variables could work:
```python
resnet = ResNet()
backbone = resnet.bind({}).backbone.clone()
```
We could document this pattern in an existing guide or add a short guide about this topic of "Accessing Submodules".
#### Future ideas
If this pattern is sound we could make it a bit more user-friendly in the future via a `.submodules` property that would automate the previous pattern:
```python
resnet = ResNet()
backbone = resnet.submodules.backbone
```
| 2022-10-03T19:55:58 |
|
google/flax | 2,517 | google__flax-2517 | [
"2463"
] | b8d1162b9deff0002c66a0723425660919d7f1ee | diff --git a/flax/linen/stochastic.py b/flax/linen/stochastic.py
--- a/flax/linen/stochastic.py
+++ b/flax/linen/stochastic.py
@@ -29,7 +29,7 @@ class Dropout(Module):
Note: When using :meth:`Module.apply() <flax.linen.Module.apply>`, make sure
to include an RNG seed named `'dropout'`. For example::
-
+
model.apply({'params': params}, inputs=inputs, train=True, rngs={'dropout': dropout_rng})`
Attributes:
diff --git a/flax/struct.py b/flax/struct.py
--- a/flax/struct.py
+++ b/flax/struct.py
@@ -98,7 +98,7 @@ def create(cls, kernel):
# check if already a flax dataclass
if '_flax_dataclass' in clz.__dict__:
return clz
-
+
data_clz = dataclasses.dataclass(frozen=True)(clz)
meta_fields = []
data_fields = []
| diff --git a/tests/struct_test.py b/tests/struct_test.py
--- a/tests/struct_test.py
+++ b/tests/struct_test.py
@@ -67,7 +67,7 @@ def test_keypath_error(self):
raise e('in_axes')
def test_double_wrap_no_op(self):
-
+
class A:
a: int
| [Accessibility] Enable EPUB output on ReadTheDocs
Currently there is only HTML output [enabled](https://readthedocs.org/projects/flax/downloads/). It would be great if EPUB and PDF could also be enabled.
| Mind if I do it? It's a really small fix, just gotta add
```
- epub
- pdf
```
To the .readthedocs.yml under `formats:` | 2022-10-10T18:05:35 |