| repo (string, 856 classes) | pull_number (int64, 3 to 127k) | instance_id (string, length 12 to 58) | issue_numbers (sequence, length 1 to 5) | base_commit (string, length 40) | patch (string, length 67 to 1.54M) | test_patch (string, length 0 to 107M) | problem_statement (string, length 3 to 307k) | hints_text (string, length 0 to 908k) | created_at (timestamp[s]) |
|---|---|---|---|---|---|---|---|---|---|
aws/aws-sam-cli | 4,300 | aws__aws-sam-cli-4300 | [
"3258"
] | 03807f126586e63852a797731984be8425f27b9e | diff --git a/samcli/lib/build/app_builder.py b/samcli/lib/build/app_builder.py
--- a/samcli/lib/build/app_builder.py
+++ b/samcli/lib/build/app_builder.py
@@ -712,7 +712,10 @@ def _get_build_options(
return normalized_build_props
_build_options: Dict = {
- "go": {"artifact_executable_name": handler},
+ "go": {
+ "artifact_executable_name": handler,
+ "trim_go_path": build_props.get("TrimGoPath", False),
+ },
"provided": {"build_logical_id": function_name},
"nodejs": {"use_npm_ci": build_props.get("UseNpmCi", False)},
}
| diff --git a/tests/unit/lib/build_module/test_app_builder.py b/tests/unit/lib/build_module/test_app_builder.py
--- a/tests/unit/lib/build_module/test_app_builder.py
+++ b/tests/unit/lib/build_module/test_app_builder.py
@@ -1906,15 +1906,31 @@ def test_invalid_metadata_cases(self, metadata, expected_output):
@parameterized.expand(
[
- ("go", "", {"artifact_executable_name": "app.handler"}),
- ("python", "", None),
- ("nodejs", "npm", {"use_npm_ci": True}),
- ("esbuild", "npm-esbuild", {"entry_points": ["app"], "use_npm_ci": True}),
- ("provided", "", {"build_logical_id": "Function"}),
+ ("go", "", {"TrimGoPath": True}, {"artifact_executable_name": "app.handler", "trim_go_path": True}),
+ ("python", "", {}, None),
+ ("nodejs", "npm", {"UseNpmCi": True}, {"use_npm_ci": True}),
+ ("esbuild", "npm-esbuild", {"UseNpmCi": True}, {"entry_points": ["app"], "use_npm_ci": True}),
+ ("provided", "", {}, {"build_logical_id": "Function"}),
]
)
- def test_get_options_various_languages_dependency_managers(self, language, dependency_manager, expected_options):
- build_properties = {"UseNpmCi": True}
+ def test_get_options_various_languages_dependency_managers(
+ self, language, dependency_manager, build_properties, expected_options
+ ):
+ metadata = {"BuildProperties": build_properties}
+ options = ApplicationBuilder._get_build_options(
+ "Function", language, "app.handler", dependency_manager, metadata
+ )
+ self.assertEqual(options, expected_options)
+
+ @parameterized.expand(
+ [
+ ("go", "", {}, {"artifact_executable_name": "app.handler", "trim_go_path": False}),
+ ("nodejs", "npm", {}, {"use_npm_ci": False}),
+ ]
+ )
+ def test_get_default_options_various_languages_dependency_managers(
+ self, language, dependency_manager, build_properties, expected_options
+ ):
metadata = {"BuildProperties": build_properties}
options = ApplicationBuilder._get_build_options(
"Function", language, "app.handler", dependency_manager, metadata
| sam build includes absolute paths in Golang binaries
### Description:
<!-- Briefly describe the bug you are facing.-->
When you run ```sam build``` and ```sam package``` in two instances of the same project with different absolute paths and upload to the same S3 bucket, the compiled binaries will be uploaded both times, even though the binaries are identical.
According to #2110 this has been fixed, but I can still reproduce the issue on Windows.
### Steps to reproduce:
<!-- Provide detailed steps to replicate the bug, including steps from third party tools (CDK, etc.) -->
1. Clone [this repository](https://github.com/suroh1994/sampackagesample) into a directory on your local Windows machine.
1. ```cd``` into the directory.
1. Run ```sam build```.
1. Run ```sam package --s3-bucket YOUR_BUCKETNAME```
1. Clone the same repository into a second directory on your local Windows machine.
1. Copy the second directory into a third directory using ```xcopy seconddirectory thirddirectory /s /e```.
1. ```cd``` into the second directory.
1. Run ```sam build```.
1. Run ```sam package --s3-bucket YOUR_BUCKETNAME```
1. ```cd``` into the third directory.
1. Run ```sam build```.
1. Run ```sam package --s3-bucket YOUR_BUCKETNAME```
### Observed result:
<!-- Please provide command output with `--debug` flag set. -->
The calculated hashes change when the parent directory changes. This results in the compiled binaries being re-uploaded for every directory, although no files have been modified.
### Expected result:
<!-- Describe what you expected. -->
Hashes should not depend on the parent directory. Compiled binaries should only be uploaded when they have been modified.
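One way to observe this is to hash the built artifact in each checkout yourself; identical sources should produce identical digests. A minimal check (plain Python, hypothetical artifact paths, not part of SAM CLI):
```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Digest of a built artifact, for comparing across checkouts."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# With the bug present, these differ even though the sources are identical.
print(sha256_of("seconddirectory/.aws-sam/build/HelloWorldFunction/handler"))
print(sha256_of("thirddirectory/.aws-sam/build/HelloWorldFunction/handler"))
```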
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Windows 10 Pro Version 20H2 (Build 19042.1165)
2. `sam --version`: 1.31.0
3. AWS region: eu-central-1
### Further notes:
I can imagine this is also the cause for the issue #2872, where running ```sam package``` in an ephemeral container results in the files being uploaded every time.
| I've played around with the source code and it seems the hashes are correct; instead, the compiled binaries are not identical. But I have not yet identified why they differ.
I believe I have identified the issue. When I build my binaries using ```sam build```, the resulting binary is identical to ```go build```. But when I use ```go build -trimpath``` the binary is different, yet the binaries in both directories are now identical. I'm looking into what other metadata is stored in the binaries that could lead to the binaries differing between two CodeBuild runs.
I solved the issue successfully for both local builds and builds generated by CodeBuild. As my last post suggested, the issue came down to ```go build``` embedding absolute paths into the binary. So to resolve the issue I did the following:
To have ```sam build``` use the ```-trimpath``` flag, I set up makefiles for every Lambda function and added these lines to each function:
```
Metadata:
  BuildMethod: makefile
```
Then I added a ```Makefile``` file to each directory listed as ```CodeUri``` with the following content:
```
build-Logical-Resource-Id:
	go build -trimpath -o "$(ARTIFACTS_DIR)/your-handler-name"
```
For more information on how to use a makefile, see the [documentation](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-build.html).
A complete example can be found in the [resolved branch](https://github.com/suroh1994/sampackagesample/tree/resolved) of my repo.
Going forward, it might be interesting to consider using the ```-trimpath``` flag by default. I do not know how important it is to keep this information in the binaries, or which variant would more commonly lead to issues.
Updated title to reflect the found issue.
@suroh1994 Thanks for the info, it is very valuable. We will consider adding `-trimpath` by default if that avoids such issues
Any update on if trimpath can be included by default? My group is running into this issue atm.
@jaglade This will be a welcome contribution to the `aws-lambda-builders` golang build workflow and we can make this feature opt-in (at least at this point, I have not considered the ramifications of the change in behavior causing breaks).
@sriram-mv I have opened a [pull request](https://github.com/aws/aws-lambda-builders/pull/389) in aws-lambda-builders allowing this option to be passed in.
Once this PR is reviewed and merged I believe this project can be updated [here](https://github.com/aws/aws-sam-cli/blob/develop/samcli/lib/build/app_builder.py#L715) to something along the lines of:
```python
_build_options: Dict = {
    "go": {
        "artifact_executable_name": handler,
        "trim_go_path": build_props.get("TrimGoPath", False),
    },
    "provided": {"build_logical_id": function_name},
    "nodejs": {"use_npm_ci": build_props.get("UseNpmCi", False)},
}
```
I _believe_ this will allow SAM template users to define functions like the following:
```yaml
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: functions/hello-world/
      Handler: main
      Runtime: go1.x
      Architectures:
        - x86_64
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get
    Metadata:
      BuildProperties:
        TrimGoPath: True
```
Which fixes the issue described above by @suroh1994 and @jaglade | 2022-10-12T17:27:19 |
aws/aws-sam-cli | 4,360 | aws__aws-sam-cli-4360 | [
"4355"
] | bb30dbae2ebf0da2b15c2517cd37f42f7f7396fa | diff --git a/samcli/lib/samlib/resource_metadata_normalizer.py b/samcli/lib/samlib/resource_metadata_normalizer.py
--- a/samcli/lib/samlib/resource_metadata_normalizer.py
+++ b/samcli/lib/samlib/resource_metadata_normalizer.py
@@ -186,7 +186,7 @@ def _extract_image_asset_metadata(metadata):
"""
asset_path = Path(metadata.get(ASSET_PATH_METADATA_KEY, ""))
dockerfile_path = Path(metadata.get(ASSET_DOCKERFILE_PATH_KEY), "")
- dockerfile, path_from_asset = dockerfile_path.stem, dockerfile_path.parent
+ dockerfile, path_from_asset = dockerfile_path.name, dockerfile_path.parent
dockerfile_context = str(Path(asset_path.joinpath(path_from_asset)))
return {
SAM_METADATA_DOCKERFILE_KEY: dockerfile,
| diff --git a/tests/integration/buildcmd/test_build_cmd.py b/tests/integration/buildcmd/test_build_cmd.py
--- a/tests/integration/buildcmd/test_build_cmd.py
+++ b/tests/integration/buildcmd/test_build_cmd.py
@@ -86,6 +86,34 @@ def test_with_default_requirements(self, runtime, use_container):
self.built_template, self.FUNCTION_LOGICAL_ID_IMAGE, self._make_parameter_override_arg(overrides), expected
)
+ @parameterized.expand([("3.6", False), ("3.7", False), ("3.8", False), ("3.9", False)])
+ @pytest.mark.flaky(reruns=3)
+ def test_with_dockerfile_extension(self, runtime, use_container):
+ _tag = f"{random.randint(1,100)}"
+ overrides = {
+ "Runtime": runtime,
+ "Handler": "main.handler",
+ "DockerFile": "Dockerfile.production",
+ "Tag": _tag,
+ }
+ cmdlist = self.get_command_list(use_container=use_container, parameter_overrides=overrides)
+
+ LOG.info("Running Command: ")
+ LOG.info(cmdlist)
+ run_command(cmdlist, cwd=self.working_dir)
+
+ self._verify_image_build_artifact(
+ self.built_template,
+ self.FUNCTION_LOGICAL_ID_IMAGE,
+ "ImageUri",
+ f"{self.FUNCTION_LOGICAL_ID_IMAGE.lower()}:{_tag}",
+ )
+
+ expected = {"pi": "3.14"}
+ self._verify_invoke_built_function(
+ self.built_template, self.FUNCTION_LOGICAL_ID_IMAGE, self._make_parameter_override_arg(overrides), expected
+ )
+
@pytest.mark.flaky(reruns=3)
def test_intermediate_container_deleted(self):
_tag = f"{random.randint(1, 100)}"
diff --git a/tests/integration/testdata/buildcmd/PythonImage/Dockerfile.production b/tests/integration/testdata/buildcmd/PythonImage/Dockerfile.production
new file mode 100644
--- /dev/null
+++ b/tests/integration/testdata/buildcmd/PythonImage/Dockerfile.production
@@ -0,0 +1,15 @@
+ARG BASE_RUNTIME
+
+FROM public.ecr.aws/lambda/python:$BASE_RUNTIME
+
+ARG FUNCTION_DIR="/var/task"
+
+RUN mkdir -p $FUNCTION_DIR
+
+COPY main.py $FUNCTION_DIR
+
+COPY __init__.py $FUNCTION_DIR
+COPY requirements.txt $FUNCTION_DIR
+
+RUN python -m pip install -r $FUNCTION_DIR/requirements.txt -t $FUNCTION_DIR
+
diff --git a/tests/unit/lib/samlib/test_resource_metadata_normalizer.py b/tests/unit/lib/samlib/test_resource_metadata_normalizer.py
--- a/tests/unit/lib/samlib/test_resource_metadata_normalizer.py
+++ b/tests/unit/lib/samlib/test_resource_metadata_normalizer.py
@@ -112,6 +112,41 @@ def test_replace_all_resources_that_contain_image_metadata(self):
self.assertEqual(docker_build_args, template_data["Resources"]["Function1"]["Metadata"]["DockerBuildArgs"])
self.assertEqual("Function1", template_data["Resources"]["Function1"]["Metadata"]["SamResourceId"])
+ def test_replace_all_resources_that_contain_image_metadata_dockerfile_extensions(self):
+ docker_build_args = {"arg1": "val1", "arg2": "val2"}
+ asset_path = pathlib.Path("/path", "to", "asset")
+ dockerfile_path = pathlib.Path("path", "to", "Dockerfile.production")
+ template_data = {
+ "Resources": {
+ "Function1": {
+ "Properties": {
+ "Code": {
+ "ImageUri": {
+ "Fn::Sub": "${AWS::AccountId}.dkr.ecr.${AWS::Region}.${AWS::URLSuffix}/cdk-hnb659fds-container-assets-${AWS::AccountId}-${AWS::Region}:b5d75370ccc2882b90f701c8f98952aae957e85e1928adac8860222960209056"
+ }
+ }
+ },
+ "Metadata": {
+ "aws:asset:path": asset_path,
+ "aws:asset:property": "Code.ImageUri",
+ "aws:asset:dockerfile-path": dockerfile_path,
+ "aws:asset:docker-build-args": docker_build_args,
+ },
+ },
+ }
+ }
+
+ ResourceMetadataNormalizer.normalize(template_data)
+
+ expected_docker_context_path = str(pathlib.Path("/path", "to", "asset", "path", "to"))
+ self.assertEqual("function1", template_data["Resources"]["Function1"]["Properties"]["Code"]["ImageUri"])
+ self.assertEqual(
+ expected_docker_context_path, template_data["Resources"]["Function1"]["Metadata"]["DockerContext"]
+ )
+ self.assertEqual("Dockerfile.production", template_data["Resources"]["Function1"]["Metadata"]["Dockerfile"])
+ self.assertEqual(docker_build_args, template_data["Resources"]["Function1"]["Metadata"]["DockerBuildArgs"])
+ self.assertEqual("Function1", template_data["Resources"]["Function1"]["Metadata"]["SamResourceId"])
+
def test_replace_all_resources_that_contain_image_metadata_windows_paths(self):
docker_build_args = {"arg1": "val1", "arg2": "val2"}
asset_path = "C:\\path\\to\\asset"
| Bug: sam build omits Dockerfile suffix from build.toml when aws:asset:property == "Code.ImageUri"
### Description:
Hi all! I was unable to find an open or closed issue describing this exact problem.
The problem is very specific: when your SAM template specifies a Dockerfile containing a suffix in its name (e.g., `.production`) AND the `aws:asset:property` is set to `"Code.ImageUri"`, the Dockerfile suffix is not copied over to the generated `build.toml`, and thus SAM is unable to locate the Dockerfile. This occurs for both the `aws:asset:dockerfile-path` and `Dockerfile` properties in the template JSON.
This is important because this is the pathway by which CDK generates SAM templates which can be used for local testing and other deployment internals.
The problem boils down to the metadata normalization when `aws:asset:property` is `Code.ImageUri`. This leads to `_extract_image_asset_metadata` being called, in which only the Dockerfile path's `stem` is extracted ([code permalink](https://github.com/aws/aws-sam-cli/blob/d634991615476a3980496694f90ac6c23800ab73/samcli/lib/samlib/resource_metadata_normalizer.py#L189)).
It seems to me that `dockerfile_path.name` should be used here instead of `dockerfile_path.stem` ([Python docs](https://docs.python.org/3/library/pathlib.html#pathlib.PurePath.name)). Is there some other rationale for using `.stem` here?
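For illustration, pathlib's behavior on a suffixed Dockerfile name (plain Python, independent of SAM):
```python
from pathlib import PurePosixPath

p = PurePosixPath("path/to/Dockerfile.production")
print(p.stem)    # 'Dockerfile'            (suffix dropped: the current behavior)
print(p.name)    # 'Dockerfile.production' (full final component: the proposed fix)
print(p.parent)  # 'path/to'
```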
### Steps to reproduce:
1. Create a CDK project (e.g., [CDK hello world app](https://docs.aws.amazon.com/cdk/v2/guide/hello_world.html))
2. Add a simple Dockerfile **named `Dockerfile.production`**
3. Define a CDK Docker Lambda function. E.g.:
```typescript
new lambda.DockerImageFunction(
  this,
  "TestFunction",
  {
    code: lambda.DockerImageCode.fromImageAsset(
      path.join(__dirname, "..", "path", "to", "dockerfile"),
      {
        file: "Dockerfile.production",
      },
    ),
  },
);
```
5. Change the name of your `Dockerfile` to `Dockerfile.prod`. Also update this reference in the template JSON/YAML.
6. `npx cdk synth --no-staging`
7. `sam build -t cdk.out/HelloCdk.template.json`
8. `sam local invoke`
### Observed result:
The build will "succeed" but you should not see a Docker build triggered. Open `.aws-sam/build.toml` and examine the function build definition metadata. You should see
```
"aws:asset:dockerfile-path" = "Dockerfile.production"
```
and
```
Dockerfile = "Dockerfile"
```
Upon attempting to invoke, you will see that SAM is unable to locate the Docker image to invoke the function (because it has not been built).
### Expected result:
The outputted `build.toml` should include:
```
Dockerfile = "Dockerfile.production"
```
`sam build` and `sam local invoke` should trigger a build of the Docker image and invoke the function by running that image, respectively.
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. macOS 13.0
3. `sam --version`: SAM CLI, version 1.61.0
9. AWS region: local development
| Thanks for creating this issue, it does look to be a bug. Would you be willing to raise a PR for it and we will prioritize on getting it through! cc: @moelasmar
@sriram-mv thank you for the response! Yep absolutely, I can get a PR up sometime today. | 2022-10-31T22:45:15 |
aws/aws-sam-cli | 5,016 | aws__aws-sam-cli-5016 | [
"5015"
] | 20302447eaf38a9e47d6e6d686265402ab4f4041 | diff --git a/samcli/local/docker/manager.py b/samcli/local/docker/manager.py
--- a/samcli/local/docker/manager.py
+++ b/samcli/local/docker/manager.py
@@ -142,7 +142,12 @@ def pull_image(self, image_name, tag=None, stream=None):
If the Docker image was not available in the server
"""
if tag is None:
- tag = image_name.split(":")[1] if ":" in image_name else "latest"
+ _image_name_split = image_name.split(":")
+ # Separate the image_name from the tag so less forgiving docker clones
+ # (podman) get the image name as the URL they expect. Official docker seems
+ # to clean this up internally.
+ tag = _image_name_split[1] if len(_image_name_split) > 1 else "latest"
+ image_name = _image_name_split[0]
# use a global lock to get the image lock
with self._lock:
image_lock = self._lock_per_image.get(image_name)
| Bug: build error using podman
<!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description:
<!-- Briefly describe the bug you are facing.-->
Error building image with "docker compatible tool" podman.
### Steps to reproduce:
<!-- Provide detailed steps to replicate the bug, including steps from third party tools (CDK, etc.) -->
Start podman
run `sam build -u`
### Observed result:
<!-- Please provide command output with `--debug` flag set. -->
2023-04-14 10:39:57,621 | Config file location: /<snip>/sam-hello-world/samconfig.toml
2023-04-14 10:39:57,622 | Loading configuration values from [default.['build'].parameters] (env.command_name.section) in config file at '/<snip>/sam-hello-world/samconfig.toml'...
2023-04-14 10:39:57,624 | Configuration values successfully loaded.
2023-04-14 10:39:57,624 | Configuration values are: {}
2023-04-14 10:39:57,629 | Using SAM Template at /<snip>/sam-hello-world/template.yaml
2023-04-14 10:39:57,648 | Using config file: samconfig.toml, config environment: default
2023-04-14 10:39:57,648 | Expand command line arguments to:
2023-04-14 10:39:57,648 | --template_file=/<snip>sam-hello-world/template.yaml --use_container --mount_with=READ --build_dir=.aws-sam/build --cache_dir=.aws-sam/cache
2023-04-14 10:39:58,377 | 'build' command is called
2023-04-14 10:39:58,377 | Starting Build inside a container
2023-04-14 10:39:58,383 | Collected default values for parameters: {}
2023-04-14 10:39:58,400 | There is no customer defined id or cdk path defined for resource HelloWorldFunction, so we will use the resource logical id as the resource id
2023-04-14 10:39:58,400 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id
2023-04-14 10:39:58,401 | 0 stacks found in the template
2023-04-14 10:39:58,401 | Collected default values for parameters: {}
2023-04-14 10:39:58,410 | There is no customer defined id or cdk path defined for resource HelloWorldFunction, so we will use the resource logical id as the resource id
2023-04-14 10:39:58,410 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id
2023-04-14 10:39:58,410 | 2 resources found in the stack
2023-04-14 10:39:58,410 | Found Serverless function with name='HelloWorldFunction' and CodeUri='hello_world/'
2023-04-14 10:39:58,410 | --base-dir is not presented, adjusting uri hello_world/ relative to /<snip>/sam-hello-world/template.yaml
2023-04-14 10:39:58,414 | 2 resources found in the stack
2023-04-14 10:39:58,415 | Found Serverless function with name='HelloWorldFunction' and CodeUri='hello_world/'
2023-04-14 10:39:58,415 | Instantiating build definitions
2023-04-14 10:39:58,416 | Same function build definition found, adding function (Previous: BuildDefinition(python3.10, /<snip>/sam-hello-world/hello_world, Zip, , aa9be8ec-e101-4712-8496-468de4ca138c, {}, {}, arm64, []), Current: BuildDefinition(python3.10, /<snip>/sam-hello-world/hello_world, Zip, , 4c168b03-f965-47c9-894a-38e54517eb14, {}, {}, arm64, []), Function: Function(function_id='HelloWorldFunction', name='HelloWorldFunction', functionname='HelloWorldFunction', runtime='python3.10', memory=None, timeout=3, handler='app.lambda_handler', imageuri=None, packagetype='Zip', imageconfig=None, codeuri='/<snip>/sam-hello-world/hello_world', environment=None, rolearn=None, layers=[], events={'HelloWorld': {'Type': 'Api', 'Properties': {'Path': '/hello', 'Method': 'get', 'RestApiId': 'ServerlessRestApi'}}}, metadata={'SamResourceId': 'HelloWorldFunction'}, inlinecode=None, codesign_config_arn=None, architectures=['arm64'], function_url_config=None, stack_path='', runtime_management_config=None))
2023-04-14 10:39:58,417 | Building codeuri: /<snip>/sam-hello-world/hello_world runtime: python3.10 metadata: {} architecture: arm64 functions: HelloWorldFunction
2023-04-14 10:39:58,417 | Building to following folder /<snip>/sam-hello-world/.aws-sam/build/HelloWorldFunction
2023-04-14 10:39:58,444 | Failed to download image with name public.ecr.aws/sam/build-python3.10:latest-arm64
2023-04-14 10:39:58,444 | Container was not created. Skipping deletion
2023-04-14 10:39:58,444 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
2023-04-14 10:39:58,445 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
2023-04-14 10:39:58,445 | Unable to find Click Context for getting session_id.
Error: Could not find public.ecr.aws/sam/build-python3.10:latest-arm64 image locally and failed to pull it from docker.
Traceback:
File "click/core.py", line 1055, in main
File "click/core.py", line 1657, in invoke
File "click/core.py", line 1404, in invoke
File "click/core.py", line 760, in invoke
File "click/decorators.py", line 84, in new_func
File "click/core.py", line 760, in invoke
File "samcli/lib/telemetry/metric.py", line 183, in wrapped
File "samcli/lib/telemetry/metric.py", line 148, in wrapped
File "samcli/lib/utils/version_checker.py", line 42, in wrapped
File "samcli/cli/main.py", line 92, in wrapper
File "samcli/commands/build/command.py", line 180, in cli
File "samcli/commands/build/command.py", line 276, in do_cli
File "samcli/commands/build/build_context.py", line 279, in run
File "samcli/lib/build/app_builder.py", line 214, in build
File "samcli/lib/build/build_strategy.py", line 80, in build
File "samcli/lib/build/build_strategy.py", line 90, in _build_functions
File "samcli/lib/build/build_strategy.py", line 163, in build_single_function_definition
File "samcli/lib/build/app_builder.py", line 688, in _build_function
File "samcli/lib/build/app_builder.py", line 930, in _build_function_on_container
File "samcli/local/docker/manager.py", line 114, in run
File "samcli/local/docker/manager.py", line 87, in create
An unexpected error was encountered while executing "sam build".
Search for an existing issue:
https://github.com/aws/aws-sam-cli/issues?q=is%3Aissue+is%3Aopen+Bug%3A%20sam%20build%20-%20DockerImagePullFailedException
Or create a bug report:
https://github.com/aws/aws-sam-cli/issues/new?template=Bug_report.md&title=Bug%3A%20sam%20build%20-%20DockerImagePullFailedException
### Expected result:
<!-- Describe what you expected. -->
The image to build successfully
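For context, the fix in the patch above separates the tag from the image name before pulling, since podman (unlike official Docker, which cleans this up internally) rejects a repository URL that still contains the tag. A standalone sketch of that parsing, not the SAM CLI API itself:
```python
def split_image_reference(image_name, tag=None):
    """'repo/image:1.2' -> ('repo/image', '1.2'); default tag is 'latest'.

    Note: like the original code, this would mis-split a registry host that
    contains a port, e.g. 'localhost:5000/image'.
    """
    if tag is None:
        parts = image_name.split(":")
        tag = parts[1] if len(parts) > 1 else "latest"
        image_name = parts[0]
    return image_name, tag

print(split_image_reference("public.ecr.aws/sam/build-python3.10:latest-arm64"))
# ('public.ecr.aws/sam/build-python3.10', 'latest-arm64')
```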
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
<!-- Either provide the following info (for AWS SAM CLI v1.68.0 or before) or paste the output of `sam --info` (AWS SAM CLI v1.69.0 or after). -->
1. OS: OS X 13.3.1
2. `sam --version`: SAM CLI, version 1.79.0
3. AWS region: us-west-2
```
{
  "version": "1.79.0",
  "system": {
    "python": "3.8.13",
    "os": "macOS-13.3.1-arm64-arm-64bit"
  },
  "additional_dependencies": {
    "docker_engine": "4.4.2",
    "aws_cdk": "2.72.1 (build ddbfac7)",
    "terraform": "1.4.4"
  }
}
```
`Add --debug flag to command you are running`
ok.
| 2023-04-14T17:02:27 |
||
aws/aws-sam-cli | 5,504 | aws__aws-sam-cli-5504 | [
"5485"
] | 2ba0b72384f6cf0f03fefe1e48fd1ecc5bd89c55 | diff --git a/schema/schema.py b/schema/schema.py
--- a/schema/schema.py
+++ b/schema/schema.py
@@ -32,14 +32,17 @@ class SamCliParameterSchema:
type: str
description: str = ""
default: Optional[Any] = None
+ items: Optional[str] = None
choices: Optional[Any] = None
def to_schema(self) -> Dict[str, Any]:
"""Return the JSON schema representation of the SAM CLI parameter."""
- param = {}
+ param: Dict[str, Any] = {}
param.update({"title": self.name, "type": self.type, "description": self.description})
if self.default:
param.update({"default": self.default})
+ if self.items:
+ param.update({"items": {"type": self.items}})
if self.choices:
param.update({"enum": self.choices})
return param
@@ -136,7 +139,10 @@ def format_param(param: click.core.Option) -> SamCliParameterSchema:
formatted_param_type = param_type or "string"
formatted_param: SamCliParameterSchema = SamCliParameterSchema(
- param.name or "", formatted_param_type, clean_text(param.help or "")
+ param.name or "",
+ formatted_param_type,
+ clean_text(param.help or ""),
+ items="string" if formatted_param_type == "array" else None,
)
if param.default:
@@ -150,7 +156,15 @@ def format_param(param: click.core.Option) -> SamCliParameterSchema:
def get_params_from_command(cli) -> List[SamCliParameterSchema]:
"""Given a CLI object, return a list of all parameters in that CLI, formatted as SamCliParameterSchema objects."""
- return [format_param(param) for param in cli.params if param.name and isinstance(param, click.core.Option)]
+ params_to_exclude = [
+ "config_env", # shouldn't allow different environment from where the config is being read from
+ "config_file", # shouldn't allow reading another file within current file
+ ]
+ return [
+ format_param(param)
+ for param in cli.params
+ if param.name and isinstance(param, click.core.Option) and param.name not in params_to_exclude
+ ]
def retrieve_command_structure(package_name: str) -> List[SamCliCommandSchema]:
| fix: use StringIO instead of BytesIO with StreamWriter
Related to https://github.com/aws/aws-sam-cli/pull/5427
#### Which issue(s) does this change fix?
N/A
#### Why is this change necessary?
We recently changed the way we write into streams to fix some issues related to encoding (see the PR above). That change caused some issues with the ECRUpload class, due to the discrepancy between the `write` methods of `BytesIO` and `StringIO`: a `BytesIO` instance accepts a byte array where `StringIO` accepts a string.
#### How does it address the issue?
This resolves the issue by using a `StringIO` stream instead of `BytesIO`. This PR also adds some typing for the inner `stream` instance of the `StreamWrapper` class to check other usages in the code base.
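The discrepancy in question, in plain Python:
```python
import io

io.StringIO().write("hello")     # ok: a text stream accepts str
io.BytesIO().write(b"hello")     # ok: a binary stream accepts bytes
try:
    io.BytesIO().write("hello")  # the mismatch described above
except TypeError as err:
    print(err)  # a bytes-like object is required, not 'str'
```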
#### What side effects does this change have?
Manually running `sam local` tests to validate the change.
#### Mandatory Checklist
**PRs will only be reviewed after checklist is complete**
- [ ] Add input/output [type hints](https://docs.python.org/3/library/typing.html) to new functions/methods
- [ ] Write design document if needed ([Do I need to write a design document?](https://github.com/aws/aws-sam-cli/blob/develop/DEVELOPMENT_GUIDE.md#design-document))
- [x] Write/update unit tests
- [ ] Write/update integration tests
- [ ] Write/update functional tests if needed
- [x] `make pr` passes
- [ ] `make update-reproducible-reqs` if dependencies were changed
- [ ] Write documentation
By submitting this pull request, I confirm that my contribution is made under the terms of the [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0).
| 2023-07-12T22:45:12 |
||
autorope/donkeycar | 47 | autorope__donkeycar-47 | [
"46"
] | e832cbfa5e7eb26bc16d86f21b76dd880a211ee8 | diff --git a/donkey/remotes.py b/donkey/remotes.py
--- a/donkey/remotes.py
+++ b/donkey/remotes.py
@@ -4,6 +4,7 @@
"""
import time
+from datetime import datetime
import json
import io
import os
@@ -32,6 +33,16 @@ def __init__(self, remote_url, vehicle_id='mycar'):
self.control_url = remote_url + '/api/vehicles/control/' + vehicle_id + '/'
self.last_milliseconds = 0
+ self.session = requests.Session()
+
+ self.log('time,lag\n', write_method='w')
+
+ def log(self, line, path='lag_log.csv', write_method='a'):
+ with open('lag_log.csv', write_method) as f:
+ f.write(line)
+
+
+
def decide(self, img_arr, angle, throttle, milliseconds):
@@ -49,31 +60,38 @@ def decide(self, img_arr, angle, throttle, milliseconds):
r = None
-
while r == None:
#Try connecting to server until connection is made.
+ start = time.time()
try:
- start = time.time()
- r = requests.post(self.control_url,
+ r = self.session.post(self.control_url,
files={'img': dk.utils.arr_to_binary(img_arr),
- 'json': json.dumps(data)}) #hack to put json in file
- end = time.time()
- lag = end-start
+ 'json': json.dumps(data)},
+ timeout=0.2) #hack to put json in file
+
except (requests.ConnectionError) as err:
print("Vehicle could not connect to server. Make sure you've " +
"started your server and you're referencing the right port.")
time.sleep(3)
+
+ except (requests.exceptions.ReadTimeout) as err:
+ print("Request took too long. Retrying")
+ return angle, throttle * .8
+
+
+ end = time.time()
+ lag = end-start
+ self.log('{}, {} \n'.format(datetime.now().time() , lag ))
+ print('vehicle <> server: request lag: %s' %lag)
- print(r.text)
-
data = json.loads(r.text)
angle = float(data['angle'])
throttle = float(data['throttle'])
- print('vehicle <> server: request lag: %s' %lag)
-
+
+
return angle, throttle
| Figure out root cause of spikes in control lag time
This was the main issue I experienced at the Feb 18th track day that prevented me from reliably driving the car around the track for training. It always manifests as an intermittent problem but I am also able to observe it at home, although less frequently than I saw it at the track day.
I'm running a local donkey server over wifi, so 4G latency is not a factor here. On wifi, I'm frequently seeing lag times spike above 1s, sometimes as long as 30 or more seconds. The clues that I've seen so far are:
1) This seems to happen more frequently in areas of high network congestion (like the track day when everyone was running nearby).
2) It happens on my home wifi network at least once per minute while driving the donkey car, to varying levels of severity.
3) I tried an alternate router at home on a non-standard wifi channel, and was not able to reproduce the delays.
Here's a sample console log from the pi that shows the spikes. Lag time of ~0.06 is about normal on my home network.
```
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.06897997856140137
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.07510542869567871
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.06453394889831543
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.759141206741333
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.05977487564086914
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.0692141056060791
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.06003284454345703
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.16736602783203125
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.06820440292358398
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.0678567886352539
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.12179088592529297
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.05697226524353027
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.0699162483215332
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.06665158271789551
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.17603182792663574
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 1.0047976970672607
throttle update: 0.0
pulse: 370
angle: 0.0 throttle: 0.0
{"angle": "0", "throttle": "0"}
vehicle <> server: request lag: 0.0619354248046875
throttle update: 0.0
pulse: 370
```
| Thanks @alanwells. I remember that the trace shows this hangs on the connection creation. Maybe reusing the connection would prevent these large lags; the requests module provides this functionality with sessions (see the sketch below). http://docs.python-requests.org/en/master/user/advanced/ | 2017-02-23T19:11:00 |
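A minimal sketch of that suggestion, combined with the request timeout the patch above introduces (hypothetical URL and payload, not the actual donkey client code):
```python
import requests

session = requests.Session()  # keep-alive: reuse one TCP connection across requests

def post_controls(url, payload):
    try:
        # Reusing the session skips per-request connection setup, which is
        # where the trace showed the hangs; the timeout bounds how long one
        # bad request can stall the control loop.
        return session.post(url, json=payload, timeout=0.2)
    except requests.exceptions.ReadTimeout:
        return None  # caller can fall back to the last known angle/throttle
```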
|
autorope/donkeycar | 273 | autorope__donkeycar-273 | [
"272"
] | e9599e6a83233fa9e54d4a72c4e4835081f78b68 | diff --git a/donkeycar/util/web.py b/donkeycar/util/web.py
--- a/donkeycar/util/web.py
+++ b/donkeycar/util/web.py
@@ -1,7 +1,10 @@
import socket
def get_ip_address():
- ip = ([l for l in ([ip for ip in socket.gethostbyname_ex(socket.gethostname())[2] if not ip.startswith("127.")][:1],
- [[(s.connect(('8.8.8.8', 53)), s.getsockname()[0], s.close()) for s in
- [socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]][0][1]]) if l][0][0])
- return ip
+ try:
+ ip = ([l for l in ([ip for ip in socket.gethostbyname_ex(socket.gethostname())[2] if not ip.startswith("127.")][:1],
+ [[(s.connect(('8.8.8.8', 53)), s.getsockname()[0], s.close()) for s in
+ [socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]][0][1]]) if l][0][0])
+ return ip
+ except OSError: #occurs when cannot connect to '8.8.8.8'
+ return "127.0.0.1" #loopback
\ No newline at end of file
| Support WIFI network that does not have internet access
REF: https://github.com/wroscoe/donkey/blob/dev/donkeycar/util/web.py
The system determines its IP address using a ping to 8.8.8.8
This approach fails when the WIFI network does not have internet access.
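For readers untangling the nested one-liner in `web.py`, here is an equivalent unrolled sketch (same behavior, written out for readability, including the loopback fallback the patch above adds):
```python
import socket

def get_ip_address():
    # First try: any non-loopback address bound to the local hostname.
    addresses = [ip for ip in socket.gethostbyname_ex(socket.gethostname())[2]
                 if not ip.startswith("127.")]
    if addresses:
        return addresses[0]
    # Second try: "connect" a UDP socket towards a public DNS server and read
    # back the local address the OS chose. No datagram is actually sent, but
    # this raises OSError when there is no route, i.e. the offline-WIFI case.
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.connect(("8.8.8.8", 53))
            return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"  # loopback fallback
```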
|
Hello William, I've been trying to find a place to get started and to help. I've never done a pull request and I'm a bit trigger-shy; I would like to help on this one, but I don't want to make an ass of myself and do things wrong.
If you can give me some pointers on how to do this the first time, so I'm not asking too many questions or submitting noob questions, I would be happy to try and assist. I've set up Linux Wireless in AP mode before.
| 2018-06-26T17:12:43 |
|
liqd/a4-product | 54 | liqd__a4-product-54 | [
"52"
] | 94b6e357777d4fe0437ad6b6541764301d5ca346 | diff --git a/liqd_product/apps/organisations/admin.py b/liqd_product/apps/organisations/admin.py
--- a/liqd_product/apps/organisations/admin.py
+++ b/liqd_product/apps/organisations/admin.py
@@ -4,6 +4,7 @@
class OrganisationAdmin(admin.ModelAdmin):
+ search_fields = ('name',)
raw_id_fields = ('initiators', )
diff --git a/liqd_product/config/settings/base.py b/liqd_product/config/settings/base.py
--- a/liqd_product/config/settings/base.py
+++ b/liqd_product/config/settings/base.py
@@ -49,6 +49,7 @@
'adhocracy4.projects.apps.ProjectsConfig',
'adhocracy4.ratings.apps.RatingsConfig',
'adhocracy4.reports.apps.ReportsConfig',
+ 'adhocracy4.rules.apps.RulesConfig',
# General components that define models or helpers
'liqd_product.apps.contrib.apps.Config',
| Adapt to meinBerlin Changes
- [x] use A4_CATEGORIZABLE ( liqd/a4-meinberlin#893 )
- [x] adapt to project_list_tile templates in user space (liqd/a4-meinberlin#798 liqd/a4-meinberlin#792 liqd/a4-meinberlin#783 ... )
- [x] adapt to project_list_tile templates in dashboard ( liqd/a4-meinberlin#851 ... )
- [x] check if css refactorings should be used in product, too ( liqd/a4-meinberlin#903 liqd/a4-meinberlin#900 liqd/a4-meinberlin#875 liqd/a4-meinberlin#860 liqd/a4-meinberlin#881 liqd/a4-meinberlin#807 liqd/a4-meinberlin#788 ...)
- [x] unify admin.pys with meinBerlin
- [x] include 'adhocracy4.rules.apps.RulesConfig', ( liqd/a4-meinberlin#914 )
| 2017-10-23T14:24:41 |
||
liqd/a4-product | 66 | liqd__a4-product-66 | [
"65"
] | e84944f21aace5aed8a2014bdf2833dbd432d642 | diff --git a/liqd_product/config/settings/base.py b/liqd_product/config/settings/base.py
--- a/liqd_product/config/settings/base.py
+++ b/liqd_product/config/settings/base.py
@@ -49,6 +49,7 @@
'adhocracy4.projects.apps.ProjectsConfig',
'adhocracy4.ratings.apps.RatingsConfig',
'adhocracy4.reports.apps.ReportsConfig',
+ 'adhocracy4.rules.apps.RulesConfig',
# General components that define models or helpers
'liqd_product.apps.contrib.apps.Config',
| Internal server error when editing poll question
Internal server error when editing a poll question while creating a poll in the dashboard
| 2017-11-08T16:48:45 |
||
liqd/a4-product | 139 | liqd__a4-product-139 | [
"110"
] | 56f8d33dbcc5e9131ebf17ca131c8a00244bb3f8 | diff --git a/liqd_product/apps/contrib/management/commands/makemessages.py b/liqd_product/apps/contrib/management/commands/makemessages.py
--- a/liqd_product/apps/contrib/management/commands/makemessages.py
+++ b/liqd_product/apps/contrib/management/commands/makemessages.py
@@ -25,8 +25,15 @@ def find_files(self, root):
settings.BASE_DIR, 'node_modules', 'adhocracy4', 'adhocracy4'
))
a4_paths = super().find_files(get_module_dir('adhocracy4'))
+ mbjs_paths = super().find_files(path.join(
+ settings.BASE_DIR, 'node_modules', 'a4-meinberlin', 'meinberlin'
+ ))
+ mb_paths = super().find_files(get_module_dir('meinberlin'))
+
liqd_product_paths = super().find_files(
path.relpath(get_module_dir('liqd_product'))
)
- return a4js_paths + a4_paths + liqd_product_paths
+ return a4js_paths + a4_paths + \
+ mbjs_paths + mb_paths + \
+ liqd_product_paths
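For reference, with this override in place a regular `python manage.py makemessages` run should also collect strings from the bundled adhocracy4 and a4-meinberlin sources (both the installed Python packages and their JS sources under `node_modules`), which is presumably where the untranslated strings in the screenshots below came from.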
| Translations incomplete
- partner page
| <img width="410" alt="bildschirmfoto 2017-11-28 um 12 05 56" src="https://user-images.githubusercontent.com/11075214/33316541-90e55c40-d434-11e7-86b1-76fcd6e663d1.png">
In the dashboard:
<img width="328" alt="bildschirmfoto 2017-11-28 um 12 20 22" src="https://user-images.githubusercontent.com/11075214/33317130-955cce8c-d436-11e7-9ef3-d3e52700e8b9.png">
<img width="966" alt="bildschirmfoto 2017-11-28 um 12 25 00" src="https://user-images.githubusercontent.com/11075214/33317368-36e79372-d437-11e7-864b-c613152be4c9.png">
<img width="949" alt="bildschirmfoto 2017-11-28 um 12 28 19" src="https://user-images.githubusercontent.com/11075214/33317505-a7d761b6-d437-11e7-847e-8376beb62eaa.png">
https://github.com/liqd/a4-product/issues/130
https://github.com/liqd/a4-product/issues/127 | 2017-11-29T08:49:26 |
|
liqd/a4-product | 149 | liqd__a4-product-149 | [
"114"
] | 8ffb6754f3436aa2f71c9b3fdb4e6d3ace31252c | diff --git a/liqd_product/config/urls.py b/liqd_product/config/urls.py
--- a/liqd_product/config/urls.py
+++ b/liqd_product/config/urls.py
@@ -16,6 +16,7 @@
from adhocracy4.reports.api import ReportViewSet
from liqd_product.apps.partners.urlresolvers import partner_patterns
from liqd_product.apps.users.decorators import user_is_project_admin
+from meinberlin.apps.contrib import views as contrib_views
from meinberlin.apps.documents.api import DocumentViewSet
from meinberlin.apps.polls.api import PollViewSet
from meinberlin.apps.polls.api import VoteViewSet
@@ -67,6 +68,7 @@
url(r'^browse/', never_cache(user_is_project_admin(ck_views.browse)),
name='ckeditor_browse'),
+ url(r'^components/$', contrib_views.ComponentLibraryView.as_view()),
url(r'^jsi18n/$', javascript_catalog,
js_info_dict, name='javascript-catalog'),
| tile images on partner page are not cut to same size

| 2017-12-04T17:49:14 |
||
liqd/a4-product | 155 | liqd__a4-product-155 | [
"89"
] | 46e596791c1f5e1f20692e8ec7d20a2e851863ef | diff --git a/liqd_product/apps/partners/templatetags/partners_tags.py b/liqd_product/apps/partners/templatetags/partner_tags.py
similarity index 100%
rename from liqd_product/apps/partners/templatetags/partners_tags.py
rename to liqd_product/apps/partners/templatetags/partner_tags.py
diff --git a/liqd_product/config/settings/base.py b/liqd_product/config/settings/base.py
--- a/liqd_product/config/settings/base.py
+++ b/liqd_product/config/settings/base.py
@@ -79,6 +79,7 @@
'meinberlin.apps.contrib.apps.Config',
'meinberlin.apps.maps.apps.Config',
'meinberlin.apps.moderatorfeedback.apps.Config',
+ 'meinberlin.apps.notifications.apps.Config',
# General apps containing views
'liqd_product.apps.account.apps.Config',
@@ -332,7 +333,6 @@
('meinberlin_mapideas', 'mapidea'),
)
-
A4_CATEGORIZABLE = (
('meinberlin_ideas', 'idea'),
('meinberlin_mapideas', 'mapidea'),
diff --git a/liqd_product/config/urls.py b/liqd_product/config/urls.py
--- a/liqd_product/config/urls.py
+++ b/liqd_product/config/urls.py
@@ -73,7 +73,10 @@
# Urls within the context of a partner
partner_patterns(
url(r'^modules/', include('adhocracy4.modules.urls')),
- url(r'^projects/', include('adhocracy4.projects.urls')),
+ # Temporary include meinberlin projects urls, as they contain
+ # the invite links. This may be removed when invites are refactored
+ # to a separate app.
+ url(r'^projects/', include('meinberlin.apps.projects.urls')),
url(r'^offlineevents/', include('meinberlin.apps.offlineevents.urls',
namespace='meinberlin_offlineevents')),
url(r'^ideas/', include(r'meinberlin.apps.ideas.urls',
| diff --git a/liqd_product/apps/contrib/management/commands/send_product_test_emails.py b/liqd_product/apps/contrib/management/commands/send_product_test_emails.py
new file mode 100644
--- /dev/null
+++ b/liqd_product/apps/contrib/management/commands/send_product_test_emails.py
@@ -0,0 +1,206 @@
+from django.conf import settings
+from django.contrib.auth import get_user_model
+from django.contrib.contenttypes.models import ContentType
+from django.core.management.base import BaseCommand
+
+from adhocracy4.actions.models import Action
+from adhocracy4.actions.verbs import Verbs
+from adhocracy4.comments.models import Comment
+from adhocracy4.emails.mixins import SyncEmailMixin
+from adhocracy4.projects.models import Project
+from adhocracy4.reports import emails as reports_emails
+from adhocracy4.reports.models import Report
+from meinberlin.apps.contrib.emails import Email
+from meinberlin.apps.ideas.models import Idea
+from meinberlin.apps.notifications import emails as notification_emails
+from meinberlin.apps.projects import models as project_models
+
+User = get_user_model()
+
+
+class TestEmail(SyncEmailMixin, Email):
+ def get_receivers(self):
+ return self.kwargs['receiver']
+
+ def dispatch(self, object, *args, **kwargs):
+ self.template_name = kwargs.pop('template_name')
+ print('Sending template: {} with object "{}"'.format(
+ self.template_name,
+ str(object)))
+ super().dispatch(object, *args, **kwargs)
+
+ def get_context(self):
+ context = super().get_context()
+ context['project'] = getattr(self.object, 'project', None)
+ context['contact_email'] = settings.CONTACT_EMAIL
+ return context
+
+
+class Command(BaseCommand):
+ help = 'Send test emails to a registered user.'
+
+ def add_arguments(self, parser):
+ parser.add_argument('email')
+
+ def handle(self, *args, **options):
+ self.user = User.objects.get(email=options['email'])
+
+ self._send_notifications_create_idea()
+ self._send_notifications_comment_idea()
+ self._send_notification_phase()
+ self._send_notification_project_created()
+
+ self._send_report_mails()
+
+ self._send_allauth_email_confirmation()
+ self._send_allauth_password_reset()
+
+ self._send_invitation_private_project()
+ self._send_invitation_moderator()
+
+ def _send_notifications_create_idea(self):
+ # Send notification for a newly created item
+ action = Action.objects.filter(
+ verb=Verbs.ADD.value,
+ obj_content_type=ContentType.objects.get_for_model(Idea)
+ ).exclude(project=None).first()
+ if not action:
+ self.stderr.write('At least one idea is required')
+ return
+
+ self._send_notify_create_item(action)
+
+ def _send_notifications_comment_idea(self):
+ # Send notifications for a comment on a item
+ action = Action.objects.filter(
+ verb=Verbs.ADD.value,
+ obj_content_type=ContentType.objects.get_for_model(Comment),
+ target_content_type=ContentType.objects.get_for_model(Idea)
+ ).exclude(project=None).first()
+ if not action:
+ self.stderr.write('At least one idea with a comment is required')
+ return
+
+ self._send_notify_create_item(action)
+
+ def _send_notify_create_item(self, action):
+ TestEmail.send(
+ action,
+ receiver=[self.user],
+ template_name=notification_emails.
+ NotifyCreatorEmail.template_name)
+
+ TestEmail.send(
+ action,
+ receiver=[self.user],
+ template_name=notification_emails.
+ NotifyFollowersOnNewItemCreated.template_name)
+
+ TestEmail.send(
+ action,
+ receiver=[self.user],
+ template_name=notification_emails.
+ NotifyModeratorsEmail.template_name)
+
+ def _send_notification_phase(self):
+ action = Action.objects.filter(
+ verb=Verbs.SCHEDULE.value
+ ).first()
+ if not action:
+ self.stderr.write('Schedule action is missing')
+ return
+
+ TestEmail.send(
+ action,
+ receiver=[self.user],
+ template_name=notification_emails.
+ NotifyFollowersOnPhaseIsOverSoonEmail.template_name
+ )
+
+ def _send_notification_project_created(self):
+ project = Project.objects.first()
+ TestEmail.send(
+ project,
+ project=project,
+ creator=self.user,
+ receiver=[self.user],
+ template_name=notification_emails.
+ NotifyInitiatorsOnProjectCreatedEmail.template_name
+ )
+
+ def _send_report_mails(self):
+ report = Report.objects.first()
+ if not report:
+ self.stderr.write('At least on report is required')
+ return
+
+ TestEmail.send(
+ report,
+ receiver=[self.user],
+ template_name=reports_emails.ReportCreatorEmail.template_name
+ )
+
+ TestEmail.send(
+ report,
+ receiver=[self.user],
+ template_name=reports_emails.ReportModeratorEmail.template_name
+ )
+
+ def _send_allauth_password_reset(self):
+ context = {"current_site": 'http://example.com/...',
+ "user": self.user,
+ "password_reset_url": 'http://example.com/...',
+ "request": None,
+ "username": self.user.username}
+
+ TestEmail.send(self.user,
+ receiver=[self.user],
+ template_name='account/email/password_reset_key',
+ **context
+ )
+
+ def _send_allauth_email_confirmation(self):
+ context = {
+ "user": self.user,
+ "activate_url": 'http://example.com/...',
+ "current_site": 'http://example.com/...',
+ "key": 'the1454key',
+ }
+
+ TestEmail.send(
+ self.user,
+ receiver=[self.user],
+ template_name='account/email/email_confirmation_signup',
+ **context
+ )
+
+ TestEmail.send(
+ self.user,
+ receiver=[self.user],
+ template_name='account/email/email_confirmation',
+ **context
+ )
+
+ def _send_invitation_private_project(self):
+ invite = project_models.ParticipantInvite.objects.first()
+ if not invite:
+ self.stderr.write('At least one participant request is required')
+ return
+
+ TestEmail.send(
+ invite,
+ receiver=[self.user],
+ template_name='meinberlin_projects/emails/invite_participant'
+ )
+
+ def _send_invitation_moderator(self):
+ invite = project_models.ModeratorInvite.objects.first()
+ if not invite:
+ self.stderr.write('At least one moderator request is required')
+ return
+
+ TestEmail.send(
+ invite,
+ receiver=[self.user],
+ template_name='meinberlin_projects/emails/invite_moderator'
+ )
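For reference, the command above takes a single positional `email` argument and resolves it to an existing user, so a run would look like `python manage.py send_product_test_emails [email protected]` (hypothetical address). As its own stderr messages indicate, it also expects at least one idea, comment, report, and invite to exist in the database.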
| Registration: Confirmation email text & language
There are a couple of spelling and language mistakes in the confirmation email:
- parts of the text are in German (registration happened in Safari set to German), others are in English
- wrong support email address
- links to "contact" and "terms of use" working, but no respective static pages yet
- space missing before Liquid Democracy e.V.
"Diese E-Mail wurde an [email protected] gesendet. Wenn Sie sich nicht registriert haben können Sie diese E-Mail ignorieren. Wir werden Ihnen keine weiteren E-Mail senden. Falls Sie weitere Fragen haben, wenden Sie sich bitte an uns unter [email protected]_LIQD_PRODUCT_ is a participation platform operated byLiquid Democracy e.V., Am Sudhaus 2, D-12053 Berlin"
| closed with #133 | 2017-12-05T11:50:54 |
liqd/a4-product | 170 | liqd__a4-product-170 | [
"126"
] | 34a6cb2ba3da6a94acdf358e0c84e6da18515ea5 | diff --git a/liqd_product/apps/users/migrations/0004_use_url_field_for_homepage.py b/liqd_product/apps/users/migrations/0004_use_url_field_for_homepage.py
new file mode 100644
--- /dev/null
+++ b/liqd_product/apps/users/migrations/0004_use_url_field_for_homepage.py
@@ -0,0 +1,20 @@
+# -*- coding: utf-8 -*-
+# Generated by Django 1.11.7 on 2017-12-06 12:47
+from __future__ import unicode_literals
+
+from django.db import migrations, models
+
+
+class Migration(migrations.Migration):
+
+ dependencies = [
+ ('liqd_product_users', '0003_bio_text_field'),
+ ]
+
+ operations = [
+ migrations.AlterField(
+ model_name='user',
+ name='homepage',
+ field=models.URLField(blank=True, max_length=50, verbose_name='Homepage'),
+ ),
+ ]
diff --git a/liqd_product/apps/users/models.py b/liqd_product/apps/users/models.py
--- a/liqd_product/apps/users/models.py
+++ b/liqd_product/apps/users/models.py
@@ -79,7 +79,7 @@ class User(auth_models.AbstractBaseUser, auth_models.PermissionsMixin):
verbose_name=_('Facebook name'),
)
- homepage = models.CharField(
+ homepage = models.URLField(
blank=True,
max_length=50,
verbose_name=_('Homepage'),
| homepage field in user profile is not a URL

I can just add a string, but in the profile an (invalid) link is made out of it
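For illustration, what the switch to `URLField` changes at validation time; a minimal standalone sketch with a hypothetical value, using the form field for brevity:
```python
import django
from django.conf import settings
from django.core.exceptions import ValidationError
from django.forms import URLField

settings.configure()  # minimal setup so the sketch runs outside a project
django.setup()

field = URLField()
print(field.clean("https://liqd.net/"))  # a valid URL passes through
try:
    field.clean("just a string")  # what the old CharField silently accepted
except ValidationError as err:
    print(err.messages)  # ['Enter a valid URL.']
```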
| Currently this is a `CharField`. So we have two options: Either change it to a `URLField` or not display a link. | 2017-12-06T12:55:02 |
|
liqd/a4-product | 261 | liqd__a4-product-261 | [
"260"
] | 2ef05297e434944ea56ff4cb302b4ea376229600 | diff --git a/liqd_product/config/settings/base.py b/liqd_product/config/settings/base.py
--- a/liqd_product/config/settings/base.py
+++ b/liqd_product/config/settings/base.py
@@ -386,3 +386,7 @@
# The default language is used for emails and strings
# that are stored translated to the database.
DEFAULT_LANGUAGE = 'de'
+
+SECURE_BROWSER_XSS_FILTER = True
+SESSION_COOKIE_HTTPONLY = True
+SECURE_CONTENT_TYPE_NOSNIFF = True
| HTTP Header
I propose setting the following HTTP headers:
* `HttpOnly`
* `X-XSS-Protection`
* `X-Content-Type-Options: nosniff`
* ~HSTS~ set via nginx
See [OWASP headers project](https://www.owasp.org/index.php/OWASP_Secure_Headers_Project) for details
| 2018-01-23T14:44:33 |
||
liqd/a4-product | 299 | liqd__a4-product-299 | [
"277"
] | b1a7a667aca478bcd7cceed1167433ac81f8da9b | diff --git a/liqd_product/apps/contrib/templatetags/__init__.py b/liqd_product/apps/contrib/templatetags/__init__.py
new file mode 100644
diff --git a/liqd_product/apps/contrib/templatetags/marked_tags.py b/liqd_product/apps/contrib/templatetags/marked_tags.py
new file mode 100644
--- /dev/null
+++ b/liqd_product/apps/contrib/templatetags/marked_tags.py
@@ -0,0 +1,16 @@
+from django import template
+from django.template.defaultfilters import stringfilter
+from django.utils.safestring import mark_safe
+
+register = template.Library()
+
+
[email protected](needs_autoescape=True)
+@stringfilter
+def marked_per_word(value, autoescape=True):
+ result = ''
+ for word in value.split():
+ result += ('<div class="marked marked--multiple_lines">{}</div>'
+ .format(word))
+
+ return mark_safe(result)
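To see what the filter emits, it can also be called directly (assuming the module above is importable and Django is installed):
```python
from liqd_product.apps.contrib.templatetags.marked_tags import marked_per_word

print(marked_per_word("Nur zwei Worte"))
# -> one string of three wrappers:
# <div class="marked marked--multiple_lines">Nur</div>
# <div class="marked marked--multiple_lines">zwei</div>
# <div class="marked marked--multiple_lines">Worte</div>
# Wrapping each word separately is what lets the highlight break cleanly
# across lines instead of producing the artifacts in the screenshots below.
```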
| problems with the marked class in css on text that extends over multiple lines


| 2018-02-20T14:49:39 |
||
liqd/a4-product | 344 | liqd__a4-product-344 | [
"218"
] | 47cf79303ee0e80c2114565b201be0a74a5f5a71 | diff --git a/liqd_product/config/settings/base.py b/liqd_product/config/settings/base.py
--- a/liqd_product/config/settings/base.py
+++ b/liqd_product/config/settings/base.py
@@ -404,7 +404,7 @@
A4_MAP_BASEURL = 'https://{s}.tile.openstreetmap.org/'
A4_MAP_ATTRIBUTION = '© <a href="http://openstreetmap.org/copyright">OpenStreetMap</a> contributors'
-A4_MAP_BOUNDING_BOX = ([[52.3517, 13.8229], [52.6839, 12.9543]])
+A4_MAP_BOUNDING_BOX = ([[54.983, 15.016], [47.302, 5.988]])
A4_DASHBOARD = {
'PROJECT_DASHBOARD_CLASS': 'meinberlin.apps.dashboard.ProjectDashboard',
| Update default polygon in settings
When a new project with a map is added, the initiator can set the polygon in which e.g. ideas are allowed. At the moment, the map for choosing this polygon is set to an area around Berlin, but it should be extended to a larger area, e.g. Germany. This can be done in the settings.
| 2018-04-18T09:48:22 |
||
liqd/a4-product | 360 | liqd__a4-product-360 | [
"357"
] | 7959652bd23a3f384d9d3ca4b381156fae451399 | diff --git a/liqd_product/apps/dashboard/blueprints.py b/liqd_product/apps/dashboard/blueprints.py
--- a/liqd_product/apps/dashboard/blueprints.py
+++ b/liqd_product/apps/dashboard/blueprints.py
@@ -101,9 +101,11 @@
)),
('facetoface',
ProjectBlueprint(
- title=_('Face to Face Participation'),
+ title=_('Face-to-Face Participation'),
description=_(
- 'Share info about a face to face participation event.'
+ 'With this module you can provide information about events or '
+ 'phases for face-to-face participation. No online participation '
+ 'is possible in this module.'
),
content=[
activities_phases.FaceToFacePhase(),
| [f2f module] wording
Here are the wordings:
1
Edit face-to-face participation information
Informationen zur Vor-Ort-Beteiligung bearbeiten
2
Title
Titel
3
Highlighted Info
Hervorgehobene Information
3a (Hilfetext)
Highlight important information like the time or location of your face-to-face event
Zur Hervorhebung von wichtigen Informationen wie Ort oder Zeitraum der Vor-Ort-Beteiligung
4
Description
Beschreibung
5
Face-to-Face Information
Informationen Vor-Ort-Beteiligung
6
Face-to-Face Participation
Vor-Ort-Beteiligung
7
With this module you can provide information about events or phases for face-to-face participation. No online participation is possible in this module.
Mit diesem Modul können Informationen über Veranstaltungen und Phasen zur Vor-Ort-Beteiligung bereitgestellt werden. In diesem Modul ist keine Online-Beteiligung möglich.
8
Phase 1: Provide information about face-to-face participation events
Phase 1: Informationen zur Vor-Ort-Beteiligung bereitstellen


| 2018-05-09T09:39:21 |
||
liqd/a4-product | 375 | liqd__a4-product-375 | [
"370"
] | 6e8266a242a8e3ca68dd673fe8d9dd7b1b65e9af | diff --git a/liqd_product/apps/partners/views.py b/liqd_product/apps/partners/views.py
--- a/liqd_product/apps/partners/views.py
+++ b/liqd_product/apps/partners/views.py
@@ -20,7 +20,9 @@ def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context['project_list'] = Project.objects\
- .filter(organisation__partner=self.object)
+ .filter(organisation__partner=self.object,
+ is_archived=False,
+ is_draft=False)
context['action_list'] = Action.objects\
.filter(project__organisation__partner=self.object)\
| [partner landing page] unpublished and archived projects are shown
On the partner landing page, we show unpublished and archived projects. Unpublished projects should never be shown, and archived projects should be hidden by default.
See: https://product-dev.liqd.net/teststadt/

| 2018-05-29T15:49:11 |
||
liqd/a4-product | 606 | liqd__a4-product-606 | [
"428"
] | d5a98a2068af6fb53eb1b1c214fe3e32b978afec | diff --git a/liqd_product/apps/actions/apps.py b/liqd_product/apps/actions/apps.py
--- a/liqd_product/apps/actions/apps.py
+++ b/liqd_product/apps/actions/apps.py
@@ -35,6 +35,6 @@ def ready(self):
configure_icon('far fa-comment', type='comment')
configure_icon('far fa-lightbulb', type='item')
configure_icon('fas fa-plus', verb=Verbs.ADD)
- configure_icon('fas fa-pencil-alt', verb=Verbs.UPDATE)
+ configure_icon('fas fa-pencil', verb=Verbs.UPDATE)
configure_icon('fas fa-flag', verb=Verbs.START)
- configure_icon('far fa-clock', verb=Verbs.SCHEDULE)
+ configure_icon('far fa-clock-o', verb=Verbs.SCHEDULE)
| can't see full time when creating an event on small screen

| @phillimorland And maybe this? It should be in offlineevents (and I wonder whether it's also in the phases, if it's there). | 2019-03-04T10:36:17 |
|
liqd/a4-product | 608 | liqd__a4-product-608 | [
"604"
] | d5a98a2068af6fb53eb1b1c214fe3e32b978afec | diff --git a/liqd_product/apps/projects/dashboard.py b/liqd_product/apps/projects/dashboard.py
--- a/liqd_product/apps/projects/dashboard.py
+++ b/liqd_product/apps/projects/dashboard.py
@@ -2,10 +2,8 @@
from django.utils.translation import ugettext_lazy as _
from adhocracy4.dashboard import DashboardComponent
-from adhocracy4.dashboard import ProjectFormComponent
from adhocracy4.dashboard import components
-from . import forms
from . import views
@@ -51,16 +49,5 @@ def get_urls(self):
)]
-class TopicComponent(ProjectFormComponent):
- identifier = 'topics'
- weight = 33
- label = _('Topics')
-
- form_title = _('Edit topics')
- form_class = forms.TopicForm
- form_template_name = 'liqd_product_projects/project_topics.html'
-
-
components.register_project(ModeratorsComponent())
components.register_project(ParticipantsComponent())
-components.register_project(TopicComponent())
diff --git a/liqd_product/apps/projects/forms.py b/liqd_product/apps/projects/forms.py
--- a/liqd_product/apps/projects/forms.py
+++ b/liqd_product/apps/projects/forms.py
@@ -3,8 +3,6 @@
from django.core.exceptions import ValidationError
from django.utils.translation import ugettext_lazy as _
-from adhocracy4.dashboard.forms import ProjectDashboardForm
-from adhocracy4.projects.models import Project
from liqd_product.apps.users import fields as user_fields
from .models import ModeratorInvite
@@ -70,11 +68,3 @@ def clean(self):
raise ValidationError(
_('Please enter email addresses or upload a file'))
return cleaned_data
-
-
-class TopicForm(ProjectDashboardForm):
-
- class Meta:
- model = Project
- fields = ['topics']
- required_for_project_publish = ['topics']
| Mandatory mB topic selection on bet.in ( US #1775)
All projects need a topic on bet.in now, even existing ones. Can we remove that requirement? We haven't yet thought about how to implement topics on bet.in and they are not shown anywhere, so it would probably be confusing for initiators.
| 2019-03-04T11:19:19 |
||
liqd/a4-product | 655 | liqd__a4-product-655 | [
"654"
] | 3a7b1d1ca645d7bf0d5d4d57aef3fe95bdf92fe5 | diff --git a/liqd_product/config/settings/base.py b/liqd_product/config/settings/base.py
--- a/liqd_product/config/settings/base.py
+++ b/liqd_product/config/settings/base.py
@@ -444,3 +444,9 @@
# The default language is used for emails and strings
# that are stored translated to the database.
DEFAULT_LANGUAGE = 'de'
+
+WAGTAILADMIN_RICH_TEXT_EDITORS = {
+ 'default': {
+ 'WIDGET': 'wagtail.admin.rich_text.HalloRichTextArea'
+ }
+}
| Error 500 when trying to edit landing page
I need to add a partner to the landing page on the beteiligung.in production site soon. Currently, I can’t edit the page (500 error).
https://www.beteiligung.in/admin/pages/3/edit/
Could you look into it?
| Yeah looks familiar. Will look into it | 2019-04-16T12:41:46 |
|
liqd/a4-product | 717 | liqd__a4-product-717 | [
"706"
] | aa71ee7377628bb5603a558658a89fcce58b8fd0 | diff --git a/liqd_product/apps/partners/migrations/0012_wording_fixes.py b/liqd_product/apps/partners/migrations/0012_wording_fixes.py
new file mode 100644
--- /dev/null
+++ b/liqd_product/apps/partners/migrations/0012_wording_fixes.py
@@ -0,0 +1,48 @@
+# -*- coding: utf-8 -*-
+# Generated by Django 1.11.20 on 2019-05-23 11:20
+from __future__ import unicode_literals
+
+import adhocracy4.images.fields
+import ckeditor.fields
+import ckeditor_uploader.fields
+from django.db import migrations, models
+
+
+class Migration(migrations.Migration):
+
+ dependencies = [
+ ('liqd_product_partners', '0011_kommune_word_change'),
+ ]
+
+ operations = [
+ migrations.AlterField(
+ model_name='partner',
+ name='description',
+ field=models.CharField(help_text='The description will be displayed on the landing page. max. 400 characters', max_length=400, verbose_name='Short description of your organisation'),
+ ),
+ migrations.AlterField(
+ model_name='partner',
+ name='imprint',
+ field=ckeditor.fields.RichTextField(help_text='Please provide all the legally required information of your imprint. The imprint will be shown on a separate page.', verbose_name='Imprint'),
+ ),
+ migrations.AlterField(
+ model_name='partner',
+ name='information',
+ field=ckeditor_uploader.fields.RichTextUploadingField(blank=True, help_text='You can provide general information about your participation platform to your visitors. It’s also helpful to name a general person of contact for inquiries. The information will be shown on a separate page that can be reached via the main menu.', verbose_name='Information about your organisation'),
+ ),
+ migrations.AlterField(
+ model_name='partner',
+ name='logo',
+ field=adhocracy4.images.fields.ConfiguredImageField('logo', blank=True, help_text='The Logo representing your organisation. The image must be square and it should be min. 200 pixels wide and 200 pixels tall. Allowed file formats are png, jpeg, gif. The file size should be max. 5 MB.', upload_to='partners/logos', verbose_name='Logo'),
+ ),
+ migrations.AlterField(
+ model_name='partner',
+ name='slogan',
+ field=models.CharField(blank=True, help_text='The slogan will be shown below the title of your platform on the landing page. The slogan can provide context or additional information to the title. max. 200 characters', max_length=200, verbose_name='Slogan'),
+ ),
+ migrations.AlterField(
+ model_name='partner',
+ name='title',
+ field=models.CharField(default='Beteiligungsplattform', help_text='The title of your platform will be shown on the landing page. It should be clear to the users that this is your participation platform. max. 100 characters', max_length=100, verbose_name='Title of your platform'),
+ ),
+ ]
diff --git a/liqd_product/apps/partners/models.py b/liqd_product/apps/partners/models.py
--- a/liqd_product/apps/partners/models.py
+++ b/liqd_product/apps/partners/models.py
@@ -13,19 +13,29 @@ class Partner(models.Model):
slug = AutoSlugField(populate_from='name', unique=True)
name = models.CharField(max_length=100)
title = models.CharField(
+ verbose_name=_('Title of your platform'),
max_length=100,
default='Beteiligungsplattform',
- help_text=_('max. 100 characters')
+ help_text=_('The title of your platform will be shown '
+ 'on the landing page. It should be clear to '
+ 'the users that this is your participation '
+ 'platform. max. 100 characters')
)
description = models.CharField(
max_length=400,
verbose_name=_('Short description of your organisation'),
- help_text=_('max. 400 characters')
+ help_text=_('The description will be displayed on the '
+ 'landing page. max. 400 characters')
)
logo = images_fields.ConfiguredImageField(
'logo',
verbose_name=_('Logo'),
- help_prefix=_('The Logo representing your organisation'),
+ help_text=_('The Logo representing your organisation.'
+ ' The image must be square and it '
+ 'should be min. 200 pixels wide and 200 '
+ 'pixels tall. Allowed file formats are '
+ 'png, jpeg, gif. The file size '
+ 'should be max. 5 MB.'),
upload_to='partners/logos',
blank=True
)
@@ -33,7 +43,12 @@ class Partner(models.Model):
max_length=200,
verbose_name=_('Slogan'),
blank=True,
- help_text=_('max. 200 characters')
+ help_text=_('The slogan will be shown below '
+ 'the title of your platform on '
+ 'the landing page. The slogan can '
+ 'provide context or additional '
+ 'information to the title. '
+ 'max. 200 characters')
)
image = images_fields.ConfiguredImageField(
'heroimage',
@@ -47,10 +62,19 @@ class Partner(models.Model):
information = RichTextUploadingField(
config_name='image-editor',
verbose_name=_('Information about your organisation'),
+ help_text=_('You can provide general information about your '
+ 'participation platform to your visitors. '
+ 'It’s also helpful to name a general person '
+ 'of contact for inquiries. The information '
+ 'will be shown on a separate page that '
+ 'can be reached via the main menu.'),
blank=True
)
imprint = RichTextField(
- verbose_name=_('Imprint')
+ verbose_name=_('Imprint'),
+ help_text=_('Please provide all the legally '
+ 'required information of your imprint. '
+ 'The imprint will be shown on a separate page.')
)
admins = models.ManyToManyField(
settings.AUTH_USER_MODEL,
| #1988 Corrections and additions for organisation edit form
I checked the wording story and saw some help texts and labels I would like to change. I hope that’s okay.

**1.**
DE: Titel ihrer Plattform
EN: Title of your platform
helptext
DE: Der Titel Ihrer Plattform wird auf der Startseite angezeigt. Es sollte für die Nutzer*innen deutlich werden, dass es sich um eine Beteiligungsplattform handelt. Max. 100 Zeichen
EN: The title of your platform will be shown on the landing page. It should be clear to the users that this is your participation platform.
**2.**
helptext
DE: Das Logo ihrer Organisation. Die Grafik muss quadratisch sein. Sie muss mindestens 200 Pixel breit und 200 Pixel hoch sein. Erlaubte Dateiformate: png, jpeg, gif. Die maximale Dateigröße beträgt 5 MB.
EN: The logo representing your organisation. The image must be square and it should be min. 200 pixels wide and 200 pixels tall. Allowed file formats are png, jpeg, gif. The file size should be max. 5 MB.
**3.**
helptext
DE: Die Kurzbeschreibung wird auf der Startseite angezeigt. Max. 400 Zeichen
EN: The description will be displayed on the landing page. max. 400 characters
**4.**
helptext
DE: Der Slogan wird unter dem Titel ihrer Plattform auf der Startseite angezeigt und kann den Titel ergänzen oder erläutern. Max. 200 Zeichen
EN: The slogan will be shown below the title of your platform on the landing page. The slogan can provide context or additional information to the title. max. 200 characters
**5.**
helptext
DE: Sie können den Besucher*innen hier generelle Information zu Ihrer Beteiligungsplattform bereitstellen. Es ist außerdem hilfreich, wenn Sie eine*n Ansprechpartner*in für Anfragen benennen. Die Informationen werden auf einer separaten Seite angezeigt, die man über das Hauptmenü erreicht.
EN: You can provide general information about your participation platform to your visitors. It’s also helpful to name a general person of contact for inquiries. The information will be shown on a separate page that can be reached via the main menu.
**6.**
DE: Bitte geben Sie alle rechtlich nötigen Informationen zum Impressum an. Das Impressum wird auf einer separaten Seite angezeigt.
EN: Please provide all the legally required information of your imprint. The imprint will be shown on a separate page.
| 2019-05-23T12:00:15 |
||
liqd/a4-product | 837 | liqd__a4-product-837 | [
"835"
] | 544cbcc7ffbccb77e4310a0b775991a4f51cbef6 | diff --git a/apps/cms/contacts/models.py b/apps/cms/contacts/models.py
--- a/apps/cms/contacts/models.py
+++ b/apps/cms/contacts/models.py
@@ -115,7 +115,7 @@ def get_form_fields(self):
fields.insert(0, FormField(
label='receive_copy',
field_type='checkbox',
- help_text=_('I want to receicve a copy of my message as email'),
+ help_text=_('I want to receive a copy of my message'),
required=False))
fields.insert(0, FormField(
@@ -138,7 +138,7 @@ def get_form_fields(self):
fields.insert(0, FormField(
label='name',
- help_text=_('Your first and last name'),
+ help_text=_('Your name'),
field_type='singleline',
required=False))
return fields
| #2151 contact form field labels
In EN:
It should say „Your name“ instead of „your first and last name“
It should say „I want to receive a copy of my message“ instead of „I want to receicve a copy of my message as email“
in DE:
It should say „Ihr Name” instead of „Ihr Vor- und Nachname“
It should say „Eine Kopie der Nachricht an mich senden“ instead of „Eine Kopie der Anfrage an mich senden“
| 2019-07-29T15:21:00 |
||
liqd/a4-product | 1,080 | liqd__a4-product-1080 | [
"1076"
] | bdfdc460da3c22de90ba1d58fae3c17c7c2d9ba9 | diff --git a/apps/users/migrations/0011_add_helptexts.py b/apps/users/migrations/0011_add_helptexts.py
new file mode 100644
--- /dev/null
+++ b/apps/users/migrations/0011_add_helptexts.py
@@ -0,0 +1,28 @@
+# Generated by Django 2.2.6 on 2019-10-18 08:16
+
+from django.db import migrations, models
+
+
+class Migration(migrations.Migration):
+
+ dependencies = [
+ ('a4_candy_users', '0010_rename_avatar_field'),
+ ]
+
+ operations = [
+ migrations.AlterField(
+ model_name='user',
+ name='bio',
+ field=models.TextField(blank=True, help_text='Tell us about yourself in 255 characters!', max_length=255, verbose_name='Biography'),
+ ),
+ migrations.AlterField(
+ model_name='user',
+ name='facebook_handle',
+ field=models.CharField(blank=True, help_text='Your facebook name is the last part of the URL, when you access your profile.', max_length=50, verbose_name='Facebook name'),
+ ),
+ migrations.AlterField(
+ model_name='user',
+ name='twitter_handle',
+ field=models.CharField(blank=True, max_length=15, verbose_name='Twitter handle'),
+ ),
+ ]
diff --git a/apps/users/models.py b/apps/users/models.py
--- a/apps/users/models.py
+++ b/apps/users/models.py
@@ -76,18 +76,23 @@ class User(auth_models.AbstractBaseUser, auth_models.PermissionsMixin):
blank=True,
max_length=255,
verbose_name=_('Biography'),
+ help_text=_(
+ 'Tell us about yourself in 255 characters!')
)
twitter_handle = models.CharField(
blank=True,
max_length=15,
- verbose_name=_('Twitter name'),
+ verbose_name=_('Twitter handle'),
)
facebook_handle = models.CharField(
blank=True,
max_length=50,
verbose_name=_('Facebook name'),
+ help_text=_(
+ 'Your facebook name is the last part of the URL, '
+ 'when you access your profile.')
)
homepage = models.URLField(
| very long username breaks profile and settings


| 2019-10-18T11:42:25 |
||
liqd/a4-product | 1,090 | liqd__a4-product-1090 | [
"1086"
] | 3b5af6fe75c41c30e01df50c7cacfb897aa880fe | diff --git a/apps/users/forms.py b/apps/users/forms.py
--- a/apps/users/forms.py
+++ b/apps/users/forms.py
@@ -14,6 +14,7 @@ class TermsSignupForm(auth_forms.UserCreationForm):
})
def signup(self, request, user):
+ user.get_newsletters = self.cleaned_data["get_newsletters"]
user.signup(
self.cleaned_data['username'],
self.cleaned_data['email'],
| get_newsletters during normal register is broken
If the checkbox is checked during signup, the user still ends up with get_newsletters = False. When the setting is changed in the account settings, however, the value is updated correctly.
| 2019-10-21T11:29:27 |
||
liqd/a4-product | 1,092 | liqd__a4-product-1092 | [
"1088"
] | 376f91a19a570d4086e511aa2b170cfaf07770e8 | diff --git a/apps/organisations/migrations/0008_add_label_and_help_for_is_supporting.py b/apps/organisations/migrations/0008_add_label_and_help_for_is_supporting.py
new file mode 100644
--- /dev/null
+++ b/apps/organisations/migrations/0008_add_label_and_help_for_is_supporting.py
@@ -0,0 +1,18 @@
+# Generated by Django 2.2.6 on 2019-10-21 13:04
+
+from django.db import migrations, models
+
+
+class Migration(migrations.Migration):
+
+ dependencies = [
+ ('a4_candy_organisations', '0007_organisation_is_supporting'),
+ ]
+
+ operations = [
+ migrations.AlterField(
+ model_name='organisation',
+ name='is_supporting',
+ field=models.BooleanField(default=False, help_text='For supporting organisations, the banner asking for donations is not displayed on their pages.', verbose_name='is a supporting organisation'),
+ ),
+ ]
diff --git a/apps/organisations/models.py b/apps/organisations/models.py
--- a/apps/organisations/models.py
+++ b/apps/organisations/models.py
@@ -79,7 +79,10 @@ class Organisation(models.Model):
'The imprint will be shown on a separate page.')
)
is_supporting = models.BooleanField(
- default=False
+ default=False,
+ verbose_name=_('is a supporting organisation'),
+ help_text=_('For supporting organisations, the banner asking '
+ 'for donations is not displayed on their pages.')
)
def __str__(self):
| #2321: small support banner is not shown in German
The mobile version of the support banner is not shown in German.

| 2019-10-21T13:22:53 |
||
liqd/a4-product | 1,097 | liqd__a4-product-1097 | [
"758"
] | 620dfb67db14417851310cd12597568c42cba4e5 | diff --git a/apps/organisations/views.py b/apps/organisations/views.py
--- a/apps/organisations/views.py
+++ b/apps/organisations/views.py
@@ -31,6 +31,7 @@ def get_context_data(self, **kwargs):
context['action_list'] = Action.objects\
.filter(project__organisation=self.object)\
+ .filter(project__is_archived=False) \
.filter_public()\
.exclude_updates()[:4]
| archived projects accessible via activity feed
At https://www.beteiligung.in/liqd/ all projects are private, but I can see the content of the projects if I click on the activity feed, even when not signed in.
| Thank you for noticing this, @CarolingerSeilchenspringer! @MagdaN Could we add this to the sprint as a task without a US? I think it is a relatively critical issue.
sure
Thank you! :)
Alright! We fixed the other new issues where we could see the content of private projects, when there was only one module and no event in that project. That was a new bug introduced with the timeline.
As for this issue here with the activity feed: all projects that are shown in the activity feed on bet.in/liqd are public, but archived. So, they are not shown as project tiles, but in the activity feed. Should we remove actions from archived projects from the activity feed?
currently archived projects can't be accessed anywhere (else) in a+?! #376
@rittermo Should activities of archived projects be removed from the activity list?? Otherwise we should close this issue! | 2019-10-21T15:34:48 |
|
liqd/a4-product | 1,113 | liqd__a4-product-1113 | [
"1109"
] | 146eed0ac11449a8b8c9a98448ee5ee995ad3e6d | diff --git a/apps/polls/views.py b/apps/polls/views.py
--- a/apps/polls/views.py
+++ b/apps/polls/views.py
@@ -81,5 +81,8 @@ def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context['comment_export'] = reverse(
'a4dashboard:poll-comment-export',
- kwargs={'module_slug': self.module.slug})
+ kwargs={
+ 'organisation_slug': self.module.project.organisation.slug,
+ 'module_slug': self.module.slug
+ })
return context
| poll-comment export seems to be broken
```
Environment:
Request Method: GET
Request URL: http://localhost:8004/liqd-orga/dashboard/modules/umfrage/poll/export/
Django Version: 2.2.6
Python Version: 3.7.3
Installed Applications:
('django.contrib.sites',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sitemaps',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.humanize',
'widget_tweaks',
'rest_framework',
'allauth',
'allauth.account',
'allauth.socialaccount',
'rules.apps.AutodiscoverRulesConfig',
'easy_thumbnails',
'ckeditor',
'ckeditor_uploader',
'capture_tag',
'background_task',
'wagtail.contrib.forms',
'wagtail.contrib.redirects',
'wagtail.contrib.settings',
'wagtail.contrib.styleguide',
'wagtail.embeds',
'wagtail.sites',
'wagtail.users',
'wagtail.snippets',
'wagtail.documents',
'wagtail.images',
'wagtail.search',
'wagtail.admin',
'wagtail.core',
'modelcluster',
'taggit',
'apps.cms.pages',
'apps.cms.settings',
'apps.cms.contacts',
'apps.cms.news',
'apps.cms.use_cases',
'apps.cms.images',
'adhocracy4.actions',
'adhocracy4.administrative_districts',
'adhocracy4.categories',
'adhocracy4.ckeditor',
'adhocracy4.comments',
'adhocracy4.dashboard',
'adhocracy4.filters',
'adhocracy4.follows',
'adhocracy4.forms',
'adhocracy4.images',
'adhocracy4.labels',
'adhocracy4.maps',
'adhocracy4.modules',
'adhocracy4.organisations',
'adhocracy4.phases',
'adhocracy4.projects',
'adhocracy4.ratings',
'adhocracy4.reports',
'adhocracy4.rules',
'apps.actions',
'apps.contrib',
'apps.likes',
'apps.maps',
'apps.moderatorfeedback',
'apps.moderatorremark',
'apps.newsletters',
'apps.notifications',
'apps.organisations',
'apps.partners',
'apps.questions',
'apps.users',
'apps.account',
'apps.dashboard',
'apps.embed',
'apps.exports',
'apps.offlineevents',
'apps.projects',
'apps.activities',
'apps.budgeting',
'apps.documents',
'apps.ideas',
'apps.mapideas',
'apps.polls',
'allauth.socialaccount.providers.facebook',
'allauth.socialaccount.providers.github',
'allauth.socialaccount.providers.google',
'allauth.socialaccount.providers.twitter')
Installed Middleware:
('django.middleware.security.SecurityMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django_cloudflare_push.middleware.push_middleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.middleware.common.CommonMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'apps.embed.middleware.AjaxPathMiddleware',
'wagtail.core.middleware.SiteMiddleware',
'wagtail.contrib.redirects.middleware.RedirectMiddleware')
Traceback:
File "/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/core/handlers/exception.py" in inner
34. response = get_response(request)
File "/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/core/handlers/base.py" in _get_response
115. response = self.process_exception_by_middleware(e, request)
File "/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/core/handlers/base.py" in _get_response
113. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/views/generic/base.py" in view
71. return self.dispatch(request, *args, **kwargs)
File "/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/contrib/auth/mixins.py" in dispatch
85. return super().dispatch(request, *args, **kwargs)
File "/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/views/generic/base.py" in dispatch
97. return handler(request, *args, **kwargs)
File "/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/views/generic/base.py" in get
158. context = self.get_context_data(**kwargs)
File "/home/katharina/a4-product/apps/polls/views.py" in get_context_data
84. kwargs={'module_slug': self.module.slug})
File "/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/urls/base.py" in reverse
90. return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs))
File "/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/urls/resolvers.py" in _reverse_with_prefix
673. raise NoReverseMatch(msg)
Exception Type: NoReverseMatch at /liqd-orga/dashboard/modules/umfrage/poll/export/
Exception Value: Reverse for 'poll-comment-export' with keyword arguments '{'module_slug': 'umfrage'}' not found. 1 pattern(s) tried: ['(?P<organisation_slug>[-\\w_]+)/dashboard/modules/(?P<module_slug>[-\\w_]+)/poll/export/comments/$']
```
| looks like there is a reverse without the organisation slug | 2019-10-22T14:58:11 |
|
OpenNMT/OpenNMT-tf | 6 | OpenNMT__OpenNMT-tf-6 | [
"1"
] | ca96cb35efd083d0380c05041bc254f81ee383e0 | diff --git a/opennmt/utils/transformer.py b/opennmt/utils/transformer.py
--- a/opennmt/utils/transformer.py
+++ b/opennmt/utils/transformer.py
@@ -163,5 +163,5 @@ def add_and_norm(inputs,
rate=dropout,
training=mode == tf.estimator.ModeKeys.TRAIN)
outputs += inputs
- outputs = tf.contrib.layers.layer_norm(outputs)
+ outputs = tf.contrib.layers.layer_norm(outputs, begin_norm_axis=-1)
return outputs
| Poor translation results with the Transformer
The Transformer model produces very bad translation results. Its implementation should be revised and fixed.
See also the reference implementation at https://github.com/tensorflow/tensor2tensor.
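A note on why the one-line fix above works, offered here as an assumption rather than something stated in the issue: `tf.contrib.layers.layer_norm` defaults to `begin_norm_axis=1`, which normalizes over every axis after the batch dimension (time and depth together), while the Transformer normalizes each position's feature vector independently, which is what `begin_norm_axis=-1` selects. A framework-free sketch of that per-position normalization:
```python
import numpy as np

def layer_norm_last_axis(x, epsilon=1e-6):
    # Normalize over the depth (last) axis only, independently for each
    # batch entry and time step, as the Transformer expects.
    mean = x.mean(axis=-1, keepdims=True)
    variance = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(variance + epsilon)

x = np.random.randn(2, 5, 8)  # [batch, time, depth]
normalized = layer_norm_last_axis(x)  # each depth vector now has mean ~0, variance ~1
```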
| 2017-11-03T13:11:12 |
||
OpenNMT/OpenNMT-tf | 11 | OpenNMT__OpenNMT-tf-11 | [
"8"
] | 14633f9a5e1c515552cb4ae1d1e1414c96a7146c | diff --git a/opennmt/config.py b/opennmt/config.py
--- a/opennmt/config.py
+++ b/opennmt/config.py
@@ -43,7 +43,10 @@ def load_config(config_paths, config=None):
# Add or update section in main configuration.
for section in subconfig:
if section in config:
- config[section].update(subconfig[section])
+ if isinstance(config[section], dict):
+ config[section].update(subconfig[section])
+ else:
+ config[section] = subconfig[section]
else:
config[section] = subconfig[section]
| diff --git a/opennmt/tests/config_test.py b/opennmt/tests/config_test.py
new file mode 100644
--- /dev/null
+++ b/opennmt/tests/config_test.py
@@ -0,0 +1,38 @@
+import os
+import yaml
+
+import tensorflow as tf
+
+from opennmt import config
+
+config_file_1 = "config_test_1.tmp"
+config_file_2 = "config_test_2.tmp"
+
+
+class ConfigTest(tf.test.TestCase):
+
+ def tearDown(self):
+ if os.path.isfile(config_file_1):
+ os.remove(config_file_1)
+ if os.path.isfile(config_file_2):
+ os.remove(config_file_2)
+
+
+ def testConfigOverride(self):
+ config1 = {"model_dir": "foo", "train": {"batch_size": 32, "steps": 42}}
+ config2 = {"model_dir": "bar", "train": {"batch_size": 64}}
+
+ with open(config_file_1, "w") as config_file:
+ config_file.write(yaml.dump(config1))
+ with open(config_file_2, "w") as config_file:
+ config_file.write(yaml.dump(config2))
+
+ loaded_config = config.load_config([config_file_1, config_file_2])
+
+ self.assertDictEqual(
+ {"model_dir": "bar", "train": {"batch_size": 64, "steps": 42}},
+ loaded_config)
+
+
+if __name__ == "__main__":
+ tf.test.main()
| AttributeError: 'str' object has no attribute 'update'
```
Traceback (most recent call last):
File "/home/soul/anaconda2/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/home/soul/anaconda2/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/soul/projects/opennmt-tf/OpenNMT-tf/bin/main.py", line 275, in <module>
main()
File "/home/soul/projects/opennmt-tf/OpenNMT-tf/bin/main.py", line 225, in main
config = load_config(args.config)
File "opennmt/config.py", line 48, in load_config
config[section].update(subconfig[section])
AttributeError: 'str' object has no attribute 'update'
```
The attribute that caused it was "model_dir", whose value was a string.
The config file that I used:
```
# The directory where models and summaries will be saved. It is created if it does not exist.
model_dir: enfr
data:
train_features_file: data/enfr/src-train.txt
train_labels_file: data/enfr/tgt-train.txt
eval_features_file: data/enfr/src-val.txt
eval_labels_file: data/enfr/tgt-val.txt
# (optional) Models may require additional resource files (e.g. vocabularies).
source_words_vocabulary: data/enfr/src-vocab.txt
target_words_vocabulary: data/enfr/tgt-vocab.txt
# Model and optimization parameters.
params:
# The optimizer class name in tf.train or tf.contrib.opt.
optimizer: AdamOptimizer
learning_rate: 0.1
# (optional) Maximum gradients norm (default: None).
clip_gradients: 5.0
# (optional) The type of learning rate decay (default: None). See:
# * https://www.tensorflow.org/versions/master/api_guides/python/train#Decaying_the_learning_rate
# * opennmt/utils/decay.py
# This value may change the semantics of other decay options. See the documentation or the code.
decay_type: exponential_decay
# (optional unless decay_type is set) The learning rate decay rate.
decay_rate: 0.9
# (optional unless decay_type is set) Decay every this many steps.
decay_steps: 10000
# (optional) If true, the learning rate is decayed in a staircase fashion (default: True).
staircase: true
# (optional) After how many steps to start the decay (default: 0).
start_decay_steps: 50000
# (optional) Stop decay when this learning rate value is reached (default: 0).
minimum_learning_rate: 0.0001
# (optional) Width of the beam search (default: 1).
beam_width: 5
# (optional) Length penalty weight to apply on hypotheses (default: 0).
length_penalty: 0.2
# (optional) Maximum decoding iterations before stopping (default: 250).
maximum_iterations: 200
# Training options.
train:
batch_size: 64
# (optional) Save a checkpoint every this many steps.
save_checkpoints_steps: 5000
# (optional) How many checkpoints to keep on disk.
keep_checkpoint_max: 3
# (optional) Save summaries every this many steps.
save_summary_steps: 100
# (optional) Train for this many steps. If not set, train forever.
train_steps: 1000000
# (optional) Evaluate every this many seconds (default: 3600).
eval_delay: 7200
# (optional) Save evaluation predictions in model_dir/eval/.
save_eval_predictions: false
# (optional) The maximum length of feature sequences during training (default: None).
maximum_features_length: 70
# (optional) The maximum length of label sequences during training (default: None).
maximum_labels_length: 70
# (optional) The number of buckets by sequence length to improve training efficiency (default: 5).
num_buckets: 5
# (optional) The number of threads to use for processing data in parallel (default: number of logical cores).
num_parallel_process_calls: 4
# (optional) The data pre-fetch buffer size, e.g. for shuffling examples (default: batch_size * 1000).
buffer_size: 10000
# (optional) Inference options.
infer:
# (optional) The batch size to use (default: 1).
batch_size: 10
# (optional) The number of threads to use for processing data in parallel (default: number of logical cores).
num_parallel_process_calls: 8
# (optional) The data pre-fetch buffer size when processing data in parallel (default: batch_size * 10).
buffer_size: 100
# (optional) For compatible models, the number of hypotheses to output (default: 1).
n_best: 1
```
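For reference, a minimal sketch of the merge behavior the patch above introduces (the helper name is illustrative, not part of the library): dict-valued sections are merged key by key, while scalar sections such as the `model_dir` string are simply overridden, matching the accompanying test.
```python
def merge_configs(config, subconfig):
    # Mirror of the fixed load_config logic: update dict sections in
    # place, overwrite everything else (e.g. the model_dir string).
    for section, value in subconfig.items():
        if section in config and isinstance(config[section], dict):
            config[section].update(value)
        else:
            config[section] = value
    return config

merged = merge_configs(
    {"model_dir": "foo", "train": {"batch_size": 32, "steps": 42}},
    {"model_dir": "bar", "train": {"batch_size": 64}})
assert merged == {"model_dir": "bar", "train": {"batch_size": 64, "steps": 42}}
```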
| 2017-11-04T10:00:48 |
|
OpenNMT/OpenNMT-tf | 29 | OpenNMT__OpenNMT-tf-29 | [
"5"
] | c2a0356b73431de056b3791a02381ee4e7fdafd5 | diff --git a/bin/main.py b/bin/main.py
--- a/bin/main.py
+++ b/bin/main.py
@@ -10,6 +10,7 @@
import tensorflow as tf
from opennmt.utils import hooks
+from opennmt.utils.evaluator import external_evaluation_fn
from opennmt.config import load_model_module, load_config
@@ -92,7 +93,14 @@ def train(estimator, model, config):
if not os.path.isdir(save_path):
os.makedirs(save_path)
eval_hooks.append(hooks.SaveEvaluationPredictionHook(
- model, os.path.join(save_path, "predictions.txt")))
+ model,
+ os.path.join(save_path, "predictions.txt"),
+ post_evaluation_fn=external_evaluation_fn(
+ config["train"].get("external_evaluators"),
+ config["data"]["eval_labels_file"],
+ output_dir=estimator.model_dir)))
+ elif config["train"].get("external_evaluators") is not None:
+ tf.logging.warning("External evaluators only work when save_eval_predictions is enabled.")
train_spec = tf.estimator.TrainSpec(
input_fn=model.input_fn(
diff --git a/opennmt/utils/evaluator.py b/opennmt/utils/evaluator.py
new file mode 100644
--- /dev/null
+++ b/opennmt/utils/evaluator.py
@@ -0,0 +1,121 @@
+"""Evaluation related classes and functions."""
+
+import subprocess
+
+import abc
+import re
+import six
+
+import tensorflow as tf
+
+from tensorflow.python.summary.writer.writer_cache import FileWriterCache as SummaryWriterCache
+
+
[email protected]_metaclass(abc.ABCMeta)
+class ExternalEvaluator(object):
+ """Base class for external evaluators."""
+
+ def __init__(self, labels_file=None, output_dir=None):
+ self._labels_file = labels_file
+ self._summary_writer = None
+
+ if output_dir is not None:
+ self._summary_writer = SummaryWriterCache.get(output_dir)
+
+ def __call__(self, step, predictions_path):
+ """Scores the predictions and logs the result.
+
+ Args:
+ step: The step at which this evaluation occurs.
+ predictions_path: The path to the saved predictions.
+ """
+ score = self.score(self._labels_file, predictions_path)
+ if score is None:
+ return
+ if self._summary_writer is not None:
+ self._summarize_score(step, score)
+ self._log_score(score)
+
+ # Some evaluators may return several scores so let them the ability to
+ # define how to log the score result.
+
+ def _summarize_score(self, step, score):
+ summary = tf.Summary(value=[tf.Summary.Value(
+ tag="external_evaluation/{}".format(self.name()), simple_value=score)])
+ self._summary_writer.add_summary(summary, step)
+
+ def _log_score(self, score):
+ tf.logging.info("%s evaluation score: %f", self.name(), score)
+
+ @abc.abstractproperty
+ def name(self):
+ """Returns the name of this evaluator."""
+ raise NotImplementedError()
+
+ @abc.abstractmethod
+ def score(self, labels_file, predictions_path):
+ """Scores the predictions against the true output labels."""
+ raise NotImplementedError()
+
+
+class BLEUEvaluator(ExternalEvaluator):
+ """Evaluator calling multi-bleu.perl."""
+
+ def name(self):
+ return "BLEU"
+
+ def score(self, labels_file, predictions_path):
+ try:
+ with open(predictions_path, "r") as predictions_file:
+ bleu_out = subprocess.check_output(
+ ["third_party/multi-bleu.perl", labels_file],
+ stdin=predictions_file,
+ stderr=subprocess.STDOUT)
+ bleu_out = bleu_out.decode("utf-8")
+ bleu_score = re.search(r"BLEU = (.+?),", bleu_out).group(1)
+ return float(bleu_score)
+ except subprocess.CalledProcessError as error:
+ if error.output is not None:
+ msg = error.output.strip()
+ tf.logging.warning(
+ "multi-bleu.perl script returned non-zero exit code: {}".format(msg))
+ return None
+
+
+def external_evaluation_fn(evaluators_name, labels_file, output_dir=None):
+ """Returns a callable to be used in
+ :class:`opennmt.utils.hooks.SaveEvaluationPredictionHook` that calls one or
+ more external evaluators.
+
+ Args:
+ evaluators_name: An evaluator name or a list of evaluators name.
+ labels_file: The true output labels.
+ output_dir: The run directory.
+
+ Returns:
+ A callable or ``None`` if :obj:`evaluators_name` is ``None`` or empty.
+
+ Raises:
+ ValueError: if an evaluator name is invalid.
+ """
+ if evaluators_name is None:
+ return None
+ if not isinstance(evaluators_name, list):
+ evaluators_name = [evaluators_name]
+ if not evaluators_name:
+ return None
+
+ evaluators = []
+ for name in evaluators_name:
+ name = name.lower()
+ if name == "bleu":
+ evaluator = BLEUEvaluator(labels_file=labels_file, output_dir=output_dir)
+ else:
+ raise ValueError("No evaluator associated with the name: {}".format(name))
+ evaluators.append(evaluator)
+
+ def _post_evaluation_fn(step, predictions_path):
+ for evaluator in evaluators:
+ evaluator(step, predictions_path)
+
+ return _post_evaluation_fn
diff --git a/opennmt/utils/hooks.py b/opennmt/utils/hooks.py
--- a/opennmt/utils/hooks.py
+++ b/opennmt/utils/hooks.py
@@ -82,8 +82,8 @@ def __init__(self, model, output_file, post_evaluation_fn=None):
model: The model for which to save the evaluation predictions.
output_file: The output filename which will be suffixed by the current
training step.
- post_evaluation_fn: (optional) A callable that takes as argument the file
- with the saved predictions.
+ post_evaluation_fn: (optional) A callable that takes as argument the
+ current step and the file with the saved predictions.
"""
self._model = model
self._output_file = output_file
@@ -101,8 +101,8 @@ def before_run(self, run_context): # pylint: disable=unused-argument
return tf.train.SessionRunArgs([self._predictions, self._global_step])
def after_run(self, run_context, run_values): # pylint: disable=unused-argument
- predictions, step = run_values.results
- self._output_path = "{}.{}".format(self._output_file, step)
+ predictions, self._current_step = run_values.results
+ self._output_path = "{}.{}".format(self._output_file, self._current_step)
with open(self._output_path, "a") as output_file:
for prediction in misc.extract_batches(predictions):
self._model.print_prediction(prediction, stream=output_file)
@@ -110,4 +110,4 @@ def after_run(self, run_context, run_values): # pylint: disable=unused-argument
def end(self, session):
tf.logging.info("Evaluation predictions saved to %s", self._output_path)
if self._post_evaluation_fn is not None:
- self._post_evaluation_fn(self._output_path)
+ self._post_evaluation_fn(self._current_step, self._output_path)
| diff --git a/opennmt/tests/evaluator_test.py b/opennmt/tests/evaluator_test.py
new file mode 100644
--- /dev/null
+++ b/opennmt/tests/evaluator_test.py
@@ -0,0 +1,15 @@
+import tensorflow as tf
+
+from opennmt.utils import evaluator
+
+
+class EvaluatorTest(tf.test.TestCase):
+
+ def testBLEUEvaluator(self):
+ bleu_evaluator = evaluator.BLEUEvaluator()
+ score = bleu_evaluator.score("data/toy-ende/tgt-val.txt", "data/toy-ende/tgt-val.txt")
+ self.assertEqual(100.0, score)
+
+
+if __name__ == "__main__":
+ tf.test.main()
| Add BLEU evaluation metric
Ideally, this metric should be compatible with [`tf.metrics`](https://www.tensorflow.org/api_docs/python/tf/metrics) for seamless integration in the training flow. Otherwise, we could rely on the `opennmt.utils.hook.SaveEvaluationPredictionHook` hook for external evaluation.
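For reference, the evaluator eventually added by this PR can also be invoked directly; the usage below is taken from the accompanying test, and it must be run from the repository root since the scorer shells out to `third_party/multi-bleu.perl`:
```python
from opennmt.utils.evaluator import BLEUEvaluator

# Scoring the reference file against itself yields a BLEU score of 100.0.
bleu = BLEUEvaluator()
score = bleu.score("data/toy-ende/tgt-val.txt", "data/toy-ende/tgt-val.txt")
```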
| I'm sure you're aware of it, so what's wrong with this implementation:
https://github.com/tensorflow/tensor2tensor/blob/a55c4cf57e509a0caaf2079b11ca3a46c9e17636/tensor2tensor/utils/bleu_hook.py
?
Yes, I saw that. I have to look at it more closely, but ideally we would like a scorer that gives the same result as `multi-bleu.perl`.
That's why I think it may be better to have an external evaluation by saving the validation translation (which is already implemented) and running `multi-bleu.perl` on it. | 2017-11-27T15:10:03 |
OpenNMT/OpenNMT-tf | 36 | OpenNMT__OpenNMT-tf-36 | [
"35"
] | 38143006a89f4b77bb6c1c9ff2bd2cf69b9996f5 | diff --git a/opennmt/utils/misc.py b/opennmt/utils/misc.py
--- a/opennmt/utils/misc.py
+++ b/opennmt/utils/misc.py
@@ -37,7 +37,7 @@ def item_or_tuple(x):
def count_lines(filename):
"""Returns the number of lines of the file :obj:`filename`."""
- with open(filename) as f:
+ with open(filename, "rb") as f:
i = 0
for i, _ in enumerate(f):
pass
| File reading unicode error
When trying the quickstart example, I faced an error which is regarding file opening in
`utils\misc.py`
It got resolved once I changed
```python
line 40: with open(filename) as f:
to
line 40: with open(filename, encoding="utf8") as f:
```
I'll open a pull request with the fix.
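Worth noting: the fix that was actually merged (diff at the top of this record) opens the file in binary mode instead, which avoids the platform-dependent default encoding altogether. A self-contained sketch of that approach; the code after the loop is inferred, since the diff truncates the function body:
```python
def count_lines(filename):
    # Counting lines on raw bytes means the locale's default encoding
    # (cp1252 on this Windows setup) never comes into play.
    with open(filename, "rb") as f:
        i = 0
        for i, _ in enumerate(f):
            pass
        return i + 1
```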
**Windows, py3.6, tf1.4**
`python -m bin.main train --model config/models/nmt_small.py --config config/opennmt-defaults.yml config/data/toy-ende.yml`
```bash
INFO:tensorflow:Using config: {'_model_dir': 'toy-ende', '_tf_random_seed': None, '_save_summary_steps': 50, '_save_checkpoints_steps': 5000, '_save_checkpoints_secs': None, '_session_config': gpu_options {
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 50, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x000002213F038F60>, '_task_type': 'worker', '_task_id': 0, '_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after 18000 secs (eval_spec.throttle_secs) or training is finished.
Traceback (most recent call last):
File "C:\Users\Ayush\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\Ayush\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Ayush\Projects\OpenNMT-tf\bin\main.py", line 308, in <module>
main()
File "C:\Users\Ayush\Projects\OpenNMT-tf\bin\main.py", line 290, in main
train(estimator, model, config)
File "C:\Users\Ayush\Projects\OpenNMT-tf\bin\main.py", line 135, in train
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
File "C:\Users\Ayush\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\estimator\training.py", line 430, in train_and_evaluate
executor.run_local()
File "C:\Users\Ayush\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\estimator\training.py", line 609, in run_local
hooks=train_hooks)
File "C:\Users\Ayush\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\estimator\estimator.py", line 302, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "C:\Users\Ayush\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\estimator\estimator.py", line 708, in _train_model
input_fn, model_fn_lib.ModeKeys.TRAIN)
File "C:\Users\Ayush\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\estimator\estimator.py", line 577, in _get_features_and_labels_from_input_fn
result = self._call_input_fn(input_fn, mode)
File "C:\Users\Ayush\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\estimator\estimator.py", line 663, in _call_input_fn
return input_fn(**kwargs)
File "C:\Users\Ayush\Projects\OpenNMT-tf\opennmt\models\model.py", line 515, in <lambda>
maximum_labels_length=maximum_labels_length)
File "C:\Users\Ayush\Projects\OpenNMT-tf\opennmt\models\model.py", line 374, in _input_fn_impl
self._initialize(metadata)
File "C:\Users\Ayush\Projects\OpenNMT-tf\opennmt\models\sequence_to_sequence.py", line 93, in _initialize
self.source_inputter.initialize(metadata)
File "C:\Users\Ayush\Projects\OpenNMT-tf\opennmt\inputters\text_inputter.py", line 304, in initialize
self.vocabulary_size = count_lines(self.vocabulary_file) + self.num_oov_buckets
File "C:\Users\Ayush\Projects\OpenNMT-tf\opennmt\utils\misc.py", line 42, in count_lines
for i, _ in enumerate(f):
File "C:\Users\Ayush\AppData\Local\Programs\Python\Python36\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 5597: character maps to <undefined>
```
| 2017-12-01T19:04:13 |
||
OpenNMT/OpenNMT-tf | 52 | OpenNMT__OpenNMT-tf-52 | [
"51"
] | ac77c2060c141816fb0f4fbc6762730ed3d00a04 | diff --git a/bin/average_checkpoints.py b/bin/average_checkpoints.py
new file mode 100644
--- /dev/null
+++ b/bin/average_checkpoints.py
@@ -0,0 +1,86 @@
+"""Checkpoint averaging script."""
+
+# This script is modified version of
+# https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/bin/t2t_avg_all.py
+# which comes with the following license and copyright notice:
+
+# Copyright 2017 The Tensor2Tensor Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import argparse
+
+import tensorflow as tf
+import numpy as np
+
+
+def main():
+ tf.logging.set_verbosity(tf.logging.INFO)
+
+ parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+ parser.add_argument("--model_dir", required=True,
+ help="The model directory containing the checkpoints.")
+ parser.add_argument("--output_dir", required=True,
+ help="The output directory where the averaged checkpoint will be saved.")
+ parser.add_argument("--max_count", default=8,
+ help="The maximal number of checkpoints to average.")
+ args = parser.parse_args()
+
+ if args.model_dir == args.output_dir:
+ raise ValueError("Model and output directory must be different")
+
+ checkpoints_path = tf.train.get_checkpoint_state(args.model_dir).all_model_checkpoint_paths
+ if len(checkpoints_path) > args.max_count:
+ checkpoints_path = checkpoints_path[-args.max_count:]
+ num_checkpoints = len(checkpoints_path)
+
+ tf.logging.info("Averaging %d checkpoints..." % num_checkpoints)
+ tf.logging.info("Listing variables...")
+
+ var_list = tf.train.list_variables(checkpoints_path[0])
+ avg_values = {}
+ for name, shape in var_list:
+ if not name.startswith("global_step"):
+ avg_values[name] = np.zeros(shape)
+
+ for checkpoint_path in checkpoints_path:
+ tf.logging.info("Loading checkpoint %s" % checkpoint_path)
+ reader = tf.train.load_checkpoint(checkpoint_path)
+ for name in avg_values:
+ avg_values[name] += reader.get_tensor(name) / num_checkpoints
+
+ tf_vars = []
+ for name, value in avg_values.items():
+ tf_vars.append(tf.get_variable(name, shape=value.shape))
+ placeholders = [tf.placeholder(v.dtype, shape=v.shape) for v in tf_vars]
+ assign_ops = [tf.assign(v, p) for (v, p) in zip(tf_vars, placeholders)]
+
+ latest_step = int(checkpoints_path[-1].split("-")[-1])
+ out_base_file = os.path.join(args.output_dir, "model.ckpt")
+ global_step = tf.get_variable(
+ "global_step",
+ initializer=tf.constant(latest_step, dtype=tf.int64),
+ trainable=False)
+ saver = tf.train.Saver(tf.global_variables())
+
+ with tf.Session() as sess:
+ sess.run(tf.global_variables_initializer())
+ for p, assign_op, (name, value) in zip(placeholders, assign_ops, avg_values.items()):
+ sess.run(assign_op, {p: value})
+ tf.logging.info("Saving averaged checkpoint to %s-%d" % (out_base_file, latest_step))
+ saver.save(sess, out_base_file, global_step=global_step)
+
+
+if __name__ == "__main__":
+ main()
| Model averaging
Same as here: https://github.com/OpenNMT/OpenNMT-py/issues/514
It brings at least 1 BLEU point on T2T.
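The core of the script above, reduced to a framework-free sketch: average each variable's value across the last N checkpoints and save the result as a new checkpoint. The dictionaries below stand in for checkpoint readers.
```python
import numpy as np

# Illustrative stand-ins: each dict maps a variable name to its value
# at one checkpoint.
checkpoints = [
    {"w": np.array([1.0, 2.0]), "b": np.array([0.0])},
    {"w": np.array([3.0, 4.0]), "b": np.array([1.0])},
]
averaged = {
    name: sum(ckpt[name] for ckpt in checkpoints) / len(checkpoints)
    for name in checkpoints[0]
}
# averaged["w"] -> array([2., 3.]), averaged["b"] -> array([0.5])
```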
| 2018-01-18T11:07:47 |
||
OpenNMT/OpenNMT-tf | 138 | OpenNMT__OpenNMT-tf-138 | [
"134"
] | 5ff8394ce0ca73dcb6e5320d4b9cfb59d1077e33 | diff --git a/opennmt/bin/average_checkpoints.py b/opennmt/bin/average_checkpoints.py
--- a/opennmt/bin/average_checkpoints.py
+++ b/opennmt/bin/average_checkpoints.py
@@ -1,29 +1,10 @@
"""Checkpoint averaging script."""
-# This script is modified version of
-# https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/bin/t2t_avg_all.py
-# which comes with the following license and copyright notice:
-
-# Copyright 2017 The Tensor2Tensor Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import os
import argparse
-import six
import tensorflow as tf
-import numpy as np
+
+from opennmt.utils.checkpoint import average_checkpoints
def main():
@@ -37,50 +18,7 @@ def main():
parser.add_argument("--max_count", type=int, default=8,
help="The maximal number of checkpoints to average.")
args = parser.parse_args()
-
- if args.model_dir == args.output_dir:
- raise ValueError("Model and output directory must be different")
-
- checkpoints_path = tf.train.get_checkpoint_state(args.model_dir).all_model_checkpoint_paths
- if len(checkpoints_path) > args.max_count:
- checkpoints_path = checkpoints_path[-args.max_count:]
- num_checkpoints = len(checkpoints_path)
-
- tf.logging.info("Averaging %d checkpoints..." % num_checkpoints)
- tf.logging.info("Listing variables...")
-
- var_list = tf.train.list_variables(checkpoints_path[0])
- avg_values = {}
- for name, shape in var_list:
- if not name.startswith("global_step"):
- avg_values[name] = np.zeros(shape)
-
- for checkpoint_path in checkpoints_path:
- tf.logging.info("Loading checkpoint %s" % checkpoint_path)
- reader = tf.train.load_checkpoint(checkpoint_path)
- for name in avg_values:
- avg_values[name] += reader.get_tensor(name) / num_checkpoints
-
- tf_vars = []
- for name, value in six.iteritems(avg_values):
- tf_vars.append(tf.get_variable(name, shape=value.shape))
- placeholders = [tf.placeholder(v.dtype, shape=v.shape) for v in tf_vars]
- assign_ops = [tf.assign(v, p) for (v, p) in zip(tf_vars, placeholders)]
-
- latest_step = int(checkpoints_path[-1].split("-")[-1])
- out_base_file = os.path.join(args.output_dir, "model.ckpt")
- global_step = tf.get_variable(
- "global_step",
- initializer=tf.constant(latest_step, dtype=tf.int64),
- trainable=False)
- saver = tf.train.Saver(tf.global_variables())
-
- with tf.Session() as sess:
- sess.run(tf.global_variables_initializer())
- for p, assign_op, (name, value) in zip(placeholders, assign_ops, six.iteritems(avg_values)):
- sess.run(assign_op, {p: value})
- tf.logging.info("Saving averaged checkpoint to %s-%d" % (out_base_file, latest_step))
- saver.save(sess, out_base_file, global_step=global_step)
+ average_checkpoints(args.model_dir, args.output_dir, max_count=args.max_count)
if __name__ == "__main__":
diff --git a/opennmt/runner.py b/opennmt/runner.py
--- a/opennmt/runner.py
+++ b/opennmt/runner.py
@@ -10,7 +10,7 @@
from tensorflow.python.estimator.util import fn_args
-from opennmt.utils import hooks
+from opennmt.utils import hooks, checkpoint
from opennmt.utils.evaluator import external_evaluation_fn
from opennmt.utils.misc import extract_batches, print_bytes
@@ -156,6 +156,22 @@ def evaluate(self, checkpoint_path=None):
self._estimator.evaluate(
eval_spec.input_fn, hooks=eval_spec.hooks, checkpoint_path=checkpoint_path)
+ def average_checkpoints(self, output_dir, max_count=8):
+ """Averages checkpoints.
+
+ Args:
+ output_dir: The directory that will contain the averaged checkpoint.
+ max_count: The maximum number of checkpoints to average.
+
+ Returns:
+ The path to the directory containing the averaged checkpoint.
+ """
+ return checkpoint.average_checkpoints(
+ self._estimator.model_dir,
+ output_dir,
+ max_count=max_count,
+ session_config=self._estimator.config.session_config)
+
def infer(self,
features_file,
predictions_file=None,
diff --git a/opennmt/utils/checkpoint.py b/opennmt/utils/checkpoint.py
new file mode 100644
--- /dev/null
+++ b/opennmt/utils/checkpoint.py
@@ -0,0 +1,86 @@
+"""Checkpoint utilities."""
+
+import os
+import six
+
+import tensorflow as tf
+import numpy as np
+
+
+def average_checkpoints(model_dir, output_dir, max_count=8, session_config=None):
+ """Averages checkpoints.
+
+ Args:
+ model_dir: The directory containing checkpoints.
+ output_dir: The directory that will contain the averaged checkpoint.
+ max_count: The maximum number of checkpoints to average.
+ session_config: Configuration to use when creating the session.
+
+ Returns:
+ The path to the directory containing the averaged checkpoint.
+
+ Raises:
+ ValueError: if :obj:`output_dir` is the same as :obj:`model_dir`.
+ """
+ # This script is modified version of
+ # https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/bin/t2t_avg_all.py
+ # which comes with the following license and copyright notice:
+
+ # Copyright 2017 The Tensor2Tensor Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ if model_dir == output_dir:
+ raise ValueError("Model and output directory must be different")
+
+ checkpoints_path = tf.train.get_checkpoint_state(model_dir).all_model_checkpoint_paths
+ if len(checkpoints_path) > max_count:
+ checkpoints_path = checkpoints_path[-max_count:]
+ num_checkpoints = len(checkpoints_path)
+
+ tf.logging.info("Averaging %d checkpoints..." % num_checkpoints)
+ tf.logging.info("Listing variables...")
+
+ var_list = tf.train.list_variables(checkpoints_path[0])
+ avg_values = {}
+ for name, shape in var_list:
+ if not name.startswith("global_step"):
+ avg_values[name] = np.zeros(shape)
+
+ for checkpoint_path in checkpoints_path:
+ tf.logging.info("Loading checkpoint %s" % checkpoint_path)
+ reader = tf.train.load_checkpoint(checkpoint_path)
+ for name in avg_values:
+ avg_values[name] += reader.get_tensor(name) / num_checkpoints
+
+ tf_vars = []
+ for name, value in six.iteritems(avg_values):
+ tf_vars.append(tf.get_variable(name, shape=value.shape))
+ placeholders = [tf.placeholder(v.dtype, shape=v.shape) for v in tf_vars]
+ assign_ops = [tf.assign(v, p) for (v, p) in zip(tf_vars, placeholders)]
+
+ latest_step = int(checkpoints_path[-1].split("-")[-1])
+ out_base_file = os.path.join(output_dir, "model.ckpt")
+ global_step = tf.get_variable(
+ "global_step",
+ initializer=tf.constant(latest_step, dtype=tf.int64),
+ trainable=False)
+ saver = tf.train.Saver(tf.global_variables())
+
+ with tf.Session(config=session_config) as sess:
+ sess.run(tf.global_variables_initializer())
+ for p, assign_op, (name, value) in zip(placeholders, assign_ops, six.iteritems(avg_values)):
+ sess.run(assign_op, {p: value})
+ tf.logging.info("Saving averaged checkpoint to %s-%d" % (out_base_file, latest_step))
+ saver.save(sess, out_base_file, global_step=global_step)
+
+ return output_dir
| Expose APIs for checkpoint averaging
Expose an API endpoint to average checkpoints in a model directory and use it in `Runner`.
| 2018-05-26T13:58:05 |
||
OpenNMT/OpenNMT-tf | 141 | OpenNMT__OpenNMT-tf-141 | [
"135"
] | a7014ccf26e60d871774ec037b55561438458d17 | diff --git a/opennmt/optimizers/__init__.py b/opennmt/optimizers/__init__.py
--- a/opennmt/optimizers/__init__.py
+++ b/opennmt/optimizers/__init__.py
@@ -3,3 +3,5 @@
from opennmt.optimizers.adafactor import AdafactorOptimizer
from opennmt.optimizers.adafactor import get_optimizer_from_params \
as get_adafactor_optimizer_from_params
+
+from opennmt.optimizers.multistep_adam import MultistepAdamOptimizer
diff --git a/opennmt/optimizers/multistep_adam.py b/opennmt/optimizers/multistep_adam.py
new file mode 100644
--- /dev/null
+++ b/opennmt/optimizers/multistep_adam.py
@@ -0,0 +1,142 @@
+# coding=utf-8
+# Copyright 2018 The Tensor2Tensor Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Changes:
+# * raise exception on incompatible TensorFlow version
+# * fix Pylint warnings
+
+"""Optimizer variants which make it possible to use very large batch sizes with
+limited GPU memory. Optimizers in this module accumulate the gradients for n
+batches, and call the optimizer's update rule every n batches with the
+accumulated gradients.
+See [Saunders et al., 2018](https://arxiv.org/abs/1805.00456) for details.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+# Dependency imports
+
+import tensorflow as tf
+
+
+class MultistepAdamOptimizer(tf.train.AdamOptimizer):
+ """Adam with SGD updates every n steps with accumulated gradients."""
+
+ def __init__(self, learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8,
+ use_locking=False, name="Adam", n=1):
+ super(MultistepAdamOptimizer, self).__init__(
+ learning_rate=learning_rate, beta1=beta1, beta2=beta2, epsilon=epsilon,
+ use_locking=use_locking, name=name)
+ if not hasattr(self, "_create_non_slot_variable"):
+ raise RuntimeError("MultistepAdamOptimizer requires TensorFlow 1.6+")
+ self._n = n # Call Adam optimizer every n batches with accumulated grads
+ self._n_t = None # n as tensor
+
+ def _create_slots(self, var_list):
+ """Create slot variables for Adam with accumulated gradients.
+
+ Like super class method, but additionally creates slots for the gradient
+ accumulator `acc_grad` and the counter variable.
+ """
+ super(MultistepAdamOptimizer, self)._create_slots(var_list)
+ first_var = min(var_list, key=lambda x: x.name)
+ self._create_non_slot_variable(initial_value=0 if self._n == 1 else 1,
+ name="iter",
+ colocate_with=first_var)
+ for v in var_list:
+ self._zeros_slot(v, "grad_acc", self._name)
+
+ def _get_iter_variable(self):
+ if tf.contrib.eager.in_eager_mode():
+ graph = None
+ else:
+ graph = tf.get_default_graph()
+ return self._get_non_slot_variable("iter", graph=graph)
+
+ def _prepare(self):
+ super(MultistepAdamOptimizer, self)._prepare()
+ self._n_t = tf.convert_to_tensor(self._n, name="n")
+
+ def _apply_cond(self, apply_fn, grad, var, *args, **kwargs):
+ """Conditionally apply or accumulate gradient.
+
+    Call `apply_fn` only if the current counter value (iter) is zero. This
+ method couples common functionality for all _apply_*() implementations
+ in Adam.
+ """
+ grad_acc = self.get_slot(var, "grad_acc")
+
+ def _apply_adam(grad_acc, apply_fn, grad, var, *args, **kwargs):
+ total_grad = (grad_acc + grad) / tf.cast(self._n_t, grad.dtype)
+ adam_op = apply_fn(total_grad, var, *args, **kwargs)
+ with tf.control_dependencies([adam_op]):
+ grad_acc_to_zero_op = grad_acc.assign(tf.zeros_like(grad_acc),
+ use_locking=self._use_locking)
+ return tf.group(adam_op, grad_acc_to_zero_op)
+
+ def _accumulate_gradient(grad_acc, grad):
+ assign_op = tf.assign_add(grad_acc, grad, use_locking=self._use_locking)
+ return tf.group(assign_op) # Strip return value
+
+ return tf.cond(tf.equal(self._get_iter_variable(), 0),
+ lambda: _apply_adam(grad_acc, apply_fn, grad, var, *args, **kwargs),
+ lambda: _accumulate_gradient(grad_acc, grad))
+
+ def _apply_dense(self, grad, var):
+ return self._apply_cond(
+ super(MultistepAdamOptimizer, self)._apply_dense, grad, var)
+
+ def _resource_apply_dense(self, grad, var):
+ return self._apply_cond(
+ super(MultistepAdamOptimizer, self)._resource_apply_dense, grad, var)
+
+ def _apply_sparse_shared(self, grad, var, indices, scatter_add):
+ return self._apply_cond(
+ super(MultistepAdamOptimizer, self)._apply_sparse_shared, grad, var,
+ indices, scatter_add)
+
+ def _apply_sparse(self, grad, var):
+ # TODO: Implement a sparse version
+ dense_grad = tf.convert_to_tensor(grad)
+ return self._apply_cond(
+ super(MultistepAdamOptimizer, self)._apply_dense, dense_grad, var)
+
+ def _finish(self, update_ops, name_scope):
+ """Like super class method, but updates beta_power variables only every
+ n batches. The iter variable is updated with
+
+ iter <- iter + 1 mod n
+ """
+ iter_ = self._get_iter_variable()
+ beta1_power, beta2_power = self._get_beta_accumulators()
+ with tf.control_dependencies(update_ops):
+ with tf.colocate_with(iter_):
+
+ def _update_beta_op():
+ update_beta1 = beta1_power.assign(
+ beta1_power * self._beta1_t,
+ use_locking=self._use_locking)
+ update_beta2 = beta2_power.assign(
+ beta2_power * self._beta2_t,
+ use_locking=self._use_locking)
+ return tf.group(update_beta1, update_beta2)
+ maybe_update_beta = tf.cond(tf.equal(iter_, 0), _update_beta_op, tf.no_op)
+ with tf.control_dependencies([maybe_update_beta]):
+ update_iter = iter_.assign(tf.mod(iter_ + 1, self._n_t),
+ use_locking=self._use_locking)
+ return tf.group(
+ *update_ops + [update_iter, maybe_update_beta], name=name_scope)
| Accumulate gradients to simulate large batch size training
Introduce a multi-step optimizer that accumulates gradients for N steps. Mostly useful for training Transformer models.
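A minimal training sketch, assuming TensorFlow 1.6+ in graph mode (the variable and loss below are toy stand-ins, not part of this PR):
```python
import tensorflow as tf
from opennmt.optimizers import MultistepAdamOptimizer

x = tf.Variable(1.0)
loss = tf.square(x)

# With n=4, gradients are accumulated over 4 batches and the Adam update rule
# is applied once with the averaged gradient, simulating a 4x larger batch.
optimizer = MultistepAdamOptimizer(learning_rate=0.001, n=4)
train_op = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(8):  # the Adam update only fires on every 4th call
        sess.run(train_op)
```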
| 2018-05-31T08:19:39 |
||
OpenNMT/OpenNMT-tf | 153 | OpenNMT__OpenNMT-tf-153 | [
"152"
] | b0405265c42690c001007b4f1d215bb1682ed5fc | diff --git a/opennmt/inputters/text_inputter.py b/opennmt/inputters/text_inputter.py
--- a/opennmt/inputters/text_inputter.py
+++ b/opennmt/inputters/text_inputter.py
@@ -376,12 +376,12 @@ def transform(self, inputs, mode):
case_insensitive_embeddings=self.case_insensitive_embeddings)
self.embedding_size = pretrained.shape[-1]
- shape = None
- initializer = tf.constant(pretrained.astype(self.dtype.as_numpy_dtype()))
+ initializer = tf.constant_initializer(
+ pretrained.astype(self.dtype.as_numpy_dtype()), dtype=self.dtype)
else:
- shape = [self.vocabulary_size, self.embedding_size]
initializer = None
+ shape = [self.vocabulary_size, self.embedding_size]
embeddings = tf.get_variable(
"w_embs",
shape=shape,
| Error when using pre-trained word embedding features
System configuration:
Tensorflow : 1.8.0
OpenNMT-tf: 1.5.0
I use pre-trained word embeddings for training a Transformer. Training only lasts for one epoch and fails when the model is constructed for the next one. Here is my code; the "embedding_file_key" and "embedding_file_with_header" options caused the error:
```
class Transformer(onmt.models.Transformer):
  """Defines a Transformer model as described in https://arxiv.org/abs/1706.03762."""
  def __init__(self):
    super(Transformer, self).__init__(
        source_inputter=onmt.inputters.WordEmbedder(
            vocabulary_file_key="source_words_vocabulary",
            embedding_file_key="source_words_embeddings",
            embedding_file_with_header=False,
            embedding_size=512),
        target_inputter=onmt.inputters.WordEmbedder(
            vocabulary_file_key="target_words_vocabulary",
            embedding_file_key="target_words_embeddings",
            embedding_file_with_header=False,
            embedding_size=512),
        num_layers=6,
        num_units=512,
        num_heads=8,
        ffn_inner_dim=2048,
        dropout=0.1,
        attention_dropout=0.1,
        relu_dropout=0.1)

model = Transformer()
runner = Runner(
    model,
    config,
    seed=None,
    num_devices=1,
    gpu_allow_growth=True)
runner.train_and_evaluate()
```
My config file is almost the same as [OpenNMT-tf/scripts/wmt/config/wmt_ende.yml](https://github.com/OpenNMT/OpenNMT-tf/blob/master/scripts/wmt/config/wmt_ende.yml), except for the added "source_words_embeddings" and "target_words_embeddings":
```
# The directory where models and summaries will be saved. It is created if it does not exist.
#model_dir: /home/lgy/deepModels/tf_models/wmt_ende_transformer_4gpu_lr2_ws8000_dur2_0.998
model_dir: /home/lgy/deepModels/tf_models
data:
  train_features_file: /home/lgy/data/wmt/wmt14-de-en/amax/train.en
  train_labels_file: /home/lgy/data/wmt/wmt14-de-en/amax/train.de
  eval_features_file: /home/lgy/data/wmt/wmt14-de-en/amax/valid.en
  eval_labels_file: /home/lgy/data/wmt/wmt14-de-en/amax/valid.de
  source_words_vocabulary: /home/lgy/data/wmt/wmt14-de-en/amax/wmtende.vocab
  target_words_vocabulary: /home/lgy/data/wmt/wmt14-de-en/amax/wmtende.vocab
  source_words_embeddings: /home/lgy/deepModels/tf_models/wmt_ende_transformer_4gpu_lr2_ws8000_dur2_0.998/compositional_encode_M32K16.txt
  target_words_embeddings: /home/lgy/deepModels/tf_models/wmt_ende_transformer_4gpu_lr2_ws8000_dur2_0.998/compositional_decode_M32K16.txt
```
I can run the first epoch successfully, but it fails when the model is constructed for the next epoch. **It should be noted that if I delete the "embedding_file_key" and "embedding_file_with_header" options in the code, it runs successfully without error.** Here is the error message:
```
INFO:tensorflow:Saving checkpoints for 16221 into /home/lgy/deepModels/tf_models/wmt_ende_transformer_1gpu_lr2_ws8000_dur2_0.998_compositionalencoding_M64K16/model.ckpt.
INFO:tensorflow:Loss for final step: 3.9958026.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2018-06-14-15:33:54
INFO:tensorflow:Graph was finalized.
2018-06-14 23:33:54.898526: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-06-14 23:33:54.898579: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-06-14 23:33:54.898588: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0
2018-06-14 23:33:54.898594: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N
2018-06-14 23:33:54.898894: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15127 MB memory) -> physical GPU (device: 0, name: Tesla P100-SXM2-16GB, pci bus id: 0000:89:00.0, compute capability: 6.0)
INFO:tensorflow:Restoring parameters from /home/lgy/deepModels/tf_models/wmt_ende_transformer_1gpu_lr2_ws8000_dur2_0.998_compositionalencoding_M64K16/model.ckpt-16221
INFO:tensorflow:Running local_init_op.
2018-06-14 23:33:55.571416: I tensorflow/core/kernels/lookup_util.cc:373] Table trying to initialize from file /home/lgy/data/wmt/wmt14-de-en/amax/wmtende.vocab is already initialized.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Evaluation predictions saved to /home/lgy/deepModels/tf_models/wmt_ende_transformer_1gpu_lr2_ws8000_dur2_0.998_compositionalencoding_M64K16/eval/predictions.txt.16221
INFO:tensorflow:BLEU evaluation score: 15.020000
INFO:tensorflow:Finished evaluation at 2018-06-14-15:39:44
INFO:tensorflow:Saving dict for global step 16221: global_step = 16221, loss = 3.0185592
INFO:tensorflow:Calling model_fn.
Traceback (most recent call last):
File "tmp_train_transformer.py", line 74, in <module>
runner.train_and_evaluate()
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/OpenNMT_tf-1.5.0-py2.7.egg/opennmt/runner.py", line 148, in train_and_evaluate
tf.estimator.train_and_evaluate(self._estimator, train_spec, eval_spec)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/estimator/training.py", line 439, in train_and_evaluate
executor.run()
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/estimator/training.py", line 518, in run
self.run_local()
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/estimator/training.py", line 657, in run_local
eval_result = evaluator.evaluate_and_export()
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/estimator/training.py", line 858, in evaluate_and_export
self._export_eval_result(eval_result, is_the_final_export)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/estimator/training.py", line 889, in _export_eval_result
is_the_final_export=is_the_final_export)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/estimator/exporter.py", line 232, in export
is_the_final_export)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/estimator/exporter.py", line 123, in export
strip_default_attrs=self._strip_default_attrs)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 613, in export_savedmodel
config=self.config)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 831, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/OpenNMT_tf-1.5.0-py2.7.egg/opennmt/models/model.py", line 128, in _model_fn
_, predictions = self._build(features, labels, params, mode, config=config)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/OpenNMT_tf-1.5.0-py2.7.egg/opennmt/models/sequence_to_sequence.py", line 185, in _build
return_alignment_history=True))
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/OpenNMT_tf-1.5.0-py2.7.egg/opennmt/decoders/self_attention_decoder.py", line 315, in dynamic_decode_and_search
eos_id=end_token)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/OpenNMT_tf-1.5.0-py2.7.egg/opennmt/utils/beam_search.py", line 557, in beam_search
back_prop=False)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 3224, in while_loop
result = loop_context.BuildLoop(cond, body, loop_vars, shape_invariants)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2956, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2893, in _BuildLoop
body_result = body(*packed_vars_for_body)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/OpenNMT_tf-1.5.0-py2.7.egg/opennmt/utils/beam_search.py", line 485, in inner_loop
i, alive_seq, alive_log_probs, states)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/OpenNMT_tf-1.5.0-py2.7.egg/opennmt/utils/beam_search.py", line 379, in grow_topk
flat_logits, flat_states = symbols_to_logits_fn(flat_ids, i, flat_states)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/OpenNMT_tf-1.5.0-py2.7.egg/opennmt/decoders/self_attention_decoder.py", line 101, in _impl
inputs = embedding_fn(ids[:, -1:])
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/OpenNMT_tf-1.5.0-py2.7.egg/opennmt/models/sequence_to_sequence.py", line 103, in _target_embedding_fn
return self.target_inputter.transform(ids, mode=mode)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/OpenNMT_tf-1.5.0-py2.7.egg/opennmt/inputters/text_inputter.py", line 390, in transform
trainable=self.trainable)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1317, in get_variable
constraint=constraint)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1079, in get_variable
constraint=constraint)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 425, in get_variable
constraint=constraint)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 394, in _true_getter
use_resource=use_resource, constraint=constraint)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 751, in _get_single_variable
"reuse=tf.AUTO_REUSE in VarScope?" % name)
ValueError: Variable transformer/decoder/w_embs does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE in VarScope?
```
This log starts at the end of the first epoch (including the BLEU evaluation on the validation set) and ends at the ValueError. Based on the error message, I edited line 103 in “/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/OpenNMT_tf-1.5.0-py2.7.egg/opennmt/models/sequence_to_sequence.py” from
```
def _scoped_target_embedding_fn(self, mode, scope):
  def _target_embedding_fn(ids):
    try:
      with tf.variable_scope(scope):
        return self.target_inputter.transform(ids, mode=mode)
    except ValueError:
      with tf.variable_scope(scope, reuse=True):
        return self.target_inputter.transform(ids, mode=mode)  # line 103
  return _target_embedding_fn
```
to
```
def _scoped_target_embedding_fn(self, mode, scope):
  def _target_embedding_fn(ids):
    try:
      with tf.variable_scope(scope):
        return self.target_inputter.transform(ids, mode=mode)
    except ValueError:
      with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):  # I edit here
        return self.target_inputter.transform(ids, mode=mode)
  return _target_embedding_fn
```
Training then failed again at the same place but with a different error message (the first epoch again completed successfully and it failed at the beginning of the second epoch). The message is:
```
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/OpenNMT_tf-1.1.0-py2.7.egg/opennmt/models/sequence_to_sequence.py", line 105, in _target_embedding_fn
return self.target_inputter.transform(ids, mode=mode)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/OpenNMT_tf-1.1.0-py2.7.egg/opennmt/inputters/text_inputter.py", line 390, in transform
trainable=self.trainable)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1297, in get_variable
constraint=constraint)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1093, in get_variable
constraint=constraint)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 439, in get_variable
constraint=constraint)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 408, in _true_getter
use_resource=use_resource, constraint=constraint)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 800, in _get_single_variable
use_resource=use_resource)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 2157, in variable
use_resource=use_resource)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 2147, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 2130, in default_variable_creator
constraint=constraint)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 235, in __init__
constraint=constraint)
File "/home/lgy/test/nmt_transformer_opennmt_tf-workspace/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 356, in _init_from_args
"initializer." % name)
ValueError: Initializer for variable transformer/decoder/w_embs_1/ is from inside a control-flow construct, such as a loop or conditional. When creating a variable inside a loop or conditional, use a lambda as the initializer.
```
I think this is a bug, but I am not sure whether it is in OpenNMT-tf or TensorFlow. I found an issue, https://github.com/tensorflow/tensorflow/issues/14729, that talks about a similar error in the TensorFlow GitHub.
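For context, here is a standalone sketch (not the project's code) of the behavior the fix relies on: `tf.constant_initializer` is a lazily-invoked initializer, so `tf.get_variable` can keep an explicit `shape` and the variable can be re-fetched under reuse, whereas passing a raw `tf.constant` tensor forbids `shape` and embeds the value in the surrounding control flow:
```python
import numpy as np
import tensorflow as tf

pretrained = np.random.rand(100, 16).astype(np.float32)

with tf.variable_scope("decoder"):
    w = tf.get_variable(
        "w_embs",
        shape=pretrained.shape,
        initializer=tf.constant_initializer(pretrained))

# Re-entering the scope with reuse returns the existing variable instead of
# failing with "Variable decoder/w_embs does not exist".
with tf.variable_scope("decoder", reuse=True):
    w_again = tf.get_variable("w_embs", shape=pretrained.shape)
print(w is w_again)  # True
```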
| 2018-06-15T07:15:17 |
||
OpenNMT/OpenNMT-tf | 172 | OpenNMT__OpenNMT-tf-172 | [
"171"
] | 20edc90734fcc2640c8284add5160cb8a94a1e78 | diff --git a/opennmt/decoders/rnn_decoder.py b/opennmt/decoders/rnn_decoder.py
--- a/opennmt/decoders/rnn_decoder.py
+++ b/opennmt/decoders/rnn_decoder.py
@@ -104,8 +104,10 @@ def decode(self,
sequence_length,
embedding,
sampling_probability)
+ fused_projection = False
else:
helper = tf.contrib.seq2seq.TrainingHelper(inputs, sequence_length)
+ fused_projection = True # With TrainingHelper, project all timesteps at once.
cell, initial_state = self._build_cell(
mode,
@@ -118,9 +120,6 @@ def decode(self,
if output_layer is None:
output_layer = build_output_layer(self.num_units, vocab_size, dtype=inputs.dtype)
- # With TrainingHelper, project all timesteps at once.
- fused_projection = isinstance(helper, tf.contrib.seq2seq.TrainingHelper)
-
decoder = tf.contrib.seq2seq.BasicDecoder(
cell,
helper,
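One reading of this fix (an interpretation of the diff, not stated in it): `ScheduledEmbeddingTrainingHelper` is a subclass of `TrainingHelper`, so the removed `isinstance` test also enabled fused projection for the sampling helper, letting it sample token ids from the unprojected decoder outputs. A quick check under TensorFlow 1.x:
```python
import tensorflow as tf

# The sampling helper inherits from TrainingHelper, so isinstance() could not
# distinguish the two cases; the fix sets the flag explicitly in each branch.
print(issubclass(
    tf.contrib.seq2seq.ScheduledEmbeddingTrainingHelper,
    tf.contrib.seq2seq.TrainingHelper))  # True
```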
| InvalidArgumentError when using scheduled sampling
Hi,
I am trying the ListenAttendSpell model with character-level outputs.
My train_labels_file looks like this (one space-separated sentence per line):
```
<char> <char> <char> <space> <char> <char>
<char> <char> <char> <char> <char> <space>
.
.
.
I <space> a m <space> t a l l
```
etc.
and I built the vocab file (length ~160) from it using the onmt-build-vocab script.
When running the model I get an InvalidArgumentError and this is the generated stack:
```
Traceback (most recent call last):
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1323, in _do_call
return fn(*args)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1302, in _run_fn
status, run_metadata)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 473, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0] = 319 is not in [0, 167)
[[Node: seq2seq/parallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrainingHelperNextInputs/cond/decoder_1/embedding_lookup = Gather[Tindices=DT_INT3
2, Tparams=DT_FLOAT, _class=["loc:@seq2seq/parallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrainingHelperNextInputs/cond/decoder_1/convert_gradient_to_tens
or_cc661786"], validate_indices=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](seq2seq/parallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrain
ingHelperNextInputs/cond/decoder_1/convert_gradient_to_tensor_cc661786, seq2seq/parallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrainingHelperNextInputs/co
nd/GatherNd)]]
[[Node: seq2seq/parallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrainingHelperNextInputs/cond/Greater/Switch/_525 = _HostRecv[client_terminated=fa
lse, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_7992_seq2seq/pa
rallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrainingHelperNextInputs/cond/Greater/Switch", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0
/device:GPU:0"](^_cloopseq2seq/parallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrainingHelperNextInputs/TrainingHelperNextInputs/GreaterEqual/_132)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/hltsrv0/rdessi/sw/anaconda3/bin/onmt-main", line 11, in <module>
load_entry_point('OpenNMT-tf==1.5.0', 'console_scripts', 'onmt-main')()
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/OpenNMT_tf-1.5.0-py3.6.egg/opennmt/bin/main.py", line 133, in main
runner.train_and_evaluate()
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/OpenNMT_tf-1.5.0-py3.6.egg/opennmt/runner.py", line 148, in train_and_evaluate
tf.estimator.train_and_evaluate(self._estimator, train_spec, eval_spec)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 432, in train_and_evaluate
executor.run_local()
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 611, in run_local
hooks=train_hooks)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 302, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 783, in _train_model
_, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 521, in run
run_metadata=run_metadata)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 892, in run
run_metadata=run_metadata)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 967, in run
raise six.reraise(*original_exc_info)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 952, in run
return self._sess.run(*args, **kwargs)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1024, in run
run_metadata=run_metadata)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 827, in run
return self._sess.run(*args, **kwargs)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 889, in run
run_metadata_ptr)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1120, in _run
feed_dict_tensor, options, run_metadata)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1317, in _do_run
options, run_metadata)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1336, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0] = 319 is not in [0, 167)
[[Node: seq2seq/parallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrainingHelperNextInputs/cond/decoder_1/embedding_lookup = Gather[Tindices=DT_INT32, Tparams=DT_FLOAT, _class=["loc:@seq2seq/parallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrainingHelperNextInputs/cond/decoder_1/convert_gradient_to_tensor_cc661786"], validate_indices=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](seq2seq/parallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrainingHelperNextInputs/cond/decoder_1/convert_gradient_to_tensor_cc661786, seq2seq/parallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrainingHelperNextInputs/cond/GatherNd)]]
[[Node: seq2seq/parallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrainingHelperNextInputs/cond/Greater/Switch/_525 = _HostRecv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_7992_seq2seq/parallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrainingHelperNextInputs/cond/Greater/Switch", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](^_cloopseq2seq/parallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrainingHelperNextInputs/TrainingHelperNextInputs/GreaterEqual/_132)]]
Caused by op 'seq2seq/parallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrainingHelperNextInputs/cond/decoder_1/embedding_lookup', defined at:
File "/hltsrv0/rdessi/sw/anaconda3/bin/onmt-main", line 11, in <module>
load_entry_point('OpenNMT-tf==1.5.0', 'console_scripts', 'onmt-main')()
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/OpenNMT_tf-1.5.0-py3.6.egg/opennmt/bin/main.py", line 133, in main
runner.train_and_evaluate()
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/OpenNMT_tf-1.5.0-py3.6.egg/opennmt/runner.py", line 148, in train_and_evaluate
tf.estimator.train_and_evaluate(self._estimator, train_spec, eval_spec)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 432, in train_and_evaluate
executor.run_local()
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 611, in run_local
hooks=train_hooks)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 302, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 711, in _train_model
features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 694, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/OpenNMT_tf-1.5.0-py3.6.egg/opennmt/models/model.py", line 103, in _model_fn
_loss_op, features_shards, labels_shards, params, mode, config)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/OpenNMT_tf-1.5.0-py3.6.egg/opennmt/utils/parallel.py", line 148, in __call__
outputs.append(funs[i](*args[i], **kwargs[i]))
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/OpenNMT_tf-1.5.0-py3.6.egg/opennmt/models/model.py", line 66, in _loss_op
logits, _ = self._build(features, labels, params, mode, config=config)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/OpenNMT_tf-1.5.0-py3.6.egg/opennmt/models/sequence_to_sequence.py", line 144, in _build
memory_sequence_length=encoder_sequence_length)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/OpenNMT_tf-1.5.0-py3.6.egg/opennmt/decoders/rnn_decoder.py", line 130, in decode
outputs, state, length = tf.contrib.seq2seq.dynamic_decode(decoder)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/seq2seq/python/ops/decoder.py", line 286, in dynamic_decode
swap_memory=swap_memory)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2816, in while_loop
result = loop_context.BuildLoop(cond, body, loop_vars, shape_invariants)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2640, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2590, in _BuildLoop
body_result = body(*packed_vars_for_body)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/seq2seq/python/ops/decoder.py", line 234, in body
decoder_finished) = decoder.step(time, inputs, state)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/seq2seq/python/ops/basic_decoder.py", line 147, in step
sample_ids=sample_ids)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/seq2seq/python/ops/helper.py", line 342, in next_inputs
all_finished, lambda: base_next_inputs, maybe_sample)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 316, in new_func
return func(*args, **kwargs)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 1864, in cond
orig_res_f, res_f = context_f.BuildCondBranch(false_fn)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 1725, in BuildCondBranch
original_result = fn()
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/seq2seq/python/ops/helper.py", line 331, in maybe_sample
sampled_next_inputs = self._embedding_fn(sample_ids_sampling)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/OpenNMT_tf-1.5.0-py3.6.egg/opennmt/models/sequence_to_sequence.py", line 103, in _target_embedding_fn
return self.target_inputter.transform(ids, mode=mode)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/OpenNMT_tf-1.5.0-py3.6.egg/opennmt/inputters/text_inputter.py", line 392, in transform
outputs = embedding_lookup(embeddings, inputs)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/OpenNMT_tf-1.5.0-py3.6.egg/opennmt/layers/common.py", line 30, in embedding_lookup
return tf.nn.embedding_lookup(params, ids)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/embedding_ops.py", line 328, in embedding_lookup
transform_fn=None)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/embedding_ops.py", line 150, in _embedding_lookup_and_transform
result = _clip(_gather(params[0], ids, name=name), ids, max_norm)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/embedding_ops.py", line 54, in _gather
return array_ops.gather(params, ids, name=name)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 2486, in gather
params, indices, validate_indices=validate_indices, name=name)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 1834, in gather
validate_indices=validate_indices, name=name)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
op_def=op_def)
File "/hltsrv0/rdessi/sw/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): indices[0] = 319 is not in [0, 167)
[[Node: seq2seq/parallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrainingHelperNextInputs/cond/decoder_1/embedding_lookup = Gather[Tindices=DT_INT32, Tparams=DT_FLOAT, _class=["loc:@seq2seq/parallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrainingHelperNextInputs/cond/decoder_1/convert_gradient_to_tensor_cc661786"], validate_indices=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](seq2seq/parallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrainingHelperNextInputs/cond/decoder_1/convert_gradient_to_tensor_cc661786, seq2seq/parallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrainingHelperNextInputs/cond/GatherNd)]]
[[Node: seq2seq/parallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrainingHelperNextInputs/cond/Greater/Switch/_525 = _HostRecv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_7992_seq2seq/parallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrainingHelperNextInputs/cond/Greater/Switch", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](^_cloopseq2seq/parallel_0/seq2seq/decoder/decoder_1/while/BasicDecoderStep/ScheduledEmbeddingTrainingHelperNextInputs/TrainingHelperNextInputs/GreaterEqual/_132)]]
```
From what I understand, it seems like the WordEmbedder is querying the lookup table with a value that is outside the vocabulary range:
`InvalidArgumentError (see above for traceback): indices[0] = 319 is not in [0, 167)
`
Is that correct?
Could it be fixed with <unk> token? I am not sure how the model deals with OOV internally.
Thanks in advance!
| Hello,
Tokens not present in the vocabulary are mapped to a single index, the last entry in the lookup table.
Is the error happening during the initial training, or after continuing from a checkpoint? Could you double check the number of lines in the vocabulary file you set in the training configuration?
It is happening during initial training. I just ran the command again with a different vocabulary of length 82 and the error I get (with the same traceback) is:
`InvalidArgumentError (see above for traceback): indices[0] = 445 is not in [0, 83)
`
Could you share the complete training configuration and the command line you used? Thanks.
I ran:
`onmt-main train_and_eval --model_type ListenAttendSpell --config ../exp_code/config.yml`
```
model_dir: LAS_default
data:
  train_features_file: /hltsrv3/rdessi/train.tfrecords
  train_labels_file: /hltsrv3/rdessi/train.de
  eval_features_file: /hltsrv3/rdessi/dev.tfrecords
  eval_labels_file: /hltsrv3/rdessi/dev.de
  target_words_vocabulary: /hltsrv3/rdessi/vocab.de
params:
  optimizer: GradientDescentOptimizer
  learning_rate: 0.2
  param_init: 0.1
  clip_gradients: 1.0
  average_loss_in_time: true
  decay_type: exponential_decay
  decay_rate: 0.98
  decay_steps: 2674
  decay_step_duration: 1
  start_decay_steps: 40106
  minimum_learning_rate: 0.00001
  scheduled_sampling_type: constant
  scheduled_sampling_read_probability: 0.9
  scheduled_sampling_k: 0
  label_smoothing: 0.1
  beam_width: 5
  length_penalty: 0.2
  maximum_iterations: 200
  replace_unknown_target: false
train:
  batch_size: 64
  batch_type: examples
  save_checkpoints_steps: 2674
  keep_checkpoint_max: 8
  save_summary_steps: 100
  train_steps: 1000000
  single_pass: false
  bucket_width: 1
  num_threads: 4
  sample_buffer_size: 500000
  prefetch_buffer_size: null
  average_last_checkpoints: 8
```
Thank you. I can reproduce the issue, which is related to scheduled sampling. If you want to continue your experiments, you can disable scheduled sampling by setting:
```yaml
params:
  scheduled_sampling_read_probability: 1
```
It seems to be working fine now. Thanks!
In the paper, a 10% probability of sampling from previous outputs rather than from the ground truth is used; how could I replicate that?
Your settings were correct but scheduled sampling appears to be broken. I'm pushing a fix. | 2018-07-12T06:57:22 |
|
OpenNMT/OpenNMT-tf | 173 | OpenNMT__OpenNMT-tf-173 | [
"161"
] | 1f9509c0fdf40e5db601c1feee2af07e8c86abfb | diff --git a/opennmt/utils/checkpoint.py b/opennmt/utils/checkpoint.py
--- a/opennmt/utils/checkpoint.py
+++ b/opennmt/utils/checkpoint.py
@@ -191,7 +191,7 @@ def average_checkpoints(model_dir, output_dir, max_count=8, session_config=None)
avg_values = {}
for name, shape in var_list:
if not name.startswith("global_step"):
- avg_values[name] = np.zeros(shape)
+ avg_values[name] = np.zeros(shape, dtype=np.float32)
for checkpoint_path in checkpoints_path:
tf.logging.info("Loading checkpoint %s" % checkpoint_path)
| DataLossError when running inference with a model from checkpoint averaging
Hi,
tensorflow-gpu 1.8.0
opennmt-tf 1.5.0
The model is trained with the same parameters as [wmt_ende.yml](https://github.com/OpenNMT/OpenNMT-tf/blob/master/scripts/wmt/config/wmt_ende.yml) on 4 x 1080Ti GPUs.
After averaging the checkpoints (max_count=5),
```bash
onmt-average-checkpoints --max_count=5 --model_dir=model/ --output_dir=model/avg/
```
I run inference with the averaged model, but it returns an error:
```bash
onmt-main infer --model_type Transformer --config train_enfr.yml --features_file newstest2014-fren-src.en.sp --predictions_file newstest2014-fren-src.en.sp.pred --checkpoint_path /model/avg/
```
```bash
DataLossError (see above for traceback): Invalid size in bundle entry: key transformer/decoder/LayerNorm/beta; stored size 4096; expected size 2048
[[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_INT64, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
[[Node: save/RestoreV2/_301 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_306_save/RestoreV2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
```
But there is **no error** if I run inference with each individual checkpoint:
```bash
onmt-main infer --model_type Transformer --config train_enfr.yml --features_file newstest2014-fren-src.en.sp --predictions_file newstest2014-fren-src.en.sp.pred --checkpoint_path /model/model.ckpt-345921
```
I wonder if something goes wrong when I average the checkpoints?
Thanks!
| Hello,
I did not succeed in reproducing the issue. Are you running these commands on the same server?
Thanks for your reply,
I run all the commands on the same server. I trained the model with the same parameters but on the WMT14 en-fr data.
The strange thing is that every individual checkpoint works but the averaged one doesn't.
Here is a screenshot of my model directory:
<img width="505" alt="screen shot 2018-07-02 at 19 49 21" src="https://user-images.githubusercontent.com/10563679/42178544-1cf16fde-7e31-11e8-9ade-cac9d75eeb80.png">
Did you try re-running the `average-checkpoints` script? Seems like it saved the wrong `dtype` for a variable.
If you can, I suggest trying the code on the master branch. There were some changes to checkpoint management that could help in your case.
Yes, I re-ran it and got the same error.
I have installed the latest code with ***python setup.py install***.
I re-ran the code with the same data as in [this script](https://github.com/OpenNMT/OpenNMT-tf/blob/master/scripts/wmt/README.md): I downloaded the data from the lazy-run part, then trained and evaluated the model with the provided bash script, but I still get the same error when running inference with the averaged checkpoint:
```bash
WARNING:tensorflow:You provided a model configuration but a checkpoint already exists. The model configuration must define the same model as the one used for the initial training. However, you can change non structural values like dropout.
2018-07-03 14:37:57.691948: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-07-03 14:38:00.441040: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:05:00.0
totalMemory: 10.92GiB freeMemory: 10.76GiB
2018-07-03 14:38:00.601709: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 1 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:06:00.0
totalMemory: 10.92GiB freeMemory: 10.76GiB
2018-07-03 14:38:00.778173: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 2 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:09:00.0
totalMemory: 10.92GiB freeMemory: 10.76GiB
2018-07-03 14:38:00.953637: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 3 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:0a:00.0
totalMemory: 10.92GiB freeMemory: 10.76GiB
2018-07-03 14:38:00.958714: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0, 1, 2, 3
2018-07-03 14:38:01.741761: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-07-03 14:38:01.741793: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0 1 2 3
2018-07-03 14:38:01.741800: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N Y Y Y
2018-07-03 14:38:01.741804: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 1: Y N Y Y
2018-07-03 14:38:01.741809: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 2: Y Y N Y
2018-07-03 14:38:01.741813: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 3: Y Y Y N
2018-07-03 14:38:01.742552: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10412 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:05:00.0, compute capability: 6.1)
2018-07-03 14:38:01.843134: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10413 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:06:00.0, compute capability: 6.1)
2018-07-03 14:38:01.942467: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 10413 MB memory) -> physical GPU (device: 2, name: GeForce GTX 1080 Ti, pci bus id: 0000:09:00.0, compute capability: 6.1)
2018-07-03 14:38:02.041326: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 10413 MB memory) -> physical GPU (device: 3, name: GeForce GTX 1080 Ti, pci bus id: 0000:0a:00.0, compute capability: 6.1)
2018-07-03 14:38:02.142136: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0, 1, 2, 3
2018-07-03 14:38:02.142277: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-07-03 14:38:02.142287: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0 1 2 3
2018-07-03 14:38:02.142294: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N Y Y Y
2018-07-03 14:38:02.142300: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 1: Y N Y Y
2018-07-03 14:38:02.142304: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 2: Y Y N Y
2018-07-03 14:38:02.142310: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 3: Y Y Y N
2018-07-03 14:38:02.142658: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/device:GPU:0 with 10412 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:05:00.0, compute capability: 6.1)
2018-07-03 14:38:02.142747: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/device:GPU:1 with 10413 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:06:00.0, compute capability: 6.1)
2018-07-03 14:38:02.142848: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/device:GPU:2 with 10413 MB memory) -> physical GPU (device: 2, name: GeForce GTX 1080 Ti, pci bus id: 0000:09:00.0, compute capability: 6.1)
2018-07-03 14:38:02.142959: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/device:GPU:3 with 10413 MB memory) -> physical GPU (device: 3, name: GeForce GTX 1080 Ti, pci bus id: 0000:0a:00.0, compute capability: 6.1)
INFO:tensorflow:Using config: {'_model_dir': 'wmt_ende_transformer_4gpu_lr2_ws8000_dur2_0.998', '_tf_random_seed': None, '_save_summary_steps': 50, '_save_checkpoints_steps': 1000, '_save_checkpoints_secs': None, '_session_config': gpu_options {
}
allow_soft_placement: true
, '_keep_checkpoint_max': 10, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 50, '_train_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f1c8befe0b8>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
2018-07-03 14:38:06.116385: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0, 1, 2, 3
2018-07-03 14:38:06.116505: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-07-03 14:38:06.116515: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0 1 2 3
2018-07-03 14:38:06.116524: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N Y Y Y
2018-07-03 14:38:06.116531: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 1: Y N Y Y
2018-07-03 14:38:06.116539: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 2: Y Y N Y
2018-07-03 14:38:06.116543: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 3: Y Y Y N
2018-07-03 14:38:06.116864: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10412 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:05:00.0, compute capability: 6.1)
2018-07-03 14:38:06.116954: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10413 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:06:00.0, compute capability: 6.1)
2018-07-03 14:38:06.117021: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 10413 MB memory) -> physical GPU (device: 2, name: GeForce GTX 1080 Ti, pci bus id: 0000:09:00.0, compute capability: 6.1)
2018-07-03 14:38:06.117084: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 10413 MB memory) -> physical GPU (device: 3, name: GeForce GTX 1080 Ti, pci bus id: 0000:0a:00.0, compute capability: 6.1)
INFO:tensorflow:Restoring parameters from wmt_ende_transformer_4gpu_lr2_ws8000_dur2_0.998/avg/model.ckpt-8934
2018-07-03 14:38:06.270786: W tensorflow/core/framework/op_kernel.cc:1318] OP_REQUIRES failed at save_restore_v2_ops.cc:184 : Data loss: Invalid size in bundle entry: key transformer/decoder/LayerNorm/beta; stored size 4096; expected size 2048
Traceback (most recent call last):
File "/home/XXXX/anaconda3/envs/dlnlp/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1322, in _do_call
return fn(*args)
File "/home/XXXX/anaconda3/envs/dlnlp/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/XXXX/anaconda3/envs/dlnlp/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.DataLossError: Invalid size in bundle entry: key transformer/decoder/LayerNorm/beta; stored size 4096; expected size 2048
[[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_INT64, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
[[Node: save/RestoreV2/_301 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_306_save/RestoreV2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
```
Thank you for the details. I'm running out of ideas at the moment but can you also share the Numpy version that is installed and the output of:
```bash
python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
```
just to make sure.
Thanks for your help,
My numpy version is **1.14.0**
and the output of the command is:
```bash
/home/XXXX/anaconda3/envs/dlnlp/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
v1.8.0-0-g93bc2e2072 1.8.0
```
@silverguo I have the same problem; have you solved it? | 2018-07-13T07:17:49 |
|
OpenNMT/OpenNMT-tf | 189 | OpenNMT__OpenNMT-tf-189 | [
"187"
] | 12593bbf69148c19d6e679e1ad1732567b902be6 | diff --git a/opennmt/bin/main.py b/opennmt/bin/main.py
--- a/opennmt/bin/main.py
+++ b/opennmt/bin/main.py
@@ -27,6 +27,10 @@ def _prefix_paths(prefix, paths):
for key, path in six.iteritems(paths):
paths[key] = _prefix_paths(prefix, path)
return paths
+ elif isinstance(paths, list):
+ for i, path in enumerate(paths):
+ paths[i] = _prefix_paths(prefix, path)
+ return paths
else:
path = paths
new_path = os.path.join(prefix, path)
| Crash loading parallel inputs with --data_dir
I found the following issue when following the tutorial and trying:
```yaml
data:
  train_features_file:
    - train_source_1.records
    - train_source_2.txt
    - train_source_3.txt
```
In main.py, at the method `_prefix_paths`, the line `new_path = os.path.join(prefix, path)` will crash because `paths` is a list and `os.path.join` cannot be applied to a list.
The fix should simply check the type of `paths` and iterate, as sketched below.
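A two-line standalone reproduction of the crash:
```python
import os

paths = ["train_source_1.records", "train_source_2.txt"]
os.path.join("data", paths)  # raises TypeError on Python 3: join expects str, not list
```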
| Thanks for reporting. As you identified the fix, do you want to send a PR?
Sure, tomorrow I will be back to the office and I can send it to you if you tell me how to do it.
I'm not sure how to send you the code.
It's just a matter of substituting, inside main.py,
`_prefix_paths` with this method:
```
def _prefix_paths(prefix, paths):
  """Recursively prefix paths.

  Args:
    prefix: The prefix to apply.
    paths: A dict or list of relative paths.

  Returns:
    The updated dict or list.
  """
  if isinstance(paths, dict):
    for key, path in six.iteritems(paths):
      paths[key] = _prefix_paths(prefix, path)
    return paths
  elif isinstance(paths, list):
    fixedPaths = list()
    for path in paths:
      fixedPaths.append(_prefix_paths(prefix, path))
    return fixedPaths
  else:
    path = paths
    new_path = os.path.join(prefix, path)
    if os.path.isfile(new_path):
      return new_path
    else:
      return path
``` | 2018-07-31T08:48:37 |
|
OpenNMT/OpenNMT-tf | 222 | OpenNMT__OpenNMT-tf-222 | [
"221"
] | ae0ada93a652c02793de3b481e20315716ea4f23 | diff --git a/opennmt/layers/bridge.py b/opennmt/layers/bridge.py
--- a/opennmt/layers/bridge.py
+++ b/opennmt/layers/bridge.py
@@ -25,7 +25,11 @@ def assert_state_is_compatible(expected_state, state):
for x, y in zip(expected_state_flat, state_flat):
if tf.contrib.framework.is_tensor(x):
- tf.contrib.framework.with_same_shape(x, y)
+ expected_depth = x.get_shape().as_list()[-1]
+ depth = y.get_shape().as_list()[-1]
+ if depth != expected_depth:
+ raise ValueError("Tensor %s in state has shape %s which is incompatible "
+ "with the target shape %s" % (y.name, y.shape, x.shape))
@six.add_metaclass(abc.ABCMeta)
| assert_state_is_compatible() cannot detect dimension difference between encoder_state and decoder_zero_state when encoder and decoder dimensions are not the same in NMTSmall model
I just followed the instructions on the page [http://opennmt.net/OpenNMT-tf/quickstart.html](http://opennmt.net/OpenNMT-tf/quickstart.html) and played around a little bit with the NMTSmall model by setting a different `num_units` value for the `UnidirectionalRNNEncoder`, say `256`, which is different from the `512` for the `AttentionalRNNDecoder`.
This line
https://github.com/OpenNMT/OpenNMT-tf/blob/ae0ada93a652c02793de3b481e20315716ea4f23/opennmt/layers/bridge.py#L56
in the `CopyBridge` did not throw any error, even though the `encoder_state` and `decoder_zero_state` do not have the same dimensions, `256` vs `512`.
It is probably natural for someone to think of using the `DenseBridge` when the dimensions are set differently. However, the `CopyBridge` should throw an error in such misuse cases, instead of letting one figure it out from an error message like the following:
`ValueError: Dimensions must be equal, but are 1280 and 1536 for 'seq2seq/parallel_0/seq2seq/decoder_1/decoder/while/BasicDecoderStep/decoder/attention_wrapper/attention_wrapper/multi_rnn_cell/cell_0/lstm_cell/MatMul' (op: 'MatMul') with input shapes: [?,1280], [1536,2048].`
Can anyone please explain why the
https://github.com/OpenNMT/OpenNMT-tf/blob/ae0ada93a652c02793de3b481e20315716ea4f23/opennmt/layers/bridge.py#L28
passed without an issue?
Thanks!
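For anyone hitting this, a hedged sketch of the intended usage when the encoder and decoder widths differ (the argument names below are assumed from the OpenNMT-tf v1 decoder API, not taken from this issue):
```python
import opennmt as onmt

# Project the 256-unit encoder state to the 512-unit decoder size instead of
# copying it, by passing a DenseBridge to the decoder.
decoder = onmt.decoders.AttentionalRNNDecoder(
    num_layers=2,
    num_units=512,
    bridge=onmt.layers.bridge.DenseBridge())
```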
| Thanks for reporting.
Looks like `tf.contrib.framework.with_same_shape(x, y)` is incorrectly used here as it does not throw directly but returns `y` with an [assert op](https://www.tensorflow.org/api_docs/python/tf/Assert) dependency. I think we could remove the use of this function and just check manually that the depth dimension is the same. | 2018-10-15T08:52:04 |
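A minimal sketch of that manual check, mirroring the patch above; unlike `with_same_shape`, it raises at graph-construction time:
```python
import tensorflow as tf

def assert_same_depth(expected, actual):
    # Compare the statically-known last dimension of two state tensors.
    expected_depth = expected.get_shape().as_list()[-1]
    depth = actual.get_shape().as_list()[-1]
    if depth != expected_depth:
        raise ValueError("Tensor %s has shape %s which is incompatible with "
                         "the target shape %s" % (actual.name, actual.shape, expected.shape))

x = tf.zeros([4, 512])
y = tf.zeros([4, 256])
assert_same_depth(x, y)  # raises ValueError immediately
```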
|
OpenNMT/OpenNMT-tf | 234 | OpenNMT__OpenNMT-tf-234 | [
"228"
] | a2f2549a3e0bae50f3ee3522285b2b4bed19e2a3 | diff --git a/opennmt/bin/convert_checkpoint.py b/opennmt/bin/convert_checkpoint.py
new file mode 100644
--- /dev/null
+++ b/opennmt/bin/convert_checkpoint.py
@@ -0,0 +1,44 @@
+"""Script to convert checkpoint variables from one data type to another."""
+
+import argparse
+
+import tensorflow as tf
+
+from opennmt.utils import checkpoint
+
+
+def main():
+ tf.logging.set_verbosity(tf.logging.INFO)
+
+ parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+ parser.add_argument("--model_dir", default=None,
+ help="The path to the model directory.")
+ parser.add_argument("--checkpoint_path", default=None,
+ help="The path to the checkpoint to convert.")
+ parser.add_argument("--output_dir", required=True,
+ help="The output directory where the updated checkpoint will be saved.")
+ parser.add_argument("--target_dtype", required=True,
+ help="Target data type (e.g. float16 or float32).")
+ parser.add_argument("--source_dtype", default=None,
+ help="Source data type (e.g. float16 or float32, inferred if not set).")
+ args = parser.parse_args()
+ if args.model_dir is None and args.checkpoint_path is None:
+ raise ValueError("One of --checkpoint_path and --model_dir should be set")
+ checkpoint_path = args.checkpoint_path
+ if checkpoint_path is None:
+ checkpoint_path = tf.train.latest_checkpoint(args.model_dir)
+ target_dtype = tf.as_dtype(args.target_dtype)
+ if args.source_dtype is None:
+ source_dtype = tf.float32 if target_dtype == tf.float16 else tf.float16
+ else:
+ source_dtype = tf.as_dtype(args.source_dtype)
+ checkpoint.convert_checkpoint(
+ checkpoint_path,
+ args.output_dir,
+ source_dtype,
+ target_dtype,
+ session_config=tf.ConfigProto(device_count={"GPU": 0}))
+
+
+if __name__ == "__main__":
+ main()
diff --git a/opennmt/utils/checkpoint.py b/opennmt/utils/checkpoint.py
--- a/opennmt/utils/checkpoint.py
+++ b/opennmt/utils/checkpoint.py
@@ -54,8 +54,9 @@ def _update_vocabulary_variables(variables, current_vocab_path, new_vocab_path,
tf.logging.debug("Updating variable %s" % name)
variables[name] = _update_vocabulary_variable(tensor, current_size, mapping)
-def _save_new_variables(variables, output_dir, base_checkpoint_path, session_config=None):
+def _create_checkpoint_from_variables(variables, output_dir, latest_step=None, session_config=None):
if "global_step" in variables:
+ latest_step = variables["global_step"]
del variables["global_step"]
tf_vars = []
for name, value in six.iteritems(variables):
@@ -72,7 +73,6 @@ def _save_new_variables(variables, output_dir, base_checkpoint_path, session_con
placeholders = [tf.placeholder(v.dtype, shape=v.shape) for v in tf_vars]
assign_ops = [tf.assign(v, p) for (v, p) in zip(tf_vars, placeholders)]
- latest_step = int(base_checkpoint_path.split("-")[-1])
out_base_file = os.path.join(output_dir, "model.ckpt")
global_step = tf.get_variable(
"global_step",
@@ -89,6 +89,52 @@ def _save_new_variables(variables, output_dir, base_checkpoint_path, session_con
return output_dir
+def get_checkpoint_variables(checkpoint_path):
+ """Returns variables included in a checkpoint.
+
+ Args:
+ checkpoint_path: Path to the checkpoint.
+
+ Returns:
+ A dictionary mapping variables name to value.
+ """
+ reader = tf.train.load_checkpoint(checkpoint_path)
+ return {
+ name:reader.get_tensor(name)
+ for name in six.iterkeys(reader.get_variable_to_shape_map())}
+
+def convert_checkpoint(checkpoint_path,
+ output_dir,
+ source_dtype,
+ target_type,
+ session_config=None):
+ """Converts checkpoint variables from one dtype to another.
+
+ Args:
+ checkpoint_path: The path to the checkpoint to convert.
+ output_dir: The directory that will contain the converted checkpoint.
+ source_dtype: The data type to convert from.
+ target_dtype: The data type to convert to.
+ session_config: Optional configuration to use when creating the session.
+
+ Returns:
+ The path to the directory containing the converted checkpoint.
+
+ Raises:
+ ValueError: if :obj:`output_dir` points to the same directory as
+ :obj:`checkpoint_path`.
+ """
+ if os.path.dirname(checkpoint_path) == output_dir:
+ raise ValueError("Checkpoint and output directory must be different")
+ variables = get_checkpoint_variables(checkpoint_path)
+ for name, value in six.iteritems(variables):
+ if not name.startswith("optim") and tf.as_dtype(value.dtype) == source_dtype:
+ variables[name] = value.astype(target_type.as_numpy_dtype())
+ return _create_checkpoint_from_variables(
+ variables,
+ output_dir,
+ session_config=session_config)
+
def update_vocab(model_dir,
output_dir,
current_src_vocab,
@@ -130,17 +176,14 @@ def update_vocab(model_dir,
return model_dir
checkpoint_path = tf.train.latest_checkpoint(model_dir)
tf.logging.info("Updating vocabulary related variables in checkpoint %s" % checkpoint_path)
- reader = tf.train.load_checkpoint(checkpoint_path)
- variable_map = reader.get_variable_to_shape_map()
- variable_value = {name:reader.get_tensor(name) for name, _ in six.iteritems(variable_map)}
+ variable_value = get_checkpoint_variables(checkpoint_path)
if new_src_vocab is not None:
_update_vocabulary_variables(variable_value, current_src_vocab, new_src_vocab, "encoder", mode)
if new_tgt_vocab is not None:
_update_vocabulary_variables(variable_value, current_tgt_vocab, new_tgt_vocab, "decoder", mode)
- return _save_new_variables(
+ return _create_checkpoint_from_variables(
variable_value,
output_dir,
- checkpoint_path,
session_config=session_config)
@@ -199,8 +242,9 @@ def average_checkpoints(model_dir, output_dir, max_count=8, session_config=None)
for name in avg_values:
avg_values[name] += reader.get_tensor(name) / num_checkpoints
- return _save_new_variables(
+ latest_step = int(checkpoints_path[-1].split("-")[-1])
+ return _create_checkpoint_from_variables(
avg_values,
output_dir,
- checkpoints_path[-1],
+ latest_step=latest_step,
session_config=session_config)
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -55,6 +55,7 @@
"onmt-ark-to-records=opennmt.bin.ark_to_records:main",
"onmt-average-checkpoints=opennmt.bin.average_checkpoints:main",
"onmt-build-vocab=opennmt.bin.build_vocab:main",
+ "onmt-convert-checkpoint=opennmt.bin.convert_checkpoint:main",
"onmt-detokenize-text=opennmt.bin.detokenize_text:main",
"onmt-main=opennmt.bin.main:main",
"onmt-merge-config=opennmt.bin.merge_config:main",
| diff --git a/opennmt/tests/checkpoint_test.py b/opennmt/tests/checkpoint_test.py
--- a/opennmt/tests/checkpoint_test.py
+++ b/opennmt/tests/checkpoint_test.py
@@ -49,14 +49,17 @@ def _generateCheckpoint(self,
with tf.Graph().as_default() as graph:
for name, value in six.iteritems(variables):
if isinstance(value, tuple):
+ dtype = None
initializer = tf.random_uniform_initializer()
shape = value
else:
- initializer = tf.constant_initializer(value, dtype=tf.as_dtype(value.dtype))
+ dtype = tf.as_dtype(value.dtype)
+ initializer = tf.constant_initializer(value, dtype=dtype)
shape = value.shape
_ = tf.get_variable(
name,
shape=shape,
+ dtype=dtype,
initializer=initializer)
global_step = tf.get_variable(
"global_step",
@@ -70,14 +73,6 @@ def _generateCheckpoint(self,
saver.save(sess, os.path.join(model_dir, prefix), global_step=global_step)
return saver.last_checkpoints[0], time.time()
- def _readCheckpoint(model_dir, checkpoint_path=None):
- if checkpoint_path is None:
- checkpoint_path = tf.train.latest_checkpoint(model_dir)
- reader = tf.train.load_checkpoint(checkpoint_path)
- variable_map = reader.get_variable_to_shape_map()
- variables = {name:reader.get_tensor(name) for name, _ in six.iteritems(variable_map)}
- return variables
-
def testCheckpointAveraging(self):
model_dir = os.path.join(self.get_temp_dir(), "ckpt")
os.makedirs(model_dir)
@@ -88,10 +83,27 @@ def testCheckpointAveraging(self):
model_dir, 20, {"x": np.ones((2, 3), dtype=np.float32)}, last_checkpoints=checkpoints))
avg_dir = os.path.join(model_dir, "avg")
checkpoint.average_checkpoints(model_dir, avg_dir)
- avg_var = self._readCheckpoint(avg_dir)
+ avg_var = checkpoint.get_checkpoint_variables(avg_dir)
self.assertEqual(avg_var["global_step"], 20)
self.assertAllEqual(avg_var["x"], np.full((2, 3), 0.5, dtype=np.float32))
+ def testCheckpointDTypeConversion(self):
+ model_dir = os.path.join(self.get_temp_dir(), "ckpt-fp32")
+ os.makedirs(model_dir)
+ variables = {
+ "x": np.ones((2, 3), dtype=np.float32),
+ "optim/x": np.ones((2, 3), dtype=np.float32),
+ "counter": np.int64(42)
+ }
+ checkpoint_path, _ = self._generateCheckpoint(model_dir, 10, variables)
+ half_dir = os.path.join(model_dir, "fp16")
+ checkpoint.convert_checkpoint(checkpoint_path, half_dir, tf.float32, tf.float16)
+ half_var = checkpoint.get_checkpoint_variables(half_dir)
+ self.assertEqual(half_var["global_step"], 10)
+ self.assertEqual(half_var["x"].dtype, np.float16)
+ self.assertEqual(half_var["optim/x"].dtype, np.float32)
+ self.assertEqual(half_var["counter"].dtype, np.int64)
+
if __name__ == "__main__":
tf.test.main()
| Checkpoint conversion between float16 and float32
Users should be able to convert a model trained with `float16` to `float32` (or vice versa).
* Add an API endpoint in `opennmt.utils.checkpoint`
* Add a CLI tool in `opennmt.bin.convert_checkpoint`
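Once added, a minimal usage sketch of the API (paths below are placeholders):
```python
import tensorflow as tf

from opennmt.utils import checkpoint

# Convert a float32 checkpoint to float16. Per the patch, variables under
# the "optim" scope are left untouched.
checkpoint.convert_checkpoint(
    "run/model.ckpt-50000",  # placeholder checkpoint path
    "run/fp16",              # placeholder output directory
    tf.float32,              # source dtype
    tf.float16)              # target dtype
```
The same conversion is exposed on the command line as `onmt-convert-checkpoint` (see the `setup.py` entry above).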
| 2018-10-17T15:29:11 |
|
OpenNMT/OpenNMT-tf | 267 | OpenNMT__OpenNMT-tf-267 | [
"266"
] | d95287e937ca716c930c4c7d6ea874288450ea9f | diff --git a/opennmt/decoders/decoder.py b/opennmt/decoders/decoder.py
--- a/opennmt/decoders/decoder.py
+++ b/opennmt/decoders/decoder.py
@@ -305,10 +305,6 @@ def dynamic_decode_and_search(self,
if memory is None:
raise ValueError("dtype argument is required when no memory is set")
dtype = tf.contrib.framework.nest.flatten(memory)[0].dtype
- if output_layer is None:
- if vocab_size is None:
- raise ValueError("vocab_size must be known when the output_layer is not set")
- output_layer = build_output_layer(self.output_size, vocab_size, dtype=dtype)
if beam_width > 1:
if initial_state is not None:
@@ -327,6 +323,10 @@ def dynamic_decode_and_search(self,
memory=memory,
memory_sequence_length=memory_sequence_length,
dtype=dtype)
+ if output_layer is None:
+ if vocab_size is None:
+ raise ValueError("vocab_size must be known when the output_layer is not set")
+ output_layer = build_output_layer(self.output_size, vocab_size, dtype=dtype)
state = {"decoder": initial_state}
if self.support_alignment_history and not isinstance(memory, (tuple, list)):
| diff --git a/opennmt/tests/decoder_test.py b/opennmt/tests/decoder_test.py
--- a/opennmt/tests/decoder_test.py
+++ b/opennmt/tests/decoder_test.py
@@ -1,4 +1,5 @@
import math
+import os
import tensorflow as tf
import numpy as np
@@ -6,6 +7,7 @@
from opennmt import decoders
from opennmt.decoders import decoder
from opennmt.utils import beam_search
+from opennmt.layers import bridge
class DecoderTest(tf.test.TestCase):
@@ -43,7 +45,7 @@ def testSamplingProbability(self):
self.assertAlmostEqual(
1.0 - (1.0 / (1.0 + math.exp(5.0 / 1.0))), sess.run(inv_sig_sample_prob))
- def _testDecoderTraining(self, decoder, dtype=tf.float32):
+ def _testDecoderTraining(self, decoder, initial_state_fn=None, dtype=tf.float32):
batch_size = 4
vocab_size = 10
time_dim = 5
@@ -58,10 +60,15 @@ def _testDecoderTraining(self, decoder, dtype=tf.float32):
memory = tf.placeholder_with_default(
np.random.randn(batch_size, memory_time, depth).astype(dtype.as_numpy_dtype()),
shape=(None, None, depth))
+ if initial_state_fn is not None:
+ initial_state = initial_state_fn(tf.shape(memory)[0], dtype)
+ else:
+ initial_state = None
outputs, _, _, attention = decoder.decode(
inputs,
sequence_length,
vocab_size=vocab_size,
+ initial_state=initial_state,
memory=memory,
memory_sequence_length=memory_sequence_length,
return_alignment_history=True)
@@ -72,44 +79,23 @@ def _testDecoderTraining(self, decoder, dtype=tf.float32):
else:
self.assertIsNone(attention)
- with self.test_session() as sess:
+ saver = tf.train.Saver(var_list=tf.global_variables())
+ with self.test_session(graph=tf.get_default_graph()) as sess:
sess.run(tf.global_variables_initializer())
- with self.test_session() as sess:
output_time_dim_val = sess.run(output_time_dim)
self.assertEqual(time_dim, output_time_dim_val)
if decoder.support_alignment_history:
attention_val = sess.run(attention)
self.assertAllEqual([batch_size, time_dim, memory_time], attention_val.shape)
-
- def testRNNDecoderTraining(self):
- decoder = decoders.RNNDecoder(2, 20)
- self._testDecoderTraining(decoder)
-
- def testAttentionalRNNDecoderTraining(self):
- decoder = decoders.AttentionalRNNDecoder(2, 20)
- self._testDecoderTraining(decoder)
-
- def testMultiAttentionalRNNDecoderTraining(self):
- decoder = decoders.MultiAttentionalRNNDecoder(2, 20, attention_layers=[0])
- self._testDecoderTraining(decoder)
-
- def testRNMTPlusDecoderTraining(self):
- decoder = decoders.RNMTPlusDecoder(2, 20, 4)
- self._testDecoderTraining(decoder)
-
- def testSelfAttentionDecoderTraining(self):
- decoder = decoders.SelfAttentionDecoder(2, num_units=6, num_heads=2, ffn_inner_dim=12)
- self._testDecoderTraining(decoder)
-
- def testSelfAttentionDecoderFP16Training(self):
- decoder = decoders.SelfAttentionDecoder(2, num_units=6, num_heads=2, ffn_inner_dim=12)
- self._testDecoderTraining(decoder, dtype=tf.float16)
-
- def _testDecoderGeneric(self,
- decoder,
- with_beam_search=False,
- with_alignment_history=False,
- dtype=tf.float32):
+ return saver.save(sess, os.path.join(self.get_temp_dir(), "model.ckpt"))
+
+ def _testDecoderInference(self,
+ decoder,
+ initial_state_fn=None,
+ with_beam_search=False,
+ with_alignment_history=False,
+ dtype=tf.float32,
+ checkpoint_path=None):
batch_size = 4
beam_width = 5
num_hyps = beam_width if with_beam_search else 1
@@ -126,6 +112,10 @@ def _testDecoderGeneric(self,
embedding = tf.placeholder_with_default(
np.random.randn(vocab_size, depth).astype(dtype.as_numpy_dtype()),
shape=(vocab_size, depth))
+ if initial_state_fn is not None:
+ initial_state = initial_state_fn(tf.shape(memory)[0], dtype)
+ else:
+ initial_state = None
if with_beam_search:
decode_fn = decoder.dynamic_decode_and_search
@@ -143,6 +133,7 @@ def _testDecoderGeneric(self,
start_tokens,
end_token,
vocab_size=vocab_size,
+ initial_state=initial_state,
maximum_iterations=10,
memory=memory,
memory_sequence_length=memory_sequence_length,
@@ -155,55 +146,71 @@ def _testDecoderGeneric(self,
self.assertEqual(log_probs.dtype, tf.float32)
decode_time = tf.shape(ids)[-1]
+ saver = tf.train.Saver(var_list=tf.global_variables())
- with self.test_session() as sess:
- sess.run(tf.global_variables_initializer())
+ with self.test_session(graph=tf.get_default_graph()) as sess:
+ if checkpoint_path is not None:
+ saver.restore(sess, checkpoint_path)
+ else:
+ sess.run(tf.global_variables_initializer())
- if not with_alignment_history:
- self.assertEqual(4, len(outputs))
- else:
- self.assertEqual(5, len(outputs))
- alignment_history = outputs[4]
- if decoder.support_alignment_history:
- self.assertIsInstance(alignment_history, tf.Tensor)
- with self.test_session() as sess:
+ if not with_alignment_history:
+ self.assertEqual(4, len(outputs))
+ else:
+ self.assertEqual(5, len(outputs))
+ alignment_history = outputs[4]
+ if decoder.support_alignment_history:
+ self.assertIsInstance(alignment_history, tf.Tensor)
alignment_history, decode_time = sess.run([alignment_history, decode_time])
self.assertAllEqual(
[batch_size, num_hyps, decode_time, memory_time], alignment_history.shape)
- else:
- self.assertIsNone(alignment_history)
+ else:
+ self.assertIsNone(alignment_history)
- with self.test_session() as sess:
ids, lengths, log_probs = sess.run([ids, lengths, log_probs])
self.assertAllEqual([batch_size, num_hyps], ids.shape[0:2])
self.assertAllEqual([batch_size, num_hyps], lengths.shape)
self.assertAllEqual([batch_size, num_hyps], log_probs.shape)
- def _testDecoder(self, decoder, dtype=tf.float32):
- with tf.variable_scope(tf.get_variable_scope()):
- self._testDecoderGeneric(
+ def _testDecoder(self, decoder, initial_state_fn=None, dtype=tf.float32):
+ with tf.Graph().as_default() as g:
+ checkpoint_path = self._testDecoderTraining(
decoder,
+ initial_state_fn=initial_state_fn,
+ dtype=dtype)
+
+ with tf.Graph().as_default() as g:
+ self._testDecoderInference(
+ decoder,
+ initial_state_fn=initial_state_fn,
with_beam_search=False,
with_alignment_history=False,
- dtype=dtype)
- with tf.variable_scope(tf.get_variable_scope(), reuse=True):
- self._testDecoderGeneric(
+ dtype=dtype,
+ checkpoint_path=checkpoint_path)
+ with tf.Graph().as_default() as g:
+ self._testDecoderInference(
decoder,
+ initial_state_fn=initial_state_fn,
with_beam_search=False,
with_alignment_history=True,
- dtype=dtype)
- with tf.variable_scope(tf.get_variable_scope(), reuse=True):
- self._testDecoderGeneric(
+ dtype=dtype,
+ checkpoint_path=checkpoint_path)
+ with tf.Graph().as_default() as g:
+ self._testDecoderInference(
decoder,
+ initial_state_fn=initial_state_fn,
with_beam_search=True,
with_alignment_history=False,
- dtype=dtype)
- with tf.variable_scope(tf.get_variable_scope(), reuse=True):
- self._testDecoderGeneric(
+ dtype=dtype,
+ checkpoint_path=checkpoint_path)
+ with tf.Graph().as_default() as g:
+ self._testDecoderInference(
decoder,
+ initial_state_fn=initial_state_fn,
with_beam_search=True,
with_alignment_history=True,
- dtype=dtype)
+ dtype=dtype,
+ checkpoint_path=checkpoint_path)
def testRNNDecoder(self):
decoder = decoders.RNNDecoder(2, 20)
@@ -213,6 +220,13 @@ def testAttentionalRNNDecoder(self):
decoder = decoders.AttentionalRNNDecoder(2, 20)
self._testDecoder(decoder)
+ def testAttentionalRNNDecoderWithDenseBridge(self):
+ decoder = decoders.AttentionalRNNDecoder(2, 36, bridge=bridge.DenseBridge())
+ encoder_cell = tf.nn.rnn_cell.MultiRNNCell([tf.nn.rnn_cell.LSTMCell(5),
+ tf.nn.rnn_cell.LSTMCell(5)])
+ initial_state_fn = lambda batch_size, dtype: encoder_cell.zero_state(batch_size, dtype)
+ self._testDecoder(decoder, initial_state_fn=initial_state_fn)
+
def testMultiAttentionalRNNDecoder(self):
decoder = decoders.MultiAttentionalRNNDecoder(2, 20, attention_layers=[0])
self._testDecoder(decoder)
| Parallel Encoder
I tried multi_source_nmt.py with SequenceRecordInputter, but in eval mode it gives me an error: `ValueError: Trying to share variable seq2seq/decoder/dense/kernel, but specified shape (512, 2892) and found shape (4096, 4096)`. I want to encode two different sources separately, then concatenate them and apply a transformation from 2048 to 512 for the decoder's initial state.
| Can you post the full model configuration you used?
```python
import tensorflow as tf
import opennmt as onmt
def model():
return onmt.models.SequenceToSequence(
source_inputter=onmt.inputters.ParallelInputter([
onmt.inputters.SequenceRecordInputter(),
onmt.inputters.SequenceRecordInputter()]),
target_inputter=onmt.inputters.WordEmbedder(
vocabulary_file_key="target_words_vocabulary",
embedding_size=512),
encoder=onmt.encoders.ParallelEncoder([
onmt.encoders.BidirectionalRNNEncoder(
num_layers=2,
num_units=512,
reducer=onmt.layers.ConcatReducer(),
cell_class=tf.contrib.rnn.LSTMCell,
dropout=0.3,
residual_connections=False),
onmt.encoders.BidirectionalRNNEncoder(
num_layers=2,
num_units=512,
reducer=onmt.layers.ConcatReducer(),
cell_class=tf.contrib.rnn.LSTMCell,
dropout=0.3,
residual_connections=False)],
outputs_reducer=onmt.layers.ConcatReducer(axis=1)),
decoder=onmt.decoders.AttentionalRNNDecoder(
num_layers=4,
num_units=512,
bridge=onmt.layers.DenseBridge(),
attention_mechanism_class=tf.contrib.seq2seq.LuongAttention,
cell_class=tf.contrib.rnn.LSTMCell,
dropout=0.3,
residual_connections=False))
``` | 2018-11-19T10:37:00 |
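(For context, a minimal sketch of the likely failure mode fixed by the patch above, assuming TF 1.x semantics: `tf.layers.dense` picks auto-generated variable names in creation order, so building the output projection before rather than after the bridge's dense layer makes the reused inference scope map onto variables of the wrong shape.)
```python
import tensorflow as tf

bridge_in = tf.zeros([1, 4096])   # e.g. concatenated bidirectional encoder states
decoder_out = tf.zeros([1, 512])  # decoder output fed to the vocabulary projection

with tf.variable_scope("decoder"):               # training graph
    _ = tf.layers.dense(bridge_in, 4096)         # bridge -> decoder/dense
    _ = tf.layers.dense(decoder_out, 2892)       # output -> decoder/dense_1

with tf.variable_scope("decoder", reuse=True):   # old inference path, reversed order
    _ = tf.layers.dense(decoder_out, 2892)       # tries to reuse decoder/dense
    # ValueError: Trying to share variable decoder/dense/kernel, but specified
    # shape (512, 2892) and found shape (4096, 4096)
```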
OpenNMT/OpenNMT-tf | 277 | OpenNMT__OpenNMT-tf-277 | [
"275"
] | c787ed0d9d808049247d91123f35a7cae383c0e6 | diff --git a/opennmt/utils/data.py b/opennmt/utils/data.py
--- a/opennmt/utils/data.py
+++ b/opennmt/utils/data.py
@@ -330,11 +330,11 @@ def _inject_index(index, x):
return x
def _key_func(x):
- length = tf.cast(length_fn(x), tf.int32)
- bucket_id = tf.constant(0, dtype=tf.int32)
+ length = length_fn(x)
+ bucket_id = tf.constant(0, dtype=tf.int64)
if not isinstance(length, list):
- bucket_id = tf.maximum(bucket_id, length // bucket_width)
- return tf.cast(bucket_id, tf.int64)
+ bucket_id = tf.maximum(bucket_id, tf.cast(length, bucket_id.dtype) // bucket_width)
+ return bucket_id
def _reduce_func(unused_key, dataset):
return dataset.apply(batch_dataset(batch_size))
| Latest (1.14) OpenNMT-tf gives an error on inference
Hi,
I have trained a ParallelEncoder model with two input files (words and their features). So far everything has worked just fine, but now I get the following error when trying to run inference:
Traceback (most recent call last):
File "opennmt/bin/main.py", line 192, in <module>
main()
File "opennmt/bin/main.py", line 175, in main
log_time=args.log_prediction_time)
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/runner.py", line 329, in infer
hooks=infer_hooks):
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 548, in predict
input_fn, model_fn_lib.ModeKeys.PREDICT)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1023, in _get_features_from_input_fn
result = self._call_input_fn(input_fn, mode)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1135, in _call_input_fn
return input_fn(**kwargs)
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/models/model.py", line 492, in <lambda>
maximum_labels_length=maximum_labels_length)
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/models/model.py", line 415, in _input_fn_impl
length_fn=self._get_features_length)
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/utils/data.py", line 352, in inference_pipeline
_key_func, _reduce_func, window_size=batch_size))
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 1140, in apply
dataset = transformation_func(self)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/data/python/ops/grouping.py", line 117, in _apply_fn
window_size_func)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/data/python/ops/grouping.py", line 422, in __init__
self._make_key_func(key_func, input_dataset)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/data/python/ops/grouping.py", line 451, in _make_key_func
"`key_func` must return a single tf.int64 scalar tensor.")
ValueError: `key_func` must return a single tf.int64 scalar tensor.
When I run inference on 1.10, it works just fine on the model trained on 1.14. The error also seems to be present on 1.13.1.
| Hi,
Thanks for reporting. Do you confirm you are using the `--auto_config` flag?
To workaround the issue, try adding this in your configuration:
```yml
infer:
bucket_width: 0
```
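(For reference, a minimal sketch of what the patch in this PR addresses, assuming TF 1.x semantics: with multiple sources the length function returns a Python list, and casting that list with `tf.cast` packs it into a single rank-1 tensor, so the `isinstance(length, list)` guard no longer skips the bucket computation and `key_func` ends up returning a non-scalar key.)
```python
import tensorflow as tf

lengths = [tf.constant(7), tf.constant(9)]  # one length per source

packed = tf.cast(lengths, tf.int32)         # the list is packed into a [2] tensor
print(isinstance(packed, list))             # False, so the old guard did not skip
print((packed // 5).shape)                  # (2,): the bucket id becomes a vector

# The fix casts only after the list check, keeping the key a scalar tf.int64:
length = lengths
bucket_id = tf.constant(0, dtype=tf.int64)
if not isinstance(length, list):
    bucket_id = tf.maximum(bucket_id, tf.cast(length, bucket_id.dtype) // 5)
print(bucket_id.shape)                      # (): scalar key, as group_by_window requires
```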
Great. This now works on "old" models; `bucket_width` 0 or 1 works just fine. But when I try to run inference with the new multi-source Transformer model, I get another error:
2018-11-28 10:58:38.020515: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:344] Starting optimization for grappler item: tf_graph
INFO:tensorflow:Running local_init_op.
2018-11-28 10:58:38.257302: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:344] Starting optimization for grappler item: tf_graph
INFO:tensorflow:Done running local_init_op.
2018-11-28 10:58:38.349261: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:344] Starting optimization for grappler item: tf_graph
2018-11-28 10:58:38.580481: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:344] Starting optimization for grappler item: tf_graph
Traceback (most recent call last):
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1292, in _do_call
return fn(*args)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1277, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1365, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 1818624 values, but the requested shape has 3637248
[[{{node transformer/decoder/while/Reshape_70}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](transformer/decoder/while/layer_5/multi_head_1/cond/Merge_1, transformer/decoder/while/Reshape_70/shape)]]
[[{{node transformer/Cast/_1585}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1967_transformer/Cast", tensor_type=DT_INT64, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "opennmt/bin/main.py", line 192, in <module>
main()
File "opennmt/bin/main.py", line 175, in main
log_time=args.log_prediction_time)
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/runner.py", line 329, in infer
hooks=infer_hooks):
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 568, in predict
preds_evaluated = mon_sess.run(predictions)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 671, in run
run_metadata=run_metadata)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1148, in run
run_metadata=run_metadata)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1239, in run
raise six.reraise(*original_exc_info)
File "/home/ari/anaconda3/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1224, in run
return self._sess.run(*args, **kwargs)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1296, in run
run_metadata=run_metadata)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1076, in run
return self._sess.run(*args, **kwargs)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 887, in run
run_metadata_ptr)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1110, in _run
feed_dict_tensor, options, run_metadata)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1286, in _do_run
run_metadata)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1306, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 1818624 values, but the requested shape has 3637248
[[node transformer/decoder/while/Reshape_70 (defined at /home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/utils/beam_search.py:88) = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](transformer/decoder/while/layer_5/multi_head_1/cond/Merge_1, transformer/decoder/while/Reshape_70/shape)]]
[[{{node transformer/Cast/_1585}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1967_transformer/Cast", tensor_type=DT_INT64, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Caused by op 'transformer/decoder/while/Reshape_70', defined at:
File "opennmt/bin/main.py", line 192, in <module>
main()
File "opennmt/bin/main.py", line 175, in main
log_time=args.log_prediction_time)
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/runner.py", line 329, in infer
hooks=infer_hooks):
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 550, in predict
features, None, model_fn_lib.ModeKeys.PREDICT, self.config)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1168, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/models/model.py", line 152, in _model_fn
_, predictions = self._build(features, labels, params, mode, config=config)
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/models/sequence_to_sequence.py", line 277, in _build
return_alignment_history=True))
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/decoders/decoder.py", line 378, in dynamic_decode_and_search
min_decode_length=minimum_length)
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/utils/beam_search.py", line 613, in beam_search
back_prop=False)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 3274, in while_loop
return_same_structure)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2994, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2929, in _BuildLoop
body_result = body(*packed_vars_for_body)
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/utils/beam_search.py", line 539, in inner_loop
i, alive_seq, alive_log_probs, states)
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/utils/beam_search.py", line 428, in grow_topk
lambda t: _unmerge_beam_dim(t, batch_size, beam_size), flat_states)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/nest.py", line 347, in map_structure
structure[0], [func(*x) for x in entries])
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/nest.py", line 347, in <listcomp>
structure[0], [func(*x) for x in entries])
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/utils/beam_search.py", line 428, in <lambda>
lambda t: _unmerge_beam_dim(t, batch_size, beam_size), flat_states)
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/utils/beam_search.py", line 88, in _unmerge_beam_dim
return tf.reshape(tensor, new_shape)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 6296, in reshape
"Reshape", tensor=tensor, shape=shape, name=name)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1770, in __init__
self._traceback = tf_stack.extract_stack()
InvalidArgumentError (see above for traceback): Input to reshape is a tensor with 1818624 values, but the requested shape has 3637248
[[node transformer/decoder/while/Reshape_70 (defined at /home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/utils/beam_search.py:88) = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](transformer/decoder/while/layer_5/multi_head_1/cond/Merge_1, transformer/decoder/while/Reshape_70/shape)]]
[[{{node transformer/Cast/_1585}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1967_transformer/Cast", tensor_type=DT_INT64, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
The model trains just fine (and it is a copy from example) | 2018-11-28T09:09:43 |
|
OpenNMT/OpenNMT-tf | 279 | OpenNMT__OpenNMT-tf-279 | [
"276"
] | 0a2890a7ea72a8ee0335c25e6dcaffdb920c3a10 | diff --git a/opennmt/models/sequence_to_sequence.py b/opennmt/models/sequence_to_sequence.py
--- a/opennmt/models/sequence_to_sequence.py
+++ b/opennmt/models/sequence_to_sequence.py
@@ -236,7 +236,7 @@ def _build(self, features, labels, params, mode, config=None):
if mode != tf.estimator.ModeKeys.TRAIN:
with tf.variable_scope("decoder", reuse=labels is not None):
- batch_size = tf.shape(encoder_sequence_length)[0]
+ batch_size = tf.shape(tf.contrib.framework.nest.flatten(encoder_outputs)[0])[0]
beam_width = params.get("beam_width", 1)
maximum_iterations = params.get("maximum_iterations", 250)
minimum_length = params.get("minimum_decoding_length", 0)
| Error when running inference on the multi-source Transformer
```
2018-11-28 10:58:38.020515: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:344] Starting optimization for grappler item: tf_graph
INFO:tensorflow:Running local_init_op.
2018-11-28 10:58:38.257302: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:344] Starting optimization for grappler item: tf_graph
INFO:tensorflow:Done running local_init_op.
2018-11-28 10:58:38.349261: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:344] Starting optimization for grappler item: tf_graph
2018-11-28 10:58:38.580481: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:344] Starting optimization for grappler item: tf_graph
Traceback (most recent call last):
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1292, in _do_call
return fn(*args)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1277, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1365, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 1818624 values, but the requested shape has 3637248
[[{{node transformer/decoder/while/Reshape_70}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](transformer/decoder/while/layer_5/multi_head_1/cond/Merge_1, transformer/decoder/while/Reshape_70/shape)]]
[[{{node transformer/Cast/_1585}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1967_transformer/Cast", tensor_type=DT_INT64, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "opennmt/bin/main.py", line 192, in <module>
main()
File "opennmt/bin/main.py", line 175, in main
log_time=args.log_prediction_time)
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/runner.py", line 329, in infer
hooks=infer_hooks):
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 568, in predict
preds_evaluated = mon_sess.run(predictions)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 671, in run
run_metadata=run_metadata)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1148, in run
run_metadata=run_metadata)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1239, in run
raise six.reraise(*original_exc_info)
File "/home/ari/anaconda3/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1224, in run
return self._sess.run(*args, **kwargs)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1296, in run
run_metadata=run_metadata)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1076, in run
return self._sess.run(*args, **kwargs)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 887, in run
run_metadata_ptr)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1110, in _run
feed_dict_tensor, options, run_metadata)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1286, in _do_run
run_metadata)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1306, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 1818624 values, but the requested shape has 3637248
[[node transformer/decoder/while/Reshape_70 (defined at /home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/utils/beam_search.py:88) = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](transformer/decoder/while/layer_5/multi_head_1/cond/Merge_1, transformer/decoder/while/Reshape_70/shape)]]
[[{{node transformer/Cast/_1585}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1967_transformer/Cast", tensor_type=DT_INT64, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Caused by op 'transformer/decoder/while/Reshape_70', defined at:
File "opennmt/bin/main.py", line 192, in <module>
main()
File "opennmt/bin/main.py", line 175, in main
log_time=args.log_prediction_time)
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/runner.py", line 329, in infer
hooks=infer_hooks):
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 550, in predict
features, None, model_fn_lib.ModeKeys.PREDICT, self.config)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1168, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/models/model.py", line 152, in _model_fn
_, predictions = self._build(features, labels, params, mode, config=config)
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/models/sequence_to_sequence.py", line 277, in _build
return_alignment_history=True))
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/decoders/decoder.py", line 378, in dynamic_decode_and_search
min_decode_length=minimum_length)
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/utils/beam_search.py", line 613, in beam_search
back_prop=False)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 3274, in while_loop
return_same_structure)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2994, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2929, in _BuildLoop
body_result = body(*packed_vars_for_body)
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/utils/beam_search.py", line 539, in inner_loop
i, alive_seq, alive_log_probs, states)
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/utils/beam_search.py", line 428, in grow_topk
lambda t: _unmerge_beam_dim(t, batch_size, beam_size), flat_states)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/nest.py", line 347, in map_structure
structure[0], [func(*x) for x in entries])
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/nest.py", line 347, in <listcomp>
structure[0], [func(*x) for x in entries])
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/utils/beam_search.py", line 428, in <lambda>
lambda t: _unmerge_beam_dim(t, batch_size, beam_size), flat_states)
File "/home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/utils/beam_search.py", line 88, in _unmerge_beam_dim
return tf.reshape(tensor, new_shape)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 6296, in reshape
"Reshape", tensor=tensor, shape=shape, name=name)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1770, in __init__
self._traceback = tf_stack.extract_stack()
InvalidArgumentError (see above for traceback): Input to reshape is a tensor with 1818624 values, but the requested shape has 3637248
[[node transformer/decoder/while/Reshape_70 (defined at /home/ari/tf/Onmt-14/OpenNMT-tf/opennmt/utils/beam_search.py:88) = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](transformer/decoder/while/layer_5/multi_head_1/cond/Merge_1, transformer/decoder/while/Reshape_70/shape)]]
[[{{node transformer/Cast/_1585}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1967_transformer/Cast", tensor_type=DT_INT64, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
```
The model trains just fine (and it is a copy from example)
_Originally posted by @Dagamies in https://github.com/OpenNMT/OpenNMT-tf/issues/275#issuecomment-442370984_
| @Dagamies, can you share your run configuration?
Model is from example, other parameters are:
```
model_dir: /data/20/tf/multi_tf
data:
train_features_file:
- /data/20/tf/a_train.words.txt
- /data/20/tf/a_train.feats.txt
train_labels_file: /data/19/tf/a_train_r.words.txt
eval_features_file:
- /data/20/tf/a_validate.words.txt
- /data/20/tf/a_validate.feats.txt
eval_labels_file: /data/20/tf/a_validate_r.words.txt
source_vocabulary_1: /data/20/tf/a_train.words-vocab.txt
source_vocabulary_2: /data/20/tf/a_train.feats-vocab.txt
target_vocabulary: /data/20/tf/a_train_r.words-vocab.txt
params:
optimizer: AdamOptimizer
learning_rate: 4.0
decay_type: noam_decay
decay_rate: 512
decay_steps: 1000
decay_step_duration: 8
average_loss_in_time: true
label_smoothing: 0.01 # was 0.1
beam_width: 16
# length_penalty: 0.6
clip_gradients: 8.0
optimizer_params:
beta1: 0.9
beta2: 0.998
train:
batch_size: 32
batch_type: tokens
bucket_width: 1
maximum_features_length: 351
maximum_labels_length: 767
save_checkpoints_steps: 20000
keep_checkpoint_max: 8
save_summary_steps: 100
train_steps: 10000000
clip_gradients: 8.0
# Consider setting this to -1 to match the number of training examples.
sample_buffer_size: -1
eval:
batch_size: 32
eval_delay: 3600
infer:
batch_size: 1
bucket_width: 0
``` | 2018-11-28T11:39:37 |
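(For context, a minimal sketch of the likely root cause addressed by the patch above: with multiple sources, `encoder_sequence_length` is a Python list of `[batch]` tensors, so `tf.shape(...)[0]` measures the number of sources instead of the batch size — which is consistent with the requested reshape size being exactly twice the actual one.)
```python
import tensorflow as tf

batch_size = 3
lengths = [tf.fill([batch_size], 5), tf.fill([batch_size], 7)]  # two sources

wrong = tf.shape(lengths)[0]  # the list packs to [2, batch] -> yields 2
encoder_outputs = tf.zeros([batch_size, 10, 512])
right = tf.shape(tf.contrib.framework.nest.flatten(encoder_outputs)[0])[0]

with tf.Session() as sess:
    print(sess.run([wrong, right]))  # [2, 3]
```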
|
OpenNMT/OpenNMT-tf | 291 | OpenNMT__OpenNMT-tf-291 | [
"289"
] | c6abe8eeb3055e2c1502d5f08f72e114abbc1ad4 | diff --git a/opennmt/models/model.py b/opennmt/models/model.py
--- a/opennmt/models/model.py
+++ b/opennmt/models/model.py
@@ -494,6 +494,11 @@ def input_fn(self,
def _serving_input_fn_impl(self, metadata):
"""See ``serving_input_fn``."""
self._initialize(metadata)
+ # This is a hack for SequenceRecordInputter that currently infers the input
+ # depth from the data files.
+ # TODO: This method should not require the training data.
+ if self.features_inputter is not None and "train_features_file" in metadata:
+ _ = self.features_inputter.make_dataset(metadata["train_features_file"])
return self._get_serving_input_receiver()
def serving_input_fn(self, metadata):
| Crash trying to export model using SequenceRecordInputter
From what I see, `_get_serving_input` is called on `SequenceRecordInputter` before `make_dataset` is called, so `self.input_depth` does not exist yet:
File "/usr/local/lib/python3.6/dist-packages/opennmt/bin/main.py", line 179, in main
export_dir_base=args.export_dir_base)
File "/usr/local/lib/python3.6/dist-packages/opennmt/runner.py", line 376, in export
**kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/estimator/estimator.py", line 734, in export_saved_model
strip_default_attrs=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/estimator/estimator.py", line 663, in export_savedmodel
mode=model_fn_lib.ModeKeys.PREDICT)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/estimator/estimator.py", line 789, in _export_saved_model_for_mode
strip_default_attrs=strip_default_attrs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/estimator/estimator.py", line 907, in _export_all_saved_models
mode=model_fn_lib.ModeKeys.PREDICT)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/estimator/estimator.py", line 977, in _add_meta_graph_for_mode
input_receiver = input_receiver_fn()
File "/usr/local/lib/python3.6/dist-packages/opennmt/models/model.py", line 509, in <lambda>
return lambda: self._serving_input_fn_impl(metadata)
File "/usr/local/lib/python3.6/dist-packages/opennmt/models/model.py", line 497, in _serving_input_fn_impl
return self._get_serving_input_receiver()
File "/usr/local/lib/python3.6/dist-packages/opennmt/models/model.py", line 272, in _get_serving_input_receiver
return self.features_inputter.get_serving_input_receiver()
File "/usr/local/lib/python3.6/dist-packages/opennmt/inputters/inputter.py", line 104, in get_serving_input_receiver
receiver_tensors, features = self._get_serving_input()
File "/usr/local/lib/python3.6/dist-packages/opennmt/inputters/inputter.py", line 333, in _get_serving_input
receiver_tensors, features = inputter._get_serving_input() # pylint: disable=protected-access
File "/usr/local/lib/python3.6/dist-packages/opennmt/inputters/record_inputter.py", line 42, in _get_serving_input
"tensor": tf.placeholder(self.dtype, shape=(None, None, self.input_depth)),
AttributeError: 'SequenceRecordInputter' object has no attribute 'input_depth'
| Mmh yes, that is an issue.
Initially, the user had to set the input depth in the configuration file. However, since https://github.com/OpenNMT/OpenNMT-tf/commit/d9f0cbd3cb7ae6830aa77675c63424dd2a959d45, the depth is directly inferred from the data but the export code was not updated accordingly. We may need to rethink this as the training data should not be required when exporting a model.
I'd be glad to help :)
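As a note on the workaround merged above: it infers the input depth from the training records at export time, so the run configuration used for export must still reference them (a sketch; the path is a placeholder):
```yml
data:
  train_features_file: data/train-features.records
```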
|
OpenNMT/OpenNMT-tf | 361 | OpenNMT__OpenNMT-tf-361 | [
"360"
] | 2fbbc9c72cf2b430ce6dcfce2cc9652a51c9ba6d | diff --git a/opennmt/models/sequence_tagger.py b/opennmt/models/sequence_tagger.py
--- a/opennmt/models/sequence_tagger.py
+++ b/opennmt/models/sequence_tagger.py
@@ -49,6 +49,10 @@ def __init__(self,
else:
self.tagging_scheme = None
+ def initialize(self, metadata):
+ self.tagging_scheme = metadata.get("tagging_scheme", self.tagging_scheme)
+ super(SequenceTagger, self).initialize(metadata)
+
def _call(self, features, labels, params, mode):
training = mode == tf.estimator.ModeKeys.TRAIN
length = self.features_inputter.get_length(features)
| Question about evaluation metrics
Hello,
I use OpenNMT-tf to train models for different tasks, and I want to evaluate a model's effectiveness to know how long I should train. But I'm not sure how to evaluate a sequence tagging model, or another sequence-to-sequence model, on validation data to measure accuracy, recall, precision, and F1-score. I have tried to train on a supported tagging scheme (BIOES) by adding
```
train:
tagging_scheme: BIOES
```
to the parameters in the YAML file, but I couldn't find where the additional evaluation metrics are computed. Could you point me to a tutorial?
Thanks.
| Hi,
Thanks for the question. Currently, the tagging scheme should be configured in the model definition. If you used the `SeqTagger` model from the catalog, you should copy its definition to another file and customize it, for example:
```python
import opennmt as onmt
def model():
return onmt.models.SequenceTagger(
inputter=onmt.inputters.MixedInputter([
onmt.inputters.WordEmbedder(
vocabulary_file_key="words_vocabulary",
embedding_size=None,
embedding_file_key="words_embedding",
trainable=True),
onmt.inputters.CharConvEmbedder(
vocabulary_file_key="chars_vocabulary",
embedding_size=30,
num_outputs=30,
kernel_size=3,
stride=1,
dropout=0.5)],
dropout=0.5),
encoder=onmt.encoders.BidirectionalRNNEncoder(
num_layers=1,
num_units=400,
reducer=onmt.layers.ConcatReducer(),
dropout=0.5,
residual_connections=False),
labels_vocabulary_file_key="tags_vocabulary",
tagging_scheme="bioes",
crf_decoding=True)
```
The metrics will then be automatically computed.
But you are correct, this parameter should be configurable in the YAML configuration as it is related to the data, not the model structure. I'll add this change to my list.
If the tagging scheme is successfully set, would the evaluation report be printed every 100 training steps along with loss, or written in a text file after an epoch of training?
Yes, like this:
```text
INFO:tensorflow:Finished evaluation at 2019-03-04-09:43:49
INFO:tensorflow:Saving dict for global step 6800: accuracy = 0.9654079, f1 = 0.9654079, global_step = 6800, loss = 2.480431, precision = 0.9654079, recall = 0.9654079
```
They also appear in TensorBoard. | 2019-03-04T09:54:25 |
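Following up on the patch at the top of this entry, a minimal sketch of the resulting configuration — assuming the `metadata` passed to `initialize()` corresponds to the `data` block of the YAML file — would be:
```yml
data:
  tagging_scheme: bioes
```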
|
OpenNMT/OpenNMT-tf | 362 | OpenNMT__OpenNMT-tf-362 | [
"338"
] | aa6ed5b701042aa43d11e840b25f967bc7437f5a | diff --git a/opennmt/models/sequence_to_sequence.py b/opennmt/models/sequence_to_sequence.py
--- a/opennmt/models/sequence_to_sequence.py
+++ b/opennmt/models/sequence_to_sequence.py
@@ -6,6 +6,7 @@
from opennmt import inputters
from opennmt import layers
+from opennmt.layers import reducer
from opennmt.models.model import Model
from opennmt.utils import compat
from opennmt.utils.losses import cross_entropy_sequence_loss
@@ -276,6 +277,9 @@ def _call(self, features, labels, params, mode):
original_shape = tf.shape(target_tokens)
target_tokens = tf.reshape(target_tokens, [-1, original_shape[-1]])
attention = tf.reshape(alignment, [-1, tf.shape(alignment)[2], tf.shape(alignment)[3]])
+ # We don't have attention for </s> but ensure that the attention time dimension matches
+ # the tokens time dimension.
+ attention = reducer.align_in_time(attention, tf.shape(target_tokens)[1])
replaced_target_tokens = replace_unknown_target(target_tokens, source_tokens, attention)
target_tokens = tf.reshape(replaced_target_tokens, original_shape)
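(For context: beam search returns one attention vector per decoded token but none for the final `</s>`, so the attention time dimension can be one step shorter than the token time dimension. `reducer.align_in_time` reconciles the two; a minimal sketch of that behavior, under the assumption that `x` is `[batch, time, depth]`:)
```python
import tensorflow as tf

def align_in_time(x, length):
    # Pad the time dimension with zeros when too short, slice it when too long.
    time_dim = tf.shape(x)[1]
    return tf.cond(
        time_dim < length,
        lambda: tf.pad(x, [[0, 0], [0, length - time_dim], [0, 0]]),
        lambda: x[:, :length])
```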
| diff --git a/opennmt/tests/model_test.py b/opennmt/tests/model_test.py
--- a/opennmt/tests/model_test.py
+++ b/opennmt/tests/model_test.py
@@ -159,6 +159,20 @@ def testSequenceToSequenceWithGuidedAlignment(self):
loss = sess.run(estimator_spec.loss)
self.assertIsInstance(loss, Number)
+ def testSequenceToSequenceWithReplaceUnknownTarget(self):
+ mode = tf.estimator.ModeKeys.PREDICT
+ model = catalog.NMTSmall()
+ params = model.auto_config()["params"]
+ params["replace_unknown_target"] = True
+ features_file, _, metadata = self._makeToyEnDeData()
+ features = model.input_fn(mode, 16, metadata, features_file)()
+ estimator_spec = model.model_fn()(features, None, params, mode, None)
+ with self.test_session() as sess:
+ sess.run(tf.global_variables_initializer())
+ sess.run(tf.local_variables_initializer())
+ sess.run(tf.tables_initializer())
+ _ = sess.run(estimator_spec.predictions)
+
def testSequenceToSequenceServing(self):
# Test that serving features can be forwarded into the model.
model = catalog.NMTSmall()
| Receiving "Inputs to operation seq2seq/Select of type Select must have the same size and shape"
Hi
When I trained the NMTSmall model, I hit this crash. With an older version of OpenNMT and the same data, this issue did not occur.
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1278, in _do_call
return fn(*args)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1263, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1350, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Inputs to operation seq2seq/Select of type Select must have the same size and shape. Input 0: [160,8] != input 1: [160,7]
[[Node: seq2seq/Select = Select[T=DT_STRING, _device="/job:localhost/replica:0/task:0/device:CPU:0"](seq2seq/Equal, seq2seq/GatherNd, seq2seq/Reshape)]]
[[Node: seq2seq/decoder_2/while/grow_finished_topk_seq/_410 = _Send[T=DT_INT32, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1653_seq2seq/decoder_2/while/grow_finished_topk_seq", _device="/job:localhost/replica:0/task:0/device:GPU:0"](seq2seq/decoder_2/while/grow_finished_topk_seq)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/onmt-main", line 10, in <module>
sys.exit(main())
File "/usr/local/lib/python3.5/dist-packages/opennmt/bin/main.py", line 172, in main
runner.train_and_evaluate(checkpoint_path=args.checkpoint_path)
File "/usr/local/lib/python3.5/dist-packages/opennmt/runner.py", line 283, in train_and_evaluate
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/training.py", line 451, in train_and_evaluate
return executor.run()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/training.py", line 590, in run
return self.run_local()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/training.py", line 691, in run_local
saving_listeners=saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 376, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 1145, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 1173, in _train_model_default
saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 1451, in _train_with_estimator_spec
_, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 583, in run
run_metadata=run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 1059, in run
run_metadata=run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 1150, in run
raise six.reraise(*original_exc_info)
File "/usr/local/lib/python3.5/dist-packages/six.py", line 693, in reraise
raise value
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 1135, in run
return self._sess.run(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 1215, in run
run_metadata=run_metadata))
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/basic_session_run_hooks.py", line 464, in after_run
if self._save(run_context.session, global_step):
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/basic_session_run_hooks.py", line 489, in _save
if l.after_save(session, step):
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/training.py", line 497, in after_save
self._evaluate(global_step_value) # updates self.eval_result
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/training.py", line 517, in _evaluate
self._evaluator.evaluate_and_export())
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/training.py", line 884, in evaluate_and_export
hooks=self._eval_spec.hooks)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 470, in evaluate
output_dir=self.eval_dir(name))
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 1501, in _evaluate_run
config=self._session_config)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/evaluation.py", line 212, in _evaluate_once
session.run(eval_ops, feed_dict)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 583, in run
run_metadata=run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 1059, in run
run_metadata=run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 1150, in run
raise six.reraise(*original_exc_info)
File "/usr/local/lib/python3.5/dist-packages/six.py", line 693, in reraise
raise value
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 1135, in run
return self._sess.run(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 1207, in run
run_metadata=run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 987, in run
return self._sess.run(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 877, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1100, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1272, in _do_run
run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1291, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Inputs to operation seq2seq/Select of type Select must have the same size and shape. Input 0: [160,8] != input 1: [160,7]
[[Node: seq2seq/Select = Select[T=DT_STRING, _device="/job:localhost/replica:0/task:0/device:CPU:0"](seq2seq/Equal, seq2seq/GatherNd, seq2seq/Reshape)]]
[[Node: seq2seq/decoder_2/while/grow_finished_topk_seq/_410 = _Send[T=DT_INT32, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1653_seq2seq/decoder_2/while/grow_finished_topk_seq", _device="/job:localhost/replica:0/task:0/device:GPU:0"](seq2seq/decoder_2/while/grow_finished_topk_seq)]]
Caused by op 'seq2seq/Select', defined at:
File "/usr/local/bin/onmt-main", line 10, in <module>
sys.exit(main())
File "/usr/local/lib/python3.5/dist-packages/opennmt/bin/main.py", line 172, in main
runner.train_and_evaluate(checkpoint_path=args.checkpoint_path)
File "/usr/local/lib/python3.5/dist-packages/opennmt/runner.py", line 283, in train_and_evaluate
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/training.py", line 451, in train_and_evaluate
return executor.run()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/training.py", line 590, in run
return self.run_local()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/training.py", line 691, in run_local
saving_listeners=saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 376, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 1145, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 1173, in _train_model_default
saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 1451, in _train_with_estimator_spec
_, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 583, in run
run_metadata=run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 1059, in run
run_metadata=run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 1135, in run
return self._sess.run(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 1215, in run
run_metadata=run_metadata))
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/basic_session_run_hooks.py", line 464, in after_run
if self._save(run_context.session, global_step):
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/basic_session_run_hooks.py", line 489, in _save
if l.after_save(session, step):
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/training.py", line 497, in after_save
self._evaluate(global_step_value) # updates self.eval_result
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/training.py", line 517, in _evaluate
self._evaluator.evaluate_and_export())
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/training.py", line 884, in evaluate_and_export
hooks=self._eval_spec.hooks)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 463, in evaluate
input_fn, hooks, checkpoint_path)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 1463, in _evaluate_build_graph
features, labels, model_fn_lib.ModeKeys.EVAL, self.config)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 1133, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/opennmt/models/model.py", line 152, in _model_fn
logits, predictions = self._build(features, labels, params, mode, config=config)
File "/usr/local/lib/python3.5/dist-packages/opennmt/models/sequence_to_sequence.py", line 276, in _build
replaced_target_tokens = replace_unknown_target(target_tokens, source_tokens, attention)
File "/usr/local/lib/python3.5/dist-packages/opennmt/models/sequence_to_sequence.py", line 516, in replace_unknown_target
y=target_tokens)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/array_ops.py", line 2608, in where
return gen_math_ops.select(condition=condition, x=x, y=y, name=name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 6876, in select
"Select", condition=condition, t=x, e=y, name=name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/deprecation.py", line 454, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 3155, in create_op
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1717, in __init__
self._traceback = tf_stack.extract_stack()
InvalidArgumentError (see above for traceback): Inputs to operation seq2seq/Select of type Select must have the same size and shape. Input 0: [160,8] != input 1: [160,7]
[[Node: seq2seq/Select = Select[T=DT_STRING, _device="/job:localhost/replica:0/task:0/device:CPU:0"](seq2seq/Equal, seq2seq/GatherNd, seq2seq/Reshape)]]
[[Node: seq2seq/decoder_2/while/grow_finished_topk_seq/_410 = _Send[T=DT_INT32, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1653_seq2seq/decoder_2/while/grow_finished_topk_seq", _device="/job:localhost/replica:0/task:0/device:GPU:0"](seq2seq/decoder_2/while/grow_finished_topk_seq)]]
| Hi,
Can you share the full configuration file and the command line?
The command line is:
`onmt-main train_and_eval --model_type NMTSmall --config myconfig.yml`
The configuration is:
```yml
data:
eval_features_file: /root/data/20190214/dev.ocr
eval_labels_file: /root/data/20190214/dev.std
source_words_vocabulary: /root/data/20190214/vocab.ocr
target_words_vocabulary: /root/data/20190214/vocab.std
train_features_file: /root/data/20190214/train.ocr
train_labels_file: /root/data/20190214/train.std
eval:
batch_size: 32
eval_delay: 36000
export: true
exporters: last
external_evaluations: BLEU
num_threads: 4
save_eval_predictions: true
infer:
batch_size: 32
bucket_width: null
n_best: 1
num_threads: 1
model_dir: /root/models/20190214/words
params:
beam_width: 5
clip_gradients: 5.0
decat_type: exponential_decay
decay_rate: 0.7
decay_steps: 50000
learning_rate: 1.0
maximum_iterations: 250
optimizer: GradientDescentOptimizer
param_init: 0.1
replace_unknown_target: true
start_decay_steps: 500000
score:
batch_size: 64
train:
batch_size: 64
batch_type: examples
bucket_width: 5
maximum_features_length: 30
maximum_labels_length: 30
sample_buffer_size: -1
save_checkpoints_steps: 10000
save_summary_steps: 1000
train_steps: 1000000
```
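Incidentally, the key `decat_type` in the configuration above looks like a typo for `decay_type`; if so, the learning-rate decay settings are silently ignored. The intended block would presumably read (an assumption on our part, not something raised in the thread):
```yml
params:
  decay_type: exponential_decay
  decay_rate: 0.7
  decay_steps: 50000
```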
Hi
Today I upgraded OpenNMT-tf to 1.20.1, but I still have the same problem.
Disabling the beam search is a quick workaround:
```yaml
params:
beam_width: 1
```
It would help if you could share the checkpoint, the vocabularies, and the test data so that I can reproduce the error.
Hi
I set `beam_width` to 1, and it also failed.
Here is the data:
[temp.zip](https://github.com/OpenNMT/OpenNMT-tf/files/2924752/temp.zip)
Thanks, but some checkpoint files are missing. There should also be files ending with `.meta`, `.index`, and `.data`.
[temp.zip](https://github.com/OpenNMT/OpenNMT-tf/files/2925659/temp.zip)
Almost :), the file `model.ckpt-5000.data-00001-of-00002` is missing. Thanks for the effort.
This file is too big to upload here.
I have uploaded it online; @guillaumekln can get it via:
link: https://pan.baidu.com/s/1uv-QKB4T9ko608bRtCdnkA password: f7eu
Thanks, I reproduced this. The commit https://github.com/OpenNMT/OpenNMT-tf/commit/59c25b66dffb5c1bf968d47488dd043c6e66e256 broke the parameter `replace_unknown_target`. | 2019-03-04T13:04:20 |
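For context on the failing op: `replace_unknown_target` substitutes each `<unk>` in the decoded target with the source token that received the most attention at that step. The sketch below is a hypothetical reconstruction of that step under assumed shapes (TensorFlow 1.x style, matching the traceback; not the library's exact code). The final `tf.where` is the `seq2seq/Select` node from the error, and it requires all three operands to share the shape `[batch, target_len]`; this is exactly what breaks when the attention history and the decoded sequence disagree by one step (`[160,8]` vs `[160,7]`).
```python
import tensorflow as tf  # TensorFlow 1.x, as in the traceback above

def replace_unknown_target(target_tokens, source_tokens, attention, unk="<unk>"):
    """Sketch: replace <unk> targets by the most attended source token."""
    # attention: [batch, target_len, source_len]; take the argmax source position.
    alignment = tf.argmax(attention, axis=-1, output_type=tf.int32)  # [batch, target_len]
    batch_size = tf.shape(source_tokens)[0]
    target_len = tf.shape(alignment)[1]
    batch_pos = tf.tile(tf.expand_dims(tf.range(batch_size), 1), [1, target_len])
    indices = tf.stack([batch_pos, alignment], axis=-1)              # [batch, target_len, 2]
    aligned_source = tf.gather_nd(source_tokens, indices)            # [batch, target_len]
    # Select fails with InvalidArgumentError unless target_tokens has the same
    # [batch, target_len] shape as the two branches built from the attention.
    return tf.where(tf.equal(target_tokens, unk), x=aligned_source, y=target_tokens)
```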
OpenNMT/OpenNMT-tf | 366 | OpenNMT__OpenNMT-tf-366 | [
"365"
] | 02f2e48e7041cea4ec765d6c559d563f13fae1a2 | diff --git a/opennmt/layers/bridge.py b/opennmt/layers/bridge.py
--- a/opennmt/layers/bridge.py
+++ b/opennmt/layers/bridge.py
@@ -49,8 +49,11 @@ def __call__(self, encoder_state, decoder_zero_state): # pylint: disable=argume
The decoder initial state.
"""
inputs = [encoder_state, decoder_zero_state]
- # Always build for backward compatibility.
- self.build(compat.nest.map_structure(lambda x: x.shape, inputs))
+ if compat.is_tf2():
+ return super(Bridge, self).__call__(inputs)
+ # Build by default for backward compatibility.
+ if not compat.reuse():
+ self.build(compat.nest.map_structure(lambda x: x.shape, inputs))
return self.call(inputs)
@abc.abstractmethod
diff --git a/opennmt/layers/position.py b/opennmt/layers/position.py
--- a/opennmt/layers/position.py
+++ b/opennmt/layers/position.py
@@ -63,9 +63,13 @@ def __call__(self, inputs, sequence_length=None, position=None): # pylint: disa
A ``tf.Tensor`` of shape :math:`[B, T, D]` where :math:`D` depends on the
:attr:`reducer`.
"""
- # Always build for backward compatibility.
+ if compat.is_tf2():
+ return super(PositionEncoder, self).__call__(
+ inputs, sequence_length=sequence_length, position=position)
self._dtype = inputs.dtype
- self.build(inputs.shape)
+ # Build by default for backward compatibility.
+ if not compat.reuse():
+ self.build(inputs.shape)
return self.call(
inputs, sequence_length=sequence_length, position=position)
diff --git a/opennmt/utils/compat.py b/opennmt/utils/compat.py
--- a/opennmt/utils/compat.py
+++ b/opennmt/utils/compat.py
@@ -53,6 +53,10 @@ def name_from_variable_scope(name=""):
compat_name = "%s/%s" % (var_scope, compat_name)
return compat_name
+def reuse():
+ """Returns ``True`` if the current variable scope is marked for reuse."""
+ return tf_compat(v1="get_variable_scope")().reuse
+
def _string_to_tf_symbol(symbol):
modules = symbol.split(".")
namespace = tf
| Failed to restore checkpoint during evaluation when using DenseBridge
It seems there is still an issue: after writing the first checkpoint, OpenNMT fails:
```
INFO:tensorflow:Starting evaluation at 2019-03-04T18:24:11Z
INFO:tensorflow:Graph was finalized.
2019-03-04 20:24:11.831808: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-03-04 20:24:11.831888: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-03-04 20:24:11.831898: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-03-04 20:24:11.831906: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-03-04 20:24:11.832092: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15190 MB memory) -> physical GPU (device: 0, name: Tesla P100-SXM2-16GB, pci bus id: 0002:01:00.0, compute capability: 6.0)
WARNING:tensorflow:From /home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
INFO:tensorflow:Restoring parameters from /projects/aigine/35/tf/m-1088-3-5-5-s/model.ckpt-4000
2019-03-04 20:24:12.044384: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at save_restore_v2_ops.cc:184 : Not found: Key seq2seq/decoder/dense/bias_2 not found in checkpoint
WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.
Traceback (most recent call last):
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.NotFoundError: Key seq2seq/decoder/dense/bias_2 not found in checkpoint
[[{{node save/RestoreV2}}]]
[[{{node save/RestoreV2}}]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1276, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Key seq2seq/decoder/dense/bias_2 not found in checkpoint
[[node save/RestoreV2 (defined at /home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py:1537) ]]
[[node save/RestoreV2 (defined at /home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py:1537) ]]
Caused by op 'save/RestoreV2', defined at:
File "opennmt/bin/main.py", line 201, in <module>
main()
File "opennmt/bin/main.py", line 172, in main
runner.train_and_evaluate(checkpoint_path=args.checkpoint_path)
File "/home/ari/tf/onmt-120/OpenNMT-tf/opennmt/runner.py", line 293, in train_and_evaluate
result = tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 471, in train_and_evaluate
return executor.run()
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 611, in run
return self.run_local()
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 712, in run_local
saving_listeners=saving_listeners)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 358, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1124, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1158, in _train_model_default
saving_listeners)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1407, in _train_with_estimator_spec
_, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 676, in run
run_metadata=run_metadata)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1171, in run
run_metadata=run_metadata)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1255, in run
return self._sess.run(*args, **kwargs)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1335, in run
run_metadata=run_metadata))
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/basic_session_run_hooks.py", line 582, in after_run
if self._save(run_context.session, global_step):
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/basic_session_run_hooks.py", line 607, in _save
if l.after_save(session, step):
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 517, in after_save
self._evaluate(global_step_value) # updates self.eval_result
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 537, in _evaluate
self._evaluator.evaluate_and_export())
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 913, in evaluate_and_export
hooks=self._eval_spec.hooks)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 469, in evaluate
name=name)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 511, in _actual_eval
return _evaluate()
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 500, in _evaluate
output_dir=self.eval_dir(name))
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1537, in _evaluate_run
config=self._session_config)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/evaluation.py", line 271, in _evaluate_once
session_creator=session_creator, hooks=hooks) as session:
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 934, in __init__
stop_grace_period_secs=stop_grace_period_secs)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 648, in __init__
self._sess = _RecoverableSession(self._coordinated_creator)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1122, in __init__
_WrappedSession.__init__(self, self._create_session())
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1127, in _create_session
return self._sess_creator.create_session()
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 805, in create_session
self.tf_sess = self._session_creator.create_session()
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 562, in create_session
self._scaffold.finalize()
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 217, in finalize
self._saver = training_saver._get_saver_or_default() # pylint: disable=protected-access
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 604, in _get_saver_or_default
saver = Saver(sharded=True, allow_empty=True)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 832, in __init__
self.build()
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 844, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 881, in _build
build_save=build_save, build_restore=build_restore)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 507, in _build_internal
restore_sequentially, reshape)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 385, in _AddShardedRestoreOps
name="restore_shard"))
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 332, in _AddRestoreOps
restore_sequentially)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 580, in bulk_restore
return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1572, in restore_v2
name=name)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
op_def=op_def)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1801, in __init__
self._traceback = tf_stack.extract_stack()
NotFoundError (see above for traceback): Key seq2seq/decoder/dense/bias_2 not found in checkpoint
[[node save/RestoreV2 (defined at /home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py:1537) ]]
[[node save/RestoreV2 (defined at /home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py:1537) ]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1286, in restore
names_to_keys = object_graph_key_mapping(save_path)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1591, in object_graph_key_mapping
checkpointable.OBJECT_GRAPH_PROTO_KEY)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 370, in get_tensor
status)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: Key _CHECKPOINTABLE_OBJECT_GRAPH not found in checkpoint
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "opennmt/bin/main.py", line 201, in <module>
main()
File "opennmt/bin/main.py", line 172, in main
runner.train_and_evaluate(checkpoint_path=args.checkpoint_path)
File "/home/ari/tf/onmt-120/OpenNMT-tf/opennmt/runner.py", line 293, in train_and_evaluate
result = tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 471, in train_and_evaluate
return executor.run()
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 611, in run
return self.run_local()
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 712, in run_local
saving_listeners=saving_listeners)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 358, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1124, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1158, in _train_model_default
saving_listeners)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1407, in _train_with_estimator_spec
_, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 676, in run
run_metadata=run_metadata)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1171, in run
run_metadata=run_metadata)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1270, in run
raise six.reraise(*original_exc_info)
File "/home/ari/anaconda3/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1255, in run
return self._sess.run(*args, **kwargs)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1335, in run
run_metadata=run_metadata))
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/basic_session_run_hooks.py", line 582, in after_run
if self._save(run_context.session, global_step):
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/basic_session_run_hooks.py", line 607, in _save
if l.after_save(session, step):
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 517, in after_save
self._evaluate(global_step_value) # updates self.eval_result
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 537, in _evaluate
self._evaluator.evaluate_and_export())
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 913, in evaluate_and_export
hooks=self._eval_spec.hooks)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 469, in evaluate
name=name)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 511, in _actual_eval
return _evaluate()
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 500, in _evaluate
output_dir=self.eval_dir(name))
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1537, in _evaluate_run
config=self._session_config)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/evaluation.py", line 271, in _evaluate_once
session_creator=session_creator, hooks=hooks) as session:
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 934, in __init__
stop_grace_period_secs=stop_grace_period_secs)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 648, in __init__
self._sess = _RecoverableSession(self._coordinated_creator)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1122, in __init__
_WrappedSession.__init__(self, self._create_session())
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1127, in _create_session
return self._sess_creator.create_session()
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 805, in create_session
self.tf_sess = self._session_creator.create_session()
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 571, in create_session
init_fn=self._scaffold.init_fn)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/session_manager.py", line 281, in prepare_session
config=config)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/session_manager.py", line 195, in _restore_checkpoint
saver.restore(sess, checkpoint_filename_with_path)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1292, in restore
err, "a Variable name or other graph key that is missing")
tensorflow.python.framework.errors_impl.NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
Key seq2seq/decoder/dense/bias_2 not found in checkpoint
[[node save/RestoreV2 (defined at /home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py:1537) ]]
[[node save/RestoreV2 (defined at /home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py:1537) ]]
Caused by op 'save/RestoreV2', defined at:
File "opennmt/bin/main.py", line 201, in <module>
main()
File "opennmt/bin/main.py", line 172, in main
runner.train_and_evaluate(checkpoint_path=args.checkpoint_path)
File "/home/ari/tf/onmt-120/OpenNMT-tf/opennmt/runner.py", line 293, in train_and_evaluate
result = tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 471, in train_and_evaluate
return executor.run()
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 611, in run
return self.run_local()
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 712, in run_local
saving_listeners=saving_listeners)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 358, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1124, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1158, in _train_model_default
saving_listeners)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1407, in _train_with_estimator_spec
_, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 676, in run
run_metadata=run_metadata)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1171, in run
run_metadata=run_metadata)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1255, in run
return self._sess.run(*args, **kwargs)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1335, in run
run_metadata=run_metadata))
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/basic_session_run_hooks.py", line 582, in after_run
if self._save(run_context.session, global_step):
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/basic_session_run_hooks.py", line 607, in _save
if l.after_save(session, step):
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 517, in after_save
self._evaluate(global_step_value) # updates self.eval_result
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 537, in _evaluate
self._evaluator.evaluate_and_export())
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 913, in evaluate_and_export
hooks=self._eval_spec.hooks)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 469, in evaluate
name=name)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 511, in _actual_eval
return _evaluate()
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 500, in _evaluate
output_dir=self.eval_dir(name))
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1537, in _evaluate_run
config=self._session_config)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/evaluation.py", line 271, in _evaluate_once
session_creator=session_creator, hooks=hooks) as session:
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 934, in __init__
stop_grace_period_secs=stop_grace_period_secs)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 648, in __init__
self._sess = _RecoverableSession(self._coordinated_creator)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1122, in __init__
_WrappedSession.__init__(self, self._create_session())
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1127, in _create_session
return self._sess_creator.create_session()
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 805, in create_session
self.tf_sess = self._session_creator.create_session()
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 562, in create_session
self._scaffold.finalize()
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 217, in finalize
self._saver = training_saver._get_saver_or_default() # pylint: disable=protected-access
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 604, in _get_saver_or_default
saver = Saver(sharded=True, allow_empty=True)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 832, in __init__
self.build()
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 844, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 881, in _build
build_save=build_save, build_restore=build_restore)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 507, in _build_internal
restore_sequentially, reshape)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 385, in _AddShardedRestoreOps
name="restore_shard"))
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 332, in _AddRestoreOps
restore_sequentially)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 580, in bulk_restore
return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1572, in restore_v2
name=name)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
op_def=op_def)
File "/home/ari/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1801, in __init__
self._traceback = tf_stack.extract_stack()
NotFoundError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
Key seq2seq/decoder/dense/bias_2 not found in checkpoint
[[node save/RestoreV2 (defined at /home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py:1537) ]]
[[node save/RestoreV2 (defined at /home/ari/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py:1537) ]]
```
_Originally posted by @Dagamies in https://github.com/OpenNMT/OpenNMT-tf/issues/359#issuecomment-469378670_
| 2019-03-05T10:59:50 |
||
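To make the failure above concrete: the bridge and position-encoding layers were rebuilding their variables on every call, so the evaluation graph ended up with freshly numbered variables (such as `seq2seq/decoder/dense/bias_2`) that the checkpoint written during training never contained. The class below is an invented reduction of the pattern the patch introduces, assuming TF 1.x variable scopes: build at most once, and skip building entirely when the current scope is marked for reuse.
```python
import tensorflow as tf  # TF 1.x

class DenseBridgeSketch(object):
    """Invented reduction of the fix: create variables at most once."""

    def __init__(self, units):
        self.units = units
        self.linear = None

    def build(self, depth):
        # Creates "dense/kernel" and "dense/bias"; building again in the same
        # graph would create a second, differently named set of variables that
        # no checkpoint contains.
        self.linear = tf.layers.Dense(self.units)
        self.linear.build([None, depth])

    def __call__(self, encoder_state):
        if not tf.get_variable_scope().reuse:  # the guard added by the patch
            self.build(encoder_state.shape[-1].value)
        return self.linear(encoder_state)
```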
OpenNMT/OpenNMT-tf | 367 | OpenNMT__OpenNMT-tf-367 | [
"363"
] | ea72cb39ede791a356419b7231658d48f4bbd8eb | diff --git a/opennmt/inputters/inputter.py b/opennmt/inputters/inputter.py
--- a/opennmt/inputters/inputter.py
+++ b/opennmt/inputters/inputter.py
@@ -454,6 +454,7 @@ def build(self, input_shape=None):
or (isinstance(attr, tf.keras.layers.Layer) and attr.variables)):
for inputter in others:
setattr(inputter, name, attr)
+ inputter.built = True
else:
for inputter, scope in zip(self.inputters, self._get_names()):
with compat.tf_compat(v1="variable_scope")(scope):
| Sharing Embedding meets ERROR!
When I set `share_embeddings=EmbeddingsSharingLevel.ALL` in the Transformer model, the trained model cannot be used for inference correctly, e.g. the model outputs several identical tokens for each input query. There may be a bug behind this parameter. I also tried `share_embeddings=EmbeddingsSharingLevel.NONE`, and that works fine.
my env:
gpu: k40
os: redhat
cuda: 9.0
opennmt-tf version: v1.20.1 and the current master.
| Did you make sure to train with a joint vocabulary? If yes, what does the training loss look like with `share_embeddings=EmbeddingsSharingLevel.ALL`?
Yes. My task is dialogue, so I use the same vocabulary for the input and the target. The loss curve looks fine, but the problem only happens during inference.
I suspect the embedding sharing has a bug. Please help fix it.
The embedding-sharing Transformer I used is just config/models/transformer_share_embeddings.py.
@guillaumekln
Yeah looks like there is a bug. Thanks for testing and reporting, that's helpful. | 2019-03-05T11:35:43 |
|
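For readers unfamiliar with the option under discussion: with a joint vocabulary, `EmbeddingsSharingLevel.ALL` ties the encoder lookup, the decoder lookup, and the output projection (used transposed) to a single matrix. The snippet below is a minimal sketch of that weight tying under assumed sizes, not OpenNMT-tf's actual implementation; the regression fixed above was in how the shared layers were wired, not in the idea itself.
```python
import tensorflow as tf  # TF 1.x

vocab_size, depth = 32000, 512  # assumed sizes
embedding = tf.get_variable("w_embs", [vocab_size, depth])  # the one shared matrix

def embed(ids):
    # Used for both encoder and decoder inputs when the sharing level is ALL.
    return tf.nn.embedding_lookup(embedding, ids)

def project(decoder_outputs):
    # Output logits reuse the same matrix, transposed: [..., depth] -> [..., vocab_size].
    return tf.matmul(decoder_outputs, embedding, transpose_b=True)
```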
OpenNMT/OpenNMT-tf | 372 | OpenNMT__OpenNMT-tf-372 | [
"371"
] | 64f59f189e6f1a490277fb006d446df23d03037c | diff --git a/opennmt/models/language_model.py b/opennmt/models/language_model.py
--- a/opennmt/models/language_model.py
+++ b/opennmt/models/language_model.py
@@ -48,11 +48,7 @@ def auto_config(self, num_devices=1):
}
})
- def _call(self, features, labels, params, mode):
- training = mode == tf.estimator.ModeKeys.TRAIN
- outputs, predictions = None, None
-
- # Initialize input and output layers.
+ def _build(self):
self.examples_inputter.build()
vocab_size = self.examples_inputter.vocabulary_size
output_layer = None
@@ -64,6 +60,10 @@ def _call(self, features, labels, params, mode):
dtype=self.examples_inputter.dtype)
self.decoder.initialize(vocab_size=vocab_size, output_layer=output_layer)
+ def _call(self, features, labels, params, mode):
+ training = mode == tf.estimator.ModeKeys.TRAIN
+ outputs, predictions = None, None
+
ids, length = features["ids"], features["length"]
if mode != tf.estimator.ModeKeys.PREDICT:
# For training and evaluation, forward the full sequence.
diff --git a/opennmt/models/model.py b/opennmt/models/model.py
--- a/opennmt/models/model.py
+++ b/opennmt/models/model.py
@@ -9,6 +9,7 @@
from opennmt import estimator
from opennmt import inputters
+from opennmt.utils import compat
from opennmt.utils.optim import optimize_loss
@@ -82,6 +83,8 @@ def __call__(self, features, labels, params, mode, config=None): # pylint: disa
the arguments of this function.
"""
with tf.variable_scope(self.name, initializer=self._initializer(params)):
+ if not compat.reuse():
+ self._build() # Always rebuild unless the scope is marked for reuse.
return self._call(features, labels, params, mode)
def _initializer(self, params):
@@ -99,6 +102,10 @@ def _initializer(self, params):
minval=-param_init, maxval=param_init, dtype=self.dtype)
return None
+ def _build(self):
+ """Builds stateful layers."""
+ return
+
@abc.abstractmethod
def _call(self, features, labels, params, mode):
"""Creates the graph.
diff --git a/opennmt/models/sequence_to_sequence.py b/opennmt/models/sequence_to_sequence.py
--- a/opennmt/models/sequence_to_sequence.py
+++ b/opennmt/models/sequence_to_sequence.py
@@ -135,6 +135,7 @@ def __init__(self,
self.encoder = encoder
self.decoder = decoder
self.share_embeddings = share_embeddings
+ self.output_layer = None
def auto_config(self, num_devices=1):
config = super(SequenceToSequence, self).auto_config(num_devices=num_devices)
@@ -153,9 +154,19 @@ def auto_config(self, num_devices=1):
}
})
+ def _build(self):
+ self.examples_inputter.build()
+ if EmbeddingsSharingLevel.share_target_embeddings(self.share_embeddings):
+ self.output_layer = layers.Dense(
+ self.labels_inputter.vocabulary_size,
+ weight=self.labels_inputter.embedding,
+ transpose=True,
+ dtype=self.labels_inputter.vocabulary_size.dtype)
+ with tf.name_scope(tf.get_variable_scope().name + "/"):
+ self.output_layer.build([None, self.decoder.output_size])
+
def _call(self, features, labels, params, mode):
training = mode == tf.estimator.ModeKeys.TRAIN
- self.examples_inputter.build()
features_length = self.features_inputter.get_length(features)
source_inputs = self.features_inputter.make_inputs(features, training=training)
@@ -167,16 +178,6 @@ def _call(self, features, labels, params, mode):
target_vocab_size = self.labels_inputter.vocabulary_size
target_dtype = self.labels_inputter.dtype
- output_layer = None
- if EmbeddingsSharingLevel.share_target_embeddings(self.share_embeddings):
- output_layer = layers.Dense(
- target_vocab_size,
- weight=self.labels_inputter.embedding,
- transpose=True,
- dtype=target_dtype)
- with tf.name_scope(tf.get_variable_scope().name + "/"):
- output_layer.build([None, self.decoder.output_size])
-
if labels is not None:
target_inputs = self.labels_inputter.make_inputs(labels, training=training)
with tf.variable_scope("decoder"):
@@ -195,7 +196,7 @@ def _call(self, features, labels, params, mode):
initial_state=encoder_state,
sampling_probability=sampling_probability,
embedding=self.labels_inputter.embedding,
- output_layer=output_layer,
+ output_layer=self.output_layer,
mode=mode,
memory=encoder_outputs,
memory_sequence_length=encoder_sequence_length,
@@ -228,7 +229,7 @@ def _call(self, features, labels, params, mode):
end_token,
vocab_size=target_vocab_size,
initial_state=encoder_state,
- output_layer=output_layer,
+ output_layer=self.output_layer,
maximum_iterations=maximum_iterations,
minimum_length=minimum_length,
mode=mode,
@@ -247,7 +248,7 @@ def _call(self, features, labels, params, mode):
end_token,
vocab_size=target_vocab_size,
initial_state=encoder_state,
- output_layer=output_layer,
+ output_layer=self.output_layer,
beam_width=beam_width,
length_penalty=length_penalty,
maximum_iterations=maximum_iterations,
| GPT-2 training has problem: ValueError: Duplicate node name in graph: 'lm/w_embs'
env: opennmt-tf=master, cuda=9.0, gpu=4
model: gpt_2.py
error info:
```python
INFO:tensorflow:Using parameters:
data:
eval_features_file: dev_mini.txt
train_features_file: train_full.txt
vocabulary: tokens_v3.gpt2
eval:
batch_size: 32
eval_delay: 1200
exporter: last
exporters: last
external_evaluators: null
num_threads: 1
prefetch_buffer_size: 1
save_eval_predictions: false
infer:
batch_size: 32
bucket_width: 5
n_best: 1
num_threads: 1
prefetch_buffer_size: 1
with_scores: false
model_dir: run
params:
average_loss_in_time: true
beam_width: 4
decay_params:
max_step: 1000000
model_dim: 512
warmup_steps: 8000
decay_type: noam_decay_v2
gradients_accum: 1
label_smoothing: 0.1
learning_rate: 2.0
length_penalty: 0.6
maximum_iterations: 50
minimum_decoding_length: 0
optimizer: LazyAdamOptimizer
optimizer_params:
beta1: 0.9
beta2: 0.998
weight_decay: 0.01
score:
batch_size: 64
train:
average_last_checkpoints: 8
batch_size: 4096
batch_type: tokens
bucket_width: 8
effective_batch_size: null
keep_checkpoint_max: 8
maximum_features_length: 256
maximum_labels_length: 256
num_threads: 8
prefetch_buffer_size: 4
sample_buffer_size: 100000
save_checkpoints_secs: null
save_checkpoints_steps: 5000
save_summary_steps: 100
train_steps: 1000000
2019-03-06 23:09:26.843446: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-03-06 23:09:27.382716: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: Tesla K40c major: 3 minor: 5 memoryClockRate(GHz): 0.745
pciBusID: 0000:02:00.0
totalMemory: 11.17GiB freeMemory: 11.09GiB
2019-03-06 23:09:27.645287: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 1 with properties:
name: Tesla K40c major: 3 minor: 5 memoryClockRate(GHz): 0.745
pciBusID: 0000:04:00.0
totalMemory: 11.17GiB freeMemory: 11.09GiB
2019-03-06 23:09:27.910482: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 2 with properties:
name: Tesla K40c major: 3 minor: 5 memoryClockRate(GHz): 0.745
pciBusID: 0000:83:00.0
totalMemory: 11.17GiB freeMemory: 11.09GiB
2019-03-06 23:09:28.184100: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 3 with properties:
name: Tesla K40c major: 3 minor: 5 memoryClockRate(GHz): 0.745
pciBusID: 0000:84:00.0
totalMemory: 11.17GiB freeMemory: 11.09GiB
2019-03-06 23:09:28.184971: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1, 2, 3
2019-03-06 23:09:35.080283: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-03-06 23:09:35.080335: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0 1 2 3
2019-03-06 23:09:35.080348: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N Y N N
2019-03-06 23:09:35.080356: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1: Y N N N
2019-03-06 23:09:35.080364: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 2: N N N Y
2019-03-06 23:09:35.080372: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 3: N N Y N
2019-03-06 23:09:35.081500: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/device:GPU:0 with 10747 MB memory) -> physical GPU (device: 0, name: Tesla K40c, pci bus id: 0000:02:00.0, compute capability: 3.5)
2019-03-06 23:09:35.082024: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/device:GPU:1 with 10747 MB memory) -> physical GPU (device: 1, name: Tesla K40c, pci bus id: 0000:04:00.0, compute capability: 3.5)
2019-03-06 23:09:35.082351: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/device:GPU:2 with 10747 MB memory) -> physical GPU (device: 2, name: Tesla K40c, pci bus id: 0000:83:00.0, compute capability: 3.5)
2019-03-06 23:09:35.082622: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/device:GPU:3 with 10747 MB memory) -> physical GPU (device: 3, name: Tesla K40c, pci bus id: 0000:84:00.0, compute capability: 3.5)
INFO:tensorflow:Using config: {'_model_dir': '/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/code/data/gpt2/run', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 5000, '_save_checkpoints_secs': None, '_session_config': gpu_options {
}
allow_soft_placement: true
graph_options {
rewrite_options {
layout_optimizer: OFF
}
}
, '_keep_checkpoint_max': 8, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f079dd36cc0>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 5000 or save_checkpoints_secs None.
INFO:tensorflow:Calling model_fn.
Traceback (most recent call last):
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1628, in _create_c_op
c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Duplicate node name in graph: 'lm/w_embs'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/code/../OpenNMT-tf-master/opennmt/bin/main.py", line 201, in <module>
main()
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/code/../OpenNMT-tf-master/opennmt/bin/main.py", line 172, in main
runner.train_and_evaluate(checkpoint_path=args.checkpoint_path)
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/OpenNMT-tf-master/opennmt/runner.py", line 295, in train_and_evaluate
result = tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 471, in train_and_evaluate
return executor.run()
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 610, in run
return self.run_local()
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 711, in run_local
saving_listeners=saving_listeners)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 354, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1207, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1237, in _train_model_default
features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1195, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/OpenNMT-tf-master/opennmt/estimator.py", line 162, in _fn
_loss_op, local_model, features_shards, labels_shards, params, mode)
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/OpenNMT-tf-master/opennmt/utils/parallel.py", line 151, in __call__
outputs.append(funs[i](*args[i], **kwargs[i]))
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/OpenNMT-tf-master/opennmt/estimator.py", line 231, in _loss_op
logits, _ = model(features, labels, params, mode)
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/OpenNMT-tf-master/opennmt/models/model.py", line 85, in __call__
return self._call(features, labels, params, mode)
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/OpenNMT-tf-master/opennmt/models/language_model.py", line 56, in _call
self.examples_inputter.build()
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/OpenNMT-tf-master/opennmt/inputters/text_inputter.py", line 414, in build
name=compat.name_from_variable_scope("w_embs"))
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 183, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 146, in _variable_v1_call
aggregation=aggregation)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 125, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 2444, in default_variable_creator
expected_shape=expected_shape, import_scope=import_scope)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 187, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 1329, in __init__
constraint=constraint)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 1443, in _init_from_args
name=name)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/state_ops.py", line 77, in variable_op_v2
shared_name=shared_name)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/gen_state_ops.py", line 1357, in variable_v2
shared_name=shared_name, name=name)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1792, in __init__
control_input_ops)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1631, in _create_c_op
raise ValueError(str(e))
ValueError: Duplicate node name in graph: 'lm/w_embs'
```
| Thanks for reporting.
Does it work on a single GPU?
Yep, multi-GPU is currently broken. Will push a fix shortly. | 2019-03-06T16:01:01 |
|
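The "Duplicate node name" error is characteristic of in-graph replication: the model function runs once per GPU inside a single graph, so any layer built outside a reusing variable scope is instantiated twice. Below is a minimal sketch of the tower pattern the fix restores (`model_fn` and the shard names are placeholders, not the library's API):
```python
import tensorflow as tf  # TF 1.x

def replicated_loss(model_fn, feature_shards):
    """Call model_fn once per device, reusing variables after the first tower."""
    losses = []
    for i, shard in enumerate(feature_shards):
        with tf.device("/gpu:%d" % i), \
             tf.variable_scope(tf.get_variable_scope(), reuse=(i > 0)):
            losses.append(model_fn(shard))
    return tf.add_n(losses) / len(losses)
```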
OpenNMT/OpenNMT-tf | 375 | OpenNMT__OpenNMT-tf-375 | [
"374"
] | 6bc37f199f10635f30182c5ff670b7f494dd3d6c | diff --git a/opennmt/estimator.py b/opennmt/estimator.py
--- a/opennmt/estimator.py
+++ b/opennmt/estimator.py
@@ -8,11 +8,13 @@
from opennmt.utils import parallel
-def make_serving_input_fn(model):
+def make_serving_input_fn(model, metadata=None):
"""Returns the serving input function.
Args:
model: An initialized :class:`opennmt.models.model.Model` instance.
+ metadata: Optional data configuration (to be removed). Some inputters
+ currently require to peek into some data files to infer input sizes.
Returns:
A callable that returns a ``tf.estimator.export.ServingInputReceiver``.
@@ -20,6 +22,11 @@ def make_serving_input_fn(model):
def _fn():
local_model = copy.deepcopy(model)
+ # This is a hack for SequenceRecordInputter that currently infers the input
+ # depth from the data files.
+ # TODO: This function should not require the training data.
+ if metadata is not None and "train_features_file" in metadata:
+ _ = local_model.features_inputter.make_dataset(metadata["train_features_file"])
return local_model.features_inputter.get_serving_input_receiver()
return _fn
diff --git a/opennmt/runner.py b/opennmt/runner.py
--- a/opennmt/runner.py
+++ b/opennmt/runner.py
@@ -448,16 +448,9 @@ def export(self, checkpoint_path=None, export_dir_base=None):
# with the behavior of tf.estimator.Exporter.
kwargs["strip_default_attrs"] = True
- # This is a hack for SequenceRecordInputter that currently infers the input
- # depth from the data files.
- # TODO: This method should not require the training data.
- data_config = self._config["data"]
- if "train_features_file" in data_config:
- _ = model.features_inputter.make_dataset(data_config["train_features_file"])
-
return export_fn(
export_dir_base,
- estimator_util.make_serving_input_fn(self._model),
+ estimator_util.make_serving_input_fn(self._model, metadata=self._config["data"]),
assets_extra=self._get_model_assets(),
checkpoint_path=checkpoint_path,
**kwargs)
| Error when exporting multi feature model
There seems to be a small issue with exporting a model that has multiple input features:
```
Traceback (most recent call last):
File "opennmt/bin/main.py", line 201, in <module>
main()
File "opennmt/bin/main.py", line 190, in main
export_dir_base=args.export_dir_base)
File "/home/ari/tf/Onmt-1211/OpenNMT-tf/opennmt/runner.py", line 456, in export
_ = model.features_inputter.make_dataset(data_config["train_features_file"])
NameError: name 'model' is not defined
```
| Thanks for reporting. | 2019-03-07T11:16:55 |
|
OpenNMT/OpenNMT-tf | 382 | OpenNMT__OpenNMT-tf-382 | [
"377"
] | 6a34f95e701d54c426fe4b81e2940b35da513355 | diff --git a/opennmt/models/language_model.py b/opennmt/models/language_model.py
--- a/opennmt/models/language_model.py
+++ b/opennmt/models/language_model.py
@@ -87,16 +87,17 @@ def _call(self, features, labels, params, mode):
name=self.name + "/") # Force the name scope.
# Iteratively decode from the last decoder state.
- sampled_ids, sampled_length, _ = decoder_util.greedy_decode(
- self._decode,
- tf.squeeze(start_ids, 1),
- constants.END_OF_SENTENCE_ID,
- decode_length=params.get("maximum_iterations", 250),
- state=state,
- min_decode_length=params.get("minimum_decoding_length", 0),
- last_step_as_input=True,
- sample_from=params.get("sampling_topk", 1),
- sample_temperature=params.get("sampling_temperature", 1))
+ with tf.variable_scope(tf.get_variable_scope(), reuse=True):
+ sampled_ids, sampled_length, _ = decoder_util.greedy_decode(
+ self._decode,
+ tf.squeeze(start_ids, 1),
+ constants.END_OF_SENTENCE_ID,
+ decode_length=params.get("maximum_iterations", 250),
+ state=state,
+ min_decode_length=params.get("minimum_decoding_length", 0),
+ last_step_as_input=True,
+ sample_from=params.get("sampling_topk", 1),
+ sample_temperature=params.get("sampling_temperature", 1))
# Build the full prediction.
full_ids = tf.concat([ids, sampled_ids], 1)
| ValueError: Duplicate node name in graph: 'lm/position_encoding/w_embs'
When I train GPT-2 model, I meet this error after the evaluation.
```
INFO:tensorflow:loss = 8.426817, step = 14800 (274.140 sec)
INFO:tensorflow:source_words/sec: 13932
INFO:tensorflow:loss = 8.261417, step = 14900 (275.013 sec)
INFO:tensorflow:source_words/sec: 13854
INFO:tensorflow:Saving checkpoints for 15000 into /mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/code/data/gpt2_12/run/model.ckpt.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2019-03-08-13:46:01
INFO:tensorflow:Graph was finalized.
2019-03-08 21:46:01.956459: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1, 2, 3
2019-03-08 21:46:01.956642: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-03-08 21:46:01.956657: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0 1 2 3
2019-03-08 21:46:01.956664: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N Y Y Y
2019-03-08 21:46:01.956671: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1: Y N Y Y
2019-03-08 21:46:01.956678: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 2: Y Y N Y
2019-03-08 21:46:01.956684: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 3: Y Y Y N
2019-03-08 21:46:01.957593: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 23005 MB memory) -> physical GPU (device: 0, name: Tesla P40, pci bus id: 0000:04:00.0, compute capability: 6.1)
2019-03-08 21:46:01.957740: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 23005 MB memory) -> physical GPU (device: 1, name: Tesla P40, pci bus id: 0000:06:00.0, compute capability: 6.1)
2019-03-08 21:46:01.957867: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 23005 MB memory) -> physical GPU (device: 2, name: Tesla P40, pci bus id: 0000:07:00.0, compute capability: 6.1)
2019-03-08 21:46:01.957976: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 23005 MB memory) -> physical GPU (device: 3, name: Tesla P40, pci bus id: 0000:0e:00.0, compute capability: 6.1)
INFO:tensorflow:Restoring parameters from /mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/code/data/gpt2_12/run/model.ckpt-15000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Finished evaluation at 2019-03-09-02:43:36
INFO:tensorflow:Saving dict for global step 15000: global_step = 15000, loss = 8.081819
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 15000: /mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/code/data/gpt2_12/run/model.ckpt-15000
INFO:tensorflow:Calling model_fn.
Traceback (most recent call last):
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1628, in _create_c_op
c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Duplicate node name in graph: 'lm/position_encoding/w_embs'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/code/../OpenNMT-tf-master/opennmt/bin/main.py", line 201, in <module>
main()
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/code/../OpenNMT-tf-master/opennmt/bin/main.py", line 172, in main
runner.train_and_evaluate(checkpoint_path=args.checkpoint_path)
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/OpenNMT-tf-master/opennmt/runner.py", line 295, in train_and_evaluate
result = tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 471, in train_and_evaluate
return executor.run()
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 610, in run
return self.run_local()
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 711, in run_local
saving_listeners=saving_listeners)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 354, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1207, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1241, in _train_model_default
saving_listeners)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1471, in _train_with_estimator_spec
_, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 671, in run
run_metadata=run_metadata)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1156, in run
run_metadata=run_metadata)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1255, in run
raise six.reraise(*original_exc_info)
File "/usr/local/python3/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1240, in run
return self._sess.run(*args, **kwargs)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1320, in run
run_metadata=run_metadata))
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/training/basic_session_run_hooks.py", line 582, in after_run
if self._save(run_context.session, global_step):
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/training/basic_session_run_hooks.py", line 607, in _save
if l.after_save(session, step):
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 517, in after_save
self._evaluate(global_step_value) # updates self.eval_result
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 537, in _evaluate
self._evaluator.evaluate_and_export())
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 924, in evaluate_and_export
is_the_final_export)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 957, in _export_eval_result
is_the_final_export=is_the_final_export))
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/estimator/exporter.py", line 472, in export
is_the_final_export)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/estimator/exporter.py", line 126, in export
strip_default_attrs=self._strip_default_attrs)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 663, in export_savedmodel
mode=model_fn_lib.ModeKeys.PREDICT)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 789, in _export_saved_model_for_mode
strip_default_attrs=strip_default_attrs)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 907, in _export_all_saved_models
mode=model_fn_lib.ModeKeys.PREDICT)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 984, in _add_meta_graph_for_mode
config=self.config)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1195, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/OpenNMT-tf-master/opennmt/estimator.py", line 208, in _fn
_, predictions = local_model(features, labels, params, mode)
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/OpenNMT-tf-master/opennmt/models/model.py", line 88, in __call__
return self._call(features, labels, params, mode)
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/OpenNMT-tf-master/opennmt/models/language_model.py", line 99, in _call
sample_temperature=params.get("sampling_temperature", 1))
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/OpenNMT-tf-master/opennmt/decoders/decoder.py", line 784, in greedy_decode
parallel_iterations=1)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 3291, in while_loop
return_same_structure)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 3004, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2939, in _BuildLoop
body_result = body(*packed_vars_for_body)
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/OpenNMT-tf-master/opennmt/decoders/decoder.py", line 727, in _body
logits, state = symbols_to_logits_fn(inputs, step, state)
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/OpenNMT-tf-master/opennmt/models/language_model.py", line 113, in _decode
logits, state, _ = self.decoder(inputs, length_or_step, state=state, training=training)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 757, in __call__
outputs = self.call(inputs, *args, **kwargs)
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/OpenNMT-tf-master/opennmt/decoders/decoder.py", line 593, in call
training=training)
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/OpenNMT-tf-master/opennmt/decoders/self_attention_decoder.py", line 404, in step
training=training)
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/OpenNMT-tf-master/opennmt/decoders/self_attention_decoder.py", line 338, in _run
inputs = self.position_encoder(inputs, position=step + 1 if step is not None else None)
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/OpenNMT-tf-master/opennmt/layers/position.py", line 72, in __call__
self.build(inputs.shape)
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/OpenNMT-tf-master/opennmt/layers/position.py", line 155, in build
name=compat.name_from_variable_scope("position_encoding/w_embs"))
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 183, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 146, in _variable_v1_call
aggregation=aggregation)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 125, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 2444, in default_variable_creator
expected_shape=expected_shape, import_scope=import_scope)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 187, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 1329, in __init__
constraint=constraint)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 1443, in _init_from_args
name=name)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/ops/state_ops.py", line 77, in variable_op_v2
shared_name=shared_name)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/ops/gen_state_ops.py", line 1357, in variable_v2
shared_name=shared_name, name=name)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1792, in __init__
control_input_ops)
File "/usr/local/python3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1631, in _create_c_op
raise ValueError(str(e))
ValueError: Duplicate node name in graph: 'lm/position_encoding/w_embs'
```
| Could you try disabling the model export? There could be an incompatibility with this model currently.
```yaml
eval:
exporters: null
```
@guillaumekln I will try it.
Actually, inference is currently broken with this model. Sorry, it's a very new and experimental model, so thank you for testing and reporting.
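For readers hitting the same "Duplicate node name" error: the committed fix wraps the decoding loop in a variable scope reopened with `reuse=True`. A minimal standalone TF1 sketch of that pattern (the scope and variable names below are made up for illustration):
```python
import tensorflow as tf  # TF1-style graph mode

with tf.variable_scope("lm"):
    w = tf.get_variable("w_embs", shape=[10, 4])

# Reopening the scope with reuse=True looks up the existing variable instead
# of trying to create a second 'lm/w_embs' node in the same graph.
with tf.variable_scope("lm", reuse=True):
    w_again = tf.get_variable("w_embs", shape=[10, 4])

assert w is w_again
```
| 2019-03-11T10:10:37 |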
|
OpenNMT/OpenNMT-tf | 383 | OpenNMT__OpenNMT-tf-383 | [
"380"
] | 1020e975c540cfec26722977ab567abc01e9fc75 | diff --git a/opennmt/models/sequence_to_sequence.py b/opennmt/models/sequence_to_sequence.py
--- a/opennmt/models/sequence_to_sequence.py
+++ b/opennmt/models/sequence_to_sequence.py
@@ -10,7 +10,7 @@
from opennmt.models.model import Model
from opennmt.utils import compat
from opennmt.utils.losses import cross_entropy_sequence_loss
-from opennmt.utils.misc import print_bytes, format_translation_output, merge_dict
+from opennmt.utils.misc import print_bytes, format_translation_output, merge_dict, shape_list
from opennmt.decoders.decoder import get_sampling_probability
@@ -277,7 +277,9 @@ def _call(self, features, labels, params, mode):
# Merge batch and beam dimensions.
original_shape = tf.shape(target_tokens)
target_tokens = tf.reshape(target_tokens, [-1, original_shape[-1]])
- attention = tf.reshape(alignment, [-1, tf.shape(alignment)[2], tf.shape(alignment)[3]])
+ align_shape = shape_list(alignment)
+ attention = tf.reshape(
+ alignment, [align_shape[0] * align_shape[1], align_shape[2], align_shape[3]])
# We don't have attention for </s> but ensure that the attention time dimension matches
# the tokens time dimension.
attention = reducer.align_in_time(attention, tf.shape(target_tokens)[1])
| Error while serving nmtsmall model
Hi
I trained an NMTSmall model with OpenNMT-tf 1.21.3, then upgraded to 1.21.4 to export the model.
When serving the exported model, I get this error:
```
Traceback (most recent call last):
File "D:\GNMT\venv\lib\site-packages\grpc\beta\_client_adaptations.py", line 95, in result
return self._future.result(timeout=timeout)
File "D:\GNMT\venv\lib\site-packages\grpc\_channel.py", line 276, in result
raise self
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Reshape cannot infer the missing input size for an empty tensor unless all specified input sizes are non-zero
[[Node: seq2seq/Reshape_1 = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _class=["loc:@seq2seq/cond/strided_slice/Switch"], _device="/job:localhost/replica:0/task:0/device:GPU:0"](seq2seq/decoder_1/strided_slice_17, seq2seq/Reshape_1/shape)]]
[[Node: seq2seq/decoder_1/while/concat_5/_225 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1187_seq2seq/decoder_1/while/concat_5", tensor_type=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:CPU:0"](^_cloopseq2seq/decoder_1/while/decoder/decoder/attention_wrapper/assert_equal/Assert/Assert/data_0/_39)]]"
debug_error_string = "{"created":"@1552276258.713000000","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1095,"grpc_message":"Reshape cannot infer the missing input size for an empty tensor unless all specified input sizes are non-zero\n\t [[Node: seq2seq/Reshape_1 = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _class=["loc:@seq2seq/cond/strided_slice/Switch"], _device="/job:localhost/replica:0/task:0/device:GPU:0"](seq2seq/decoder_1/strided_slice_17, seq2seq/Reshape_1/shape)]]\n\t [[Node: seq2seq/decoder_1/while/concat_5/_225 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1187_seq2seq/decoder_1/while/concat_5", tensor_type=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:CPU:0"](^_cloopseq2seq/decoder_1/while/decoder/decoder/attention_wrapper/assert_equal/Assert/Assert/data_0/_39)]]","grpc_status":3}"
>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python35\Lib\unittest\case.py", line 59, in testPartExecutor
yield
File "C:\Python35\Lib\unittest\case.py", line 605, in run
testMethod()
File "D:\GNMT\tests\onmt_rpc_client_test.py", line 78, in test_request_from_file
address, result = client.request(line.strip("\n").split(" "))
File "D:\GNMT\address\client\onmt_rpc_client.py", line 11, in request
result = self._parse_translation_result(future.result())
File "D:\GNMT\venv\lib\site-packages\grpc\beta\_client_adaptations.py", line 97, in result
raise _abortion_error(rpc_error_call)
grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="Reshape cannot infer the missing input size for an empty tensor unless all specified input sizes are non-zero
[[Node: seq2seq/Reshape_1 = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _class=["loc:@seq2seq/cond/strided_slice/Switch"], _device="/job:localhost/replica:0/task:0/device:GPU:0"](seq2seq/decoder_1/strided_slice_17, seq2seq/Reshape_1/shape)]]
[[Node: seq2seq/decoder_1/while/concat_5/_225 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1187_seq2seq/decoder_1/while/concat_5", tensor_type=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:CPU:0"](^_cloopseq2seq/decoder_1/while/decoder/decoder/attention_wrapper/assert_equal/Assert/Assert/data_0/_39)]]")
===============================================================================
[ERROR]
Traceback (most recent call last):
Failure: builtins.tuple: (<class 'grpc.framework.interfaces.face.face.AbortionError'>, AbortionError(), <traceback object at 0x000000001C31D748>)
onmt_rpc_client_test.TestOpenNMTPredictRPCClient.test_request_from_file
-------------------------------------------------------------------------------
Ran 1 tests in 45.879s
FAILED (errors=1)
Process finished with exit code 1
```
| Hi,
Can you post the configuration file that was used when exporting the model?
Hi
Configuration:
```yaml
params:
  optimizer: GradientDescentOptimizer
  learning_rate: 1.0
  param_init: 0.1
  clip_gradients: 5.0
  decay_type: exponential_decay
  decay_rate: 0.7
  decay_steps: 50000
  start_decay_steps: 500000
  beam_width: 5
  maximum_iterations: 250
  replace_unknown_target: true
train:
  batch_size: 64
  bucket_width: 5
  save_checkpoints_steps: 10000
  save_summary_steps: 1000
  train_steps: 1000000
  maximum_features_length: 30
  maximum_labels_length: 30
  sample_buffer_size: -1
eval:
  batch_size: 32
  num_threads: 4
  eval_delay: 36000
  save_eval_predictions: true
  external_evaluators: BLEU
  exporters: last
infer:
  batch_size: 32
  num_threads: 1
  n_best: 1
```
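For reference, the committed fix (see the patch above) merges the batch and beam dimensions explicitly instead of using `-1`, which cannot be inferred when the alignment tensor is empty. A minimal standalone sketch, with a `shape_list` helper written here for illustration (the real one lives in `opennmt.utils.misc`):
```python
import tensorflow as tf

def shape_list(x):
    """Return the tensor dims, static where known and dynamic otherwise."""
    static = x.shape.as_list()
    dynamic = tf.shape(x)
    return [dynamic[i] if dim is None else dim for i, dim in enumerate(static)]

# batch x beam x target_length x source_length, with an empty target dimension
alignment = tf.zeros([2, 4, 0, 7])
align_shape = shape_list(alignment)
# Computing batch * beam explicitly avoids the "Reshape cannot infer the
# missing input size for an empty tensor" error that a [-1, ...] shape hits.
attention = tf.reshape(
    alignment, [align_shape[0] * align_shape[1], align_shape[2], align_shape[3]])
```
| 2019-03-11T11:13:40 |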
|
OpenNMT/OpenNMT-tf | 393 | OpenNMT__OpenNMT-tf-393 | [
"391"
] | 60b8ababfa4d8810be5140fc96535bed91ca2015 | diff --git a/opennmt/models/sequence_to_sequence.py b/opennmt/models/sequence_to_sequence.py
--- a/opennmt/models/sequence_to_sequence.py
+++ b/opennmt/models/sequence_to_sequence.py
@@ -161,7 +161,7 @@ def _build(self):
self.labels_inputter.vocabulary_size,
weight=self.labels_inputter.embedding,
transpose=True,
- dtype=self.labels_inputter.vocabulary_size.dtype)
+ dtype=self.labels_inputter.dtype)
with tf.name_scope(tf.get_variable_scope().name + "/"):
self.output_layer.build([None, self.decoder.output_size])
| AttributeError: 'int' object has no attribute 'dtype'

| Right, it should be `self.labels_inputter.dtype`. Feel free to send a PR. | 2019-03-19T09:36:57 |
|
OpenNMT/OpenNMT-tf | 420 | OpenNMT__OpenNMT-tf-420 | [
"414"
] | ac267850809003748123d083f0214d1d364401a1 | diff --git a/opennmt/utils/hooks.py b/opennmt/utils/hooks.py
--- a/opennmt/utils/hooks.py
+++ b/opennmt/utils/hooks.py
@@ -271,36 +271,28 @@ class LoadWeightsFromCheckpointHook(_SESSION_RUN_HOOK):
def __init__(self, checkpoint_path):
self.checkpoint_path = checkpoint_path
+ self.assign_pairs = []
def begin(self):
- var_list = tf.train.list_variables(self.checkpoint_path)
-
- names = []
- for name, _ in var_list:
+ names = set()
+ for name, _ in tf.train.list_variables(self.checkpoint_path):
if (not name.startswith("optim")
and not name.startswith("global_step")
and not name.startswith("words_per_sec")):
- names.append(name)
+ names.add(name)
- self.values = {}
reader = tf.train.load_checkpoint(self.checkpoint_path)
- for name in names:
- self.values[name] = reader.get_tensor(name)
-
- tf_vars = []
- current_scope = tf.get_variable_scope()
- reuse = tf.AUTO_REUSE if hasattr(tf, "AUTO_REUSE") else True
- with tf.variable_scope(current_scope, reuse=reuse):
- for name, value in six.iteritems(self.values):
- tf_vars.append(tf.get_variable(name, shape=value.shape, dtype=tf.as_dtype(value.dtype)))
-
- self.placeholders = [tf.placeholder(v.dtype, shape=v.shape) for v in tf_vars]
- self.assign_ops = [tf.assign(v, p) for (v, p) in zip(tf_vars, self.placeholders)]
+ variables = tf.trainable_variables()
+ for variable in variables:
+ name = variable.op.name
+ if name in names:
+ value = reader.get_tensor(name)
+ self.assign_pairs.append((variable, value))
def after_create_session(self, session, coord):
_ = coord
- for p, op, value in zip(self.placeholders, self.assign_ops, six.itervalues(self.values)):
- session.run(op, {p: value})
+ for variable, value in self.assign_pairs:
+ variable.load(value, session=session)
class VariablesInitializerHook(_SESSION_RUN_HOOK):
| ValueError: Duplicate node name in graph: 'transformer/shared_embeddings/w_embs'
I met this problem when running:
```
model=transformer_shared_embedding.py
$onmt_main train \
--model ${model} \
--config ${root}/config/transformer_gpu2.yml --auto_config \
--num_gpus 1 \
--checkpoint_path pretrain/model.ckpt-150000 \
--data_dir ${root}
```
```
Traceback (most recent call last):
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1628, in _create_c_op
c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Duplicate node name in graph: 'transformer/shared_embeddings/w_embs'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/code/../OpenNMT-tf-master/opennmt/bin/main.py", line 201, in <module>
main()
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/code/../OpenNMT-tf-master/opennmt/bin/main.py", line 174, in main
runner.train(checkpoint_path=args.checkpoint_path)
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/OpenNMT-tf-master/opennmt/runner.py", line 316, in train
train_spec.input_fn, hooks=train_spec.hooks, max_steps=train_spec.max_steps)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 354, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1207, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1241, in _train_model_default
saving_listeners)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1468, in _train_with_estimator_spec
log_step_count_steps=log_step_count_steps) as mon_sess:
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 504, in MonitoredTrainingSession
stop_grace_period_secs=stop_grace_period_secs)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 921, in __init__
stop_grace_period_secs=stop_grace_period_secs)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 631, in __init__
h.begin()
File "/mnt/yardcephfs/mmyard/g_wxg_td_prc/chriscxyan/OpenNMT-TF/opennmt-tf/OpenNMT-tf-master/opennmt/utils/hooks.py", line 295, in begin
tf_vars.append(tf.get_variable(name, shape=value.shape, dtype=tf.as_dtype(value.dtype)))
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 1487, in get_variable
aggregation=aggregation)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 1237, in get_variable
aggregation=aggregation)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 540, in get_variable
aggregation=aggregation)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 492, in _true_getter
aggregation=aggregation)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 922, in _get_single_variable
aggregation=aggregation)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 183, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 146, in _variable_v1_call
aggregation=aggregation)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 125, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 2444, in default_variable_creator
expected_shape=expected_shape, import_scope=import_scope)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 187, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 1329, in __init__
constraint=constraint)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 1443, in _init_from_args
name=name)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/state_ops.py", line 77, in variable_op_v2
shared_name=shared_name)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/gen_state_ops.py", line 1357, in variable_v2
shared_name=shared_name, name=name)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1792, in __init__
control_input_ops)
File "/data1/qspace/chriscxyan/anaconda2/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1631, in _create_c_op
raise ValueError(str(e))
ValueError: Duplicate node name in graph: 'transformer/shared_embeddings/w_embs'
```
| This seems to happen because of `--checkpoint_path`. Thanks for reporting.
@guillaumekln do you have any updates?
Sorry, I have a few days off. Will try to fix this ASAP.
Here is the code loading the trained weights:
https://github.com/OpenNMT/OpenNMT-tf/blob/master/opennmt/utils/hooks.py#L269
I think this should get the existing variables using `tf.trainable_variables()` instead of calling `tf.get_variable()` for each.
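A minimal sketch of that suggestion, mirroring the patch above (hook plumbing and name filtering omitted):
```python
import tensorflow as tf

def load_weights_from_checkpoint(checkpoint_path, session):
    """Assign checkpoint values to the existing trainable variables by name."""
    names = {name for name, _ in tf.train.list_variables(checkpoint_path)}
    reader = tf.train.load_checkpoint(checkpoint_path)
    for variable in tf.trainable_variables():
        if variable.op.name in names:
            # Variable.load() feeds the value through the variable's
            # initializer op, so no new nodes are added to the graph and
            # no duplicate variables are created.
            variable.load(reader.get_tensor(variable.op.name), session=session)
```
| 2019-04-29T09:56:56 |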
|
OpenNMT/OpenNMT-tf | 421 | OpenNMT__OpenNMT-tf-421 | [
"416"
] | 0383fea069a325e87ed38ba170b7ee672501d4cf | diff --git a/opennmt/inputters/inputter.py b/opennmt/inputters/inputter.py
--- a/opennmt/inputters/inputter.py
+++ b/opennmt/inputters/inputter.py
@@ -302,6 +302,12 @@ def get_leaf_inputters(self):
inputters.append(inputter)
return inputters
+ def __getattribute__(self, name):
+ if name == "built":
+ return all(inputter.built for inputter in self.inputters)
+ else:
+ return super(MultiInputter, self).__getattribute__(name)
+
def initialize(self, metadata, asset_dir=None, asset_prefix=""):
for i, inputter in enumerate(self.inputters):
inputter.initialize(metadata, asset_prefix="%s%d_" % (asset_prefix, i + 1))
| Problem in sharing embeddings
I noticed that when I use a DualTransformer with parameter sharing between encoders and the sharing level set to 'ALL', two sets of embeddings are created with these prefixes:
> **transformer//w_embs**
> **transformer/shared_embeddings/w_embs**
However, there should be a single shared embedding for everything.
| I tried to change
```python
def _get_shared_name(self):
  return ""
```
to:
```python
def _get_shared_name(self):
  return "shared_embeddings"
```
in `ParallelInputter`, but I'm getting a **Duplicate Node** error.
Do you know a quick fix for it?
The best way I found is reimplementing the `build()` function in `ExampleInputter`.
Thanks for reporting. Do you have a working fix that you can share?
I sent a pull request.
It works for DualTransformer, but I'm not sure for other use cases.
Thanks. This indeed needs a more general fix: the issue is that the `built` flag is not properly set for `MultiInputter` classes and weights are recreated when the layer is first called. I will push an appropriate fix to that.
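The committed fix (shown in the patch above) aggregates the flag over the child inputters; a condensed sketch of the idea:
```python
class MultiInputter(object):
    """Sketch: a multi inputter reports built=True once all children are built."""

    def __init__(self, inputters):
        self.inputters = inputters

    def __getattribute__(self, name):
        if name == "built":
            # Keras checks `built` before calling build(), so aggregating the
            # flag here prevents the shared embedding from being created twice.
            return all(inputter.built for inputter in self.inputters)
        return super(MultiInputter, self).__getattribute__(name)
```
| 2019-04-29T10:35:19 |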
|
OpenNMT/OpenNMT-tf | 436 | OpenNMT__OpenNMT-tf-436 | [
"435"
] | 679e5025d05da91741aafa5a002483f43e4b68cc | diff --git a/opennmt/runner.py b/opennmt/runner.py
--- a/opennmt/runner.py
+++ b/opennmt/runner.py
@@ -172,19 +172,21 @@ def is_chief(self):
return cluster_spec["task"]["type"] == "chief"
def _make_eval_prediction_hooks_fn(self):
- if (not self._config["eval"].get("save_eval_predictions", False)
- and self._config["eval"].get("external_evaluators") is None):
+ external_scorers = self._config["eval"].get("external_evaluators")
+ if not self._config["eval"].get("save_eval_predictions", False) and external_scorers is None:
return None
if self._model.unsupervised:
raise RuntimeError("This model does not support saving evaluation predictions")
save_path = os.path.join(self._config["model_dir"], "eval")
if not tf.gfile.Exists(save_path):
tf.gfile.MakeDirs(save_path)
- scorers = evaluator.make_scorers(self._config["eval"].get("external_evaluators"))
- external_evaluator = evaluator.ExternalEvaluator(
- labels_file=self._config["data"]["eval_labels_file"],
- output_dir=save_path,
- scorers=scorers)
+ if external_scorers is not None:
+ external_evaluator = evaluator.ExternalEvaluator(
+ labels_file=self._config["data"]["eval_labels_file"],
+ output_dir=save_path,
+ scorers=evaluator.make_scorers(external_scorers))
+ else:
+ external_evaluator = None
return lambda predictions: [
hooks.SaveEvaluationPredictionHook(
self._model,
| Why is an external evaluator forced with save_eval_predictions?
Right now, if I want to save the predictions file, I have to set up an external evaluator. But if, for
example, I'm building a classification or tagger model, none of those external evaluators helps me,
yet I still need to save the predictions to a file.
Could it be possible to save the predictions file without setting up an external evaluator?
Because right now it's crashing here:
```
File "/Users/sergio.hurtado/gitlab/NaturalLanguageRecognition/venv/lib/python3.6/site-packages/opennmt/utils/evaluator.py", line 187, in make_scorers
    name = name.lower()
AttributeError: 'NoneType' object has no attribute 'lower'
```
since I want to save the file but I don't want an external evaluator, so it's None.
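A minimal configuration illustrating the request (using the `save_eval_predictions` option referenced above):
```yaml
eval:
  save_eval_predictions: true
  # No external_evaluators: none of the built-in scorers apply to a
  # classifier or tagger, but the predictions file is still wanted.
```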
| Thanks for reporting. The code should allow it.
you are welcome :) | 2019-05-23T15:24:54 |
|
OpenNMT/OpenNMT-tf | 446 | OpenNMT__OpenNMT-tf-446 | [
"329"
] | 8c62dddffb3ab0b12775970c2324001164338972 | diff --git a/opennmt/runner.py b/opennmt/runner.py
--- a/opennmt/runner.py
+++ b/opennmt/runner.py
@@ -185,7 +185,7 @@ def _make_eval_prediction_hooks_fn(self):
if external_scorers is not None:
external_evaluator = evaluator.ExternalEvaluator(
labels_file=self._config["data"]["eval_labels_file"],
- output_dir=save_path,
+ output_dir=os.path.join(self._config["model_dir"], "external_eval"),
scorers=evaluator.make_scorers(external_scorers))
else:
external_evaluator = None
| Receiving a “ValueError: best_eval_result cannot be empty or no loss is found in it.” while training the transformer model
Hello, I have an issue similar to http://forum.opennmt.net/t/receiving-a-valueerror-best-eval-result-cannot-be-empty-or-no-loss-is-found-in-it-while-training-the-transformer-model/2309.
In detail, I am training a Transformer for a translation task with this script (almost exactly as in the WMT example https://github.com/OpenNMT/OpenNMT-tf/tree/master/scripts/wmt):
```
onmt-main train_and_eval \
--model_type Transformer \
--config transformer.config --auto_config \
--num_gpus 4
```
while my config file looks like:
```
model_dir: base_transformer
data:
train_features_file: train_input.sp
train_labels_file: train_target.sp
eval_features_file: dev_input.sp
eval_labels_file: dev_target.sp
source_words_vocabulary: base.vocab
target_words_vocabulary: base.vocab
train:
save_checkpoints_steps: 1000
eval:
eval_delay: 3600 # Every 1 hour
external_evaluators: BLEU
infer:
batch_size: 32
```
The *.sp files contain lines tokenized with the `spm_train` and `spm_encode` programs, and the vocab file comes from `spm_train`.
And the full log:
```
INFO:tensorflow:Using parameters:
data:
eval_features_file: /home/naplava/experiments/dev_input.sp
eval_labels_file: /home/naplava/experiments/dev_target.sp
source_words_vocabulary: /home/naplava/experiments/base.vocab
target_words_vocabulary: /home/naplava/experiments/base.vocab
train_features_file: /home/naplava/experiments/train_input.sp
train_labels_file: /home/naplava/experiments/train_target.sp
eval:
batch_size: 32
eval_delay: 3600
exporters: best
external_evaluators: BLEU
infer:
batch_size: 32
bucket_width: 5
model_dir: base_transformer
params:
average_loss_in_time: true
beam_width: 4
decay_params:
model_dim: 512
warmup_steps: 8000
decay_type: noam_decay_v2
label_smoothing: 0.1
learning_rate: 2.0
length_penalty: 0.6
optimizer: LazyAdamOptimizer
optimizer_params:
beta1: 0.9
beta2: 0.998
score:
batch_size: 64
train:
average_last_checkpoints: 8
batch_size: 3072
batch_type: tokens
bucket_width: 1
effective_batch_size: 25000
keep_checkpoint_max: 8
maximum_features_length: 100
maximum_labels_length: 100
sample_buffer_size: -1
save_checkpoints_steps: 1000
save_summary_steps: 100
train_steps: 500000
INFO:tensorflow:Accumulate gradients of 3 iterations to reach effective batch size of 25000
2019-02-12 18:15:41.373034: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-02-12 18:15:41.975291: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: Quadro P5000 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:09:00.0
totalMemory: 15.90GiB freeMemory: 15.78GiB
2019-02-12 18:15:42.217610: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 1 with properties:
name: Quadro P5000 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:83:00.0
totalMemory: 15.90GiB freeMemory: 15.78GiB
2019-02-12 18:15:42.451703: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 2 with properties:
name: Quadro P5000 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:84:00.0
totalMemory: 15.90GiB freeMemory: 15.78GiB
2019-02-12 18:15:42.677550: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 3 with properties:
name: Quadro P5000 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:87:00.0
totalMemory: 15.90GiB freeMemory: 15.78GiB
2019-02-12 18:15:42.686989: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1, 2, 3
2019-02-12 18:15:46.010084: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-02-12 18:15:46.010220: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0 1 2 3
2019-02-12 18:15:46.010250: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N N N N
2019-02-12 18:15:46.010272: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1: N N Y Y
2019-02-12 18:15:46.010292: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 2: N Y N Y
2019-02-12 18:15:46.010312: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 3: N Y Y N
2019-02-12 18:15:46.011855: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/device:GPU:0 with 15288 MB memory) -> physical GPU (device: 0, name: Quadro P5000, pci bus id: 0000:09:00.0, compute capability: 6.1)
2019-02-12 18:15:46.014814: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/device:GPU:1 with 15288 MB memory) -> physical GPU (device: 1, name: Quadro P5000, pci bus id: 0000:83:00.0, compute capability: 6.1)
2019-02-12 18:15:46.015489: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/device:GPU:2 with 15288 MB memory) -> physical GPU (device: 2, name: Quadro P5000, pci bus id: 0000:84:00.0, compute capability: 6.1)
2019-02-12 18:15:46.015823: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/device:GPU:3 with 15288 MB memory) -> physical GPU (device: 3, name: Quadro P5000, pci bus id: 0000:87:00.0, compute capability: 6.1)
INFO:tensorflow:Using config: {'_model_dir': 'base_transformer', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 1000, '_save_checkpoints_secs': None, '_session_config': gpu_options {
}
allow_soft_placement: true
graph_options {
rewrite_options {
layout_optimizer: OFF
}
}
, '_keep_checkpoint_max': 8, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 300, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x14efa3847a20>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 1000 or save_checkpoints_secs None.
INFO:tensorflow:Training on 1157339 examples
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Number of trainable parameters: 93326081
INFO:tensorflow:Graph was finalized.
2019-02-12 18:16:50.504395: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1, 2, 3
2019-02-12 18:16:50.504698: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-02-12 18:16:50.504733: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0 1 2 3
2019-02-12 18:16:50.504756: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N N N N
2019-02-12 18:16:50.504776: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1: N N Y Y
2019-02-12 18:16:50.504795: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 2: N Y N Y
2019-02-12 18:16:50.504814: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 3: N Y Y N
2019-02-12 18:16:50.506508: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15288 MB memory) -> physical GPU (device: 0, name: Quadro P5000, pci bus id: 0000:09:00.0, compute capability: 6.1)
2019-02-12 18:16:50.506842: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 15288 MB memory) -> physical GPU (device: 1, name: Quadro P5000, pci bus id: 0000:83:00.0, compute capability: 6.1)
2019-02-12 18:16:50.507181: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 15288 MB memory) -> physical GPU (device: 2, name: Quadro P5000, pci bus id: 0000:84:00.0, compute capability: 6.1)
2019-02-12 18:16:50.507447: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 15288 MB memory) -> physical GPU (device: 3, name: Quadro P5000, pci bus id: 0000:87:00.0, compute capability: 6.1)
INFO:tensorflow:Running local_init_op.
2019-02-12 18:16:57.836436: I tensorflow/core/kernels/lookup_util.cc:376] Table trying to initialize from file /home/naplava/experiments/base.vocab is already initialized.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into base_transformer/model.ckpt.
INFO:tensorflow:loss = 10.396471, step = 0
INFO:tensorflow:loss = 9.543553, step = 100 (182.627 sec)
INFO:tensorflow:loss = 8.319929, step = 200 (147.124 sec)
INFO:tensorflow:source_words/sec: 22859
INFO:tensorflow:target_words/sec: 24948
INFO:tensorflow:global_step/sec: 0.629292
INFO:tensorflow:loss = 7.550838, step = 300 (147.459 sec)
INFO:tensorflow:source_words/sec: 22801
INFO:tensorflow:target_words/sec: 24889
INFO:tensorflow:loss = 6.224883, step = 400 (147.491 sec)
INFO:tensorflow:source_words/sec: 22779
INFO:tensorflow:target_words/sec: 24886
INFO:tensorflow:loss = 6.6132474, step = 500 (147.308 sec)
INFO:tensorflow:source_words/sec: 21945
INFO:tensorflow:target_words/sec: 23952
INFO:tensorflow:global_step/sec: 0.678376
INFO:tensorflow:loss = 5.8994384, step = 600 (147.432 sec)
INFO:tensorflow:source_words/sec: 22786
INFO:tensorflow:target_words/sec: 24895
INFO:tensorflow:loss = 4.9993877, step = 700 (147.753 sec)
INFO:tensorflow:source_words/sec: 22746
INFO:tensorflow:target_words/sec: 24839
INFO:tensorflow:loss = 4.5198164, step = 800 (147.632 sec)
INFO:tensorflow:source_words/sec: 22827
INFO:tensorflow:target_words/sec: 24861
INFO:tensorflow:global_step/sec: 0.677785
INFO:tensorflow:loss = 3.671426, step = 900 (147.247 sec)
INFO:tensorflow:source_words/sec: 21928
INFO:tensorflow:target_words/sec: 23965
INFO:tensorflow:Saving checkpoints for 1000 into base_transformer/model.ckpt.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2019-02-12-17:43:49
INFO:tensorflow:Graph was finalized.
2019-02-12 18:43:49.638650: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1, 2, 3
2019-02-12 18:43:49.638908: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-02-12 18:43:49.638939: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0 1 2 3
2019-02-12 18:43:49.638960: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N N N N
2019-02-12 18:43:49.638978: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1: N N Y Y
2019-02-12 18:43:49.638996: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 2: N Y N Y
2019-02-12 18:43:49.639014: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 3: N Y Y N
2019-02-12 18:43:49.639593: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15288 MB memory) -> physical GPU (device: 0, name: Quadro P5000, pci bus id: 0000:09:00.0, compute capability: 6.1)
2019-02-12 18:43:49.639769: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 15288 MB memory) -> physical GPU (device: 1, name: Quadro P5000, pci bus id: 0000:83:00.0, compute capability: 6.1)
2019-02-12 18:43:49.640067: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 15288 MB memory) -> physical GPU (device: 2, name: Quadro P5000, pci bus id: 0000:84:00.0, compute capability: 6.1)
2019-02-12 18:43:49.640265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 15288 MB memory) -> physical GPU (device: 3, name: Quadro P5000, pci bus id: 0000:87:00.0, compute capability: 6.1)
INFO:tensorflow:Restoring parameters from base_transformer/model.ckpt-1000
INFO:tensorflow:Running local_init_op.
2019-02-12 18:43:50.472196: I tensorflow/core/kernels/lookup_util.cc:376] Table trying to initialize from file /home/naplava/experiments/base.vocab is already initialized.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Evaluation predictions saved to base_transformer/eval/predictions.txt.1000
INFO:tensorflow:BLEU evaluation score: 41.470000
INFO:tensorflow:Finished evaluation at 2019-02-12-17:45:48
INFO:tensorflow:Saving dict for global step 1000: global_step = 1000, loss = 2.5220914
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 1000: base_transformer/model.ckpt-1000
INFO:tensorflow:Loading best metric from event files.
Traceback (most recent call last):
File "/home/naplava/venv/bin/onmt-main", line 11, in <module>
sys.exit(main())
File "/home/naplava/venv/lib/python3.6/site-packages/opennmt/bin/main.py", line 172, in main
runner.train_and_evaluate(checkpoint_path=args.checkpoint_path)
File "/home/naplava/venv/lib/python3.6/site-packages/opennmt/runner.py", line 283, in train_and_evaluate
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
File "/home/naplava/venv/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 471, in train_and_evaluate
return executor.run()
File "/home/naplava/venv/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 610, in run
return self.run_local()
File "/home/naplava/venv/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 711, in run_local
saving_listeners=saving_listeners)
File "/home/naplava/venv/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 354, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/home/naplava/venv/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1207, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/home/naplava/venv/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1241, in _train_model_default
saving_listeners)
File "/home/naplava/venv/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1471, in _train_with_estimator_spec
_, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
File "/home/naplava/venv/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 671, in run
run_metadata=run_metadata)
File "/home/naplava/venv/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1156, in run
run_metadata=run_metadata)
File "/home/naplava/venv/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1255, in run
raise six.reraise(*original_exc_info)
File "/home/naplava/venv/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
File "/home/naplava/venv/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1240, in run
return self._sess.run(*args, **kwargs)
File "/home/naplava/venv/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1320, in run
run_metadata=run_metadata))
File "/home/naplava/venv/lib/python3.6/site-packages/tensorflow/python/training/basic_session_run_hooks.py", line 582, in after_run
if self._save(run_context.session, global_step):
File "/home/naplava/venv/lib/python3.6/site-packages/tensorflow/python/training/basic_session_run_hooks.py", line 607, in _save
if l.after_save(session, step):
File "/home/naplava/venv/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 517, in after_save
self._evaluate(global_step_value) # updates self.eval_result
File "/home/naplava/venv/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 537, in _evaluate
self._evaluator.evaluate_and_export())
File "/home/naplava/venv/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 924, in evaluate_and_export
is_the_final_export)
File "/home/naplava/venv/lib/python3.6/site-packages/tensorflow/python/estimator/training.py", line 957, in _export_eval_result
is_the_final_export=is_the_final_export))
File "/home/naplava/venv/lib/python3.6/site-packages/tensorflow/python/estimator/exporter.py", line 298, in export
full_event_file_pattern)
File "/home/naplava/venv/lib/python3.6/site-packages/tensorflow/python/estimator/exporter.py", line 365, in _get_best_eval_result
best_eval_result, event_eval_result):
File "/home/naplava/venv/lib/python3.6/site-packages/tensorflow/python/estimator/exporter.py", line 150, in _loss_smaller
'best_eval_result cannot be empty or no loss is found in it.')
ValueError: best_eval_result cannot be empty or no loss is found in it.
```
| Hi,
This appears to be a recurrent TensorFlow bug. The workaround is to change or disable the model exporter:
```yaml
eval:
exporters: last
```
or:
```yaml
eval:
exporters: null
```
I will push a fix to change the default value which seems to cause issues for some users.
I pushed the change in version 1.19.1.
> I pushed the change in version 1.19.1.
I also wonder how to deal with this problem. I think OpenNMT could save the top N best models rather than the last ones.
Did you try a newer TensorFlow version?
> Did you try a newer TensorFlow version?
My tensorflow is 1.13.1
It seems to be interfering with `external_evaluators: BLEU`. Will try to find a workaround.
> It seems to be interfering with `external_evaluators: BLEU`. Will try to find a workaround.
Thanks for your reply. | 2019-05-30T09:06:25 |
|
OpenNMT/OpenNMT-tf | 453 | OpenNMT__OpenNMT-tf-453 | [
"450"
] | 32f604c65fbec44586ff3fc24816f6dc3a07b56f | diff --git a/opennmt/utils/optim.py b/opennmt/utils/optim.py
--- a/opennmt/utils/optim.py
+++ b/opennmt/utils/optim.py
@@ -1,6 +1,7 @@
"""Optimization related functions."""
import collections
+import re
import tensorflow as tf
@@ -183,6 +184,8 @@ def optimize_loss(loss, params, mixed_precision=False, var_list=None, hvd=None):
optimizer = DistributedOptimizer.from_params(optimizer, params=params.get("horovod"))
# Gradients.
+ var_list = _get_trainable_variables(
+ var_list=var_list, freeze_variables=params.get("freeze_variables"))
gradients = optimizer.compute_gradients(
loss, var_list=var_list, colocate_gradients_with_ops=True)
_summarize_gradients_norm("global_norm/gradient_norm", gradients)
@@ -309,3 +312,28 @@ def _clip_gradients_by_norm(grads_and_vars, clip_gradients):
def _summarize_gradients_norm(name, gradients):
"""Summarizes global norm of gradients."""
tf.summary.scalar(name, tf.global_norm(list(zip(*gradients))[0]))
+
+def _print_var_list(var_list):
+ for variable in var_list:
+ tf.logging.info(" * %s", variable.name)
+
+def _get_trainable_variables(var_list=None, freeze_variables=None):
+ if var_list is None:
+ var_list = tf.trainable_variables()
+ if freeze_variables:
+ if not isinstance(freeze_variables, list):
+ freeze_variables = [freeze_variables]
+ regexs = list(map(re.compile, freeze_variables))
+ frozen_variables = []
+ trainable_variables = []
+ for variable in var_list:
+ if any(regex.match(variable.name) for regex in regexs):
+ frozen_variables.append(variable)
+ else:
+ trainable_variables.append(variable)
+ tf.logging.info("Frozen variables:")
+ _print_var_list(frozen_variables)
+ tf.logging.info("Trainable variables:")
+ _print_var_list(trainable_variables)
+ var_list = trainable_variables
+ return var_list
| Fine-tuning only a subset of parameters
Hi,
Is there a possibility of optimizing only a subset of the parameters in the computation graph?
| Hi,
Currently not but this is not too difficult to implement with a regular expression on variable names. I will look to add that. | 2019-06-05T09:27:44 |
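Based on the patch above, the new option is read from the training parameters and accepts a single regular expression or a list of them, matched against variable names. A minimal configuration sketch (the layer name patterns are illustrative and depend on the model's actual variable scopes):
```yaml
params:
  freeze_variables:
    - "transformer/encoder/layer_0/.*"      # hypothetical scope name
    - "transformer/decoder/output_layer/.*" # hypothetical scope name
```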
|
OpenNMT/OpenNMT-tf | 503 | OpenNMT__OpenNMT-tf-503 | [
"429"
] | 9cb7f6c0e9919248ddf42243f70cf0507a3c06b0 | diff --git a/opennmt/evaluation.py b/opennmt/evaluation.py
--- a/opennmt/evaluation.py
+++ b/opennmt/evaluation.py
@@ -92,7 +92,7 @@ def _eval(next_fn):
self._eval = _eval
- self._metrics_name = {"loss"}
+ self._metrics_name = {"loss", "perplexity"}
for scorer in self._scorers:
self._metrics_name.update(scorer.scores_name)
model_metrics = self._model.get_metrics()
@@ -173,7 +173,7 @@ def should_stop(self):
if higher_is_better is None:
# TODO: the condition below is not always true, find a way to set it
# correctly for Keras metrics.
- higher_is_better = target_metric != "loss"
+ higher_is_better = target_metric not in ("loss", "perplexity")
metrics = [values[target_metric] for _, values in self._metrics_history]
should_stop = early_stop(
metrics,
@@ -224,7 +224,7 @@ def __call__(self, step):
raise RuntimeError("No examples were evaluated")
loss = loss_num / loss_den
- results = dict(loss=loss)
+ results = dict(loss=loss, perplexity=tf.math.exp(loss))
if metrics:
for name, metric in six.iteritems(metrics):
results[name] = metric.result()
| diff --git a/opennmt/tests/evaluation_test.py b/opennmt/tests/evaluation_test.py
--- a/opennmt/tests/evaluation_test.py
+++ b/opennmt/tests/evaluation_test.py
@@ -1,3 +1,4 @@
+import math
import os
import six
@@ -55,6 +56,12 @@ def compute_loss(self, outputs, labels, training=True):
class EvaluationTest(tf.test.TestCase):
+ def _assertMetricsEqual(self, metrics, expected):
+ self.assertLen(metrics, len(expected))
+ for name in six.iterkeys(expected):
+ self.assertIn(name, metrics)
+ self.assertAllClose(metrics[name], expected[name])
+
def testEvaluationMetric(self):
features_file = os.path.join(self.get_temp_dir(), "features.txt")
labels_file = os.path.join(self.get_temp_dir(), "labels.txt")
@@ -72,18 +79,18 @@ def testEvaluationMetric(self):
batch_size=1,
early_stopping=early_stopping,
eval_dir=eval_dir)
- self.assertSetEqual(evaluator.metrics_name, {"loss", "a", "b"})
+ self.assertSetEqual(evaluator.metrics_name, {"loss", "perplexity", "a", "b"})
metrics_5 = evaluator(5)
- self.assertDictEqual(metrics_5, {"loss": 1.0, "a": 2, "b": 3})
+ self._assertMetricsEqual(
+ metrics_5, {"loss": 1.0, "perplexity": math.exp(1.0), "a": 2, "b": 3})
self.assertFalse(evaluator.should_stop())
metrics_10 = evaluator(10)
- self.assertDictEqual(metrics_10, {"loss": 4.0, "a": 5, "b": 6})
+ self._assertMetricsEqual(
+ metrics_10, {"loss": 4.0, "perplexity": math.exp(4.0), "a": 5, "b": 6})
self.assertTrue(evaluator.should_stop())
self.assertLen(evaluator.metrics_history, 2)
- self.assertEqual(evaluator.metrics_history[0][0], 5)
- self.assertEqual(evaluator.metrics_history[0][1], metrics_5)
- self.assertEqual(evaluator.metrics_history[1][0], 10)
- self.assertEqual(evaluator.metrics_history[1][1], metrics_10)
+ self._assertMetricsEqual(evaluator.metrics_history[0][1], metrics_5)
+ self._assertMetricsEqual(evaluator.metrics_history[1][1], metrics_10)
# Recreating the evaluator should load the metrics history from the eval directory.
evaluator = evaluation.Evaluator(
@@ -93,13 +100,12 @@ def testEvaluationMetric(self):
batch_size=1,
eval_dir=eval_dir)
self.assertLen(evaluator.metrics_history, 2)
- self.assertEqual(evaluator.metrics_history[0][0], 5)
- self.assertEqual(evaluator.metrics_history[0][1], metrics_5)
- self.assertEqual(evaluator.metrics_history[1][0], 10)
- self.assertEqual(evaluator.metrics_history[1][1], metrics_10)
+ self._assertMetricsEqual(evaluator.metrics_history[0][1], metrics_5)
+ self._assertMetricsEqual(evaluator.metrics_history[1][1], metrics_10)
# Evaluating previous steps should clear future steps in the history.
- self.assertDictEqual(evaluator(7), {"loss": 7.0, "a": 8, "b": 9})
+ self._assertMetricsEqual(
+ evaluator(7), {"loss": 7.0, "perplexity": math.exp(7.0), "a": 8, "b": 9})
recorded_steps = list(step for step, _ in evaluator.metrics_history)
self.assertListEqual(recorded_steps, [5, 7])
| How to support PPL
Can you add the PPL (perplexity) measurement?
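The patch above implements this: perplexity is reported next to the loss as the exponential of the average per-token cross-entropy, and is treated as a lower-is-better metric for early stopping. A quick worked example of the relationship:
```python
import math

loss = 2.3                   # average per-token cross-entropy from evaluation
perplexity = math.exp(loss)  # ~9.97, the value reported as "perplexity"
```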
| 2019-10-02T07:45:51 |
|
OpenNMT/OpenNMT-tf | 515 | OpenNMT__OpenNMT-tf-515 | [
"514"
] | d770060b8d3f58257f3f6eef0d005e050a50bf8f | diff --git a/opennmt/data/text.py b/opennmt/data/text.py
--- a/opennmt/data/text.py
+++ b/opennmt/data/text.py
@@ -40,13 +40,18 @@ def tokens_to_words(tokens, subword_token="■", is_spacer=None):
if is_spacer is None:
is_spacer = subword_token == "▁"
if is_spacer:
- subword = tf.strings.regex_full_match(tokens, "[^%s].*" % subword_token)
+ # First token implicitly starts with a spacer.
+ left_and_single = tf.logical_or(
+ tf.strings.regex_full_match(tokens, "%s.*" % subword_token),
+ tf.one_hot(0, tf.shape(tokens)[0], on_value=True, off_value=False))
+ right = tf.strings.regex_full_match(tokens, ".+%s" % subword_token)
+ word_start = tf.logical_or(tf.roll(right, shift=1, axis=0), left_and_single)
else:
right = tf.strings.regex_full_match(tokens, ".*%s" % subword_token)
left = tf.strings.regex_full_match(tokens, "%s.*" % subword_token)
subword = tf.logical_or(tf.roll(right, shift=1, axis=0), left)
- start = tf.logical_not(subword)
- start_indices = tf.squeeze(tf.where(start), -1)
+ word_start = tf.logical_not(subword)
+ start_indices = tf.squeeze(tf.where(word_start), -1)
return tf.RaggedTensor.from_row_starts(tokens, start_indices)
def alignment_matrix_from_pharaoh(alignment_line,
| diff --git a/opennmt/tests/text_test.py b/opennmt/tests/text_test.py
--- a/opennmt/tests/text_test.py
+++ b/opennmt/tests/text_test.py
@@ -44,6 +44,8 @@ def testToWordsWithJoiner(self, tokens, expected):
@parameterized.expand([
[["▁a", "b", "▁c", "d", "e"], [["▁a", "b", ""], ["▁c", "d", "e"]]],
[["▁", "a", "b", "▁", "c", "d", "e"], [["▁", "a", "b", ""], ["▁", "c", "d", "e"]]],
+ [["a▁", "b", "c▁", "d", "e"], [["a▁", ""], ["b", "c▁"], ["d", "e"]]],
+ [["a", "▁b▁", "c", "d", "▁", "e"], [["a", ""], ["▁b▁", ""], ["c", "d"], ["▁", "e"]]],
])
def testToWordsWithSpacer(self, tokens, expected):
tokens = tf.constant(tokens)
| Subword tokenisation spacer can mark the beginning of a word
Certain sequence noising operations need to retrieve a list of words from the raw list of subword tokens. For example:
* Decoding with word removal/reordering to produce noisy back-translations as in [Scaling BT paper](https://arxiv.org/abs/1808.09381)
* Word omission to support the new contrastive learning feature as in the [contrastive learning paper](https://www.aclweb.org/anthology/P19-1623.pdf)
* Presumably more features relying on word level noise might come up in the future
In these cases the user should specify some details for the sub-tokenisation process:
1. What subword token was used? (`decoding_subword_token`)
2. Was that token a joiner or a spacer? (`decoding_subword_token_is_spacer`)
When the user specifies (explicitly or implicitly) a spacer, the framework assumes that the spacer symbol appears at the beginning of each word, similar to what SentencePiece does. However, this does not have to be the case: the spacer could also appear at the end of each word - for example, [this one does](https://github.com/kovalevfm/SubTokenizer). If that extra sub-tokenisation flexibility is desired, we can add this configuration parameter. A sample implementation could look like [this](https://github.com/steremma/OpenNMT-tf/commit/d109af49911431e424b28def575fb94f07bfec47).
I realise that most users rely on standard tools that are covered by the current implementation. If there is a user base for which the extra flexibility is desired, I can submit a PR that reads this option from the YAML.
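For reference, a minimal sketch of the behaviour added by the patch above, with the expected grouping taken directly from the new test cases:
```python
import tensorflow as tf
from opennmt.data import text

# Spacer at the END of each word, as produced by some tokenizers:
tokens = tf.constant(["a▁", "b", "c▁", "d", "e"])
words = text.tokens_to_words(tokens, subword_token="▁", is_spacer=True)
# words is a tf.RaggedTensor with rows ["a▁"], ["b", "c▁"], ["d", "e"]
```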
| 2019-10-10T08:20:03 |
|
OpenNMT/OpenNMT-tf | 521 | OpenNMT__OpenNMT-tf-521 | [
"519"
] | 541540fe1eff1393f61983a3df2e1e7f3f0dbc4d | diff --git a/opennmt/data/noise.py b/opennmt/data/noise.py
--- a/opennmt/data/noise.py
+++ b/opennmt/data/noise.py
@@ -54,6 +54,10 @@ def __call__(self, tokens, sequence_length=None, keep_shape=False):
Returns:
A tuple with the noisy version of :obj:`tokens` and the new lengths.
"""
+ with tf.device("cpu:0"):
+ return self._call(tokens, sequence_length, keep_shape)
+
+ def _call(self, tokens, sequence_length, keep_shape):
rank = tokens.shape.ndims
if rank == 1:
input_length = tf.shape(tokens)[0]
@@ -77,7 +81,7 @@ def __call__(self, tokens, sequence_length=None, keep_shape=False):
if sequence_length is None:
raise ValueError("sequence_length must be passed for 2D inputs")
tokens, sequence_length = tf.map_fn(
- lambda arg: self(*arg, keep_shape=True),
+ lambda arg: self._call(*arg, keep_shape=True),
(tokens, sequence_length),
back_prop=False)
if not keep_shape:
@@ -89,7 +93,7 @@ def __call__(self, tokens, sequence_length=None, keep_shape=False):
original_shape = misc.shape_list(tokens)
tokens = tf.reshape(tokens, [-1, original_shape[-1]])
sequence_length = tf.reshape(sequence_length, [-1])
- tokens, sequence_length = self(tokens, sequence_length, keep_shape=keep_shape)
+ tokens, sequence_length = self._call(tokens, sequence_length, keep_shape=keep_shape)
tokens = tf.reshape(tokens, original_shape[:-1] + [-1])
sequence_length = tf.reshape(sequence_length, original_shape[:-1])
return tokens, sequence_length
diff --git a/opennmt/tokenizers/tokenizer.py b/opennmt/tokenizers/tokenizer.py
--- a/opennmt/tokenizers/tokenizer.py
+++ b/opennmt/tokenizers/tokenizer.py
@@ -76,6 +76,10 @@ def tokenize(self, text):
Raises:
ValueError: if the rank of :obj:`text` is greater than 1.
"""
+ with tf.device("cpu:0"):
+ return self._tokenize(text)
+
+ def _tokenize(self, text):
if tf.is_tensor(text):
rank = len(text.shape)
if rank == 0:
@@ -112,6 +116,10 @@ def detokenize(self, tokens, sequence_length=None):
ValueError: if :obj:`tokens` is a 2-D dense ``tf.Tensor`` and
:obj:`sequence_length` is not set.
"""
+ with tf.device("cpu:0"):
+ return self._detokenize(tokens, sequence_length)
+
+ def _detokenize(self, tokens, sequence_length):
if tf.is_tensor(tokens):
rank = len(tokens.shape)
if rank == 1:
| diff --git a/opennmt/tests/noise_test.py b/opennmt/tests/noise_test.py
--- a/opennmt/tests/noise_test.py
+++ b/opennmt/tests/noise_test.py
@@ -60,14 +60,21 @@ def testWordPermutation(self, k):
for i, v in enumerate(y.tolist()):
self.assertLess(abs(int(v) - i), k)
- def testWordNoising(self):
- tokens = tf.constant([["a■", "b", "c■", "d", "■e"], ["a", "b", "c", "", ""]])
- lengths = tf.constant([5, 3])
+ @parameterized.expand([
+ [True, [["a■", "b", "c■", "d", "■e"], ["a", "b", "c", "", ""]], [5, 3]],
+ [False, [["a■", "b", "c■", "d", "■e"], ["a", "b", "c", "", ""]], [5, 3]],
+ [False, ["a■", "b", "c■", "d", "■e"], None]
+ ])
+ def testWordNoising(self, as_function, tokens, lengths):
+ tokens = tf.constant(tokens)
+ if lengths is not None:
+ lengths = tf.constant(lengths, dtype=tf.int32)
noiser = noise.WordNoiser()
noiser.add(noise.WordDropout(0.1))
noiser.add(noise.WordReplacement(0.1))
noiser.add(noise.WordPermutation(3))
- noisy_tokens, noisy_lengths = noiser(tokens, sequence_length=lengths, keep_shape=True)
+ noiser_fn = tf.function(noiser) if as_function else noiser
+ noisy_tokens, noisy_lengths = noiser_fn(tokens, sequence_length=lengths, keep_shape=True)
tokens, noisy_tokens = self.evaluate([tokens, noisy_tokens])
self.assertAllEqual(noisy_tokens.shape, tokens.shape)
| InvalidArgumentError when running evaluation with noise decoding
I use OpenNMT-tf to train a Transformer model. The training process goes well, but an error occurs when running evaluation.
Here is the log:
```bash
INFO:tensorflow:Saved checkpoint /opt/algo_nfs/kdd_luozhouyang/jdrewrite/model/transformer/ckpt-0
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1781: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
2019-10-16 02:56:01.463522: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:143] Filling up shuffle buffer (this may take a while): 1521766 of 1799870
2019-10-16 02:56:03.298341: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:193] Shuffle buffer filled.
INFO:tensorflow:Step = 100 ; source words/s = 10616, target words/s = 289 ; Learning rate = 0.000100 ; Loss = 5.583332
INFO:tensorflow:Step = 200 ; source words/s = 77108, target words/s = 1940 ; Learning rate = 0.000100 ; Loss = 5.666554
INFO:tensorflow:Step = 300 ; source words/s = 78702, target words/s = 1986 ; Learning rate = 0.000100 ; Loss = 5.430358
INFO:tensorflow:Step = 400 ; source words/s = 77159, target words/s = 1919 ; Learning rate = 0.000100 ; Loss = 5.042500
INFO:tensorflow:Step = 500 ; source words/s = 76053, target words/s = 2014 ; Learning rate = 0.000100 ; Loss = 4.841046
INFO:tensorflow:Step = 600 ; source words/s = 78079, target words/s = 1891 ; Learning rate = 0.000100 ; Loss = 4.983972
INFO:tensorflow:Step = 700 ; source words/s = 77481, target words/s = 2021 ; Learning rate = 0.000100 ; Loss = 4.961732
INFO:tensorflow:Step = 800 ; source words/s = 77352, target words/s = 1915 ; Learning rate = 0.000100 ; Loss = 4.508535
INFO:tensorflow:Step = 900 ; source words/s = 76161, target words/s = 1987 ; Learning rate = 0.000100 ; Loss = 3.886328
INFO:tensorflow:Step = 1000 ; source words/s = 76549, target words/s = 1901 ; Learning rate = 0.000100 ; Loss = 4.771955
INFO:tensorflow:Step = 1100 ; source words/s = 79043, target words/s = 1934 ; Learning rate = 0.000100 ; Loss = 3.687625
INFO:tensorflow:Step = 1200 ; source words/s = 78311, target words/s = 1921 ; Learning rate = 0.000100 ; Loss = 3.797333
INFO:tensorflow:Step = 1300 ; source words/s = 78706, target words/s = 1999 ; Learning rate = 0.000100 ; Loss = 3.763635
INFO:tensorflow:Step = 1400 ; source words/s = 78690, target words/s = 2004 ; Learning rate = 0.000100 ; Loss = 3.591205
INFO:tensorflow:Step = 1500 ; source words/s = 79119, target words/s = 1951 ; Learning rate = 0.000100 ; Loss = 3.379777
INFO:tensorflow:Step = 1600 ; source words/s = 77603, target words/s = 1934 ; Learning rate = 0.000100 ; Loss = 3.521855
INFO:tensorflow:Step = 1700 ; source words/s = 76440, target words/s = 2002 ; Learning rate = 0.000100 ; Loss = 2.911270
INFO:tensorflow:Step = 1800 ; source words/s = 77462, target words/s = 1838 ; Learning rate = 0.000100 ; Loss = 3.304400
INFO:tensorflow:Step = 1900 ; source words/s = 77666, target words/s = 1945 ; Learning rate = 0.000100 ; Loss = 3.305823
INFO:tensorflow:Step = 2000 ; source words/s = 76761, target words/s = 1893 ; Learning rate = 0.000100 ; Loss = 2.937279
INFO:tensorflow:Step = 2100 ; source words/s = 77877, target words/s = 2019 ; Learning rate = 0.000100 ; Loss = 2.932232
INFO:tensorflow:Step = 2200 ; source words/s = 76121, target words/s = 1924 ; Learning rate = 0.000100 ; Loss = 3.219757
INFO:tensorflow:Step = 2300 ; source words/s = 77865, target words/s = 1909 ; Learning rate = 0.000100 ; Loss = 2.884820
INFO:tensorflow:Step = 2400 ; source words/s = 75515, target words/s = 1970 ; Learning rate = 0.000100 ; Loss = 2.969361
INFO:tensorflow:Step = 2500 ; source words/s = 77161, target words/s = 1861 ; Learning rate = 0.000100 ; Loss = 2.742362
INFO:tensorflow:Step = 2600 ; source words/s = 76826, target words/s = 1941 ; Learning rate = 0.000100 ; Loss = 2.909730
INFO:tensorflow:Step = 2700 ; source words/s = 76092, target words/s = 1955 ; Learning rate = 0.000100 ; Loss = 3.543749
INFO:tensorflow:Step = 2800 ; source words/s = 77019, target words/s = 1937 ; Learning rate = 0.000100 ; Loss = 2.521420
INFO:tensorflow:Step = 2900 ; source words/s = 77562, target words/s = 1834 ; Learning rate = 0.000100 ; Loss = 2.453693
INFO:tensorflow:Step = 3000 ; source words/s = 76726, target words/s = 1950 ; Learning rate = 0.000100 ; Loss = 2.646461
INFO:tensorflow:Step = 3100 ; source words/s = 76233, target words/s = 1901 ; Learning rate = 0.000100 ; Loss = 3.034106
INFO:tensorflow:Step = 3200 ; source words/s = 75719, target words/s = 1955 ; Learning rate = 0.000100 ; Loss = 3.000289
INFO:tensorflow:Step = 3300 ; source words/s = 76795, target words/s = 1940 ; Learning rate = 0.000100 ; Loss = 2.223612
INFO:tensorflow:Step = 3400 ; source words/s = 77994, target words/s = 1928 ; Learning rate = 0.000100 ; Loss = 2.867700
INFO:tensorflow:Step = 3500 ; source words/s = 75764, target words/s = 1936 ; Learning rate = 0.000100 ; Loss = 2.304001
INFO:tensorflow:Step = 3600 ; source words/s = 78271, target words/s = 1863 ; Learning rate = 0.000100 ; Loss = 2.193941
INFO:tensorflow:Step = 3700 ; source words/s = 76626, target words/s = 2013 ; Learning rate = 0.000100 ; Loss = 2.812444
INFO:tensorflow:Step = 3800 ; source words/s = 77158, target words/s = 1861 ; Learning rate = 0.000100 ; Loss = 2.488263
INFO:tensorflow:Step = 3900 ; source words/s = 77520, target words/s = 1848 ; Learning rate = 0.000100 ; Loss = 2.229686
INFO:tensorflow:Step = 4000 ; source words/s = 76969, target words/s = 1936 ; Learning rate = 0.000100 ; Loss = 3.432198
INFO:tensorflow:Step = 4100 ; source words/s = 77902, target words/s = 1921 ; Learning rate = 0.000100 ; Loss = 2.815073
INFO:tensorflow:Step = 4200 ; source words/s = 78302, target words/s = 1989 ; Learning rate = 0.000100 ; Loss = 2.300324
INFO:tensorflow:Step = 4300 ; source words/s = 78127, target words/s = 2023 ; Learning rate = 0.000100 ; Loss = 2.335404
INFO:tensorflow:Step = 4400 ; source words/s = 79337, target words/s = 1939 ; Learning rate = 0.000100 ; Loss = 2.305170
INFO:tensorflow:Step = 4500 ; source words/s = 77957, target words/s = 1956 ; Learning rate = 0.000100 ; Loss = 2.216660
INFO:tensorflow:Step = 4600 ; source words/s = 77612, target words/s = 1936 ; Learning rate = 0.000100 ; Loss = 2.549562
INFO:tensorflow:Step = 4700 ; source words/s = 77886, target words/s = 1901 ; Learning rate = 0.000100 ; Loss = 2.596675
INFO:tensorflow:Step = 4800 ; source words/s = 77120, target words/s = 1955 ; Learning rate = 0.000100 ; Loss = 1.904201
INFO:tensorflow:Step = 4900 ; source words/s = 76252, target words/s = 1977 ; Learning rate = 0.000100 ; Loss = 2.229644
INFO:tensorflow:Step = 5000 ; source words/s = 78700, target words/s = 1816 ; Learning rate = 0.000100 ; Loss = 2.202982
INFO:tensorflow:Running evaluation for step 5000
2019-10-16 03:05:49.514388: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Invalid argument: 2 root error(s) found.
(0) Invalid argument: During Variant Host->Device Copy: non-DMA-copy attempted of tensor type: string
(1) Invalid argument: During Variant Host->Device Copy: non-DMA-copy attempted of tensor type: string
0 successful operations.
0 derived errors ignored.
[[{{node transformer/map/TensorArrayUnstack/TensorListFromTensor/_96}}]]
[[transformer/map/while/body/_397/RaggedFromTensor/boolean_mask/concat/_252]]
2019-10-16 03:05:49.514983: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Invalid argument: 2 root error(s) found.
(0) Invalid argument: During Variant Host->Device Copy: non-DMA-copy attempted of tensor type: string
(1) Invalid argument: During Variant Host->Device Copy: non-DMA-copy attempted of tensor type: string
0 successful operations.
0 derived errors ignored.
[[{{node transformer/map/TensorArrayUnstack/TensorListFromTensor/_96}}]]
Traceback (most recent call last):
File "/usr/local/bin/onmt-main", line 10, in <module>
sys.exit(main())
File "/usr/local/lib/python3.6/dist-packages/opennmt/bin/main.py", line 189, in main
checkpoint_path=args.checkpoint_path)
File "/usr/local/lib/python3.6/dist-packages/opennmt/runner.py", line 205, in train
export_on_best=eval_config.get("export_on_best"))
File "/usr/local/lib/python3.6/dist-packages/opennmt/training.py", line 168, in __call__
self._evaluate(evaluator, step, export_on_best=export_on_best)
File "/usr/local/lib/python3.6/dist-packages/opennmt/training.py", line 180, in _evaluate
metrics = evaluator(step)
File "/usr/local/lib/python3.6/dist-packages/opennmt/evaluation.py", line 241, in __call__
for loss, predictions, target in self._eval(): # pylint: disable=no-value-for-parameter
File "/usr/local/lib/python3.6/dist-packages/opennmt/data/dataset.py", line 433, in _fun
outputs = _tf_fun()
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/def_function.py", line 457, in __call__
result = self._call(*args, **kwds)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/def_function.py", line 526, in _call
return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds) # pylint: disable=protected-access
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py", line 1141, in _filtered_call
self.captured_inputs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py", line 1224, in _call_flat
ctx, args, cancellation_manager=cancellation_manager)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py", line 511, in call
ctx=ctx)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/execute.py", line 67, in quick_execute
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: 2 root error(s) found.
(0) Invalid argument: During Variant Host->Device Copy: non-DMA-copy attempted of tensor type: string
(1) Invalid argument: During Variant Host->Device Copy: non-DMA-copy attempted of tensor type: string
0 successful operations.
0 derived errors ignored.
[[{{node transformer/map/TensorArrayUnstack/TensorListFromTensor/_96}}]]
[[transformer/map/while/body/_397/RaggedFromTensor/boolean_mask/concat/_252]]
(1) Invalid argument: 2 root error(s) found.
(0) Invalid argument: During Variant Host->Device Copy: non-DMA-copy attempted of tensor type: string
(1) Invalid argument: During Variant Host->Device Copy: non-DMA-copy attempted of tensor type: string
0 successful operations.
0 derived errors ignored.
[[{{node transformer/map/TensorArrayUnstack/TensorListFromTensor/_96}}]]
0 successful operations.
0 derived errors ignored. [Op:__inference__tf_fun_60338]
Function call stack:
_tf_fun -> _tf_fun
```
| Here is a related TensorFlow issue: [tf.function failed with tf.image.summary](https://github.com/tensorflow/tensorflow/issues/28007)
Can you post your configuration file?
Here is my config file:
```yml
model_dir: /opt/algo_nfs/kdd_luozhouyang/model/transformer
data:
# (required for train run type).
train_features_file: /opt/algo_nfs/kdd_luozhouyang/onmt/src.1800000.lower.char.train
train_labels_file: /opt/algo_nfs/kdd_luozhouyang/onmt/tgt.1800000.lower.char.train
# (optional) Pharaoh alignments of the training files.
train_alignment:
# (required for train_end_eval and eval run types).
eval_features_file: /opt/algo_nfs/kdd_luozhouyang/onmt/src.100000.lower.char.eval
eval_labels_file: /opt/algo_nfs/kdd_luozhouyang/onmt/tgt.100000.lower.char.eval
# (optional) Models may require additional resource files (e.g. vocabularies).
source_vocabulary: /opt/algo_nfs/kdd_luozhouyang/onmt/vocab.src.txt
target_vocabulary: /opt/algo_nfs/kdd_luozhouyang/onmt/vocab.tgt.txt
# (optional) Tokenization configuration (or path to a configuration file).
# See also: https://github.com/OpenNMT/Tokenizer/blob/master/docs/options.md
source_tokenization:
type: SpaceTokenizer
#params:
#mode: space
#joiner_annotate: true
#segment_numbers: true
#segment_alphabet_change: true
target_tokenization:
type: SpaceTokenizer
#params:
#mode: space
#joiner_annotate: true
#segment_numbers: true
#segment_alphabet_change: true
# (optional) Pretrained embedding configuration.
#source_embedding:
# path: data/glove/glove-100000.txt
# with_header: True
# case_insensitive: True
# trainable: False
# (optional) For sequence tagging tasks, the tagging scheme that is used (e.g. BIOES).
# For supported schemes, additional evaluation metrics could be computed such as
# precision, recall, etc. (accepted values: bioes; default: null).
#tagging_scheme: bioes
# Model and optimization parameters.
params:
# The optimizer class name in tf.keras.optimizers or tfa.optimizers.
optimizer: Adam
# (optional) Additional optimizer parameters as defined in their documentation.
# If weight_decay is set, the optimizer will be extended with decoupled weight decay.
optimizer_params:
beta_1: 0.8
beta_2: 0.998
learning_rate: 1.0
# (optional) If set, overrides all dropout values configured in the model definition.
dropout: 0.3
# (optional) List of layer to not optimize.
#freeze_layers:
# - "encoder/layers/0"
# - "decoder/output_layer"
# (optional) Weights regularization penalty (default: null).
regularization:
type: l2 # can be "l1", "l2", "l1_l2" (case-insensitive).
scale: 1e-4 # if using "l1_l2" regularization, this should be a YAML list.
# (optional) Average loss in the time dimension in addition to the batch dimension (default: False).
average_loss_in_time: false
# (optional) The type of learning rate decay (default: null). See:
# * https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/schedules
# * opennmt/schedules/lr_schedules.py
# This value may change the semantics of other decay options. See the documentation or the code.
decay_type: NoamDecay
# (optional unless decay_type is set) Decay parameters.
decay_params:
model_dim: 512
warmup_steps: 4000
# (optional) After how many steps to start the decay (default: 0).
start_decay_steps: 50000
# (optional) The learning rate minimum value (default: 0).
minimum_learning_rate: 0.0001
# (optional) Type of scheduled sampling (can be "constant", "linear", "exponential",
# or "inverse_sigmoid", default: "constant").
#scheduled_sampling_type: constant
# (optional) Probability to read directly from the inputs instead of sampling categorically
# from the output ids (default: 1).
#scheduled_sampling_read_probability: 1
# (optional unless scheduled_sampling_type is set) The constant k of the schedule.
#scheduled_sampling_k: 0
# (optional) The label smoothing value.
label_smoothing: 0.1
# (optional) Width of the beam search (default: 1).
beam_width: 5
# (optional) Number of hypotheses to return (default: 1). Set 0 to return all
# available hypotheses. This value is also set by infer/n_best.
num_hypotheses: 1
# (optional) Length penaly weight to use during beam search (default: 0).
length_penalty: 0.2
# (optional) Coverage penaly weight to use during beam search (default: 0).
coverage_penalty: 0.2
# (optional) Sample predictions from the top K most likely tokens (requires
# beam_width to 1). If 0, sample from the full output distribution (default: 1).
sampling_topk: 1
# (optional) High temperatures generate more random samples (default: 1).
sampling_temperature: 1
# (optional) Sequence of noise to apply to the decoding output. Each element
# should be a noise type (can be: "dropout", "replacement", "permutation") and
# the module arguments
# (see http://opennmt.net/OpenNMT-tf/package/opennmt.data.noise.html)
decoding_noise:
- dropout: 0.1
- replacement: [0.1, ⦅unk⦆]
- permutation: 3
# (optional) Define the subword marker. This is useful to apply noise at the
# word level instead of the subword level (default: ■).
decoding_subword_token: ■
# (optional) Whether decoding_subword_token is used as a spacer (as in SentencePiece) or a joiner (as in BPE).
# If unspecified, will infer directly from decoding_subword_token.
decoding_subword_token_is_spacer: false
# (optional) Minimum length of decoded sequences, end token excluded (default: 0).
minimum_decoding_length: 0
# (optional) Maximum length of decoded sequences, end token excluded (default: 250).
maximum_decoding_length: 30
# (optional) Replace unknown target tokens by the original source token with the
# highest attention (default: false).
replace_unknown_target: false
# (optional) The type of guided alignment cost to compute (can be: "null", "ce", "mse",
# default: "null").
guided_alignment_type: null
# (optional) The weight of the guided alignment cost (default: 1).
guided_alignment_weight: 1
# (optional) Enable contrastive learning mode, see
# https://www.aclweb.org/anthology/P19-1623 (default: false).
# See also "decoding_subword_token" that is used by this mode.
contrastive_learning: false
# (optional) The value of the parameter eta in the max-margin loss (default: 0.1).
max_margin_eta: 0.1
# Training options.
train:
# (optional when batch_type=tokens) If not set, the training will search the largest
# possible batch size.
batch_size: 32
# (optional) Batch size is the number of "examples" or "tokens" (default: "examples").
batch_type: examples
# (optional) Tune gradient accumulation to train with at least this effective batch size
# (default: null).
effective_batch_size: null
# (optional) Save a checkpoint every this many steps (default: 5000).
save_checkpoints_steps: 10000
# (optional) How many checkpoints to keep on disk.
keep_checkpoint_max: 5
# (optional) Dump summaries and logs every this many steps (default: 100).
save_summary_steps: 100
# (optional) Maximum training step. If not set, train forever.
max_step: 1000000
# (optional) If true, makes a single pass over the training data (default: false).
single_pass: false
# (optional) The maximum length of feature sequences during training (default: null).
maximum_features_length: 1500
# (optional) The maximum length of label sequences during training (default: null).
maximum_labels_length: 30
# (optional) The width of the length buckets to select batch candidates from.
# A smaller value means less padding and increased efficiency. (default: 1).
length_bucket_width: 100
# (optional) The number of elements from which to sample during shuffling (default: 500000).
# Set 0 or null to disable shuffling, -1 to match the number of training examples.
sample_buffer_size: -1
# (optional) Number of checkpoints to average at the end of the training to the directory
# model_dir/avg (default: 0).
average_last_checkpoints: 6
# (optional) Evaluation options.
eval:
# (optional) The batch size to use (default: 32).
batch_size: 30
# (optional) Evaluate every this many steps (default: 5000).
steps: 5000
# (optional) Save evaluation predictions in model_dir/eval/.
save_eval_predictions: true
# (optional) Evalutator or list of evaluators that are called on the saved evaluation predictions.
# Available evaluators: bleu, rouge
external_evaluators: bleu
# (optional) Export a SavedModel when a metric has the best value so far (default: null).
export_on_best: bleu
# (optional) Early stopping condition.
# Should be read as: stop the training if "metric" did not improve more
# than "min_improvement" in the last "steps" evaluations.
early_stopping:
# (optional) The target metric name (default: "loss").
metric: bleu
# (optional) The metric should improve at least by this much to be considered as an improvement (default: 0)
min_improvement: 0.01
steps: 10
# (optional) Inference options.
infer:
# (optional) The batch size to use (default: 1).
batch_size: 10
# (optional) For compatible models, the number of hypotheses to output (default: 1).
# This sets the parameter params/num_hypotheses.
n_best: 1
# (optional) For compatible models, also output the score (default: false).
with_scores: false
# (optional) For compatible models, also output the alignments (can be: null, hard, soft,
# default: null).
with_alignments: null
# (optional) The width of the length buckets to select batch candidates from.
# If set, the test data will be sorted by length to increase the translation
# efficiency. The predictions will still be outputted in order as they are
# available (default: 0).
length_bucket_width: 100
# (optional) Scoring options.
score:
# (optional) The batch size to use (default: 64).
batch_size: 64
# (optional) Also report token-level cross entropy.
with_token_level: false
# (optional) Also output the alignments (can be: null, hard, soft, default: null).
with_alignments: null
```
The error appears when using:
```yaml
params:
decoding_noise:
- dropout: 0.1
- replacement: [0.1, ⦅unk⦆]
- permutation: 3
```
Do you want to decode with noise? If not, remove this block and it should run fine.
On a related note, it looks like you copied all parameters from http://opennmt.net/OpenNMT-tf/configuration.html. I would not recommend doing that. Instead, start from an empty configuration file and progressively add the parameters you want to set. | 2019-10-16T13:18:36 |
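For reference, the patch at the top of this row fixes the reported InvalidArgumentError by pinning the string-based noise and tokenization ops to the host device. A minimal sketch of the pattern (the string op below is a stand-in for the actual noise logic):
```python
import tensorflow as tf

@tf.function
def apply_noise(tokens):
  # String tensors cannot be copied to the GPU, which triggers the
  # "During Variant Host->Device Copy" error; run these ops on the CPU.
  with tf.device("cpu:0"):
    return tf.strings.lower(tokens)

print(apply_noise(tf.constant(["Hello", "World"])))
```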
OpenNMT/OpenNMT-tf | 525 | OpenNMT__OpenNMT-tf-525 | [
"524"
] | bec38f98de000361b0ce017cc86b9f8d0d2815c9 | diff --git a/opennmt/models/model.py b/opennmt/models/model.py
--- a/opennmt/models/model.py
+++ b/opennmt/models/model.py
@@ -268,7 +268,7 @@ def create_variables(self, optimizer=None):
[dim or 1 for dim in spec.shape.as_list()[1:]],
tf.constant("" if spec.dtype is tf.string else 1, dtype=spec.dtype)),
self.examples_inputter.input_signature())
- features = self.examples_inputter.make_features(features=features, training=True)
+ features = self.examples_inputter.make_features(features=features)
# Add the batch dimension back before calling the model.
features, labels = tf.nest.map_structure(lambda x: tf.expand_dims(x, 0), features)
| diff --git a/opennmt/tests/model_test.py b/opennmt/tests/model_test.py
--- a/opennmt/tests/model_test.py
+++ b/opennmt/tests/model_test.py
@@ -205,6 +205,7 @@ def testSequenceToSequenceWithGuidedAlignment(self, ga_type):
params["guided_alignment_type"] = ga_type
features_file, labels_file, data_config = self._makeToyEnDeData(with_alignments=True)
model.initialize(data_config, params=params)
+ model.create_variables()
dataset = model.examples_inputter.make_training_dataset(features_file, labels_file, 16)
features, labels = next(iter(dataset))
self.assertIn("alignment", labels)
| TypeError: 'NoneType' object is not iterable
OpenNMT-tf v2.1.0
Python v3.5.2
tensorflow-gpu v2.0.0
```
model_dir: toy_enru_transformer_withalign
data:
train_features_file: data/train.en
train_labels_file: data/train.ru
train_alignments: data/train.align
eval_features_file: data/valid.en
eval_labels_file: data/valid.ru
source_vocabulary: data/mdl-en.vocab
target_vocabulary: data/mdl-ru.vocab
params:
guided_alignment_type: "ce"
guided_alignment_weight: 1.0
train:
save_checkpoints_steps: 1000
eval:
external_evaluators: BLEU
export_on_best: bleu
early_stopping:
metric: bleu
min_improvement: 0.01
steps: 4
infer:
batch_size: 32
```
```
Traceback (most recent call last):
File "/usr/local/bin/onmt-main", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.5/dist-packages/opennmt/bin/main.py", line 189, in main
checkpoint_path=args.checkpoint_path)
File "/usr/local/lib/python3.5/dist-packages/opennmt/runner.py", line 205, in train
export_on_best=eval_config.get("export_on_best"))
File "/usr/local/lib/python3.5/dist-packages/opennmt/training.py", line 70, in __call__
self._model.create_variables(optimizer=self._optimizer)
File "/usr/local/lib/python3.5/dist-packages/opennmt/models/model.py", line 271, in create_variables
features = self.examples_inputter.make_features(features=features, training=True)
File "/usr/local/lib/python3.5/dist-packages/opennmt/models/sequence_to_sequence.py", line 426, in make_features
element, alignment = element
TypeError: 'NoneType' object is not iterable
```
https://github.com/OpenNMT/OpenNMT-tf/blob/bec38f98de000361b0ce017cc86b9f8d0d2815c9/opennmt/models/model.py#L271
https://github.com/OpenNMT/OpenNMT-tf/blob/bec38f98de000361b0ce017cc86b9f8d0d2815c9/opennmt/models/sequence_to_sequence.py#L433-L435
Trying to unpack a `NoneType` tuple. There's no check for None in `sequence_to_sequence.py` and in `model.py` we don't send a value for `element`.
This looks like a bug. Can you actually train a model on v2.1.0?
| Thanks for reporting! The error happens when using guided alignment. I will fix this ASAP. | 2019-10-18T09:17:57 |
OpenNMT/OpenNMT-tf | 527 | OpenNMT__OpenNMT-tf-527 | [
"526"
] | acfb1f47de8e7e3f063ad88763a1050ab1274038 | diff --git a/opennmt/utils/losses.py b/opennmt/utils/losses.py
--- a/opennmt/utils/losses.py
+++ b/opennmt/utils/losses.py
@@ -101,9 +101,9 @@ def guided_alignment_cost(attention_probs,
ValueError: if :obj:`cost_type` is invalid.
"""
if cost_type == "ce":
- loss = tf.keras.losses.CategoricalCrossentropy()
+ loss = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.SUM)
elif cost_type == "mse":
- loss = tf.keras.losses.MeanSquaredError()
+ loss = tf.keras.losses.MeanSquaredError(reduction=tf.keras.losses.Reduction.SUM)
else:
raise ValueError("invalid guided alignment cost: %s" % cost_type)
@@ -113,13 +113,16 @@ def guided_alignment_cost(attention_probs,
maxlen=tf.shape(attention_probs)[1],
dtype=attention_probs.dtype)
sample_weight = tf.expand_dims(sample_weight, -1)
+ normalizer = tf.reduce_sum(sequence_length)
else:
sample_weight = None
+ normalizer = tf.size(attention_probs)
cost = loss(
gold_alignment,
attention_probs,
sample_weight=sample_weight)
+ cost /= tf.cast(normalizer, cost.dtype)
return weight * cost
def regularization_penalty(regularization_type, scale, weights):
| diff --git a/opennmt/tests/losses_test.py b/opennmt/tests/losses_test.py
--- a/opennmt/tests/losses_test.py
+++ b/opennmt/tests/losses_test.py
@@ -30,6 +30,26 @@ def testRegulaizationMissingScaleValue(self):
with self.assertRaises(ValueError):
losses.regularization_penalty("l1_l2", 1e-4, [])
+ @parameterized.expand([
+ ["ce", False],
+ ["mse", False],
+ ["mse", True],
+ ])
+ def testGuidedAlignmentCostUnderDistributionStrategy(self, cost_type, with_length):
+ strategy = tf.distribute.MirroredStrategy(devices=["/cpu:0"])
+ attention_probs = tf.random.uniform([2, 5, 6])
+ gold_alignment = tf.random.uniform([2, 5, 6])
+ if with_length:
+ sequence_length = tf.constant([4, 5], dtype=tf.int32)
+ else:
+ sequence_length = None
+ with strategy.scope():
+ losses.guided_alignment_cost(
+ attention_probs,
+ gold_alignment,
+ sequence_length=sequence_length,
+ cost_type=cost_type)
+
if __name__ == "__main__":
tf.test.main()
| ValueError in guided alignment loss during training
Keras losses throw an exception when they are used within a distribution strategy scope and the reduction mode is unset.
```
ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE` for loss reduction when losses are used with `tf.distribute.Strategy` outside of the built-in training loops. You can implement `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` using global batch size like:
```
| I have managed to reproduce this issue.
OpenNMT-tf v2.1.0
Python v3.5.2
tensorflow-gpu v2.0.0
I'm trying to train a Transformer model with guided alignment. I used SentencePiece to tokenize EN-RU corpora with BPE and then ran `fast_align` with `grow-diag-final-and`.
```
▁I ▁found ▁them ▁in ▁the ▁garbage ▁.
▁But ▁look ▁how ▁many ▁there ▁are ▁.
▁Well ▁, ▁let ▁me ▁take ▁a ▁look ▁at ▁' ▁em ▁.
```
```
▁На шёл ▁на ▁помо й ке ▁.
▁Зато ▁смотри ▁как ▁их ▁много ▁.
▁Ладно ▁, ▁дай ▁взгля ну ▁поближе ▁.
```
```
1-0 3-2 5-3 6-6
0-0 1-1 2-2 3-3 6-5
0-0 1-1 2-2 6-3 10-6
```
Here is the actual basic configuration that I use:
```
model_dir: toy_enru_transformer_withalign
data:
train_features_file: data/train.en
train_labels_file: data/train.ru
train_alignments: data/train.align
eval_features_file: data/valid.en
eval_labels_file: data/valid.ru
source_vocabulary: data/mdl-en.vocab
target_vocabulary: data/mdl-ru.vocab
params:
guided_alignment_type: "ce"
guided_alignment_weight: 1.0
train:
batch_type: tokens
save_checkpoints_steps: 5000
keep_checkpoint_max: 8
eval:
external_evaluators: BLEU
infer:
batch_size: 32
```
Running this command produces the following output:
$ `onmt-main --model_type Transformer --config config/toy.yml --auto_config train --with_eval`
```
one NUMA node, so returning NUMA node zero
2019-10-18 09:59:04.801496: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/device:GPU:7 with 15021 MB memory) -> physical GPU (device: 7, name: Tesla V100-SXM2-16GB, pci bus id: 0000:00:1e.0, compute capability: 7.0)
2019-10-18 09:59:04.962543: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow_core/python/framework/constant_op.py:253: _EagerTensorBase.cpu (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.identity instead.
INFO:tensorflow:Saved checkpoint toy_enru_transformer_withalign/ckpt-0
INFO:tensorflow:Error reported to Coordinator: in converted code:
relative to /usr/local/lib/python3.5/dist-packages:
opennmt/training.py:87 _accumulate_gradients *
loss = self._model.compute_loss(outputs, target, training=True)
opennmt/models/sequence_to_sequence.py:333 compute_loss *
loss += losses.guided_alignment_cost(
opennmt/utils/losses.py:119 guided_alignment_cost *
cost = loss(
tensorflow_core/python/keras/losses.py:128 __call__
losses, sample_weight, reduction=self._get_reduction())
tensorflow_core/python/keras/losses.py:162 _get_reduction
'Please use `tf.keras.losses.Reduction.SUM` or '
ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE` for loss reduction when losses are used with `tf.distribute.Strategy` outside of the built-in training loops. You can implement `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` using global batch size like:
```
with strategy.scope():
loss_obj = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.reduction.NONE)
....
loss = tf.reduce_sum(loss_obj(labels, predictions)) * (1. / global_batch_size)
```
Please see https://www.tensorflow.org/alpha/tutorials/distribute/training_loops for more details.
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/training/coordinator.py", line 297, in stop_on_exception
yield
File "/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/distribute/mirrored_strategy.py", line 879, in run
self.main_result = self.main_fn(*self.main_args, **self.main_kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/autograph/impl/api.py", line 237, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in converted code:
relative to /usr/local/lib/python3.5/dist-packages:
opennmt/training.py:87 _accumulate_gradients *
loss = self._model.compute_loss(outputs, target, training=True)
opennmt/models/sequence_to_sequence.py:333 compute_loss *
loss += losses.guided_alignment_cost(
opennmt/utils/losses.py:119 guided_alignment_cost *
cost = loss(
tensorflow_core/python/keras/losses.py:128 __call__
losses, sample_weight, reduction=self._get_reduction())
tensorflow_core/python/keras/losses.py:162 _get_reduction
'Please use `tf.keras.losses.Reduction.SUM` or '
ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE` for loss reduction when losses are used with `tf.distribute.Strategy` outside of the built-in training loops. You can implement `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` using global batch size like:
```
with strategy.scope():
loss_obj = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.reduction.NONE)
....
loss = tf.reduce_sum(loss_obj(labels, predictions)) * (1. / global_batch_size)
```
Please see https://www.tensorflow.org/alpha/tutorials/distribute/training_loops for more details.
Traceback (most recent call last):
File "/usr/local/bin/onmt-main", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.5/dist-packages/opennmt/bin/main.py", line 189, in main
checkpoint_path=args.checkpoint_path)
File "/usr/local/lib/python3.5/dist-packages/opennmt/runner.py", line 205, in train
export_on_best=eval_config.get("export_on_best"))
File "/usr/local/lib/python3.5/dist-packages/opennmt/training.py", line 146, in __call__
for i, (loss, num_words) in enumerate(_forward()): # pylint: disable=no-value-for-parameter
File "/usr/local/lib/python3.5/dist-packages/opennmt/data/dataset.py", line 433, in _fun
outputs = _tf_fun()
File "/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/eager/def_function.py", line 457, in __call__
result = self._call(*args, **kwds)
File "/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/eager/def_function.py", line 503, in _call
self._initialize(args, kwds, add_initializers_to=initializer_map)
File "/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/eager/def_function.py", line 408, in _initialize
*args, **kwds))
File "/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/eager/function.py", line 1848, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/eager/function.py", line 2150, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/eager/function.py", line 2041, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/framework/func_graph.py", line 915, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/eager/def_function.py", line 358, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/framework/func_graph.py", line 905, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in converted code:
relative to /usr/local/lib/python3.5/dist-packages:
opennmt/data/dataset.py:429 _tf_fun *
return func(lambda: next(iterator))
opennmt/training.py:122 _forward *
per_replica_loss, per_replica_words = self._strategy.experimental_run_v2(
tensorflow_core/python/distribute/distribute_lib.py:760 experimental_run_v2
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
opennmt/training.py:87 _accumulate_gradients *
loss = self._model.compute_loss(outputs, target, training=True)
opennmt/models/sequence_to_sequence.py:333 compute_loss *
loss += losses.guided_alignment_cost(
opennmt/utils/losses.py:119 guided_alignment_cost *
cost = loss(
tensorflow_core/python/keras/losses.py:128 __call__
losses, sample_weight, reduction=self._get_reduction())
tensorflow_core/python/keras/losses.py:162 _get_reduction
'Please use `tf.keras.losses.Reduction.SUM` or '
ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE` for loss reduction when losses are used with `tf.distribute.Strategy` outside of the built-in training loops. You can implement `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` using global batch size like:
```
with strategy.scope():
loss_obj = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.reduction.NONE)
....
loss = tf.reduce_sum(loss_obj(labels, predictions)) * (1. / global_batch_size)
```
Please see https://www.tensorflow.org/alpha/tutorials/distribute/training_loops for more details.
```
Could you please elaborate on that? Maybe there is a workaround to train a Transformer with guided alignment on > v2.0.0?
This just requires a fix in the code. Transformer with guided alignment works by itself but not when used with the training utilities that run the model within a [distribution strategy](https://www.tensorflow.org/api_docs/python/tf/distribute).
OK. I'm really looking forward to this being fixed.
As I understand it, I can pass a reduction of `SUM` to the loss in `losses.py` and it should work correctly. Can this actually work? | 2019-10-18T11:48:02 |
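That is essentially what the patch above does, together with an explicit manual normalization since `SUM` no longer averages. A minimal sketch with dummy tensors (no length masking):
```python
import tensorflow as tf

attention_probs = tf.random.uniform([2, 5, 6])
gold_alignment = tf.random.uniform([2, 5, 6])

# The implicit AUTO reduction raises a ValueError inside a tf.distribute
# strategy scope, so request SUM explicitly and normalize manually.
loss_fn = tf.keras.losses.CategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.SUM)
cost = loss_fn(gold_alignment, attention_probs)
cost /= tf.cast(tf.size(attention_probs), cost.dtype)
```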
OpenNMT/OpenNMT-tf | 536 | OpenNMT__OpenNMT-tf-536 | [
"529"
] | 43b377988116600e96d46d9eaae3e2d462fe2923 | diff --git a/opennmt/decoders/self_attention_decoder.py b/opennmt/decoders/self_attention_decoder.py
--- a/opennmt/decoders/self_attention_decoder.py
+++ b/opennmt/decoders/self_attention_decoder.py
@@ -74,7 +74,7 @@ def maximum_sources(self):
@property
def support_alignment_history(self):
- return self.num_sources == 1
+ return True
def map_v1_weights(self, weights):
m = []
diff --git a/opennmt/layers/transformer.py b/opennmt/layers/transformer.py
--- a/opennmt/layers/transformer.py
+++ b/opennmt/layers/transformer.py
@@ -356,12 +356,12 @@ def __init__(self,
self.self_attention = TransformerLayerWrapper(
self.self_attention, dropout)
self.attention = []
- for _ in range(num_sources):
+ for i in range(num_sources):
attention = MultiHeadAttention(
num_heads,
num_units,
dropout=attention_dropout,
- return_attention=num_sources == 1)
+ return_attention=i == 0)
attention = TransformerLayerWrapper(
attention, dropout)
self.attention.append(attention)
| diff --git a/opennmt/tests/decoder_test.py b/opennmt/tests/decoder_test.py
--- a/opennmt/tests/decoder_test.py
+++ b/opennmt/tests/decoder_test.py
@@ -93,14 +93,15 @@ def _testDecoder(self,
training=True)
self.assertEqual(outputs.dtype, dtype)
output_time_dim = tf.shape(outputs)[1]
- if decoder.support_alignment_history and num_sources == 1:
+ if decoder.support_alignment_history:
self.assertIsNotNone(attention)
else:
self.assertIsNone(attention)
output_time_dim_val = self.evaluate(output_time_dim)
self.assertEqual(time_dim, output_time_dim_val)
- if decoder.support_alignment_history and num_sources == 1:
- attention_val, memory_time = self.evaluate([attention, tf.shape(memory)[1]])
+ if decoder.support_alignment_history:
+ first_memory = memory[0] if isinstance(memory, list) else memory
+ attention_val, memory_time = self.evaluate([attention, tf.shape(first_memory)[1]])
self.assertAllEqual([batch_size, time_dim, memory_time], attention_val.shape)
# Test 2D inputs.
@@ -111,7 +112,7 @@ def _testDecoder(self,
step,
state=initial_state)
self.assertEqual(outputs.dtype, dtype)
- if decoder.support_alignment_history and num_sources == 1:
+ if decoder.support_alignment_history:
self.assertIsNotNone(attention)
else:
self.assertIsNone(attention)
| Can we output the alignment of ParallelInputter?
I was using the ParallelInputter with a Transformer, and I found that we can't output the alignment. According to the tutorial:
> (optional) For compatible models, also output the alignments (can be: null, hard, soft,default: null).
> with_alignments: null
I want to know which part is not supported and how to get the alignment.
My model is similar to https://github.com/OpenNMT/OpenNMT-tf/blob/master/config/models/multi_source_transformer.py
| The alignment information is not returned in this case because it's unclear what to do: return alignment for first source? for second? for both?
What do you suggest?
For my specific case, I want the alignment for the first source.
For generic cases, maybe a parameter could be added to control which alignments to output.
As a rule of thumb, let's always return the attention of the first source. It seems better than returning nothing at the moment. | 2019-10-29T17:28:15 |
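With the patch above, multi-source decoders report the attention of the first source, so the existing inference option applies unchanged; a minimal configuration sketch:
```yaml
infer:
  with_alignments: hard  # alignments now refer to the first source
```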
OpenNMT/OpenNMT-tf | 543 | OpenNMT__OpenNMT-tf-543 | [
"542"
] | c6edefdca203a4b58418684295203c3b521cd5bc | diff --git a/opennmt/runner.py b/opennmt/runner.py
--- a/opennmt/runner.py
+++ b/opennmt/runner.py
@@ -155,7 +155,6 @@ def train(self, num_devices=1, with_eval=False, checkpoint_path=None):
data_config.get("train_labels_file"),
train_config["batch_size"],
batch_type=batch_type,
- batch_multiplier=num_devices,
batch_size_multiple=batch_size_multiple,
shuffle_buffer_size=train_config["sample_buffer_size"],
length_bucket_width=train_config["length_bucket_width"],
diff --git a/opennmt/training.py b/opennmt/training.py
--- a/opennmt/training.py
+++ b/opennmt/training.py
@@ -67,7 +67,13 @@ def __call__(self,
with self._strategy.scope():
self._model.create_variables(optimizer=self._optimizer)
variables = self._model.trainable_variables
- dataset = self._strategy.experimental_distribute_dataset(dataset)
+ base_dataset = dataset
+ # We prefer not to use experimental_distribute_dataset here because it
+ # sometimes fails to split the batches (noticed with tokens batch type).
+ # We also assume for now that we are training with a single worker
+ # otherwise we would need to correctly shard the input dataset.
+ dataset = self._strategy.experimental_distribute_datasets_from_function(
+ lambda _: base_dataset)
gradient_accumulator = optimizer_util.GradientAccumulator()
if self._mixed_precision:
| experimental_distribute_dataset does not always split the batches
In 4f8189b, we scaled the batch size by the number of devices based on `experimental_distribute_dataset` documentation:
> We will assume that the input dataset is batched by the global batch size
However, the method does not always split the input batches. This was at least found when using `batch_type: tokens` even though the first dimension is divisible by the number of replicas.
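The patch above therefore switches to `experimental_distribute_datasets_from_function`, which hands each replica the dataset as-is instead of relying on automatic batch splitting. A minimal sketch of the call on a toy dataset (the input function ignores the `tf.distribute.InputContext` argument, as in the patch):
```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy(devices=["/cpu:0"])
dataset = tf.data.Dataset.range(8).batch(2)

dist_dataset = strategy.experimental_distribute_datasets_from_function(
    lambda _: dataset)
```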
| 2019-11-07T11:10:53 |
||
OpenNMT/OpenNMT-tf | 545 | OpenNMT__OpenNMT-tf-545 | [
"544"
] | bbf48f4ca8ffd2804b36019f217d07e3704a560e | diff --git a/opennmt/training.py b/opennmt/training.py
--- a/opennmt/training.py
+++ b/opennmt/training.py
@@ -123,15 +123,40 @@ def _forward(next_fn):
with tf.summary.record_if(should_record_summaries):
with self._strategy.scope():
per_replica_source, per_replica_target = next_fn()
- per_replica_loss, per_replica_words = self._strategy.experimental_run_v2(
- _accumulate_gradients, args=(per_replica_source, per_replica_target))
- # TODO: these reductions could be delayed until _step is called.
- loss = self._strategy.reduce(tf.distribute.ReduceOp.MEAN, per_replica_loss, None)
- num_words = {
- k:self._strategy.reduce(tf.distribute.ReduceOp.SUM, v, None)
- for k, v in six.iteritems(per_replica_words)}
- return loss, num_words
+ def _run():
+ per_replica_loss, per_replica_words = self._strategy.experimental_run_v2(
+ _accumulate_gradients, args=(per_replica_source, per_replica_target))
+
+ # TODO: these reductions could be delayed until _step is called.
+ loss = self._strategy.reduce(tf.distribute.ReduceOp.MEAN, per_replica_loss, None)
+ num_words = {
+ k:self._strategy.reduce(tf.distribute.ReduceOp.SUM, v, None)
+ for k, v in six.iteritems(per_replica_words)}
+ return loss, num_words, False
+
+ def _skip():
+ loss = tf.constant(0, dtype=tf.float32)
+ num_words = {}
+ if "length" in per_replica_source:
+ num_words["source"] = tf.constant(0, dtype=tf.int32)
+ if "length" in per_replica_target:
+ num_words["target"] = tf.constant(0, dtype=tf.int32)
+ return loss, num_words, True
+
+ # We verify here that each replica receives a non empty batch. If not,
+ # we skip this iteration. This typically happens at the last iteration
+ # when training on a finite dataset.
+ # TODO: is there a simpler way to handle this case?
+ per_replica_non_empty_batch = self._strategy.experimental_run_v2(
+ lambda tensor: tf.math.count_nonzero(tf.shape(tensor)[0]),
+ args=(tf.nest.flatten(per_replica_source)[0],))
+ non_empty_batch_count = self._strategy.reduce(
+ tf.distribute.ReduceOp.SUM, per_replica_non_empty_batch, None)
+ return tf.cond(
+ tf.math.equal(non_empty_batch_count, self._strategy.num_replicas_in_sync),
+ true_fn=_run,
+ false_fn=_skip)
@tf.function
def _step():
@@ -147,7 +172,12 @@ def _step():
self._checkpoint.save(0)
self._model.visualize(self._checkpoint.model_dir)
- for i, (loss, num_words) in enumerate(_forward()): # pylint: disable=no-value-for-parameter
+ for i, (loss, num_words, skipped) in enumerate(_forward()): # pylint: disable=no-value-for-parameter
+ if skipped:
+ # We assume only the last partial batch can possibly be skipped.
+ tf.get_logger().warning("Batch %d is partial, i.e. some training replicas "
+ "received an empty batch as input. Skipping.", i + 1)
+ break
if tf.math.is_nan(loss):
raise RuntimeError("Model diverged with loss = NaN.")
if i == 0 or (i + 1) % accum_steps == 0:
| diff --git a/opennmt/tests/runner_test.py b/opennmt/tests/runner_test.py
--- a/opennmt/tests/runner_test.py
+++ b/opennmt/tests/runner_test.py
@@ -111,7 +111,8 @@ def testTrainDistribute(self):
"train": {
"batch_size": 2,
"length_bucket_width": None,
- "max_step": 145002 # Just train for 2 steps.
+ "max_step": 145003,
+ "single_pass": True, # Test we do not fail when a batch is missing for a replica.
}
}
runner = self._getTransliterationRunner(config)
| Possible error when training on a finite dataset with multiple GPUs
When the total number of batches is not a multiple of the number of replicas (finite dataset), the training can stop with an error because some replicas receive an empty batch.
This error can happen on master, or on v2.2.0 when TensorFlow fails to use batch splitting approach to feed the replicas.
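As an illustration, a minimal standalone sketch of the guard introduced by the patch above (toy tensors and replica counts, not the actual training code):
```python
import tensorflow as tf

# Run the step only if every replica received a non-empty batch; otherwise
# return a zero loss and a "skipped" flag, mirroring the patch above.
def forward_or_skip(batch, non_empty_batch_count, num_replicas):
    return tf.cond(
        tf.math.equal(non_empty_batch_count, num_replicas),
        true_fn=lambda: (tf.reduce_sum(batch), tf.constant(False)),
        false_fn=lambda: (tf.constant(0.0), tf.constant(True)))

# Only 1 of 2 replicas got a non-empty batch: the iteration is skipped.
loss, skipped = forward_or_skip(tf.ones([0, 4]), tf.constant(1), tf.constant(2))
print(skipped.numpy())  # True
```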
| 2019-11-07T16:29:55 |
|
OpenNMT/OpenNMT-tf | 569 | OpenNMT__OpenNMT-tf-569 | [
"568"
] | 4ce448a89d8a6f5f82b99ec7edb4d7810ed7661b | diff --git a/examples/serving/python/ende_client.py b/examples/serving/python/ende_client.py
--- a/examples/serving/python/ende_client.py
+++ b/examples/serving/python/ende_client.py
@@ -43,7 +43,7 @@ def _preprocess(self, texts):
def _postprocess(self, outputs):
texts = []
for tokens, length in zip(outputs["tokens"].numpy(), outputs["length"].numpy()):
- tokens = tokens[0][:length[0]]
+ tokens = tokens[0][:length[0]].tolist()
texts.append(self._tokenizer.detokenize(tokens))
return texts
| Error while running the exported model
Hi,
I was trying to run the example given at https://github.com/OpenNMT/OpenNMT-tf/tree/master/examples/serving/python.
I am getting the following error.
> Source: I am going.
Traceback (most recent call last):
File "ende_client.py", line 66, in <module>
main()
File "ende_client.py", line 60, in main
output = translator.translate([text])
File "ende_client.py", line 22, in translate
return self._postprocess(outputs)
File "ende_client.py", line 47, in _postprocess
texts.append(self._tokenizer.detokenize(tokens))
TypeError: detokenize(): incompatible function arguments. The following argument types are supported:
1. (self: pyonmttok.Tokenizer, tokens: list, features: object = None) -> str
> Invoked with: <pyonmttok.Tokenizer object at 0x147d10d0d538>, array([b'\xe2\x96\x81Ich', b'\xe2\x96\x81gehe', b'.'], dtype=object)
> WARNING:tensorflow:Unresolved object in checkpoint: (root).examples_inputter.features_inputter.ids_to_tokens._initializer
> WARNING:tensorflow:Unresolved object in checkpoint: (root).examples_inputter.labels_inputter.ids_to_tokens._initializer
> WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/alpha/guide/checkpoints#loading_mechanics for details.
>
I have the updated version of pyonmttok.
Thanks,
Sriram
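The patch above fixes this by converting the NumPy row of tokens to a Python list before calling detokenize; a minimal sketch of the same conversion (token values copied from the traceback):
```python
import numpy as np

tokens = np.array([b"\xe2\x96\x81Ich", b"\xe2\x96\x81gehe", b"."], dtype=object)
# pyonmttok's detokenize() expects a Python list, not a NumPy array:
tokens = tokens.tolist()
# text = tokenizer.detokenize(tokens)  # now matches the expected signature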
| 2019-12-08T14:13:52 |
||
OpenNMT/OpenNMT-tf | 577 | OpenNMT__OpenNMT-tf-577 | [
"576"
] | ea09d0f8c58e811ff906fb18c534305ac8be7ed7 | diff --git a/opennmt/bin/ark_to_records.py b/opennmt/bin/ark_to_records.py
--- a/opennmt/bin/ark_to_records.py
+++ b/opennmt/bin/ark_to_records.py
@@ -43,7 +43,7 @@ def consume_next_vector(ark_file):
if end:
break
- return idx, np.asarray(vector, dtype=tf.float32)
+ return idx, np.asarray(vector, dtype=np.float32)
def consume_next_text(text_file):
"""Consumes the next text line from `text_file`."""
| Bug in "onmt-ark-to-records" code
I have found a small bug in the code line referenced below. It causes the script to terminate with a `TypeError: data type not understood`. Just for the sake of completeness, this is caused by the fact that numpy doesn't understand the object `tf.float32`. I changed that to `float` and it worked as it was supposed to. I can create a PR for this, but I suppose it is too trivial to do so and claim a contribution, unless you want me to.
https://github.com/OpenNMT/OpenNMT-tf/blob/5809c293d7bc65d923274cfd56b3339fc4107af6/opennmt/bin/ark_to_records.py#L46
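A minimal standalone reproduction of the dtype mismatch (independent of the script itself):
```python
import numpy as np
import tensorflow as tf

vector = [0.1, 0.2, 0.3]
np.asarray(vector, dtype=np.float32)  # OK: NumPy understands its own dtypes
# np.asarray(vector, dtype=tf.float32)  # TypeError: data type not understood
```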
| 2019-12-16T08:38:55 |
||
OpenNMT/OpenNMT-tf | 586 | OpenNMT__OpenNMT-tf-586 | [
"547"
] | 1743778ab77c3ee922a98ba53dfc6968a993b78b | diff --git a/opennmt/layers/rnn.py b/opennmt/layers/rnn.py
--- a/opennmt/layers/rnn.py
+++ b/opennmt/layers/rnn.py
@@ -49,19 +49,6 @@ def get_initial_state(self, inputs=None, batch_size=None, dtype=None):
return self.cell.get_initial_state(
inputs=inputs, batch_size=batch_size, dtype=dtype)
-
-class _StackedRNNCells(tf.keras.layers.StackedRNNCells):
-
- # To pass the training flag to the cell, tf.keras.layers.RNN checks that the
- # cell call method explicitly takes the "training" argument, which
- # tf.keras.layers.StackedRNNCells do not.
- # TODO: remove this when this change is released:
- # https://github.com/tensorflow/tensorflow/commit/df2b252fa380994cd9236cc56b06557bcf12a9d3
- def call(self, inputs, states, constants=None, training=None, **kwargs):
- kwargs["training"] = training
- return super(_StackedRNNCells, self).call(inputs, states, constants=constants, **kwargs)
-
-
def make_rnn_cell(num_layers,
num_units,
dropout=0,
@@ -94,7 +81,7 @@ def make_rnn_cell(num_layers,
cell = RNNCellWrapper(
cell, output_dropout=dropout, residual_connection=residual_connections)
cells.append(cell)
- return _StackedRNNCells(cells)
+ return tf.keras.layers.StackedRNNCells(cells)
class _RNNWrapper(tf.keras.layers.Layer):
diff --git a/opennmt/utils/misc.py b/opennmt/utils/misc.py
--- a/opennmt/utils/misc.py
+++ b/opennmt/utils/misc.py
@@ -2,12 +2,10 @@
import collections
import copy
-import copyreg
import sys
import inspect
import heapq
import os
-import threading
import numpy as np
import tensorflow as tf
@@ -183,9 +181,6 @@ def index_structure(structure, path):
def clone_layer(layer):
"""Clones a layer."""
- # TODO: clean this up when this change is released:
- # https://github.com/tensorflow/tensorflow/commit/4fd10c487c7e287f99b9a1831316add453dcba04
- copyreg.pickle(threading.local, lambda _: (threading.local, []))
return copy.deepcopy(layer)
def gather_all_layers(layer):
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -36,7 +36,7 @@
"pyyaml==5.1.*",
"rouge==0.3.1",
"sacrebleu>=1.4.1,<2",
- "tensorflow-addons>=0.6,<0.7"
+ "tensorflow-addons>=0.7,<0.8"
],
extras_require={
"tests": tests_require,
| Tagger with CRF fails on sequences of length 1
```
File "/opt/anaconda3/bin/onmt-main", line 10, in <module>
sys.exit(main())
File "/opt/anaconda3/lib/python3.7/site-packages/opennmt/bin/main.py", line 189, in main
checkpoint_path=args.checkpoint_path)
File "/opt/anaconda3/lib/python3.7/site-packages/opennmt/runner.py", line 187, in train
mixed_precision=self._mixed_precision)
File "/opt/anaconda3/lib/python3.7/site-packages/opennmt/training.py", line 27, in __init__
raise ValueError("No optimizer is defined")
ValueError: No optimizer is defined
```
My command is: `onmt-main --config tag22.yml --model_type LstmCnnCrfTagger train`,
with OpenNMT-tf 2.2.1.
tag22.yml:
```
model_dir: tag4/
data:
  train_features_file: tag_train_q.txt
  train_labels_file: tag_train_l.txt
  eval_features_file: tag_test_q.txt
  eval_labels_file: tag_test_l.txt
  source_1_vocabulary: tag_src.txt
  source_2_vocabulary: char_vocab.txt
  target_vocabulary: tag.txt
train:
  batch_size: 32
  effective_batch_size: 320
  sample_buffer_size: 10000000
  bucket_width: 5
  train_steps: 1000000000
infer:
  n_best: 30
  batch_size: 1
  with_scores: false
  with_alignments: null
```
| You should define an optimizer in your configuration, e.g.:
```yaml
params:
  optimizer: Adam
  learning_rate: 0.001
```
after enabling this, I got a new error:
```
tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Invalid argument: Tried to stack elements of an empty list with non-fully-defined element_shape: [?,2]
[[{{node cond_2/then/_35/scan/TensorArrayV2Stack/TensorListStack}}]]
Traceback (most recent call last):
File "/opt/anaconda3/bin/onmt-main", line 10, in <module>
sys.exit(main())
File "/opt/anaconda3/lib/python3.7/site-packages/opennmt/bin/main.py", line 189, in main
checkpoint_path=args.checkpoint_path)
File "/opt/anaconda3/lib/python3.7/site-packages/opennmt/runner.py", line 196, in train
export_on_best=eval_config.get("export_on_best"))
File "/opt/anaconda3/lib/python3.7/site-packages/opennmt/training.py", line 175, in __call__
for i, (loss, num_words, skipped) in enumerate(_forward()): # pylint: disable=no-value-for-parameter
File "/opt/anaconda3/lib/python3.7/site-packages/opennmt/data/dataset.py", line 433, in _fun
outputs = _tf_fun()
File "/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 457, in __call__
result = self._call(*args, **kwds)
File "/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 487, in _call
return self._stateless_fn(*args, **kwds) # pylint: disable=not-callable
File "/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 1823, in __call__
return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access
File "/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 1141, in _filtered_call
self.captured_inputs)
File "/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 1224, in _call_flat
ctx, args, cancellation_manager=cancellation_manager)
File "/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 511, in call
ctx=ctx)
File "/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/execute.py", line 67, in quick_execute
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: Tried to stack elements of an empty list with non-fully-defined element_shape: [?,2]
[[{{node cond_2/then/_35/scan/TensorArrayV2Stack/TensorListStack}}]] [Op:__inference__tf_fun_8432]
Function call stack:
_tf_fun
```
BTW, it should be super(LstmCnnCrfTagger, ...) here:
https://github.com/OpenNMT/OpenNMT-tf/blob/master/opennmt/models/catalog.py#L125
> after enabling this, I got a new error
Thanks, I was able to reproduce. I will have a look.
> BTW, it should be super(LstmCnnCrfTagger, ...)
This one is fixed by the PR linked above.
The error is in the CRF module from TensorFlow Addons. I opened an issue there: https://github.com/tensorflow/addons/issues/694.
In the meantime, you could disable `crf_decoding`.
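For example, a custom model definition file could keep a `SequenceTagger` but pass `crf_decoding=False`; a rough sketch (the inputter/encoder stack and sizes are illustrative, not the exact `LstmCnnCrfTagger` layers):
```python
import opennmt

# Hypothetical model.py, to be passed to onmt-main with --model; it keeps a
# SequenceTagger but turns CRF decoding off (sizes are made up).
model = lambda: opennmt.models.SequenceTagger(
    inputter=opennmt.inputters.WordEmbedder(embedding_size=64),
    encoder=opennmt.encoders.RNNEncoder(num_layers=1, num_units=128),
    crf_decoding=False)
```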
Let's keep this open to track when the fix will be included in OpenNMT-tf.
Fixed in TFA by commit https://github.com/tensorflow/addons/commit/7f68da6c717c232bc0760ca1ab9ad13ce26f221a
Thanks for the fix @Squadrick! | 2020-01-10T16:31:45 |
|
OpenNMT/OpenNMT-tf | 587 | OpenNMT__OpenNMT-tf-587 | [
"516"
] | 26e9a782391ad9224620abff12b6c3c744d2fc2d | diff --git a/opennmt/inputters/text_inputter.py b/opennmt/inputters/text_inputter.py
--- a/opennmt/inputters/text_inputter.py
+++ b/opennmt/inputters/text_inputter.py
@@ -207,18 +207,35 @@ def _get_field(config, key, prefix=None, default=None, required=False):
raise ValueError("Missing field '%s' in the data configuration" % key)
return value
-def _create_tokens_to_ids_table(tokens, ids, num_oov_buckets):
- # TODO: consider reverting back to TextFileInitializer when this change is released:
- # https://github.com/tensorflow/tensorflow/pull/32773
- initializer = tf.lookup.KeyValueTensorInitializer(tokens, ids)
+def _create_vocabulary_tables(vocabulary_file, num_oov_buckets, as_asset=True):
+ vocabulary = Vocab.from_file(vocabulary_file)
+ vocabulary_size = len(vocabulary)
+ if as_asset:
+ tokens_to_ids_initializer = tf.lookup.TextFileInitializer(
+ vocabulary_file,
+ tf.string,
+ tf.lookup.TextFileIndex.WHOLE_LINE,
+ tf.int64,
+ tf.lookup.TextFileIndex.LINE_NUMBER,
+ vocab_size=vocabulary_size)
+ ids_to_tokens_initializer = tf.lookup.TextFileInitializer(
+ vocabulary_file,
+ tf.int64,
+ tf.lookup.TextFileIndex.LINE_NUMBER,
+ tf.string,
+ tf.lookup.TextFileIndex.WHOLE_LINE,
+ vocab_size=vocabulary_size)
+ else:
+ tokens = tf.constant(vocabulary.words, dtype=tf.string)
+ ids = tf.constant(list(range(vocabulary_size)), dtype=tf.int64)
+ tokens_to_ids_initializer = tf.lookup.KeyValueTensorInitializer(tokens, ids)
+ ids_to_tokens_initializer = tf.lookup.KeyValueTensorInitializer(ids, tokens)
if num_oov_buckets > 0:
- return tf.lookup.StaticVocabularyTable(initializer, num_oov_buckets)
+ tokens_to_ids = tf.lookup.StaticVocabularyTable(tokens_to_ids_initializer, num_oov_buckets)
else:
- return tf.lookup.StaticHashTable(initializer, 0)
-
-def _create_ids_to_tokens_table(ids, tokens):
- initializer = tf.lookup.KeyValueTensorInitializer(ids, tokens)
- return tf.lookup.StaticHashTable(initializer, constants.UNKNOWN_TOKEN)
+ tokens_to_ids = tf.lookup.StaticHashTable(tokens_to_ids_initializer, 0)
+ ids_to_tokens = tf.lookup.StaticHashTable(ids_to_tokens_initializer, constants.UNKNOWN_TOKEN)
+ return vocabulary_size + num_oov_buckets, tokens_to_ids, ids_to_tokens
class TextInputter(Inputter):
@@ -234,12 +251,10 @@ def __init__(self, num_oov_buckets=1, **kwargs):
def initialize(self, data_config, asset_prefix=""):
self.vocabulary_file = _get_field(
data_config, "vocabulary", prefix=asset_prefix, required=True)
- vocabulary = Vocab.from_file(self.vocabulary_file)
- self.vocabulary_size = len(vocabulary) + self.num_oov_buckets
- tokens = tf.constant(vocabulary.words, dtype=tf.string)
- ids = tf.constant(list(range(len(vocabulary))), dtype=tf.int64)
- self.tokens_to_ids = _create_tokens_to_ids_table(tokens, ids, self.num_oov_buckets)
- self.ids_to_tokens = _create_ids_to_tokens_table(ids, tokens)
+ self.vocabulary_size, self.tokens_to_ids, self.ids_to_tokens = _create_vocabulary_tables(
+ self.vocabulary_file,
+ self.num_oov_buckets,
+ as_asset=data_config.get("export_vocabulary_assets", True))
tokenizer_config = _get_field(data_config, "tokenization", prefix=asset_prefix)
self.tokenizer = tokenizers.make_tokenizer(tokenizer_config)
| diff --git a/opennmt/tests/runner_test.py b/opennmt/tests/runner_test.py
--- a/opennmt/tests/runner_test.py
+++ b/opennmt/tests/runner_test.py
@@ -233,9 +233,11 @@ def testScore(self):
lines = f.readlines()
self.assertEqual(len(lines), 5)
- def testExport(self):
+ @parameterized.expand([[True], [False]])
+ def testExport(self, export_vocabulary_assets):
config = {
"data": {
+ "export_vocabulary_assets": export_vocabulary_assets,
"source_tokenization": {
"mode": "char"
}
@@ -245,10 +247,23 @@ def testExport(self):
runner = self._getTransliterationRunner(config)
runner.export(export_dir)
self.assertTrue(tf.saved_model.contains_saved_model(export_dir))
+
+ # Check assets directories.
+ assets = os.listdir(os.path.join(export_dir, "assets"))
+ if export_vocabulary_assets:
+ self.assertLen(assets, 2)
+ else:
+ self.assertLen(assets, 0)
extra_assets_dir = os.path.join(export_dir, "assets.extra")
self.assertTrue(os.path.isdir(extra_assets_dir))
self.assertLen(os.listdir(extra_assets_dir), 1)
- imported = tf.saved_model.load(export_dir)
+
+ # Export directory could be relocated and does not reference the original vocabulary files.
+ shutil.rmtree(runner.model_dir)
+ export_dir_2 = os.path.join(self.get_temp_dir(), "export_2")
+ os.rename(export_dir, export_dir_2)
+ self.assertTrue(tf.saved_model.contains_saved_model(export_dir_2))
+ imported = tf.saved_model.load(export_dir_2)
translate_fn = imported.signatures["serving_default"]
outputs = translate_fn(
tokens=tf.constant([["آ" ,"ت" ,"ز" ,"م" ,"و" ,"ن"]]),
| exported model does not include vocabulary assets
When using version `2.1` and calling `onmt-main ... export`, the exported model's `assets` directory is empty, rather than including BPE vocab files as in previous versions. Is this intended behaviour? I could submit a patch if it isn't.
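This PR exposes the behavior through a new data option (name and default taken from the diff above):
```yaml
data:
  # Save the vocabularies as model assets during export, otherwise embed them
  # in the graph itself (default: True with this patch).
  export_vocabulary_assets: true
```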
| Yes, it is the expected behavior. Because of this TensorFlow issue https://github.com/tensorflow/tensorflow/issues/32770, the vocabulary is currently embedded in the graph itself. I plan to revise this once the TensorFlow fix is released. | 2020-01-10T16:52:49 |
OpenNMT/OpenNMT-tf | 644 | OpenNMT__OpenNMT-tf-644 | [
"640"
] | 5d8e2c188fd9dd129103d012e09da89176ce97fe | diff --git a/opennmt/training.py b/opennmt/training.py
--- a/opennmt/training.py
+++ b/opennmt/training.py
@@ -183,7 +183,8 @@ def _run_model(self, source, target):
training_loss = self._model.regularize_loss(
training_loss, variables=self._model.trainable_variables)
self._update_words_counter("source", source)
- self._update_words_counter("target", target)
+ if not self._model.unsupervised:
+ self._update_words_counter("target", target)
if first_call and self._is_master:
if self._checkpoint is not None:
self._model.visualize(self._checkpoint.model_dir)
| GPT2 training reports on source and target words
When training a language model (GPT2), we can see the following logs:
```
INFO:tensorflow:Evaluation result for step 20000: perplexity = 99.456184 ; loss = 4.599717
INFO:tensorflow:Step = 20100 ; steps/s = 2.09, source words/s = 6382, target words/s = 6382 ; Learning rate = 0.000250 ; Loss = 3.851096
```
There is no reason to see source & target word reports. If I understand the code correctly, `source` and `target` are defined as word counters in the `_accumulate_gradients_on_replica` function in training.py regardless of the training type. Can you give a hint on how/where we could correct this?
Thanks
| Right, good find. A quick fix would be to add a condition when declaring the target counter, for example:
```python
if not self._model.unsupervised:
self._update_words_counter("target", target)
``` | 2020-04-03T12:29:06 |
|
OpenNMT/OpenNMT-tf | 667 | OpenNMT__OpenNMT-tf-667 | [
"665"
] | 87c49411b9f264e9456aea56f88404b64c52df2e | diff --git a/opennmt/models/model.py b/opennmt/models/model.py
--- a/opennmt/models/model.py
+++ b/opennmt/models/model.py
@@ -235,6 +235,9 @@ def serve_function(self):
@tf.function(input_signature=(input_signature,))
def _run(features):
features = self.features_inputter.make_features(features=features.copy())
+ if isinstance(features, (list, tuple)):
+ # Special case for unsupervised inputters that always return a tuple (features, labels).
+ features = features[0]
_, predictions = self(features)
return predictions
| diff --git a/opennmt/tests/model_test.py b/opennmt/tests/model_test.py
--- a/opennmt/tests/model_test.py
+++ b/opennmt/tests/model_test.py
@@ -318,6 +318,15 @@ def testLanguageModel(self, mode):
prediction_heads=["tokens", "length"],
params=params)
+ def testLanguageModelServing(self):
+ _, data_config = self._makeToyLMData()
+ decoder = decoders.SelfAttentionDecoder(
+ 2, num_units=16, num_heads=4, ffn_inner_dim=32, num_sources=0)
+ model = models.LanguageModel(decoder, embedding_size=16)
+ model.initialize(data_config)
+ function = model.serve_function()
+ function.get_concrete_function()
+
def testLanguageModelInputter(self):
vocabulary_path = test_util.make_vocab(
os.path.join(self.get_temp_dir(), "vocab.txt"), ["a", "b", "c"])
| GPT2 Model export error
I am trying to export a GPT2Small model with the onmt-main command or runner.export.
```python
print(tf.__version__)
print(onmt.__version__)
2.1.1
2.9.3
```
I got:
```
runner.export(export_dir='/gdrive/My Drive/nmt/chat50w/models')
INFO:tensorflow:Using parameters:
data:
train_features_file: /gdrive/My Drive/nmt/chat50w/train_tokens.txt
vocabulary: /gdrive/My Drive/nmt/chat50w/vocab.txt
eval:
batch_size: 32
infer:
batch_size: 16
length_bucket_width: 1
model_dir: /gdrive/My Drive/nmt/chat50w/run/
params:
average_loss_in_time: true
decay_params:
max_step: 1000000
warmup_steps: 2000
decay_type: CosineAnnealing
learning_rate: 0.00025
num_hypotheses: 1
optimizer: Adam
score:
batch_size: 64
train:
batch_size: 32
batch_type: examples
length_bucket_width: 1
maximum_features_length: 512
sample_buffer_size: 500000
save_summary_steps: 100
INFO:tensorflow:Restored checkpoint /gdrive/My Drive/nmt/chat50w/run/ckpt-615000
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-35-9a691167a69a> in <module>()
----> 1 runner.export(export_dir='/gdrive/My Drive/nmt/chat50w/models')
14 frames
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/func_graph.py in wrapper(*args, **kwargs)
966 except Exception as e: # pylint:disable=broad-except
967 if hasattr(e, "ag_error_metadata"):
--> 968 raise e.ag_error_metadata.to_exception(e)
969 else:
970 raise
TypeError: in converted code:
/usr/local/lib/python3.6/dist-packages/opennmt/models/model.py:238 _run *
_, predictions = self(features)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py:778 __call__
outputs = call_fn(cast_inputs, *args, **kwargs)
/usr/local/lib/python3.6/dist-packages/opennmt/models/language_model.py:57 call *
ids, length = features["ids"], features["length"]
TypeError: tuple indices must be integers or slices, not str
```
I think the problem is the function 'serve_function' in opennmt/models/model.py
When I use custom serve_function, it works:
```python
def serve_function(model):
"""Returns a function for serving this model.
Returns:
A ``tf.function``.
"""
# Set name attribute of the input TensorSpec.
input_signature = {
name: tf.TensorSpec.from_spec(spec, name=name)
for name, spec in model.features_inputter.input_signature().items()
}
@tf.function(input_signature=(input_signature,))
def _run(features):
# I THINK THIS IS THE PROBLEM
# OLD
# features = self.features_inputter.make_features(features=features.copy())
# NEW
features, _ = model.features_inputter.make_features(features=features.copy())
_, predictions = model(features)
return predictions
return _run
# USE custom method to build model and export with custom serve_function
config = runner._finalize_config()
model = runner._init_model(config)
checkpoint = checkpoint_util.Checkpoint.from_config(config, model)
checkpoint.restore(checkpoint_path=None, weights_only=True)
tf.saved_model.save(model, '/gdrive/My Drive/nmt/chat50w/model_export', signatures=serve_function(model))
```
| 2020-05-25T07:43:54 |
|
OpenNMT/OpenNMT-tf | 671 | OpenNMT__OpenNMT-tf-671 | [
"655"
] | 508f9e7bd69041c5e16ef5344418bf959ed2ae22 | diff --git a/opennmt/utils/fmeasure.py b/opennmt/utils/fmeasure.py
--- a/opennmt/utils/fmeasure.py
+++ b/opennmt/utils/fmeasure.py
@@ -9,21 +9,15 @@ def fmeasure(ref_path,
with open(ref_path) as ref_fp, open(hyp_path) as hyp_fp:
list_null_tags = ["X", "null", "NULL", "Null", "O"]
listtags = []
- linecpt = 0
classref = []
classrandom = []
classhyp = []
nbrtagref = {}
nbrtaghyp = {}
nbrtagok = {}
- for tag in listtags:
- nbrtagref[tag] = 0
- nbrtaghyp[tag] = 0
- nbrtagok[tag] = 0
for line in ref_fp:
line = line.strip()
tabline = line.split(' ')
- tagcpt = 0
lineref = []
for tag in tabline:
lineref.append(tag)
@@ -31,36 +25,29 @@ def fmeasure(ref_path,
nbrtagref[tag] = nbrtagref[tag]+1
else:
nbrtagref[tag] = 1
- tagcpt = tagcpt+1
classref.append(lineref)
- linecpt = linecpt+1
- linecpt = 0
- for line in hyp_fp:
+ for line, lineref in zip(hyp_fp, classref):
line = line.strip()
tabline = line.split(' ')
- tagcpt = 0
linehyp = []
linerandom = []
- for tag in tabline:
+ for tagcpt, tag in enumerate(tabline):
linehyp.append(tag)
if tag not in listtags:
listtags.append(tag)
linerandom.append(tag)
- if tag == classref[linecpt][tagcpt]:
+ if tagcpt < len(lineref) and tag == lineref[tagcpt]:
if tag in nbrtagok.keys():
nbrtagok[tag] = nbrtagok[tag]+1
else:
nbrtagok[tag] = 1
- tagcpt = tagcpt+1
if tag in nbrtaghyp.keys():
nbrtaghyp[tag] = nbrtaghyp[tag]+1
else:
nbrtaghyp[tag] = 1
classhyp.append(linehyp)
classrandom.append(linerandom)
- linecpt = linecpt+1
- tagcpt = 0
fullprecision = 0
fullrecall = 0
precision = {}
@@ -87,12 +74,11 @@ def fmeasure(ref_path,
fulltagok = fulltagok+nbrtagok[tag]
fulltaghyp = fulltaghyp+nbrtaghyp[tag]
fulltagref = fulltagref+nbrtagref[tag]
-# fullprecision = fullprecision+precision[tag]
-# fullrecall = fullrecall+recall[tag]
- tagcpt = tagcpt+1
- fullprecision = round(100*fulltagok/fulltaghyp, 2)/100
- fullrecall = round(100*fulltagok/fulltagref, 2)/100
- fullfmeasure = (round((200*fullprecision*fullrecall)/(fullprecision+fullrecall), 2))/100
+ fullprecision = fulltagok / fulltaghyp if fulltaghyp != 0 else 0
+ fullrecall = fulltagok / fulltagref if fulltagref != 0 else 0
+ fullfmeasure = (
+ (2 * fullprecision * fullrecall) / (fullprecision + fullrecall)
+ if (fullprecision + fullrecall) != 0 else 0)
if return_precision_only:
return fullprecision
if return_recall_only:
| diff --git a/opennmt/tests/scorers_test.py b/opennmt/tests/scorers_test.py
--- a/opennmt/tests/scorers_test.py
+++ b/opennmt/tests/scorers_test.py
@@ -2,36 +2,49 @@
import tensorflow as tf
+from opennmt.tests import test_util
from opennmt.utils import scorers
class ScorersTest(tf.test.TestCase):
- def _make_perfect_hypothesis_file(self):
- ref_path = os.path.join(self.get_temp_dir(), "ref.txt")
- hyp_path = os.path.join(self.get_temp_dir(), "hyp.txt")
- with open(ref_path, "w") as ref_file, open(hyp_path, "w") as hyp_file:
- text = "Hello world !\nHow is it going ?\n"
- ref_file.write(text)
- hyp_file.write(text)
- return ref_path, hyp_path
+ def _run_scorer(self, scorer, refs, hyps):
+ ref_path = test_util.make_data_file(os.path.join(self.get_temp_dir(), "ref.txt"), refs)
+ hyp_path = test_util.make_data_file(os.path.join(self.get_temp_dir(), "hyp.txt"), hyps)
+ return scorer(ref_path, hyp_path)
def testBLEUScorer(self):
- bleu_scorer = scorers.BLEUScorer()
- ref_path, hyp_path = self._make_perfect_hypothesis_file()
- score = bleu_scorer(ref_path, hyp_path)
+ refs = ["Hello world !", "How is it going ?"]
+ scorer = scorers.BLEUScorer()
+ score = self._run_scorer(scorer, refs, refs)
self.assertEqual(100, int(score))
def testROUGEScorer(self):
- rouge_scorer = scorers.ROUGEScorer()
- ref_path, hyp_path = self._make_perfect_hypothesis_file()
- score = rouge_scorer(ref_path, hyp_path)
+ refs = ["Hello world !", "How is it going ?"]
+ scorer = scorers.ROUGEScorer()
+ score = self._run_scorer(scorer, refs, refs)
self.assertIsInstance(score, dict)
self.assertIn("rouge-l", score)
self.assertIn("rouge-1", score)
self.assertIn("rouge-2", score)
self.assertAlmostEqual(1.0, score["rouge-1"])
+ def testPRFScorer(self):
+ scorer = scorers.PRFScorer()
+ score = self._run_scorer(scorer, refs=["TAG O TAG O O TAG TAG"], hyps=["TAG O O O TAG TAG O"])
+ expected_precision = 2 / 3
+ expected_recall = 2 / 4
+ expected_fscore = (
+ 2 * (expected_precision * expected_recall) / (expected_precision + expected_recall))
+ self.assertAlmostEqual(score["precision"], expected_precision, places=6)
+ self.assertAlmostEqual(score["recall"], expected_recall, places=6)
+ self.assertAlmostEqual(score["fmeasure"], expected_fscore, places=6)
+
+ def testPRFScorerEmptyLine(self):
+ scorer = scorers.PRFScorer()
+ self._run_scorer(scorer, [""], ["O TAG"])
+ self._run_scorer(scorer, ["O TAG"], [""])
+
def testMakeScorers(self):
def _check_scorers(scorers, instances):
| PRF evaluator: list index out of range
Hi!
I'm getting `list index out of range` when the prf evaluator is used.
**Config:**
Model: TransformerRelative
```yaml
params:
  beam_width: 1
train:
  maximum_features_length: 50
  maximum_labels_length: 50
  save_summary_steps: 100
  sample_buffer_size: 1000000
  keep_checkpoint_max: 20
  save_checkpoints_steps: 5000
  max_step: 2000000
eval:
  batch_size: 32
  steps: 5000
  export_on_best: bleu
  external_evaluators: [ "bleu", "prf", "wer" ]
infer:
  batch_size: 1024
```
**Full stack:**
```
W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Cancelled: Operation was cancelled
Traceback (most recent call last):
  File "/home/dima/anaconda3/envs/tf/bin/onmt-main", line 8, in <module>
    sys.exit(main())
  File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/bin/main.py", line 224, in main
    hvd=hvd)
  File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/runner.py", line 217, in train
    moving_average_decay=train_config.get("moving_average_decay"))
  File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/training.py", line 118, in __call__
    early_stop = self._evaluate(evaluator, step, moving_average=moving_average)
  File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/training.py", line 140, in _evaluate
    evaluator(step)
  File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/evaluation.py", line 299, in __call__
    score = scorer(self._labels_file, output_path)
  File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/utils/scorers.py", line 132, in __call__
    precision_score, recall_score, fmeasure_score = fmeasure(ref_path, hyp_path)
  File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/utils/fmeasure.py", line 49, in fmeasure
    if tag == classref[linecpt][tagcpt]:
IndexError: list index out of range
```
Can I help you with the issue? I'm not familiar with the code base, but I can try to reproduce it locally and extract the context if necessary.
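For reference, a small worked example matching the test added above (tag counts taken from that ref/hyp pair):
```python
# ref: "TAG O TAG O O TAG TAG" -> 4 non-null tags
# hyp: "TAG O O O TAG TAG O"   -> 3 non-null tags, 2 matching the reference
matches, hyp_tags, ref_tags = 2, 3, 4
precision = matches / hyp_tags                            # 2/3
recall = matches / ref_tags                               # 1/2
fmeasure = 2 * precision * recall / (precision + recall)  # 4/7 ~= 0.571
```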
| Hello,
@cservan Does the error mean anything to you?
> Can I help you with the issue? I'm not familiar with the code base, but I can try to reproduce it locally and extract the context if necessary.
Sure, that would be helpful. If you want to go further and fix the issue, you could first try to add a test in `opennmt/tests/scorers_test.py` to reproduce the error. (You can find more information on how to run tests [here](https://github.com/OpenNMT/OpenNMT-tf/blob/master/CONTRIBUTING.md#testing).)
Thanks @guillaumekln ,
I'll take a look, it seems it comes from the "prf" scorer.
Cheers,
C.
It is most likely related to an empty hypothesis or reference. | 2020-05-28T14:24:10 |
OpenNMT/OpenNMT-tf | 696 | OpenNMT__OpenNMT-tf-696 | [
"695"
] | 9079cc474244bae77834f668b2c4ba5cdf57bcb5 | diff --git a/opennmt/layers/transformer.py b/opennmt/layers/transformer.py
--- a/opennmt/layers/transformer.py
+++ b/opennmt/layers/transformer.py
@@ -272,8 +272,8 @@ def _compute_kv(x):
keys_length,
self.maximum_relative_position,
with_cache=bool(cache))
- relative_repr_keys = tf.gather(self.relative_position_keys, relative_pos)
- relative_repr_values = tf.gather(self.relative_position_values, relative_pos)
+ relative_repr_keys = tf.nn.embedding_lookup(self.relative_position_keys, relative_pos)
+ relative_repr_values = tf.nn.embedding_lookup(self.relative_position_values, relative_pos)
else:
relative_repr_keys = None
relative_repr_values = None
| diff --git a/opennmt/tests/transformer_test.py b/opennmt/tests/transformer_test.py
--- a/opennmt/tests/transformer_test.py
+++ b/opennmt/tests/transformer_test.py
@@ -142,6 +142,20 @@ def testMultiHeadSelfAttentionRelativePositionsWithCache(self):
cache = (tf.zeros([4, 4, 0, 5]), tf.zeros([4, 4, 0, 5]))
_, cache = attention(x, cache=cache)
+ def testMultiHeadSelfAttentionRelativeGradients(self):
+ attention = transformer.MultiHeadAttention(4, 20, maximum_relative_position=6)
+
+ @tf.function
+ def _compute_gradients_in_function(x):
+ with tf.GradientTape() as tape:
+ y, _ = attention(x)
+ loss = tf.math.reduce_sum(y)
+ gradients = tape.gradient(loss, attention.weights)
+ for gradient in gradients:
+ self.assertTrue(gradient.shape.is_fully_defined())
+
+ _compute_gradients_in_function(tf.random.uniform([4, 1, 10]))
+
def testMultiHeadAttention(self):
attention = transformer.MultiHeadAttention(4, 20)
queries = tf.random.uniform([4, 5, 10])
| Crash when training TransformerBaseRelative model
When training a TransformerBaseRelative model using the following command:
```
onmt-main --model_type TransformerBaseRelative --config config.yml --auto_config train --with_eval
```
I receive the following error:
```
Traceback (most recent call last):
File "C:\Users\damie\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\damie\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\damie\AppData\Local\pypoetry\Cache\virtualenvs\nlp-mt-testing-U3-1H1OD-py3.7\Scripts\onmt-main.exe\__main__.py", line 9, in <module>
File "c:\users\damie\appdata\local\pypoetry\cache\virtualenvs\nlp-mt-testing-u3-1h1od-py3.7\lib\site-packages\opennmt\bin\main.py", line 224, in main
hvd=hvd)
File "c:\users\damie\appdata\local\pypoetry\cache\virtualenvs\nlp-mt-testing-u3-1h1od-py3.7\lib\site-packages\opennmt\runner.py", line 236, in train
moving_average_decay=train_config.get("moving_average_decay"))
File "c:\users\damie\appdata\local\pypoetry\cache\virtualenvs\nlp-mt-testing-u3-1h1od-py3.7\lib\site-packages\opennmt\training.py", line 93, in __call__
for loss in self._steps(dataset, accum_steps=accum_steps, report_steps=report_steps):
File "c:\users\damie\appdata\local\pypoetry\cache\virtualenvs\nlp-mt-testing-u3-1h1od-py3.7\lib\site-packages\opennmt\training.py", line 223, in _steps
_step()
File "c:\users\damie\appdata\local\pypoetry\cache\virtualenvs\nlp-mt-testing-u3-1h1od-py3.7\lib\site-packages\tensorflow\python\eager\def_function.py", line 580, in __call__
result = self._call(*args, **kwds)
File "c:\users\damie\appdata\local\pypoetry\cache\virtualenvs\nlp-mt-testing-u3-1h1od-py3.7\lib\site-packages\tensorflow\python\eager\def_function.py", line 627, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "c:\users\damie\appdata\local\pypoetry\cache\virtualenvs\nlp-mt-testing-u3-1h1od-py3.7\lib\site-packages\tensorflow\python\eager\def_function.py", line 506, in _initialize
*args, **kwds))
File "c:\users\damie\appdata\local\pypoetry\cache\virtualenvs\nlp-mt-testing-u3-1h1od-py3.7\lib\site-packages\tensorflow\python\eager\function.py", line 2446, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "c:\users\damie\appdata\local\pypoetry\cache\virtualenvs\nlp-mt-testing-u3-1h1od-py3.7\lib\site-packages\tensorflow\python\eager\function.py", line 2777, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "c:\users\damie\appdata\local\pypoetry\cache\virtualenvs\nlp-mt-testing-u3-1h1od-py3.7\lib\site-packages\tensorflow\python\eager\function.py", line 2667, in _create_graph_function
capture_by_value=self._capture_by_value),
File "c:\users\damie\appdata\local\pypoetry\cache\virtualenvs\nlp-mt-testing-u3-1h1od-py3.7\lib\site-packages\tensorflow\python\framework\func_graph.py", line 981, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "c:\users\damie\appdata\local\pypoetry\cache\virtualenvs\nlp-mt-testing-u3-1h1od-py3.7\lib\site-packages\tensorflow\python\eager\def_function.py", line 441, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "c:\users\damie\appdata\local\pypoetry\cache\virtualenvs\nlp-mt-testing-u3-1h1od-py3.7\lib\site-packages\tensorflow\python\framework\func_graph.py", line 968, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
c:\users\damie\appdata\local\pypoetry\cache\virtualenvs\nlp-mt-testing-u3-1h1od-py3.7\lib\site-packages\opennmt\training.py:215 _step *
return self._step()
c:\users\damie\appdata\local\pypoetry\cache\virtualenvs\nlp-mt-testing-u3-1h1od-py3.7\lib\site-packages\opennmt\training.py:314 _step *
self._gradient_accumulator.reset()
c:\users\damie\appdata\local\pypoetry\cache\virtualenvs\nlp-mt-testing-u3-1h1od-py3.7\lib\site-packages\opennmt\optimizers\utils.py:124 reset *
gradient.assign(tf.zeros(gradient.shape, dtype=gradient.dtype))
c:\users\damie\appdata\local\pypoetry\cache\virtualenvs\nlp-mt-testing-u3-1h1od-py3.7\lib\site-packages\tensorflow\python\ops\array_ops.py:2677 wrapped **
tensor = fun(*args, **kwargs)
c:\users\damie\appdata\local\pypoetry\cache\virtualenvs\nlp-mt-testing-u3-1h1od-py3.7\lib\site-packages\tensorflow\python\ops\array_ops.py:2730 zeros
shape = ops.convert_to_tensor(shape, dtype=dtypes.int32)
c:\users\damie\appdata\local\pypoetry\cache\virtualenvs\nlp-mt-testing-u3-1h1od-py3.7\lib\site-packages\tensorflow\python\framework\ops.py:1341 convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
c:\users\damie\appdata\local\pypoetry\cache\virtualenvs\nlp-mt-testing-u3-1h1od-py3.7\lib\site-packages\tensorflow\python\framework\constant_op.py:338 _tensor_shape_tensor_conversion_function
"Cannot convert a partially known TensorShape to a Tensor: %s" % s)
ValueError: Cannot convert a partially known TensorShape to a Tensor: (None, 64)
```
This only seems to occur in OpenNMT-tf 2.11. My config file does not set any hyperparameters. It only configures the location of data. Training using TransformerBase does not produce the error. I am running on Windows 10.
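The patch above swaps `tf.gather` for `tf.nn.embedding_lookup` when gathering the relative position representations, so the gradient with respect to the table keeps a fully defined shape (which is what the new test asserts). A standalone sketch of that lookup and its gradient (sizes are illustrative):
```python
import tensorflow as tf

table = tf.Variable(tf.random.uniform([13, 8]))  # e.g. relative position keys
ids = tf.random.uniform([4, 7], maxval=13, dtype=tf.int32)

with tf.GradientTape() as tape:
    loss = tf.reduce_sum(tf.nn.embedding_lookup(table, ids))

grad = tape.gradient(loss, table)   # sparse IndexedSlices gradient
dense = tf.convert_to_tensor(grad)  # densifies to the table's shape
print(dense.shape)                  # (13, 8)
```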
| 2020-06-23T09:28:43 |
|
OpenNMT/OpenNMT-tf | 702 | OpenNMT__OpenNMT-tf-702 | [
"701"
] | f28e56f6d10235250545ec7f6f802ba2f6b8f57b | diff --git a/opennmt/bin/main.py b/opennmt/bin/main.py
--- a/opennmt/bin/main.py
+++ b/opennmt/bin/main.py
@@ -45,13 +45,15 @@ def _prefix_paths(prefix, paths):
for i, path in enumerate(paths):
paths[i] = _prefix_paths(prefix, path)
return paths
- else:
+ elif isinstance(paths, str):
path = paths
new_path = os.path.join(prefix, path)
if tf.io.gfile.exists(new_path):
return new_path
else:
return path
+ else:
+ return paths
def main():
parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
| Bug with multiple source files and data_dir configuration
Sample config, when you add the `export_vocabulary_assets` option together with multiple source files:
```
model_dir: model
data:
# (optional) During export save the vocabularies as model assets, otherwise embed
# them in the graph itself (default: True).
export_vocabulary_assets: true
train_features_file:
- train/src.txt
- train/ner.txt
```
the crash:
```
File "/usr/local/lib/python3.6/dist-packages/opennmt/bin/main.py", line 200, in main
config["data"] = _prefix_paths(args.data_dir, config["data"])
File "/usr/local/lib/python3.6/dist-packages/opennmt/bin/main.py", line 42, in _prefix_paths
paths[key] = _prefix_paths(prefix, path)
File "/usr/local/lib/python3.6/dist-packages/opennmt/bin/main.py", line 50, in _prefix_paths
new_path = os.path.join(prefix, path)
File "/usr/lib/python3.6/posixpath.py", line 94, in join
genericpath._check_arg_types('join', a, *p)
File "/usr/lib/python3.6/genericpath.py", line 149, in _check_arg_types
(funcname, s.__class__.__name__)) from None
TypeError: join() argument must be str or bytes, not 'bool'
```
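The failure reduces to `_prefix_paths` recursing into every value under `data` and calling `os.path.join` on the boolean; the patch above adds an `isinstance(paths, str)` guard. A standalone reproduction:
```python
import os

try:
    os.path.join("data_dir", True)  # the export_vocabulary_assets boolean
except TypeError as e:
    print(e)  # e.g. join() argument must be str or bytes, not 'bool'
```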
| This is also giving me issues with the data_dir config. I guess it is trying to update the paths of the files inside "data:",
but in the sample on the web all these keys are inside the "data:" key:
```
# (optional) Pretrained embedding configuration.
source_embedding:
  path: data/glove/glove-100000.txt
  with_header: True
  case_insensitive: True
  trainable: False
``` | 2020-07-06T15:51:56 |
|
OpenNMT/OpenNMT-tf | 823 | OpenNMT__OpenNMT-tf-823 | [
"822"
] | 12a3c55ccf2bb246f6dc236b012ee04611e99f24 | diff --git a/opennmt/bin/main.py b/opennmt/bin/main.py
--- a/opennmt/bin/main.py
+++ b/opennmt/bin/main.py
@@ -24,8 +24,22 @@
}
-def _set_log_level(log_level):
- tf.get_logger().setLevel(log_level)
+def _initialize_logging(log_level):
+ logger = tf.get_logger()
+ logger.setLevel(log_level)
+
+ # Configure the TensorFlow logger to use the same log format as the TensorFlow C++ logs.
+ for handler in list(logger.handlers):
+ logger.removeHandler(handler)
+ formatter = logging.Formatter(
+ fmt="%(asctime)s.%(msecs)03d000: %(levelname).1s %(filename)s:%(lineno)d] %(message)s",
+ datefmt="%Y-%m-%d %H:%M:%S",
+ )
+ handler = logging.StreamHandler()
+ handler.setFormatter(formatter)
+ logger.addHandler(handler)
+
+ # Align the TensorFlow C++ log level with the Python level.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = str(
_PYTHON_TO_TENSORFLOW_LOGGING_LEVEL[log_level]
)
@@ -264,7 +278,7 @@ def main():
):
args.features_file = args.features_file[0]
- _set_log_level(getattr(logging, args.log_level))
+ _initialize_logging(getattr(logging, args.log_level))
tf.config.threading.set_intra_op_parallelism_threads(
args.intra_op_parallelism_threads
)
| Feature request: Log time for events during training
Hi Guillaume and others, thanks a lot for the great work you are doing!
I was wondering if it's possible to add time tracking to the logger during training. I tried to search the forum and Google for relevant issues but didn't find anything. I think it would be helpful for everyone, so please tell me what you think about it.
Thank you,
Alex
| 2021-04-06T16:42:02 |
||
OpenNMT/OpenNMT-tf | 834 | OpenNMT__OpenNMT-tf-834 | [
"833"
] | cf5496c2a861b70688c1949d4549bc9c7da002c0 | diff --git a/opennmt/decoders/decoder.py b/opennmt/decoders/decoder.py
--- a/opennmt/decoders/decoder.py
+++ b/opennmt/decoders/decoder.py
@@ -341,7 +341,7 @@ def _body(step, state, inputs, outputs_ta, attention_ta):
step = tf.constant(0, dtype=tf.int32)
outputs_ta = tf.TensorArray(inputs.dtype, size=max_step)
- attention_ta = tf.TensorArray(tf.float32, size=max_step)
+ attention_ta = tf.TensorArray(inputs.dtype, size=max_step)
_, state, _, outputs_ta, attention_ta = tf.while_loop(
lambda *arg: True,
| diff --git a/opennmt/tests/model_test.py b/opennmt/tests/model_test.py
--- a/opennmt/tests/model_test.py
+++ b/opennmt/tests/model_test.py
@@ -382,6 +382,19 @@ def testSequenceToSequenceServing(self):
op_types = set(op.type for op in concrete_function.graph.get_operations())
self.assertNotIn("Addons>GatherTree", op_types)
+ @test_util.run_with_mixed_precision
+ def testRNNWithMixedPrecision(self):
+ features_file, labels_file, data_config = self._makeToyEnDeData()
+ model = models.LuongAttention()
+ model.initialize(data_config)
+ dataset = model.examples_inputter.make_training_dataset(
+ features_file, labels_file, 16
+ )
+ features, labels = next(iter(dataset))
+ outputs, _ = model(features, labels=labels, training=True)
+ self.assertEqual(outputs["logits"].dtype, tf.float16)
+ self.assertEqual(outputs["attention"].dtype, tf.float16)
+
@parameterized.expand(
[
[tf.estimator.ModeKeys.TRAIN],
| Mixed Precision error training with RNN models
I am getting an error when running Sequence to Sequence models with mixed precision.
Models I have encountered this issue with:
- NMTSmallV1
- NMTBigV1
The models work fine if I remove the `--mixed_precision` option.
I can also run a Transformer model fine with mixed precision, so it seems to only affect the Sequence to Sequence models.
TensorFlow version: 2.4.1
Python version: 3.6.13
CUDA 11.1
I have a git-cloned version of the latest OpenNMT-tf, which adds TF 2.5 support.
Example command that causes this:
`python opennmt/bin/main.py --model_type NMTBigV1 --config ~/data_bnmt_en_fr.yml --auto_config --mixed_precision train --with_eval --num_gpus 6`
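The one-line patch above makes the attention `TensorArray` follow `inputs.dtype`; under mixed precision the attention tensors are float16, so writing them into a hard-coded float32 `TensorArray` triggers the dtype mismatch visible in the log below. A standalone illustration (made-up shapes):
```python
import tensorflow as tf

attention = tf.zeros([2, 3], tf.float16)  # float16 under mixed precision
ta = tf.TensorArray(tf.float32, size=1)   # dtype was hard-coded before the fix
ta = ta.write(0, attention)  # dtype mismatch: logged on TF 2.4, an error later
```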
This is the error I am getting:
```
2021-05-25 12:50:32.018136: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-05-25 12:50:32.020761: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-05-25 12:50:32.023276: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-05-25 12:50:32.025797: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-05-25 12:50:32.028300: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-05-25 12:50:32.030800: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-05-25 12:50:32.033302: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-05-25 12:50:32.036000: I device_compatibility_check.py:129] Mixed precision compatibility check (mixed_float16): OK
Your GPUs will likely run quickly with dtype policy mixed_float16 as they all have compute capability of at least 7.0
2021-05-25 12:50:33.115000: W runner.py:242] No checkpoint to restore in ../Models/en-fr/CCAligned/fr-en-ccaligned-bignmt-v2
2021-05-25 12:50:33.119000: W deprecation.py:339] From /p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/summary/summary_iterator.py:31: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and:
`tf.data.TFRecordDataset(path)`
2021-05-25 12:50:36.613000: I mirrored_strategy.py:350] Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3', '/job:localhost/replica:0/task:0/device:GPU:4', '/job:localhost/replica:0/task:0/device:GPU:5')
2021-05-25 12:50:42.719000: I dataset_ops.py:1996] Training on 13992542 examples
2021-05-25 12:50:44.009313: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
2021-05-25 12:50:44.013304: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 3616000000 Hz
2021-05-25 12:51:01.734000: E tensor_array_ops.py:1315] Error: Input value Tensor("nmt_big_v1_1/attentional_rnn_decoder_1/while/attention_wrapper_1/LuongAttention/Softmax:0", shape=(None, None), dtype=float16, device=/job:localhost/replica:0/task:0/device:GPU:0) has dtype <dtype: 'float16'>, but expected dtype <dtype: 'float32'>. This leads to undefined behavior and will be an error in future versions of TensorFlow. Traceback:
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/threading.py", line 884, in _bootstrap
self._bootstrap_inner()
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/distribute/mirrored_run.py", line 323, in run
self.main_result = self.main_fn(*self.main_args, **self.main_kwargs)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 667, in wrapper
return converted_call(f, args, kwargs, options=options)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 459, in converted_call
result = converted_f(*effective_args, **kwargs)
File "/tmp/tmpx3typ5u2.py", line 11, in tf___forward
(loss, gradients) = ag__.converted_call(ag__.ld(self)._compute_gradients, (ag__.ld(source), ag__.ld(target), ag__.ld(accum_steps), ag__.ld(report_steps)), None, fscope)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 461, in converted_call
result = converted_f(*effective_args)
File "/tmp/tmpu3du9bqd.py", line 67, in tf___compute_gradients
ag__.if_stmt(ag__.converted_call(ag__.ld(tf).executing_eagerly, (), None, fscope), if_body_2, else_body_2, get_state_2, set_state_2, ('gradients', 'reported_loss'), 2)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/operators/control_flow.py", line 1165, in if_stmt
_py_if_stmt(cond, body, orelse)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/operators/control_flow.py", line 1218, in _py_if_stmt
return body() if cond else orelse()
File "/tmp/tmpu3du9bqd.py", line 61, in else_body_2
(training_loss, reported_loss) = ag__.converted_call(ag__.ld(self)._run_model, (ag__.ld(source), ag__.ld(target)), dict(accum_steps=ag__.ld(accum_steps)), fscope)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 459, in converted_call
result = converted_f(*effective_args, **kwargs)
File "/tmp/tmpx78hsj6g.py", line 12, in tf___run_model
(outputs, _) = ag__.converted_call(ag__.ld(self)._model, (ag__.ld(source),), dict(labels=ag__.ld(target), training=True, step=ag__.ld(self)._optimizer.iterations), fscope)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 396, in converted_call
return _call_unconverted(f, args, kwargs, options)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 478, in _call_unconverted
return f(*args, **kwargs)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1012, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 667, in wrapper
return converted_call(f, args, kwargs, options=options)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 459, in converted_call
result = converted_f(*effective_args, **kwargs)
File "/tmp/tmpey7rqdfk.py", line 30, in tf__call
ag__.if_stmt((ag__.ld(labels) is not None), if_body, else_body, get_state, set_state, ('outputs',), 1)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/operators/control_flow.py", line 1165, in if_stmt
_py_if_stmt(cond, body, orelse)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/operators/control_flow.py", line 1218, in _py_if_stmt
return body() if cond else orelse()
File "/tmp/tmpey7rqdfk.py", line 25, in if_body
outputs = ag__.converted_call(ag__.ld(self)._decode_target, (ag__.ld(labels), ag__.ld(encoder_outputs), ag__.ld(encoder_state), ag__.ld(encoder_sequence_length)), dict(step=ag__.ld(step), training=ag__.ld(training)), fscope)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 459, in converted_call
result = converted_f(*effective_args, **kwargs)
File "/tmp/tmpy6qhbxsm.py", line 31, in tf___decode_target
(logits, _, attention) = ag__.converted_call(ag__.ld(self).decoder, (ag__.ld(target_inputs), ag__.converted_call(ag__.ld(self).labels_inputter.get_length, (ag__.ld(labels),), None, fscope)), dict(state=ag__.ld(initial_state), input_fn=ag__.ld(input_fn), sampling_probability=ag__.ld(sampling_probability), training=ag__.ld(training)), fscope)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 396, in converted_call
return _call_unconverted(f, args, kwargs, options)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 478, in _call_unconverted
return f(*args, **kwargs)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1012, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 667, in wrapper
return converted_call(f, args, kwargs, options=options)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 459, in converted_call
result = converted_f(*effective_args, **kwargs)
File "/tmp/tmpgwed_v1y.py", line 75, in tf__call
ag__.if_stmt((ag__.ld(rank) == 2), if_body_3, else_body_3, get_state_3, set_state_3, ('attention', 'state', 'logits'), 3)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/operators/control_flow.py", line 1165, in if_stmt
_py_if_stmt(cond, body, orelse)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/operators/control_flow.py", line 1218, in _py_if_stmt
return body() if cond else orelse()
File "/tmp/tmpgwed_v1y.py", line 71, in else_body_3
ag__.if_stmt((ag__.ld(rank) == 3), if_body_2, else_body_2, get_state_2, set_state_2, ('attention', 'state', 'logits'), 3)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/operators/control_flow.py", line 1165, in if_stmt
_py_if_stmt(cond, body, orelse)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/operators/control_flow.py", line 1218, in _py_if_stmt
return body() if cond else orelse()
File "/tmp/tmpgwed_v1y.py", line 64, in if_body_2
(logits, state, attention) = ag__.converted_call(ag__.ld(self).forward, (ag__.ld(inputs),), dict(sequence_length=ag__.ld(length_or_step), initial_state=ag__.ld(state), memory=ag__.ld(self).memory, memory_sequence_length=ag__.ld(self).memory_sequence_length, input_fn=ag__.ld(input_fn), sampling_probability=ag__.ld(sampling_probability), training=ag__.ld(training)), fscope)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 459, in converted_call
result = converted_f(*effective_args, **kwargs)
File "/tmp/tmpvmpbeshd.py", line 146, in tf__forward
(_, state, _, outputs_ta, attention_ta) = ag__.converted_call(ag__.ld(tf).while_loop, (ag__.autograph_artifact((lambda *arg: True)), ag__.ld(_body)), dict(loop_vars=(ag__.ld(step), ag__.ld(initial_state), ag__.converted_call(ag__.ld(inputs_ta).read, (0,), None, fscope), ag__.ld(outputs_ta), ag__.ld(attention_ta)), parallel_iterations=32, swap_memory=True, maximum_iterations=ag__.ld(max_step)), fscope)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 396, in converted_call
return _call_unconverted(f, args, kwargs, options)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 478, in _call_unconverted
return f(*args, **kwargs)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 605, in new_func
return func(*args, **kwargs)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2499, in while_loop_v2
return_same_structure=True)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2696, in while_loop
back_prop=back_prop)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/while_v2.py", line 200, in while_loop
add_control_dependencies=add_control_dependencies)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 990, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/while_v2.py", line 178, in wrapped_body
outputs = body(*_pack_sequence_as(orig_loop_vars, args))
File "/tmp/tmpvmpbeshd.py", line 135, in _body
ag__.if_stmt((ag__.ld(attention) is not None), if_body_5, else_body_5, get_state_5, set_state_5, ('attention_ta',), 1)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/operators/control_flow.py", line 1165, in if_stmt
_py_if_stmt(cond, body, orelse)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/operators/control_flow.py", line 1218, in _py_if_stmt
return body() if cond else orelse()
File "/tmp/tmpvmpbeshd.py", line 130, in if_body_5
attention_ta = ag__.converted_call(ag__.ld(attention_ta).write, (ag__.ld(step), ag__.ld(attention)), None, fscope_2)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 350, in converted_call
return _call_unconverted(f, args, kwargs, options, False)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 479, in _call_unconverted
return f(*args)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py", line 247, in wrapped
return _add_should_use_warning(fn(*args, **kwargs),
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/tensor_array_ops.py", line 1159, in write
return self._implementation.write(index, value, name=name)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/tensor_array_ops.py", line 536, in write
_check_dtypes(value, self._dtype)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/tensor_array_ops.py", line 1315, in _check_dtypes
"".join(traceback.format_stack())))
2021-05-25 12:51:22.729000: I control_flow.py:1218] Number of model parameters: 199372366
2021-05-25 12:51:23.987000: I control_flow.py:1218] Number of model weights: 42 (trainable = 42, non trainable = 0)
2021-05-25 12:51:24.138000: I coordinator.py:219] Error reported to Coordinator: in user code:
/p/home/gerryc/OpenNMT-tf-latest/opennmt/training.py:370 _forward *
loss, gradients = self._compute_gradients(
/p/home/gerryc/OpenNMT-tf-latest/opennmt/training.py:356 _compute_gradients *
gradients = self._optimizer.get_gradients(
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/keras/mixed_precision/loss_scale_optimizer.py:693 get_gradients **
grads = self._optimizer.get_gradients(loss, params)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:716 get_gradients
grads = gradients.gradients(loss, params)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py:172 gradients
unconnected_gradients)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py:684 _GradientsHelper
lambda: grad_fn(op, *out_grads))
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py:340 _MaybeCompile
return grad_fn() # Exit early
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py:684 <lambda>
lambda: grad_fn(op, *out_grads))
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/while_v2.py:358 _WhileGrad
util.unique_grad_fn_name(body_graph.name), op, maximum_iterations)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/while_v2.py:636 _create_grad_func
body_graph_inputs, body_graph_outputs))
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py:990 func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/while_v2.py:632 <lambda>
lambda *args: _grad_fn(ys, xs, args, body_graph),
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/while_v2.py:692 _grad_fn
unconnected_gradients="zero")
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py:684 _GradientsHelper
lambda: grad_fn(op, *out_grads))
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py:340 _MaybeCompile
return grad_fn() # Exit early
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py:684 <lambda>
lambda: grad_fn(op, *out_grads))
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/list_ops.py:313 _TensorListSetItemGrad
element_dtype=item.dtype)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/list_ops.py:111 tensor_list_get_item
name=name)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gen_list_ops.py:565 tensor_list_get_item
element_dtype=element_dtype, name=name)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:750 _apply_op_helper
attrs=attr_protos, op_def=op_def)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/while_v2.py:979 _create_op_internal
compute_device=compute_device)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py:592 _create_op_internal
compute_device)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py:3536 _create_op_internal
op_def=op_def)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py:2016 __init__
control_input_ops, op_def)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py:1856 _create_c_op
raise ValueError(str(e))
ValueError: Expected list with element dtype half but got list with element dtype float for '{{node Adam/gradients/gradients/nmt_big_v1_1/attentional_rnn_decoder_1/while_grad/gradients/nmt_big_v1_1/attentional_rnn_decoder_1/while/TensorArrayV2Write_1/TensorListSetItem_grad/TensorListGetItem}} = TensorListGetItem[element_dtype=DT_HALF](Adam/gradients/gradients/nmt_big_v1_1/attentional_rnn_decoder_1/while_grad/gradients/grad_ys_13, Adam/gradients/gradients/nmt_big_v1_1/attentional_rnn_decoder_1/while_grad/gradients/nmt_big_v1_1/attentional_rnn_decoder_1/while/TensorArrayV2Write/TensorListSetItem_grad/TensorListSetItem/TensorListPopBack:1, Adam/gradients/gradients/nmt_big_v1_1/attentional_rnn_decoder_1/while_grad/gradients/nmt_big_v1_1/attentional_rnn_decoder_1/while/TensorArrayV2Write_1/TensorListSetItem_grad/Shape)' with input shapes: [], [], [2].
Traceback (most recent call last):
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/training/coordinator.py", line 297, in stop_on_exception
yield
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/distribute/mirrored_run.py", line 323, in run
self.main_result = self.main_fn(*self.main_args, **self.main_kwargs)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 670, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
/p/home/gerryc/OpenNMT-tf-latest/opennmt/training.py:370 _forward *
loss, gradients = self._compute_gradients(
/p/home/gerryc/OpenNMT-tf-latest/opennmt/training.py:356 _compute_gradients *
gradients = self._optimizer.get_gradients(
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/keras/mixed_precision/loss_scale_optimizer.py:693 get_gradients **
grads = self._optimizer.get_gradients(loss, params)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:716 get_gradients
grads = gradients.gradients(loss, params)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py:172 gradients
unconnected_gradients)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py:684 _GradientsHelper
lambda: grad_fn(op, *out_grads))
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py:340 _MaybeCompile
return grad_fn() # Exit early
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py:684 <lambda>
lambda: grad_fn(op, *out_grads))
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/while_v2.py:358 _WhileGrad
util.unique_grad_fn_name(body_graph.name), op, maximum_iterations)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/while_v2.py:636 _create_grad_func
body_graph_inputs, body_graph_outputs))
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py:990 func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/while_v2.py:632 <lambda>
lambda *args: _grad_fn(ys, xs, args, body_graph),
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/while_v2.py:692 _grad_fn
unconnected_gradients="zero")
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py:684 _GradientsHelper
lambda: grad_fn(op, *out_grads))
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py:340 _MaybeCompile
return grad_fn() # Exit early
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py:684 <lambda>
lambda: grad_fn(op, *out_grads))
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/list_ops.py:313 _TensorListSetItemGrad
element_dtype=item.dtype)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/list_ops.py:111 tensor_list_get_item
name=name)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gen_list_ops.py:565 tensor_list_get_item
element_dtype=element_dtype, name=name)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:750 _apply_op_helper
attrs=attr_protos, op_def=op_def)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/while_v2.py:979 _create_op_internal
compute_device=compute_device)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py:592 _create_op_internal
compute_device)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py:3536 _create_op_internal
op_def=op_def)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py:2016 __init__
control_input_ops, op_def)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py:1856 _create_c_op
raise ValueError(str(e))
ValueError: Expected list with element dtype half but got list with element dtype float for '{{node Adam/gradients/gradients/nmt_big_v1_1/attentional_rnn_decoder_1/while_grad/gradients/nmt_big_v1_1/attentional_rnn_decoder_1/while/TensorArrayV2Write_1/TensorListSetItem_grad/TensorListGetItem}} = TensorListGetItem[element_dtype=DT_HALF](Adam/gradients/gradients/nmt_big_v1_1/attentional_rnn_decoder_1/while_grad/gradients/grad_ys_13, Adam/gradients/gradients/nmt_big_v1_1/attentional_rnn_decoder_1/while_grad/gradients/nmt_big_v1_1/attentional_rnn_decoder_1/while/TensorArrayV2Write/TensorListSetItem_grad/TensorListSetItem/TensorListPopBack:1, Adam/gradients/gradients/nmt_big_v1_1/attentional_rnn_decoder_1/while_grad/gradients/nmt_big_v1_1/attentional_rnn_decoder_1/while/TensorArrayV2Write_1/TensorListSetItem_grad/Shape)' with input shapes: [], [], [2].
Traceback (most recent call last):
File "opennmt/bin/main.py", line 363, in <module>
main()
File "opennmt/bin/main.py", line 326, in main
hvd=hvd,
File "/p/home/gerryc/OpenNMT-tf-latest/opennmt/runner.py", line 281, in train
moving_average_decay=train_config.get("moving_average_decay"),
File "/p/home/gerryc/OpenNMT-tf-latest/opennmt/training.py", line 123, in __call__
dataset, accum_steps=accum_steps, report_steps=report_steps
File "/p/home/gerryc/OpenNMT-tf-latest/opennmt/training.py", line 260, in _steps
loss = forward_fn()
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 828, in __call__
result = self._call(*args, **kwds)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 871, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 726, in _initialize
*args, **kwds))
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2969, in _get_concrete_function_internal_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3361, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3206, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 990, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 634, in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 977, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
/p/home/gerryc/OpenNMT-tf-latest/opennmt/training.py:245 _forward *
target,
/p/home/gerryc/OpenNMT-tf-latest/opennmt/training.py:485 _forward *
per_replica_loss = self._strategy.run(
/p/home/gerryc/OpenNMT-tf-latest/opennmt/training.py:370 _forward *
loss, gradients = self._compute_gradients(
/p/home/gerryc/OpenNMT-tf-latest/opennmt/training.py:356 _compute_gradients *
gradients = self._optimizer.get_gradients(
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/keras/mixed_precision/loss_scale_optimizer.py:693 get_gradients **
grads = self._optimizer.get_gradients(loss, params)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:716 get_gradients
grads = gradients.gradients(loss, params)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py:172 gradients
unconnected_gradients)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py:684 _GradientsHelper
lambda: grad_fn(op, *out_grads))
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py:340 _MaybeCompile
return grad_fn() # Exit early
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py:684 <lambda>
lambda: grad_fn(op, *out_grads))
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/while_v2.py:358 _WhileGrad
util.unique_grad_fn_name(body_graph.name), op, maximum_iterations)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/while_v2.py:636 _create_grad_func
body_graph_inputs, body_graph_outputs))
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py:990 func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/while_v2.py:632 <lambda>
lambda *args: _grad_fn(ys, xs, args, body_graph),
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/while_v2.py:692 _grad_fn
unconnected_gradients="zero")
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py:684 _GradientsHelper
lambda: grad_fn(op, *out_grads))
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py:340 _MaybeCompile
return grad_fn() # Exit early
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py:684 <lambda>
lambda: grad_fn(op, *out_grads))
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/list_ops.py:313 _TensorListSetItemGrad
element_dtype=item.dtype)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/list_ops.py:111 tensor_list_get_item
name=name)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/gen_list_ops.py:565 tensor_list_get_item
element_dtype=element_dtype, name=name)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:750 _apply_op_helper
attrs=attr_protos, op_def=op_def)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/ops/while_v2.py:979 _create_op_internal
compute_device=compute_device)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py:592 _create_op_internal
compute_device)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py:3536 _create_op_internal
op_def=op_def)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py:2016 __init__
control_input_ops, op_def)
/p/home/gerryc/.conda/envs/tf_venv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py:1856 _create_c_op
raise ValueError(str(e))
ValueError: Expected list with element dtype half but got list with element dtype float for '{{node Adam/gradients/gradients/nmt_big_v1_1/attentional_rnn_decoder_1/while_grad/gradients/nmt_big_v1_1/attentional_rnn_decoder_1/while/TensorArrayV2Write_1/TensorListSetItem_grad/TensorListGetItem}} = TensorListGetItem[element_dtype=DT_HALF](Adam/gradients/gradients/nmt_big_v1_1/attentional_rnn_decoder_1/while_grad/gradients/grad_ys_13, Adam/gradients/gradients/nmt_big_v1_1/attentional_rnn_decoder_1/while_grad/gradients/nmt_big_v1_1/attentional_rnn_decoder_1/while/TensorArrayV2Write/TensorListSetItem_grad/TensorListSetItem/TensorListPopBack:1, Adam/gradients/gradients/nmt_big_v1_1/attentional_rnn_decoder_1/while_grad/gradients/nmt_big_v1_1/attentional_rnn_decoder_1/while/TensorArrayV2Write_1/TensorListSetItem_grad/Shape)' with input shapes: [], [], [2].
```
| 2021-05-27T11:17:43 |
|
OpenNMT/OpenNMT-tf | 848 | OpenNMT__OpenNMT-tf-848 | [
"846"
] | f38a95332fd619a214b752cd8c0d737cbc95362c | diff --git a/opennmt/evaluation.py b/opennmt/evaluation.py
--- a/opennmt/evaluation.py
+++ b/opennmt/evaluation.py
@@ -305,8 +305,11 @@ def __call__(self, step):
if self._save_predictions:
output_path = os.path.join(self._eval_dir, "predictions.txt.%d" % step)
output_file = tf.io.gfile.GFile(output_path, "w")
+ params = {"n_best": 1}
write_fn = lambda prediction: (
- self._model.print_prediction(prediction, stream=output_file)
+ self._model.print_prediction(
+ prediction, params=params, stream=output_file
+ )
)
index_fn = lambda prediction: prediction.get("index")
ordered_writer = misc.OrderRestorer(index_fn, write_fn)
diff --git a/opennmt/models/sequence_to_sequence.py b/opennmt/models/sequence_to_sequence.py
--- a/opennmt/models/sequence_to_sequence.py
+++ b/opennmt/models/sequence_to_sequence.py
@@ -457,7 +457,7 @@ def print_prediction(self, prediction, params=None, stream=None):
raise ValueError(
"with_alignments is set but the model did not return alignment information"
)
- num_hypotheses = len(prediction["log_probs"])
+ num_hypotheses = params.get("n_best", len(prediction["log_probs"]))
for i in range(num_hypotheses):
if "tokens" in prediction:
target_length = prediction["length"][i]
| diff --git a/opennmt/tests/runner_test.py b/opennmt/tests/runner_test.py
--- a/opennmt/tests/runner_test.py
+++ b/opennmt/tests/runner_test.py
@@ -219,10 +219,14 @@ def testTrainLanguageModel(self):
runner.train()
def testEvaluate(self):
+ if not tf.config.functions_run_eagerly():
+ self.skipTest("Test case not passing in GitHub Actions environment")
ar_file, en_file = self._makeTransliterationData()
config = {
+ "params": {"beam_width": 4},
"data": {"eval_features_file": ar_file, "eval_labels_file": en_file},
"eval": {"external_evaluators": "BLEU"},
+ "infer": {"n_best": 4},
}
runner = self._getTransliterationRunner(config)
metrics = runner.evaluate()
| Inference parameter `n_best` is applied during evaluation
The inference parameter `n_best` should not be applied when translating during evaluation. Reported in https://forum.opennmt.net/t/training-crash-at-validation/4513.
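For illustration, a minimal sketch of the intended behavior, assuming a prediction dict shaped like OpenNMT-tf's with one `log_probs` entry per hypothesis (`num_hypotheses_to_print` is a hypothetical helper, not OpenNMT-tf API): the evaluation caller passes an explicit `{"n_best": 1}` so that only the best hypothesis is written, mirroring the patch above.
```python
# Hypothetical helper: cap the number of printed hypotheses with an explicit
# "n_best" parameter instead of always printing every returned hypothesis.
def num_hypotheses_to_print(prediction, params=None):
    params = params or {}
    # During evaluation the caller can pass {"n_best": 1} so that only the
    # best hypothesis is written to the predictions file.
    return params.get("n_best", len(prediction["log_probs"]))


prediction = {"log_probs": [-0.1, -0.5, -0.9, -1.2]}
assert num_hypotheses_to_print(prediction) == 4
assert num_hypotheses_to_print(prediction, {"n_best": 1}) == 1
```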
| 2021-07-01T11:14:12 |
|
OpenNMT/OpenNMT-tf | 881 | OpenNMT__OpenNMT-tf-881 | [
"880"
] | 2335b9bbbf5dbe7f56487e3279fa5c72bdfa1c59 | diff --git a/opennmt/models/sequence_to_sequence.py b/opennmt/models/sequence_to_sequence.py
--- a/opennmt/models/sequence_to_sequence.py
+++ b/opennmt/models/sequence_to_sequence.py
@@ -474,7 +474,7 @@ def print_prediction(self, prediction, params=None, stream=None):
tokens = prediction["tokens"][i][:target_length]
sentence = self.labels_inputter.tokenizer.detokenize(tokens)
else:
- sentence = prediction["text"][i]
+ sentence = prediction["text"][i].decode("utf-8")
score = None
attention = None
if with_scores:
| Output of inference with score formatted as a "byte string"
Running `onmt-main --auto_config infer` with scores enabled in the config outputs the predictions formatted as byte strings.
Example:
`-18.279692 ||| b'Mira mit Orien- und B\xc3\xa4hrschienen .'`
When parsing the output afterwards, this leads to problems because
```
>>> str("b'Mira mit Orien- und B\xc3\xa4hrschienen .'`")
"b'Mira mit Orien- und Bährschienen .'`"
```
while it should be
```
>>> b'Mira mit Orien- und B\xc3\xa4hrschienen .'.decode()
'Mira mit Orien- und Bährschienen .'
```
I am using OpenNMT-tf 2.21.0 and TF/TF-text 2.4.3. To generate the output, I followed the [quickstart](https://opennmt.net/OpenNMT-tf/quickstart.html) and used SentencePiece (as described [here](https://opennmt.net/OpenNMT-tf/tokenization.html#example-sentencepiece-tokenization)).
For inference, I added
```
infer:
n_best: 4
with_scores: true
```
to the config. Setting `with_scores: false` and rerunning results in correct formatting/encoding.
I happy to provide additional information if necessary.
| 2021-09-24T16:37:13 |
||
OpenNMT/OpenNMT-tf | 896 | OpenNMT__OpenNMT-tf-896 | [
"799"
] | 5d73956b6d94567f299244458321156fe4ef3b0c | diff --git a/opennmt/inputters/inputter.py b/opennmt/inputters/inputter.py
--- a/opennmt/inputters/inputter.py
+++ b/opennmt/inputters/inputter.py
@@ -121,13 +121,9 @@ def make_inference_dataset(
See Also:
:func:`opennmt.data.inference_pipeline`
"""
- map_fn = lambda *arg: self.make_features(
- element=misc.item_or_tuple(arg), training=False
+ transform_fns = _get_dataset_transforms(
+ self, num_threads=num_threads, training=False
)
- transform_fns = [
- lambda dataset: dataset.map(map_fn, num_parallel_calls=num_threads or 1)
- ]
-
dataset = self.make_dataset(features_file, training=False)
dataset = dataset.apply(
dataset_util.inference_pipeline(
@@ -181,6 +177,30 @@ def get_padded_shapes(self, element_spec, maximum_length=None):
element_spec,
)
+ def has_prepare_step(self):
+ """Returns ``True`` if this inputter implements a data preparation step
+ in method :meth:`opennmt.inputters.Inputter.prepare_elements`.
+ """
+ return False
+
+ def prepare_elements(self, elements, training=None):
+ """Prepares dataset elements.
+
+ This method is called on a batch of dataset elements. For example, it
+ can be overriden to apply an external pre-tokenization.
+
+ Note that the results of the method are unbatched and then passed to
+ method :meth:`opennmt.inputters.Inputter.make_features`.
+
+ Args:
+ elements: A batch of dataset elements.
+ training: Run in training mode.
+
+ Returns:
+ A (possibly nested) structure of ``tf.Tensor``.
+ """
+ return elements
+
@abc.abstractmethod
def make_features(self, element=None, features=None, training=None):
"""Creates features from data.
@@ -305,9 +325,14 @@ def export_assets(self, asset_dir):
assets.update(inputter.export_assets(asset_dir))
return assets
- @abc.abstractmethod
- def make_dataset(self, data_file, training=None):
- raise NotImplementedError()
+ def has_prepare_step(self):
+ return any(inputter.has_prepare_step() for inputter in self.inputters)
+
+ def prepare_elements(self, elements, training=None):
+ return tuple(
+ inputter.prepare_elements(elts)
+ for inputter, elts in zip(self.inputters, elements)
+ )
def visualize(self, model_root, log_dir):
for inputter in self.inputters:
@@ -651,13 +676,9 @@ def make_evaluation_dataset(
data_files = features_file
length_fn = self.get_length
- map_fn = lambda *arg: self.make_features(
- element=misc.item_or_tuple(arg), training=False
+ transform_fns = _get_dataset_transforms(
+ self, num_threads=num_threads, training=False
)
- transform_fns = [
- lambda dataset: dataset.map(map_fn, num_parallel_calls=num_threads or 1)
- ]
-
dataset = self.make_dataset(data_files, training=False)
dataset = dataset.apply(
dataset_util.inference_pipeline(
@@ -753,18 +774,16 @@ def make_training_dataset(
dataset = self.make_dataset(data_files, training=True)
- map_fn = lambda *arg: self.make_features(
- element=misc.item_or_tuple(arg), training=True
- )
filter_fn = lambda *arg: (
self.keep_for_training(
misc.item_or_tuple(arg), maximum_length=maximum_length
)
)
- transform_fns = [
- lambda dataset: dataset.map(map_fn, num_parallel_calls=num_threads or 4),
- lambda dataset: dataset.filter(filter_fn),
- ]
+
+ transform_fns = _get_dataset_transforms(
+ self, num_threads=num_threads, training=True
+ )
+ transform_fns.append(lambda dataset: dataset.filter(filter_fn))
if batch_autotune_mode:
# In this mode we want to return batches where all sequences are padded
@@ -952,3 +971,32 @@ def make_inference_dataset(
num_threads=num_threads,
prefetch_buffer_size=prefetch_buffer_size,
)
+
+
+def _get_dataset_transforms(
+ inputter,
+ num_threads=None,
+ training=None,
+ prepare_batch_size=128,
+):
+ transform_fns = []
+
+ if inputter.has_prepare_step():
+ prepare_fn = lambda *arg: inputter.prepare_elements(
+ misc.item_or_tuple(arg), training=training
+ )
+ transform_fns.extend(
+ [
+ lambda dataset: dataset.batch(prepare_batch_size),
+ lambda dataset: dataset.map(prepare_fn, num_parallel_calls=num_threads),
+ lambda dataset: dataset.unbatch(),
+ ]
+ )
+
+ map_fn = lambda *arg: inputter.make_features(
+ element=misc.item_or_tuple(arg), training=training
+ )
+ transform_fns.append(
+ lambda dataset: dataset.map(map_fn, num_parallel_calls=num_threads)
+ )
+ return transform_fns
diff --git a/opennmt/inputters/text_inputter.py b/opennmt/inputters/text_inputter.py
--- a/opennmt/inputters/text_inputter.py
+++ b/opennmt/inputters/text_inputter.py
@@ -269,6 +269,16 @@ def get_dataset_size(self, data_file):
return list(map(misc.count_lines, data_file))
return misc.count_lines(data_file)
+ def has_prepare_step(self):
+ # For performance reasons, we apply external tokenizers on a batch of
+ # dataset elements during the preparation step.
+ return not self.tokenizer.in_graph and not isinstance(
+ self.tokenizer, tokenizers.SpaceTokenizer
+ )
+
+ def prepare_elements(self, elements, training=None):
+ return {"tokens": self.tokenizer.tokenize(elements, training=training)}
+
def make_features(self, element=None, features=None, training=None):
"""Tokenizes raw text."""
self._assert_is_initialized()
@@ -276,10 +286,14 @@ def make_features(self, element=None, features=None, training=None):
features = {}
if "tokens" in features:
return features
- if "text" in features:
- element = features.pop("text")
- element = tf.convert_to_tensor(element, dtype=tf.string)
- tokens = self.tokenizer.tokenize(element, training=training)
+
+ element = features.pop("text", element)
+ if isinstance(element, dict):
+ tokens = element["tokens"]
+ else:
+ element = tf.convert_to_tensor(element, dtype=tf.string)
+ tokens = self.tokenizer.tokenize(element, training=training)
+
if isinstance(tokens, tf.RaggedTensor):
length = tokens.row_lengths()
tokens = tokens.to_tensor(default_value=constants.PADDING_TOKEN)
| diff --git a/opennmt/tests/inputter_test.py b/opennmt/tests/inputter_test.py
--- a/opennmt/tests/inputter_test.py
+++ b/opennmt/tests/inputter_test.py
@@ -654,6 +654,7 @@ def testExampleInputterAsset(self):
}
)
self.assertIsInstance(source_inputter.tokenizer, tokenizers.OpenNMTTokenizer)
+ self.assertTrue(example_inputter.has_prepare_step())
asset_dir = self.get_temp_dir()
example_inputter.export_assets(asset_dir)
self.assertIn("source_tokenizer_config.yml", set(os.listdir(asset_dir)))
| Lower throughput than expected in multi-GPU training with online tokenization
There are some performance issues when training on multiple GPUs and enabling the online OpenNMT tokenization. The throughput is lower than expected and GPU usage frequently goes down to 0%. Possible workarounds:
* Tokenize the data before the training
* Use the `SentencePieceTokenizer` that is implemented as a TensorFlow op
* Use Horovod for multi-GPU training
See https://forum.opennmt.net/t/cannot-scale-well-with-multiple-gpus/4239 for a discussion and possible explanation.
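For context, a minimal sketch of the batch/map/unbatch `tf.data` pattern that the patch above applies, with `tf.strings.split` standing in for an external tokenizer; the assumption is that the real tokenizer runs outside the graph, so applying it to a whole batch at once amortizes the per-element call overhead:
```python
import tensorflow as tf


def tokenize_batch(lines):
    # Stand-in for an external (non-graph) tokenizer applied to a whole
    # batch of lines at once instead of one element at a time.
    return tf.strings.split(lines)


dataset = tf.data.Dataset.from_tensor_slices(["hello world", "a b c", "x"])
dataset = dataset.batch(128)
dataset = dataset.map(tokenize_batch, num_parallel_calls=tf.data.AUTOTUNE)
dataset = dataset.unbatch()

for tokens in dataset:
    print(tokens.numpy())
```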
| 2021-10-11T14:17:05 |
|
OpenNMT/OpenNMT-tf | 925 | OpenNMT__OpenNMT-tf-925 | [
"923"
] | 36612202dddb647977fa91d4f2bb024ee91ab05c | diff --git a/opennmt/models/model.py b/opennmt/models/model.py
--- a/opennmt/models/model.py
+++ b/opennmt/models/model.py
@@ -330,12 +330,14 @@ def get_optimizer(self):
optimizer_name = params.get("optimizer")
if optimizer_name is None:
return None
- learning_rate = tf.constant(params["learning_rate"], dtype=tf.float32)
- if params.get("decay_type") is not None:
+ schedule_type = params.get("decay_type")
+ if schedule_type is None:
+ learning_rate = tf.constant(params["learning_rate"], dtype=tf.float32)
+ else:
schedule_params = params.get("decay_params", {})
learning_rate = schedules.make_learning_rate_schedule(
- learning_rate,
- params["decay_type"],
+ params.get("learning_rate"),
+ schedule_type,
schedule_params=schedule_params,
schedule_step_duration=params.get("decay_step_duration", 1),
start_step=params.get("start_decay_steps", 0),
diff --git a/opennmt/schedules/lr_schedules.py b/opennmt/schedules/lr_schedules.py
--- a/opennmt/schedules/lr_schedules.py
+++ b/opennmt/schedules/lr_schedules.py
@@ -1,5 +1,7 @@
"""Define learning rate decay functions."""
+import inspect
+
import numpy as np
import tensorflow as tf
@@ -45,10 +47,10 @@ def make_learning_rate_schedule(
"""Creates the learning rate schedule.
Args:
- initial_learning_rate: The initial learning rate value or scale.
+ initial_learning_rate: The initial learning rate value. This can be
+ ``None`` if the learning rate is fully defined by the schedule.
schedule_type: The type of learning rate schedule. A class name from
- ``tf.keras.optimizers.schedules``
- or :mod:`opennmt.schedules` as a string.
+ ``tf.keras.optimizers.schedules`` or :mod:`opennmt.schedules` as a string.
schedule_params: Additional parameters passed to the schedule constructor.
schedule_step_duration: The number of training steps that make 1 schedule step.
start_step: Start the schedule after this many steps.
@@ -67,7 +69,10 @@ def make_learning_rate_schedule(
if schedule_params is None:
schedule_params = {}
schedule_class = get_lr_schedule_class(schedule_type)
- schedule = schedule_class(initial_learning_rate, **schedule_params)
+ first_arg = inspect.getfullargspec(schedule_class)[0][1]
+ if first_arg not in schedule_params:
+ schedule_params[first_arg] = initial_learning_rate
+ schedule = schedule_class(**schedule_params)
schedule = ScheduleWrapper(
schedule,
step_start=start_step,
| diff --git a/opennmt/tests/lr_schedules_test.py b/opennmt/tests/lr_schedules_test.py
--- a/opennmt/tests/lr_schedules_test.py
+++ b/opennmt/tests/lr_schedules_test.py
@@ -31,10 +31,18 @@ def testMakeSchedule(self):
self.assertIsInstance(
wrapper.schedule, tf.keras.optimizers.schedules.ExponentialDecay
)
+
wrapper = lr_schedules.make_learning_rate_schedule(
2.0, "NoamDecay", dict(model_dim=512, warmup_steps=4000)
)
self.assertIsInstance(wrapper.schedule, lr_schedules.NoamDecay)
+ self.assertEqual(wrapper.schedule.scale, 2)
+
+ wrapper = lr_schedules.make_learning_rate_schedule(
+ None, "NoamDecay", dict(scale=2, model_dim=512, warmup_steps=4000)
+ )
+ self.assertEqual(wrapper.schedule.scale, 2)
+
with self.assertRaises(ValueError):
lr_schedules.make_learning_rate_schedule(2.0, "InvalidScheduleName")
| RsqrtDecay fails when decay_params scale is configured
With the following configuration:
```
decay_type: RsqrtDecay
decay_params:
warmup_steps: 10000
scale: 4
```
Training fails to start with an error:
```
Traceback (most recent call last):
File "opennmt/bin/main.py", line 350, in <module>
main()
File "opennmt/bin/main.py", line 308, in main
runner.train(
File "/ai/onmt/onmt-2240/OpenNMT-tf/opennmt/runner.py", line 207, in train
optimizer = model.get_optimizer()
File "/ai/onmt/onmt-2240/OpenNMT-tf/opennmt/models/model.py", line 338, in get_optimizer
learning_rate = schedules.make_learning_rate_schedule(
File "/ai/onmt/onmt-2240/OpenNMT-tf/opennmt/schedules/lr_schedules.py", line 75, in make_learning_rate_schedule
schedule = schedule_class(initial_learning_rate, **schedule_params)
TypeError: __init__() got multiple values for argument 'scale'
```
A small change to the RsqrtDecay function will fix the issue. It seems that the first parameter to `__init__` should be the learning rate rather than `scale`. Currently:
```
class RsqrtDecay(tf.keras.optimizers.schedules.LearningRateSchedule):
"""Decay based on the reciprocal of the step square root."""
def __init__(self, scale, warmup_steps):
```
Changing it as follows will fix the issue:
```
class RsqrtDecay(tf.keras.optimizers.schedules.LearningRateSchedule):
"""Decay based on the reciprocal of the step square root."""
def __init__(self, lr, scale, warmup_steps):
```
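For illustration, a minimal sketch of the introspection-based fix that was merged (see the patch above), assuming the configured learning rate should fill the schedule's first constructor argument only when the user did not already set that argument explicitly in `decay_params` (`make_schedule` is a hypothetical standalone helper):
```python
import inspect


class RsqrtDecay:
    def __init__(self, scale, warmup_steps):
        self.scale = scale
        self.warmup_steps = warmup_steps


def make_schedule(schedule_class, learning_rate, schedule_params):
    # Find the name of the first constructor argument after "self" and fill
    # it with the configured learning rate unless the user already set it.
    first_arg = inspect.getfullargspec(schedule_class)[0][1]
    if first_arg not in schedule_params:
        schedule_params[first_arg] = learning_rate
    return schedule_class(**schedule_params)


# "scale" is already given, so the learning rate is not passed a second time.
schedule = make_schedule(RsqrtDecay, None, {"warmup_steps": 10000, "scale": 4})
assert schedule.scale == 4
```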
In the current implementation, the first argument of learning rate schedules is always the `learning_rate` value defined in the configuration. Some schedules will use this value as the initial learning rate, while others will use it as a scale constant. We could probably improve this behavior.
For now, you could just remove the `scale` parameter and set `learning_rate` instead:
```yaml
params:
learning_rate: 4
decay_type: RsqrtDecay
decay_params:
warmup_steps: 10000
``` | 2022-02-07T11:36:38 |
OpenNMT/OpenNMT-tf | 953 | OpenNMT__OpenNMT-tf-953 | [
"952"
] | 36c737d1446e475e87b71519a6e7791b22a0f919 | diff --git a/opennmt/inputters/record_inputter.py b/opennmt/inputters/record_inputter.py
--- a/opennmt/inputters/record_inputter.py
+++ b/opennmt/inputters/record_inputter.py
@@ -48,7 +48,7 @@ def make_features(self, element=None, features=None, training=None):
},
)
tensor = feature_lists["values"]
- features["length"] = lengths["values"]
+ features["length"] = tf.cast(lengths["values"], tf.int32)
features["tensor"] = tf.cast(tensor, self.dtype)
return features
| diff --git a/opennmt/tests/inputter_test.py b/opennmt/tests/inputter_test.py
--- a/opennmt/tests/inputter_test.py
+++ b/opennmt/tests/inputter_test.py
@@ -770,6 +770,7 @@ def testSequenceRecordBatch(self):
features = next(iter(dataset))
lengths = features["length"]
tensors = features["tensor"]
+ self.assertEqual(lengths.dtype, tf.int32)
self.assertAllEqual(lengths, [3, 6, 1])
for length, tensor, expected_vector in zip(lengths, tensors, vectors):
self.assertAllClose(tensor[:length], expected_vector)
| An issue with SequenceRecordInputter ?
I tried to create a SequenceClassifier model that used SequenceRecordInputter as part of a ParallelInputter, and it produced an error before completing the first training step. After isolating the problem, it seems that the SequenceRecordInputter dataset generation is the source of it:
Reproducible code:
```python3
import numpy as np
from opennmt import encoders, inputters, models, Runner
vectors = []
for i in range(1000):
vectors.append(np.random.rand(np.random.randint(1, 9), 16))
inputters.create_sequence_records(vectors, "train.records")
with open("train_labels.txt", "w") as f:
f.write("\n".join(np.random.randint(0, 2, 1000).astype("str")))
with open("labels_vocab.txt", "w") as f:
f.write("\n".join(["0", "1"]))
model = models.SequenceClassifier(
inputters.SequenceRecordInputter(16),
encoders.SelfAttentionEncoder(
num_layers=2, num_units=16, num_heads=4, ffn_inner_dim=64
),
)
config = {
"model_dir": ".",
"data": {
"target_vocabulary": "labels_vocab.txt",
"train_features_file": "train.records",
"train_labels_file": "train_labels.txt",
},
"params": {"optimizer": "Adam", "learning_rate": 0.001},
"train": {"batch_size": 1, "max_step": 2},
}
runner = Runner(model, config, auto_config=False)
runner.train()
```
```
Error text:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [12], in <cell line: 1>()
----> 1 runner.train()
File ~/.local/lib/python3.9/site-packages/opennmt/runner.py:281, in Runner.train(self, num_devices, with_eval, checkpoint_path, hvd, return_summary, fallback_to_cpu, continue_from_checkpoint)
278 else:
279 trainer = training_util.Trainer(model, optimizer, checkpoint=checkpoint)
--> 281 summary = trainer(
282 dataset_fn,
283 max_step=train_config.get("max_step"),
284 accum_steps=accum_steps,
285 report_steps=train_config.get("save_summary_steps", 100),
286 save_steps=train_config.get("save_checkpoints_steps", 5000),
287 evaluator=evaluator,
288 eval_steps=eval_config.get("steps", 5000),
289 moving_average_decay=train_config.get("moving_average_decay"),
290 )
292 average_last_checkpoints = train_config.get("average_last_checkpoints", 0)
293 if checkpoint is None:
File ~/.local/lib/python3.9/site-packages/opennmt/training.py:109, in Trainer.__call__(self, dataset, max_step, accum_steps, report_steps, save_steps, evaluator, eval_steps, moving_average_decay)
107 step = None
108 moving_average = None
--> 109 for i, loss in enumerate(
110 self._steps(dataset, accum_steps=accum_steps, report_steps=report_steps)
111 ):
112 if i == 0:
113 self._log_model_info()
File ~/.local/lib/python3.9/site-packages/opennmt/training.py:221, in Trainer._steps(self, dataset, accum_steps, report_steps)
209 def _steps(self, dataset, accum_steps=1, report_steps=None):
210 """Returns a generator over training steps (i.e. parameters update).
211
212 Args:
(...)
219 A generator that yields a loss value to report for this step.
220 """
--> 221 dataset = self._finalize_dataset(dataset)
222 iterator = iter(dataset)
224 # We define 2 separate functions to support gradient accumulation:
225 # * forward: compute and accumulate the gradients
226 # * step: apply the gradients
227 # When gradient accumulation is disabled, the forward function also applies the gradients.
File ~/.local/lib/python3.9/site-packages/opennmt/training.py:206, in Trainer._finalize_dataset(self, dataset)
196 """Returns the final dataset instance to be used for training.
197
198 Args:
(...)
203 A ``tf.data.Dataset``.
204 """
205 if callable(dataset):
--> 206 dataset = dataset(tf.distribute.InputContext())
207 return dataset
File ~/.local/lib/python3.9/site-packages/opennmt/runner.py:220, in Runner.train.<locals>.<lambda>(input_context)
216 batch_type = train_config["batch_type"]
217 batch_size_multiple = 8 if mixed_precision and batch_type == "tokens" else 1
219 dataset_fn = (
--> 220 lambda input_context: model.examples_inputter.make_training_dataset(
221 data_config["train_features_file"],
222 data_config.get("train_labels_file"),
223 train_config["batch_size"],
224 batch_type=batch_type,
225 batch_size_multiple=batch_size_multiple,
226 shuffle_buffer_size=train_config["sample_buffer_size"],
227 length_bucket_width=train_config["length_bucket_width"],
228 maximum_features_length=train_config.get("maximum_features_length"),
229 maximum_labels_length=train_config.get("maximum_labels_length"),
230 single_pass=train_config.get("single_pass", False),
231 num_shards=input_context.num_input_pipelines,
232 shard_index=input_context.input_pipeline_id,
233 prefetch_buffer_size=train_config.get("prefetch_buffer_size"),
234 cardinality_multiple=input_context.num_replicas_in_sync,
235 weights=data_config.get("train_files_weights"),
236 batch_autotune_mode=train_config.get("batch_autotune_mode"),
237 )
238 )
240 checkpoint = None
241 evaluator = None
File ~/.local/lib/python3.9/site-packages/opennmt/inputters/inputter.py:834, in ExampleInputterAdapter.make_training_dataset(self, features_file, labels_file, batch_size, batch_type, batch_multiplier, batch_size_multiple, shuffle_buffer_size, length_bucket_width, maximum_features_length, maximum_labels_length, single_pass, num_shards, shard_index, num_threads, prefetch_buffer_size, cardinality_multiple, weights, batch_autotune_mode)
832 if weights is not None:
833 dataset = (dataset, weights)
--> 834 dataset = dataset_util.training_pipeline(
835 batch_size,
836 batch_type=batch_type,
837 batch_multiplier=batch_multiplier,
838 batch_size_multiple=batch_size_multiple,
839 transform_fns=transform_fns,
840 length_bucket_width=length_bucket_width,
841 features_length_fn=features_length_fn,
842 labels_length_fn=labels_length_fn,
843 single_pass=single_pass,
844 num_shards=num_shards,
845 shard_index=shard_index,
846 num_threads=num_threads,
847 dataset_size=self.get_dataset_size(data_files),
848 shuffle_buffer_size=shuffle_buffer_size,
849 prefetch_buffer_size=prefetch_buffer_size,
850 cardinality_multiple=cardinality_multiple,
851 )(dataset)
852 return dataset
File ~/.local/lib/python3.9/site-packages/opennmt/data/dataset.py:637, in training_pipeline.<locals>._pipeline(dataset)
635 if labels_length_fn is not None:
636 length_fn.append(labels_length_fn)
--> 637 dataset = dataset.apply(
638 batch_sequence_dataset(
639 batch_size,
640 batch_type=batch_type,
641 batch_multiplier=batch_multiplier,
642 batch_size_multiple=batch_size_multiple,
643 length_bucket_width=length_bucket_width,
644 length_fn=length_fn,
645 )
646 )
647 dataset = dataset.apply(filter_irregular_batches(batch_multiplier))
648 if not single_pass:
File ~/.local/lib/python3.9/site-packages/tensorflow/python/data/ops/dataset_ops.py:2270, in DatasetV2.apply(self, transformation_func)
2248 def apply(self, transformation_func):
2249 """Applies a transformation function to this dataset.
2250
2251 `apply` enables chaining of custom `Dataset` transformations, which are
(...)
2268 dataset.
2269 """
-> 2270 dataset = transformation_func(self)
2271 if not isinstance(dataset, DatasetV2):
2272 raise TypeError(
2273 f"`transformation_func` must return a `tf.data.Dataset` object. "
2274 f"Got {type(dataset)}.")
File ~/.local/lib/python3.9/site-packages/opennmt/data/dataset.py:482, in batch_sequence_dataset.<locals>.<lambda>(dataset)
475 else:
476 raise ValueError(
477 "Invalid batch type: '{}'; should be 'examples' or 'tokens'".format(
478 batch_type
479 )
480 )
--> 482 return lambda dataset: dataset.group_by_window(_key_func, _reduce_func, **kwargs)
File ~/.local/lib/python3.9/site-packages/tensorflow/python/data/ops/dataset_ops.py:2823, in DatasetV2.group_by_window(self, key_func, reduce_func, window_size, window_size_func, name)
2819 window_size_func = constant_window_func
2821 assert window_size_func is not None
-> 2823 return _GroupByWindowDataset(
2824 self, key_func, reduce_func, window_size_func, name=name)
File ~/.local/lib/python3.9/site-packages/tensorflow/python/data/ops/dataset_ops.py:5683, in _GroupByWindowDataset.__init__(self, input_dataset, key_func, reduce_func, window_size_func, name)
5681 """See `group_by_window()` for details."""
5682 self._input_dataset = input_dataset
-> 5683 self._make_key_func(key_func, input_dataset)
5684 self._make_reduce_func(reduce_func, input_dataset)
5685 self._make_window_size_func(window_size_func)
File ~/.local/lib/python3.9/site-packages/tensorflow/python/data/ops/dataset_ops.py:5721, in _GroupByWindowDataset._make_key_func(self, key_func, input_dataset)
5718 def key_func_wrapper(*args):
5719 return ops.convert_to_tensor(key_func(*args), dtype=dtypes.int64)
-> 5721 self._key_func = structured_function.StructuredFunctionWrapper(
5722 key_func_wrapper, self._transformation_name(), dataset=input_dataset)
5723 if not self._key_func.output_structure.is_compatible_with(
5724 tensor_spec.TensorSpec([], dtypes.int64)):
5725 raise ValueError(f"Invalid `key_func`. `key_func` must return a single "
5726 f"`tf.int64` scalar tensor but its return type is "
5727 f"{self._key_func.output_structure}.")
File ~/.local/lib/python3.9/site-packages/tensorflow/python/data/ops/structured_function.py:271, in StructuredFunctionWrapper.__init__(self, func, transformation_name, dataset, input_classes, input_shapes, input_types, input_structure, add_to_graph, use_legacy_function, defun_kwargs)
264 warnings.warn(
265 "Even though the `tf.config.experimental_run_functions_eagerly` "
266 "option is set, this option does not apply to tf.data functions. "
267 "To force eager execution of tf.data functions, please use "
268 "`tf.data.experimental.enable_debug_mode()`.")
269 fn_factory = trace_tf_function(defun_kwargs)
--> 271 self._function = fn_factory()
272 # There is no graph to add in eager mode.
273 add_to_graph &= not context.executing_eagerly()
File ~/.local/lib/python3.9/site-packages/tensorflow/python/eager/function.py:2567, in Function.get_concrete_function(self, *args, **kwargs)
2558 def get_concrete_function(self, *args, **kwargs):
2559 """Returns a `ConcreteFunction` specialized to inputs and execution context.
2560
2561 Args:
(...)
2565 or `tf.Tensor` or `tf.TensorSpec`.
2566 """
-> 2567 graph_function = self._get_concrete_function_garbage_collected(
2568 *args, **kwargs)
2569 graph_function._garbage_collector.release() # pylint: disable=protected-access
2570 return graph_function
File ~/.local/lib/python3.9/site-packages/tensorflow/python/eager/function.py:2533, in Function._get_concrete_function_garbage_collected(self, *args, **kwargs)
2531 args, kwargs = None, None
2532 with self._lock:
-> 2533 graph_function, _ = self._maybe_define_function(args, kwargs)
2534 seen_names = set()
2535 captured = object_identity.ObjectIdentitySet(
2536 graph_function.graph.internal_captures)
File ~/.local/lib/python3.9/site-packages/tensorflow/python/eager/function.py:2711, in Function._maybe_define_function(self, args, kwargs)
2708 cache_key = self._function_cache.generalize(cache_key)
2709 (args, kwargs) = cache_key._placeholder_value() # pylint: disable=protected-access
-> 2711 graph_function = self._create_graph_function(args, kwargs)
2712 self._function_cache.add(cache_key, cache_key_deletion_observer,
2713 graph_function)
2715 return graph_function, filtered_flat_args
File ~/.local/lib/python3.9/site-packages/tensorflow/python/eager/function.py:2627, in Function._create_graph_function(self, args, kwargs)
2622 missing_arg_names = [
2623 "%s_%d" % (arg, i) for i, arg in enumerate(missing_arg_names)
2624 ]
2625 arg_names = base_arg_names + missing_arg_names
2626 graph_function = ConcreteFunction(
-> 2627 func_graph_module.func_graph_from_py_func(
2628 self._name,
2629 self._python_function,
2630 args,
2631 kwargs,
2632 self.input_signature,
2633 autograph=self._autograph,
2634 autograph_options=self._autograph_options,
2635 arg_names=arg_names,
2636 capture_by_value=self._capture_by_value),
2637 self._function_attributes,
2638 spec=self.function_spec,
2639 # Tell the ConcreteFunction to clean up its graph once it goes out of
2640 # scope. This is not the default behavior since it gets used in some
2641 # places (like Keras) where the FuncGraph lives longer than the
2642 # ConcreteFunction.
2643 shared_func_graph=False)
2644 return graph_function
File ~/.local/lib/python3.9/site-packages/tensorflow/python/framework/func_graph.py:1141, in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, acd_record_initial_resource_uses)
1138 else:
1139 _, original_func = tf_decorator.unwrap(python_func)
-> 1141 func_outputs = python_func(*func_args, **func_kwargs)
1143 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
1144 # TensorArrays and `None`s.
1145 func_outputs = nest.map_structure(
1146 convert, func_outputs, expand_composites=True)
File ~/.local/lib/python3.9/site-packages/tensorflow/python/data/ops/structured_function.py:248, in StructuredFunctionWrapper.__init__.<locals>.trace_tf_function.<locals>.wrapped_fn(*args)
242 @eager_function.defun_with_attributes(
243 input_signature=structure.get_flat_tensor_specs(
244 self._input_structure),
245 autograph=False,
246 attributes=defun_kwargs)
247 def wrapped_fn(*args): # pylint: disable=missing-docstring
--> 248 ret = wrapper_helper(*args)
249 ret = structure.to_tensor_list(self._output_structure, ret)
250 return [ops.convert_to_tensor(t) for t in ret]
File ~/.local/lib/python3.9/site-packages/tensorflow/python/data/ops/structured_function.py:177, in StructuredFunctionWrapper.__init__.<locals>.wrapper_helper(*args)
175 if not _should_unpack(nested_args):
176 nested_args = (nested_args,)
--> 177 ret = autograph.tf_convert(self._func, ag_ctx)(*nested_args)
178 if _should_pack(ret):
179 ret = tuple(ret)
File ~/.local/lib/python3.9/site-packages/tensorflow/python/autograph/impl/api.py:689, in convert.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
687 try:
688 with conversion_ctx:
--> 689 return converted_call(f, args, kwargs, options=options)
690 except Exception as e: # pylint:disable=broad-except
691 if hasattr(e, 'ag_error_metadata'):
File ~/.local/lib/python3.9/site-packages/tensorflow/python/autograph/impl/api.py:377, in converted_call(f, args, kwargs, caller_fn_scope, options)
374 return _call_unconverted(f, args, kwargs, options)
376 if not options.user_requested and conversion.is_allowlisted(f):
--> 377 return _call_unconverted(f, args, kwargs, options)
379 # internal_convert_user_code is for example turned off when issuing a dynamic
380 # call conversion from generated code while in nonrecursive mode. In that
381 # case we evidently don't want to recurse, but we still have to convert
382 # things like builtins.
383 if not options.internal_convert_user_code:
File ~/.local/lib/python3.9/site-packages/tensorflow/python/autograph/impl/api.py:458, in _call_unconverted(f, args, kwargs, options, update_cache)
455 return f.__self__.call(args, kwargs)
457 if kwargs is not None:
--> 458 return f(*args, **kwargs)
459 return f(*args)
File ~/.local/lib/python3.9/site-packages/tensorflow/python/data/ops/dataset_ops.py:5719, in _GroupByWindowDataset._make_key_func.<locals>.key_func_wrapper(*args)
5718 def key_func_wrapper(*args):
-> 5719 return ops.convert_to_tensor(key_func(*args), dtype=dtypes.int64)
File ~/.local/lib/python3.9/site-packages/opennmt/data/dataset.py:442, in batch_sequence_dataset.<locals>._key_func(*args)
437 raise ValueError(
438 "%d length functions were passed but this dataset contains "
439 "%d parallel elements" % (len(length_fns), len(args))
440 )
441 # Take the highest bucket id.
--> 442 bucket_id = tf.reduce_max(
443 [
444 _get_bucket_id(features, length_fn)
445 for features, length_fn in zip(args, length_fns)
446 ]
447 )
448 return tf.cast(bucket_id, tf.int64)
File ~/.local/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py:153, in filter_traceback.<locals>.error_handler(*args, **kwargs)
151 except Exception as e:
152 filtered_tb = _process_traceback_frames(e.__traceback__)
--> 153 raise e.with_traceback(filtered_tb) from None
154 finally:
155 del filtered_tb
File ~/.local/lib/python3.9/site-packages/tensorflow/python/ops/array_ops.py:1506, in _autopacking_helper(list_or_tuple, dtype, name)
1504 if isinstance(elem, core.Tensor):
1505 if dtype is not None and elem.dtype.base_dtype != dtype:
-> 1506 raise TypeError(f"Cannot convert a list containing a tensor of dtype "
1507 f"{elem.dtype} to {dtype} (Tensor is: {elem!r})")
1508 converted_elems.append(elem)
1509 must_pack = True
TypeError: Cannot convert a list containing a tensor of dtype <dtype: 'int32'> to <dtype: 'int64'> (Tensor is: <tf.Tensor 'Const_1:0' shape=() dtype=int32>)
```
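For illustration, a minimal sketch of the dtype mismatch and the one-line cast the patch above applies; the int64 constant stands in for a length parsed from the sequence record, which seemingly ended up mixed with int32 lengths from other inputters in the bucketing key function:
```python
import tensorflow as tf

# The parsed sequence length came back as int64 while the bucketing key
# function mixes lengths with int32 values; casting to int32 keeps all
# operands in the reduced list on the same dtype.
length = tf.cast(tf.constant(3, dtype=tf.int64), tf.int32)
bucket_id = tf.reduce_max([length, tf.constant(0, dtype=tf.int32)])
print(int(bucket_id))
```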
| 2022-06-28T16:04:50 |
|
OpenNMT/OpenNMT-tf | 974 | OpenNMT__OpenNMT-tf-974 | [
"973"
] | 5c26fdf8365ee34e4bc2b9de4da370ee73f9e8a7 | diff --git a/opennmt/utils/misc.py b/opennmt/utils/misc.py
--- a/opennmt/utils/misc.py
+++ b/opennmt/utils/misc.py
@@ -3,6 +3,7 @@
import collections
import copy
import functools
+import gzip
import heapq
import io
import os
@@ -221,7 +222,8 @@ def item_or_tuple(x):
def count_lines(filename, buffer_size=65536):
"""Returns the number of lines of the file :obj:`filename`."""
- with tf.io.gfile.GFile(filename, mode="rb") as f:
+ file_class = gzip.open if is_gzip_file(filename) else tf.io.gfile.GFile
+ with file_class(filename, mode="rb") as f:
num_lines = 0
while True:
data = f.read(buffer_size)
| diff --git a/opennmt/tests/misc_test.py b/opennmt/tests/misc_test.py
--- a/opennmt/tests/misc_test.py
+++ b/opennmt/tests/misc_test.py
@@ -1,4 +1,6 @@
+import gzip
import itertools
+import os
import numpy as np
import tensorflow as tf
@@ -181,6 +183,16 @@ def testRelativeConfig(self):
with self.assertRaisesRegex(KeyError, "a_3"):
_ = config["3"]
+ def testCountLinesGzip(self):
+ expected_num_lines = 42
+
+ path = os.path.join(self.get_temp_dir(), "file.gz")
+ with gzip.open(path, "wt") as f:
+ for i in range(expected_num_lines):
+ f.write("%d\n" % i)
+
+ self.assertEqual(misc.count_lines(path), expected_num_lines)
+
if __name__ == "__main__":
tf.test.main()
| Parallel text inputter fails with compressed text files
Hello,
When trying to train an NMT model using GZIP-compressed text files (see [this doc](https://opennmt.net/OpenNMT-tf/data.html#compressed-data)) with **OpenNMT-tf 2.29.0**, the following exception is raised: `RuntimeError: Parallel datasets do not have the same size`
The issue seems to lie in the `count_lines` function of https://github.com/OpenNMT/OpenNMT-tf/blob/master/opennmt/utils/misc.py#L222, which is not able to deal properly with gzipped files.
Here is how to reproduce the issue with the toy-ende data from https://opennmt.net/OpenNMT-tf/quickstart.html#step-1-prepare-the-data
```
import gzip
from pathlib import Path
from opennmt.utils.misc import count_lines
def create_gzip(original_file):
gziped_file = f'{original_file}.gz'
with open(original_file, 'rt', newline="\n") as f:
lines = f.readlines()
with gzip.open(gziped_file, 'wt', encoding="utf-8") as f:
for line in lines:
f.write(line)
return gziped_file
base_path = Path('/home/jovyan/mt/opennmt_toy_example/toy-ende')
print('txt src:', count_lines(base_path / "src-train.txt"))
print('txt tgt:', count_lines(base_path / "tgt-train.txt"))
print('gzip src:', count_lines(create_gzip(base_path / "src-train.txt")))
print('gzip tgt:', count_lines(create_gzip(base_path / "tgt-train.txt")))
```
This will output the following:
```
txt src: 10000
txt tgt: 10000
gzip src: 1841
gzip tgt: 2035
```
We can check that the compressed files have indeed the same number of lines:
```
zcat src-train.txt.gz | wc -l
10000
```
```
zcat tgt-train.txt.gz | wc -l
10000
```
| 2022-09-28T09:25:13 |
|
OpenNMT/OpenNMT-tf | 982 | OpenNMT__OpenNMT-tf-982 | [
"981"
] | 7d59ed3c06024b148b43491c5ee5d264310c9924 | diff --git a/opennmt/utils/losses.py b/opennmt/utils/losses.py
--- a/opennmt/utils/losses.py
+++ b/opennmt/utils/losses.py
@@ -164,6 +164,7 @@ def guided_alignment_cost(
sample_weight = None
normalizer = tf.size(attention_probs)
+ attention_probs = tf.cast(attention_probs, tf.float32)
cost = loss(gold_alignment, attention_probs, sample_weight=sample_weight)
cost /= tf.cast(normalizer, cost.dtype)
return weight * cost
| diff --git a/opennmt/tests/model_test.py b/opennmt/tests/model_test.py
--- a/opennmt/tests/model_test.py
+++ b/opennmt/tests/model_test.py
@@ -300,6 +300,22 @@ def testSequenceToSequenceWithGuidedAlignment(self, ga_type):
loss = model.compute_loss(outputs, labels, training=True)
loss = loss[0] / loss[1]
+ @test_util.run_with_mixed_precision
+ def testSequenceToSequenceWithGuidedAlignmentMixedPrecision(self):
+ model, params = _seq2seq_model(training=True)
+ params["guided_alignment_type"] = "ce"
+ features_file, labels_file, data_config = self._makeToyEnDeData(
+ with_alignments=True
+ )
+ model.initialize(data_config, params=params)
+ model.create_variables()
+ dataset = model.examples_inputter.make_training_dataset(
+ features_file, labels_file, 16
+ )
+ features, labels = next(iter(dataset))
+ outputs, _ = model(features, labels=labels, training=True)
+ model.compute_loss(outputs, labels, training=True)
+
def testSequenceToSequenceWithGuidedAlignmentAndWeightedDataset(self):
model, _ = _seq2seq_model()
features_file, labels_file, data_config = self._makeToyEnDeData(
| Training fails with --mixed_precision and guided alignments
This occurs with OpenNMT-tf 2.28.0. I have a base model that is trained without --mixed_precision (as my models quite consistently fail to train with FP16, resulting in NaN after about 20k iterations) and without guided alignments. When starting a fine-tuning process with guided alignments and --mixed_precision, the following error occurs. Training starts just fine without the --mixed_precision flag:
```
2022-11-05 07:09:26.242000: I main.py:318] Accumulate gradients of 8 iterations to reach effective batch size of 64
2022-11-05 07:09:27.756000: I dataset_ops.py:2270] Training on 1190598 examples
Traceback (most recent call last):
File "opennmt/bin/main.py", line 361, in <module>
main()
File "opennmt/bin/main.py", line 318, in main
runner.train(
File "/ai/onmt/onmt-2280/OpenNMT-tf/opennmt/runner.py", line 281, in train
summary = trainer(
File "/ai/onmt/onmt-2280/OpenNMT-tf/opennmt/training.py", line 109, in __call__
for i, loss in enumerate(
File "/ai/onmt/onmt-2280/OpenNMT-tf/opennmt/training.py", line 248, in _steps
loss = forward_fn()
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/tmp/__autograph_generated_fileoze0_yzx.py", line 16, in tf___forward
retval_ = ag__.converted_call(ag__.ld(self)._forward, (ag__.converted_call(ag__.ld(next), (ag__.ld(iterator),), None, fscope),), dict(accum_steps=ag__.ld(accum_steps), report_steps=ag__.ld(report_steps)), fscope)
File "/tmp/__autograph_generated_file0y79rrw1.py", line 11, in tf___forward
(loss, gradients) = ag__.converted_call(ag__.ld(self)._compute_gradients, (ag__.ld(batch), ag__.ld(accum_steps), ag__.ld(report_steps)), None, fscope)
File "/tmp/__autograph_generated_filex26rbucc.py", line 14, in tf___compute_gradients
(reported_loss, gradients) = ag__.converted_call(ag__.ld(self)._model.compute_gradients, (ag__.ld(features), ag__.ld(labels), ag__.ld(self)._optimizer), dict(loss_scale=(ag__.ld(accum_steps) * ag__.ld(self).num_replicas)), fscope)
File "/tmp/__autograph_generated_file_m3r7w1w.py", line 98, in tf__compute_gradients
ag__.if_stmt(ag__.converted_call(ag__.ld(tf).executing_eagerly, (), None, fscope), if_body_3, else_body_3, get_state_3, set_state_3, ('gradients', 'report_loss'), 2)
File "/tmp/__autograph_generated_file_m3r7w1w.py", line 92, in else_body_3
(train_loss, report_loss) = ag__.converted_call(ag__.ld(_compute_loss), (), None, fscope)
File "/tmp/__autograph_generated_file_m3r7w1w.py", line 17, in _compute_loss
(train_loss, report_loss) = ag__.converted_call(ag__.ld(self).compute_training_loss, (ag__.ld(features), ag__.ld(labels)), dict(step=ag__.ld(optimizer).iterations), fscope_1)
File "/tmp/__autograph_generated_file7d1kbnh_.py", line 12, in tf__compute_training_loss
loss = ag__.converted_call(ag__.ld(self).compute_loss, (ag__.ld(outputs), ag__.ld(labels)), dict(training=True), fscope)
File "/tmp/__autograph_generated_filegf82ikml.py", line 112, in tf__compute_loss
ag__.if_stmt(ag__.and_((lambda : (ag__.ld(noisy_logits) is not None)), (lambda : ag__.converted_call(ag__.ld(params).get, ('contrastive_learning',), None, fscope))), if_body_4, else_body_4, get_state_4, set_state_4, ('do_return', 'retval_'), 2)
File "/tmp/__autograph_generated_filegf82ikml.py", line 100, in else_body_4
ag__.if_stmt(ag__.ld(training), if_body_3, else_body_3, get_state_3, set_state_3, ('loss',), 1)
File "/tmp/__autograph_generated_filegf82ikml.py", line 93, in if_body_3
ag__.if_stmt(ag__.and_((lambda : (ag__.ld(gold_alignments) is not None)), (lambda : (ag__.ld(guided_alignment_type) is not None))), if_body_2, else_body_2, get_state_2, set_state_2, ('loss',), 1)
File "/tmp/__autograph_generated_filegf82ikml.py", line 88, in if_body_2
ag__.if_stmt((ag__.ld(attention) is None), if_body_1, else_body_1, get_state_1, set_state_1, ('loss',), 1)
File "/tmp/__autograph_generated_filegf82ikml.py", line 87, in else_body_1
loss += ag__.converted_call(losses.guided_alignment_cost, (attention[:, :(- 1)], gold_alignments), dict(sequence_length=ag__.converted_call(self.labels_inputter.get_length, (labels,), dict(ignore_special_tokens=True), fscope), cost_type=guided_alignment_type, weight=ag__.converted_call(params.get, ('guided_alignment_weight', 1), None, fscope)), fscope)
TypeError: in user code:
File "/ai/onmt/onmt-2280/OpenNMT-tf/opennmt/training.py", line 233, in _forward *
next(iterator),
File "/ai/onmt/onmt-2280/OpenNMT-tf/opennmt/training.py", line 316, in _forward *
loss, gradients = self._compute_gradients(
File "/ai/onmt/onmt-2280/OpenNMT-tf/opennmt/training.py", line 298, in _compute_gradients *
reported_loss, gradients = self._model.compute_gradients(
File "/ai/onmt/onmt-2280/OpenNMT-tf/opennmt/models/model.py", line 223, in _compute_loss *
train_loss, report_loss = self.compute_training_loss(
File "/ai/onmt/onmt-2280/OpenNMT-tf/opennmt/models/model.py", line 264, in compute_training_loss *
loss = self.compute_loss(outputs, labels, training=True)
File "/ai/onmt/onmt-2280/OpenNMT-tf/opennmt/models/sequence_to_sequence.py", line 457, in compute_loss *
loss += losses.guided_alignment_cost(
TypeError: Input 'y' of 'AddV2' Op has type float16 that does not match type float32 of argument 'x'.
```
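To make the dtype clash concrete, here is a minimal standalone sketch (tensor shapes are made up for illustration): under mixed precision the attention probabilities come out as float16 while the gold alignments stay float32, and the fix is to cast before computing the loss.
```
import tensorflow as tf

# Under mixed precision, layer outputs such as attention probabilities are
# float16, while the gold alignment labels remain float32.
attention_probs = tf.random.uniform([2, 4], dtype=tf.float16)
gold_alignment = tf.random.uniform([2, 4], dtype=tf.float32)

# Without this cast, mixing the two dtypes triggers errors like the AddV2
# type mismatch above.
attention_probs = tf.cast(attention_probs, tf.float32)
cost = tf.keras.losses.CategoricalCrossentropy()(gold_alignment, attention_probs)
```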
| 2022-11-07T11:15:36 |
|
OpenNMT/OpenNMT-tf | 986 | OpenNMT__OpenNMT-tf-986 | [
"985"
] | a62b886f28c66451bc2bc04f155d20fce32ae93a | diff --git a/opennmt/training.py b/opennmt/training.py
--- a/opennmt/training.py
+++ b/opennmt/training.py
@@ -346,6 +346,15 @@ def is_master(self):
def num_replicas(self):
return self._hvd.size()
+ def _evaluate(self, evaluator, step, moving_average=None):
+ should_stop = super()._evaluate(evaluator, step, moving_average)
+ # Evaluation is only performed on master, but we want all workers
+ # to be aware of the early stopping decision.
+ should_stop = self._hvd.broadcast_object(
+ should_stop, root_rank=0, name="should_stop"
+ )
+ return should_stop
+
def _finalize_dataset(self, dataset):
if callable(dataset):
dataset = dataset(
| "Horovod has been shut down" error when training is finished due to early stopping
Hello,
I'm training a Transformer model using OpenNMT-tf 2.29.0 with Horovod on a single node with 4 GPUs, but if the training is finished due to early stopping (in the example below, early stopping kicked in at 600k steps but 1M steps was configured), Horovod complains that it has been shut down and the script exits with a non-zero status.
This is in itself not a huge problem, since the model is still saved and exported properly, but it causes trouble when using OpenNMT in an Azure ML workspace, where the training step might be followed by others.
I tried to look into it, and the checkpoint averaging triggered here https://github.com/OpenNMT/OpenNMT-tf/blob/v2.29.0/opennmt/runner.py#L304-L307 seems to be done properly, but I don't understand why training steps and gradient computation would still be performed after the checkpoint averaging; I'm no Horovod expert :)
```
[1,0]<stderr>:2022-10-08 05:23:59.894000: I runner.py:362] Restored checkpoint ./logs/ckpt-600000
[1,0]<stderr>:INFO:tensorflow:Restored checkpoint ./logs/ckpt-600000
[1,0]<stderr>:2022-10-08 05:24:02.028000: I runner.py:365] Averaging 5 checkpoints...
[1,0]<stderr>:INFO:tensorflow:Averaging 5 checkpoints...
[1,0]<stderr>:2022-10-08 05:24:02.028000: I runner.py:365] Reading checkpoint ./logs/ckpt-560000...
[1,0]<stderr>:INFO:tensorflow:Reading checkpoint ./logs/ckpt-560000...
[1,0]<stderr>:2022-10-08 05:24:02.496000: I runner.py:365] Reading checkpoint ./logs/ckpt-570000...
[1,0]<stderr>:INFO:tensorflow:Reading checkpoint ./logs/ckpt-570000...
[1,0]<stderr>:2022-10-08 05:24:02.944000: I runner.py:365] Reading checkpoint ./logs/ckpt-580000...
[1,0]<stderr>:INFO:tensorflow:Reading checkpoint ./logs/ckpt-580000...
[1,0]<stderr>:2022-10-08 05:24:03.396000: I runner.py:365] Reading checkpoint ./logs/ckpt-590000...
[1,0]<stderr>:INFO:tensorflow:Reading checkpoint ./logs/ckpt-590000...
[1,0]<stderr>:2022-10-08 05:24:03.844000: I runner.py:365] Reading checkpoint ./logs/ckpt-600000...
[1,0]<stderr>:INFO:tensorflow:Reading checkpoint ./logs/ckpt-600000...
[1,0]<stderr>:2022-10-08 05:24:06.440000: I runner.py:365] Saved averaged checkpoint to ./logs/avg/ckpt-600000
[1,0]<stderr>:INFO:tensorflow:Saved averaged checkpoint to ./logs/avg/ckpt-600000
[1,1]<stdout>:9913eb9c2e4c4db3bd484228cfe17c1a000000:160:368 [1] NCCL INFO comm 0x7f252b96a910 rank 1 nranks 4 cudaDev 1 busId 70f800000 - Destroy COMPLETE
[1,3]<stdout>:9913eb9c2e4c4db3bd484228cfe17c1a000000:162:366 [3] NCCL INFO comm 0x7f6dcf96a340 rank 3 nranks 4 cudaDev 3 busId a66000000 - Destroy COMPLETE
[1,2]<stdout>:9913eb9c2e4c4db3bd484228cfe17c1a000000:161:362 [2] NCCL INFO comm 0x7f9b5f96a060 rank 2 nranks 4 cudaDev 2 busId 973200000 - Destroy COMPLETE
[1,0]<stdout>:9913eb9c2e4c4db3bd484228cfe17c1a000000:159:355 [0] NCCL INFO comm 0x7f9a7b99eef0 rank 0 nranks 4 cudaDev 0 busId 4f5100000 - Destroy COMPLETE
[1,3]<stderr>:Traceback (most recent call last):
[1,3]<stderr>: File "/usr/local/bin/onmt-main", line 8, in <module>
[1,3]<stderr>: sys.exit(main())
[1,3]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/bin/main.py", line 318, in main
[1,3]<stderr>: runner.train(
[1,3]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/runner.py", line 289, in train
[1,3]<stderr>: summary = trainer(
[1,3]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 109, in __call__
[1,3]<stderr>: for i, loss in enumerate(
[1,3]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 260, in _steps
[1,3]<stderr>: step_fn()
[1,3]<stderr>: File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
[1,3]<stderr>: raise e.with_traceback(filtered_tb) from None
[1,3]<stderr>: File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py", line 54, in quick_execute
[1,3]<stderr>: tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
[1,3]<stderr>:tensorflow.python.framework.errors_impl.UnknownError: Graph execution error:
[1,3]<stderr>:
[1,3]<stderr>:Detected at node 'HorovodAllreduce_ReadVariableOp_22_0' defined at (most recent call last):
[1,3]<stderr>: File "/usr/local/bin/onmt-main", line 8, in <module>
[1,3]<stderr>: sys.exit(main())
[1,3]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/bin/main.py", line 318, in main
[1,3]<stderr>: runner.train(
[1,3]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/runner.py", line 289, in train
[1,3]<stderr>: summary = trainer(
[1,3]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 109, in __call__
[1,3]<stderr>: for i, loss in enumerate(
[1,3]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 260, in _steps
[1,3]<stderr>: step_fn()
[1,3]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 239, in _step
[1,3]<stderr>: return self._step()
[1,3]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 329, in _step
[1,3]<stderr>: self._apply_gradients(self._gradient_accumulator.gradients)
[1,3]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 381, in _apply_gradients
[1,3]<stderr>: return super()._apply_gradients(map(self._all_reduce_sum, gradients))
[1,3]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 310, in _apply_gradients
[1,3]<stderr>: self._optimizer.apply_gradients(
[1,3]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 388, in _all_reduce_sum
[1,3]<stderr>: return self._hvd.allreduce(value, op=self._hvd.Sum)
[1,3]<stderr>: File "/usr/local/lib/python3.8/dist-packages/horovod/tensorflow/__init__.py", line 125, in allreduce
[1,3]<stderr>: summed_tensor_compressed = _allreduce(tensor_compressed, op=op,
[1,3]<stderr>: File "/usr/local/lib/python3.8/dist-packages/horovod/tensorflow/mpi_ops.py", line 127, in _allreduce
[1,3]<stderr>: return MPI_LIB.horovod_allreduce(tensor, name=name, reduce_op=op,
[1,3]<stderr>: File "<string>", line 107, in horovod_allreduce
[1,3]<stderr>:Node: 'HorovodAllreduce_ReadVariableOp_22_0'
[1,3]<stderr>:Horovod has been shut down. This was caused by an exception on one of the ranks or an attempt to allreduce, allgather or broadcast a tensor after one of the ranks finished execution. If the shutdown was caused by an exception, you should see the exception in the log before the first shutdown message.
[1,3]<stderr>: [[{{node HorovodAllreduce_ReadVariableOp_22_0}}]] [Op:__inference__step_50022]
[1,2]<stderr>:Traceback (most recent call last):
[1,2]<stderr>: File "/usr/local/bin/onmt-main", line 8, in <module>
[1,2]<stderr>: sys.exit(main())
[1,2]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/bin/main.py", line 318, in main
[1,2]<stderr>: runner.train(
[1,2]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/runner.py", line 289, in train
[1,2]<stderr>: summary = trainer(
[1,2]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 109, in __call__
[1,2]<stderr>: for i, loss in enumerate(
[1,2]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 260, in _steps
[1,2]<stderr>: step_fn()
[1,2]<stderr>: File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
[1,2]<stderr>: raise e.with_traceback(filtered_tb) from None
[1,2]<stderr>: File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py", line 54, in quick_execute
[1,2]<stderr>: tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
[1,2]<stderr>:tensorflow.python.framework.errors_impl.UnknownError: Graph execution error:
[1,2]<stderr>:
[1,2]<stderr>:Detected at node 'HorovodAllreduce_ReadVariableOp_165_0' defined at (most recent call last):
[1,2]<stderr>: File "/usr/local/bin/onmt-main", line 8, in <module>
[1,2]<stderr>: sys.exit(main())
[1,2]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/bin/main.py", line 318, in main
[1,2]<stderr>: runner.train(
[1,2]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/runner.py", line 289, in train
[1,2]<stderr>: summary = trainer(
[1,2]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 109, in __call__
[1,2]<stderr>: for i, loss in enumerate(
[1,2]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 260, in _steps
[1,2]<stderr>: step_fn()
[1,2]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 239, in _step
[1,2]<stderr>: return self._step()
[1,2]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 329, in _step
[1,2]<stderr>: self._apply_gradients(self._gradient_accumulator.gradients)
[1,2]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 381, in _apply_gradients
[1,2]<stderr>: return super()._apply_gradients(map(self._all_reduce_sum, gradients))
[1,2]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 310, in _apply_gradients
[1,2]<stderr>: self._optimizer.apply_gradients(
[1,2]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 388, in _all_reduce_sum
[1,2]<stderr>: return self._hvd.allreduce(value, op=self._hvd.Sum)
[1,2]<stderr>: File "/usr/local/lib/python3.8/dist-packages/horovod/tensorflow/__init__.py", line 125, in allreduce
[1,2]<stderr>: summed_tensor_compressed = _allreduce(tensor_compressed, op=op,
[1,2]<stderr>: File "/usr/local/lib/python3.8/dist-packages/horovod/tensorflow/mpi_ops.py", line 127, in _allreduce
[1,2]<stderr>: return MPI_LIB.horovod_allreduce(tensor, name=name, reduce_op=op,
[1,2]<stderr>: File "<string>", line 107, in horovod_allreduce
[1,2]<stderr>:Node: 'HorovodAllreduce_ReadVariableOp_165_0'
[1,2]<stderr>:Horovod has been shut down. This was caused by an exception on one of the ranks or an attempt to allreduce, allgather or broadcast a tensor after one of the ranks finished execution. If the shutdown was caused by an exception, you should see the exception in the log before the first shutdown message.
[1,2]<stderr>: [[{{node HorovodAllreduce_ReadVariableOp_165_0}}]] [Op:__inference__step_50013]
[1,1]<stderr>:Traceback (most recent call last):
[1,1]<stderr>: File "/usr/local/bin/onmt-main", line 8, in <module>
[1,1]<stderr>: sys.exit(main())
[1,1]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/bin/main.py", line 318, in main
[1,1]<stderr>: runner.train(
[1,1]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/runner.py", line 289, in train
[1,1]<stderr>: summary = trainer(
[1,1]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 109, in __call__
[1,1]<stderr>: for i, loss in enumerate(
[1,1]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 260, in _steps
[1,1]<stderr>: step_fn()
[1,1]<stderr>: File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
[1,1]<stderr>: raise e.with_traceback(filtered_tb) from None
[1,1]<stderr>: File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py", line 54, in quick_execute
[1,1]<stderr>: tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
[1,1]<stderr>:tensorflow.python.framework.errors_impl.UnknownError: Graph execution error:
[1,1]<stderr>:
[1,1]<stderr>:Detected at node 'HorovodAllreduce_ReadVariableOp_127_0' defined at (most recent call last):
[1,1]<stderr>: File "/usr/local/bin/onmt-main", line 8, in <module>
[1,1]<stderr>: sys.exit(main())
[1,1]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/bin/main.py", line 318, in main
[1,1]<stderr>: runner.train(
[1,1]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/runner.py", line 289, in train
[1,1]<stderr>: summary = trainer(
[1,1]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 109, in __call__
[1,1]<stderr>: for i, loss in enumerate(
[1,1]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 260, in _steps
[1,1]<stderr>: step_fn()
[1,1]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 239, in _step
[1,1]<stderr>: return self._step()
[1,1]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 329, in _step
[1,1]<stderr>: self._apply_gradients(self._gradient_accumulator.gradients)
[1,1]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 381, in _apply_gradients
[1,1]<stderr>: return super()._apply_gradients(map(self._all_reduce_sum, gradients))
[1,1]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 310, in _apply_gradients
[1,1]<stderr>: self._optimizer.apply_gradients(
[1,1]<stderr>: File "/usr/local/lib/python3.8/dist-packages/opennmt/training.py", line 388, in _all_reduce_sum
[1,1]<stderr>: return self._hvd.allreduce(value, op=self._hvd.Sum)
[1,1]<stderr>: File "/usr/local/lib/python3.8/dist-packages/horovod/tensorflow/__init__.py", line 125, in allreduce
[1,1]<stderr>: summed_tensor_compressed = _allreduce(tensor_compressed, op=op,
[1,1]<stderr>: File "/usr/local/lib/python3.8/dist-packages/horovod/tensorflow/mpi_ops.py", line 127, in _allreduce
[1,1]<stderr>: return MPI_LIB.horovod_allreduce(tensor, name=name, reduce_op=op,
[1,1]<stderr>: File "<string>", line 107, in horovod_allreduce
[1,1]<stderr>:Node: 'HorovodAllreduce_ReadVariableOp_127_0'
[1,1]<stderr>:Horovod has been shut down. This was caused by an exception on one of the ranks or an attempt to allreduce, allgather or broadcast a tensor after one of the ranks finished execution. If the shutdown was caused by an exception, you should see the exception in the log before the first shutdown message.
[1,1]<stderr>: [[{{node HorovodAllreduce_ReadVariableOp_127_0}}]] [Op:__inference__step_49995]
```
Command used to launch the training:
`horovodrun -np 4 -H localhost:4 onmt-main --model document_context_model.py --config training_config.yml model_config.yml /mnt/azureml/cr/j/84cb509ce83e4ac59d39e94ee62b0ad2/cap/data-capability/wd/INPUT_input__6593fad7/data_conf.yml --data_dir /mnt/azureml/cr/j/84cb509ce83e4ac59d39e94ee62b0ad2/cap/data-capability/wd/INPUT_input__6593fad7 --auto_config --mixed_precision train --with_eval --horovod`
| Hi,
The evaluation is only run in the master process, so the other workers are not aware that the early stopping condition is met. This information should somehow be communicated to the other workers (probably with `hvd.broadcast()`).
Do you want to look into that? The related code is located in `opennmt/training.py`. We could extend the method `_evaluate` in `HorovodTrainer` and implement additional logic to send the early-stop flag to all workers.
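A rough sketch of that idea, assuming `hvd` is the initialized `horovod.tensorflow` module and `should_stop` holds the master's early-stopping decision:
```
# Only rank 0 runs the evaluation, so share its decision with every worker;
# hvd.broadcast_object serializes an arbitrary Python object from root_rank.
should_stop = hvd.broadcast_object(should_stop, root_rank=0, name="should_stop")
```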
Thanks a lot for the info @guillaumekln , I'll have a look then and will create a PR once it's working :) | 2022-12-05T14:19:31 |
|
OpenNMT/OpenNMT-tf | 997 | OpenNMT__OpenNMT-tf-997 | [
"996"
] | 710f0ee09d0283033f5b6c1ee3a8a78b22c174ea | diff --git a/opennmt/runner.py b/opennmt/runner.py
--- a/opennmt/runner.py
+++ b/opennmt/runner.py
@@ -157,6 +157,7 @@ def _finalize_config(self, training=False, num_replicas=1, num_devices=1):
num_devices=num_devices,
scaling_factor=train_config.get("batch_size_autotune_scale", 0.7),
mixed_precision=self._mixed_precision,
+ timeout=train_config.get("batch_size_autotune_timeout", 15 * 60),
)
tf.get_logger().info(
| Make timeout value configurable while searching for an optimal batch size
In `_auto_tune_batch_size`, the timeout value is hard-coded to 15 minutes. For very large amounts of data, the current workaround is to find a batch size manually by trying different values. It would be nice if we could increase the timeout by setting it as one of the training parameters.
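Assuming the option ends up in the `train` block of the run configuration, next to the other autotune settings, usage might look like:
```
train:
  batch_size_autotune_scale: 0.7
  batch_size_autotune_timeout: 3600  # seconds; the previous hard-coded value was 15 * 60
```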
| 2023-02-28T17:36:46 |
||
ktbyers/netmiko | 59 | ktbyers__netmiko-59 | [
"49"
] | 119ac5d0ee40c1096543955fc5faeeb5cb993fca | diff --git a/netmiko/base_connection.py b/netmiko/base_connection.py
--- a/netmiko/base_connection.py
+++ b/netmiko/base_connection.py
@@ -322,6 +322,22 @@ def check_config_mode(self):
pass
+ def send_config_file(self, config_file=None, commit=False):
+ '''
+ Parse a configuration file and relay the data to
+ self.send_config_set()
+ '''
+
+ config_commands = []
+ try:
+ for line in open(config_file, "r"):
+ config_commands.append(line.strip())
+ except IOError as (errno, strerr):
+ print "I/O Error {0}: {1}".format(errno, strerr)
+
+ return self.send_config_set(config_commands=config_commands,
+ commit=commit)
+
def send_config_set(self, config_commands=None, commit=False):
'''
Send in a set of configuration commands as a list
| Add a way to read commands from a file for send_command_set()
| I asked about this a few days ago. However, I think that there may be some things to consider. Creating an array of the file's lines may be bad from a performance perspective.
This suggestion may fall under "[premature optimization is the root of all evil](http://en.wikipedia.org/wiki/Program_optimization#Quotes)", though. Take it with a grain of salt. :)
Also, as I require this feature for an upcoming project, I may implement it (crudely) and submit a PR with the change.
I think we should probably keep it simple...just read in the whole file and parse it into a list.
I don't see the performance of chunking as mattering.
The performance aspect that possibly matters is how long it takes to send a whole bunch of configuration commands, since send_config_set() calls send_command(), which by default delays 1 second per command.
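For reference, a Python 3 flavored sketch of that keep-it-simple approach (the patch in this PR uses Python 2 idioms):
```
def send_config_file(self, config_file=None, commit=False):
    """Read configuration commands from a file and relay them to send_config_set()."""
    with open(config_file) as f:
        config_commands = [line.strip() for line in f]
    return self.send_config_set(config_commands=config_commands, commit=commit)
```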
I've written the code for this. I'll be submitting a PR later today or tomorrow.
| 2015-03-07T18:24:20 |
|
ktbyers/netmiko | 127 | ktbyers__netmiko-127 | [
"126"
] | 2e227bfd3f2bc2c5958108a4e635de9dbb926aaf | diff --git a/netmiko/base_connection.py b/netmiko/base_connection.py
--- a/netmiko/base_connection.py
+++ b/netmiko/base_connection.py
@@ -143,6 +143,20 @@ def disable_paging(self, command="terminal length 0\n", delay_factor=.5):
return output
+ def wait_for_recv_ready(self, delay_factor, max_loops):
+ '''
+ Wait for data to be in the buffer so it can be received.
+
+ delay_factor can be used to increase the delays.
+
+ max_loops can be used to increase the number of times it reads the data buffer
+ '''
+ i = 0
+ while not self.remote_conn.recv_ready() and i < max_loops:
+ time.sleep(1 * delay_factor)
+ i += 1
+
+
def set_base_prompt(self, pri_prompt_terminator='#',
alt_prompt_terminator='>', delay_factor=.5):
'''
@@ -164,7 +178,8 @@ def set_base_prompt(self, pri_prompt_terminator='#',
self.clear_buffer()
self.remote_conn.sendall("\n")
- time.sleep(1 * delay_factor)
+
+ self.wait_for_recv_ready(delay_factor, 10)
prompt = self.remote_conn.recv(MAX_BUFFER).decode('utf-8')
@@ -238,7 +253,6 @@ def clear_buffer(self):
else:
return None
-
def send_command(self, command_string, delay_factor=.5, max_loops=30,
strip_prompt=True, strip_command=True):
'''
@@ -270,18 +284,14 @@ def send_command(self, command_string, delay_factor=.5, max_loops=30,
self.remote_conn.sendall(command_string)
- time.sleep(1 * delay_factor)
- not_done = True
- i = 1
-
- while (not_done) and (i <= max_loops):
- time.sleep(1 * delay_factor)
- i += 1
- # Keep reading data as long as available (up to max_loops)
- if self.remote_conn.recv_ready():
- output += self.remote_conn.recv(MAX_BUFFER).decode('utf-8')
- else:
- not_done = False
+ # Wait for recv_ready()
+ self.wait_for_recv_ready(delay_factor, max_loops)
+
+ # Keep reading data as long as available (up to max_loops)
+ i = 0
+ while self.remote_conn.recv_ready() and i < max_loops:
+ output += self.remote_conn.recv(MAX_BUFFER).decode('utf-8')
+ self.wait_for_recv_ready(delay_factor, max_loops)
# Some platforms have ansi_escape codes
if self.ansi_escape_codes:
| delay_factor for connection initialization
Hi Kirk,
We intermittently get this stack trace when connecting to Nexus devices:
```
SSH connection established to X.X.X.X:22
Interactive SSH session established
Traceback (most recent call last):
File "/Users/mzb/.ansible/tmp/ansible-tmp-1447101573.0-35515167451694/ntc_show_command", line 1843, in <module>
main()
File "/Users/mzb/.ansible/tmp/ansible-tmp-1447101573.0-35515167451694/ntc_show_command", line 211, in main
password=password
File "/Library/Python/2.7/site-packages/netmiko/ssh_dispatcher.py", line 54, in ConnectHandler
return ConnectionClass(*args, **kwargs)
File "/Library/Python/2.7/site-packages/netmiko/base_connection.py", line 45, in __init__
self.session_preparation()
File "/Library/Python/2.7/site-packages/netmiko/base_connection.py", line 61, in session_preparation
self.set_base_prompt()
File "/Library/Python/2.7/site-packages/netmiko/base_connection.py", line 187, in set_base_prompt
raise ValueError("Router prompt not found: {0}".format(prompt))
ValueError: Router prompt not found:
```
Our belief is that increasing the `delay_factor` in `set_base_prompt()` would solve this by accounting for the latency within the switch itself and whatever network latency there may be. Another option would be pushing the `delay_factor` param up the stack to `ConnectHandler()`. If you like either of those options, we'd be happy to submit a PR.
Thanks,
Michael
| @mzbenami We should probably put in a while loop and a check for recv_ready (something somewhat similar to the following). This is inside the set_base_prompt method.
```
i = 0
while i <= 10:
    time.sleep(1 * delay_factor)
    if self.remote_conn.recv_ready():
        prompt = self.remote_conn.recv(MAX_BUFFER).decode('utf-8')
        break  # data arrived; stop polling
    else:
        i += 1
```
I think I have documented, on a separate issue, converting delay_factor into a variable that gets initialized when the object is created, but that is a lot more work and has a lot more implications.
I will try to work on this today or tomorrow.
| 2015-11-10T22:06:21 |
|
ktbyers/netmiko | 478 | ktbyers__netmiko-478 | [
"477"
] | 42e32092d46ce772d7baf8e0a1b8db5382f89a21 | diff --git a/netmiko/base_connection.py b/netmiko/base_connection.py
--- a/netmiko/base_connection.py
+++ b/netmiko/base_connection.py
@@ -226,7 +226,10 @@ def _read_channel(self):
output = ""
while True:
if self.remote_conn.recv_ready():
- output += self.remote_conn.recv(MAX_BUFFER).decode('utf-8', 'ignore')
+ outbuf = self.remote_conn.recv(MAX_BUFFER)
+ if len(outbuf) == 0:
+ raise EOFError
+ output += outbuf.decode('utf-8', 'ignore')
else:
break
elif self.protocol == 'telnet':
@@ -276,7 +279,10 @@ def _read_channel_expect(self, pattern='', re_flags=0, max_loops=None):
try:
# If no data available will wait timeout seconds trying to read
self._lock_netmiko_session()
- new_data = self.remote_conn.recv(MAX_BUFFER).decode('utf-8', 'ignore')
+ new_data = self.remote_conn.recv(MAX_BUFFER)
+ if len(new_data) == 0:
+ raise EOFError
+ new_data = new_data.decode('utf-8', 'ignore')
log.debug("_read_channel_expect read_data: {}".format(new_data))
output += new_data
except socket.timeout:
| No EOF checks after calling `self.remote_conn.recv()`
Both `_read_channel()` and `_read_channel_expect()` do not check for EOF after calling `self.remote_conn.recv()`. This causes a connection which has closed to raise
`NetMikoTimeoutException("Timed-out reading channel, pattern not found in output: {}".format(pattern))`
This isn't exactly desired behavior, as it makes it hard to discern whether the command just took too long or the connection is dead. Most of the time, a subsequent call to `self.send_command()` will cause paramiko to raise `socket.error("Socket is closed")`, but I have run into a case where the socket only half closed, so the `socket.error` never got raised.
I can fix this, but I'd like guidance on the preferred way to signal this error to the user. `EOFError`? A netmiko exception? Something else?
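For context, the conventional EOF signal on a socket or paramiko channel is a zero-length read; a standalone sketch of the kind of check being proposed:
```
def read_available(channel, max_bytes=65535):
    """Read available data; a zero-byte recv() means the remote side closed."""
    data = channel.recv(max_bytes)
    if len(data) == 0:
        raise EOFError("Channel stream closed by remote device.")
    return data.decode("utf-8", "ignore")
```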
| @kbirkeland `_read_channel()` checks if data is available (in the case of SSH), i.e. it uses the `recv_ready()` call or else it breaks out of the method (so the above should be very unlikely in the case of `_read_channel()`).
Yes, differentiating between the SSH socket closing and not getting the expected data back is probably hard with `_read_channel_expect()`.
Can you just check this?
```
>>> net_connect.remote_conn.get_transport().is_alive()
True
# Terminated the SSH session on the router
>>> net_connect.remote_conn.get_transport().is_alive()
False
```
This is probably not something I will incorporate via a PR (as I don't see enough importance in making a distinction here). In practice, it is usually that the expected pattern didn't come back (and pretty rarely that the SSH connection closed mid-usage).
I am open to arguments that it is important, however.
While it may be unlikely, testing error cases with sockets is a good thing as unexpected things are bound to happen at some point.
The assumption that connections don't usually close mid-usage is probably true for one-shot scripts, but when using a long running connection, such as napalm with salt, this happens quite often if an exec-timeout is set. When the exec-timeout hits, the connection dies and the salt-proxy minion must be restarted. I'll be sending a PR to napalm-ios to reconnect on `socket.error`, but I'd also like to know if the socket received EOF.
@kbirkeland Fair enough on Salt + NAPALM.
If you are going to submit it on napalm-ios, we should probably discuss the most logical place to include it (netmiko or napalm-ios). It's fine with me either way.
Adding @mirceaulinic into this...as this mostly pertains to Salt.
I don't have a preference here (I mainly want to avoid adding much complexity)...so if you have proposals, let me know. | 2017-05-25T20:11:13 |
|
ktbyers/netmiko | 1,030 | ktbyers__netmiko-1030 | [
"1028"
] | 077732f575e82b950ab1f8132b23027f76d694e0 | diff --git a/netmiko/scp_handler.py b/netmiko/scp_handler.py
--- a/netmiko/scp_handler.py
+++ b/netmiko/scp_handler.py
@@ -232,6 +232,11 @@ def _remote_file_size_unix(self, remote_cmd="", remote_file=None):
self.ssh_ctl_chan._enter_shell()
remote_out = self.ssh_ctl_chan.send_command(remote_cmd, expect_string=r"[\$#]")
+ self.ssh_ctl_chan._return_cli()
+
+ if "No such file or directory" in remote_out:
+ raise IOError("Unable to find file on remote system")
+
escape_file_name = re.escape(remote_file)
pattern = r"^.* ({}).*$".format(escape_file_name)
match = re.search(pattern, remote_out, flags=re.M)
@@ -239,9 +244,11 @@ def _remote_file_size_unix(self, remote_cmd="", remote_file=None):
# Format: -rw-r--r-- 1 pyclass wheel 12 Nov 5 19:07 /var/tmp/test3.txt
line = match.group(0)
file_size = line.split()[4]
+ return int(file_size)
- self.ssh_ctl_chan._return_cli()
- return int(file_size)
+ raise ValueError(
+ "Search pattern not found for remote file size during SCP transfer."
+ )
def file_md5(self, file_name):
"""Compute MD5 hash of file."""
| [2.1.1] BaseFileTransfer._remote_file_size_unix() seems to incorrectly parse ls -l output
Hi,
First, thanks for creating and maintaining a great library that is doing a lot of work for us. It's really awesome :-)
I had an issue doing a file_transfer() to retrieve a remote file, using a Linux connection (version/platform details are at the bottom of this post).
```
from netmiko import ConnectHandler, file_transfer
client = ConnectHandler(device_type='linux')  # other args omitted
src = 'remote_dir/export.tar.gz'
dst = 'test_export.tar.gz'
direction = 'get'
res = file_transfer(client, source_file=src, dest_file=dst, direction=direction)
```
This gave the following traceback;
```
/home/proj/test_venv/lib/python3.6/site-packages/netmiko/scp_functions.py in file_transfer(ssh_conn, source_file, dest_file, file_system, direction, disable_md5, inline_transfer, overwrite_file)
68
---> 69 with TransferClass(**scp_args) as scp_transfer:
70 if scp_transfer.check_file_exists():
/home/proj/test_venv/lib/python3.6/site-packages/netmiko/ssh_dispatcher.py in FileTransfer(*args, **kwargs)
208 FileTransferClass = FILE_TRANSFER_MAP[device_type]
--> 209 return FileTransferClass(*args, **kwargs)
/home/proj/test_venv/lib/python3.6/site-packages/netmiko/linux/linux_ssh.py in __init__(self, ssh_conn, source_file, dest_file, file_system, direction)
113 file_system=file_system,
--> 114 direction=direction)
115
/home/proj/test_venv/lib/python3.6/site-packages/netmiko/scp_handler.py in __init__(self, ssh_conn, source_file, dest_file, file_system, direction)
77 self.source_md5 = self.remote_md5(remote_file=source_file)
---> 78 self.file_size = self.remote_file_size(remote_file=source_file)
79 else:
/home/proj/test_venv/lib/python3.6/site-packages/netmiko/linux/linux_ssh.py in remote_file_size(self, remote_cmd, remote_file)
125 """Get the file size of the remote file."""
--> 126 return self._remote_file_size_unix(remote_cmd=remote_cmd, remote_file=remote_file)
127
/home/proj/test_venv/lib/python3.6/site-packages/netmiko/scp_handler.py in _remote_file_size_unix(self, remote_cmd, remote_file)
227 self.ssh_ctl_chan._return_cli()
--> 228 return int(file_size)
229
ValueError: invalid literal for int() with base 10: 'file'
```
Replicating the remote-file-size check in https://github.com/ktbyers/netmiko/blob/master/netmiko/scp_handler.py#L221 it appears that the regex also matches error output when the file is not found:
```
In [60]: remote_out
Out[60]: 'ls: /var/tmp/remote_dir/export.tar.gz: No such file or directory'
In [61]: match = re.search(pattern, remote_out, flags=re.M)
In [62]: match.group(0)
Out[62]: 'ls: /var/tmp/remote_dir/export.tar.gz: No such file or directory'
```
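The fix that was eventually merged (see the patch at the top of this record) adds exactly this kind of guard; a standalone sketch of the idea:
```
import re

def parse_unix_file_size(ls_output, remote_file):
    """Extract the size column from `ls -l` output, refusing to parse error text."""
    if "No such file or directory" in ls_output:
        raise IOError("Unable to find file on remote system")
    pattern = r"^.* ({}).*$".format(re.escape(remote_file))
    match = re.search(pattern, ls_output, flags=re.M)
    if match:
        # Format: -rw-r--r-- 1 pyclass wheel 12 Nov 5 19:07 /var/tmp/test3.txt
        return int(match.group(0).split()[4])
    raise ValueError("Search pattern not found for remote file size.")
```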
In my case, the fix was simple: specify the correct directory in the `file_system` parameter to file_transfer:
```
from netmiko import ConnectHandler, file_transfer
client = ConnectHandler(device_type='linux')  # other args omitted
src = 'remote_dir/export.tar.gz'
dst = 'test_export.tar.gz'
direction = 'get'
file_system = '/home/_nonlocl'
res = file_transfer(client, source_file=src, dest_file=dst, direction=direction, file_system=file_system)
```
The file transfer works now, although I'm getting an "MD5 failure between source and destination files", but that's off-topic.
Better error handling/messaging in _remote_file_size_unix would be helpful, in my opinion.
If you take pull requests, I can try writing one up, although I might not be aware of the impact of changing this low-level function.
**Machine running netmiko**
OS: Debian 9 i686
Netmiko: 2.1.1
Python: 3.6.4
**Target machine**
OS: Check Point Gaia R80.10
| 2018-12-11T00:58:37 |
||
ktbyers/netmiko | 1,050 | ktbyers__netmiko-1050 | [
"1049"
] | f8594e06ee8fab8f64f64a6885f457fadcc32014 | diff --git a/netmiko/base_connection.py b/netmiko/base_connection.py
--- a/netmiko/base_connection.py
+++ b/netmiko/base_connection.py
@@ -15,6 +15,7 @@
import socket
import telnetlib
import time
+from collections import deque
from os import path
from threading import Lock
@@ -1256,6 +1257,7 @@ def send_command(
i = 1
output = ""
+ past_three_reads = deque(maxlen=3)
first_line_processed = False
# Keep reading data until search_pattern is found or until max_loops is reached.
@@ -1265,10 +1267,12 @@ def send_command(
if self.ansi_escape_codes:
new_data = self.strip_ansi_escape_codes(new_data)
+ output += new_data
+ past_three_reads.append(new_data)
+
# Case where we haven't processed the first_line yet (there is a potential issue
# in the first line (in cases where the line is repainted).
if not first_line_processed:
- output += new_data
output, first_line_processed = self._first_line_handler(
output, search_pattern
)
@@ -1277,9 +1281,8 @@ def send_command(
break
else:
- output += new_data
- # Check if pattern is in the incremental data
- if re.search(search_pattern, new_data):
+ # Check if pattern is in the past three reads
+ if re.search(search_pattern, "".join(past_three_reads)):
break
time.sleep(delay_factor * loop_delay)
| Output split across two reads so send_command pattern never detected
Hi folks,
I have a strange problem with the built-in `send_command()` method and I am not able to find a good solution to it.
I have a script which connects to several Cisco IOS-XR routers and issues some commands. In most cases it works just fine, but sometimes there is this error message: `Search pattern never detected in send_command_expect`.
Here is my simplified code:
```
router_params = {
"host": router_hostname,
"username": username,
"password": password,
"device_type": 'cisco_xr',
"ssh_config_file": ssh_config_file
}
router = ConnectHandler(**router_params)
router.send_command('show int description | e down | i R2-R2,', delay_factor=2)
router.send_command('show ipv4 int brief | e down | e Loop | e unassigned | i default', delay_factor=2)
```
When I tried to debug it, I noticed strange behavior in the `self.read_channel()` method, so I put some additional print() statements into the `send_command()` method as follows:
```
....
# Keep reading data until search_pattern is found or until max_loops is reached.
while i <= max_loops:
print(f'Searching for prompt: {search_pattern}') <---- MY DEBUG LINE
new_data = self.read_channel()
print(f'new_data: {new_data}') <---- MY DEBUG LINE
.....
```
And here is the output when the `Search pattern never detected in send_command_expect` is raised:
```
Interactive SSH session established
Searching for prompt: RP/0/RP0/CPU0:fra\-xxxx\-yyyy\#
new_data:
Searching for prompt: RP/0/RP0/CPU0:fra\-xxxx\-yyyy\#
new_data: show int description | e down | i R2-R2,
Thu Jan 3 19:15:49.283 CET
Searching for prompt: RP/0/RP0/CPU0:fra\-xxxx\-yyyy\#
new_data:
Searching for prompt: RP/0/RP0/CPU0:fra\-xxxx\-yyyy\#
new_data: Te0/0/0/0 up up R2-R2,abcde
RP/0/RP0/CPU0:fra-xxxx-yyyy#
Searching for prompt: RP/0/RP0/CPU0:fra\-xxxx\-yyyy\#
new_data:
Searching for prompt: RP/0/RP0/CPU0:fra\-xxxx\-yyyy\#
new_data:
Searching for prompt: RP/0/RP0/CPU0:fra\-xxxx\-yyyy\#
new_data: show ipv4 int brief | e down | e Loop | e unassigned | i default
Thu Jan 3 19:15:50.564 CET
TenGigE0/0/0/0 10.1.1.1 Up Up default
TenGigE0/0/0/18 10.1.2.1 Up Up default
TenGigE0/1/0/18 10.1.3.1 Up Up default
RP/0/RP0/C
Searching for prompt: RP/0/RP0/CPU0:fra\-xxxx\-yyyy\#
new_data: PU0:fra-xxxx-yyyy#
Searching for prompt: RP/0/RP0/CPU0:fra\-xxxx\-yyyy\#
new_data:
Searching for prompt: RP/0/RP0/CPU0:fra\-xxxx\-yyyy\#
new_data:
.......
Search pattern never detected in send_command_expect: RP/0/RP0/CPU0:fra\-xxxx\-yyyy\#
```
If you take a closer look, especially at these lines:
```
RP/0/RP0/C
Searching for prompt: RP/0/RP0/CPU0:fra\-xxxx\-yyyy\#
new_data: PU0:fra-xxxx-yyyy#
```
you will see the router prompt `RP/0/RP0/CPU0:fra-xxxx-yyyy#` is split across two outputs from `self.read_channel()`, and that is why the prompt is never found and the error `Search pattern never detected in send_command_expect` is raised.
I suspect a delay problem here, but I also think the behavior of the `self.read_channel()` method could be improved, as it can read in the middle of the output, which causes problems like this.
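The fix that was eventually merged (see the patch above) addresses this by searching a rolling window of the last three reads joined together; a standalone sketch of the idea:
```
import re
import time
from collections import deque

def read_until_pattern(read_channel, pattern, max_loops=500, loop_delay=0.2):
    """Accumulate channel output until `pattern` appears, even when the
    prompt is split across two consecutive reads."""
    output = ""
    past_three_reads = deque(maxlen=3)
    for _ in range(max_loops):
        new_data = read_channel()
        output += new_data
        past_three_reads.append(new_data)
        # Joining the recent reads bridges a prompt split across read boundaries.
        if re.search(pattern, "".join(past_three_reads)):
            return output
        time.sleep(loop_delay)
    raise OSError("Search pattern never detected: {}".format(pattern))
```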
| @dorko20 Yes, that case is possible.
I am worried that this case is going to be hard to fix; worded differently, I suspect there are some negative trade-offs in fixing it (mainly slowing things down meaningfully by evaluating the entire output string and not just the new section of the string). Note, the output string could potentially be enormous, for example, 'show ip bgp' of the entire routing table.
I guess we could compromise and keep some small number of contiguous reads, like three. The issue would still be possible, but much less likely to happen. | 2019-01-07T05:18:24 |
|
ktbyers/netmiko | 1,070 | ktbyers__netmiko-1070 | [
"955"
] | a29923f5036d6b9bf19dd0d3b44300cbb117ab47 | diff --git a/netmiko/base_connection.py b/netmiko/base_connection.py
--- a/netmiko/base_connection.py
+++ b/netmiko/base_connection.py
@@ -457,7 +457,9 @@ def _read_channel(self):
elif self.protocol == "serial":
output = ""
while self.remote_conn.in_waiting > 0:
- output += self.remote_conn.read(self.remote_conn.in_waiting)
+ output += self.remote_conn.read(self.remote_conn.in_waiting).decode(
+ "utf-8", "ignore"
+ )
log.debug("read_channel: {}".format(output))
self._write_session_log(output)
return output
diff --git a/netmiko/utilities.py b/netmiko/utilities.py
--- a/netmiko/utilities.py
+++ b/netmiko/utilities.py
@@ -183,7 +183,7 @@ def check_serial_port(name):
"""returns valid COM Port."""
try:
cdc = next(serial.tools.list_ports.grep(name))
- return cdc.split()[0]
+ return cdc[0]
except StopIteration:
msg = "device {} not found. ".format(name)
msg += "available devices are: "
| Serial driver python3 issues
```
edited line 173 in utilities.py to the following:
cdc = next(serial.tools.list_ports.grep(name))
return cdc[0]
and edited base_connection.py to add:
output += self.remote_conn.read(self.remote_conn.in_waiting).decode('utf-8', 'ignore')
on my line 385 (edited)
```
| Also:
```
print(cdc) = COM6 - Prolific USB-to-Serial Comm Port (COM6)
print(cdc[0]) = COM6
print(cdc[1]) = Prolific USB-to-Serial Comm Port (COM6)
print(cdc[2]) = USB VID:PID=067B:2303 SER=5 LOCATION=1-6
Can you `print(type(cdc))`?
<class 'serial.tools.list_ports_common.ListPortInfo'>
``` | 2019-01-17T15:07:35 |
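For reference, `ListPortInfo` supports tuple-style indexing for backward compatibility, which is why `cdc[0]` works; a small sketch (the port name is a placeholder):
```
import serial.tools.list_ports

# grep() yields ListPortInfo objects; index 0 is the device name, 1 the
# description, 2 the hardware ID (matching the prints above).
cdc = next(serial.tools.list_ports.grep("COM6"))
print(cdc[0])
```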
|
ktbyers/netmiko | 1,073 | ktbyers__netmiko-1073 | [
"1072"
] | a29923f5036d6b9bf19dd0d3b44300cbb117ab47 | diff --git a/netmiko/huawei/huawei_ssh.py b/netmiko/huawei/huawei_ssh.py
--- a/netmiko/huawei/huawei_ssh.py
+++ b/netmiko/huawei/huawei_ssh.py
@@ -115,6 +115,7 @@ def commit(self, comment="", delay_factor=1):
strip_prompt=False,
strip_command=False,
delay_factor=delay_factor,
+ expect_string=r"]",
)
output += self.exit_config_mode()
| Huawei vrpv8 commit func issue
After committing changes on Huawei VRPv8, the CLI on the device looks like this:
```
[~HUAWEI]dot1x enable
[*HUAWEI]snmp-agent sys-info version all
Warning: SNMPv1/SNMPv2c is not secure, and SNMPv3 in either authentication or privacy mode is recommended.
[*HUAWEI]commit
[~HUAWEI]
```
with the following code:
```
from netmiko import Netmiko
device = {
"host": "10.0.0.3",
"username": "yyy",
"password": "xxx",
"device_type": "huawei_vrpv8",
"session_log": "log_file2.txt"
}
config_commands = ['dot1x enable','snmp-agent sys-info version all']
net_connect = Netmiko(**device)
output = net_connect.send_config_set(config_commands,exit_config_mode=False)
output += net_connect.commit()
print(output)
```
I got this error:
```
Traceback (most recent call last):
File "/home/kafooo/PycharmProjects/nornir_scripts/venv/huawei_netmiko_test.py", line 18, in <module>
output2 = net_connect.commit()
File "/home/kafooo/PycharmProjects/nornir_scripts/venv/lib/python3.6/site-packages/netmiko/huawei/huawei_ssh.py", line 114, in commit
strip_command=False, delay_factor=delay_factor)
File "/home/kafooo/PycharmProjects/nornir_scripts/venv/lib/python3.6/site-packages/netmiko/base_connection.py", line 1206, in send_command_expect
return self.send_command(*args, **kwargs)
File "/home/kafooo/PycharmProjects/nornir_scripts/venv/lib/python3.6/site-packages/netmiko/base_connection.py", line 1188, in send_command
search_pattern))
OSError: Search pattern never detected in send_command_expect: \[\*HUAWEI\]
```
It looks like Netmiko is expecting `[*hostname]` after the commit, but in reality the prompt is `[~hostname]` after the commit.
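A minimal sketch matching the merged fix: widen the expected pattern so either prompt form satisfies it (this reuses the `net_connect` object from the snippet above):
```
# "]" matches the closing bracket of both "[*HUAWEI]" and "[~HUAWEI]" prompts.
output = net_connect.send_command_expect(
    "commit",
    expect_string=r"]",
    strip_prompt=False,
    strip_command=False,
)
```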
| 2019-01-20T02:59:51 |
||
ktbyers/netmiko | 1,213 | ktbyers__netmiko-1213 | [
"1207"
] | 3253e219c5c78a83767bbb5c67c9668d72f1baa6 | diff --git a/netmiko/scp_handler.py b/netmiko/scp_handler.py
--- a/netmiko/scp_handler.py
+++ b/netmiko/scp_handler.py
@@ -15,6 +15,7 @@
import hashlib
import scp
+import platform
class SCPConn(object):
@@ -149,8 +150,17 @@ def _remote_space_available_unix(self, search_pattern=""):
def local_space_available(self):
"""Return space available on local filesystem."""
- destination_stats = os.statvfs(".")
- return destination_stats.f_bsize * destination_stats.f_bavail
+ if platform.system() == "Windows":
+ import ctypes
+
+ free_bytes = ctypes.c_ulonglong(0)
+ ctypes.windll.kernel32.GetDiskFreeSpaceExW(
+ ctypes.c_wchar_p("."), None, None, ctypes.pointer(free_bytes)
+ )
+ return free_bytes.value
+ else:
+ destination_stats = os.statvfs(".")
+ return destination_stats.f_bsize * destination_stats.f_bavail
def verify_space_available(self, search_pattern=r"(\d+) \w+ free"):
"""Verify sufficient space is available on destination file system (return boolean)."""
| Unable to SCP using Windows
When trying to copy a file from a cisco_ios device to my local Windows machine using SCP, I am getting this error:
```
Traceback (most recent call last):
File "<stdin>", line 6, in <module>
File "C:\hbdata\network\lib\site-packages\netmiko\scp_functions.py", line 104, in file_transfer
verifyspace_and_transferfile(scp_transfer)
File "C:\hbdata\network\lib\site-packages\netmiko\scp_functions.py", line 18, in verifyspace_and_transferfile
if not scp_transfer.verify_space_available():
File "C:\hbdata\network\lib\site-packages\netmiko\scp_handler.py", line 160, in verify_space_available
space_avail = self.local_space_available()
File "C:\hbdata\network\lib\site-packages\netmiko\scp_handler.py", line 152, in local_space_available
destination_stats = os.statvfs(".")
AttributeError: module 'os' has no attribute 'statvfs'
```
I am using Python 3.7.3 on Windows 10
| I got around it by modifying the scp_handler.py file. I added platform to the list of imports and starting at line 151:
```
if platform.system() == 'Windows':
    def local_space_available(self):
        """Return space available on local filesystem."""
        import ctypes
        free_bytes = ctypes.c_ulonglong(0)
        ctypes.windll.kernel32.GetDiskFreeSpaceExW(
            ctypes.c_wchar_p("."), None, None, ctypes.pointer(free_bytes)
        )
        return free_bytes.value
else:
    def local_space_available(self):
        destination_stats = os.statvfs(".")
        return destination_stats.f_bsize * destination_stats.f_bavail
```
@bouchardh Do you want to submit a PR on this?
I honestly did not know what a PR was before you asked... I guess it would be a good way to start.
If you don't mind a newbie contributing, I will do my best.
All right, I've done some reading and I think I understand what I need to do. According to the docs, I need to create a branch, but I doubt that anyone can just create a branch in any project, so I cloned your project locally, created a new branch scp_to_windows, and modified the file.
I committed the changes and according to the pull request tutorial, I need to push to origin and create the Pull Request.
Is it that easy?
You would then typically push your branch up to your GitHub repository, click the `New Pull Request` button there, and submit it that way.
Regards, Kirk | 2019-05-15T02:15:23 |
|
ktbyers/netmiko | 1,473 | ktbyers__netmiko-1473 | [
"818"
] | 61419c5f5489e8c5340762865a2505f92277bc3b | diff --git a/netmiko/base_connection.py b/netmiko/base_connection.py
--- a/netmiko/base_connection.py
+++ b/netmiko/base_connection.py
@@ -75,6 +75,7 @@ def __init__(
session_log_file_mode="write",
allow_auto_change=False,
encoding="ascii",
+ sock=None,
):
"""
Initialize attributes for establishing connection to target device.
@@ -191,6 +192,10 @@ def __init__(
:param encoding: Encoding to be used when writing bytes to the output channel.
(default: ascii)
:type encoding: str
+
+ :param sock: An open socket or socket-like object (such as a `.Channel`) to use for
+ communication to the target host (default: None).
+ :type sock: socket
"""
self.remote_conn = None
@@ -232,6 +237,7 @@ def __init__(
self.keepalive = keepalive
self.allow_auto_change = allow_auto_change
self.encoding = encoding
+ self.sock = sock
# Netmiko will close the session_log if we open the file
self.session_log = None
@@ -828,6 +834,7 @@ def _connect_params_dict(self):
"timeout": self.timeout,
"auth_timeout": self.auth_timeout,
"banner_timeout": self.banner_timeout,
+ "sock": self.sock,
}
# Check if using SSH 'config' file mainly for SSH proxy support
| diff --git a/tests/unit/test_base_connection.py b/tests/unit/test_base_connection.py
--- a/tests/unit/test_base_connection.py
+++ b/tests/unit/test_base_connection.py
@@ -59,6 +59,7 @@ def test_use_ssh_file():
auth_timeout=None,
banner_timeout=10,
ssh_config_file=join(RESOURCE_FOLDER, "ssh_config"),
+ sock=None,
)
connect_dict = connection._connect_params_dict()
@@ -102,6 +103,7 @@ def test_use_ssh_file_proxyjump():
auth_timeout=None,
banner_timeout=10,
ssh_config_file=join(RESOURCE_FOLDER, "ssh_config_proxyjump"),
+ sock=None,
)
connect_dict = connection._connect_params_dict()
@@ -144,6 +146,7 @@ def test_connect_params_dict():
auth_timeout=None,
banner_timeout=10,
ssh_config_file=None,
+ sock=None,
)
expected = {
@@ -159,6 +162,7 @@ def test_connect_params_dict():
"passphrase": None,
"auth_timeout": None,
"banner_timeout": 10,
+ "sock": None,
}
result = connection._connect_params_dict()
assert result == expected
| Support for ssh parameter BindAddress
Would it be possible to add support for the 'BindAddress' client option for SSH? We have management consoles with multiple interfaces, and we need to specify the source IP address for the SSH session with the 'BindAddress' statement. Even though paramiko/netmiko supports user SSH config files, the code does not seem to handle 'BindAddress' statements. Ref: https://www.ssh.com/ssh/config/#sec-Listing-of-client-configuration-options
This is already documented on https://github.com/paramiko/paramiko/issues/206
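One possible workaround sketch using the `sock` argument this PR adds: pre-bind a socket to the desired source address and hand it to Netmiko (all addresses and credentials below are placeholders):
```
import socket
from netmiko import ConnectHandler

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("192.0.2.10", 0))      # source address, i.e. what BindAddress would do
s.connect(("192.0.2.1", 22))   # the target device
conn = ConnectHandler(
    device_type="cisco_ios",
    host="192.0.2.1",
    username="admin",
    password="secret",
    sock=s,
)
```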
| It has to be in Paramiko; if it is not in Paramiko, then I won't be willing to support it.
It looks like from the above that it is not supported in Paramiko?
Kirk
You are absolutely right. I found that 'paramiko' issue afterward. But that issue has been open since 2013! It doesn't look like it's getting any traction.
I am going to close this, as it doesn't look like there is any further action for me to take on it. | 2019-12-03T07:43:14 |
ktbyers/netmiko | 1,648 | ktbyers__netmiko-1648 | [
"1578"
] | bdd72bd8aedb96b433b1a53469fe57a34d729a68 | diff --git a/netmiko/cisco/cisco_asa_ssh.py b/netmiko/cisco/cisco_asa_ssh.py
--- a/netmiko/cisco/cisco_asa_ssh.py
+++ b/netmiko/cisco/cisco_asa_ssh.py
@@ -2,6 +2,7 @@
import re
import time
from netmiko.cisco_base_connection import CiscoSSHConnection, CiscoFileTransfer
+from netmiko.ssh_exception import NetmikoAuthenticationException
class CiscoAsaSSH(CiscoSSHConnection):
@@ -88,12 +89,14 @@ def asa_login(self):
twb-dc-fw1> login
Username: admin
- Password: ************
+
+ Raises NetmikoAuthenticationException, if we do not reach privilege
+ level 15 after 3 attempts.
"""
delay_factor = self.select_delay_factor(0)
i = 1
- max_attempts = 50
+ max_attempts = 3
self.write_channel("login" + self.RETURN)
while i <= max_attempts:
time.sleep(0.5 * delay_factor)
@@ -103,11 +106,14 @@ def asa_login(self):
elif "ssword" in output:
self.write_channel(self.password + self.RETURN)
elif "#" in output:
- break
+ return True
else:
self.write_channel("login" + self.RETURN)
i += 1
+ msg = "Unable to get to enable mode!"
+ raise NetmikoAuthenticationException(msg)
+
def save_config(self, cmd="write mem", confirm=False, confirm_response=""):
"""Saves Config"""
return super().save_config(
| Raise exception if asa_login() fails to login successfully
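With this change, callers can catch the failure explicitly; a hedged usage sketch (host and credentials are placeholders):
```
from netmiko import ConnectHandler
from netmiko.ssh_exception import NetmikoAuthenticationException

try:
    conn = ConnectHandler(
        device_type="cisco_asa",
        host="192.0.2.1",
        username="admin",
        password="secret",
    )
except NetmikoAuthenticationException:
    # Raised if asa_login() cannot reach privilege level 15 within 3 attempts.
    print("ASA login failed")
```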
| 2020-04-01T21:42:45 |
||
ktbyers/netmiko | 2,037 | ktbyers__netmiko-2037 | [
"1283"
] | d70d30f51ea19eba760c94e5edb1414a6c7d6a98 | diff --git a/netmiko/ssh_autodetect.py b/netmiko/ssh_autodetect.py
--- a/netmiko/ssh_autodetect.py
+++ b/netmiko/ssh_autodetect.py
@@ -101,7 +101,7 @@
},
"dell_force10": {
"cmd": "show version",
- "search_patterns": [r"S4048-ON"],
+ "search_patterns": [r"Real Time Operating System Software"],
"priority": 99,
"dispatch": "_autodetect_std",
},
| ssh_autodetect.py fails to detect Dell OS9 devices
I found issues with the Netmiko ssh_autodetect.py feature on Dell OS9 (dell_force10) switches, but the same issues might appear with other vendor OSs as well. I'm asking for comments and ideas on the best possible implementation.
The first issue is that ssh_autodetect.py detects only one Dell hardware type, S4048-ON, instead of detecting the running OS. For example, it is also possible to run Dell OS10 on that specific hardware type. It would be better to match on the line 'Networking OS Version : 9.14(0.1)' in the output of the 'show version' command, which would be a simple fix.
The other, more complex issue is that there is a 'show system' command in 'SSH_MAPPER_BASE' which is valid for Dell OS9 switches but returns paginated output and therefore breaks the detection.
I tested this with Python 3.6, in which dictionaries are insertion-ordered. The code loops through the items in SSH_MAPPER_BASE, and the commands are checked in the order 'show system', 'show version', 'show system', 'show version', 'show version', etc. against the corresponding search patterns.
Here's the output of the 'show system' command:
```
Stack MAC : 00:00:00:00:00:00
Reload-Type : normal-reload [Next boot : normal-reload]
-- Unit 1 --
Unit Type : Management Unit
Status : online
Next Boot : online
Required Type : S3048-ON - 52-port GE/TE (SG-ON)
Current Type : S3048-ON - 52-port GE/TE (SG-ON)
Master priority : 0
Hardware Rev : 0.0
Num Ports : 52
Up Time : 22 wk, 1 day, 21 hr, 54 min
Networking OS Version : 9.14(0.1)
Jumbo Capable : yes
POE Capable : no
FIPS Mode : disabled
Burned In MAC : 00:00:00:00:00:00
No Of MACs : 3
-- Power Supplies --
--More--
```
and then the next command entered on the CLI becomes 'how version', because the first character, 's', just exits the '--More--' pager of the previous output:
```
sw1#how version
^
% Error: Invalid input at "^" marker.
sw1#
```
I came up with a couple of options for how this could be solved:
1. Use OrderedDict for SSH_MAPPER_BASE and change the order of the commands
Currently, items in SSH_MAPPER_BASE are in alphabetical order based on vendor name. There would be an option to change the order of items in 'SSH_MAPPER_BASE' (as an ordered dict) so that commands are sent to the devices in order of their frequency in 'SSH_MAPPER_BASE' (see the sketch after this list), i.e.
'show version' -> appears 11 times
'show system' -> appears 2 times
rest of the commands -> only once
This order would be more optimal, as most of the devices can be identified based on the output of 'show version'.
2. Change the commands to include only the matched line in the output
This would also solve the issue, but more commands would have to be sent to the devices, which is not optimal:
'show version | i ASA'
'show version | i Networking OS Version'
etc
3. Add support for paginated output
I suppose this would be rather complicated, as the OS and the corresponding command are unknown.
Any other ideas, recommendations, comments etc?
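To make option 1 concrete, a hedged sketch of reordering the autodetect map by probe-command frequency (it only assumes each entry carries a "cmd" key, as the SSH_MAPPER_BASE entries do):
```
from collections import Counter, OrderedDict

def reorder_by_cmd_frequency(mapper):
    """Sort autodetect entries so the most frequently used probe command
    ('show version') is tried before rarer, riskier commands."""
    freq = Counter(entry["cmd"] for entry in mapper.values())
    return OrderedDict(
        sorted(mapper.items(), key=lambda item: -freq[item[1]["cmd"]])
    )
```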
| 2020-11-13T10:03:13 |
||
ktbyers/netmiko | 2,043 | ktbyers__netmiko-2043 | [
"1987"
] | 3a6794aeae214e0092a64515fa44c5124bc17b0d | diff --git a/netmiko/tplink/tplink_jetstream.py b/netmiko/tplink/tplink_jetstream.py
--- a/netmiko/tplink/tplink_jetstream.py
+++ b/netmiko/tplink/tplink_jetstream.py
@@ -1,7 +1,6 @@
import re
import time
-from cryptography import utils as crypto_utils
from cryptography.hazmat.primitives.asymmetric import dsa
from netmiko import log
@@ -142,7 +141,7 @@ def _override_check_dsa_parameters(parameters):
It's still not possible to remove this hack.
"""
- if crypto_utils.bit_length(parameters.q) not in [160, 256]:
+ if parameters.q.bit_length() not in [160, 256]:
raise ValueError("q must be exactly 160 or 256 bits long")
if not (1 < parameters.g < parameters.p):
| cryptography 3.1 library breaks tplink drivers
I've just updated from netmiko 3.3.0 to 3.3.2; paramiko was 2.7.2 both before and after the netmiko upgrade. My custom SSH driver worked fine with netmiko 3.3.0. The stack trace is interesting for a couple of reasons:
The actual error is "AttributeError: module 'cryptography.utils' has no attribute 'bit_length'". Looking back through the stack trace, there's a "raise e" from paramiko/transport.py line 660 in start_client(); the reported error is the actual exception raised at that point.
I don't get how the code goes from raising that exception to ending up in the new tplink driver.
I think I've installed all the latest module versions using pip to recreate the environment.
Debugging ideas are much appreciated!
```
Traceback (most recent call last):
(...deleted portion of stack trace...)
  File "/usr/local/lib/python3.6/site-packages/netmiko/ssh_dispatcher.py", line 324, in ConnectHandler
    return ConnectionClass(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/netmiko/<vendor>/<driverfile>", line 189, in __init__
    self.client.connect(self.host, port=22, username=self.username, password=self.password, look_for_keys=False, timeout=self.timeout)
  File "/usr/local/lib/python3.6/site-packages/paramiko/client.py", line 406, in connect
    t.start_client(timeout=timeout)
  File "/usr/local/lib/python3.6/site-packages/paramiko/transport.py", line 660, in start_client
    raise e
  File "/usr/local/lib/python3.6/site-packages/paramiko/transport.py", line 2075, in run
    self.kex_engine.parse_next(ptype, m)
  File "/usr/local/lib/python3.6/site-packages/paramiko/kex_group1.py", line 75, in parse_next
    return self._parse_kexdh_reply(m)
  File "/usr/local/lib/python3.6/site-packages/paramiko/kex_group1.py", line 120, in _parse_kexdh_reply
    self.transport._verify_key(host_key, sig)
  File "/usr/local/lib/python3.6/site-packages/paramiko/transport.py", line 1886, in _verify_key
    if not key.verify_ssh_sig(self.H, Message(sig)):
  File "/usr/local/lib/python3.6/site-packages/paramiko/dsskey.py", line 153, in verify_ssh_sig
    ).public_key(backend=default_backend())
  File "/usr/local/lib64/python3.6/site-packages/cryptography/hazmat/primitives/asymmetric/dsa.py", line 212, in public_key
    return backend.load_dsa_public_numbers(self)
  File "/usr/local/lib64/python3.6/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 873, in load_dsa_public_numbers
    dsa._check_dsa_parameters(numbers.parameter_numbers)
  File "/usr/local/lib/python3.6/site-packages/netmiko/tplink/tplink_jetstream.py", line 145, in _override_check_dsa_parameters
    if crypto_utils.bit_length(parameters.q) not in [160, 256]:
AttributeError: module 'cryptography.utils' has no attribute 'bit_length'
```
| cryptography 3.1 library broke this (released Sept 22). It exists in cryptography 3.0.
```
In [2]: from cryptography import utils as crypto_utils
In [3]: crypto_utils.bit_length
Out[3]: <function cryptography.utils.bit_length(x)>
```
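For reference, a minimal illustration of why the eventual fix works: Python integers expose bit_length() natively, so the removed cryptography helper is not needed (the value of q below is just an arbitrary 160-bit number chosen for the example):
```python
# crypto_utils.bit_length(x) was removed in cryptography 3.1; plain ints
# provide the same information via x.bit_length().
q = 0xF4F47F05794B256174BBA6E9B396A7707E563C5B  # arbitrary 160-bit value
assert q.bit_length() == 160
```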
You will need to `pip install cryptography==3.0.0` until we fix/work around this. | 2020-11-17T04:09:02 |
|
ktbyers/netmiko | 2,631 | ktbyers__netmiko-2631 | [
"2597"
] | 585739847296f65ef2a55478ed3d11bffc3ff78c | diff --git a/netmiko/base_connection.py b/netmiko/base_connection.py
--- a/netmiko/base_connection.py
+++ b/netmiko/base_connection.py
@@ -32,6 +32,7 @@
from os import path
from threading import Lock
import functools
+import logging
import paramiko
import serial
@@ -71,6 +72,19 @@
this context. You should remove any use of delay_factor=x from this method call.\n"""
+# Logging filter for #2597
+class SecretsFilter(logging.Filter):
+ def __init__(self, no_log: Optional[Dict[Any, str]] = None) -> None:
+ self.no_log = no_log
+
+ def filter(self, record: logging.LogRecord) -> bool:
+ """Removes secrets (no_log) from messages"""
+ if self.no_log:
+ for hidden_data in self.no_log.values():
+ record.msg = record.msg.replace(hidden_data, "********")
+ return True
+
+
def lock_channel(func: F) -> F:
@functools.wraps(func)
def wrapper_decorator(self: "BaseConnection", *args: Any, **kwargs: Any) -> Any:
@@ -314,17 +328,25 @@ def __init__(
self.allow_auto_change = allow_auto_change
self.encoding = encoding
self.sock = sock
-
- # Netmiko will close the session_log if we open the file
+ self.fast_cli = fast_cli
+ self._legacy_mode = _legacy_mode
+ self.global_delay_factor = global_delay_factor
+ self.global_cmd_verify = global_cmd_verify
+ if self.fast_cli and self.global_delay_factor == 1:
+ self.global_delay_factor = 0.1
self.session_log = None
self._session_log_close = False
- if session_log is not None:
- no_log = {}
- if self.password:
- no_log["password"] = self.password
- if self.secret:
- no_log["secret"] = self.secret
+ # prevent logging secret data
+ no_log = {}
+ if self.password:
+ no_log["password"] = self.password
+ if self.secret:
+ no_log["secret"] = self.secret
+ log.addFilter(SecretsFilter(no_log=no_log))
+
+ # Netmiko will close the session_log if we open the file
+ if session_log is not None:
if isinstance(session_log, str):
# If session_log is a string, open a file corresponding to string name.
self.session_log = SessionLog(
@@ -366,13 +388,6 @@ def __init__(
comm_port = check_serial_port(comm_port)
self.serial_settings.update({"port": comm_port})
- self.fast_cli = fast_cli
- self._legacy_mode = _legacy_mode
- self.global_delay_factor = global_delay_factor
- self.global_cmd_verify = global_cmd_verify
- if self.fast_cli and self.global_delay_factor == 1:
- self.global_delay_factor = 0.1
-
# set in set_base_prompt method
self.base_prompt = ""
self._session_locker = Lock()
| diff --git a/tests/SLOG/cisco881_slog.log b/tests/SLOG/cisco881_slog.log
--- a/tests/SLOG/cisco881_slog.log
+++ b/tests/SLOG/cisco881_slog.log
@@ -1,9 +1,8 @@
-
-cisco1#terminal width 511
-cisco1#terminal length 0
-cisco1#
-cisco1#
-cisco1#show ip interface brief
+cisco1>terminal width 511
+cisco1>terminal length 0
+cisco1>
+cisco1>
+cisco1>show ip interface brief
Interface IP-Address OK? Method Status Protocol
FastEthernet0 unassigned YES unset down down
FastEthernet1 unassigned YES unset down down
@@ -11,5 +10,5 @@ FastEthernet2 unassigned YES unset down down
FastEthernet3 unassigned YES unset down down
FastEthernet4 10.220.88.20 YES NVRAM up up
Vlan1 unassigned YES unset down down
-cisco1#
-cisco1#exit
+cisco1>
+cisco1>exit
diff --git a/tests/SLOG/cisco881_slog_append.log b/tests/SLOG/cisco881_slog_append.log
--- a/tests/SLOG/cisco881_slog_append.log
+++ b/tests/SLOG/cisco881_slog_append.log
@@ -1,10 +1,10 @@
Initial file contents
-cisco1#terminal width 511
-cisco1#terminal length 0
-cisco1#
-cisco1#
-cisco1#show ip interface brief
+cisco1>terminal width 511
+cisco1>terminal length 0
+cisco1>
+cisco1>
+cisco1>show ip interface brief
Interface IP-Address OK? Method Status Protocol
FastEthernet0 unassigned YES unset down down
FastEthernet1 unassigned YES unset down down
@@ -12,18 +12,19 @@ FastEthernet2 unassigned YES unset down down
FastEthernet3 unassigned YES unset down down
FastEthernet4 10.220.88.20 YES NVRAM up up
Vlan1 unassigned YES unset down down
-cisco1#
-cisco1#exit
-cisco1#terminal width 511
-cisco1#terminal length 0
-cisco1#
-cisco1#
+cisco1>
+cisco1>exit
+cisco1>terminal width 511
+cisco1>terminal length 0
+cisco1>
+cisco1>
Testing password and secret replacement
This is my password ********
This is my secret ********
-cisco1#terminal width 511
-cisco1#terminal length 0
-cisco1#
-cisco1#
+
+cisco1>terminal width 511
+cisco1>terminal length 0
+cisco1>
+cisco1>
Testing unicode
😁😁
\ No newline at end of file
diff --git a/tests/SLOG/cisco881_slog_append_compare.log b/tests/SLOG/cisco881_slog_append_compare.log
--- a/tests/SLOG/cisco881_slog_append_compare.log
+++ b/tests/SLOG/cisco881_slog_append_compare.log
@@ -1,8 +1,8 @@
-cisco1#terminal width 511
-cisco1#terminal length 0
-cisco1#
-cisco1#
-cisco1#show ip interface brief
+cisco1>terminal width 511
+cisco1>terminal length 0
+cisco1>
+cisco1>
+cisco1>show ip interface brief
Interface IP-Address OK? Method Status Protocol
FastEthernet0 unassigned YES unset down down
FastEthernet1 unassigned YES unset down down
@@ -10,5 +10,5 @@ FastEthernet2 unassigned YES unset down down
FastEthernet3 unassigned YES unset down down
FastEthernet4 10.220.88.20 YES NVRAM up up
Vlan1 unassigned YES unset down down
-cisco1#
-cisco1#exit
+cisco1>
+cisco1>exit
diff --git a/tests/SLOG/cisco881_slog_compare.log b/tests/SLOG/cisco881_slog_compare.log
--- a/tests/SLOG/cisco881_slog_compare.log
+++ b/tests/SLOG/cisco881_slog_compare.log
@@ -1,8 +1,8 @@
-cisco1#terminal width 511
-cisco1#terminal length 0
-cisco1#
-cisco1#
-cisco1#show ip interface brief
+cisco1>terminal width 511
+cisco1>terminal length 0
+cisco1>
+cisco1>
+cisco1>show ip interface brief
Interface IP-Address OK? Method Status Protocol
FastEthernet0 unassigned YES unset down down
FastEthernet1 unassigned YES unset down down
@@ -10,5 +10,5 @@ FastEthernet2 unassigned YES unset down down
FastEthernet3 unassigned YES unset down down
FastEthernet4 10.220.88.20 YES NVRAM up up
Vlan1 unassigned YES unset down down
-cisco1#
-cisco1#exit
+cisco1>
+cisco1>exit
diff --git a/tests/SLOG/cisco881_slog_wr.log b/tests/SLOG/cisco881_slog_wr.log
--- a/tests/SLOG/cisco881_slog_wr.log
+++ b/tests/SLOG/cisco881_slog_wr.log
@@ -1,28 +1,17 @@
terminal width 511
-
-cisco1#terminal width 511
-cisco1#terminal length 0
+cisco1>terminal width 511
+cisco1>terminal length 0
terminal length 0
-cisco1#
+cisco1>
-cisco1#
+cisco1>
-cisco1#show foooooooo
-show foooooooo
- ^
-% Invalid input detected at '^' marker.
+cisco1>enable
+enable
+Password: ********
cisco1#
-cisco1#show ip interface brief
-show ip interface brief
-Interface IP-Address OK? Method Status Protocol
-FastEthernet0 unassigned YES unset down down
-FastEthernet1 unassigned YES unset down down
-FastEthernet2 unassigned YES unset down down
-FastEthernet3 unassigned YES unset down down
-FastEthernet4 10.220.88.20 YES NVRAM up up
-Vlan1 unassigned YES unset down down
cisco1#
cisco1#exit
diff --git a/tests/SLOG/cisco881_slog_wr_compare.log b/tests/SLOG/cisco881_slog_wr_compare.log
--- a/tests/SLOG/cisco881_slog_wr_compare.log
+++ b/tests/SLOG/cisco881_slog_wr_compare.log
@@ -1,19 +1,19 @@
terminal width 511
-cisco1#terminal width 511
-cisco1#terminal length 0
+cisco1>terminal width 511
+cisco1>terminal length 0
terminal length 0
-cisco1#
+cisco1>
-cisco1#
+cisco1>
-cisco1#show foooooooo
+cisco1>show foooooooo
show foooooooo
^
% Invalid input detected at '^' marker.
-cisco1#
+cisco1>
-cisco1#show ip interface brief
+cisco1>show ip interface brief
show ip interface brief
Interface IP-Address OK? Method Status Protocol
FastEthernet0 unassigned YES unset down down
@@ -22,6 +22,6 @@ FastEthernet2 unassigned YES unset down down
FastEthernet3 unassigned YES unset down down
FastEthernet4 10.220.88.20 YES NVRAM up up
Vlan1 unassigned YES unset down down
-cisco1#
+cisco1>
-cisco1#exit
+cisco1>exit
diff --git a/tests/SLOG/netmiko.log b/tests/SLOG/netmiko.log
new file mode 100644
--- /dev/null
+++ b/tests/SLOG/netmiko.log
@@ -0,0 +1,117 @@
+write_channel: b'\n'
+read_channel:
+read_channel:
+cisco1>
+Pattern found: (cisco1.*)
+cisco1>
+write_channel: b'enable\n'
+read_channel:
+read_channel: enable
+Password:
+Pattern found: (enable) enable
+read_channel:
+Pattern found: ((?:cisco1|ssword))
+Password
+write_channel: b'********\n'
+read_channel:
+read_channel:
+read_channel:
+cisco1#
+Pattern found: (cisco1) :
+cisco1
+write_channel: b'\n'
+read_channel:
+read_channel:
+cisco1#
+Pattern found: (cisco1.*) #
+cisco1#
+read_channel:
+write_channel: b'\n'
+read_channel:
+cisco1#
+read_channel:
+read_channel:
+write_channel: b'exit\n'
+write_channel: b'terminal width 511\n'
+read_channel:
+read_channel:
+cisco1>terminal width
+read_channel: 511
+cisco1>
+Pattern found: (terminal width 511)
+cisco1>terminal width 511
+In disable_paging
+Command: terminal length 0
+
+write_channel: b'terminal length 0\n'
+read_channel:
+read_channel: terminal lengt
+read_channel: h 0
+cisco1>
+Pattern found: (terminal\ length\ 0)
+cisco1>terminal length 0
+
+cisco1>terminal length 0
+Exiting disable_paging
+read_channel:
+Clear buffer detects data in the channel
+read_channel:
+write_channel: b'\n'
+read_channel:
+cisco1>
+read_channel:
+[find_prompt()]: prompt is cisco1>
+write_channel: b'terminal width 511\n'
+read_channel:
+read_channel: cisco1>terminal width
+read_channel: 511
+cisco1>
+Pattern found: (terminal width 511) cisco1>terminal width 511
+In disable_paging
+Command: terminal length 0
+
+write_channel: b'terminal length 0\n'
+read_channel:
+read_channel: terminal lengt
+read_channel: h 0
+cisco1>
+Pattern found: (terminal\ length\ 0)
+cisco1>terminal length 0
+
+cisco1>terminal length 0
+Exiting disable_paging
+read_channel:
+Clear buffer detects data in the channel
+read_channel:
+write_channel: b'\n'
+read_channel:
+cisco1>
+read_channel:
+[find_prompt()]: prompt is cisco1>
+read_channel:
+read_channel:
+write_channel: b'\n'
+read_channel:
+cisco1>
+read_channel:
+[find_prompt()]: prompt is cisco1>
+write_channel: b'show ip interface brief\n'
+read_channel:
+read_channel: show ip interf
+read_channel: ace brief
+Interface IP-Address OK? Method Status Protocol
+FastEthernet0 unassigned YES unset down down
+FastEthernet1 unassigned YES unset down down
+FastEthernet2 unassigned YES unset down down
+FastEthernet3 unassigned YES unset down down
+FastEthernet4 10.220.88.20 YES NVRAM up up
+Vlan1 unassigned YES unset down down
+cisco1>
+Pattern found: (show\ ip\ interface\ brief) show ip interface brief
+read_channel:
+write_channel: b'\n'
+read_channel:
+cisco1>
+read_channel:
+read_channel:
+write_channel: b'exit\n'
diff --git a/tests/conftest.py b/tests/conftest.py
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -89,7 +89,7 @@ def net_connect_cm(request):
return my_prompt
[email protected](scope="module")
[email protected](scope="function")
def net_connect_slog_wr(request):
"""
Create the SSH connection to the remote device. Modify session_log init arguments.
diff --git a/tests/test_netmiko_session_log.py b/tests/test_netmiko_session_log.py
--- a/tests/test_netmiko_session_log.py
+++ b/tests/test_netmiko_session_log.py
@@ -2,6 +2,7 @@
import time
import hashlib
import io
+import logging
from netmiko import ConnectHandler
@@ -130,6 +131,37 @@ def test_session_log_secrets(device_slog):
assert conn.secret not in session_log
+def test_logging_filter_secrets(net_connect_slog_wr):
+ """Verify logging DEBUG output does not contain password or secret."""
+
+ nc = net_connect_slog_wr
+
+ # setup logger to output to file
+ file_name = "SLOG/netmiko.log"
+ netmikologger = logging.getLogger("netmiko")
+ netmikologger.setLevel(logging.DEBUG)
+ file_handler = logging.FileHandler(file_name)
+ file_handler.setLevel(logging.DEBUG)
+ netmikologger.addHandler(file_handler)
+
+ # cleanup the log file
+ with open(file_name, "w") as f:
+ f.write("")
+
+ # run sequence
+ nc.enable()
+ time.sleep(1)
+ nc.clear_buffer()
+ nc.disconnect()
+
+ with open(file_name, "r") as f:
+ netmiko_log = f.read()
+ if nc.password:
+ assert nc.password not in netmiko_log
+ if nc.secret:
+ assert nc.secret not in netmiko_log
+
+
def test_unicode(device_slog):
"""Verify that you can write unicode characters into the session_log."""
conn = ConnectHandler(**device_slog)
| security: prevent Secrets leaking to debug-level logs when entering device privileged mode
### TL;DR: `secret` field is leaked into logging output when level is `logging.DEBUG` or lower
Netmiko logs all channel communication at Debug level via the Python logging facility, and optionally to a file via its own `SessionLog`.
While `SessionLog` filters the secrets and passwords out, the same is not true for the logger.
When a consumer enables debug-level logging at the root level (for tracing or other purposes), this results in secrets and passwords being inadvertently written into the logging system.
## Reproduction
```python
import logging
from getpass import getpass

from netmiko import ConnectHandler

logging.basicConfig(level=logging.DEBUG, format='%(name)s %(levelname)s %(message)s')
password = getpass()

with ConnectHandler(
    device_type="cisco_ios",
    host="cisco1.askbow.net",
    username="cisco_test",
    password=password,
    secret=password,
) as nc:
    nc.enable()
```
The logging output will contain the raw value of the secret in a `write_channel` message.
### Expected outcome
The logging output SHOULD NOT contain the value of the secret
## Workaround
Consumers who need to capture debug-level logs from all modules can temporarily change the logging level for Netmiko around this operation to prevent the secret from leaking. This results in minimal loss of debug data.
For example:
```python
with ConnectHandler(
    device_type="cisco_ios",
    host="cisco1.askbow.net",
    username="cisco_test",
    password=password,
    secret=password,
) as nc:
    logging.debug('Set netmiko logging to WARNING to prevent secret leaking')
    nc_logger = logging.getLogger('netmiko')
    base_level = nc_logger.level
    nc_logger.setLevel(logging.WARNING)
    nc.enable()
    nc_logger.setLevel(base_level)
    logging.debug('Restored netmiko logging level')
    # ...
```
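An alternative workaround, mirroring the `SecretsFilter` approach in the patch above, is to attach a redaction filter to the netmiko logger (a minimal sketch; the class name and placeholder string are illustrative, and `password` comes from the snippet above):
```python
import logging


class RedactSecrets(logging.Filter):
    """Replace known secret strings in log records before they are emitted."""

    def __init__(self, secrets):
        super().__init__()
        self.secrets = [s for s in secrets if s]

    def filter(self, record):
        for secret in self.secrets:
            record.msg = record.msg.replace(secret, "********")
        return True


logging.getLogger('netmiko').addFilter(RedactSecrets([password]))
```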
| 2022-01-23T04:34:46 |
|
ktbyers/netmiko | 2,935 | ktbyers__netmiko-2935 | [
"2711"
] | f3b5da244e6ce1e5fd7bd89fd36b090e854828b4 | diff --git a/netmiko/huawei/huawei.py b/netmiko/huawei/huawei.py
--- a/netmiko/huawei/huawei.py
+++ b/netmiko/huawei/huawei.py
@@ -128,7 +128,7 @@ def telnet_login(
"""Telnet login for Huawei Devices"""
delay_factor = self.select_delay_factor(delay_factor)
- password_change_prompt = r"(Change now|Please choose 'YES' or 'NO').+"
+ password_change_prompt = r"(?:Change now|Please choose 'YES' or 'NO').+"
combined_pattern = r"({}|{}|{})".format(
pri_prompt_terminator, alt_prompt_terminator, password_change_prompt
)
| Huawei special_login_handler is not logging in successfully
The system information is as follows:
1. netmiko version: 4.0
2. python 3.10
3. windows 11

Error output:
```shell
Traceback (most recent call last):
File "E:\web_API\test.py", line 11, in <module>
app.net_ssh_proxy(switch_json = switch_json, commands=commands)
File "E:\web_API\app.py", line 25, in net_ssh_proxy
with ConnectHandler(**device_info, sock=sock) as net_connect:
File "E:\venv_02\lib\site-packages\netmiko\ssh_dispatcher.py", line 344, in ConnectHandler
return ConnectionClass(*args, **kwargs)
File "E:\venv_02\lib\site-packages\netmiko\base_connection.py", line 434, in __init__
self._open()
File "E:\venv_02\lib\site-packages\netmiko\base_connection.py", line 439, in _open
self.establish_connection()
File "E:\venv_02\lib\site-packages\netmiko\base_connection.py", line 1092, in establish_connection
self.special_login_handler()
File "E:\venv_02\lib\site-packages\netmiko\huawei\huawei.py", line 105, in special_login_handler
output = self.read_until_pattern(password_change_prompt)
File "E:\venv_02\lib\site-packages\netmiko\base_connection.py", line 631, in read_until_pattern
raise ReadException(msg)
netmiko.exceptions.ReadException: Unable to successfully split output based on pattern:
pattern=((Change now|Please choose))|([\]>]\s*$)
output='\nInfo: The max number of VTY users is 21, the number of current VTY users online is 2, and total number of terminal users online is 2.\n The current login time is 2022-03-28 15:55:30+08:00.\n<xxxx_hostname>'
results=['\nInfo: The max number of VTY users is 21, the number of current VTY users online is 2, and total number of terminal users online is 2.\n The current login time is 2022-03-28 15:55:30+08:00.\n<xxxx_hostname', None, None, '>', '']
```
Working test instance:
1. netmiko version: [Netmiko 3.4.0 Release](https://github.com/ktbyers/netmiko/releases/tag/v3.4.0)
2. python 3.10
3. windows 11

Output: no problem
Are you trying to do it with the Huawei device_type?
Because I have the same problem.
Test instance:
1. python 3.9.7
2. Mac OS
3. Netmiko 4.0.0
```python
from netmiko import ConnectHandler
from getpass import getpass
from pprint import pprint

cisco1 = {
    "device_type": "huawei",
    "host": "10.179.28.2",
    "username": "jmmorales",
    "password": getpass(),
}

command = "display version"

with ConnectHandler(**cisco1) as net_connect:
    print(net_connect.find_prompt())
    # Use TextFSM to retrieve structured data
    output = net_connect.send_command(command, use_textfsm=True)
    pprint(output)
```
Output:
```
Exception has occurred: ReadException
Unable to successfully split output based on pattern:
pattern=((Change now|Please choose))|([\]>]\s*$)
output='\nInfo: The max number of VTY users is 15, the number of current VTY users online is 2, and total number of terminal users online is 2.\n The current login time is 2022-03-29 18:05:20-06:00.\nInfo: The device is not enabled with secure boot, please enable it.\n<GNCYGTAJN2D2C02B02EII1>'
results=['\nInfo: The max number of VTY users is 15, the number of current VTY users online is 2, and total number of terminal users online is 2.\n The current login time is 2022-03-29 18:05:20-06:00.\nInfo: The device is not enabled with secure boot, please enable it.\n<GNCYGTAJN2D2C02B02EII1', None, None, '>', '']
  File "/Volumes/Extreme SSD/HP/Py/Interfaces LIBRES/ssh-multiple-sessions-textfsm/send_command_textfsm-huawei.py", line 14, in <module>
    with ConnectHandler(**cisco1) as net_connect:
```
Can one of you show me what the Huawei login looks like? In other words, if you login manually what does the CLI session look like?
Yeah sure,
There it is:
**jm@JMG-MBA ~ % ssh [email protected]
+----------------------------------------------------------------+
| Este equipo es propiedad privada, |
| si no tiene autorizacion para ingresar al equipo, por favor |
| cancele la conexion inmediatamente,cualquier anomalia sera |
| reportada a las autoridades correspondientes. |
+----------------------------------------------------------------+
User Authentication
([email protected]) Enter password:
Warning: Negotiated identity key for server authentication is not safe. It is recommended that you disable the insecure algorithm or upgrade the client.
Info: The max number of VTY users is 15, the number of current VTY users online is 1, and total number of terminal users online is 1.
The current login time is 2022-03-29 21:09:19-06:00.
Info: The device is not enabled with secure boot, please enable it.
GNCYGTAJN2D2C02B02EII1>
**
What is this ** after the prompt?
```
GNCYGTAJN2D2C02B02EII1>
**
```
Is that really there or was that an error in the posting above?
That was an error; those **** are not in the prompt.
It's because I wanted to make the output bold.
I have the same problem after having cloned the repository to another computer.
On the computer where I created the repo it's working fine.
I'm using another version of PyCharm.
I managed to solve the problem.
I was connecting via ConnectHandler and I switched to HuaweiTelnet.
Does Netmiko 3.4.0 work (i.e. is this only a Netmiko 4.0.0 issue)?
@migmorales22
With netmiko 3.4.0 the data is not parsed:
Here's the script:
```python
from netmiko import ConnectHandler
from getpass import getpass
from pprint import pprint

cisco1 = {
    "device_type": "huawei",
    "host": "10.179.28.2",
    "username": "huawei",
    "password": getpass(),
}

command = "display version"

with ConnectHandler(**cisco1) as net_connect:
    output = net_connect.send_command(command, use_textfsm=True)
    print(output)
```
Here is the output:
```
Huawei Versatile Routing Platform Software
VRP (R) software, Version 8.190 (NE40E V800R011C10SPC100)
Copyright (C) 2012-2019 Huawei Technologies Co., Ltd.
HUAWEI NE40E-M2K-B uptime is 350 days, 18 hours, 31 minutes
NE40E-M2K-B version information:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
BKP version information:
PCB Version : CX68BKP01D REV A
IPU Slot Quantity : 1
CARD Slot Quantity : 3
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
IPU version information:
IPU (Master) 3 : uptime is 350 days, 18 hours, 31 minutes
StartupTime 2021/04/13 18:22:50
SDRAM Memory Size : 16384 M bytes
FLASH Memory Size : 128 M bytes
CFCARD Memory Size : 4096 M bytes
IPU CR5B0BKP0393 version information
CPU PCB Version : CX68E4NLAXFB REV B
EPLD Version : 004
FPGA Version : 009
FPGA2 Version : 008
NPU PCB Version : CX68E4NLAXFA REV A
FPGA Version : 102
FPGA2 Version : 007
NP Version : 100
TM Version : 110
NSE Version : NSE REV A
BootROM Version : 04.73
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Power version information:
POWER 4's version information:
PCB Version : CX68PSUF REV B
POWER 5's version information:
PCB Version : CX68PSUF REV B
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
FAN version information:
FAN 6's version information:
PCB Version : CX68FCBD REV A
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
CLK version information:
CLK 7 : uptime is 350 days, 18 hours, 31 minutes
StartupTime 2021/04/13 18:22:50
FPGA Version : 1509
DSP Version : 17060021
(project1) josemiguelmorales@MIG-MBA ~ %
```
With netmiko 4.0.0 and the same script, the output is:
```
Exception has occurred: ReadException
Unable to successfully split output based on pattern:
pattern=((Change now|Please choose))|([\]>]\s*$)
output='\nInfo: The max number of VTY users is 11, the number of current VTY users online is 1, and total number of terminal users online is 1.\n The current login time is 2022-03-30 12:56:58-04:00.\n The last login time is 2022-03-30 12:53:46-04:00 from 181.209.150.62 through SSH.\n<MARLLC01>'
results=['\nInfo: The max number of VTY users is 11, the number of current VTY users online is 1, and total number of terminal users online is 1.\n The current login time is 2022-03-30 12:56:58-04:00.\n The last login time is 2022-03-30 12:53:46-04:00 from 181.209.150.62 through SSH.\n<MARLLC01', None, None, '>', '']
  File "/Volumes/Extreme SSD/HP/Py/Interfaces LIBRES/ssh-multiple-sessions-textfsm/send_command_textfsm-huawei.py", line 14, in <module>
    with ConnectHandler(**cisco1) as net_connect:
```
@migmorales22 So in Netmiko 3.4.0, you connect, but your TextFSM parsing doesn't work properly (i.e. you have a different issue, but not a connection issue).
Is that correct?
Yes, that's correct.
With netmiko 3.4.0 it's TextFSM that doesn't parse the output of the send command, and in netmiko 4.0.0 it's the "Exception has occurred: ReadException" (a quick way to detect the first case is sketched below).
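As a side note on the 3.4.0 symptom, a minimal check one can add after the call (this relies on netmiko falling back to the raw string when no TextFSM template parses the output; `net_connect` is from the script above):
```python
output = net_connect.send_command("display version", use_textfsm=True)

# A successful TextFSM parse returns structured data (a list of dicts);
# when no template matches, netmiko falls back to the raw string.
if isinstance(output, str):
    print("TextFSM parsing fell back to raw output")
else:
    print(f"Parsed {len(output)} records")
```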
I couldn't connect to Huawei devices after upgrading to V4.0.0.
```
Connecting to 192.168.1.100:22...
Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.
WARNING! The remote SSH server rejected X11 forwarding request.
Info: The max number of VTY users is 10, and the number
of current VTY users on line is 1.
The current login time is 2022-03-31 15:03:24+08:00.
Info: Lastest accessed IP: 192.168.1.100 Time: 2022-03-31 14:58:54+08:00 Password will expire in: -
Info: Smart-upgrade is currently disabled. Enable Smart-upgrade to get recommended version information.
<HUAWEI>
```
I have the same issue with a Huawei device.
The new version V4.0.0 has problems with Huawei devices.
@ktbyers
I can confirm the issue with Netmiko 4.0.0 (latest pip release, together with Python 3.10.4) using `device_type = 'huawei'`.
With the Netmiko 3.4.0 release everything works fine. Also, it doesn't seem to make any difference whether the password change prompt is displayed or not (see both tracebacks below).
```python
netmiko.exceptions.ReadException: Unable to successfully split output based on pattern:
pattern=((Change now|Please choose))|([\]>]\s*$)
output='\nWarning: The password will expire in 11 days.\nThe password needs to be changed. Change now? [Y/N]: '
results=['\nWarning: The password will expire in 11 days.\nThe password needs to be changed. ', 'Change now', 'Change now', None, '? [Y/N]: ']
```
```python
netmiko.exceptions.ReadException: Unable to successfully split output based on pattern:
pattern=((Change now|Please choose))|([\]>]\s*$)
output='\nInfo: Current mode: Monitor (automatically making switching decisions).\nWarning: The intelligent upgrade function is disabled. Log in to the web platform and enable this function.\n ----------------------------------------------------------------------------- \n User last login information: [...] \n -----------------------------------------------------------------------------\n<AC6508>'
results=['\nInfo: Current mode: Monitor (automatically making switching decisions).\nWarning: The intelligent upgrade function is disabled. Log in to the web platform and enable this function.\n ----------------------------------------------------------------------------- \n User last login information: [...] \n -----------------------------------------------------------------------------\n<AC6508', None, None, '>', '']
```
After spending some time investigating the issue, I found out that the pattern-matching logic in `read_until_pattern` expects the pattern to split the output into exactly 3 parts (using re.split); see [base_connection.py#L579](https://github.com/ktbyers/netmiko/blob/c76d09fad12c98b1e70176906a4129a53d9be06c/netmiko/base_connection.py#L624) for reference. It seems this change was introduced during some fundamental changes to the channel reading logic in https://github.com/ktbyers/netmiko/commit/793100dcfd76d3eb5089ba4cd27c99e3c3c0f886.
I have proposed a fix and documented some more info in #2719.
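To illustrate the split behavior (a minimal standalone example with a shortened output string): re.split returns one extra element per capture group, so the nested groups in the old pattern yield 5 parts instead of 3, while a non-capturing inner group, as in the merged fix, restores the expected 3-way split.
```python
import re

output = "Info: ...\n<AC6508>"

# Old pattern: nested capture groups -> 5 elements after the split.
old = r"((Change now|Please choose))|([\]>]\s*$)"
print(re.split(old, output))
# ['Info: ...\n<AC6508', None, None, '>', '']

# Non-capturing inner alternative -> the expected 3 elements.
new = r"((?:Change now|Please choose)|[\]>]\s*$)"
print(re.split(new, output))
# ['Info: ...\n<AC6508', '>', '']
```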
When will the new version be released?
@luweijun1992 Just pin to Netmiko 3.4.0 or directly use the proposed fix from Git (i.e. you don't need a new release to work around this).
> @luweijun1992 Just pin to Netmiko 3.4.0 or directly use the proposed fix from Git (i.e. you don't need a new release to work around this).
I installed it using pip; do you mean roll back to version v3.4.0?
I still want to use the new version, since I need the new send_multiline() function (sketched below).
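For context, a minimal sketch of the netmiko 4.x send_multiline() usage being referred to (hedged: the confirmation prompt text is a hypothetical example, and `net_connect` is an already-established connection):
```python
# Each entry pairs a command with the pattern expected after sending it.
commands = [
    ["save", r"Are you sure to continue"],  # hypothetical confirmation prompt
    ["y", r""],
]
output = net_connect.send_multiline(commands)
```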
@luweijun1992 You could use my proposed fix directly with pip like below until a fixed Netmiko version is available.
```sh
pip install 'git+https://github.com/fharbe/netmiko.git@fix_huawei_login#egg=netmiko'
```
Updated PR here:
https://github.com/ktbyers/netmiko/pull/2728
Can anyone test that this works properly?
@luweijun1992 @pyhas @migmorales22 @nivaldoinacios @quinnhao Let me know if someone can test the PR here:
#2728
I want to try to include this in a Netmiko release that I would release shortly.
Thanks, Kirk
> @luweijun1992 @pyhas @migmorales22 @nivaldoinacios @quinnhao Let me know if someone can test the PR here:
>
> #2728
>
> I want to try to include this in a Netmiko release that I would release shortly.
>
> Thanks, Kirk
Looking forward to the release of the new version; I'll go back to the old version v3.4.0 for now.
@luweijun1992 Yep, I need someone to test that it fixes the issue. Can you do that?
> @luweijun1992 Yep, I need someone to test that it fixes the issue. Can you do that?
No problem, I will test the new release.
Yeah, I really need someone to test it beforehand i.e. before it gets merged into `develop` (and before there is a release).
> Yeah, I really need someone to test it beforehand i.e. before it gets merged into `develop` (and before there is a release).
Please tell me how to cooperate with you for the test.
Maybe you can send me the fixed file and I'll overwrite the old file for testing.
@luweijun1992 pip install it and see if the previous failure occurs or not.
You can pip install a specific commit. You can also pip install a branch. Some of this pip syntax is confusing, but should be available online. I usually do:
```
git clone <repo>
cd <repo>
pip uninstall netmiko
pip install -e .
```
i.e. I usually just install it from the local repository. You have to make sure you clone the right location. I will merge into the `develop` branch (right now). So the above should work (as git will clone the `develop` branch automatically).
> @luweijun1992 pip install it and see if the previous failure occurs or not.
>
> You can pip install a specific commit. You can also pip install a branch. Some of this pip syntax is confusing, but should be available online. I usually do:
>
> ```
> git clone <repo>
> cd <repo>
> pip uninstall netmiko
> pip install -e .
> ```
>
> i.e. I usually just install it from the local repository. You have to make sure you clone the right location. I will merge into the `develop` branch (right now). So the above should work (as git will clone the `develop` branch automatically).
```
git clone https://github.com/ktbyers/netmiko.git
cd netmiko
pip install -e .
```
```
[Running] python -u "d:\ssh_conn.py"
Traceback (most recent call last):
File "d:\ssh_conn.py", line 26, in <module>
with ConnectHandler(**huawei) as net_connect:
File "d:\netmiko\netmiko\ssh_dispatcher.py", line 344, in ConnectHandler
return ConnectionClass(*args, **kwargs)
File "d:\netmiko\netmiko\base_connection.py", line 434, in __init__
self._open()
File "d:\netmiko\netmiko\base_connection.py", line 439, in _open
self.establish_connection()
File "d:\netmiko\netmiko\base_connection.py", line 1092, in establish_connection
self.special_login_handler()
File "d:\netmiko\netmiko\huawei\huawei.py", line 18, in special_login_handler
data = self.read_until_pattern(pattern=rf"({password_change_prompt}|[>\]])")
File "d:\netmiko\netmiko\base_connection.py", line 631, in read_until_pattern
raise ReadException(msg)
netmiko.exceptions.ReadException: Unable to successfully split output based on pattern:
pattern=((Change now|Please choose)|[>\]])
output='\nInfo: The max number of VTY users is 10, and the number\n of current VTY users on line is 1.\n The current login time is 2009-02-09 21:53:27+08:00.\n<S5700-10P>'
results=['\nInfo: The max number of VTY users is 10, and the number\n of current VTY users on line is 1.\n The current login time is 2009-02-09 21:53:27+08:00.\n<S5700-10P', '>', None, '']
[Done] exited with code=1 in 8.166 seconds
```
@luweijun1992 Okay, great thanks for testing.
I have a new fix here:
https://github.com/ktbyers/netmiko/pull/2737
This has been merged into `develop`.
You should be able to update your code that you are testing with by doing.
```
cd <github repo directory>
git fetch origin
git rebase origin/develop
```
This assumes you still have the `develop` branch checked out (which you should).
If you could re-test this new fix, that would be very helpful.
> @luweijun1992 Okay, great thanks for testing.
>
> I have a new fix here:
>
> #2737
>
> This has been merged into `develop`.
>
> You should be able to update your code that you are testing with by doing.
>
> ```
> cd <github repo directory>
> git fetch origin
> git rebase origin/develop
> ```
>
> This assumes you still have the `develop` branch checked out (which you should).
The test was successful.
> @luweijun1992 Okay, great thanks for testing.
>
> I have a new fix here:
>
> #2737
>
> This has been merged into `develop`.
>
> You should be able to update your code that you are testing with by doing.
>
> ```
> cd <github repo directory>
> git fetch origin
> git rebase origin/develop
> ```
>
> This assumes you still have the `develop` branch checked out (which you should).
But the result of print seems abnormal.
```
print(result)
```
Hello!
@ktbyers 4.1.0 works OK over SSH; the problem is when using huawei_telnet:
```
Traceback (most recent call last):
File "/home/mikholap/Рабочий стол/my-projects/mikholap-scripts/core and aggregation/asw_vlan_probros_pub.py", line 17, in <module>
net_connect = ConnectHandler(**device)
File "/home/mikholap/.local/lib/python3.10/site-packages/netmiko/ssh_dispatcher.py", line 344, in ConnectHandler
return ConnectionClass(*args, **kwargs)
File "/home/mikholap/.local/lib/python3.10/site-packages/netmiko/base_connection.py", line 434, in __init__
self._open()
File "/home/mikholap/.local/lib/python3.10/site-packages/netmiko/base_connection.py", line 439, in _open
self.establish_connection()
File "/home/mikholap/.local/lib/python3.10/site-packages/netmiko/base_connection.py", line 1009, in establish_connection
self.telnet_login()
File "/home/mikholap/.local/lib/python3.10/site-packages/netmiko/huawei/huawei.py", line 150, in telnet_login
output = self.read_until_pattern(pattern=combined_pattern)
File "/home/mikholap/.local/lib/python3.10/site-packages/netmiko/base_connection.py", line 631, in read_until_pattern
raise ReadException(msg)
netmiko.exceptions.ReadException: Unable to successfully split output based on pattern:
pattern=(]\s*$|>\s*$|(Change now|Please choose 'YES' or 'NO').+)
output=':\nInfo: The max number of VTY users is 10, and the number\n of current VTY users on line is 2.\n The current login time is 2022-05-13 13:46:34+04:00.\nInfo: Smart-upgrade is currently disabled. Enable Smart-upgrade to get recommended version information.\n<***>'
results=[':\nInfo: The max number of VTY users is 10, and the number\n of current VTY users on line is 2.\n The current login time is 2022-05-13 13:46:34+04:00.\nInfo: Smart-upgrade is currently disabled. Enable Smart-upgrade to get recommended version information.\n<***', '>', None, '']
```
@aztec102 So you are still seeing an issue when you use Huawei and telnet with Netmiko 4.1.0?
@ktbyers Yes, I still see it:
```
import re
from netmiko import ConnectHandler
host = input('Input Dest host: ')
device = {
'device_type': 'huawei_telnet',
'ip': host,
'username': 'login',
'password': 'pass',
'port' : 23,
'verbose': True,
'session_log': f'{host}.log'
}
net_connect = ConnectHandler(**device)
```
```
pip3 list | grep netmiko
netmiko 4.1.0
```
Hi, Mr. Ktbyers! I have the same issue with H3C (hp_comware_telnet).
SSH (hp_comware) was perfect! I like it! But telnet never succeeded.
```python
from netmiko import ConnectHandler

dev = {
    'device_type': "hp_comware_telnet",
    'host': '192.168.11.1',
    'username': 'admin',
    'password': 'admin',
    'port': 23,
}

net_con = ConnectHandler(**dev)
output = net_con.send_command('display current-configuration')
print(output)
```
```
Traceback (most recent call last):
File "C:\Users\18592\AppData\Roaming\JetBrains\PyCharm2022.1\scratches\scratch_6.py", line 10, in <module>
net_con = ConnectHandler(**dev)
File "C:\Users\18592\PycharmProjects\thearding\venv\lib\site-packages\netmiko\ssh_dispatcher.py", line 351, in ConnectHandler
return ConnectionClass(*args, **kwargs)
File "C:\Users\18592\PycharmProjects\thearding\venv\lib\site-packages\netmiko\hp\hp_comware.py", line 145, in __init__
super().__init__(*args, **kwargs)
File "C:\Users\18592\PycharmProjects\thearding\venv\lib\site-packages\netmiko\hp\hp_comware.py", line 13, in __init__
super().__init__(**kwargs)
File "C:\Users\18592\PycharmProjects\thearding\venv\lib\site-packages\netmiko\base_connection.py", line 434, in __init__
self._open()
File "C:\Users\18592\PycharmProjects\thearding\venv\lib\site-packages\netmiko\base_connection.py", line 440, in _open
self._try_session_preparation()
File "C:\Users\18592\PycharmProjects\thearding\venv\lib\site-packages\netmiko\base_connection.py", line 879, in _try_session_preparation
self.session_preparation()
File "C:\Users\18592\PycharmProjects\thearding\venv\lib\site-packages\netmiko\hp\hp_comware.py", line 19, in session_preparation
data = self._test_channel_read(pattern=r"to continue|[>\]]")
File "C:\Users\18592\PycharmProjects\thearding\venv\lib\site-packages\netmiko\base_connection.py", line 1119, in _test_channel_read
return self.read_until_pattern(pattern=pattern, read_timeout=20)
File "C:\Users\18592\PycharmProjects\thearding\venv\lib\site-packages\netmiko\base_connection.py", line 651, in read_until_pattern
raise ReadTimeout(msg)
netmiko.exceptions.ReadTimeout:
Pattern not detected: 'to continue|[>\\]]' in output.
Things you might try to fix this:
1. Adjust the regex pattern to better identify the terminating string. Note, in
many situations the pattern is automatically based on the network device's prompt.
2. Increase the read_timeout to a larger value.
You can also look at the Netmiko session_log or debug log for more information.
```
1. windows 10
2. python 3.10.4
3. netmiko 4.1.1
THANK YOU!
@zhtjames Can you show me what a manual telnet looks like i.e. copy and paste the manual telnet session here. You can obscure/change your username and password or anything else that is confidential.
@zhtjames ???
@ktbyers, you are great, thank you for netmiko!
I faced the same issue when try telnet. Below is manual telnet session you asked for:
```
user@my-secret-host:~$ telnet 10.166.66.66
Trying 10.166.66.66...
Connected to 10.166.66.66.
Escape character is '^]'.
Warning: Telnet is not a secure protocol, and it is recommended to use Stelnet.
Login authentication
Username:my-secret-username
Password:
Info: The max number of VTY users is 5, and the number
of current VTY users on line is 1.
<my-secret-prompt>
<my-secret-prompt>quit
Info: The max number of VTY users is 5, and the number
of current VTY users on line is 0.Connection closed by foreign host.
user@my-secret-host:~$
```
Python code:
```python
from netmiko import ConnectHandler
device = {
'device_type': 'huawei_telnet',
'host': '10.166.66.66',
'username': 'my-secret-username',
'password': 'my-secret-password',
}
net_connect = ConnectHandler(**device)
net_connect.disconnect()
```
Exception:
```
Traceback (most recent call last):
File "issue.py", line 10, in <module>
net_connect = ConnectHandler(**device)
File "/home/user/.local/lib/python3.8/site-packages/netmiko/ssh_dispatcher.py", line 365, in ConnectHandler
return ConnectionClass(*args, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/netmiko/base_connection.py", line 439, in __init__
self._open()
File "/home/user/.local/lib/python3.8/site-packages/netmiko/base_connection.py", line 444, in _open
self.establish_connection()
File "/home/user/.local/lib/python3.8/site-packages/netmiko/base_connection.py", line 1034, in establish_connection
self.telnet_login()
File "/home/user/.local/lib/python3.8/site-packages/netmiko/huawei/huawei.py", line 155, in telnet_login
output = self.read_until_pattern(pattern=combined_pattern)
File "/home/user/.local/lib/python3.8/site-packages/netmiko/base_connection.py", line 672, in read_until_pattern
raise ReadTimeout(msg)
netmiko.exceptions.ReadTimeout:
Pattern not detected: "(]\\s*$|>\\s*$|(Change now|Please choose 'YES' or 'NO').+)" in output.
Things you might try to fix this:
1. Adjust the regex pattern to better identify the terminating string. Note, in
many situations the pattern is automatically based on the network device's prompt.
2. Increase the read_timeout to a larger value.
You can also look at the Netmiko session_log or debug log for more information.
```
Configuration:
```
netmiko==4.1.2
Python 3.8.10
Linux Mint 20.3
```
@TimGa Is it possible that you can look in the debugger and see what happened right before this:
https://github.com/ktbyers/netmiko/blob/develop/netmiko/huawei/huawei.py#L155
i.e. add a pdb breakpoint right before this and see what is in the `return_msg` variable (a sketch of the placement follows)?
The failure above says it failed while looking for `>`, which should be in the output after the password is sent.
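A minimal sketch of that debugging step (the surrounding line is quoted from the huawei driver; only the breakpoint line is the addition):
```python
# Inside netmiko/huawei/huawei.py, just before the failing read:
breakpoint()  # drops into pdb; inspect the buffer with: p return_msg
output = self.read_until_pattern(pattern=combined_pattern)
```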
Here is `return_msg` content:
```
Warning: Telnet is not a secure protocol, and it is recommended to use Stelnet.
Login authentication
Username:my-secret-username
Password
```
It turned out that the password in my Python script was incorrect: I mixed up the IPs and passwords of different devices during the investigation. As far as I understood, ReadTimeout is raised while waiting for the pattern after an incorrect password. And it looks like, in my case, the reason for the timeout is that the device has a big delay before asking for password re-entry.
Here is manual telnet with incorrect password, just in case:
```
Trying 10.166.66.66...
Connected to 10.166.66.66.
Escape character is '^]'.
Warning: Telnet is not a secure protocol, and it is recommended to use Stelnet.
Login authentication
Username:my-secret-username
Password:
Error: Local authentication is rejected
Username:my-secret-username
Password:
Error: Local authentication is rejected
Username:my-secret-username
Password:
Error: Local authentication is rejected
Connection closed by foreign host.
```
P.S. Sorry for the confusion! I was confused by "ReadTimeout: Pattern not detected:" and didn't even think about a password issue, because usually it is NetmikoAuthenticationException. So maybe it is better to raise an auth error instead of a timeout in such a situation, but of course it is up to you to decide (a sketch of handling both cases follows).
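A minimal sketch of handling both outcomes on the caller side (assuming the exception names that netmiko 4.x exports, and the `device` dict from the script above):
```python
from netmiko import ConnectHandler
from netmiko.exceptions import NetmikoAuthenticationException, ReadTimeout

try:
    conn = ConnectHandler(**device)
except NetmikoAuthenticationException:
    print("Authentication failed")
except ReadTimeout:
    # Over telnet, a wrong password can currently surface as a timeout
    # while waiting for the prompt pattern, as described above.
    print("Prompt pattern never appeared")
```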
**Important**: The actual problem still remains: I can't connect to some Huawei devices using telnet, but the actual error is not `ReadTimeout` but `ReadException: Unable to successfully split output based on pattern`. I'll try to investigate it and write the results here in a few days.
I am going to leave this open and flag it as an issue where improvement should be made:
1. Better error message.
2. Potentially more reliable loop behavior.
Hi all,
I seem to be running into this issue as well since the new version:
```
Traceback (most recent call last):
File "/home/<snip>/devops/test-development/network/huawei_bulk_patch.py", line 19, in <module>
print(net_connect = ConnectHandler(**device))
File "/home/<snip>/.local/lib/python3.10/site-packages/netmiko/ssh_dispatcher.py", line 365, in ConnectHandler
return ConnectionClass(*args, **kwargs)
File "/home/<snip>/.local/lib/python3.10/site-packages/netmiko/base_connection.py", line 439, in __init__
self._open()
File "/home/<snip>/.local/lib/python3.10/site-packages/netmiko/base_connection.py", line 444, in _open
self.establish_connection()
File "/home/<snip>/.local/lib/python3.10/site-packages/netmiko/base_connection.py", line 1034, in establish_connection
self.telnet_login()
File "/home/<snip>/.local/lib/python3.10/site-packages/netmiko/huawei/huawei.py", line 155, in telnet_login
output = self.read_until_pattern(pattern=combined_pattern)
File "/home/<snip>/.local/lib/python3.10/site-packages/netmiko/base_connection.py", line 652, in read_until_pattern
raise ReadException(msg)
netmiko.exceptions.ReadException: Unable to successfully split output based on pattern:
pattern=(]\s*$|>\s*$|(Change now|Please choose 'YES' or 'NO').+)
output=':\n ----------------------------------------------------------------------------- \n User last login information: \n -----------------------------------------------------------------------------\n Access Type: Telnet \n IP-Address : <snip> \n Time : 2022-08-29 11:21:20+02:00 DST \n -----------------------------------------------------------------------------\n<support-network-labo-ar1220>'
results=[':\n ----------------------------------------------------------------------------- \n User last login information: \n -----------------------------------------------------------------------------\n Access Type: Telnet \n IP-Address : <snip> \n Time : 2022-08-29 11:21:20+02:00 DST \n -----------------------------------------------------------------------------\n<support-network-labo-ar1220', '>', None, '']
```
Environment: Ubuntu 22.04.1
Python 3.10.4
netmiko 4.1.2
How can I help to investigate this? (I'm not the best coder in the world, but I will try to do my best.)