url (string, 63) | repository_url (string, 1 class) | labels_url (string, 77) | comments_url (string, 72) | events_url (string, 70) | html_url (string, 51-53) | id (int64, 1.57B-2.35B) | node_id (string, 18-19) | number (int64, 59.5k-69.6k) | title (string, 1-554) | user (dict) | labels (list, 0-8) | state (string, 2 classes) | locked (bool, 2 classes) | assignee (dict) | assignees (list, 0-8) | milestone (null) | comments (sequence, 0-30) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string, 4 classes) | active_lock_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) | body (string, 1-65.4k, nullable) | reactions (dict) | timeline_url (string, 72) | performed_via_github_app (null) | state_reason (string, 3 classes) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/tensorflow/tensorflow/issues/61374 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61374/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61374/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61374/events | https://github.com/tensorflow/tensorflow/issues/61374 | 1,819,516,720 | I_kwDOArmXAs5sc58w | 61,374 | float8 support for array ops | {
"login": "youchangkim",
"id": 7123948,
"node_id": "MDQ6VXNlcjcxMjM5NDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7123948?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/youchangkim",
"html_url": "https://github.com/youchangkim",
"followers_url": "https://api.github.com/users/youchangkim/followers",
"following_url": "https://api.github.com/users/youchangkim/following{/other_user}",
"gists_url": "https://api.github.com/users/youchangkim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/youchangkim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/youchangkim/subscriptions",
"organizations_url": "https://api.github.com/users/youchangkim/orgs",
"repos_url": "https://api.github.com/users/youchangkim/repos",
"events_url": "https://api.github.com/users/youchangkim/events{/privacy}",
"received_events_url": "https://api.github.com/users/youchangkim/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473173272,
"node_id": "MDU6TGFiZWw0NzMxNzMyNzI=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:feature",
"name": "type:feature",
"color": "159b2e",
"default": false,
"description": "Feature requests"
},
{
"id": 1097547147,
"node_id": "MDU6TGFiZWwxMDk3NTQ3MTQ3",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:ops",
"name": "comp:ops",
"color": "0052cc",
"default": false,
"description": "OPs related issues"
},
{
"id": 5206407904,
"node_id": "LA_kwDOArmXAs8AAAABNlN64A",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.12",
"name": "TF 2.12",
"color": "c5def5",
"default": false,
"description": "For issues related to Tensorflow 2.12"
}
] | closed | false | {
"login": "cantonios",
"id": 2538739,
"node_id": "MDQ6VXNlcjI1Mzg3Mzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2538739?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cantonios",
"html_url": "https://github.com/cantonios",
"followers_url": "https://api.github.com/users/cantonios/followers",
"following_url": "https://api.github.com/users/cantonios/following{/other_user}",
"gists_url": "https://api.github.com/users/cantonios/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cantonios/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cantonios/subscriptions",
"organizations_url": "https://api.github.com/users/cantonios/orgs",
"repos_url": "https://api.github.com/users/cantonios/repos",
"events_url": "https://api.github.com/users/cantonios/events{/privacy}",
"received_events_url": "https://api.github.com/users/cantonios/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "cantonios",
"id": 2538739,
"node_id": "MDQ6VXNlcjI1Mzg3Mzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2538739?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cantonios",
"html_url": "https://github.com/cantonios",
"followers_url": "https://api.github.com/users/cantonios/followers",
"following_url": "https://api.github.com/users/cantonios/following{/other_user}",
"gists_url": "https://api.github.com/users/cantonios/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cantonios/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cantonios/subscriptions",
"organizations_url": "https://api.github.com/users/cantonios/orgs",
"repos_url": "https://api.github.com/users/cantonios/repos",
"events_url": "https://api.github.com/users/cantonios/events{/privacy}",
"received_events_url": "https://api.github.com/users/cantonios/received_events",
"type": "User",
"site_admin": false
},
{
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@youchangkim I tried to replicate the provided code, could you have a look at the log below and confirm the same?\r\n```\r\n\r\ntf.Tensor(\r\n[[1.234 2.346 3.457]\r\n [4.566 5.68 6.79 ]], shape=(2, 3), dtype=float16)\r\ntf.Tensor(\r\n[[1.25 2.25 3.5]\r\n [4.5 5.5 7]], shape=(2, 3), dtype=float8_e4m3fn)\r\n---------------------------------------------------------------------------\r\nNotFoundError Traceback (most recent call last)\r\n<ipython-input-1-4393d1d31d50> in <cell line: 10>()\r\n 8 print(a_fp8)\r\n 9 \r\n---> 10 b = a_fp8[1:2] # tensorflow.python.framework.errors_impl.NotFoundError\r\n 11 b = tf.transpose(a_fp8, [1, 0]) # tensorflow.python.framework.errors_impl.NotFoundError\r\n\r\n1 frames\r\n/usr/local/lib/python3.10/dist-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name)\r\n 7260 def raise_from_not_ok_status(e, name):\r\n 7261 e.message += (\" name: \" + name if name is not None else \"\")\r\n-> 7262 raise core._status_to_exception(e) from None # pylint: disable=protected-access\r\n 7263 \r\n 7264 \r\n\r\nNotFoundError: Could not find device for node: {{node StridedSlice}} = StridedSlice[Index=DT_INT32, T=DT_FLOAT8_E4M3FN, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=0]\r\nAll kernels registered for op StridedSlice:\r\n device='XLA_GPU_JIT'; Index in [DT_INT32, DT_INT16, DT_INT64]; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, 930109355527764061, DT_HALF, DT_UINT32, DT_UINT64, DT_FLOAT8_E5M2, DT_FLOAT8_E4M3FN]\r\n device='XLA_CPU_JIT'; Index in [DT_INT32, DT_INT16, DT_INT64]; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, 930109355527764061, DT_HALF, DT_UINT32, DT_UINT64, DT_FLOAT8_E5M2, DT_FLOAT8_E4M3FN]\r\n device='CPU'; T in [DT_QINT32]\r\n device='CPU'; T in [DT_QUINT8]\r\n device='CPU'; T in [DT_QINT8]\r\n device='CPU'; T in [DT_VARIANT]\r\n device='CPU'; T in [DT_RESOURCE]\r\n device='CPU'; T in [DT_STRING]\r\n device='CPU'; T in [DT_BOOL]\r\n device='CPU'; T in [DT_COMPLEX128]\r\n device='CPU'; T in [DT_COMPLEX64]\r\n device='CPU'; T in [DT_DOUBLE]\r\n device='CPU'; T in [DT_FLOAT]\r\n device='CPU'; T in [DT_BFLOAT16]\r\n device='CPU'; T in [DT_HALF]\r\n device='CPU'; T in [DT_INT32]\r\n device='CPU'; T in [DT_INT8]\r\n device='CPU'; T in [DT_UINT8]\r\n device='CPU'; T in [DT_INT16]\r\n device='CPU'; T in [DT_UINT16]\r\n device='CPU'; T in [DT_UINT32]\r\n device='CPU'; T in [DT_INT64]\r\n device='CPU'; T in [DT_UINT64]\r\n device='GPU'; T in [DT_INT32]\r\n device='GPU'; T in [DT_BOOL]\r\n device='GPU'; T in [DT_COMPLEX128]\r\n device='GPU'; T in [DT_COMPLEX64]\r\n device='GPU'; T in [DT_DOUBLE]\r\n device='GPU'; T in [DT_FLOAT]\r\n device='GPU'; T in [DT_BFLOAT16]\r\n device='GPU'; T in [DT_HALF]\r\n device='GPU'; T in [DT_INT8]\r\n device='GPU'; T in [DT_UINT8]\r\n device='GPU'; T in [DT_INT16]\r\n device='GPU'; T in [DT_UINT16]\r\n device='GPU'; T in [DT_UINT32]\r\n device='GPU'; T in [DT_INT64]\r\n device='GPU'; T in [DT_UINT64]\r\n [Op:StridedSlice] name: strided_slice/\r\n\r\n```\r\nThank you!",
"@sushreebarsa Yes. The ask is to register float8 types to these data movement ops.",
"@sachinprasadhs I was able to replicate the issue reported here using TF [2.12](https://colab.research.google.com/gist/sushreebarsa/74637819d391171f4f910485036fb301/61374.ipynb) and [2.13 ](https://colab.research.google.com/gist/sushreebarsa/2ce5f5ab09db398b0456196555f760e4/61374.ipynb)as well. Please have a look ?\r\nThank you! ",
"Fixed with https://github.com/tensorflow/tensorflow/pull/61749"
] | 2023-07-25T04:10:16 | 2023-09-07T23:32:14 | 2023-09-07T23:32:14 | NONE | null | null | null | ### Issue type
Feature Request
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
2.12.0
### Custom code
No
### OS platform and distribution
macOS-13.2.1-arm64-arm-64bit
### Mobile device
_No response_
### Python version
3.9.6
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Please add FP8 datatype support for array ops (like Reshape, Transpose, GatherV2, ExpandDims, Squeeze, ConcatV2, Split, Pack, Unpack, and StridedSlice).
### Standalone code to reproduce the issue
```python
import tensorflow as tf
from tensorflow.python.framework import dtypes
a = tf.constant([[1.2345678, 2.3456789, 3.4567891], [4.5678912, 5.6789123, 6.7891234]], dtype=dtypes.float16)
print(a)
a_fp8 = tf.cast(a, dtypes.float8_e4m3fn)
print(a_fp8)
b = a_fp8[1:2] # tensorflow.python.framework.errors_impl.NotFoundError
b = tf.transpose(a_fp8, [1, 0]) # tensorflow.python.framework.errors_impl.NotFoundError
```
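Until float8 kernels are registered for these data movement ops (the fix referenced in the comments), one possible stopgap is to round-trip through a wider dtype. This is only a sketch, not part of the original report; it assumes Cast is registered for float8_e4m3fn in both directions (the successful tf.cast above suggests at least the cast into float8 works), and the helper name is hypothetical:

```python
import tensorflow as tf
from tensorflow.python.framework import dtypes

def fp8_array_op(x_fp8, op):
    """Hypothetical helper: run an array op on a float8 tensor via float32.

    Array ops only move data, and every float8 value is exactly
    representable in float32, so the round trip does not change values.
    """
    x32 = tf.cast(x_fp8, dtypes.float32)
    return tf.cast(op(x32), x_fp8.dtype)

a_fp8 = tf.cast(tf.constant([[1.2, 2.3, 3.4], [4.5, 5.6, 6.7]]), dtypes.float8_e4m3fn)
b = fp8_array_op(a_fp8, lambda t: t[1:2])                   # strided slice
c = fp8_array_op(a_fp8, lambda t: tf.transpose(t, [1, 0]))  # transpose
print(b.dtype, c.dtype)  # float8_e4m3fn float8_e4m3fn
```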
### Relevant log output
_No response_ | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61374/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61373 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61373/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61373/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61373/events | https://github.com/tensorflow/tensorflow/issues/61373 | 1,819,507,667 | I_kwDOArmXAs5sc3vT | 61,373 | Invalid work group size | {
"login": "njuhang",
"id": 46017704,
"node_id": "MDQ6VXNlcjQ2MDE3NzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/46017704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/njuhang",
"html_url": "https://github.com/njuhang",
"followers_url": "https://api.github.com/users/njuhang/followers",
"following_url": "https://api.github.com/users/njuhang/following{/other_user}",
"gists_url": "https://api.github.com/users/njuhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/njuhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/njuhang/subscriptions",
"organizations_url": "https://api.github.com/users/njuhang/orgs",
"repos_url": "https://api.github.com/users/njuhang/repos",
"events_url": "https://api.github.com/users/njuhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/njuhang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 750616506,
"node_id": "MDU6TGFiZWw3NTA2MTY1MDY=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:lite",
"name": "comp:lite",
"color": "0052cc",
"default": false,
"description": "TF Lite related issues"
},
{
"id": 2671339633,
"node_id": "MDU6TGFiZWwyNjcxMzM5NjMz",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TFLiteGpuDelegate",
"name": "TFLiteGpuDelegate",
"color": "F71F04",
"default": false,
"description": "TFLite Gpu delegate issue"
},
{
"id": 4511033337,
"node_id": "LA_kwDOArmXAs8AAAABDODn-Q",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.10",
"name": "TF 2.10",
"color": "C15088",
"default": false,
"description": ""
}
] | closed | false | {
"login": "pjpratik",
"id": 118897289,
"node_id": "U_kgDOBxY6iQ",
"avatar_url": "https://avatars.githubusercontent.com/u/118897289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pjpratik",
"html_url": "https://github.com/pjpratik",
"followers_url": "https://api.github.com/users/pjpratik/followers",
"following_url": "https://api.github.com/users/pjpratik/following{/other_user}",
"gists_url": "https://api.github.com/users/pjpratik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pjpratik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjpratik/subscriptions",
"organizations_url": "https://api.github.com/users/pjpratik/orgs",
"repos_url": "https://api.github.com/users/pjpratik/repos",
"events_url": "https://api.github.com/users/pjpratik/events{/privacy}",
"received_events_url": "https://api.github.com/users/pjpratik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "pjpratik",
"id": 118897289,
"node_id": "U_kgDOBxY6iQ",
"avatar_url": "https://avatars.githubusercontent.com/u/118897289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pjpratik",
"html_url": "https://github.com/pjpratik",
"followers_url": "https://api.github.com/users/pjpratik/followers",
"following_url": "https://api.github.com/users/pjpratik/following{/other_user}",
"gists_url": "https://api.github.com/users/pjpratik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pjpratik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjpratik/subscriptions",
"organizations_url": "https://api.github.com/users/pjpratik/orgs",
"repos_url": "https://api.github.com/users/pjpratik/repos",
"events_url": "https://api.github.com/users/pjpratik/events{/privacy}",
"received_events_url": "https://api.github.com/users/pjpratik/received_events",
"type": "User",
"site_admin": false
},
{
"login": "pkgoogle",
"id": 132095473,
"node_id": "U_kgDOB9-d8Q",
"avatar_url": "https://avatars.githubusercontent.com/u/132095473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pkgoogle",
"html_url": "https://github.com/pkgoogle",
"followers_url": "https://api.github.com/users/pkgoogle/followers",
"following_url": "https://api.github.com/users/pkgoogle/following{/other_user}",
"gists_url": "https://api.github.com/users/pkgoogle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pkgoogle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pkgoogle/subscriptions",
"organizations_url": "https://api.github.com/users/pkgoogle/orgs",
"repos_url": "https://api.github.com/users/pkgoogle/repos",
"events_url": "https://api.github.com/users/pkgoogle/events{/privacy}",
"received_events_url": "https://api.github.com/users/pkgoogle/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @njuhang \r\n\r\nCan you please pull from master branch and let us know if you are facing same issue?\r\n\r\nWork groups from the below are currently used in TFLite GPU and their stability is statistically proven. While\r\nthey do not necessarily result in peak optimal time across all parameters, they are reliable in giving top 10% performance\r\nregardless of the convolution parameters.\r\n\r\n\r\n\r\nAdreno GPU Model | CONV 2D | DEPTHWISE CONV\r\n-- | -- | --\r\n630 | (4, 8, 4) | (4, 4, 8)\r\n540 | (8, 2, 2) | (8, 8, 2)\r\n510 | (8, 4, 4) | (8, 4, 4)\r\n509 | (8, 4, 8) | (8, 4, 2)\r\n50X/4XX | (8, 4, 8) | (8, 4, 8)\r\n\r\nPlease check this paper for reference: https://arxiv.org/abs/1907.01989\r\n\r\nThanks.",
"Thanks for reply!\r\nI download benchmark file from https://storage.googleapis.com/tensorflow-nightly-public/prod/tensorflow/release/lite/tools/nightly/latest/android_aarch64_benchmark_model_plus_flex, then benchmark on my sm6225 device and got the same error. \r\nCommand is ./android_aarch64_benchmark_model --graph=./models/mobilenet_v2_float.tflite --use_gpu=true",
"@njuhang Thanks for the information.\r\n\r\nAs this [comment](https://github.com/tensorflow/tensorflow/issues/34064#issuecomment-551936622) suggests,\r\n\r\nThe max Local Work group size of each device is often 256. We may have to experiment with the workgroup sizes and it is observed that Adreno is more susceptible to workgroup changes. Change in group size will be affecting in the performance.\r\n\r\nThanks.",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"I see it. But I think it will result some errors on some devices such as sm6225, and it need to be solved.",
"@njuhang Thanks for the information.\r\n\r\n@pkgoogle Could you please look into this issue?\r\n\r\nThanks.",
"Hi @njuhang, thanks for reporting the issue. Can you help me with more context. Are you using the flex version of android_aarch64_benchmark_model? Because your command doesn't have it but your download link does. Can you upload the .tflite model so we can replicate? Are you doing this in android studio? Can you show me the code around where you invoke the model? A toy project with just the offending code will be preferable so that we can reproduce it locally.",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61373\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61373\">No</a>\n"
] | 2023-07-25T03:57:58 | 2023-08-25T17:29:58 | 2023-08-25T01:47:47 | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
tf 2.10.0
### Custom code
Yes
### OS platform and distribution
ubuntu 20.04
### Mobile device
Qualcomm SM6225
### Python version
3.8
### Bazel version
5.1.1
### GCC/compiler version
9.4.0
### CUDA/cuDNN version
no
### GPU model and memory
QUALCOMM Adreno(TM) QUALCOMM OpenCL C 2.0 Adreno(TM) 610
### Current behavior?
Using TFLite to invoke MobileNet V2 (or any model containing a Softmax op) on a mobile device produces an error:
TfLiteGpuDelegate Invoke: Failed to clEnqueueNDRangeKernel - Invalid work group size.
More details:
if INFERENCE_FORCE_FP16 is false, no error occurs.
if the work group size is smaller than 256, no error occurs.
### Standalone code to reproduce the issue
```shell
The same code used to invoke a TFLite model on an Android phone reproduces this issue.
```
### Relevant log output
_No response_ | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61373/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61372 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61372/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61372/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61372/events | https://github.com/tensorflow/tensorflow/issues/61372 | 1,819,391,081 | I_kwDOArmXAs5scbRp | 61,372 | "ValueError: Cannot take the length of shape with unknown rank." when using MultiHeadRelativeAttention | {
"login": "mizuirosakura",
"id": 107431581,
"node_id": "U_kgDOBmdGnQ",
"avatar_url": "https://avatars.githubusercontent.com/u/107431581?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mizuirosakura",
"html_url": "https://github.com/mizuirosakura",
"followers_url": "https://api.github.com/users/mizuirosakura/followers",
"following_url": "https://api.github.com/users/mizuirosakura/following{/other_user}",
"gists_url": "https://api.github.com/users/mizuirosakura/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mizuirosakura/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mizuirosakura/subscriptions",
"organizations_url": "https://api.github.com/users/mizuirosakura/orgs",
"repos_url": "https://api.github.com/users/mizuirosakura/repos",
"events_url": "https://api.github.com/users/mizuirosakura/events{/privacy}",
"received_events_url": "https://api.github.com/users/mizuirosakura/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
}
] | closed | false | {
"login": "tilakrayal",
"id": 81610181,
"node_id": "MDQ6VXNlcjgxNjEwMTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/81610181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tilakrayal",
"html_url": "https://github.com/tilakrayal",
"followers_url": "https://api.github.com/users/tilakrayal/followers",
"following_url": "https://api.github.com/users/tilakrayal/following{/other_user}",
"gists_url": "https://api.github.com/users/tilakrayal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tilakrayal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tilakrayal/subscriptions",
"organizations_url": "https://api.github.com/users/tilakrayal/orgs",
"repos_url": "https://api.github.com/users/tilakrayal/repos",
"events_url": "https://api.github.com/users/tilakrayal/events{/privacy}",
"received_events_url": "https://api.github.com/users/tilakrayal/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "tilakrayal",
"id": 81610181,
"node_id": "MDQ6VXNlcjgxNjEwMTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/81610181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tilakrayal",
"html_url": "https://github.com/tilakrayal",
"followers_url": "https://api.github.com/users/tilakrayal/followers",
"following_url": "https://api.github.com/users/tilakrayal/following{/other_user}",
"gists_url": "https://api.github.com/users/tilakrayal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tilakrayal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tilakrayal/subscriptions",
"organizations_url": "https://api.github.com/users/tilakrayal/orgs",
"repos_url": "https://api.github.com/users/tilakrayal/repos",
"events_url": "https://api.github.com/users/tilakrayal/events{/privacy}",
"received_events_url": "https://api.github.com/users/tilakrayal/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61372\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61372\">No</a>\n"
] | 2023-07-25T01:30:55 | 2023-07-25T07:30:05 | 2023-07-25T07:30:02 | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.13.1
### Custom code
Yes
### OS platform and distribution
mac M2 pro
### Mobile device
mac M2 pro
### Python version
3.10.9
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
When using MultiHeadRelativeAttention from official.nlp.modeling.layers, I run into this error: "ValueError: Cannot take the length of shape with unknown rank." I'm sorry, I'm not good at English. Thank you!
### Standalone code to reproduce the issue
```python
from official.nlp.modeling.layers import MultiHeadRelativeAttention
import tensorflow as tf
vec= tf.constant([[[[0.1]*4]*3]*3])
layers=MultiHeadRelativeAttention(num_heads=4,key_dim=3)
output=layers(vec,vec,content_attention_bias=0.1, positional_attention_bias=0.1)
```
### Relevant log output
```shell
ValueError: Cannot take the length of shape with unknown rank.
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61372/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61371 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61371/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61371/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61371/events | https://github.com/tensorflow/tensorflow/issues/61371 | 1,819,311,709 | I_kwDOArmXAs5scH5d | 61,371 | RNG generators make python abort in python3.11 and tf2.12, conda-forge | {
"login": "WenjieZ",
"id": 6860682,
"node_id": "MDQ6VXNlcjY4NjA2ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6860682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WenjieZ",
"html_url": "https://github.com/WenjieZ",
"followers_url": "https://api.github.com/users/WenjieZ/followers",
"following_url": "https://api.github.com/users/WenjieZ/following{/other_user}",
"gists_url": "https://api.github.com/users/WenjieZ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WenjieZ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WenjieZ/subscriptions",
"organizations_url": "https://api.github.com/users/WenjieZ/orgs",
"repos_url": "https://api.github.com/users/WenjieZ/repos",
"events_url": "https://api.github.com/users/WenjieZ/events{/privacy}",
"received_events_url": "https://api.github.com/users/WenjieZ/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 473173351,
"node_id": "MDU6TGFiZWw0NzMxNzMzNTE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:build/install",
"name": "type:build/install",
"color": "159b2e",
"default": false,
"description": "Build and install issues"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1093464312,
"node_id": "MDU6TGFiZWwxMDkzNDY0MzEy",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:others",
"name": "type:others",
"color": "159b2e",
"default": false,
"description": "issues not falling in bug, perfromance, support, build and install or feature"
},
{
"id": 5206407904,
"node_id": "LA_kwDOArmXAs8AAAABNlN64A",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.12",
"name": "TF 2.12",
"color": "c5def5",
"default": false,
"description": "For issues related to Tensorflow 2.12"
}
] | closed | false | {
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @WenjieZ ,\r\n\r\nOfficially we don't recommend users to install tensorflow with conda-forge.We officially recommend to use only Pip since TensorFlow is only officially released to PyPI.\r\n\r\nPlease refer to attached [source](https://www.tensorflow.org/install/pip#:~:text=Note%3A%20Do%20not%20install%20TensorFlow%20with%20conda.%20It%20may%20not%20have%20the%20latest%20stable%20version.%20pip%20is%20recommended%20since%20TensorFlow%20is%20only%20officially%20released%20to%20PyPI.) for same.Thanks!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"Thanks for the reply. I thought that the conda-forge version was released by Google as well. ",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61371\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61371\">No</a>\n"
] | 2023-07-24T23:58:21 | 2023-08-02T13:46:55 | 2023-08-02T13:46:52 | CONTRIBUTOR | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
2.12.1
### Custom code
Yes
### OS platform and distribution
MacOS 13.4.1(c)
### Mobile device
_No response_
### Python version
3.11
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Unable to create random number generators. This happens with conda-forge (Python 3.11.3 + TensorFlow 2.12.1). It works fine with pip (Python 3.11.3 + TensorFlow 2.12.0).
### Standalone code to reproduce the issue
```python
import tensorflow as tf
g = tf.random.Generator.from_seed(1)
g.normal(shape=[3])
```
### Relevant log output
```shell
Assertion failed: (f == nullptr || dynamic_cast<To>(f) != nullptr), function down_cast, file ./tensorflow/tsl/platform/default/casts.h, line 58.
[1] 18173 abort python
```
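Not part of the original report: if switching to the pip wheel is not an option right away, the stateless RNG API may sidestep the tf.random.Generator code path that hits this assertion. This is an untested sketch, not a confirmed workaround for the conda-forge build:

```python
import tensorflow as tf

# Stateless RNG ops take an explicit [2]-element seed instead of building
# tf.random.Generator state, which is where the abort is reported above.
x = tf.random.stateless_normal(shape=[3], seed=[1, 0])
print(x)
```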
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61371/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61370 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61370/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61370/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61370/events | https://github.com/tensorflow/tensorflow/issues/61370 | 1,819,272,703 | I_kwDOArmXAs5sb-X_ | 61,370 | converting LSTM layer to tflite with float16 fails | {
"login": "sronen71",
"id": 4361027,
"node_id": "MDQ6VXNlcjQzNjEwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4361027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sronen71",
"html_url": "https://github.com/sronen71",
"followers_url": "https://api.github.com/users/sronen71/followers",
"following_url": "https://api.github.com/users/sronen71/following{/other_user}",
"gists_url": "https://api.github.com/users/sronen71/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sronen71/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sronen71/subscriptions",
"organizations_url": "https://api.github.com/users/sronen71/orgs",
"repos_url": "https://api.github.com/users/sronen71/repos",
"events_url": "https://api.github.com/users/sronen71/events{/privacy}",
"received_events_url": "https://api.github.com/users/sronen71/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 750616506,
"node_id": "MDU6TGFiZWw3NTA2MTY1MDY=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:lite",
"name": "comp:lite",
"color": "0052cc",
"default": false,
"description": "TF Lite related issues"
},
{
"id": 1661751498,
"node_id": "MDU6TGFiZWwxNjYxNzUxNDk4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TFLiteConverter",
"name": "TFLiteConverter",
"color": "bfdadc",
"default": false,
"description": "For issues related to TFLite converter"
},
{
"id": 2671351731,
"node_id": "MDU6TGFiZWwyNjcxMzUxNzMx",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/ModelOptimizationToolkit",
"name": "ModelOptimizationToolkit",
"color": "BFD629",
"default": false,
"description": "TF Model Optimization Toolkit"
},
{
"id": 2787066190,
"node_id": "MDU6TGFiZWwyNzg3MDY2MTkw",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/RNN",
"name": "RNN",
"color": "AB0FB3",
"default": false,
"description": "RNN related issues"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | open | false | {
"login": "abattery",
"id": 3203059,
"node_id": "MDQ6VXNlcjMyMDMwNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3203059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abattery",
"html_url": "https://github.com/abattery",
"followers_url": "https://api.github.com/users/abattery/followers",
"following_url": "https://api.github.com/users/abattery/following{/other_user}",
"gists_url": "https://api.github.com/users/abattery/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abattery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abattery/subscriptions",
"organizations_url": "https://api.github.com/users/abattery/orgs",
"repos_url": "https://api.github.com/users/abattery/repos",
"events_url": "https://api.github.com/users/abattery/events{/privacy}",
"received_events_url": "https://api.github.com/users/abattery/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "abattery",
"id": 3203059,
"node_id": "MDQ6VXNlcjMyMDMwNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3203059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abattery",
"html_url": "https://github.com/abattery",
"followers_url": "https://api.github.com/users/abattery/followers",
"following_url": "https://api.github.com/users/abattery/following{/other_user}",
"gists_url": "https://api.github.com/users/abattery/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abattery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abattery/subscriptions",
"organizations_url": "https://api.github.com/users/abattery/orgs",
"repos_url": "https://api.github.com/users/abattery/repos",
"events_url": "https://api.github.com/users/abattery/events{/privacy}",
"received_events_url": "https://api.github.com/users/abattery/received_events",
"type": "User",
"site_admin": false
},
{
"login": "pkgoogle",
"id": 132095473,
"node_id": "U_kgDOB9-d8Q",
"avatar_url": "https://avatars.githubusercontent.com/u/132095473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pkgoogle",
"html_url": "https://github.com/pkgoogle",
"followers_url": "https://api.github.com/users/pkgoogle/followers",
"following_url": "https://api.github.com/users/pkgoogle/following{/other_user}",
"gists_url": "https://api.github.com/users/pkgoogle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pkgoogle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pkgoogle/subscriptions",
"organizations_url": "https://api.github.com/users/pkgoogle/orgs",
"repos_url": "https://api.github.com/users/pkgoogle/repos",
"events_url": "https://api.github.com/users/pkgoogle/events{/privacy}",
"received_events_url": "https://api.github.com/users/pkgoogle/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@sronen71 For LSTM conversion, this is an [article](https://www.tensorflow.org/lite/convert/rnn) and colab describing the process, and you can follow the [colab gist](https://colab.sandbox.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/examples/experimental_new_converter/Keras_LSTM_fusion_Codelab.ipynb) to do it. For [4 kinds of quantizations](https://www.tensorflow.org/lite/performance/model_optimization), maybe you can also try other ways like dynamic range. Hope it could help? Please let us know. Thank you!",
"@sushreebarsa \r\nI checked that information and colab example previously, and based my colab gist on that.\r\nI find the original colab example is broken right now, because of tf-nightly error.\r\nI was able to run it yesterday successfully, however, it only uses the vanilla convert.\r\nWhen I try to use the optimize option with float16 target, as in the colab gist I attached, the reported defect is observed.",
"@pkgoogle I was able to reproduce this issue. Please find this [gist](https://colab.research.google.com/gist/pjpratik/f814b84dc79edfee54977c82c98cc34c/61370.ipynb).\r\n\r\nCould you please look into this?\r\n\r\nThanks.",
"I was able to replicate with @pjpratik's exact same gist, @abattery, can you please take a look? Thanks.",
"Hi, can i work on this issue ?",
"Hi @devadigapratham, we always welcome any contribution from the community. Please let us know if you have any questions.",
"Also running into this issue for float16 tflite conversions"
] | 2023-07-24T23:03:50 | 2024-05-02T07:58:29 | null | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.13.0
### Custom code
Yes
### OS platform and distribution
Colab
### Mobile device
_No response_
### Python version
3.10
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
The TFLite converter fails to convert a model with an LSTM layer to the float16 target.
It runs for a very long time, increases RAM consumption to the maximum available on the system, and then crashes.
Expected behavior: the model should convert.
Workaround: use the RNN wrapper around LSTMCell instead of the LSTM layer (a sketch follows the log output below).
TFLite successfully converts this alternative with the float16 target.
### Standalone code to reproduce the issue
```shell
See this gist:
https://colab.research.google.com/gist/sronen71/9b016245f507280f867841a7161fad8d/keras-lstm-fusion-codelab.ipynb
```
### Relevant log output
```shell
The program crashes after it runs out of RAM.
```
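A minimal sketch of the workaround described above (RNN wrapping LSTMCell instead of the LSTM layer), added here for illustration; the input shape and layer sizes are assumptions, not taken from the linked gist:

```python
import tensorflow as tf

# Assumed toy model; the point is RNN(LSTMCell(...)) in place of LSTM(...).
inputs = tf.keras.Input(shape=(28, 28), name="input")
x = tf.keras.layers.RNN(tf.keras.layers.LSTMCell(20), return_sequences=True)(inputs)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()  # per the report, this variant converts
```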
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61370/timeline | null | null | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61369 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61369/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61369/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61369/events | https://github.com/tensorflow/tensorflow/issues/61369 | 1,818,875,455 | I_kwDOArmXAs5sadY_ | 61,369 | StringLookup layer does not retrieve vocabulary after saving and loading the model | {
"login": "brunolucatto",
"id": 26310856,
"node_id": "MDQ6VXNlcjI2MzEwODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/26310856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brunolucatto",
"html_url": "https://github.com/brunolucatto",
"followers_url": "https://api.github.com/users/brunolucatto/followers",
"following_url": "https://api.github.com/users/brunolucatto/following{/other_user}",
"gists_url": "https://api.github.com/users/brunolucatto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brunolucatto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brunolucatto/subscriptions",
"organizations_url": "https://api.github.com/users/brunolucatto/orgs",
"repos_url": "https://api.github.com/users/brunolucatto/repos",
"events_url": "https://api.github.com/users/brunolucatto/events{/privacy}",
"received_events_url": "https://api.github.com/users/brunolucatto/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 1097546578,
"node_id": "MDU6TGFiZWwxMDk3NTQ2NTc4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:keras",
"name": "comp:keras",
"color": "0052cc",
"default": false,
"description": "Keras related issues"
},
{
"id": 5206407904,
"node_id": "LA_kwDOArmXAs8AAAABNlN64A",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.12",
"name": "TF 2.12",
"color": "c5def5",
"default": false,
"description": "For issues related to Tensorflow 2.12"
}
] | open | false | {
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @brunolucatto,\r\n\r\nPickling isn't natively supported in keras as it is generally [not secure](https://docs.python.org/3/library/pickle.html). Therefore I don't think this is actionable on Keras side, and overriding methods for `tf.keras.layers.serialize` is necessary. \r\n\r\nPlease refer to the similar issue [here](https://github.com/keras-team/tf-keras/issues/626).\r\n\r\nPlease look at the [gist](https://colab.research.google.com/gist/Varsha-anjanappa/f87d4145f9d2a2a9e46069790154b4f6/61369.ipynb) here.\r\n\r\nPlease let us know if it works.\r\n\r\nThank you!!\r\n",
"Hi @Varsha-anjanappa, thank you so much for the prompt reply.\r\n\r\nI am afraid your example on gist missed the part where it fails, which is in the deserialization, not in the serialization. I added the tf.keras.layers.deserialize function, and it throws the same error as the pickle one.\r\n\r\n```\r\nimport tensorflow as tf\r\nimport pickle\r\n\r\nmodel_input = tf.keras.Input(shape=(1,), dtype=tf.int64)\r\nlookup = tf.keras.layers.StringLookup(vocabulary=['a', 'b'])(model_input)\r\nlookup = tf.keras.layers.Flatten()(lookup)\r\noutput = tf.keras.layers.Dense(10)(lookup)\r\nfull_model = tf.keras.Model(model_input, output)\r\n\r\n# this part should work\r\nmodel_bytes = pickle.dumps(full_model)\r\nmodel_recovered = pickle.loads(model_bytes)\r\n\r\n\r\n# this part should throw an error\r\nfull_model.save(\"/tmp/temp_model\")\r\nfull_model_loaded = tf.keras.models.load_model(\"/tmp/temp_model\")\r\nmodel_bytes_2 = tf.keras.layers.serialize(full_model_loaded)\r\nmodel_recovered_2 = tf.keras.layers.deserialize(model_bytes_2)\r\n```\r\n\r\nMoreover, I also added the proposed solution for the problem (i.e., changing the get_config method of the StringLookup class) to show that it also works using Keras' proper serialization.\r\n\r\n```\r\nimport tensorflow as tf\r\nimport pickle\r\n\r\[email protected]_keras_serializable()\r\nclass MyStringLookup(tf.keras.layers.StringLookup):\r\n def get_config(self):\r\n base_config = super().get_config()\r\n custom = {\"vocabulary\": self.get_vocabulary()}\r\n return {**base_config, **custom}\r\n\r\nmodel_input = tf.keras.Input(shape=(1,), dtype=tf.int64)\r\nlookup = MyStringLookup(vocabulary=['a', 'b'])(model_input)\r\nlookup = tf.keras.layers.Flatten()(lookup)\r\noutput = tf.keras.layers.Dense(10)(lookup)\r\nfull_model = tf.keras.Model(model_input, output)\r\n\r\n# this part should work\r\nmodel_bytes = pickle.dumps(full_model)\r\nmodel_recovered = pickle.loads(model_bytes)\r\n\r\n\r\n# this part should throw an error\r\nfull_model.save(\"/tmp/temp_model\")\r\nfull_model_loaded = tf.keras.models.load_model(\"/tmp/temp_model\")\r\nmodel_bytes_2 = tf.keras.layers.serialize(full_model_loaded)\r\nmodel_recovered_2 = tf.keras.layers.deserialize(model_bytes_2)\r\n```\r\n\r\n#-----------------\r\n\r\nAs a side note, I am using pickle because I want to distribute the model on spark using broadcast, so I can make predictions using a pandas UDF more efficiently. The implementation of pyspark.broadcast itself uses pickle, I tried to change that by creating a custom broadcast function but it was not an easy task for me. Loading the model from the disk inside the pandas UDF is also an alternative to broadcast, but it is many times slower.\r\n\r\n",
"Hi @Varsha-anjanappa,\r\n\r\nJust following up to know if you were able to reproduce the error with the changes above.\r\n\r\nPlease let me know if I can provide any further information.",
"Hi @brunolucatto ,\r\n\r\nPlease find the attached [gist](https://colab.research.google.com/gist/Varsha-anjanappa/89c9def29103fc1b3aa3735f427362a7/61369_v2.ipynb).\r\n\r\nAs you can see in the gist, when we serialize, the vocabulary values aren't captured, it gives an empty list. Whereas the vocab size is getting captured. There is some issue in the source code. We'll check and get back to you.\r\n\r\nThank you ",
"@SuryanarayanaY , can you please look into it",
"we are seeing the same issue using tf 2.6.2 with python 3.6.9",
"@brunolucatto ,\r\n\r\nIt seems a valid issue for me. It needs to dig more for the root cause.",
"I have done some work, on this bug. The issue seems to be in this file load.py. \r\n\r\n\r\nAs I have highlighted, parse from string is generating the data as nodes via proto buff.\r\n\r\n\r\n\r\nThe output nodes are as follows, \r\n\r\nnodes {\r\n node_id: 1\r\n node_path: \"root.layer-0\"\r\n identifier: \"_tf_keras_input_layer\"\r\n metadata: \"{\\\"class_name\\\": \\\"InputLayer\\\", \\\"name\\\": \\\"input_1\\\", \\\"dtype\\\": \\\"int64\\\", \\\"sparse\\\": false, \\\"ragged\\\": false, \\\"batch_input_shape\\\": {\\\"class_name\\\": \\\"__tuple__\\\", \\\"items\\\": [null, 1]}, \\\"config\\\": {\\\"batch_input_shape\\\": {\\\"class_name\\\": \\\"__tuple__\\\", \\\"items\\\": [null, 1]}, \\\"dtype\\\": \\\"int64\\\", \\\"sparse\\\": false, \\\"ragged\\\": false, \\\"name\\\": \\\"input_1\\\"}}\"\r\n version {\r\n producer: 2\r\n min_consumer: 1\r\n }\r\n}\r\nnodes {\r\n node_id: 2\r\n node_path: \"root.layer-1\"\r\n identifier: \"_tf_keras_layer\"\r\n metadata: \"{\\\"name\\\": \\\"string_lookup\\\", \\\"trainable\\\": true, \\\"expects_training_arg\\\": false, \\\"dtype\\\": \\\"int64\\\", \\\"batch_input_shape\\\": null, \\\"stateful\\\": false, \\\"must_restore_from_config\\\": true, \\\"preserve_input_structure_in_config\\\": false, \\\"autocast\\\": true, \\\"class_name\\\": \\\"StringLookup\\\", \\\"config\\\": {\\\"name\\\": \\\"string_lookup\\\", \\\"trainable\\\": true, \\\"dtype\\\": \\\"int64\\\", \\\"invert\\\": false, \\\"max_tokens\\\": null, \\\"num_oov_indices\\\": 1, \\\"oov_token\\\": \\\"[UNK]\\\", \\\"mask_token\\\": null, \\\"output_mode\\\": \\\"int\\\", \\\"sparse\\\": false, \\\"pad_to_max_tokens\\\": false, \\\"idf_weights\\\": null, \\\"vocabulary\\\": null, \\\"vocabulary_size\\\": 3, \\\"encoding\\\": \\\"utf-8\\\", \\\"has_input_vocabulary\\\": true}, \\\"inbound_nodes\\\": [[[\\\"input_1\\\", 0, 0, {}]]], \\\"shared_object_id\\\": 1, \\\"build_input_shape\\\": {\\\"class_name\\\": \\\"TensorShape\\\", \\\"items\\\": [null, 1]}}\"\r\n version {\r\n producer: 2\r\n min_consumer: 1\r\n }\r\n}\r\nnodes {\r\n node_id: 3\r\n node_path: \"root.layer_with_weights-0\"\r\n identifier: \"_tf_keras_layer\"\r\n metadata: \"{\\\"name\\\": \\\"dense\\\", \\\"trainable\\\": true, \\\"expects_training_arg\\\": false, \\\"dtype\\\": \\\"float32\\\", \\\"batch_input_shape\\\": null, \\\"stateful\\\": false, \\\"must_restore_from_config\\\": false, \\\"preserve_input_structure_in_config\\\": false, \\\"autocast\\\": true, \\\"class_name\\\": \\\"Dense\\\", \\\"config\\\": {\\\"name\\\": \\\"dense\\\", \\\"trainable\\\": true, \\\"dtype\\\": \\\"float32\\\", \\\"units\\\": 10, \\\"activation\\\": \\\"linear\\\", \\\"use_bias\\\": true, \\\"kernel_initializer\\\": {\\\"class_name\\\": \\\"GlorotUniform\\\", \\\"config\\\": {\\\"seed\\\": null}, \\\"shared_object_id\\\": 2}, \\\"bias_initializer\\\": {\\\"class_name\\\": \\\"Zeros\\\", \\\"config\\\": {}, \\\"shared_object_id\\\": 3}, \\\"kernel_regularizer\\\": null, \\\"bias_regularizer\\\": null, \\\"activity_regularizer\\\": null, \\\"kernel_constraint\\\": null, \\\"bias_constraint\\\": null}, \\\"inbound_nodes\\\": [[[\\\"string_lookup\\\", 0, 0, {}]]], \\\"shared_object_id\\\": 4, \\\"input_spec\\\": {\\\"class_name\\\": \\\"InputSpec\\\", \\\"config\\\": {\\\"dtype\\\": null, \\\"shape\\\": null, \\\"ndim\\\": null, \\\"max_ndim\\\": null, \\\"min_ndim\\\": 2, \\\"axes\\\": {\\\"-1\\\": 1}}, \\\"shared_object_id\\\": 7}, \\\"build_input_shape\\\": {\\\"class_name\\\": \\\"TensorShape\\\", \\\"items\\\": [null, 
1]}}\"\r\n version {\r\n producer: 2\r\n min_consumer: 1\r\n }\r\n\r\nin node_id:2 the vocabulary is getting dropped where as it is present in file content. Issue seems to be with protobuff and parsing. It's 2 Am and I have worked on this for straight 4 hours and am taking a break. If you can wait will resume working on it 🙂.\r\n\r\n\r\n",
"My bad, I think that there is some mistake in how model is saving metadata for the layers.\r\n\r\nOpened the model and looked at metadata:\r\n\r\n\r\n",
"Hi folks, has there been any work on this by any chance or is there a workaround for this? I am currently running into this issue as well. ",
"@sharyar Isn't the workaround proposed in the original post working in your case?\r\n\r\n> We were able to circumvent the issue by creating a new class as follows:\r\n> \r\n> ```\r\n> @tf.keras.utils.register_keras_serializable()\r\n> class MyStringLookup(tf.keras.layers.StringLookup):\r\n> def get_config(self):\r\n> base_config = super().get_config()\r\n> custom = {\"vocabulary\": self.get_vocabulary()}\r\n> return {**base_config, **custom}\r\n> ```",
"> We were able to circumvent the issue by creating a new class as follows:\r\n> \r\n> ```\r\n> @tf.keras.utils.register_keras_serializable()\r\n> class MyStringLookup(tf.keras.layers.StringLookup):\r\n> def get_config(self):\r\n> base_config = super().get_config()\r\n> custom = {\"vocabulary\": self.get_vocabulary()}\r\n> return {**base_config, **custom}\r\n> ```\r\n> \r\n> However, it would be nice if we didn't have to create this wrapper.\r\n\r\nHi @brunolucatto ,\r\nThe decorator tf.keras.utils.register_keras_serializable() is one of the way for serializing a class/method into Keras object which can then be saved as .keras model and the able to reload successfully.\r\n\r\nThere are 3 ways we can do this as mentioned here in [custom_objects](https://www.tensorflow.org/guide/keras/serialization_and_saving#custom_objects) serialization_and_saving guide . You can choose any one.\r\n\r\n\r\n",
"Hi @SuryanarayanaY, thanks for the reply!\r\n\r\nI think the main issue is not the decorator, but the fact that we need to create this class to make the serialization work. The only thing this new class is doing is \"saving again\" the vocabulary attribute of the instance, and all of this would be unnecessary if the StringLookup layer serialization was working properly. (*)\r\n\r\nCreating this class is a somewhat ugly workaround for the problem, as it does not address the root cause directly. This is the reason I did not submit it as a potential fix, although it has been working nicely so far.\r\n\r\n(*) _In the original post, I referred to this new class as a wrapper, which in hindsight makes it look like it was only the decorator that was making me unhappy. Sorry for the ambiguity there._",
"I ran into this issue recently when passing a keras model object to sklearn [permutation_importance()](https://scikit-learn.org/stable/modules/generated/sklearn.inspection.permutation_importance.html) and was hitting this error when using `n_jobs` in that function. \r\n\r\nIn my case i had a model object that has `StringLookup` layers i need to replace with the `MyStringLookup` layers. Using [`clone_model`](https://www.tensorflow.org/api_docs/python/tf/keras/models/clone_model) helped like this: \r\n\r\n```\r\nimport tensorflow as tf\r\nimport pickle\r\n\r\ntf.keras.utils.get_custom_objects().clear()\r\n\r\n\r\[email protected]_keras_serializable()\r\nclass MyStringLookup(tf.keras.layers.StringLookup):\r\n def get_config(self):\r\n base_config = super().get_config()\r\n vocabulary = self.get_vocabulary()\r\n custom = {\"vocabulary\": vocabulary}\r\n return {**base_config, **custom}\r\n\r\n\r\ndef my_clone_function(layer):\r\n if isinstance(layer, tf.keras.layers.StringLookup):\r\n clone_layer = MyStringLookup(vocabulary=layer.get_vocabulary())\r\n return clone_layer\r\n return layer\r\n\r\n\r\n# create model\r\nmodel_input = tf.keras.Input(shape=(1,), dtype=tf.int64)\r\nlookup = tf.keras.layers.StringLookup(vocabulary=['a', 'b'])(model_input)\r\noutput = tf.keras.layers.Dense(10)(lookup)\r\nmodel = tf.keras.Model(model_input, output)\r\n\r\n# model StringLookup has vocabulary set\r\nprint('\\ncorrect:\\n')\r\nprint(model.layers[1].get_config()['vocabulary'])\r\n# ListWrapper(['a', 'b'])\r\n\r\n# save model\r\nmodel.save(\"/tmp/temp_model\")\r\n\r\n# load model\r\nmodel_orginal_reloaded = tf.keras.models.load_model(\"/tmp/temp_model\")\r\n\r\n# model StringLookup now has no vocabulary set\r\nprint('\\nbroken:\\n')\r\nprint(model_orginal_reloaded.layers[1].get_config()['vocabulary'])\r\n# ListWrapper([])\r\n\r\n# fix the model\r\nmodel_fixed = tf.keras.models.clone_model(\r\n model,\r\n clone_function=my_clone_function\r\n)\r\n\r\n# we now see a correct model vocab in the MyStringLookup layer\r\nprint('\\nfixed:\\n')\r\nprint(model_fixed.layers[1].get_config()['vocabulary'])\r\n```\r\n\r\n[Here is a google colab](https://colab.research.google.com/drive/1qmdgR7a06oHQgFs1_jiOWggQJJJyYrqV?usp=sharing) to show what i mean.\r\n\r\nJust sharing this in case is useful for anyone else in a similar situation."
] | 2023-07-24T17:57:32 | 2024-06-06T12:00:36 | null | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.11, 2.12, 2.13
### Custom code
No
### OS platform and distribution
Linux Ubuntu 20.04.1
### Mobile device
_No response_
### Python version
3.9.5
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
We noticed we could pickle the model right after building it, but unpickling would fail after saving and loading it from the disk. Upon further investigation, we realized the error was due to the vocabulary of the StringLookup layer, which was becoming an empty list after the tf.keras.models.load_model operation.
The unpickling issue happens from TF 2.11 onwards. The unpickling worked on TF 2.8, 2.9, and 2.10, even though the empty vocabulary issue was still there.
#--------------------
Using the minimal reproducible example below, before saving the model, if we inspect the StringLookup layer we get:
```
full_model.layers[1].get_config()
Out[9]: {'name': 'string_lookup',
'trainable': True,
'dtype': 'int64',
'invert': False,
'max_tokens': None,
'num_oov_indices': 1,
'oov_token': '[UNK]',
'mask_token': None,
'output_mode': 'int',
'sparse': False,
'pad_to_max_tokens': False,
'vocabulary': ListWrapper(['a', 'b']),
'idf_weights': None,
'encoding': 'utf-8'}
```
After saving and loading to the disk, we get:
```
full_model_loaded.layers[1].get_config()
Out[10]: {'name': 'string_lookup',
'trainable': True,
'dtype': 'int64',
'invert': False,
'max_tokens': None,
'num_oov_indices': 1,
'oov_token': '[UNK]',
'mask_token': None,
'output_mode': 'int',
'sparse': False,
'pad_to_max_tokens': False,
'vocabulary': ListWrapper([]),
'idf_weights': None,
'encoding': 'utf-8'}
```
#-----------------------
We were able to circumvent the issue by creating a new class as follows:
```
@tf.keras.utils.register_keras_serializable()
class MyStringLookup(tf.keras.layers.StringLookup):
    def get_config(self):
        base_config = super().get_config()
        custom = {"vocabulary": self.get_vocabulary()}
        return {**base_config, **custom}
```
However, it would be nice if we didn't have to create this wrapper.
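For reference, the same idea can also be used without the global registration decorator by passing the class explicitly at load time. This is only a minimal sketch of the `custom_objects` route (one of the alternatives the comments above point to); it assumes the model was built with `MyStringLookup` in place of the stock layer and saved to `/tmp/temp_model`:
```python
import tensorflow as tf

class MyStringLookup(tf.keras.layers.StringLookup):
    def get_config(self):
        base_config = super().get_config()
        # Re-add the vocabulary that is otherwise dropped from the saved config.
        return {**base_config, "vocabulary": self.get_vocabulary()}

# Pass the class explicitly instead of registering it globally with
# @tf.keras.utils.register_keras_serializable().
full_model_loaded = tf.keras.models.load_model(
    "/tmp/temp_model",
    custom_objects={"MyStringLookup": MyStringLookup},
)
```
Wrapping the `load_model` call in `tf.keras.utils.custom_object_scope({"MyStringLookup": MyStringLookup})` would work the same way.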
### Standalone code to reproduce the issue
```python
import tensorflow as tf
import pickle
model_input = tf.keras.Input(shape=(1,), dtype=tf.int64)
lookup = tf.keras.layers.StringLookup(vocabulary=['a', 'b'])(model_input)
output = tf.keras.layers.Dense(10)(lookup)
full_model = tf.keras.Model(model_input, output)
# this part should work
model_bytes = pickle.dumps(full_model)
model_recovered = pickle.loads(model_bytes)
# this part should throw an error
full_model.save("/tmp/temp_model")
full_model_loaded = tf.keras.models.load_model("/tmp/temp_model")
model_bytes_2 = pickle.dumps(full_model_loaded)
model_recovered_2 = pickle.loads(model_bytes_2)
```
### Relevant log output
```shell
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File <command-901450068412846>:1
----> 1 model_recovered_2 = pickle.loads(model_bytes_2)
File /databricks/python/lib/python3.9/site-packages/keras/saving/pickle_utils.py:48, in deserialize_model_from_bytecode(serialized_model)
46 model = saving_lib.load_model(filepath)
47 except Exception as e:
---> 48 raise e
49 else:
50 return model
File /databricks/python/lib/python3.9/site-packages/keras/saving/pickle_utils.py:46, in deserialize_model_from_bytecode(serialized_model)
40 f.write(serialized_model)
41 # When loading, direct import will work for most custom objects
42 # though it will require get_config() to be implemented.
43 # Some custom objects (e.g. an activation in a Dense layer,
44 # serialized as a string by Dense.get_config()) will require
45 # a custom_object_scope.
---> 46 model = saving_lib.load_model(filepath)
47 except Exception as e:
48 raise e
File /databricks/python/lib/python3.9/site-packages/keras/saving/experimental/saving_lib.py:196, in load_model(filepath, custom_objects)
194 h5_file.close()
195 except Exception as e:
--> 196 raise e
197 else:
198 return model
File /databricks/python/lib/python3.9/site-packages/keras/saving/experimental/saving_lib.py:183, in load_model(filepath, custom_objects)
181 config_dict = json.loads(config_json)
182 # Construct the model from the configuration file in the archive.
--> 183 model = deserialize_keras_object(config_dict, custom_objects)
184 h5_file = h5py.File(tf.io.gfile.join(temp_path, _VARS_FNAME), "r")
185 _print_h5_file(h5_file, action="loading")
File /databricks/python/lib/python3.9/site-packages/keras/saving/experimental/serialization_lib.py:318, in deserialize_keras_object(config, custom_objects)
315 # Instantiate the class from its config inside a custom object scope
316 # so that we can catch any custom objects that the config refers to.
317 with object_registration.custom_object_scope(custom_objects):
--> 318 return cls.from_config(inner_config)
File /databricks/python/lib/python3.9/site-packages/keras/engine/training.py:3114, in Model.from_config(cls, config, custom_objects)
3107 functional_model_keys = [
3108 "name",
3109 "layers",
3110 "input_layers",
3111 "output_layers",
3112 ]
3113 if all(key in config for key in functional_model_keys):
-> 3114 inputs, outputs, layers = functional.reconstruct_from_config(
3115 config, custom_objects
3116 )
3117 model = cls(
3118 inputs=inputs, outputs=outputs, name=config.get("name")
3119 )
3120 functional.connect_ancillary_layers(model, layers)
File /databricks/python/lib/python3.9/site-packages/keras/engine/functional.py:1470, in reconstruct_from_config(config, custom_objects, created_layers)
1468 # First, we create all layers and enqueue nodes to be processed
1469 for layer_data in config["layers"]:
-> 1470 process_layer(layer_data)
1471 # Then we process nodes in order of layer depth.
1472 # Nodes that cannot yet be processed (if the inbound node
1473 # does not yet exist) are re-enqueued, and the process
1474 # is repeated until all nodes are processed.
1475 while unprocessed_nodes:
File /databricks/python/lib/python3.9/site-packages/keras/engine/functional.py:1451, in reconstruct_from_config.<locals>.process_layer(layer_data)
1447 else:
1448 # Instantiate layer.
1449 from keras.layers import deserialize as deserialize_layer
-> 1451 layer = deserialize_layer(layer_data, custom_objects=custom_objects)
1452 created_layers[layer_name] = layer
1454 node_count_by_layer[layer] = int(_should_skip_first_node(layer))
File /databricks/python/lib/python3.9/site-packages/keras/layers/serialization.py:252, in deserialize(config, custom_objects)
215 """Instantiates a layer from a config dictionary.
216
217 Args:
(...)
249
250 """
251 populate_deserializable_objects()
--> 252 return serialization.deserialize_keras_object(
253 config,
254 module_objects=LOCAL.ALL_OBJECTS,
255 custom_objects=custom_objects,
256 printable_module_name="layer",
257 )
File /databricks/python/lib/python3.9/site-packages/keras/saving/legacy/serialization.py:527, in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
525 else:
526 with object_registration.CustomObjectScope(custom_objects):
--> 527 deserialized_obj = cls.from_config(cls_config)
528 else:
529 # Then `cls` may be a function returning a class.
530 # in this case by convention `config` holds
531 # the kwargs of the function.
532 custom_objects = custom_objects or {}
File /databricks/python/lib/python3.9/site-packages/keras/engine/base_layer.py:860, in Layer.from_config(cls, config)
844 @classmethod
845 def from_config(cls, config):
846 """Creates a layer from its config.
847
848 This method is the reverse of `get_config`,
(...)
858 A layer instance.
859 """
--> 860 return cls(**config)
File /databricks/python/lib/python3.9/site-packages/keras/layers/preprocessing/string_lookup.py:333, in StringLookup.__init__(self, max_tokens, num_oov_indices, mask_token, oov_token, vocabulary, idf_weights, encoding, invert, output_mode, sparse, pad_to_max_tokens, **kwargs)
329 del kwargs["dtype"]
331 self.encoding = encoding
--> 333 super().__init__(
334 max_tokens=max_tokens,
335 num_oov_indices=num_oov_indices,
336 mask_token=mask_token,
337 oov_token=oov_token,
338 vocabulary=vocabulary,
339 vocabulary_dtype=tf.string,
340 idf_weights=idf_weights,
341 invert=invert,
342 output_mode=output_mode,
343 sparse=sparse,
344 pad_to_max_tokens=pad_to_max_tokens,
345 **kwargs
346 )
347 base_preprocessing_layer.keras_kpl_gauge.get_cell("StringLookup").set(
348 True
349 )
File /databricks/python/lib/python3.9/site-packages/keras/layers/preprocessing/index_lookup.py:323, in IndexLookup.__init__(self, max_tokens, num_oov_indices, mask_token, oov_token, vocabulary_dtype, vocabulary, idf_weights, invert, output_mode, sparse, pad_to_max_tokens, **kwargs)
320 self.idf_weights_const = self.idf_weights.value()
322 if vocabulary is not None:
--> 323 self.set_vocabulary(vocabulary, idf_weights)
324 else:
325 # When restoring from a keras SavedModel, the loading code will
326 # expect to find and restore a lookup_table attribute on the layer.
327 # This table needs to be uninitialized as a StaticHashTable cannot
328 # be initialized twice.
329 self.lookup_table = self._uninitialized_lookup_table()
File /databricks/python/lib/python3.9/site-packages/keras/layers/preprocessing/index_lookup.py:510, in IndexLookup.set_vocabulary(self, vocabulary, idf_weights)
507 idf_weights = np.array(idf_weights)
509 if vocabulary.size == 0:
--> 510 raise ValueError(
511 f"Cannot set an empty vocabulary, you passed {vocabulary}."
512 )
514 oov_start = self._oov_start_index()
515 token_start = self._token_start_index()
ValueError: Cannot set an empty vocabulary, you passed [].
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61369/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61369/timeline | null | null | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61368 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61368/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61368/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61368/events | https://github.com/tensorflow/tensorflow/pull/61368 | 1,818,835,146 | PR_kwDOArmXAs5WQPmU | 61,368 | [tosa] Support legalization of tfl.gather with multiple dynamic dimensions | {
"login": "sabauma",
"id": 2251823,
"node_id": "MDQ6VXNlcjIyNTE4MjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2251823?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sabauma",
"html_url": "https://github.com/sabauma",
"followers_url": "https://api.github.com/users/sabauma/followers",
"following_url": "https://api.github.com/users/sabauma/following{/other_user}",
"gists_url": "https://api.github.com/users/sabauma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sabauma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sabauma/subscriptions",
"organizations_url": "https://api.github.com/users/sabauma/orgs",
"repos_url": "https://api.github.com/users/sabauma/repos",
"events_url": "https://api.github.com/users/sabauma/events{/privacy}",
"received_events_url": "https://api.github.com/users/sabauma/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1169365682,
"node_id": "MDU6TGFiZWwxMTY5MzY1Njgy",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:L",
"name": "size:L",
"color": "adafea",
"default": false,
"description": "CL Change Size: Large"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).\n\nView this [failed invocation](https://github.com/tensorflow/tensorflow/pull/61368/checks?check_run_id=15299055578) of the CLA check for more information.\n\nFor the most up to date status, view the checks section at the bottom of the pull request.",
"Hi @sabauma Can you please resolve conflicts? Thank you!",
"I think I've resolved all the existing issues. Just waiting for @rsuderman to give another pass.",
"The changes that were originally in this PR now appear in the upstream repo on the master branch. I'm not sure what is going on, as this PR was never accepted. This PR now appears to be mostly empty due to all the changes being moved upstream.\r\n\r\n@gbaned would you have some insight into what is going on?",
"Hi @rsuderman Can you please assist on above comments from @sabauma. Thank you!",
"Hi @rsuderman Any update on this PR? Please. Thank you!",
"Hi @rsuderman Any update on this PR? Please. Thank you!",
"Hi @rsuderman Any update on this PR? Please. Thank you!",
"Hi @rsuderman Any update on this PR? Please. Thank you!",
"Hi @rdzhabarov Can you please assist on above [comments](https://github.com/tensorflow/tensorflow/pull/61368#issuecomment-1669593469) from @sabauma. Thank you!\r\n\r\n"
] | 2023-07-24T17:26:52 | 2024-03-18T17:38:00 | 2024-03-18T17:37:57 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61368",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61368",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61368.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61368.patch",
"merged_at": null
} | The existing lowering of tfl.gather targets a sequence of tosa.reshape and tosa.gather operations. Currently, tosa.reshape supports, at most, a single dynamic dimension. This change adds support for multiple dynamic dimensions by targeting the tensor.reshape operator in the presence of multiple dynamic dimensions. | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61368/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61366 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61366/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61366/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61366/events | https://github.com/tensorflow/tensorflow/issues/61366 | 1,818,786,639 | I_kwDOArmXAs5saHtP | 61,366 | Unable to save model when using EfficientNetB0 | {
"login": "Seferovic8",
"id": 66976321,
"node_id": "MDQ6VXNlcjY2OTc2MzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/66976321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Seferovic8",
"html_url": "https://github.com/Seferovic8",
"followers_url": "https://api.github.com/users/Seferovic8/followers",
"following_url": "https://api.github.com/users/Seferovic8/following{/other_user}",
"gists_url": "https://api.github.com/users/Seferovic8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Seferovic8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Seferovic8/subscriptions",
"organizations_url": "https://api.github.com/users/Seferovic8/orgs",
"repos_url": "https://api.github.com/users/Seferovic8/repos",
"events_url": "https://api.github.com/users/Seferovic8/events{/privacy}",
"received_events_url": "https://api.github.com/users/Seferovic8/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 1105108936,
"node_id": "MDU6TGFiZWwxMTA1MTA4OTM2",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:model",
"name": "comp:model",
"color": "0052cc",
"default": false,
"description": "Model related issues"
},
{
"id": 5206407904,
"node_id": "LA_kwDOArmXAs8AAAABNlN64A",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.12",
"name": "TF 2.12",
"color": "c5def5",
"default": false,
"description": "For issues related to Tensorflow 2.12"
}
] | closed | false | {
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @Seferovic8 ,\r\n\r\nThe issue resolved in TF2.13v. Please refer to attached [gist](https://colab.sandbox.google.com/gist/SuryanarayanaY/dbdb3fc33d50e28dc9dc5093ddc7ae77/61366.ipynb) and may test yourself. Since it is resolved in latest version its unlikely to cherry pick for TF2.12 version.",
"Thank you, this works for me.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61366\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61366\">No</a>\n",
"Yes\r\n\r\nOn Tue, 25 Jul 2023 at 16:41, google-ml-butler[bot] <\r\n***@***.***> wrote:\r\n\r\n> Are you satisfied with the resolution of your issue?\r\n> Yes\r\n> <https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61366>\r\n> No\r\n> <https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61366>\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/tensorflow/tensorflow/issues/61366#issuecomment-1649975456>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AP67UQMQM5KVXEQMSSPJVO3XR7LH3ANCNFSM6AAAAAA2V3OBRU>\r\n> .\r\n> You are receiving this because you modified the open/close state.Message\r\n> ID: ***@***.***>\r\n>\r\n"
] | 2023-07-24T16:51:23 | 2023-07-25T14:42:36 | 2023-07-25T14:40:44 | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.12.0
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I’m trying to use EfficientNetB0 to create a model and save the model to my local disk. However, when saving it, it throws the error below.
> TypeError: Unable to serialize [2.0896919 2.1128857 2.1081853] to JSON. Unrecognized type <class 'tensorflow.python.framework.ops.EagerTensor'>.
Also, I tried downgrading tensorflow from V2.12.0 to V2.9.1, and it works as expected there. In other words, this is a bug in 2.12.0. I hope this helps, and please fix this bug for V2.12.0.
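Until an upgrade is possible, a weights-only round trip can sidestep the full-model serialization path. This is only a sketch of that workaround (the file name is arbitrary, and it assumes rebuilding the architecture from code on load is acceptable); it does not fix the underlying bug:
```python
import tensorflow as tf

model = tf.keras.applications.EfficientNetB0()

# Persist only the weights; the architecture is rebuilt from code when loading.
model.save_weights("efficientnetb0_weights.h5")

restored = tf.keras.applications.EfficientNetB0()
restored.load_weights("efficientnetb0_weights.h5")
```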
### Standalone code to reproduce the issue
```python
model = tf.keras.applications.EfficientNetB0()
model.save("model")
```
### Relevant log output
```shell
TypeError: Unable to serialize [2.0896919 2.1128857 2.1081853] to JSON. Unrecognized type <class 'tensorflow.python.framework.ops.EagerTensor'>.
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61366/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61365 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61365/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61365/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61365/events | https://github.com/tensorflow/tensorflow/pull/61365 | 1,818,723,987 | PR_kwDOArmXAs5WP3cc | 61,365 | [oneDNN]: Update oneDNN from v2.7.3 to v3.2 on Linux and Windows x86 builds | {
"login": "bhavani-subramanian",
"id": 28113241,
"node_id": "MDQ6VXNlcjI4MTEzMjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/28113241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavani-subramanian",
"html_url": "https://github.com/bhavani-subramanian",
"followers_url": "https://api.github.com/users/bhavani-subramanian/followers",
"following_url": "https://api.github.com/users/bhavani-subramanian/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavani-subramanian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavani-subramanian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavani-subramanian/subscriptions",
"organizations_url": "https://api.github.com/users/bhavani-subramanian/orgs",
"repos_url": "https://api.github.com/users/bhavani-subramanian/repos",
"events_url": "https://api.github.com/users/bhavani-subramanian/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavani-subramanian/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 390482148,
"node_id": "MDU6TGFiZWwzOTA0ODIxNDg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/awaiting%20review",
"name": "awaiting review",
"color": "bc3869",
"default": false,
"description": "Pull request awaiting review"
},
{
"id": 703812914,
"node_id": "MDU6TGFiZWw3MDM4MTI5MTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/kokoro:force-run",
"name": "kokoro:force-run",
"color": "1d76db",
"default": false,
"description": "Tests on submitted change"
},
{
"id": 987666414,
"node_id": "MDU6TGFiZWw5ODc2NjY0MTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/ready%20to%20pull",
"name": "ready to pull",
"color": "2cd643",
"default": false,
"description": "PR ready for merge process"
},
{
"id": 1104829434,
"node_id": "MDU6TGFiZWwxMTA0ODI5NDM0",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:mkl",
"name": "comp:mkl",
"color": "0052cc",
"default": false,
"description": "MKL related issues"
},
{
"id": 1169365682,
"node_id": "MDU6TGFiZWwxMTY5MzY1Njgy",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:L",
"name": "size:L",
"color": "adafea",
"default": false,
"description": "CL Change Size: Large"
}
] | closed | false | {
"login": "penpornk",
"id": 38085909,
"node_id": "MDQ6VXNlcjM4MDg1OTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/38085909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/penpornk",
"html_url": "https://github.com/penpornk",
"followers_url": "https://api.github.com/users/penpornk/followers",
"following_url": "https://api.github.com/users/penpornk/following{/other_user}",
"gists_url": "https://api.github.com/users/penpornk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/penpornk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/penpornk/subscriptions",
"organizations_url": "https://api.github.com/users/penpornk/orgs",
"repos_url": "https://api.github.com/users/penpornk/repos",
"events_url": "https://api.github.com/users/penpornk/events{/privacy}",
"received_events_url": "https://api.github.com/users/penpornk/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "penpornk",
"id": 38085909,
"node_id": "MDQ6VXNlcjM4MDg1OTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/38085909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/penpornk",
"html_url": "https://github.com/penpornk",
"followers_url": "https://api.github.com/users/penpornk/followers",
"following_url": "https://api.github.com/users/penpornk/following{/other_user}",
"gists_url": "https://api.github.com/users/penpornk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/penpornk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/penpornk/subscriptions",
"organizations_url": "https://api.github.com/users/penpornk/orgs",
"repos_url": "https://api.github.com/users/penpornk/repos",
"events_url": "https://api.github.com/users/penpornk/events{/privacy}",
"received_events_url": "https://api.github.com/users/penpornk/received_events",
"type": "User",
"site_admin": false
},
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@nSircombe @milpuz01 @cfRod this should not impact the ARM build as it will continue to be built with ver2 macro set. Please test and let us know.",
"@penpornk Thank you for reviewing this PR. I have addressed your review comments. Please take a look.",
"@penpornk Thanks for approving the PR. Please note that we also need to add onednn repo in TSL's workspace [file](https://github.com/google/tsl/blob/main/workspace2.bzl#L157). Would you be able to add it? Thanks.",
"@bhavani-subramanian Will do. Thank you for the heads up!\r\n\r\nQuick notes on CIs for future reference:\r\n\r\n[ROCm](http://ml-ci.amd.com:21096/blue/organizations/jenkins/tensorflow%2Fgithub-prs-upstream-master%2FAMD-ROCm-Community-CI-Build/detail/PR-61365/7/pipeline) errors are unrelated. I've seen them on other PRs:\r\n```\r\nERROR: /workspace/tensorflow/compiler/xla/service/gpu/BUILD:853:11: in deps attribute of cc_library rule //tensorflow/compiler/xla/service/gpu:gpu_executable: Label '//tensorflow/tsl/platform:random' is duplicated\r\n\r\nERROR: /workspace/tensorflow/compiler/xla/service/gpu/BUILD:853:11: Analysis of target '//tensorflow/compiler/xla/service/gpu:gpu_executable' failed\r\n```\r\n[MacOS](https://source.cloud.google.com/results/invocations/5a867fb3-7b77-4530-9041-1ed3a8403069/log) failure is unrelated. (Test timed out)\r\n```\r\n[38,742 / 38,767] 771 / 1022 tests, 1 failed; Testing //tensorflow/python/data/kernel_tests:interleave_test (shard 1 of 24); 18s local, remote-cache ... (16 actions running)\r\n[38,742 / 38,767] 771 / 1022 tests, 1 failed; Testing //tensorflow/python/data/kernel_tests:interleave_test (shard 1 of 24); 27s local, remote-cache ... (16 actions running)\r\n\r\n\r\nERROR: Aborting VM command due to timeout of 43200 seconds\r\n```\r\n\r\n[Py+CPP Ubuntu CPU](https://source.cloud.google.com/results/invocations/3df54b96-e627-4995-9afa-0ef383d7f6ab/log) failure is unrelated. All 6 failed unit tests timed out.\r\n\r\n[Py+CPP Ubuntu GPU](https://source.cloud.google.com/results/invocations/c3b02db3-3730-4e1a-bd80-10e662c6eb89/log) failure is unrelated. The failed test timed out.\r\n```\r\n//tensorflow/python/kernel_tests/linalg:linear_operator_identity_test_gpu TIMEOUT in 2 out of 5 in 466.9s\r\n```\r\n\r\n"
] | 2023-07-24T16:07:56 | 2023-07-26T14:04:53 | 2023-07-26T13:29:08 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61365",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61365",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61365.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61365.patch",
"merged_at": "2023-07-26T13:29:08"
} | - This PR turns ON oneDNN v3.x by default on both Linux and Windows x86 builds.
- No change is needed from end-users who build TF from source since oneDNN v3.x will be automatically fetched.
- ARM builds will not be affected since they will continue to use oneDNN v2.7.
- This PR also removes conflg settings which was recently added to simultaneously support both oneDNN v2.x and v3.x.
- Support for oneDNN v2.x in TF will be dropped in a future PR.
NOTE: We also need to add `onednn` repo in TSL's workspace [file](https://github.com/google/tsl/blob/main/workspace2.bzl#L157). | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61365/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61364 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61364/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61364/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61364/events | https://github.com/tensorflow/tensorflow/issues/61364 | 1,818,565,898 | I_kwDOArmXAs5sZR0K | 61,364 | tf.io.gfile.rename not working for directories in S3 | {
"login": "j99ca",
"id": 2538015,
"node_id": "MDQ6VXNlcjI1MzgwMTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2538015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/j99ca",
"html_url": "https://github.com/j99ca",
"followers_url": "https://api.github.com/users/j99ca/followers",
"following_url": "https://api.github.com/users/j99ca/following{/other_user}",
"gists_url": "https://api.github.com/users/j99ca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/j99ca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/j99ca/subscriptions",
"organizations_url": "https://api.github.com/users/j99ca/orgs",
"repos_url": "https://api.github.com/users/j99ca/repos",
"events_url": "https://api.github.com/users/j99ca/events{/privacy}",
"received_events_url": "https://api.github.com/users/j99ca/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 1097547147,
"node_id": "MDU6TGFiZWwxMDk3NTQ3MTQ3",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:ops",
"name": "comp:ops",
"color": "0052cc",
"default": false,
"description": "OPs related issues"
},
{
"id": 5206407904,
"node_id": "LA_kwDOArmXAs8AAAABNlN64A",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.12",
"name": "TF 2.12",
"color": "c5def5",
"default": false,
"description": "For issues related to Tensorflow 2.12"
}
] | open | false | {
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@j99ca Could you please make sure that the source and destination paths are appropriate? As per the doc [here](https://www.tensorflow.org/api_docs/python/tf/io/gfile/rename) , it should raise [errors.OpError](https://www.tensorflow.org/api_docs/python/tf/errors/OpError) if it fails?\r\nThank you!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"@sushreebarsa I have indeed confirmed the behaviour. It works fine when I revert the tensorflow version number. Can you replicate it in S3?",
"Hi, \r\n\r\nCould you please close your old issue here https://github.com/tensorflow/tensorflow/issues/60165 to track the status at single place.\r\n",
"@sachinprasadhs I have closed the other story",
"@sachinprasadhs is there any resolution to this issue? It is still preventing us from migrating to a newer version of tensorflow above 2.5",
"Has anyone found a resolution or workaround for this issue?"
] | 2023-07-24T14:39:54 | 2023-12-13T17:06:57 | null | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.12
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
The following code used to work in TensorFlow < 2.6. From TensorFlow 2.6 onwards, we had to import tensorflow_io. However, the tf.io.gfile.rename function, which used to work on directories in S3, no longer works. I would like to update to a newer version of TensorFlow, but this issue is preventing our organization from doing so, as some libraries we depend on use tf.io.gfile.rename to change folder names during training.
According to the TensorFlow documentation, tf.io.gfile.rename should work on directories.
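In the meantime, one possible workaround is to emulate the directory rename as a recursive copy followed by a delete, using only `tf.io.gfile` primitives. This is just a sketch — it assumes the destination prefix does not already exist, that an extra copy pass over the objects is acceptable, and that `walk`/`copy` behave on the S3 plugin the same way they do on local paths:
```python
import tensorflow as tf
import tensorflow_io as tfio  # noqa: F401  (registers the s3:// filesystem)


def rename_dir(src, dst):
    """Emulate tf.io.gfile.rename for a directory: copy everything, then delete."""
    src = src.rstrip("/")
    dst = dst.rstrip("/")
    for dirpath, _, filenames in tf.io.gfile.walk(src):
        rel = dirpath[len(src):].lstrip("/")
        target_dir = tf.io.gfile.join(dst, rel) if rel else dst
        tf.io.gfile.makedirs(target_dir)
        for filename in filenames:
            tf.io.gfile.copy(
                tf.io.gfile.join(dirpath, filename),
                tf.io.gfile.join(target_dir, filename),
                overwrite=True,
            )
    tf.io.gfile.rmtree(src)


SOURCE_DIR = 's3://.../old_name/'
DEST_DIR = 's3://.../new_name/'
rename_dir(SOURCE_DIR, DEST_DIR)
```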
### Standalone code to reproduce the issue
```python
import tensorflow as tf
import tensorflow_io as tfio
SOURCE_DIR = 's3://.../old_name/'
DEST_DIR = 's3://.../new_name/'
tf.io.gfile.rename(SOURCE_DIR, DEST_DIR)
```
### Relevant log output
```shell
Traceback (most recent call last):
File "/home/eigen/.config/JetBrains/PyCharm2023.1/scratches/tf2_s3_rename_test.py", line 9, in <module>
tf.io.gfile.rename(SOURCE_DIR, DEST_DIR)
File "/home/eigen/venvs/eigen-ml-tf2-12/lib/python3.10/site-packages/tensorflow/python/lib/io/file_io.py", line 622, in rename_v2
_pywrap_file_io.RenameFile(
tensorflow.python.framework.errors_impl.FailedPreconditionError: Source is a directory or empty file
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61364/timeline | null | null | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61363 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61363/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61363/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61363/events | https://github.com/tensorflow/tensorflow/issues/61363 | 1,817,943,295 | I_kwDOArmXAs5sW5z_ | 61,363 | Performance drop with tensorflow 2.13 | {
"login": "nhuet",
"id": 23269019,
"node_id": "MDQ6VXNlcjIzMjY5MDE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23269019?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nhuet",
"html_url": "https://github.com/nhuet",
"followers_url": "https://api.github.com/users/nhuet/followers",
"following_url": "https://api.github.com/users/nhuet/following{/other_user}",
"gists_url": "https://api.github.com/users/nhuet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nhuet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nhuet/subscriptions",
"organizations_url": "https://api.github.com/users/nhuet/orgs",
"repos_url": "https://api.github.com/users/nhuet/repos",
"events_url": "https://api.github.com/users/nhuet/events{/privacy}",
"received_events_url": "https://api.github.com/users/nhuet/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1097546578,
"node_id": "MDU6TGFiZWwxMDk3NTQ2NTc4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:keras",
"name": "comp:keras",
"color": "0052cc",
"default": false,
"description": "Keras related issues"
},
{
"id": 1463677878,
"node_id": "MDU6TGFiZWwxNDYzNjc3ODc4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:performance",
"name": "type:performance",
"color": "159b2e",
"default": false,
"description": "Performance Issue"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | closed | false | {
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@nhuet I tried to replicate the issue on colab using TF [v2.12](https://colab.research.google.com/gist/sushreebarsa/e498716b10932a7d557ab3a3a9fccb76/61363-2-12.ipynb) and [2.13](https://colab.research.google.com/gist/sushreebarsa/3ec5e639c34bb5c3d965c531fe1090b3/untitled813.ipynb#scrollTo=UpjWS8k2G3ms), could you please have a look at the attached gists and confirm the same?\r\nThank you!",
"It seems to show the issue (even though only 3* slower, this is still abnormal)",
"@nhuet thank you for the quick response\r\n@sachinprasadhs Could you please have a look at this. Thank you!",
"Hi @nhuet, thank you for bringing up this issue! Would you be able to try to replicate this with the `keras_core` (https://keras.io/keras_core/) package as this will supersede current Keras. Thank you!",
"Hi @grasskin. You are right I should have definitely tried this. And this works very well indeed (even better than keras2.12 + tf 2.12). Here are the new timings:\r\n- keras 2.12.0 + tensorflow 2.12.1: 7.8 s\r\n- keras 2.13.1 + tensorflow 2.13.0: 24.1 s\r\n- keras-core 0.1.3 + tensorflow 2.12.1: 0.4 s\r\n- keras-core 0.1.3 + tensorflow 2.13.0: 0.3 s\r\n\r\n(and even increasing the number of iteration, there is not noticeable differences between keras-core + tf2.12 or keras-core + tf2.13)\r\n\r\nThus, the \"bug\" came from keras 2.13, and the performance is way better with keras-core than with previous keras 2.12.\r\nThanks a lot!",
"Hi @nhuet , Thanks for confirming your findings with the keras-core. \r\nI'm glad that keras-core + tensorflow gave the good performance result.\r\n\r\nFor latest package details, please visit https://pypi.org/project/keras-core/\r\n\r\nFeel free to close the issue. Thanks!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61363\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61363\">No</a>\n"
] | 2023-07-24T08:47:46 | 2023-08-17T01:45:50 | 2023-08-17T01:45:47 | NONE | null | null | null | ### Issue type
Performance
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
v2.13.0-rc2-7-g1cb1a030a62 2.13.0
### Custom code
Yes
### OS platform and distribution
Debian GNU/Linux 12 (bookworm)
### Mobile device
_No response_
### Python version
3.9
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I noticed a big performance drop between tensorflow 2.12 (CPU) and tensorflow 2.13 (CPU). With the latest release (and also with tf-nightly '2.14.0-dev20230724') it takes *4 times* longer to perform a simple sum. The sum is between Keras inputs, though, so I am not sure whether this is directly related to tensorflow or whether it comes from keras.
See code below for a very simple example. Note that this is with tensorflow CPU only.
The timings are:
- for tensorflow 2.12.1 + keras 2.12.0: **5.3 s**
- for tensorflow 2.13.0 + keras 2.13.1: **24 s**
- for tensorflow 2.14.0-dev20230724 + keras 2.14.0.dev2023072407: 26 s
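For comparison with the keras-core timings reported in the comments above, the same micro-benchmark can be run against the `keras_core` package. This is only a sketch, assuming `keras_core.Input` mirrors `tf.keras.Input` (as it does in keras-core 0.1.x):
```python
import time

import keras_core  # pip install keras-core

number_of_executions = 3000
x = keras_core.Input((1,))
y = keras_core.Input((1,))

start = time.time()
for i in range(number_of_executions):
    x + y
duration = time.time() - start
print(f"Duration: {duration:.1f}")
```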
### Standalone code to reproduce the issue
```python
import time
from tensorflow.keras import Input
number_of_executions = 3000
x = Input((1,))
y = Input((1,))
start = time.time()
for i in range(number_of_executions):
x + y
duration = time.time() - start
print(f"Duration: {duration:.1f}")
```
### Relevant log output
_No response_ | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61363/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61362 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61362/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61362/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61362/events | https://github.com/tensorflow/tensorflow/issues/61362 | 1,817,860,925 | I_kwDOArmXAs5sWls9 | 61,362 | When compiling TensorFlow 2.13 using bazel+clang, an error was reported as fatal error: 'stddef. h' file not found | {
"login": "mars1248",
"id": 62137145,
"node_id": "MDQ6VXNlcjYyMTM3MTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/62137145?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mars1248",
"html_url": "https://github.com/mars1248",
"followers_url": "https://api.github.com/users/mars1248/followers",
"following_url": "https://api.github.com/users/mars1248/following{/other_user}",
"gists_url": "https://api.github.com/users/mars1248/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mars1248/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mars1248/subscriptions",
"organizations_url": "https://api.github.com/users/mars1248/orgs",
"repos_url": "https://api.github.com/users/mars1248/repos",
"events_url": "https://api.github.com/users/mars1248/events{/privacy}",
"received_events_url": "https://api.github.com/users/mars1248/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473173351,
"node_id": "MDU6TGFiZWw0NzMxNzMzNTE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:build/install",
"name": "type:build/install",
"color": "159b2e",
"default": false,
"description": "Build and install issues"
},
{
"id": 1222092379,
"node_id": "MDU6TGFiZWwxMjIyMDkyMzc5",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/subtype:bazel",
"name": "subtype:bazel",
"color": "b619ea",
"default": false,
"description": "Bazel related Build_Installation issues"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | open | false | {
"login": "angerson",
"id": 32465472,
"node_id": "MDQ6VXNlcjMyNDY1NDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/32465472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/angerson",
"html_url": "https://github.com/angerson",
"followers_url": "https://api.github.com/users/angerson/followers",
"following_url": "https://api.github.com/users/angerson/following{/other_user}",
"gists_url": "https://api.github.com/users/angerson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/angerson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/angerson/subscriptions",
"organizations_url": "https://api.github.com/users/angerson/orgs",
"repos_url": "https://api.github.com/users/angerson/repos",
"events_url": "https://api.github.com/users/angerson/events{/privacy}",
"received_events_url": "https://api.github.com/users/angerson/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "angerson",
"id": 32465472,
"node_id": "MDQ6VXNlcjMyNDY1NDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/32465472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/angerson",
"html_url": "https://github.com/angerson",
"followers_url": "https://api.github.com/users/angerson/followers",
"following_url": "https://api.github.com/users/angerson/following{/other_user}",
"gists_url": "https://api.github.com/users/angerson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/angerson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/angerson/subscriptions",
"organizations_url": "https://api.github.com/users/angerson/orgs",
"repos_url": "https://api.github.com/users/angerson/repos",
"events_url": "https://api.github.com/users/angerson/events{/privacy}",
"received_events_url": "https://api.github.com/users/angerson/received_events",
"type": "User",
"site_admin": false
},
{
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I have encountered the same issue as you.",
"@mars1248,\r\nCould you please confirm the bazel version you are trying to compile and also whether you are building from source or any other way?\r\nLooks like the t**hird_party/gpus/rocm_configure.bzl** file does not include a path for clang 16.0.0. Could you please try manually include it now and please report back whether this allows it to compile successfully.\r\n\r\n Thank you!",
"I use tensorflow/build:2.13-python3.11 image, with bazel 5.3.0. and build from source code.\r\nI add this **inc_dirs.append(\"/usr/lib/llvm-16/lib/clang/16/include\")** code to **third_party/gpus/rocm_configure.bzl** but get this error.\r\n`ERROR: /root/.cache/bazel/_bazel_root/645133528a7a8476e9bedcf54eb858b4/external/llvm-project/llvm/BUILD.bazel:184:11: Compiling llvm/lib/Support/Atomic.cpp failed: (Exit 1): clang failed: error executing command /usr/bin/clang -MD -MF bazel-out/k8-opt-exec-50AE0418/bin/external/llvm-project/llvm/_objs/Support/Atomic.d ... (remaining 87 arguments skipped)\r\nIn file included from external/llvm-project/llvm/lib/Support/Atomic.cpp:13:\r\nIn file included from external/llvm-project/llvm/include/llvm/Support/Atomic.h:20:\r\nIn file included from external/llvm-project/llvm/include/llvm/Support/DataTypes.h:19:\r\nIn file included from external/llvm-project/llvm/include/llvm-c/DataTypes.h:43:\r\n/usr/include/x86_64-linux-gnu/sys/types.h:144:10: fatal error: 'stddef.h' file not found\r\n#include <stddef.h>\r\n ^~~~~~~~~~\r\n1 error generated.`",
"Use ```/usr/lib/llvm-16/bin/clang``` instead of ```/usr/bin/clang-16```,this may help.\r\n\r\nmy build env(not using Docker):\r\n- Ubuntu 22.04\r\n- Clang 16.0.6 from https://apt.llvm.org/\r\n- Python 3.10\r\n- Bazel 5.3.0\r\n- cuDnn 8.6\r\n- CUDA 11.8\r\nconfigure:\r\n- No ROCm support\r\n- cuda and tensorrt support\r\n"
] | 2023-07-24T07:58:58 | 2023-08-08T07:56:24 | null | NONE | null | null | null | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
2.13
### Custom code
No
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I use the tensorflow/build:2.13-python3.11 image.
When compiling TensorFlow 2.13 with bazel+clang, the build fails with: fatal error: 'stddef.h' file not found.
However, 'stddef.h' is present in the container at **/usr/lib/llvm-16/lib/clang/16/include**; if I manually append **-isystem /usr/lib/llvm-16/lib/clang/16/include** to the clang compile command, the command below runs successfully.
### Standalone code to reproduce the issue
```shell
"cd /root/.cache/bazel/_bazel_root/645133528a7a8476e9bedcf54eb858b4/execroot/org_tensorflow && \
exec env - \
DOCKER_HOST_CACHEBUSTER=1682977560680045781 \
LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/lib \
PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
PWD=/proc/self/cwd \
/usr/bin/clang-16 -MD -MF bazel-out/host/bin/external/zlib/_objs/zlib/uncompr.d '-frandom-seed=bazel-out/host/bin/external/zlib/_objs/zlib/uncompr.o' -iquote external/zlib -iquote bazel-out/host/bin/external/zlib -isystem external/zlib -isystem bazel-out/host/bin/external/zlib -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -fPIE -U_FORTIFY_SOURCE '-D_FORTIFY_SOURCE=1' -fstack-protector -Wall -Wno-invalid-partial-specialization -fno-omit-frame-pointer -no-canonical-prefixes -DNDEBUG -g0 -O2 -ffunction-sections -fdata-sections '--cuda-path=/usr/local/cuda-11.8' -g0 -w -Wno-shift-negative-value -DZ_HAVE_UNISTD_H -c external/zlib/uncompr.c -o bazel-out/host/bin/external/zlib/_objs/zlib/uncompr.o"
this command fails with: fatal error: 'stddef.h' file not found.
```
### Relevant log output
_No response_ | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61362/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61362/timeline | null | null | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61361 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61361/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61361/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61361/events | https://github.com/tensorflow/tensorflow/issues/61361 | 1,817,826,668 | I_kwDOArmXAs5sWdVs | 61,361 | converter issue | {
"login": "Ajim0907",
"id": 122783606,
"node_id": "U_kgDOB1GHdg",
"avatar_url": "https://avatars.githubusercontent.com/u/122783606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ajim0907",
"html_url": "https://github.com/Ajim0907",
"followers_url": "https://api.github.com/users/Ajim0907/followers",
"following_url": "https://api.github.com/users/Ajim0907/following{/other_user}",
"gists_url": "https://api.github.com/users/Ajim0907/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ajim0907/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ajim0907/subscriptions",
"organizations_url": "https://api.github.com/users/Ajim0907/orgs",
"repos_url": "https://api.github.com/users/Ajim0907/repos",
"events_url": "https://api.github.com/users/Ajim0907/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ajim0907/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1661751498,
"node_id": "MDU6TGFiZWwxNjYxNzUxNDk4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TFLiteConverter",
"name": "TFLiteConverter",
"color": "bfdadc",
"default": false,
"description": "For issues related to TFLite converter"
}
] | closed | false | {
"login": "pjpratik",
"id": 118897289,
"node_id": "U_kgDOBxY6iQ",
"avatar_url": "https://avatars.githubusercontent.com/u/118897289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pjpratik",
"html_url": "https://github.com/pjpratik",
"followers_url": "https://api.github.com/users/pjpratik/followers",
"following_url": "https://api.github.com/users/pjpratik/following{/other_user}",
"gists_url": "https://api.github.com/users/pjpratik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pjpratik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjpratik/subscriptions",
"organizations_url": "https://api.github.com/users/pjpratik/orgs",
"repos_url": "https://api.github.com/users/pjpratik/repos",
"events_url": "https://api.github.com/users/pjpratik/events{/privacy}",
"received_events_url": "https://api.github.com/users/pjpratik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "pjpratik",
"id": 118897289,
"node_id": "U_kgDOBxY6iQ",
"avatar_url": "https://avatars.githubusercontent.com/u/118897289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pjpratik",
"html_url": "https://github.com/pjpratik",
"followers_url": "https://api.github.com/users/pjpratik/followers",
"following_url": "https://api.github.com/users/pjpratik/following{/other_user}",
"gists_url": "https://api.github.com/users/pjpratik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pjpratik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjpratik/subscriptions",
"organizations_url": "https://api.github.com/users/pjpratik/orgs",
"repos_url": "https://api.github.com/users/pjpratik/repos",
"events_url": "https://api.github.com/users/pjpratik/events{/privacy}",
"received_events_url": "https://api.github.com/users/pjpratik/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @Ajim0907 \r\n\r\nWe see that template has not been filled. Could you please fill the issue template ?\r\n\r\nThanks.",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further."
] | 2023-07-24T07:36:16 | 2023-08-09T01:52:03 | 2023-08-09T01:52:03 | NONE | null | null | null | ### 1. System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- TensorFlow installation (pip package or built from source):
- TensorFlow library (version, if pip package or github SHA, if built from source):
### 2. Code
Provide code to help us reproduce your issues using one of the following options:
#### Option A: Reference colab notebooks
1) Reference [TensorFlow Model Colab](https://colab.research.google.com/gist/ymodak/e96a4270b953201d5362c61c1e8b78aa/tensorflow-datasets.ipynb?authuser=1): Demonstrate how to build your TF model.
2) Reference [TensorFlow Lite Model Colab](https://colab.research.google.com/gist/ymodak/0dfeb28255e189c5c48d9093f296e9a8/tensorflow-lite-debugger-colab.ipynb): Demonstrate how to convert your TF model to a TF Lite model (with quantization, if used) and run TFLite Inference (if possible).
```
(You can paste links or attach files by dragging & dropping them below)
- Provide links to your updated versions of the above two colab notebooks.
- Provide links to your TensorFlow model and (optionally) TensorFlow Lite Model.
```
#### Option B: Paste your code here or provide a link to a custom end-to-end colab
```
(You can paste links or attach files by dragging & dropping them below)
- Include code to invoke the TFLite Converter Python API and the errors.
- Provide links to your TensorFlow model and (optionally) TensorFlow Lite Model.
```
### 3. Failure after conversion
If the conversion is successful, but the generated model is wrong, then state what is wrong:
- Model produces wrong results and/or has lesser accuracy.
- Model produces correct results, but it is slower than expected.
### 4. (optional) RNN conversion support
If converting TF RNN to TFLite fused RNN ops, please prefix [RNN] in the title.
### 5. (optional) Any other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61361/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61360 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61360/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61360/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61360/events | https://github.com/tensorflow/tensorflow/issues/61360 | 1,817,765,045 | I_kwDOArmXAs5sWOS1 | 61,360 | cannot find explicitly assigned device for op | {
"login": "xiedeacc",
"id": 8072296,
"node_id": "MDQ6VXNlcjgwNzIyOTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8072296?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiedeacc",
"html_url": "https://github.com/xiedeacc",
"followers_url": "https://api.github.com/users/xiedeacc/followers",
"following_url": "https://api.github.com/users/xiedeacc/following{/other_user}",
"gists_url": "https://api.github.com/users/xiedeacc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiedeacc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiedeacc/subscriptions",
"organizations_url": "https://api.github.com/users/xiedeacc/orgs",
"repos_url": "https://api.github.com/users/xiedeacc/repos",
"events_url": "https://api.github.com/users/xiedeacc/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiedeacc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 1097547147,
"node_id": "MDU6TGFiZWwxMDk3NTQ3MTQ3",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:ops",
"name": "comp:ops",
"color": "0052cc",
"default": false,
"description": "OPs related issues"
},
{
"id": 3797168204,
"node_id": "LA_kwDOArmXAs7iVDBM",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.8",
"name": "TF 2.8",
"color": "5DC9D0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@xiedeacc \r\nWill it be possible for you to upgrade the TF version to the latest?\r\nCould you please have a look at the below configurations and let us know if it helps?\r\n\r\nVersion | Python version | Compiler | Build tools\r\n-- | -- | -- | --\r\ntensorflow-2.13.0 | 3.8-3.11 | Clang 16.0.0 | Bazel 5.3.0\r\n \r\nPlease follow [this](https://www.tensorflow.org/install/source) document for more information on this. \r\nThank you!",
"fix by config.set_allow_soft_placement(true)",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61360\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61360\">No</a>\n",
"@xiedeacc Glad it worked fine for you. Thank you!"
] | 2023-07-24T06:53:39 | 2023-07-26T08:12:39 | 2023-07-26T08:09:05 | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf 2.8.4
### Custom code
No
### OS platform and distribution
ubuntu18.04
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
5.2.0
### GCC/compiler version
gcc-11.3.0
### CUDA/cuDNN version
none
### GPU model and memory
none
### Current behavior?
The model was trained on a GPU, but the inference machine has no GPU device, so loading the model fails because an op was explicitly assigned to a GPU device that does not exist. Here is the log:
```
E20230724 13:02:27.982717 35915 tf_model_core.cc:80] Failed to load model in /data/mfs6/new_wifi_models/tf_ranking_model/cvr_model/cvr_dynamic_embedding/2023-07-11.23/, Cannot assign a device for operation lr_weight-parameter_mht_1of1_lookup_table_export_values/TFRA>CuckooHashTableExport: {{node lr_weight-parameter_mht_1of1_lookup_table_export_values/TFRA>CuckooHashTableExport}} was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device. The requested device appears to be a GPU, but CUDA is not enabled.
```
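For reference, here is a minimal C++ sketch of the soft-placement workaround mentioned in the comments above. It is not the reporter's code: the model path and serving tag are placeholders, and actually loading this particular model would also require the TFRA custom ops to be linked into the binary.
```cpp
#include <unordered_set>

#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/cc/saved_model/tag_constants.h"

int main() {
  tensorflow::SessionOptions session_options;
  // Let ops pinned to /device:GPU:0 fall back to the CPU on a CPU-only host.
  session_options.config.set_allow_soft_placement(true);

  tensorflow::RunOptions run_options;
  tensorflow::SavedModelBundle bundle;
  tensorflow::Status status = tensorflow::LoadSavedModel(
      session_options, run_options, "/path/to/saved_model",
      {tensorflow::kSavedModelTagServe}, &bundle);
  if (!status.ok()) {
    // Without soft placement, this is where the "explicitly assigned to
    // /device:GPU:0" error above shows up on a machine without a GPU.
    return 1;
  }
  return 0;
}
```
In Python, the equivalent switch is `tf.config.set_soft_device_placement(True)`.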
### Standalone code to reproduce the issue
```shell
no code
```
### Relevant log output
_No response_ | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61360/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61359 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61359/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61359/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61359/events | https://github.com/tensorflow/tensorflow/pull/61359 | 1,817,481,302 | PR_kwDOArmXAs5WLqeP | 61,359 | Add ComplexOp as a builtin to TFLite | {
"login": "drubinstein",
"id": 577149,
"node_id": "MDQ6VXNlcjU3NzE0OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/577149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drubinstein",
"html_url": "https://github.com/drubinstein",
"followers_url": "https://api.github.com/users/drubinstein/followers",
"following_url": "https://api.github.com/users/drubinstein/following{/other_user}",
"gists_url": "https://api.github.com/users/drubinstein/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drubinstein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drubinstein/subscriptions",
"organizations_url": "https://api.github.com/users/drubinstein/orgs",
"repos_url": "https://api.github.com/users/drubinstein/repos",
"events_url": "https://api.github.com/users/drubinstein/events{/privacy}",
"received_events_url": "https://api.github.com/users/drubinstein/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 750616506,
"node_id": "MDU6TGFiZWw3NTA2MTY1MDY=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:lite",
"name": "comp:lite",
"color": "0052cc",
"default": false,
"description": "TF Lite related issues"
},
{
"id": 1169365682,
"node_id": "MDU6TGFiZWwxMTY5MzY1Njgy",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:L",
"name": "size:L",
"color": "adafea",
"default": false,
"description": "CL Change Size: Large"
}
] | open | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I noticed that I have `legalize-tf.mlir` tests failing due to the new tests I added. How do I go about fixing them? I'm not really sure what I'm doing differently compared to say the test for `tfl.real` which I based my tests on.\r\n\r\nQuick edit - fixed the test.",
"Bump. I've noticed that other people are making changes to similar files. Before I go and resolve the conflicts/regenerate the schema, is there any plan to review this PR in the near term?",
"Hi @drubinstein Sorry for the delay, can you please resolve the conflicts? once it done we will process it further. Thank you!",
"Thanks @gbaned . I resolved all the conflicts. I have a similar PR, #61280, that has similar conflicts as well that I'm also awaiting review for.",
"@gbaned @alankelly , it looks like there are conflicts again and they seem to be happening faster than an assigned reviewer can get to this PR. Are there currently any plans related TFLite and HLO that may be delaying the review of this PR?",
"Hi @alankelly Can you please assist on above [comments](https://github.com/tensorflow/tensorflow/pull/61359#issuecomment-1702645376) from @drubinstein. Thank you!",
"Hi @zichuan-wei Can you please assist on above https://github.com/tensorflow/tensorflow/pull/61359#issuecomment-1702645376 from @drubinstein. Thank you!",
"Hi @zichuan-wei / @alankelly Any update on this PR? Please. Thank you!",
"Hi @zichuan-wei / @alankelly Any update on this PR? Please. Thank you!",
"Hi @zichuan-wei / @alankelly Any update on this PR? Please. Thank you!",
"Hi @drubinstein Can you please resolve conflicts? Thank you!",
"Hi @gbaned , I resolved the PR conflicts. Does that mean I should expect a review soon and the tflite kernels are going to be stable for a bit?",
"Hi @drubinstein Can you please resolve conflicts? Thank you!",
"@gbaned as I've asked before, if I resolve the conflicts, will these files stay stable enough for an actual review? It's been 9 months since I've opened this PR and it feels like I've done conflict resolution a half dozen times with no review (or a hint of review) from the reviewers.",
"> @gbaned as I've asked before, if I resolve the conflicts, will these files stay stable enough for an actual review? It's been 9 months since I've opened this PR and it feels like I've done conflict resolution a half dozen times with no review (or a hint of review) from the reviewers.\r\n\r\nHi @drubinstein Sorry for the delay, please resolve the conflicts, we will process the PR for the review. Thank you so much!",
"@gbaned , I resolved the conflicts and pushed the updates.",
"Hi @Ferev Can you please review this PR? Thank you!",
"Hi @drubinstein Can you please resolve conflicts? Thank you!"
] | 2023-07-24T02:30:09 | 2024-06-07T16:09:23 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61359",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61359",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61359.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61359.patch",
"merged_at": null
} | For #61290 . This PR adds the complex op as a builtin to TFLite. I tried to follow instructions given to me in #61290 . Unfortunately, I'm developing this PR on a Mac M1 and have linker errors when trying to build the TFLite testing suite. | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61359/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 2,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61359/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61358 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61358/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61358/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61358/events | https://github.com/tensorflow/tensorflow/pull/61358 | 1,817,470,462 | PR_kwDOArmXAs5WLoSv | 61,358 | [NextPluggableDevice] Add TF_TemporaryVariable C API | {
"login": "jzhoulon",
"id": 6346853,
"node_id": "MDQ6VXNlcjYzNDY4NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6346853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jzhoulon",
"html_url": "https://github.com/jzhoulon",
"followers_url": "https://api.github.com/users/jzhoulon/followers",
"following_url": "https://api.github.com/users/jzhoulon/following{/other_user}",
"gists_url": "https://api.github.com/users/jzhoulon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jzhoulon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jzhoulon/subscriptions",
"organizations_url": "https://api.github.com/users/jzhoulon/orgs",
"repos_url": "https://api.github.com/users/jzhoulon/repos",
"events_url": "https://api.github.com/users/jzhoulon/events{/privacy}",
"received_events_url": "https://api.github.com/users/jzhoulon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 390482148,
"node_id": "MDU6TGFiZWwzOTA0ODIxNDg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/awaiting%20review",
"name": "awaiting review",
"color": "bc3869",
"default": false,
"description": "Pull request awaiting review"
},
{
"id": 987666414,
"node_id": "MDU6TGFiZWw5ODc2NjY0MTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/ready%20to%20pull",
"name": "ready to pull",
"color": "2cd643",
"default": false,
"description": "PR ready for merge process"
},
{
"id": 1169365494,
"node_id": "MDU6TGFiZWwxMTY5MzY1NDk0",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:M",
"name": "size:M",
"color": "adafea",
"default": false,
"description": "CL Change Size: Medium"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@jyingl3 can you have a look? thanks",
"> @jyingl3 can you have a look? thanks\r\n\r\nThanks Zhoulong. It looks good to me. Added Penporn as it is adding new TF C APIs.",
"@penpornk can you help to have a look? thanks"
] | 2023-07-24T02:15:21 | 2023-08-11T22:41:15 | 2023-08-11T22:41:15 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61358",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61358",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61358.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61358.patch",
"merged_at": "2023-08-11T22:41:15"
} | Currently, TemporaryVariableOp uses context->allocate_temp() to allocate a tensor and performs the DToH (device-to-host) copy through pjrt_device_context->CopyDeviceTensorToCPU(). However, when the buffer is allocated through AsyncValueAllocator, pjrt_buffer is a nullptr in TemporaryVariableOp, which causes a crash.
TF_TemporaryVariable allows the plugin to implement the op with its own tensor allocator. | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61358/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61357 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61357/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61357/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61357/events | https://github.com/tensorflow/tensorflow/issues/61357 | 1,817,031,256 | I_kwDOArmXAs5sTbJY | 61,357 | ColumnReduceKernel: min() type casting error and improvement | {
"login": "johnnkp",
"id": 22496821,
"node_id": "MDQ6VXNlcjIyNDk2ODIx",
"avatar_url": "https://avatars.githubusercontent.com/u/22496821?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnnkp",
"html_url": "https://github.com/johnnkp",
"followers_url": "https://api.github.com/users/johnnkp/followers",
"following_url": "https://api.github.com/users/johnnkp/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnkp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnnkp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnkp/subscriptions",
"organizations_url": "https://api.github.com/users/johnnkp/orgs",
"repos_url": "https://api.github.com/users/johnnkp/repos",
"events_url": "https://api.github.com/users/johnnkp/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnnkp/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 390482148,
"node_id": "MDU6TGFiZWwzOTA0ODIxNDg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/awaiting%20review",
"name": "awaiting review",
"color": "bc3869",
"default": false,
"description": "Pull request awaiting review"
},
{
"id": 1097547147,
"node_id": "MDU6TGFiZWwxMDk3NTQ3MTQ3",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:ops",
"name": "comp:ops",
"color": "0052cc",
"default": false,
"description": "OPs related issues"
},
{
"id": 1463677878,
"node_id": "MDU6TGFiZWwxNDYzNjc3ODc4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:performance",
"name": "type:performance",
"color": "159b2e",
"default": false,
"description": "Performance Issue"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | closed | false | {
"login": "Varsha-anjanappa",
"id": 137163810,
"node_id": "U_kgDOCCz0Ig",
"avatar_url": "https://avatars.githubusercontent.com/u/137163810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Varsha-anjanappa",
"html_url": "https://github.com/Varsha-anjanappa",
"followers_url": "https://api.github.com/users/Varsha-anjanappa/followers",
"following_url": "https://api.github.com/users/Varsha-anjanappa/following{/other_user}",
"gists_url": "https://api.github.com/users/Varsha-anjanappa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Varsha-anjanappa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Varsha-anjanappa/subscriptions",
"organizations_url": "https://api.github.com/users/Varsha-anjanappa/orgs",
"repos_url": "https://api.github.com/users/Varsha-anjanappa/repos",
"events_url": "https://api.github.com/users/Varsha-anjanappa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Varsha-anjanappa/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Varsha-anjanappa",
"id": 137163810,
"node_id": "U_kgDOCCz0Ig",
"avatar_url": "https://avatars.githubusercontent.com/u/137163810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Varsha-anjanappa",
"html_url": "https://github.com/Varsha-anjanappa",
"followers_url": "https://api.github.com/users/Varsha-anjanappa/followers",
"following_url": "https://api.github.com/users/Varsha-anjanappa/following{/other_user}",
"gists_url": "https://api.github.com/users/Varsha-anjanappa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Varsha-anjanappa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Varsha-anjanappa/subscriptions",
"organizations_url": "https://api.github.com/users/Varsha-anjanappa/orgs",
"repos_url": "https://api.github.com/users/Varsha-anjanappa/repos",
"events_url": "https://api.github.com/users/Varsha-anjanappa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Varsha-anjanappa/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I think following changes can solve the TODO:\r\n\r\n[line 351-364](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/kernels/reduction_gpu_kernels.cu.h#L351):\r\n```\r\n#if GOOGLE_CUDA || TENSORFLOW_USE_ROCM\r\n __shared__ storage_type<value_type>\r\n partial_sums[TF_RED_WARPSIZE * (TF_RED_WARPSIZE + 1)];\r\n#endif\r\n```\r\n\r\n[line 392](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/kernels/reduction_gpu_kernels.cu.h#L392):\r\n`min(blockDim.y, num_rows - blockIdx.y * blockDim.y); // MSVC type casting fix`\r\n\r\nThese changes can be compiled successfully in Windows CUDA environment. Can someone confirm if that's the meaning of the TODO?",
"Hi @johnnkp \r\n\r\nProposed PR has been merged now. Could you please check and close this issue.\r\n\r\nThank you!!",
"The mentioned PR does not include my proposed changes. I am going to create a PR for this issue.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61357\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61357\">No</a>\n",
"Hi @johnnkp \r\n```\r\nmin(blockDim.y, num_rows - blockIdx.y * blockDim.y); // MSVC type casting fix\r\n```\r\nthis caused a lot of build errors due to ```call to 'min' is ambiguous```\r\nhttps://github.com/tensorflow/tensorflow/pull/61638\r\n"
] | 2023-07-23T07:59:49 | 2023-08-18T18:48:31 | 2023-08-06T20:57:28 | CONTRIBUTOR | null | null | null | ### Issue type
Performance
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
2.13.0
### Custom code
Yes
### OS platform and distribution
Windows 10 22H2
### Mobile device
_No response_
### Python version
Anaconda 2023.07-1
### Bazel version
6.2.1
### GCC/compiler version
Visual Studio 2022 (build tools 14.36) + msys2-x86_64-20230718
### CUDA/cuDNN version
CUDA 11.8 + CUDNN 8.6.0 + TensorRT 8.5.3
### GPU model and memory
GTX 750 Ti 2GB
### Current behavior?
There are two type-casting errors in reduction_gpu_kernels.cu.h under MSVC. One of them is fixed in https://github.com/tensorflow/tensorflow/pull/61339; the other is related to a TODO.
In [ColumnReduceKernel()](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/kernels/reduction_gpu_kernels.cu.h#L341), the TODO says the following:
> 1D array necessary due to bug in CUDA 9 compiler.
> TODO(nluehr) revert to 2D array when compiler is ready.
> This is to mimic the following, but without constructors:
> __shared__ storage_type<value_type> partial_sums[TF_RED_WARPSIZE *
> (TF_RED_WARPSIZE + 1)];
Since latest version required CUDA 11, it's time to address the TODO and apply bug fix together.
### Standalone code to reproduce the issue
```shell
1. download https://github.com/tensorflow/tensorflow/archive/refs/tags/v2.13.0.zip and extract
2. comment out Windows CUDA build rejection code in configure.py
3. run `python configure.py` to configure Windows CUDA build
4. run `bazel build --config=opt --define=no_tensorflow_py_deps=true //tensorflow/tools/pip_package:build_pip_package`
```
### Relevant log output
```shell
external/com_google_absl\absl/status/status.h(796): warning #2810-D: ignoring return value type with "nodiscard" attribute
.\tensorflow/tsl/platform/file_system.h(571): warning #611-D: overloaded virtual function "tsl::FileSystem::FilesExist" is only partially overridden in class "tsl::WrappedFileSyste
m"
.\tensorflow/tsl/platform/file_system.h(571): warning #611-D: overloaded virtual function "tsl::FileSystem::CreateDir" is only partially overridden in class "tsl::WrappedFileSystem
"
.\tensorflow/tsl/platform/env.h(500): warning #611-D: overloaded virtual function "tsl::Env::RegisterFileSystem" is only partially overridden in class "tsl::EnvWrapper"
.\tensorflow/tsl/platform/float8.h(936): warning #177-D: variable "aligned_highest" was declared but never referenced
detected during:
instantiation of "To tsl::float8_internal::ConvertImpl<From, To, kSaturate, kTruncate, std::enable_if_t<<expression>, void>>::run(const From &) [with From=tsl::float8_i
nternal::float8_e4m3b11, To=tsl::float8_internal::float8_e4m3fn, kSaturate=false, kTruncate=false]"
(1018): here
instantiation of "Derived tsl::float8_internal::float8_base<Derived>::ConvertFrom(const From &) [with Derived=tsl::float8_internal::float8_e4m3fn, kSaturate=false, kTru
ncate=false, From=tsl::float8_internal::float8_e4m3b11]"
(277): here
.\tensorflow/tsl/platform/float8.h(936): warning #177-D: variable "aligned_highest" was declared but never referenced
detected during:
instantiation of "To tsl::float8_internal::ConvertImpl<From, To, kSaturate, kTruncate, std::enable_if_t<<expression>, void>>::run(const From &) [with From=tsl::float8_i
nternal::float8_e4m3b11, To=float, kSaturate=false, kTruncate=false]"
(1024): here
instantiation of "To tsl::float8_internal::float8_base<Derived>::ConvertTo<To,kSaturate,kTruncate>(const Derived &) [with Derived=tsl::float8_internal::float8_e4m3b11,
To=float, kSaturate=false, kTruncate=false]"
(75): here
instantiation of "tsl::float8_internal::float8_base<Derived>::operator float() const [with Derived=tsl::float8_internal::float8_e4m3b11]"
(116): here
instantiation of "Derived tsl::float8_internal::float8_base<Derived>::operator-(const Derived &) const [with Derived=tsl::float8_internal::float8_e4m3b11]"
(302): here
.\tensorflow/core/kernels/reduction_gpu_kernels.cu.h(392): error: no instance of overloaded function "tensorflow::min" matches the argument list
argument types are: (int, unsigned int)
detected during:
instantiation of "void tensorflow::functor::ColumnReduceKernel(T, OUT_T, int, int, Op, std::iterator_traits<T>::value_type) [with T=const float *, OUT_T=float *, Op=cub
::Max]"
(828): here
instantiation of "void tensorflow::functor::LaunchColumnReduction_LTE4096Cols(tensorflow::OpKernelContext *, OUT_T, IN_T, int, int, Op, T, const gpuStream_t &) [with T=
float, Op=cub::Max, OUT_T=float *, IN_T=const float *]"
(862): here
instantiation of "void tensorflow::functor::LaunchColumnReduction(tensorflow::OpKernelContext *, OUT_T, IN_T, int, int, Op, T, const gpuStream_t &) [with T=float, Op=cu
b::Max, OUT_T=float *, IN_T=const float *]"
(1088): here
instantiation of "void tensorflow::functor::ReduceImpl<T,Op,OUT_T,IN_T,ReductionAxes>(tensorflow::OpKernelContext *, OUT_T, IN_T, int, int, int, int, int, const Reducti
onAxes &, Op) [with T=float, Op=cub::Max, OUT_T=float *, IN_T=const float *, ReductionAxes=const Eigen::array<Eigen::DenseIndex, 1ULL> &]"
E:\_bazel_tensorflow\4zvk5ci6\execroot\org_tensorflow\tensorflow\core\kernels\multinomial_op_gpu.cu.cc(111): here
instantiation of "void tensorflow::functor::MultinomialFunctor<tensorflow::functor::GPUDevice, T, OutputType>::operator()(tensorflow::OpKernelContext *, const tensorflo
w::functor::GPUDevice &, tensorflow::TTypes<T, 1, Eigen::DenseIndex>::ConstMatrix, tensorflow::TTypes<float, 1, Eigen::DenseIndex>::Flat, tensorflow::TTypes<float, 1, Eigen::DenseI
ndex>::Flat, tensorflow::TTypes<float, 1, Eigen::DenseIndex>::Flat, int, int, int, const tsl::random::PhiloxRandom &, tensorflow::TTypes<OutputType, 1, Eigen::DenseIndex>::Matrix)
[with T=Eigen::half, OutputType=tsl::int32]"
E:\_bazel_tensorflow\4zvk5ci6\execroot\org_tensorflow\tensorflow\core\kernels\multinomial_op_gpu.cu.cc(126): here
1 error detected in the compilation of "tensorflow/core/kernels/multinomial_op_gpu.cu.cc".
nvcc warning : The 'compute_35', 'compute_37', 'sm_35', and 'sm_37' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppres
s warning).
Target //tensorflow/tools/pip_package:build_pip_package failed to build
INFO: Elapsed time: 1996.828s, Critical Path: 480.15s
INFO: 441 processes: 7 internal, 434 local.
FAILED: Build did NOT complete successfully
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61357/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61354 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61354/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61354/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61354/events | https://github.com/tensorflow/tensorflow/issues/61354 | 1,816,771,832 | I_kwDOArmXAs5sSbz4 | 61,354 | rocm_helpers missing dependency declarations | {
"login": "MrTreev",
"id": 16656570,
"node_id": "MDQ6VXNlcjE2NjU2NTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16656570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MrTreev",
"html_url": "https://github.com/MrTreev",
"followers_url": "https://api.github.com/users/MrTreev/followers",
"following_url": "https://api.github.com/users/MrTreev/following{/other_user}",
"gists_url": "https://api.github.com/users/MrTreev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MrTreev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MrTreev/subscriptions",
"organizations_url": "https://api.github.com/users/MrTreev/orgs",
"repos_url": "https://api.github.com/users/MrTreev/repos",
"events_url": "https://api.github.com/users/MrTreev/events{/privacy}",
"received_events_url": "https://api.github.com/users/MrTreev/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 473173351,
"node_id": "MDU6TGFiZWw0NzMxNzMzNTE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:build/install",
"name": "type:build/install",
"color": "159b2e",
"default": false,
"description": "Build and install issues"
},
{
"id": 1205615612,
"node_id": "MDU6TGFiZWwxMjA1NjE1NjEy",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/subtype:%20ubuntu/linux",
"name": "subtype: ubuntu/linux",
"color": "b619ea",
"default": false,
"description": "Ubuntu/Linux Build/Installation Issues"
}
] | closed | false | {
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @MrTreev ,\r\n\r\nCould you please test with the below configurations of Clang and Bazel and let us know if problem still persists.Because higher versions may or may not compatible. It seems your Clang version is 17.0 against tested version of 16.0.0. Same for Bazel also where it seems you have 6.1.0 installed and tested version is 5.3.0 for Tf2.13.\r\n\r\n\r\n\r\nVersion | Python version | Compiler | Build tools\r\n-- | -- | -- | --\r\ntensorflow-2.13.0 | 3.8-3.11 | Clang 16.0.0 | Bazel 5.3.0\r\n\r\nYou can find the build instructions [here](https://www.tensorflow.org/install/source#install_python_and_the_tensorflow_package_dependencies).\r\n",
"I certainly can try that either later tonight or tomorrow morning (AEST). I'll get back to you when that's done.",
"On the r2.13 branch, I've switched to Bazel 5.3.0, I'll try the other changes in the morning tomorrow, but so far, no difference in the error (Been doing full clean builds each time)",
"Hi @MrTreev ,\r\n\r\nKindly update on this. Thanks!",
"Hi @SuryanarayanaY, I'm attempting to get Clang 16 reliably working at the moment, Sadly the archlinux repos currently have only 15 and 17, so I'm having to do it manually and trying not to break the rest of my environment while doing so is proving a little tricky. Thankfully I should be able to dedicate a good bit of time over the next couple of days to this, so I hope to have an update soon.",
"I've found a set of working rocm packages with clang-16 included, since I've switched to them I have gotten a different error, which I believe should be able to be fixed by adding the files somewhere in the bazel build system, I'm trying to figure out where exactly at the moment, but if there's anyone that could look at this that'd be appreciated.\r\n\r\n```\r\nINFO: Analyzed target //tensorflow/tools/pip_package:build_pip_package (624 packages loaded, 43746 targets configured).\r\nINFO: Found 1 target...\r\nERROR: /home/user/Repos/tensorflow/tensorflow/compiler/xla/stream_executor/rocm/BUILD:463:11: Compiling tensorflow/compiler/xla/stream_executor/rocm/rocm_helpers.cu.cc [for host] failed: undeclared inclusion(s) in rule '//tensorflow/compiler/xla/stream_executor/rocm:rocm_helpers':\r\nthis rule is missing dependency declarations for the following files included by 'tensorflow/compiler/xla/stream_executor/rocm/rocm_helpers.cu.cc':\r\n '/opt/rocm/llvm/lib/clang/16.0.0/include/__clang_hip_runtime_wrapper.h'\r\n '/opt/rocm/llvm/lib/clang/16.0.0/include/cuda_wrappers/cmath'\r\n '/opt/rocm/llvm/lib/clang/16.0.0/include/stddef.h'\r\n '/opt/rocm/llvm/lib/clang/16.0.0/include/__clang_hip_libdevice_declares.h'\r\n '/opt/rocm/llvm/lib/clang/16.0.0/include/__clang_hip_math.h'\r\n '/opt/rocm/llvm/lib/clang/16.0.0/include/cuda_wrappers/algorithm'\r\n '/opt/rocm/llvm/lib/clang/16.0.0/include/cuda_wrappers/new'\r\n '/opt/rocm/llvm/lib/clang/16.0.0/include/limits.h'\r\n '/opt/rocm/llvm/lib/clang/16.0.0/include/stdint.h'\r\n '/opt/rocm/llvm/lib/clang/16.0.0/include/__clang_hip_stdlib.h'\r\n '/opt/rocm/llvm/lib/clang/16.0.0/include/__clang_cuda_math_forward_declares.h'\r\n '/opt/rocm/llvm/lib/clang/16.0.0/include/__clang_hip_cmath.h'\r\n '/opt/rocm/llvm/lib/clang/16.0.0/include/__clang_cuda_complex_builtins.h'\r\n '/opt/rocm/llvm/lib/clang/16.0.0/include/cuda_wrappers/complex'\r\n '/opt/rocm/llvm/lib/clang/16.0.0/include/__stddef_max_align_t.h'\r\n '/opt/rocm/llvm/lib/clang/16.0.0/include/stdarg.h'\r\n/home/user/.cache/bazel/_bazel_user/057ab612123f87ae7f238751a7c28667/execroot/org_tensorflow/external/local_config_rocm/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc:23: DeprecationWarning: 'pipes' is deprecated and slated for removal in Python 3.13\r\n import pipes\r\nclang-16: warning: argument unused during compilation: '-fcuda-flush-denormals-to-zero' [-Wunused-command-line-argument]\r\nTarget //tensorflow/tools/pip_package:build_pip_package failed to build\r\nUse --verbose_failures to see the command lines of failed build steps.\r\nINFO: Elapsed time: 27.215s, Critical Path: 6.75s\r\nINFO: 536 processes: 125 internal, 411 local.\r\nFAILED: Build did NOT complete successfully\r\n```",
"I found that ROCm tensorflow-upstream goes further in the build process, so I'm looking at the differences at the moment to try to find a fix ",
"I don't think there's a simple fix I can apply, and the best place for my issue is likely in the RadeonOpenCompute fork until the changes I need are merged. ",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61354\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61354\">No</a>\n"
] | 2023-07-22T13:48:50 | 2023-07-29T04:13:23 | 2023-07-29T04:13:21 | NONE | null | null | null | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
master/nightly
### Custom code
Yes
### OS platform and distribution
`Arch Linux (Linux 6.4.4-arch1-1 #1 SMP PREEMPT_DYNAMIC x86_64 GNU/Linux)`
### Mobile device
N/A
### Python version
3.10
### Bazel version
6.1.0
### GCC/compiler version
gcc (GCC) 13.1.1 20230714
### CUDA/cuDNN version
None
### GPU model and memory
AMD Radeon RX 7900 XT
### Current behavior?
After adding `#include <stdint.h>` to line 16 of `tensorflow/tsl/lib/io/cache.cc` to fix a different error, and using the installation method described in the reproduce field, Bazel gives the error described in the attached log.
This persists across different Bazel versions and full clean builds.
I am using the following Arch Linux packages for ROCm:
```
local/opencl-amd 1:5.6.0-2
ROCr OpenCL stack
local/opencl-amd-dev 1:5.6.0-1
OpenCL SDK / HIP SDK / ROCM Compiler.
```
### Standalone code to reproduce the issue
```shell
./configure
You have bazel 6.1.0 installed.
Please specify the location of python. [Default is /usr/bin/python3]:
Found possible Python library paths:
/usr/lib/python3.11/site-packages
Please input the desired Python library path to use. Default is [/usr/lib/python3.11/site-packages]
Do you wish to build TensorFlow with ROCm support? [y/N]: y
ROCm support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]:
No CUDA support will be enabled for TensorFlow.
Do you want to use Clang to build TensorFlow? [Y/n]:
Clang will be used to compile TensorFlow.
Please specify the path to clang executable. [Default is /usr/bin/clang]:
You have Clang 17.0.0 installed.
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -Wno-sign-compare]:
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]:
Not configuring the WORKSPACE for Android builds.
bazel build --config=opt --verbose_failures //tensorflow/tools/pip_package:build_pip_package
```
### Relevant log output
```shell
ERROR: /home/user/Repos/tensorflow/tensorflow/compiler/xla/stream_executor/rocm/BUILD:527:11: Compiling tensorflow/compiler/xla/stream_executor/rocm/rocm_helpers.cu.cc [for tool] failed: undeclared inclusion(s) in rule '//tensorflow/compiler/xla/stream_executor/rocm:rocm_helpers':
this rule is missing dependency declarations for the following files included by 'tensorflow/compiler/xla/stream_executor/rocm/rocm_helpers.cu.cc':
'/opt/rocm-5.6.0/include/hip/hip_version.h'
'/opt/rocm-5.6.0/include/hip/hip_runtime.h'
'/opt/rocm-5.6.0/include/hip/hip_common.h'
'/opt/rocm-5.6.0/include/hip/amd_detail/amd_hip_runtime.h'
'/opt/rocm-5.6.0/include/hip/amd_detail/amd_hip_common.h'
'/opt/rocm-5.6.0/include/hip/hip_runtime_api.h'
'/opt/rocm-5.6.0/include/hip/amd_detail/host_defines.h'
'/opt/rocm-5.6.0/include/hip/driver_types.h'
'/opt/rocm-5.6.0/include/hip/texture_types.h'
'/opt/rocm-5.6.0/include/hip/channel_descriptor.h'
'/opt/rocm-5.6.0/include/hip/amd_detail/amd_channel_descriptor.h'
'/opt/rocm-5.6.0/include/hip/amd_detail/amd_hip_vector_types.h'
'/opt/rocm-5.6.0/include/hip/surface_types.h'
'/opt/rocm-5.6.0/include/hip/amd_detail/amd_hip_runtime_pt_api.h'
'/opt/rocm-5.6.0/include/hip/amd_detail/hip_ldg.h'
'/opt/rocm-5.6.0/include/hip/amd_detail/amd_hip_atomic.h'
'/opt/rocm-5.6.0/include/hip/amd_detail/amd_device_functions.h'
'/opt/rocm-5.6.0/include/hip/amd_detail/math_fwd.h'
'/opt/rocm-5.6.0/include/hip/hip_vector_types.h'
'/opt/rocm-5.6.0/include/hip/amd_detail/device_library_decls.h'
'/opt/rocm-5.6.0/include/hip/amd_detail/amd_warp_functions.h'
'/opt/rocm-5.6.0/include/hip/amd_detail/amd_hip_unsafe_atomics.h'
'/opt/rocm-5.6.0/include/hip/amd_detail/amd_surface_functions.h'
'/opt/rocm-5.6.0/include/hip/amd_detail/ockl_image.h'
'/opt/rocm-5.6.0/include/hip/amd_detail/texture_fetch_functions.h'
'/opt/rocm-5.6.0/include/hip/hip_texture_types.h'
'/opt/rocm-5.6.0/include/hip/amd_detail/texture_indirect_functions.h'
'/opt/rocm-5.6.0/include/hip/amd_detail/amd_math_functions.h'
'/opt/rocm-5.6.0/include/hip/amd_detail/hip_fp16_math_fwd.h'
'/opt/rocm-5.6.0/include/hip/library_types.h'
'/opt/rocm-5.6.0/include/hip/hip_bfloat16.h'
'/opt/rocm-5.6.0/include/hip/amd_detail/amd_hip_bfloat16.h'
'/opt/rocm-5.6.0/include/hip/hip_fp16.h'
'/opt/rocm-5.6.0/include/hip/amd_detail/amd_hip_fp16.h'
clang-16: warning: argument unused during compilation: '-fcuda-flush-denormals-to-zero' [-Wunused-command-line-argument]
Target //tensorflow/tools/pip_package:build_pip_package failed to build
INFO: Elapsed time: 3.270s, Critical Path: 3.10s
INFO: 77 processes: 53 internal, 24 local.
FAILED: Build did NOT complete successfully
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61354/timeline | null | not_planned | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61367 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61367/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61367/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61367/events | https://github.com/tensorflow/tensorflow/issues/61367 | 1,818,800,858 | I_kwDOArmXAs5saLLa | 61,367 | Need Help with TensorFlow Lite Model Running on GPU - Output Interpretation Issue (Android Studio Kotlin) | {
"login": "rafrxx",
"id": 140249538,
"node_id": "U_kgDOCFwJwg",
"avatar_url": "https://avatars.githubusercontent.com/u/140249538?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafrxx",
"html_url": "https://github.com/rafrxx",
"followers_url": "https://api.github.com/users/rafrxx/followers",
"following_url": "https://api.github.com/users/rafrxx/following{/other_user}",
"gists_url": "https://api.github.com/users/rafrxx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafrxx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafrxx/subscriptions",
"organizations_url": "https://api.github.com/users/rafrxx/orgs",
"repos_url": "https://api.github.com/users/rafrxx/repos",
"events_url": "https://api.github.com/users/rafrxx/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafrxx/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473184161,
"node_id": "MDU6TGFiZWw0NzMxODQxNjE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:support",
"name": "type:support",
"color": "159b2e",
"default": false,
"description": "Support issues"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 750616506,
"node_id": "MDU6TGFiZWw3NTA2MTY1MDY=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:lite",
"name": "comp:lite",
"color": "0052cc",
"default": false,
"description": "TF Lite related issues"
},
{
"id": 2671339633,
"node_id": "MDU6TGFiZWwyNjcxMzM5NjMz",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TFLiteGpuDelegate",
"name": "TFLiteGpuDelegate",
"color": "F71F04",
"default": false,
"description": "TFLite Gpu delegate issue"
},
{
"id": 4989164230,
"node_id": "LA_kwDOArmXAs8AAAABKWCaxg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/Android",
"name": "Android",
"color": "e99695",
"default": false,
"description": ""
}
] | closed | false | {
"login": "pkgoogle",
"id": 132095473,
"node_id": "U_kgDOB9-d8Q",
"avatar_url": "https://avatars.githubusercontent.com/u/132095473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pkgoogle",
"html_url": "https://github.com/pkgoogle",
"followers_url": "https://api.github.com/users/pkgoogle/followers",
"following_url": "https://api.github.com/users/pkgoogle/following{/other_user}",
"gists_url": "https://api.github.com/users/pkgoogle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pkgoogle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pkgoogle/subscriptions",
"organizations_url": "https://api.github.com/users/pkgoogle/orgs",
"repos_url": "https://api.github.com/users/pkgoogle/repos",
"events_url": "https://api.github.com/users/pkgoogle/events{/privacy}",
"received_events_url": "https://api.github.com/users/pkgoogle/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "pkgoogle",
"id": 132095473,
"node_id": "U_kgDOB9-d8Q",
"avatar_url": "https://avatars.githubusercontent.com/u/132095473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pkgoogle",
"html_url": "https://github.com/pkgoogle",
"followers_url": "https://api.github.com/users/pkgoogle/followers",
"following_url": "https://api.github.com/users/pkgoogle/following{/other_user}",
"gists_url": "https://api.github.com/users/pkgoogle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pkgoogle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pkgoogle/subscriptions",
"organizations_url": "https://api.github.com/users/pkgoogle/orgs",
"repos_url": "https://api.github.com/users/pkgoogle/repos",
"events_url": "https://api.github.com/users/pkgoogle/events{/privacy}",
"received_events_url": "https://api.github.com/users/pkgoogle/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @rafrxx, so as I understand your situation, Your code runs but you're not sure if it ran correctly? It seems like you're running a Computer Vision model of some sort... are you trying to do classification or something else? What is the model's intended output? It seems like feature extraction -- in which case most humans probably can't interpret that. Any additional information will be helpful!",
"Hello. Thank you for your response. This is an object detection model. I have a short code that displays the Frames Per Second (FPS) between captured frames. On a good phone without model inference, it runs at 80-100 FPS. When uncommenting the line responsible for model inference, the refresh rate drops to 8-10 FPS. When using GPU, it reaches 20-23 FPS. However, I'm unsure how to interpret the output.",
"Hi @rafrxx, what is your exact model? or how did you build/architecture it? Where did you get the .tflite model from? It seems like it's just doing feature extraction and not doing the actual classification. Basically my current hypothesis is your tflite model is headless, it computes the features but doesn't have a classification head... but I am not sure how you created or got to that state.",
"This model is based on ssd-mobilenet-v2-fpnlite-320 architecture. It provides output containing the locations, classes, and probabilities that I can utilize in the subsequent part of the code. I prepared the data myself, but I'm not as proficient in this area. It's possible that what I'm trying to achieve may not be feasible. Nevertheless, thank you for your response.",
"Hi @rafrxx, no worries, what format are the expected outputs and what exactly are you seeing? It's kind of hard for me to help as I don't know how they look like and I don't have the model file. If you can upload a toy model, toy project or both that I can reproduce locally that'll be the most helpful.",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61367\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61367\">No</a>\n",
"I also have the concern @rafrxx. I created a machine learning model for object detection. I got the code above from a sample code provided by android studio. Now, I am not sure on how to output the prediction. "
] | 2023-07-22T11:56:03 | 2023-08-19T07:40:14 | 2023-08-10T01:54:17 | NONE | null | null | null | Hello. I have created an Android application in Android Studio that uses a tflite model. Its implementation works without any issues and looks as follows:
val model = Ssd.newInstance(context)
// Creates inputs for reference.
val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 320, 320, 3), DataType.UINT8)
inputFeature0.loadBuffer(byteBuffer)
// Runs model inference and gets result.
val outputs = model.process(inputFeature0)
val outputFeature0 = outputs.outputFeature0AsTensorBuffer
val outputFeature1 = outputs.outputFeature1AsTensorBuffer
val outputFeature2 = outputs.outputFeature2AsTensorBuffer
val outputFeature3 = outputs.outputFeature3AsTensorBuffer
// Releases model resources if no longer used.
model.close()
However, the application is running slowly, and I would like to perform the model computations on the GPU.
I am facing an issue with the input and output parts.
I couldn't find any information about it anywhere. The current code looks like this:
val options = Interpreter.Options().apply {
if(compatList.isDelegateSupportedOnThisDevice) {
val delegateOptions = compatList.bestOptionsForThisDevice
this.addDelegate(GpuDelegate(delegateOptions))
} else {
this.setNumThreads(4)
}
}
interpreter = Interpreter(loadModelFile(assets,"Ssd.tflite"), options)
val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 320, 320, 3), DataType.FLOAT32)
inputFeature0.loadBuffer(byteBuffer)
Then, I should create the input.buffer for the main line:
interpreter.run(inputFeature0.buffer, outputs.buffer)
I tried making some adjustments, but the outputs.buffer I got as a result was something I couldn't interpret. Has anyone encountered a similar problem? If so, I would appreciate your help.
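As a first step toward interpreting those buffers, here is a minimal sketch in Python ("Ssd.tflite" is a placeholder for the same model file bundled in the app) that prints each output tensor's name, shape, and dtype, which the Kotlin output buffers need to match:
```python
import tensorflow as tf

# "Ssd.tflite" is a placeholder path; use the model file bundled with the app.
interpreter = tf.lite.Interpreter(model_path="Ssd.tflite")
interpreter.allocate_tensors()

# Print the name, shape, and dtype of every output tensor so the
# corresponding Kotlin output buffers can be sized and ordered to match.
for detail in interpreter.get_output_details():
    print(detail["name"], detail["shape"], detail["dtype"])
```
For an SSD-style detection model these outputs typically correspond to locations, classes, scores, and the number of detections, although the exact order depends on the model.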
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61367/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61353 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61353/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61353/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61353/events | https://github.com/tensorflow/tensorflow/issues/61353 | 1,816,657,998 | I_kwDOArmXAs5sSABO | 61,353 | TFLite Error | {
"login": "MogilipalemHemanthKumar",
"id": 107172150,
"node_id": "U_kgDOBmNRNg",
"avatar_url": "https://avatars.githubusercontent.com/u/107172150?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MogilipalemHemanthKumar",
"html_url": "https://github.com/MogilipalemHemanthKumar",
"followers_url": "https://api.github.com/users/MogilipalemHemanthKumar/followers",
"following_url": "https://api.github.com/users/MogilipalemHemanthKumar/following{/other_user}",
"gists_url": "https://api.github.com/users/MogilipalemHemanthKumar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MogilipalemHemanthKumar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MogilipalemHemanthKumar/subscriptions",
"organizations_url": "https://api.github.com/users/MogilipalemHemanthKumar/orgs",
"repos_url": "https://api.github.com/users/MogilipalemHemanthKumar/repos",
"events_url": "https://api.github.com/users/MogilipalemHemanthKumar/events{/privacy}",
"received_events_url": "https://api.github.com/users/MogilipalemHemanthKumar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 750616506,
"node_id": "MDU6TGFiZWw3NTA2MTY1MDY=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:lite",
"name": "comp:lite",
"color": "0052cc",
"default": false,
"description": "TF Lite related issues"
},
{
"id": 2915920098,
"node_id": "MDU6TGFiZWwyOTE1OTIwMDk4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:lite-flex",
"name": "comp:lite-flex",
"color": "2E0DFE",
"default": false,
"description": ""
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | closed | false | {
"login": "pjpratik",
"id": 118897289,
"node_id": "U_kgDOBxY6iQ",
"avatar_url": "https://avatars.githubusercontent.com/u/118897289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pjpratik",
"html_url": "https://github.com/pjpratik",
"followers_url": "https://api.github.com/users/pjpratik/followers",
"following_url": "https://api.github.com/users/pjpratik/following{/other_user}",
"gists_url": "https://api.github.com/users/pjpratik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pjpratik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjpratik/subscriptions",
"organizations_url": "https://api.github.com/users/pjpratik/orgs",
"repos_url": "https://api.github.com/users/pjpratik/repos",
"events_url": "https://api.github.com/users/pjpratik/events{/privacy}",
"received_events_url": "https://api.github.com/users/pjpratik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "pjpratik",
"id": 118897289,
"node_id": "U_kgDOBxY6iQ",
"avatar_url": "https://avatars.githubusercontent.com/u/118897289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pjpratik",
"html_url": "https://github.com/pjpratik",
"followers_url": "https://api.github.com/users/pjpratik/followers",
"following_url": "https://api.github.com/users/pjpratik/following{/other_user}",
"gists_url": "https://api.github.com/users/pjpratik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pjpratik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjpratik/subscriptions",
"organizations_url": "https://api.github.com/users/pjpratik/orgs",
"repos_url": "https://api.github.com/users/pjpratik/repos",
"events_url": "https://api.github.com/users/pjpratik/events{/privacy}",
"received_events_url": "https://api.github.com/users/pjpratik/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @MogilipalemHemanthKumar \r\n\r\nWe need to add the select tf ops dependencies as mentioned error if the converted model has select ops.\r\n\r\nYou can specify this in your build.gradle dependencies by adding it alongside the standard TensorFlow Lite AAR as follows:\r\n```\r\ndependencies {\r\n implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly-SNAPSHOT'\r\n // This dependency adds the necessary TF op support.\r\n implementation 'org.tensorflow:tensorflow-lite-select-tf-ops:0.0.0-nightly-SNAPSHOT'\r\n}\r\n```\r\nThanks.",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61353\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61353\">No</a>\n"
] | 2023-07-22T07:11:34 | 2023-08-09T01:52:13 | 2023-08-09T01:52:09 | NONE | null | null | null | **System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):Windows 11
- TensorFlow installed from (source or binary):source
- TensorFlow version (or github SHA if from source):2.15.0
**Provide the text output from tflite_convert**
Below is the code I am using to convert the deep learning model to TFLite:
converter = tf.lite.TFLiteConverter.from_keras_model(best_model)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
with open('compressed_model.tflite', 'wb') as f:
f.write(tflite_model)
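As a sanity check, here is a minimal sketch of opening the converted model from Python — the full `tensorflow` pip package bundles the Flex delegate required for `SELECT_TF_OPS`, so the model should load here even if a runtime without the select-ops library rejects it:
```python
import tensorflow as tf

# The tensorflow pip package links the Flex delegate, so select TF ops resolve here.
interpreter = tf.lite.Interpreter(model_path="compressed_model.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details())
print(interpreter.get_output_details())
```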
**Standalone code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate
the problem. If possible, please share a link to Colab/Jupyter/any notebook.
https://colab.research.google.com/drive/1QlquN0xR94xMdiUNer00Nu0n5UDXdiWQ

| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61353/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61352 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61352/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61352/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61352/events | https://github.com/tensorflow/tensorflow/issues/61352 | 1,816,587,517 | I_kwDOArmXAs5sRuz9 | 61,352 | LSTM loss does not work TPU with bfloat16 | {
"login": "sronen71",
"id": 4361027,
"node_id": "MDQ6VXNlcjQzNjEwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4361027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sronen71",
"html_url": "https://github.com/sronen71",
"followers_url": "https://api.github.com/users/sronen71/followers",
"following_url": "https://api.github.com/users/sronen71/following{/other_user}",
"gists_url": "https://api.github.com/users/sronen71/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sronen71/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sronen71/subscriptions",
"organizations_url": "https://api.github.com/users/sronen71/orgs",
"repos_url": "https://api.github.com/users/sronen71/repos",
"events_url": "https://api.github.com/users/sronen71/events{/privacy}",
"received_events_url": "https://api.github.com/users/sronen71/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 1097541661,
"node_id": "MDU6TGFiZWwxMDk3NTQxNjYx",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:tpus",
"name": "comp:tpus",
"color": "0052cc",
"default": false,
"description": "tpu, tpuestimator"
},
{
"id": 5206407904,
"node_id": "LA_kwDOArmXAs8AAAABNlN64A",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.12",
"name": "TF 2.12",
"color": "c5def5",
"default": false,
"description": "For issues related to Tensorflow 2.12"
}
] | closed | false | {
"login": "tilakrayal",
"id": 81610181,
"node_id": "MDQ6VXNlcjgxNjEwMTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/81610181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tilakrayal",
"html_url": "https://github.com/tilakrayal",
"followers_url": "https://api.github.com/users/tilakrayal/followers",
"following_url": "https://api.github.com/users/tilakrayal/following{/other_user}",
"gists_url": "https://api.github.com/users/tilakrayal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tilakrayal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tilakrayal/subscriptions",
"organizations_url": "https://api.github.com/users/tilakrayal/orgs",
"repos_url": "https://api.github.com/users/tilakrayal/repos",
"events_url": "https://api.github.com/users/tilakrayal/events{/privacy}",
"received_events_url": "https://api.github.com/users/tilakrayal/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "tilakrayal",
"id": 81610181,
"node_id": "MDQ6VXNlcjgxNjEwMTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/81610181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tilakrayal",
"html_url": "https://github.com/tilakrayal",
"followers_url": "https://api.github.com/users/tilakrayal/followers",
"following_url": "https://api.github.com/users/tilakrayal/following{/other_user}",
"gists_url": "https://api.github.com/users/tilakrayal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tilakrayal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tilakrayal/subscriptions",
"organizations_url": "https://api.github.com/users/tilakrayal/orgs",
"repos_url": "https://api.github.com/users/tilakrayal/repos",
"events_url": "https://api.github.com/users/tilakrayal/events{/privacy}",
"received_events_url": "https://api.github.com/users/tilakrayal/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@sronen71,\r\nAs you mentioned, it was the issue on tensorflow v2.12. When I tried to execute the same mentioned code on tensorflow v2.13`(tpu)` it was executed without any issue/error. Kindly find the gist of it [here](https://colab.research.google.com/gist/tilakrayal/349987540ae6320ed3726d2e86a996d4/untitled1255.ipynb). Thank you!",
"That's great, thank you @tilakrayal !\r\n",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61352\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61352\">No</a>\n"
] | 2023-07-22T03:19:56 | 2023-07-24T14:46:45 | 2023-07-24T14:46:41 | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.12.0
### Custom code
Yes
### OS platform and distribution
Google Colab + TPU
### Mobile device
_No response_
### Python version
3.10
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
LSTM on TPU works in float32.
With bfloat16, it gives the following error message:
Value passed to parameter 'input' has DataType bfloat16 not in list of allowed values: float16, float32, float64
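For context, a minimal sketch of the kind of setup that triggers this, assuming the Keras mixed-precision API (the actual notebook in the gist below may differ):
```python
import tensorflow as tf
from tensorflow import keras

# Under this policy, layer computations and inter-layer tensors use bfloat16.
keras.mixed_precision.set_global_policy("mixed_bfloat16")

model = keras.Sequential([
    keras.layers.Embedding(input_dim=1000, output_dim=128),
    keras.layers.LSTM(64),  # receives bfloat16 inputs under the policy
    keras.layers.Dense(10),
])
```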
### Standalone code to reproduce the issue
```shell
Colab Code in this gist:
https://colab.research.google.com/gist/sronen71/cacdc527ea3a7d267e5f47e6dcc8f17f/working_with_rnns.ipynb.
Run with TPU.
```
### Relevant log output
```shell
TypeError Traceback (most recent call last)
<ipython-input-12-d2060b3c689a> in <cell line: 1>()
1 with strategy.scope():
----> 2 model = build_model()
3
4 model.compile(
5 loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
3 frames
/usr/local/lib/python3.10/dist-packages/tensorflow/python/framework/op_def_library.py in _SatisfiesTypeConstraint(dtype, attr_def, param_name)
54 allowed_values = ", ".join(dtypes.as_dtype(x).name for x in allowed_list)
55 if dtype not in allowed_list:
---> 56 raise TypeError(
57 f"Value passed to parameter '{param_name}' has DataType "
58 f"{dtypes.as_dtype(dtype).name} not in list of allowed values: "
TypeError: Exception encountered when calling layer "lstm_4" (type LSTM).
Value passed to parameter 'input' has DataType bfloat16 not in list of allowed values: float16, float32, float64
Call arguments received by layer "lstm_4" (type LSTM):
• inputs=tf.Tensor(shape=(None, None, 128), dtype=bfloat16)
• mask=None
• training=None
• initial_state=None
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61352/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61352/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61351 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61351/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61351/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61351/events | https://github.com/tensorflow/tensorflow/issues/61351 | 1,816,208,893 | I_kwDOArmXAs5sQSX9 | 61,351 | Tensorflow Load Datasets Failure for Python 3.11.4 and Tensorflow 2.13.0 | {
"login": "teddy661",
"id": 76535893,
"node_id": "MDQ6VXNlcjc2NTM1ODkz",
"avatar_url": "https://avatars.githubusercontent.com/u/76535893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/teddy661",
"html_url": "https://github.com/teddy661",
"followers_url": "https://api.github.com/users/teddy661/followers",
"following_url": "https://api.github.com/users/teddy661/following{/other_user}",
"gists_url": "https://api.github.com/users/teddy661/gists{/gist_id}",
"starred_url": "https://api.github.com/users/teddy661/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/teddy661/subscriptions",
"organizations_url": "https://api.github.com/users/teddy661/orgs",
"repos_url": "https://api.github.com/users/teddy661/repos",
"events_url": "https://api.github.com/users/teddy661/events{/privacy}",
"received_events_url": "https://api.github.com/users/teddy661/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 1205615612,
"node_id": "MDU6TGFiZWwxMjA1NjE1NjEy",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/subtype:%20ubuntu/linux",
"name": "subtype: ubuntu/linux",
"color": "b619ea",
"default": false,
"description": "Ubuntu/Linux Build/Installation Issues"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | closed | false | {
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Appears to be fixed in nightly 2.14.0-dev20230721",
"Hi @teddy661 ,\r\n\r\nI have replicated the reported behaviour with TF2.13 and python=3.11 and the reported error was observed. However with tf-nightly there is no such error which also same as per your observation. Attached below are the logs for checking same.\r\n\r\n[61351-logs.txt](https://github.com/tensorflow/tensorflow/files/12142597/61351-logs.txt)\r\n\r\n\r\nIt's already resolved in tf-nightly. We need to check whether the error is actually stopping from loading dataset. Could you please check whether the dataset actually loaded or not ? You can use the below code for checking.\r\n\r\n`ds_train.take(1).get_single_element()\r\n`",
"It does appear to load the dataset and return the first element correctly. I'll try to run a model fit with it later to see if I'm getting the same results. I didn't want to move forward with it since I wasn't sure what exactly was going on with the entirety of the data. \r\n\r\nIn [2]: ds_train.take(1).get_single_element()\r\nOut[2]:\r\n({'attention_mask': <tf.Tensor: shape=(386,), dtype=int32, numpy=\r\n array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)>,\r\n 'input_ids': <tf.Tensor: shape=(386,), dtype=int32, numpy=\r\n array([ 101, 2043, 2106, 20773, 2707, 3352, 2759, 1029, 102,\r\n 20773, 21025, 19358, 22815, 1011, 5708, 1006, 1013, 12170,\r\n 23432, 29715, 3501, 29678, 12325, 29685, 1013, 10506, 1011,\r\n 10930, 2078, 1011, 2360, 1007, 1006, 2141, 2244, 1018,\r\n 1010, 3261, 1007, 2003, 2019, 2137, 3220, 1010, 6009,\r\n 1010, 2501, 3135, 1998, 3883, 1012, 2141, 1998, 2992,\r\n 1999, 5395, 1010, 3146, 1010, 2016, 2864, 1999, 2536,\r\n 4823, 1998, 5613, 6479, 2004, 1037, 2775, 1010, 1998,\r\n 3123, 2000, 4476, 1999, 1996, 2397, 4134, 2004, 2599,\r\n 3220, 1997, 1054, 1004, 1038, 2611, 1011, 2177, 10461,\r\n 1005, 1055, 2775, 1012, 3266, 2011, 2014, 2269, 1010,\r\n 25436, 22815, 1010, 1996, 2177, 2150, 2028, 1997, 1996,\r\n 2088, 1005, 1055, 2190, 1011, 4855, 2611, 2967, 1997,\r\n 2035, 2051, 1012, 2037, 14221, 2387, 1996, 2713, 1997,\r\n 20773, 1005, 1055, 2834, 2201, 1010, 20754, 1999, 2293,\r\n 1006, 2494, 1007, 1010, 2029, 2511, 2014, 2004, 1037,\r\n 3948, 3063, 4969, 1010, 3687, 2274, 8922, 2982, 1998,\r\n 2956, 1996, 4908, 2980, 2531, 2193, 1011, 2028, 3895,\r\n 1000, 4689, 1999, 2293, 1000, 1998, 1000, 3336, 2879,\r\n 1000, 1012, 102, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 
0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0],\r\n dtype=int32)>,\r\n 'qas_id': <tf.Tensor: shape=(), dtype=string, numpy=b'56be85543aeaaa14008c9063'>,\r\n 'feature_index': <tf.Tensor: shape=(), dtype=int64, numpy=0>,\r\n 'token_type_ids': <tf.Tensor: shape=(386,), dtype=int32, numpy=\r\n array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)>},\r\n {'end_positions': <tf.Tensor: shape=(), dtype=int64, numpy=78>,\r\n 'cls_index': <tf.Tensor: shape=(), dtype=int64, numpy=0>,\r\n 'is_impossible': <tf.Tensor: shape=(), dtype=int32, numpy=0>,\r\n 'start_positions': <tf.Tensor: shape=(), dtype=int64, numpy=75>,\r\n 'p_mask': <tf.Tensor: shape=(386,), dtype=int32, numpy=\r\n array([0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32)>})\r\n",
"Hi @teddy661 ,\r\n\r\nFor me it seems issue with TF2.13 which eventually resolved in tf-nightly.\r\n\r\nI have added `ds_train.take(1).get_single_element()` at last just to ensure dataset loaded. Because we are able to extract first element which indicates dataset is loaded indeed which addresses this issue.\r\n\r\nPlease check the training with tf-nightly and let us know if still have problem. If not please feel free to close the issue if resolved.\r\n\r\nThanks!",
"Looks ok in tf-nightly",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61351\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61351\">No</a>\n",
"Hi, I am unable to install tensorflow nightly. Any suggestions ?\r\n\r\nI am running tensor flow 2.13.1 ?"
] | 2023-07-21T18:21:26 | 2023-10-14T11:14:25 | 2023-07-30T00:24:12 | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
2.13.0
### Custom code
No
### OS platform and distribution
Rocky Linux 8.7
### Mobile device
_No response_
### Python version
3.11.4
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
11.8/8.9.0.131-1
### GPU model and memory
_No response_
### Current behavior?
A dataset written and loaded with Python 3.11.4 and TensorFlow 2.12.1 should also load in TensorFlow 2.13.
However, loading the dataset in TensorFlow 2.13 with Python 3.11.4 fails on Linux and Windows:
TensorFlow version: 2.13.0
Python version: 3.11.4
Download dev dataset...
Extract dev dataset...
Loading dev dataset...
[libprotobuf ERROR external/com_google_protobuf/src/google/protobuf/text_format.cc:337] Error parsing text-format tensorflow.data.experimental.DistributedSnapshotMetadata: 1:1: Invalid control characters encountered in text.
[libprotobuf ERROR external/com_google_protobuf/src/google/protobuf/text_format.cc:337] Error parsing text-format tensorflow.data.experimental.DistributedSnapshotMetadata: 1:3: Expected identifier, got: 14022746025082002701
Download train dataset...
Extract train dataset...
Loading train dataset...
[libprotobuf ERROR external/com_google_protobuf/src/google/protobuf/text_format.cc:337] Error parsing text-format tensorflow.data.experimental.DistributedSnapshotMetadata: 1:1: Invalid control characters encountered in text.
[libprotobuf ERROR external/com_google_protobuf/src/google/protobuf/text_format.cc:337] Error parsing text-format tensorflow.data.experimental.DistributedSnapshotMetadata: 1:3: Expected identifier, got: 10775564831112808841
### Standalone code to reproduce the issue
```shell
import io
import sys
from zipfile import ZipFile
import requests
import tensorflow as tf
print("TensorFlow version:", tf.__version__)
print("Python version:", sys.version.split()[0])
dev_url = (
"https://drive.google.com/uc?export=download&id=1-MJAgrTNZkaMpyBQLIgqqwM8gP9LKdDL"
)
print("Download dev dataset...")
r = requests.get(dev_url)
z = ZipFile(io.BytesIO(r.content))
print("Extract dev dataset...")
z.extractall()
print("\nLoading dev dataset...")
ds_dev = tf.data.Dataset.load("squadv2_dev_tf")
train_url = (
"https://drive.google.com/uc?export=download&id=1-NWGcJz0ZaFGFeHOPG2PKvn8gmf3MwKn"
)
print("Download train dataset...")
r = requests.get(train_url)
z = ZipFile(io.BytesIO(r.content))
print("Extract train dataset...")
z.extractall()
print("\nLoading train dataset...")
ds_train = tf.data.Dataset.load("squadv2_train_tf")
```
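Despite the libprotobuf errors above, the datasets may still have loaded; a minimal check is to pull one element from each:
```python
# If these succeed, the datasets were loaded despite the libprotobuf messages.
print(ds_dev.take(1).get_single_element())
print(ds_train.take(1).get_single_element())
```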
### Relevant log output
_No response_ | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61351/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61350 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61350/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61350/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61350/events | https://github.com/tensorflow/tensorflow/pull/61350 | 1,816,101,789 | PR_kwDOArmXAs5WHS_4 | 61,350 | Update to ACL 23.05.1, add ACL reorders | {
"login": "cfRod",
"id": 65665931,
"node_id": "MDQ6VXNlcjY1NjY1OTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/65665931?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cfRod",
"html_url": "https://github.com/cfRod",
"followers_url": "https://api.github.com/users/cfRod/followers",
"following_url": "https://api.github.com/users/cfRod/following{/other_user}",
"gists_url": "https://api.github.com/users/cfRod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cfRod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cfRod/subscriptions",
"organizations_url": "https://api.github.com/users/cfRod/orgs",
"repos_url": "https://api.github.com/users/cfRod/repos",
"events_url": "https://api.github.com/users/cfRod/events{/privacy}",
"received_events_url": "https://api.github.com/users/cfRod/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 390482148,
"node_id": "MDU6TGFiZWwzOTA0ODIxNDg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/awaiting%20review",
"name": "awaiting review",
"color": "bc3869",
"default": false,
"description": "Pull request awaiting review"
},
{
"id": 987666414,
"node_id": "MDU6TGFiZWw5ODc2NjY0MTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/ready%20to%20pull",
"name": "ready to pull",
"color": "2cd643",
"default": false,
"description": "PR ready for merge process"
},
{
"id": 1104829434,
"node_id": "MDU6TGFiZWwxMTA0ODI5NDM0",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:mkl",
"name": "comp:mkl",
"color": "0052cc",
"default": false,
"description": "MKL related issues"
},
{
"id": 1173072136,
"node_id": "MDU6TGFiZWwxMTczMDcyMTM2",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:XL",
"name": "size:XL",
"color": "adafea",
"default": false,
"description": "CL Change Size:Extra Large"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@penpornk ",
"Please help resolve conflicts / rebase as well. :)"
] | 2023-07-21T16:46:10 | 2023-07-24T22:46:43 | 2023-07-24T22:46:43 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61350",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61350",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61350.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61350.patch",
"merged_at": "2023-07-24T22:46:42"
} | Raising PR on behalf of @davsva01 for reverted PR https://github.com/tensorflow/tensorflow/pull/61123 | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61350/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61349 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61349/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61349/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61349/events | https://github.com/tensorflow/tensorflow/pull/61349 | 1,815,920,980 | PR_kwDOArmXAs5WGr2q | 61,349 | [TFLite] Align register_ref.cc TRANSPOSE_CONV max version to the register.cc one | {
"login": "Tessil",
"id": 21028116,
"node_id": "MDQ6VXNlcjIxMDI4MTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/21028116?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tessil",
"html_url": "https://github.com/Tessil",
"followers_url": "https://api.github.com/users/Tessil/followers",
"following_url": "https://api.github.com/users/Tessil/following{/other_user}",
"gists_url": "https://api.github.com/users/Tessil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tessil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tessil/subscriptions",
"organizations_url": "https://api.github.com/users/Tessil/orgs",
"repos_url": "https://api.github.com/users/Tessil/repos",
"events_url": "https://api.github.com/users/Tessil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tessil/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 750616506,
"node_id": "MDU6TGFiZWw3NTA2MTY1MDY=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:lite",
"name": "comp:lite",
"color": "0052cc",
"default": false,
"description": "TF Lite related issues"
},
{
"id": 1169364259,
"node_id": "MDU6TGFiZWwxMTY5MzY0MjU5",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:XS",
"name": "size:XS",
"color": "adafea",
"default": false,
"description": "CL Change Size: Extra Small"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @alankelly \r\nWill you have time to review this?\r\nThanks!",
"Hi @alankelly Can you please review this PR ? Thank you!",
"Hi @alankelly Can you please review this PR ? Thank you!",
"This was indirectly fixed by https://github.com/tensorflow/tensorflow/commit/56edb7fc3b9b5e6f1eee637ac17b7fdce395503e#diff-9fc5de52478bebd19cc4b48121d9a65dc396a9f59e15968d27d4c29379021c2a\r\n\r\nClosing the PR."
] | 2023-07-21T14:36:25 | 2023-09-14T10:29:54 | 2023-09-14T10:29:51 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61349",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61349",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61349.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61349.patch",
"merged_at": null
} | Hi,
Commit https://github.com/tensorflow/tensorflow/commit/1eb92f086261f37605986c521e5a14fd610568c6 added support for fused activations in `TRANSPOSE_CONV` and updated both the optimized and reference kernels, but it did not update `register_ref` for the operator.
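For reference, a minimal sketch (not part of this PR; "model.tflite" is a placeholder) of selecting the reference kernels from the Python API:
```python
import tensorflow as tf

# Run inference with the reference kernels instead of the optimized ones.
interpreter = tf.lite.Interpreter(
    model_path="model.tflite",
    experimental_op_resolver_type=tf.lite.experimental.OpResolverType.BUILTIN_REF,
)
interpreter.allocate_tensors()
```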
This PR fixes that, so the reference kernels (`OpResolverType.BUILTIN_REF`) can be used on models with `TRANSPOSE_CONV` ops that have a fused activation. | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61349/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61348 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61348/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61348/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61348/events | https://github.com/tensorflow/tensorflow/issues/61348 | 1,815,253,869 | I_kwDOArmXAs5sMpNt | 61,348 | esrgan re-convert to tflite fail | {
"login": "xufuji456",
"id": 20047648,
"node_id": "MDQ6VXNlcjIwMDQ3NjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/20047648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xufuji456",
"html_url": "https://github.com/xufuji456",
"followers_url": "https://api.github.com/users/xufuji456/followers",
"following_url": "https://api.github.com/users/xufuji456/following{/other_user}",
"gists_url": "https://api.github.com/users/xufuji456/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xufuji456/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xufuji456/subscriptions",
"organizations_url": "https://api.github.com/users/xufuji456/orgs",
"repos_url": "https://api.github.com/users/xufuji456/repos",
"events_url": "https://api.github.com/users/xufuji456/events{/privacy}",
"received_events_url": "https://api.github.com/users/xufuji456/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1463677878,
"node_id": "MDU6TGFiZWwxNDYzNjc3ODc4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:performance",
"name": "type:performance",
"color": "159b2e",
"default": false,
"description": "Performance Issue"
},
{
"id": 1661751498,
"node_id": "MDU6TGFiZWwxNjYxNzUxNDk4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TFLiteConverter",
"name": "TFLiteConverter",
"color": "bfdadc",
"default": false,
"description": "For issues related to TFLite converter"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | closed | false | {
"login": "pjpratik",
"id": 118897289,
"node_id": "U_kgDOBxY6iQ",
"avatar_url": "https://avatars.githubusercontent.com/u/118897289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pjpratik",
"html_url": "https://github.com/pjpratik",
"followers_url": "https://api.github.com/users/pjpratik/followers",
"following_url": "https://api.github.com/users/pjpratik/following{/other_user}",
"gists_url": "https://api.github.com/users/pjpratik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pjpratik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjpratik/subscriptions",
"organizations_url": "https://api.github.com/users/pjpratik/orgs",
"repos_url": "https://api.github.com/users/pjpratik/repos",
"events_url": "https://api.github.com/users/pjpratik/events{/privacy}",
"received_events_url": "https://api.github.com/users/pjpratik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "pjpratik",
"id": 118897289,
"node_id": "U_kgDOBxY6iQ",
"avatar_url": "https://avatars.githubusercontent.com/u/118897289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pjpratik",
"html_url": "https://github.com/pjpratik",
"followers_url": "https://api.github.com/users/pjpratik/followers",
"following_url": "https://api.github.com/users/pjpratik/following{/other_user}",
"gists_url": "https://api.github.com/users/pjpratik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pjpratik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjpratik/subscriptions",
"organizations_url": "https://api.github.com/users/pjpratik/orgs",
"repos_url": "https://api.github.com/users/pjpratik/repos",
"events_url": "https://api.github.com/users/pjpratik/events{/privacy}",
"received_events_url": "https://api.github.com/users/pjpratik/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @xufuji456 \r\n\r\nThe ESRGAN expects the input shape of `50x50x3`, hence we need to resize the input tensor before allocating the tensors.\r\n```\r\n# Load TFLite model and allocate tensors.\r\ninterpreter = tf.lite.Interpreter(model_path=esrgan_model_path)\r\ninterpreter.resize_tensor_input(input_details[0]['index'],[1,50,50,3])\r\ninterpreter.allocate_tensors()\r\n```\r\nPlease find the [gist](https://colab.research.google.com/gist/pjpratik/e31c0319abbb9efb94b6c2c11af90537/61348.ipynb) modified to be comptatible for TF2.13 and let us know if helps.\r\n\r\nThanks. ",
"@pjpratik Thank you for your reply.\r\n if I want to convert into 640x360 resolution, how should I do?\r\nI see here support re-convert model https://github.com/tensorflow/examples/blob/master/lite/examples/super_resolution/ml/super_resolution.ipynb",
"Hi @xufuji456 \r\n\r\nThe ESRGAN produces x4 Super Resolution Image from images. If you want to convert to 640x360 resolution, you may need to pass 160x90 image and modify the input accordingly.\r\n```\r\nconcrete_func.inputs[0].set_shape([1, 160, 90, 3])\r\n```\r\nand after the conversion, accordingly we need to to resize the tensor \r\n```\r\ninterpreter.resize_tensor_input(input_details[0]['index'],[1,50,50,3])\r\n```\r\nand the pass the input image matching the size.\r\n\r\nThanks.\r\n",
"Hello @pjpratik \r\nIt does work, thanks a lot. \r\nESRGAN has a good result, However, it takes a lot of times.\r\nIf I want to use SRGAN with x2 super resolution, for example, convert 640x360 into 1280x720.\r\nHow to convert the model into TensorFlow-lite, that running on platform of Android.",
"Hi @xufuji456 \r\n\r\nGlad it worked.\r\n\r\nIf you would like to use SRGAN, you have convert SRGAN into TFLite model using different conversion options like `from_saved_model` format and `from_keras_model`.\r\n\r\nPlease refer to this [document](https://www.tensorflow.org/lite/models/convert/convert_models) on different ways of model for converting to TFLite.\r\n\r\nOnce the model is converted with specified model input dimesnions, it can be used for inference on Android.\r\n\r\nThanks.",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61348\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61348\">No</a>\n"
] | 2023-07-21T06:51:14 | 2023-08-01T13:41:55 | 2023-08-01T13:41:53 | NONE | null | null | null | ### 1. System information
- OS Platform and Distribution: macOS 12.2.1; Apple M1; MacBook Pro
- TensorFlow installation : pip3 install tensorflow
- TensorFlow library: 2.13.0
### 2. Code
Provide code to help us reproduce your issues using one of the following options:
#### Option A: Reference colab notebooks
1) Reference [TensorFlow Model Colab](https://colab.research.google.com/gist/ymodak/e96a4270b953201d5362c61c1e8b78aa/tensorflow-datasets.ipynb?authuser=1): Demonstrate how to build your TF model.
2) Reference [TensorFlow Lite Model Colab](https://colab.research.google.com/gist/ymodak/0dfeb28255e189c5c48d9093f296e9a8/tensorflow-lite-debugger-colab.ipynb): Demonstrate how to convert your TF model to a TF Lite model (with quantization, if used) and run TFLite Inference (if possible).
```
convert tflite: https://github.com/tensorflow/examples/blob/master/lite/examples/super_resolution/ml/super_resolution.ipynb
```
#### Option B: Paste your code here or provide a link to a custom end-to-end colab
```
test demo: https://github.com/tensorflow/examples/tree/master/lite/examples/super_resolution
```
### 3. Failure after conversion
If the conversion is successful, but the generated model is wrong, then state what is wrong:
- Model produces some errors
### 4. (optional) RNN conversion support
model is esrgan
### 5. (optional) Any other info / logs
case 1: with optimization enabled (`tf.lite.Optimize.DEFAULT`), loading the model fails with: Didn't find op for builtin opcode 'DEQUANTIZE' version '5'
case 2: with optimization disabled and set_shape 50x50, running fails with: Something went wrong when copying input buffer to input tensor
case 3: with optimization disabled and set_shape 640x360, running crashes with: signal 11 (SIGSEGV): stack pointer is in a non-existent map; likely due to stack overflow. The crash is in SuperResolution.cpp->DoSuperResolution(), at the TfLiteInterpreterAllocateTensors call.
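For reference, here is a minimal sketch (not from the original report) of the resize-before-allocate flow suggested in the comments on this issue; it adds the `get_input_details()` call that the quoted snippet omits. The model path and the 1x50x50x3 input are placeholders.
```python
import numpy as np
import tensorflow as tf

# Sketch only: resize the interpreter input to the image size you actually feed,
# then allocate tensors, as suggested in the comments above.
interpreter = tf.lite.Interpreter(model_path="ESRGAN.tflite")  # placeholder path
input_details = interpreter.get_input_details()
interpreter.resize_tensor_input(input_details[0]["index"], [1, 50, 50, 3])
interpreter.allocate_tensors()

lr_image = np.zeros((1, 50, 50, 3), dtype=np.float32)  # placeholder low-res input
interpreter.set_tensor(input_details[0]["index"], lr_image)
interpreter.invoke()
output_details = interpreter.get_output_details()
sr_image = interpreter.get_tensor(output_details[0]["index"])  # upscaled output
```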
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61348/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61347 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61347/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61347/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61347/events | https://github.com/tensorflow/tensorflow/issues/61347 | 1,815,228,430 | I_kwDOArmXAs5sMjAO | 61,347 | AssertionError: Tried to export a function which references an 'untracked' resource. TensorFlow objects(e.g. tf.Variable) | {
"login": "gymoon10",
"id": 44194558,
"node_id": "MDQ6VXNlcjQ0MTk0NTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/44194558?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gymoon10",
"html_url": "https://github.com/gymoon10",
"followers_url": "https://api.github.com/users/gymoon10/followers",
"following_url": "https://api.github.com/users/gymoon10/following{/other_user}",
"gists_url": "https://api.github.com/users/gymoon10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gymoon10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gymoon10/subscriptions",
"organizations_url": "https://api.github.com/users/gymoon10/orgs",
"repos_url": "https://api.github.com/users/gymoon10/repos",
"events_url": "https://api.github.com/users/gymoon10/events{/privacy}",
"received_events_url": "https://api.github.com/users/gymoon10/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1097546578,
"node_id": "MDU6TGFiZWwxMDk3NTQ2NTc4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:keras",
"name": "comp:keras",
"color": "0052cc",
"default": false,
"description": "Keras related issues"
},
{
"id": 4511033337,
"node_id": "LA_kwDOArmXAs8AAAABDODn-Q",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.10",
"name": "TF 2.10",
"color": "C15088",
"default": false,
"description": ""
}
] | closed | false | {
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @gymoon10 \r\n\r\nIn order to expedite the trouble-shooting process, please provide the complete code snippet to reproduce the issue reported.\r\n\r\nThank you.",
"Hi @Varsha-anjanappa \r\n\r\nI edited the code part as you mentioned. There is no problem with compiling the model and applying the .fit() method, but an error is occurring when saving the trained model with h5 format. I also confirmed that there was no problem when I didn't use tf.Variable().\r\n\r\nThank you for your hard work.",
"Hi @gymoon10 ,\r\n\r\nIf you are using custom layer and its __init__ method has `non_python` arguments then in order to make it serializable/deserializable you need to explicitly override get_config() and from_config() methods.\r\n\r\nPlease refer the attached [documentation](https://www.tensorflow.org/guide/keras/serialization_and_saving#custom_objects) for more details.\r\n\r\nYou need to change the code to something like below.\r\n\r\n```\r\[email protected]_keras_serializable()\r\nclass MDTA(keras.layers.Layer):\r\n '''***IMPORTANT*** - The channels must be zero when divided by num_heads'''\r\n def __init__(self, num_heads):\r\n super(MDTA, self).__init__()\r\n self.num_heads = num_heads\r\n self.temperature = tf.Variable([[[[1.]] for _ in range(self.num_heads)]], shape=[None, self.num_heads, 1, 1], trainable=True)\r\n\r\n def get_config(self):\r\n base_config = super().get_config()\r\n config = {\r\n \"temperature\": keras.saving.serialize_keras_object(self.temperature),\r\n }\r\n return {**base_config, **config}\r\n\r\n @classmethod\r\n def from_config(cls, config):\r\n custom_config = config.pop(\"temperature\")\r\n temperature = keras.saving.deserialize_keras_object(custom_config)\r\n return cls(sublayer, **config)\r\n```\r\nPlease try this and let us know.Thanks!\r\n\r\n",
"@gymoon10 , Please make a note that the above code suitable for .`keras` format. ",
"I want to save the full network, not the self.temperature alone",
"@gymoon10 ,\r\n\r\nI have given an example of class MDTA. You need to do same for all subclassed Classes in your code in order to get the complete model saved. This is needed for subclassed models/layers only.",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61347\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61347\">No</a>\n"
] | 2023-07-21T06:26:47 | 2023-09-19T01:47:43 | 2023-09-19T01:47:40 | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.10
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
3.8
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
As shown in the code below, **MDTA** multiplies **attn** by **temperature**, and **temperature** is defined with tf.Variable().
The model that uses this attention module trains normally (model.fit() runs without errors), but an AssertionError occurs when saving the model. I would appreciate advice on how to make the **temperature** variable trackable.
### Standalone code to reproduce the issue
```shell
# import tensorflow.compat.v2 as tf
import tensorflow as tf
import keras
from keras import backend
from keras.applications import imagenet_utils
from keras.engine import training
from keras.layers import VersionAwareLayers
from keras.utils import data_utils
from keras.utils import layer_utils
from keras.layers import Layer, Activation, ReLU, Concatenate, Conv2D, Add, Dense, MaxPool2D, AvgPool2D, Flatten, multiply, Softmax
from keras.layers import Dropout, Dense, GlobalAveragePooling2D, Input, BatchNormalization, DepthwiseConv2D, ZeroPadding2D, LayerNormalization
from tensorflow.keras import backend as K
from keras.models import Model
#import tensorflow.keras
# 정상적으로 작동 (temperature제외)
# isort: off
from tensorflow.python.platform import tf_logging as logging
from tensorflow.python.util.tf_export import keras_export
BASE_WEIGHT_PATH = (
"https://storage.googleapis.com/tensorflow/" "keras-applications/mobilenet/"
)
@keras_export(
"keras.applications.mobilenet.MobileNet", "keras.applications.MobileNet"
)
def MobileNet(
input_shape=None,
alpha=1.0,
depth_multiplier=1,
dropout=1e-3,
include_top=True,
weights="imagenet",
input_tensor=None,
pooling=None,
classes=1000,
classifier_activation="softmax",
**kwargs,
):
# global layers
# if "layers" in kwargs:
# layers = kwargs.pop("layers")
# else:
# layers = VersionAwareLayers()
if kwargs:
raise ValueError(f"Unknown argument(s): {(kwargs,)}")
if not (weights in {"imagenet", None} or tf.io.gfile.exists(weights)):
raise ValueError(
"The `weights` argument should be either "
"`None` (random initialization), `imagenet` "
"(pre-training on ImageNet), "
"or the path to the weights file to be loaded. "
f"Received weights={weights}"
)
if weights == "imagenet" and include_top and classes != 1000:
raise ValueError(
'If using `weights` as `"imagenet"` with `include_top` '
"as true, `classes` should be 1000. "
f"Received classes={classes}"
)
# Determine proper input shape and default size.
if input_shape is None:
default_size = 224
else:
if backend.image_data_format() == "channels_first":
rows = input_shape[1]
cols = input_shape[2]
else:
rows = input_shape[0]
cols = input_shape[1]
if rows == cols and rows in [128, 160, 192, 224]:
default_size = rows
else:
default_size = 224
input_shape = imagenet_utils.obtain_input_shape(
input_shape,
default_size=default_size,
min_size=32,
data_format=backend.image_data_format(),
require_flatten=include_top,
weights=weights,
)
if backend.image_data_format() == "channels_last":
row_axis, col_axis = (0, 1)
else:
row_axis, col_axis = (1, 2)
rows = input_shape[row_axis]
cols = input_shape[col_axis]
if weights == "imagenet":
if depth_multiplier != 1:
raise ValueError(
"If imagenet weights are being loaded, "
"depth multiplier must be 1. "
f"Received depth_multiplier={depth_multiplier}"
)
if alpha not in [0.25, 0.50, 0.75, 1.0]:
raise ValueError(
"If imagenet weights are being loaded, "
"alpha can be one of"
"`0.25`, `0.50`, `0.75` or `1.0` only. "
f"Received alpha={alpha}"
)
if rows != cols or rows not in [128, 160, 192, 224]:
rows = 224
logging.warning(
"`input_shape` is undefined or non-square, "
"or `rows` is not in [128, 160, 192, 224]. "
"Weights for input shape (224, 224) will be "
"loaded as the default."
)
if input_tensor is None:
img_input = Input(shape=input_shape)
else:
if not backend.is_keras_tensor(input_tensor):
img_input = Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
num_heads = 4
expansion_factor = 3
x = _conv_block(img_input, 32, alpha, strides=(2, 2))
x = _transformer_block(x, num_heads, expansion_factor)
x = _depthwise_conv_block(x, 64, alpha, depth_multiplier, block_id=1)
x = _transformer_block(x, num_heads, expansion_factor)
x = _depthwise_conv_block(
x, 128, alpha, depth_multiplier, strides=(2, 2), block_id=2)
x = _transformer_block(x, num_heads, expansion_factor)
x = _depthwise_conv_block(x, 128, alpha, depth_multiplier, block_id=3)
x = _transformer_block(x, num_heads, expansion_factor)
x = _depthwise_conv_block(
x, 256, alpha, depth_multiplier, strides=(2, 2), block_id=4)
x = _transformer_block(x, num_heads, expansion_factor)
x = _depthwise_conv_block(x, 256, alpha, depth_multiplier, block_id=5)
x = _transformer_block(x, num_heads, expansion_factor)
x = _depthwise_conv_block(
x, 512, alpha, depth_multiplier, strides=(2, 2), block_id=6)
#x = _transformer_block(x, num_heads, expansion_factor)
x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=7)
#x = _transformer_block(x, num_heads, expansion_factor)
x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=8)
#x = _transformer_block(x, num_heads, expansion_factor)
x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=9)
#x = _transformer_block(x, num_heads, expansion_factor)
x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=10)
#x = _transformer_block(x, num_heads, expansion_factor)
x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=11)
#x = _transformer_block(x, num_heads, expansion_factor)
x = _depthwise_conv_block(
x, 1024, alpha, depth_multiplier, strides=(2, 2), block_id=12)
#x = _transformer_block(x, num_heads, expansion_factor)
x = _depthwise_conv_block(x, 1024, alpha, depth_multiplier, block_id=13)
#x = _transformer_block(x, num_heads, expansion_factor)
if include_top:
x = layers.GlobalAveragePooling2D(keepdims=True)(x)
x = layers.Dropout(dropout, name="dropout")(x)
x = layers.Conv2D(classes, (1, 1), padding="same", name="conv_preds")(x)
x = layers.Reshape((classes,), name="reshape_2")(x)
imagenet_utils.validate_activation(classifier_activation, weights)
x = layers.Activation(
activation=classifier_activation, name="predictions"
)(x)
else:
if pooling == "avg":
x = GlobalAveragePooling2D()(x)
elif pooling == "max":
x = GlobalMaxPooling2D()(x)
# Ensure that the model takes into account
# any potential predecessors of `input_tensor`.
if input_tensor is not None:
inputs = layer_utils.get_source_inputs(input_tensor)
else:
inputs = img_input
# Create model.
model = training.Model(inputs, x, name="mobilenet_%0.2f_%s" % (alpha, rows))
# Load weights.
if weights == "imagenet":
if alpha == 1.0:
alpha_text = "1_0"
elif alpha == 0.75:
alpha_text = "7_5"
elif alpha == 0.50:
alpha_text = "5_0"
else:
alpha_text = "2_5"
if include_top:
model_name = "mobilenet_%s_%d_tf.h5" % (alpha_text, rows)
weight_path = BASE_WEIGHT_PATH + model_name
weights_path = data_utils.get_file(
model_name, weight_path, cache_subdir="models"
)
else:
model_name = "mobilenet_%s_%d_tf_no_top.h5" % (alpha_text, rows)
weight_path = BASE_WEIGHT_PATH + model_name
weights_path = data_utils.get_file(
model_name, weight_path, cache_subdir="models"
)
model.load_weights(weights_path, by_name=True)
elif weights is not None:
model.load_weights(weights, by_name=True)
return model
class MDTA(keras.layers.Layer):
'''***IMPORTANT*** - The channels must be zero when divided by num_heads'''
def __init__(self, num_heads):
super(MDTA, self).__init__()
self.num_heads = num_heads
#self.temperature = tf.Variable([[[[1.]] for _ in range(self.num_heads)]], shape=[None, self.num_heads, 1, 1], trainable=True)
def build(self, inputs):
'''(N, H, W, C) -> (N, H, W, C)
Output of MDTA feature should be added to input feature x'''
b, h, w, c = inputs.shape
# -------------------- Layers --------------------
qkv = Conv2D(filters=c*3, kernel_size=1, use_bias=False)
qkv_conv = Conv2D(c*3, kernel_size=3, padding='same', groups=c*3, use_bias=False)
project_out = Conv2D(filters=c, kernel_size=1, use_bias=False)
temperature = tf.Variable([[[[1.]] for _ in range(self.num_heads)]], shape=[None, self.num_heads, 1, 1], trainable=True)
# -------------------- forward --------------------
q, k, v = tf.split(qkv_conv(qkv(inputs)), num_or_size_splits=3, axis=-1)
# divide the # of channels into heads & learn separate attention map
q = tf.reshape(q, [-1, self.num_heads, c//self.num_heads, h * w]) # (N, num_heads, C/num_heads, HW)
k = tf.reshape(k, [-1, self.num_heads, c//self.num_heads, h * w])
v = tf.reshape(v, [-1, self.num_heads, c//self.num_heads, h * w])
q, k = tf.nn.l2_normalize(q, axis=-1), tf.nn.l2_normalize(k, axis=-1)
# CxC Attention map instead of HWxHW (when num_heads=1)
attn = tf.matmul(q, k, transpose_b=True)
attn = multiply([attn, temperature])
attn = Softmax(axis=-1)(attn)
out = tf.matmul(attn, v)
shape = [tf.shape(out)[k] for k in range(4)] # [Batch, num_heads, c/num_heads, H*W]
out = tf.reshape(out, [shape[0], h, w, shape[1]*shape[2]])
out = project_out(out) # attn*v: (N, num_heads, C/num_heads, HW)
return out
def __call__(self, inputs):
return self.build(inputs)
class GDFN(keras.layers.Layer):
def __init__(self):
super(GDFN, self).__init__()
self.expansion_factor = 2
def build(self, inputs):
'''(N, H, W, C) -> (N, H, W, C) with expansion_factor=r
Output of GDFN feature should be added to input feature x'''
b, h, w, c = inputs.shape
hidden_channels = int(c * self.expansion_factor) # channel expansion
# -------------------- Layers --------------------
project_in = Conv2D(hidden_channels * 2, kernel_size=1, use_bias=False)
conv = Conv2D(hidden_channels * 2, kernel_size=3, padding='same',
groups=hidden_channels * 2, use_bias=False)
project_out = Conv2D(c, kernel_size=1, use_bias=False)
# -------------------- Forward --------------------
x = project_in(inputs) # (N, H, W, 2Cr)
x = conv(x) # (N, H, W, 2Cr)
x1, x2 = tf.split(x, num_or_size_splits=2, axis=-1) # (N, H, W, Cr), (N, H, W, Cr)
# Gating: the element-wise product of 2 parallel paths of linear transformation layers
out = ReLU()(x1) * x2 # (N, H, W, Cr)
out = project_out(out) # (N, H, W, Cr)
return out
def __call__(self, inputs):
return self.build(inputs)
def _transformer_block(inputs, num_heads, expansion_factor):
'''(N, H, W, C) -> (N, H, W, C)'''
shape = [tf.shape(inputs)[k] for k in range(4)]
b, h, w, c = inputs.shape[0], inputs.shape[1], inputs.shape[2], inputs.shape[3]
assert c % num_heads == 0
norm1 = LayerNormalization() # default: axis=-1
attn = MDTA(num_heads)
norm2 = LayerNormalization()
ffn = GDFN()
# Add MDTA output feature
inputs_norm1 = norm1(tf.reshape(inputs, [-1, h*w, c]))
inputs_norm1 = tf.reshape(inputs_norm1, [-1, h, w, c])
inputs = inputs + attn(inputs_norm1)
# ADD GDFN output feature
inputs_norm2 = norm2(tf.reshape(inputs, [-1, h*w, c]))
inputs_norm2 = tf.reshape(inputs_norm2, [-1, h, w, c])
x = inputs + ffn(inputs_norm2)
return x
def _conv_block(inputs, filters, alpha, kernel=(3, 3), strides=(1, 1)):
channel_axis = 1 if backend.image_data_format() == "channels_first" else -1
filters = int(filters * alpha)
x = Conv2D(
filters,
kernel,
padding="same",
use_bias=False,
strides=strides,
name="conv1",
)(inputs)
x = BatchNormalization(axis=channel_axis, name="conv1_bn")(x)
return ReLU(6.0, name="conv1_relu")(x)
def _depthwise_conv_block(
inputs,
pointwise_conv_filters,
alpha,
depth_multiplier=1,
strides=(1, 1),
block_id=1,
):
channel_axis = 1 if backend.image_data_format() == "channels_first" else -1
pointwise_conv_filters = int(pointwise_conv_filters * alpha)
if strides == (1, 1):
x = inputs
else:
x = ZeroPadding2D(
((0, 1), (0, 1)), name="conv_pad_%d" % block_id
)(inputs)
x = DepthwiseConv2D(
(3, 3),
padding="same" if strides == (1, 1) else "valid",
depth_multiplier=depth_multiplier,
strides=strides,
use_bias=False,
name="conv_dw_%d" % block_id,
)(x)
x = BatchNormalization(
axis=channel_axis, name="conv_dw_%d_bn" % block_id
)(x)
x = ReLU(6.0, name="conv_dw_%d_relu" % block_id)(x)
x = Conv2D(
pointwise_conv_filters,
(1, 1),
padding="same",
use_bias=False,
strides=(1, 1),
name="conv_pw_%d" % block_id,
)(x)
x = BatchNormalization(
axis=channel_axis, name="conv_pw_%d_bn" % block_id
)(x)
return ReLU(6.0, name="conv_pw_%d_relu" % block_id)(x)
def gen_mobilenetv1_mdta(input_shape, dropout_rate, num_class):
if input_shape==(224, 224, 3):
weights = 'imagenet'
else:
weights = None
base_model = MobileNet(weights=weights,
include_top=False,
input_tensor=Input(input_shape),
input_shape=input_shape)
base_model.trainable = True
x = base_model.output
head_layer = tf.keras.Sequential([tf.keras.layers.GlobalAveragePooling2D(name='simple_classifier_pooling'),
tf.keras.layers.Dropout(dropout_rate, name='simple_classifier_dropout'),
tf.keras.layers.Dense(512, activation='relu', name='simple_classifier_dense1'),
tf.keras.layers.Dense(num_class, activation='softmax'),])
predictions = head_layer(x)
# this is the model we will train
model = tf.keras.models.Model(inputs=base_model.input, outputs=predictions)
#print(model)
return model
input_shape = (768, 768, 3)
x = Input(input_shape)
model = gen_mobilenetv1_mdta(input_shape, 0.3, 6)
out = model(x)
save_path = 'D:/model_mdta.h5'
model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['acc'])
model.save(save_path )
```
### Relevant log output
In the vscode-terminal
```shell
AssertionError: Tried to export a function which references an 'untracked' resource. TensorFlow objects (e.g. tf.Variable) captured by functions must be 'tracked' by assigning them to an attribute of a tracked object or assigned to an attribute of the main object directly. See the information below:
Function name = b'__inference_signature_wrapper_18418'
Captured Tensor = <ResourceHandle(name="Resource-40-at-0x55f74d07bcf0", device="/job:localhost/replica:0/task:0/device:CPU:0", container="Anonymous", type="tensorflow::Var", dtype and shapes : "[ DType enum: 1, Shape: [?,4,1,1] ]")>
Trackable referencing this tensor = <tf.Variable 'Variable:0' shape=(None, 4, 1, 1) dtype=float32>
Internal Tensor = Tensor("18172:0", shape=(), dtype=resource)
During handling of the above exception, another exception occurred:
```
In the Jupyter-Noetebook
```shell
ValueError Traceback (most recent call last)
/tmp/ipykernel_828829/1178680124.py in <module>
1 model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['acc'])
----> 2 model.save('D:/model_mdta.h5')
~/conda/envs/hrvi2/lib/python3.8/site-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
~/conda/envs/hrvi2/lib/python3.8/json/encoder.py in encode(self, o)
197 # exceptions aren't as detailed. The list call should be roughly
198 # equivalent to the PySequence_Fast that ''.join() would do.
--> 199 chunks = self.iterencode(o, _one_shot=True)
200 if not isinstance(chunks, (list, tuple)):
201 chunks = list(chunks)
~/conda/envs/hrvi2/lib/python3.8/json/encoder.py in iterencode(self, o, _one_shot)
255 self.key_separator, self.item_separator, self.sort_keys,
256 self.skipkeys, _one_shot)
--> 257 return _iterencode(o, 0)
258
259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,
ValueError: Unable to serialize VariableSpec(shape=(None, 4, 1, 1), dtype=tf.float32, trainable=True, alias_id=None) to JSON, because the TypeSpec class <class 'tensorflow.python.ops.resource_variable_ops.VariableSpec'> has not been registered.
```
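For reference, a minimal sketch (not the reporter's code) of one way to make `temperature` a tracked variable: create it in `build()` with `add_weight` and assign it to an attribute of the layer, so saving does not hit the "untracked resource" assertion. The fixed (1, num_heads, 1, 1) shape and the trivial `call()` body are assumptions for illustration; the MDTA attention math from the report is omitted.
```python
import tensorflow as tf
from tensorflow import keras

class MDTA(keras.layers.Layer):
    def __init__(self, num_heads, **kwargs):
        super().__init__(**kwargs)
        self.num_heads = num_heads

    def build(self, input_shape):
        # add_weight registers the variable on the layer, so it is tracked
        # and serialized when the model is saved.
        self.temperature = self.add_weight(
            name="temperature",
            shape=(1, self.num_heads, 1, 1),
            initializer="ones",
            trainable=True,
        )
        super().build(input_shape)

    def call(self, inputs):
        # The MDTA attention computation would go here and multiply the
        # attention map by self.temperature before the softmax.
        return inputs

    def get_config(self):
        config = super().get_config()
        config.update({"num_heads": self.num_heads})
        return config
```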
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61347/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61346 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61346/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61346/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61346/events | https://github.com/tensorflow/tensorflow/pull/61346 | 1,814,843,321 | PR_kwDOArmXAs5WDEWW | 61,346 | [INTEL oneDNN][Bug Fix]Use correct data type for dim size | {
"login": "gzmkl",
"id": 29215195,
"node_id": "MDQ6VXNlcjI5MjE1MTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/29215195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gzmkl",
"html_url": "https://github.com/gzmkl",
"followers_url": "https://api.github.com/users/gzmkl/followers",
"following_url": "https://api.github.com/users/gzmkl/following{/other_user}",
"gists_url": "https://api.github.com/users/gzmkl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gzmkl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gzmkl/subscriptions",
"organizations_url": "https://api.github.com/users/gzmkl/orgs",
"repos_url": "https://api.github.com/users/gzmkl/repos",
"events_url": "https://api.github.com/users/gzmkl/events{/privacy}",
"received_events_url": "https://api.github.com/users/gzmkl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 390482148,
"node_id": "MDU6TGFiZWwzOTA0ODIxNDg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/awaiting%20review",
"name": "awaiting review",
"color": "bc3869",
"default": false,
"description": "Pull request awaiting review"
},
{
"id": 987666414,
"node_id": "MDU6TGFiZWw5ODc2NjY0MTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/ready%20to%20pull",
"name": "ready to pull",
"color": "2cd643",
"default": false,
"description": "PR ready for merge process"
},
{
"id": 1169365494,
"node_id": "MDU6TGFiZWwxMTY5MzY1NDk0",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:M",
"name": "size:M",
"color": "adafea",
"default": false,
"description": "CL Change Size: Medium"
},
{
"id": 1478826728,
"node_id": "MDU6TGFiZWwxNDc4ODI2NzI4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:core",
"name": "comp:core",
"color": "024391",
"default": false,
"description": "issues related to core part of tensorflow"
}
] | closed | false | {
"login": "penpornk",
"id": 38085909,
"node_id": "MDQ6VXNlcjM4MDg1OTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/38085909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/penpornk",
"html_url": "https://github.com/penpornk",
"followers_url": "https://api.github.com/users/penpornk/followers",
"following_url": "https://api.github.com/users/penpornk/following{/other_user}",
"gists_url": "https://api.github.com/users/penpornk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/penpornk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/penpornk/subscriptions",
"organizations_url": "https://api.github.com/users/penpornk/orgs",
"repos_url": "https://api.github.com/users/penpornk/repos",
"events_url": "https://api.github.com/users/penpornk/events{/privacy}",
"received_events_url": "https://api.github.com/users/penpornk/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "penpornk",
"id": 38085909,
"node_id": "MDQ6VXNlcjM4MDg1OTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/38085909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/penpornk",
"html_url": "https://github.com/penpornk",
"followers_url": "https://api.github.com/users/penpornk/followers",
"following_url": "https://api.github.com/users/penpornk/following{/other_user}",
"gists_url": "https://api.github.com/users/penpornk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/penpornk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/penpornk/subscriptions",
"organizations_url": "https://api.github.com/users/penpornk/orgs",
"repos_url": "https://api.github.com/users/penpornk/repos",
"events_url": "https://api.github.com/users/penpornk/events{/privacy}",
"received_events_url": "https://api.github.com/users/penpornk/received_events",
"type": "User",
"site_admin": false
},
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"All 3 CI failures are unrelated.\r\n* [Py+CPP Ubuntu CPU](https://source.cloud.google.com/results/invocations/3fdb42ac-da26-4a19-a735-e8f1aa45c9e8/log): Tests timed out.\r\n* [Py+CPP Ubuntu GPU](https://source.cloud.google.com/results/invocations/5280cdca-cf13-4318-a10c-2eb1b0beabaa/log): Tests timed out.\r\n* [AMD ROCm](http://ml-ci.amd.com:21096/blue/organizations/jenkins/tensorflow%2Fgithub-prs-upstream-master%2FAMD-ROCm-Community-CI-Build/detail/PR-61346/2/pipeline): Error unrelated to this PR.\r\n```\r\nAnalyzing: target //tensorflow/tools/pip_package:build_pip_package (648 packages loaded, 30206 targets configured)\r\n\r\nERROR: /workspace/tensorflow/compiler/xla/service/gpu/BUILD:853:11: in deps attribute of cc_library rule //tensorflow/compiler/xla/service/gpu:gpu_executable: Label '//tensorflow/tsl/platform:random' is duplicated\r\n\r\nERROR: /workspace/tensorflow/compiler/xla/service/gpu/BUILD:853:11: Analysis of target '//tensorflow/compiler/xla/service/gpu:gpu_executable' failed\r\n```"
] | 2023-07-20T21:51:39 | 2023-07-25T08:49:48 | 2023-07-25T08:49:48 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61346",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61346",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61346.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61346.patch",
"merged_at": "2023-07-25T08:49:48"
} | This is a follow-up PR of https://github.com/tensorflow/tensorflow/pull/60568
This PR makes sure that every use of a dim size is tied to the int64_t data type (versus int)
within all oneDNN kernel op implementations. | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61346/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61345 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61345/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61345/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61345/events | https://github.com/tensorflow/tensorflow/issues/61345 | 1,814,825,359 | I_kwDOArmXAs5sLAmP | 61,345 | Cannot build tensorflow2.7-gpu version with bazel3.7.2 | {
"login": "xiaxia-wang",
"id": 45707181,
"node_id": "MDQ6VXNlcjQ1NzA3MTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/45707181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaxia-wang",
"html_url": "https://github.com/xiaxia-wang",
"followers_url": "https://api.github.com/users/xiaxia-wang/followers",
"following_url": "https://api.github.com/users/xiaxia-wang/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaxia-wang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiaxia-wang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaxia-wang/subscriptions",
"organizations_url": "https://api.github.com/users/xiaxia-wang/orgs",
"repos_url": "https://api.github.com/users/xiaxia-wang/repos",
"events_url": "https://api.github.com/users/xiaxia-wang/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiaxia-wang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473173351,
"node_id": "MDU6TGFiZWw0NzMxNzMzNTE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:build/install",
"name": "type:build/install",
"color": "159b2e",
"default": false,
"description": "Build and install issues"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1205615612,
"node_id": "MDU6TGFiZWwxMjA1NjE1NjEy",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/subtype:%20ubuntu/linux",
"name": "subtype: ubuntu/linux",
"color": "b619ea",
"default": false,
"description": "Ubuntu/Linux Build/Installation Issues"
},
{
"id": 3531398540,
"node_id": "LA_kwDOArmXAs7SfN2M",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.7",
"name": "TF 2.7",
"color": "77237D",
"default": false,
"description": "Issues related to TF 2.7.0"
}
] | closed | false | {
"login": "tilakrayal",
"id": 81610181,
"node_id": "MDQ6VXNlcjgxNjEwMTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/81610181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tilakrayal",
"html_url": "https://github.com/tilakrayal",
"followers_url": "https://api.github.com/users/tilakrayal/followers",
"following_url": "https://api.github.com/users/tilakrayal/following{/other_user}",
"gists_url": "https://api.github.com/users/tilakrayal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tilakrayal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tilakrayal/subscriptions",
"organizations_url": "https://api.github.com/users/tilakrayal/orgs",
"repos_url": "https://api.github.com/users/tilakrayal/repos",
"events_url": "https://api.github.com/users/tilakrayal/events{/privacy}",
"received_events_url": "https://api.github.com/users/tilakrayal/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "tilakrayal",
"id": 81610181,
"node_id": "MDQ6VXNlcjgxNjEwMTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/81610181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tilakrayal",
"html_url": "https://github.com/tilakrayal",
"followers_url": "https://api.github.com/users/tilakrayal/followers",
"following_url": "https://api.github.com/users/tilakrayal/following{/other_user}",
"gists_url": "https://api.github.com/users/tilakrayal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tilakrayal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tilakrayal/subscriptions",
"organizations_url": "https://api.github.com/users/tilakrayal/orgs",
"repos_url": "https://api.github.com/users/tilakrayal/repos",
"events_url": "https://api.github.com/users/tilakrayal/events{/privacy}",
"received_events_url": "https://api.github.com/users/tilakrayal/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@xiaxia-wang,\r\n\r\nI was able to clone the tensorflow repository without any problem on Ubuntu. I observed that you are using GCC 11, CUDA 12.1 and cuDNN 8.9 which is incompatible with TF v2.7.0. And also TF v2.7.0 is a pretty older version, please try to install the latest stable version.\r\n\r\nCould you please create a virtual environment and try to install the tensorflow as mentioned in this official document [link](https://www.tensorflow.org/install/source) and have a look at the [compatible](https://www.tensorflow.org/install/source#gpu) tested build configurations as well. Please find the attached screenshot for reference.\r\n\r\n\r\n",
"Thanks for your reply. \r\n\r\nI was able to clone the tensorflow repository without any problem. The checkout process also works well. My problem arises in the bazel build process. I explicitly follow every step mentioned in the official document [link](https://www.tensorflow.org/install/source), but when I run `bazel build --config=cuda //tensorflow/tools/pip_package:build_pip_package`, it turns out as the issue provided above. \r\n\r\nI have also read the table of compatible tests, but it shows the latest tested TF-GPU version is 2.6.0. Could you please explain more about the incompatibility? Is my tensorflow too old for the cuda version? If I remember it correctly, higher cuda version is compatible with lower requirements.\r\n\r\n",
"@xiaxia-wang,\r\nIs there any specific reason to install tensorflow v2.7, because as mentioned above v2.7 is the pretty older version. It's unlikely for TF 2.7 version to receive any bug fixes except when we have security patches. There is a high possibility that this was fixed with later TF versions. \r\n\r\nWe request please create a virtual environment and try to install the tensorflow latest stable v2.13 as mentioned in this official document [link](https://www.tensorflow.org/install/source) and have a look at the [compatible](https://www.tensorflow.org/install/source#gpu) tested build configurations as well. Thank you!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61345\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61345\">No</a>\n",
"Hi! i'm encountering the exact issue, may i ask have you resolved this?",
"Unfortunately, no. They only told me to use newer versions of tensorflow...In the end I turned to pytorch and reconstructed my program, which for me is better and easier to use."
] | 2023-07-20T21:31:05 | 2023-12-19T08:15:06 | 2023-08-09T01:52:12 | NONE | null | null | null | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
tf2.7
### Custom code
No
### OS platform and distribution
Linux Ubuntu 22.04
### Mobile device
Linux Ubuntu 22.04
### Python version
3.8
### Bazel version
3.7.2
### GCC/compiler version
gcc 11
### CUDA/cuDNN version
12.1/8.9
### GPU model and memory
Quadro RTX 8000, 48GB
### Current behavior?
Context: I can use the exact same source code to successfully build the CPU-only version, and I would like to build a GPU version from the same code.
So I first ran `bazel clean`, which completed without problems.
Then I ran `bazel build --config=cuda //tensorflow/tools/pip_package:build_pip_package`. I have tried several times, but every time I get the following error (only the main part is pasted here; it was preceded by the usual bazel output, e.g., INFO: lines):
```
ERROR: An error occurred during the fetch of repository 'local_config_cuda':
Traceback (most recent call last):
File "/home/xiaxia/tensorflow/third_party/gpus/cuda_configure.bzl", line 1401, column 38, in _cuda_autoconf_impl
_create_local_cuda_repository(repository_ctx)
File "/home/xiaxia/tensorflow/third_party/gpus/cuda_configure.bzl", line 1076, column 27, in _create_local_cuda_repository
cuda_libs = _find_libs(repository_ctx, check_cuda_libs_script, cuda_config)
File "/home/xiaxia/tensorflow/third_party/gpus/cuda_configure.bzl", line 606, column 21, in _find_libs
_check_cuda_libs(repository_ctx, check_cuda_libs_script, check_cuda_libs_params.values())
File "/home/xiaxia/tensorflow/third_party/gpus/cuda_configure.bzl", line 501, column 28, in _check_cuda_libs
checked_paths = execute(repository_ctx, [python_bin, "-c", cmd]).stdout.splitlines()
File "/home/xiaxia/tensorflow/third_party/remote_config/common.bzl", line 230, column 13, in execute
fail(
Error in fail: Repository command failed
Expected even number of arguments
INFO: Found applicable config definition build:cuda in file /home/xiaxia/tensorflow/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//c
INFO: Found applicable config definition build:cuda in file /home/xiaxia/tensorflow/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//c
WARNING: The following configs were expanded more than once: [cuda]. For repeatable flags, repeats are counted twice and may lead to unexpected behavior.
ERROR: @local_config_cuda//:enable_cuda :: Error loading option @local_config_cuda//:enable_cuda: Repository command failed
Expected even number of arguments
```
Note that I do not have a root/admin role on the remote Ubuntu machine; could that be a problem? I also tried installing CUDA and cuDNN under my /home/ directory, but it still did not help.
Please provide me with any advice, and thanks a lot!
### Standalone code to reproduce the issue
```shell
Sorry I don't know how to share my problem as reproducible, it happens in the process of building tensorflow.
```
### Relevant log output
```shell
WARNING: The following configs were expanded more than once: [cuda]. For repeatable flags, repeats are counted twice and may lead to unexpected behavior.
INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=157
INFO: Reading rc options for 'build' from /home/xiaxia/tensorflow/.bazelrc:
Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /home/xiaxia/tensorflow/.bazelrc:
'build' options: --define framework_shared_object=true --java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --host_java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true
INFO: Reading rc options for 'build' from /home/xiaxia/tensorflow/.tf_configure.bazelrc:
'build' options: --action_env PYTHON_BIN_PATH=/home/xiaxia/anaconda3/envs/tf-cus-gpu/bin/python3 --action_env PYTHON_LIB_PATH=/home/xiaxia/anaconda3/envs/tf-cus-gpu/lib/python3.8/site-packages --python_path=/home/xiaxia/anaconda3/envs/tf-cus-gpu/bin/python3 --action_env TF_CUDA_VERSION=12.2 --action_env TF_CUDNN_VERSION=8 --action_env TF_NCCL_VERSION= --action_env TF_CUDA_PATHS=/home/xiaxia/cuda-12.2 --action_env CUDA_TOOLKIT_PATH=/home/xiaxia/cuda-12.2 --action_env TF_CUDA_COMPUTE_CAPABILITIES=3.5,7.0 --action_env LD_LIBRARY_PATH=/usr/local/cuda-12.1/targets/x86_64-linux/lib:/usr/local/cuda-12.1/targets/x86_64-linux/lib:/usr/local/cuda-12.1/targets/x86_64-linux/lib::/home/xiaxia/cuda-12.2/lib64:/home/xiaxia/cuda-12.2/lib64 --action_env GCC_HOST_COMPILER_PATH=/usr/bin/x86_64-linux-gnu-gcc-11 --config=cuda
INFO: Reading rc options for 'build' from /home/xiaxia/tensorflow/.bazelrc:
'build' options: --deleted_packages=tensorflow/compiler/mlir/tfrt,tensorflow/compiler/mlir/tfrt/benchmarks,tensorflow/compiler/mlir/tfrt/jit/python_binding,tensorflow/compiler/mlir/tfrt/jit/transforms,tensorflow/compiler/mlir/tfrt/python_tests,tensorflow/compiler/mlir/tfrt/tests,tensorflow/compiler/mlir/tfrt/tests/saved_model,tensorflow/compiler/mlir/tfrt/transforms/lhlo_gpu_to_tfrt_gpu,tensorflow/core/runtime_fallback,tensorflow/core/runtime_fallback/conversion,tensorflow/core/runtime_fallback/kernel,tensorflow/core/runtime_fallback/opdefs,tensorflow/core/runtime_fallback/runtime,tensorflow/core/runtime_fallback/util,tensorflow/core/tfrt/common,tensorflow/core/tfrt/eager,tensorflow/core/tfrt/eager/backends/cpu,tensorflow/core/tfrt/eager/backends/gpu,tensorflow/core/tfrt/eager/core_runtime,tensorflow/core/tfrt/eager/cpp_tests/core_runtime,tensorflow/core/tfrt/fallback,tensorflow/core/tfrt/gpu,tensorflow/core/tfrt/run_handler_thread_pool,tensorflow/core/tfrt/runtime,tensorflow/core/tfrt/saved_model,tensorflow/core/tfrt/saved_model/tests,tensorflow/core/tfrt/tpu,tensorflow/core/tfrt/utils
INFO: Found applicable config definition build:short_logs in file /home/xiaxia/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /home/xiaxia/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:cuda in file /home/xiaxia/tensorflow/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda
INFO: Found applicable config definition build:cuda in file /home/xiaxia/tensorflow/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda
INFO: Found applicable config definition build:linux in file /home/xiaxia/tensorflow/.bazelrc: --copt=-w --host_copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++14 --host_cxxopt=-std=c++14 --config=dynamic_kernels --distinct_host_configuration=false --experimental_guard_against_concurrent_changes
INFO: Found applicable config definition build:dynamic_kernels in file /home/xiaxia/tensorflow/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
INFO: Repository local_config_cuda instantiated at:
/home/xiaxia/tensorflow/WORKSPACE:15:14: in <toplevel>
/home/xiaxia/tensorflow/tensorflow/workspace2.bzl:1080:19: in workspace
/home/xiaxia/tensorflow/tensorflow/workspace2.bzl:94:19: in _tf_toolchains
Repository rule cuda_configure defined at:
/home/xiaxia/tensorflow/third_party/gpus/cuda_configure.bzl:1448:33: in <toplevel>
ERROR: An error occurred during the fetch of repository 'local_config_cuda':
Traceback (most recent call last):
File "/home/xiaxia/tensorflow/third_party/gpus/cuda_configure.bzl", line 1401, column 38, in _cuda_autoconf_impl
_create_local_cuda_repository(repository_ctx)
File "/home/xiaxia/tensorflow/third_party/gpus/cuda_configure.bzl", line 1076, column 27, in _create_local_cuda_repository
cuda_libs = _find_libs(repository_ctx, check_cuda_libs_script, cuda_config)
File "/home/xiaxia/tensorflow/third_party/gpus/cuda_configure.bzl", line 606, column 21, in _find_libs
_check_cuda_libs(repository_ctx, check_cuda_libs_script, check_cuda_libs_params.values())
File "/home/xiaxia/tensorflow/third_party/gpus/cuda_configure.bzl", line 501, column 28, in _check_cuda_libs
checked_paths = execute(repository_ctx, [python_bin, "-c", cmd]).stdout.splitlines()
File "/home/xiaxia/tensorflow/third_party/remote_config/common.bzl", line 230, column 13, in execute
fail(
Error in fail: Repository command failed
Expected even number of arguments
INFO: Found applicable config definition build:cuda in file /home/xiaxia/tensorflow/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda
INFO: Found applicable config definition build:cuda in file /home/xiaxia/tensorflow/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda
WARNING: The following configs were expanded more than once: [cuda]. For repeatable flags, repeats are counted twice and may lead to unexpected behavior.
ERROR: @local_config_cuda//:enable_cuda :: Error loading option @local_config_cuda//:enable_cuda: Repository command failed
Expected even number of arguments
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61345/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61344 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61344/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61344/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61344/events | https://github.com/tensorflow/tensorflow/issues/61344 | 1,814,763,337 | I_kwDOArmXAs5sKxdJ | 61,344 | Unbounded Memory leak when using tf.py_function in tf.data.Dataset.map() | {
"login": "Pyrestone",
"id": 20396757,
"node_id": "MDQ6VXNlcjIwMzk2NzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/20396757?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pyrestone",
"html_url": "https://github.com/Pyrestone",
"followers_url": "https://api.github.com/users/Pyrestone/followers",
"following_url": "https://api.github.com/users/Pyrestone/following{/other_user}",
"gists_url": "https://api.github.com/users/Pyrestone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pyrestone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pyrestone/subscriptions",
"organizations_url": "https://api.github.com/users/Pyrestone/orgs",
"repos_url": "https://api.github.com/users/Pyrestone/repos",
"events_url": "https://api.github.com/users/Pyrestone/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pyrestone/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 1463677878,
"node_id": "MDU6TGFiZWwxNDYzNjc3ODc4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:performance",
"name": "type:performance",
"color": "159b2e",
"default": false,
"description": "Performance Issue"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | open | false | {
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"see also: #35084 and #51839 (both closed, but the issue re-appeared in 2.13.0)",
"Hi @Pyrestone ,\r\n\r\nI have replicated the reported behaviour and attached [gist](https://colab.sandbox.google.com/gist/SuryanarayanaY/84a602e1d03dd977317080607592bd0a/61344_-tf_memory_leak_py_function.ipynb) for reference. \r\n\r\nI can observe memory increased from batch:0 to batch:3600. Needs to be investigated.",
"Note for people finding this issue in the meantime:\r\n\r\nAs a temporary (or permanent I suppose) **workaround** is using `tf.numpy_function` instead of `tf.py_function` in tf.data.Dataset.map(). \r\n\r\nIt behaves very similarly, except the function then receives and returns numpy arrays instead of eager tensors. \r\nThe only other notable difference is that `tf.numpy_function` doesn't support gradient computation (which probably shouldn't matter in a dataset.map() call).",
"@Pyrestone Thanks for your solution."
] | 2023-07-20T20:36:51 | 2023-09-07T10:21:52 | null | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
v2.13.0-rc2-7-g1cb1a030a62 2.13.0
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 20.04, Google Colab
### Mobile device
_No response_
### Python version
3.8
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
11.8 / 8.6
### GPU model and memory
various, e.g. 2080ti, 3080ti mobile, Colab T4
### Current behavior?
Using tf.py_function in a function that is applied to a tf.data.Dataset via its map() function causes a (C++-level) memory leak.
In my real training, with more complex code inside the py_function, this led to the Python script eventually consuming upwards of 30 GB of RAM during a model.fit() loop, despite taking less than 3 GB of RAM during the initial epoch.
tf.py_function also causes memory leaks more generally, in all kinds of places. See the flags at the top of the linked Colab for details.
### Standalone code to reproduce the issue
```shell
see Colab: https://colab.research.google.com/drive/1auVJPyHApl4__4FF-rV3xNcJrqYZc38R?usp=sharing
Iterating through a dataset with a tf.py_function in it causes unbounded linear memory consumption growth.
```
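For readers who cannot open the Colab, a minimal sketch of the same reproduction pattern is shown below. The trivial `_identity` mapper and the use of `psutil` for memory measurement are illustrative assumptions, not the notebook's code:
```python
# Minimal sketch (assumptions: psutil is installed, _identity is a stand-in mapper):
# wrap a trivial tf.py_function in Dataset.map() and watch resident memory grow
# while iterating the dataset.
import os

import psutil
import tensorflow as tf


def _identity(x):
    return x  # receives and returns an eager tensor


def _map_fn(x):
    y = tf.py_function(_identity, [x], tf.int64)
    y.set_shape([])  # tf.py_function discards static shape information
    return y


ds = tf.data.Dataset.range(10_000_000).map(_map_fn).batch(256)

proc = psutil.Process(os.getpid())
for step, _ in enumerate(ds):
    if step % 200 == 0:
        print(f"Batch {step}: rss = {proc.memory_info().rss / 2**20:.1f} MiB")
```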
### Relevant log output
```shell
**Batch 0**
Memory usage: 1732120576
Delta: 1651.88 MiB
**Batch 200**
Memory usage: 1736859648
Delta: 4.52 MiB
**Batch 400**
Memory usage: 1740644352
Delta: 3.61 MiB
**Batch 600**
Memory usage: 1744699392
Delta: 3.87 MiB
Average Delta since start: 3.87 MiB/iteration
Estimated growth per 1000 steps: 19.34 MiB
**Batch 800**
Memory usage: 1748750336
Delta: 3.86 MiB
Average Delta since start: 3.87 MiB/iteration
Estimated growth per 1000 steps: 19.33 MiB
**Batch 1000**
Memory usage: 1752805376
Delta: 3.87 MiB
Average Delta since start: 3.87 MiB/iteration
Estimated growth per 1000 steps: 19.33 MiB
**Batch 1200**
Memory usage: 1757401088
Delta: 4.38 MiB
Average Delta since start: 4.00 MiB/iteration
Estimated growth per 1000 steps: 19.98 MiB
**Batch 1400**
Memory usage: 1761456128
Delta: 3.87 MiB
Average Delta since start: 3.97 MiB/iteration
Estimated growth per 1000 steps: 19.85 MiB
**Batch 1600**
Memory usage: 1765240832
Delta: 3.61 MiB
Average Delta since start: 3.91 MiB/iteration
Estimated growth per 1000 steps: 19.55 MiB
**Batch 1800**
Memory usage: 1769025536
Delta: 3.61 MiB
Average Delta since start: 3.87 MiB/iteration
Estimated growth per 1000 steps: 19.33 MiB
**Batch 2000**
Memory usage: 1773621248
Delta: 4.38 MiB
Average Delta since start: 3.93 MiB/iteration
Estimated growth per 1000 steps: 19.66 MiB
**Batch 2200**
Memory usage: 1777676288
Delta: 3.87 MiB
Average Delta since start: 3.92 MiB/iteration
Estimated growth per 1000 steps: 19.62 MiB
**Batch 2400**
Memory usage: 1781731328
Delta: 3.87 MiB
Average Delta since start: 3.92 MiB/iteration
Estimated growth per 1000 steps: 19.59 MiB
**Batch 2600**
Memory usage: 1785786368
Delta: 3.87 MiB
Average Delta since start: 3.91 MiB/iteration
Estimated growth per 1000 steps: 19.57 MiB
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61344/timeline | null | null | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61343 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61343/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61343/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61343/events | https://github.com/tensorflow/tensorflow/issues/61343 | 1,814,303,098 | I_kwDOArmXAs5sJBF6 | 61,343 | RuntimeError: Quantization to 16x8-bit not yet supported for op: 'FLOOR_MOD' | {
"login": "soudabehmousavi99",
"id": 140095875,
"node_id": "U_kgDOCFmxgw",
"avatar_url": "https://avatars.githubusercontent.com/u/140095875?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soudabehmousavi99",
"html_url": "https://github.com/soudabehmousavi99",
"followers_url": "https://api.github.com/users/soudabehmousavi99/followers",
"following_url": "https://api.github.com/users/soudabehmousavi99/following{/other_user}",
"gists_url": "https://api.github.com/users/soudabehmousavi99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soudabehmousavi99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soudabehmousavi99/subscriptions",
"organizations_url": "https://api.github.com/users/soudabehmousavi99/orgs",
"repos_url": "https://api.github.com/users/soudabehmousavi99/repos",
"events_url": "https://api.github.com/users/soudabehmousavi99/events{/privacy}",
"received_events_url": "https://api.github.com/users/soudabehmousavi99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1463677878,
"node_id": "MDU6TGFiZWwxNDYzNjc3ODc4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:performance",
"name": "type:performance",
"color": "159b2e",
"default": false,
"description": "Performance Issue"
},
{
"id": 1661751498,
"node_id": "MDU6TGFiZWwxNjYxNzUxNDk4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TFLiteConverter",
"name": "TFLiteConverter",
"color": "bfdadc",
"default": false,
"description": "For issues related to TFLite converter"
},
{
"id": 5206407904,
"node_id": "LA_kwDOArmXAs8AAAABNlN64A",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.12",
"name": "TF 2.12",
"color": "c5def5",
"default": false,
"description": "For issues related to Tensorflow 2.12"
}
] | closed | false | {
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@soudabehmousavi99 If 16x8 quantization is not supported for some operators in the model, then the model still can be quantized, but unsupported operators kept in float. The following option should be added to the target_spec to allow this.\r\n\r\n```\r\nimport tensorflow as tf\r\nconverter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)\r\nconverter.representative_dataset = representative_dataset\r\nconverter.optimizations = [tf.lite.Optimize.DEFAULT]\r\nconverter.target_spec.supported_ops = [tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8,\r\ntf.lite.OpsSet.TFLITE_BUILTINS]\r\ntflite_quant_model = converter.convert()\r\n```\r\n\r\nCould you please have a look at [this](https://www.tensorflow.org/lite/performance/post_training_quantization) doc and let us know if it helps?\r\nThank you!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61343\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61343\">No</a>\n"
] | 2023-07-20T15:43:03 | 2023-08-09T01:52:19 | 2023-08-09T01:52:14 | NONE | null | null | null | ### 1. System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
google colab
- TensorFlow installation (pip package or built from source):
- pip
- TensorFlow library (version, if pip package or github SHA, if built from source):
- 2.12.0
### 2. Code
I have the following model.
```python
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
train_labels = train_labels[:1000]
test_labels = test_labels[:1000]
train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images.astype(np.float32) / 255.0
test_images = test_images.astype(np.float32) / 255.0
# Define a simple sequential model
model_infrence = tf.keras.Sequential([
MyDense(512, activation='relu', input_shape=(784,)),
keras.layers.Dense(512, activation='relu', input_shape=(784,)),
keras.layers.Dropout(0.2),
keras.layers.Dense(10)
])
model_infrence.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
# Create a basic model instance
# Display the model's architecture
model_infrence.summary()
```
Then I customize one of the dense layers like so:
```python
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains the Dense layer."""
import tensorflow.compat.v2 as tf
from keras import activations
from keras import backend
from keras import constraints
from keras import initializers
from keras import regularizers
from keras.dtensor import utils
from keras.engine.base_layer import Layer
from keras.engine.input_spec import InputSpec
# isort: off
from tensorflow.python.util.tf_export import keras_export
@keras_export("keras.layers.Dense")
class MyDense(Layer):
"""Just your regular densely-connected NN layer.
`Dense` implements the operation:
`output = activation(dot(input, kernel) + bias)`
where `activation` is the element-wise activation function
passed as the `activation` argument, `kernel` is a weights matrix
created by the layer, and `bias` is a bias vector created by the layer
(only applicable if `use_bias` is `True`). These are all attributes of
`Dense`.
Note: If the input to the layer has a rank greater than 2, then `Dense`
computes the dot product between the `inputs` and the `kernel` along the
last axis of the `inputs` and axis 0 of the `kernel` (using `tf.tensordot`).
For example, if input has dimensions `(batch_size, d0, d1)`, then we create
a `kernel` with shape `(d1, units)`, and the `kernel` operates along axis 2
of the `input`, on every sub-tensor of shape `(1, 1, d1)` (there are
`batch_size * d0` such sub-tensors). The output in this case will have
shape `(batch_size, d0, units)`.
Besides, layer attributes cannot be modified after the layer has been called
once (except the `trainable` attribute).
When a popular kwarg `input_shape` is passed, then keras will create
an input layer to insert before the current layer. This can be treated
equivalent to explicitly defining an `InputLayer`.
Example:
>>> # Create a `Sequential` model and add a Dense layer as the first layer.
>>> model = tf.keras.models.Sequential()
>>> model.add(tf.keras.Input(shape=(16,)))
>>> model.add(tf.keras.layers.Dense(32, activation='relu'))
>>> # Now the model will take as input arrays of shape (None, 16)
>>> # and output arrays of shape (None, 32).
>>> # Note that after the first layer, you don't need to specify
>>> # the size of the input anymore:
>>> model.add(tf.keras.layers.Dense(32))
>>> model.output_shape
(None, 32)
Args:
units: Positive integer, dimensionality of the output space.
activation: Activation function to use.
If you don't specify anything, no activation is applied
(ie. "linear" activation: `a(x) = x`).
use_bias: Boolean, whether the layer uses a bias vector.
kernel_initializer: Initializer for the `kernel` weights matrix.
bias_initializer: Initializer for the bias vector.
kernel_regularizer: Regularizer function applied to
the `kernel` weights matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to
the output of the layer (its "activation").
kernel_constraint: Constraint function applied to
the `kernel` weights matrix.
bias_constraint: Constraint function applied to the bias vector.
Input shape:
N-D tensor with shape: `(batch_size, ..., input_dim)`.
The most common situation would be
a 2D input with shape `(batch_size, input_dim)`.
Output shape:
N-D tensor with shape: `(batch_size, ..., units)`.
For instance, for a 2D input with shape `(batch_size, input_dim)`,
the output would have shape `(batch_size, units)`.
"""
@utils.allow_initializer_layout
def __init__(
self,
units,
activation=None,
use_bias=True,
kernel_initializer="glorot_uniform",
bias_initializer="zeros",
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
**kwargs,
):
super().__init__(activity_regularizer=activity_regularizer, **kwargs)
self.units = int(units) if not isinstance(units, int) else units
if self.units < 0:
raise ValueError(
"Received an invalid value for `units`, expected "
f"a positive integer. Received: units={units}"
)
self.activation = activations.get(activation)
self.use_bias = use_bias
self.kernel_initializer = initializers.get(kernel_initializer)
self.bias_initializer = initializers.get(bias_initializer)
self.kernel_regularizer = regularizers.get(kernel_regularizer)
self.bias_regularizer = regularizers.get(bias_regularizer)
self.kernel_constraint = constraints.get(kernel_constraint)
self.bias_constraint = constraints.get(bias_constraint)
self.input_spec = InputSpec(min_ndim=2)
self.supports_masking = True
def build(self, input_shape):
dtype = tf.as_dtype(self.dtype or backend.floatx())
if not (dtype.is_floating or dtype.is_complex):
raise TypeError(
"A Dense layer can only be built with a floating-point "
f"dtype. Received: dtype={dtype}"
)
input_shape = tf.TensorShape(input_shape)
last_dim = tf.compat.dimension_value(input_shape[-1])
if last_dim is None:
raise ValueError(
"The last dimension of the inputs to a Dense layer "
"should be defined. Found None. "
f"Full input shape received: {input_shape}"
)
self.input_spec = InputSpec(min_ndim=2, axes={-1: last_dim})
self.kernel = self.add_weight(
"kernel",
shape=[last_dim, self.units],
initializer=self.kernel_initializer,
regularizer=self.kernel_regularizer,
constraint=self.kernel_constraint,
dtype=self.dtype,
trainable=True,
)
if self.use_bias:
self.bias = self.add_weight(
"bias",
shape=[
self.units,
],
initializer=self.bias_initializer,
regularizer=self.bias_regularizer,
constraint=self.bias_constraint,
dtype=self.dtype,
trainable=True,
)
else:
self.bias = None
self.built = True
def call(self, inputs):
if inputs.dtype.base_dtype != self._compute_dtype_object.base_dtype:
inputs = tf.cast(inputs, dtype=self._compute_dtype_object)
is_ragged = isinstance(inputs, tf.RaggedTensor)
if is_ragged:
# In case we encounter a RaggedTensor with a fixed last dimension
# (last dimension not ragged), we can flatten the input and restore
# the ragged dimensions at the end.
if tf.compat.dimension_value(inputs.shape[-1]) is None:
raise ValueError(
"Dense layer only supports RaggedTensors when the "
"innermost dimension is non-ragged. Received: "
f"inputs.shape={inputs.shape}."
)
original_inputs = inputs
if inputs.flat_values.shape.rank > 1:
inputs = inputs.flat_values
else:
# Innermost partition is encoded using uniform_row_length.
# (This is unusual, but we can handle it.)
if inputs.shape.rank == 2:
inputs = inputs.to_tensor()
is_ragged = False
else:
for _ in range(original_inputs.ragged_rank - 1):
inputs = inputs.values
inputs = inputs.to_tensor()
original_inputs = tf.RaggedTensor.from_nested_row_splits(
inputs, original_inputs.nested_row_splits[:-1]
)
rank = inputs.shape.rank
if rank == 2 or rank is None:
# We use embedding_lookup_sparse as a more efficient matmul
# operation for large sparse input tensors. The op will result in a
# sparse gradient, as opposed to
# sparse_ops.sparse_tensor_dense_matmul which results in dense
# gradients. This can lead to sigfinicant speedups, see b/171762937.
if isinstance(inputs, tf.SparseTensor):
# We need to fill empty rows, as the op assumes at least one id
# per row.
inputs, _ = tf.sparse.fill_empty_rows(inputs, 0)
# We need to do some munging of our input to use the embedding
# lookup as a matrix multiply. We split our input matrix into
# separate ids and weights tensors. The values of the ids tensor
# should be the column indices of our input matrix and the
# values of the weights tensor can continue to the actual matrix
# weights. The column arrangement of ids and weights will be
# summed over and does not matter. See the documentation for
# sparse_ops.sparse_tensor_dense_matmul a more detailed
# explanation of the inputs to both ops.
ids = tf.SparseTensor(
indices=inputs.indices,
values=inputs.indices[:, 1],
dense_shape=inputs.dense_shape,
)
weights = inputs
outputs = tf.nn.embedding_lookup_sparse(
self.kernel, ids, weights, combiner="sum"
)
else:
print(inputs)
quotient, x = divmod(inputs, (2**n))
#x = inputs % (2**n);
quotient1, x1 = divmod(inputs, (2**n - 1))
#x1 = inputs % (2**n - 1);
quotient2, x2 = divmod(inputs, (2**n + 1))
#x2 = inputs % (2**n + 1);
# w = self.w % (2**n);
quotient3, w = divmod(self.w, (2**n))
# w1 = self.w % (2**n - 1);
quotient4, w1 = divmod(self.w, (2**n - 1))
# w2 = self.w % (2**n + 1)
quotient5, w2 = divmod(self.w, (2**n + 1))
quotient6, z = divmod((tf.matmul(x, w) + self.b), (2**n))
# z = (tf.matmul(x, w) + self.b) % (2**n)
quotient7, z1 = divmod((tf.matmul(x, w) + self.b), (2**n - 1))
# z1 = (tf.matmul(x1, w1) + self.b) % (2**n - 1)
quotient8, z2 = divmod((tf.matmul(x, w) + self.b), (2**n + 1))
# z2 = tf.matmul(x2, w2) + self.b % (2**n + 1)
Dm = (2**n) * (2**n - 1) * (2**n + 1);
m1 = math.floor(((2**n) * (2**n - 1) * (2**n + 1))/(2**n));
m2 = math.floor(((2**n) * (2**n - 1) * (2**n + 1))/(2**n - 1));
m3 = math.floor(((2**n) * (2**n - 1) * (2**n + 1))/(2**n + 1));
outputs = rns_to_decimal(Dm, z, z1, z2, m1, m2, m3, n)
print(outputs)
# Broadcast kernel to inputs.
else:
outputs = tf.tensordot(inputs, self.kernel, [[rank - 1], [0]])
# Reshape the output back to the original ndim of the input.
if not tf.executing_eagerly():
shape = inputs.shape.as_list()
output_shape = shape[:-1] + [self.kernel.shape[-1]]
outputs.set_shape(output_shape)
# if self.use_bias:
# outputs = tf.nn.bias_add(outputs, self.bias)
if self.activation is not None:
outputs = self.activation(outputs)
if is_ragged:
outputs = original_inputs.with_flat_values(outputs)
return outputs
def compute_output_shape(self, input_shape):
input_shape = tf.TensorShape(input_shape)
input_shape = input_shape.with_rank_at_least(2)
if tf.compat.dimension_value(input_shape[-1]) is None:
raise ValueError(
"The last dimension of the input shape of a Dense layer "
"should be defined. Found None. "
f"Received: input_shape={input_shape}"
)
return input_shape[:-1].concatenate(self.units)
def get_config(self):
config = super().get_config()
config.update(
{
"units": self.units,
"activation": activations.serialize(self.activation),
"use_bias": self.use_bias,
"kernel_initializer": initializers.serialize(
self.kernel_initializer
),
"bias_initializer": initializers.serialize(
self.bias_initializer
),
"kernel_regularizer": regularizers.serialize(
self.kernel_regularizer
),
"bias_regularizer": regularizers.serialize(
self.bias_regularizer
),
"activity_regularizer": regularizers.serialize(
self.activity_regularizer
),
"kernel_constraint": constraints.serialize(
self.kernel_constraint
),
"bias_constraint": constraints.serialize(self.bias_constraint),
}
)
return config
def rns_to_decimal(dm, z1, z2, z3, m1, m2, m3, n = 6):
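    # Chinese Remainder Theorem reconstruction: given the residues z1, z2, z3 of a
    # value with respect to the RNS moduli 2**n, 2**n - 1 and 2**n + 1, recover the
    # value modulo dm = (2**n) * (2**n - 1) * (2**n + 1). The loops below find the
    # modular inverses x1, x2, x3 of m1, m2, m3 by brute-force search before
    # forming the weighted sum modulo dm.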
M1 = 2**n
M2 = 2**n - 1
M3 = 2**n + 1
x1 = 1
x2 = 1
x3 = 1
quotient, mm1 = divmod(m1, M1)
# mm1 = m1 % M1
for i in range(1, M1):
quotient1, X = divmod((i * mm1), M1)
if X == 1:
x1 = i
quotient2, mm2 = divmod(m2, M2)
# mm2 = m2 % M2
for i in range(1, M2):
quotient3, X1 = divmod((i * mm2), M2)
if X1 == 1:
x2 = i
# mm3 = m3 % M3
quotient4, mm3 = divmod(m3, M3)
for i in range(1, M3):
quotient5, X2 = divmod((i * mm3), M3)
if X2 == 1:
x3 = i
quotient5, num = divmod((z1 * m1 * x1 + z2 * m2 * x2 + z3 * m3 * x3), dm)
# num = (z1 * m1 * x1 + z2 * m2 * x2 + z3 * m3 * x3) % dm
return num
```
The only change to the original layer is the following code:
```python
quotient, x = divmod(inputs, (2**n))
#x = inputs % (2**n);
quotient1, x1 = divmod(inputs, (2**n - 1))
#x1 = inputs % (2**n - 1);
quotient2, x2 = divmod(inputs, (2**n + 1))
#x2 = inputs % (2**n + 1);
# w = self.w % (2**n);
quotient3, w = divmod(self.w, (2**n))
# w1 = self.w % (2**n - 1);
quotient4, w1 = divmod(self.w, (2**n - 1))
# w2 = self.w % (2**n + 1)
quotient5, w2 = divmod(self.w, (2**n + 1))
quotient6, z = divmod((tf.matmul(x, w) + self.b), (2**n))
# z = (tf.matmul(x, w) + self.b) % (2**n)
quotient7, z1 = divmod((tf.matmul(x, w) + self.b), (2**n - 1))
# z1 = (tf.matmul(x1, w1) + self.b) % (2**n - 1)
quotient8, z2 = divmod((tf.matmul(x, w) + self.b), (2**n + 1))
# z2 = tf.matmul(x2, w2) + self.b % (2**n + 1)
Dm = (2**n) * (2**n - 1) * (2**n + 1);
m1 = math.floor(((2**n) * (2**n - 1) * (2**n + 1))/(2**n));
m2 = math.floor(((2**n) * (2**n - 1) * (2**n + 1))/(2**n - 1));
m3 = math.floor(((2**n) * (2**n - 1) * (2**n + 1))/(2**n + 1));
outputs = rns_to_decimal(Dm, z, z1, z2, m1, m2, m3, n)
```
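The converter invocation is not shown in the report; a typical 16x8 post-training-quantization call that would surface the `FLOOR_MOD` error might look like the sketch below. The representative dataset and the use of `from_keras_model` here are assumptions for illustration, not code from the report:
```python
# Hypothetical conversion call (not from the report). The divmod calls in
# MyDense presumably lower to FLOOR_DIV / FLOOR_MOD ops, and FLOOR_MOD has no
# 16x8 quantized kernel, which triggers the RuntimeError in the title.
import numpy as np
import tensorflow as tf


def representative_dataset():
    # test_images comes from the data-loading snippet above.
    for image in test_images[:100]:
        yield [np.reshape(image, (1, 784)).astype(np.float32)]


converter = tf.lite.TFLiteConverter.from_keras_model(model_infrence)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8
]
tflite_quant_model = converter.convert()  # raises for 'FLOOR_MOD'
```
As suggested in the first comment on this issue, adding `tf.lite.OpsSet.TFLITE_BUILTINS` to `supported_ops` lets the unsupported operators fall back to float.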
Provide code to help us reproduce your issues using one of the following options:
#### Option A: Reference colab notebooks
1) Reference [TensorFlow Model Colab](https://colab.research.google.com/gist/ymodak/e96a4270b953201d5362c61c1e8b78aa/tensorflow-datasets.ipynb?authuser=1): Demonstrate how to build your TF model.
2) Reference [TensorFlow Lite Model Colab](https://colab.research.google.com/gist/ymodak/0dfeb28255e189c5c48d9093f296e9a8/tensorflow-lite-debugger-colab.ipynb): Demonstrate how to convert your TF model to a TF Lite model (with quantization, if used) and run TFLite Inference (if possible).
```
(You can paste links or attach files by dragging & dropping them below)
- Provide links to your updated versions of the above two colab notebooks.
- Provide links to your TensorFlow model and (optionally) TensorFlow Lite Model.
```
#### Option B: Paste your code here or provide a link to a custom end-to-end colab
```
(You can paste links or attach files by dragging & dropping them below)
- Include code to invoke the TFLite Converter Python API and the errors.
- Provide links to your TensorFlow model and (optionally) TensorFlow Lite Model.
```
### 3. Failure after conversion
If the conversion is successful, but the generated model is wrong, then state what is wrong:
- Model produces wrong results and/or has lesser accuracy.
- Model produces correct results, but it is slower than expected.
### 4. (optional) RNN conversion support
If converting TF RNN to TFLite fused RNN ops, please prefix [RNN] in the title.
### 5. (optional) Any other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61343/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61342 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61342/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61342/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61342/events | https://github.com/tensorflow/tensorflow/pull/61342 | 1,813,720,523 | PR_kwDOArmXAs5V_MXO | 61,342 | Update Jax to TFLite example to use Jax2TF | {
"login": "pjpratik",
"id": 118897289,
"node_id": "U_kgDOBxY6iQ",
"avatar_url": "https://avatars.githubusercontent.com/u/118897289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pjpratik",
"html_url": "https://github.com/pjpratik",
"followers_url": "https://api.github.com/users/pjpratik/followers",
"following_url": "https://api.github.com/users/pjpratik/following{/other_user}",
"gists_url": "https://api.github.com/users/pjpratik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pjpratik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjpratik/subscriptions",
"organizations_url": "https://api.github.com/users/pjpratik/orgs",
"repos_url": "https://api.github.com/users/pjpratik/repos",
"events_url": "https://api.github.com/users/pjpratik/events{/privacy}",
"received_events_url": "https://api.github.com/users/pjpratik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 750616506,
"node_id": "MDU6TGFiZWw3NTA2MTY1MDY=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:lite",
"name": "comp:lite",
"color": "0052cc",
"default": false,
"description": "TF Lite related issues"
},
{
"id": 1169365494,
"node_id": "MDU6TGFiZWwxMTY5MzY1NDk0",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:M",
"name": "size:M",
"color": "adafea",
"default": false,
"description": "CL Change Size: Medium"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Check out this pull request on <a href=\"https://app.reviewnb.com/tensorflow/tensorflow/pull/61342\"><img align=\"absmiddle\" alt=\"ReviewNB\" height=\"28\" class=\"BotMessageButtonImage\" src=\"https://raw.githubusercontent.com/ReviewNB/support/master/images/button_reviewnb.png\"/></a> \n\n See visual diffs & provide feedback on Jupyter Notebooks. \n\n---\n\n <i>Powered by <a href='https://www.reviewnb.com/?utm_source=gh'>ReviewNB</a></i>",
"Hi @alankelly Can you please review this PR ? Thank you!",
"Hi @alankelly Can you please review this PR ? Thank you!",
"Hi @alankelly Can you please review this PR ? Thank you!",
"Hi @alankelly Can you please review this PR ? Thank you!",
"Hi @alankelly Can you please review this PR ? Thank you!",
"Hi @alankelly Can you please review this PR ? Thank you!",
"Hi @renjie-liu Can you please review this PR ? Thank you!\r\n\r\n",
"Hi @renjie-liu Can you please review this PR ? Thank you!",
"Hi @renjie-liu Can you please review this PR ? Thank you!",
"Hi @alankelly Can you please review this PR ? Thank you!",
"Hi @alankelly Can you please review this PR ? Thank you!",
"Hi @alankelly Can you please review this PR ? Thank you!",
"Hi @pkgoogle Can you please resolve conflicts? Thank you!",
"This PR is stale because it has been open for 14 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This PR was closed because it has been inactive for 14 days since being marked as stale. Please reopen if you'd like to work on this further."
] | 2023-07-20T11:00:36 | 2024-04-07T01:48:50 | 2024-04-07T01:48:42 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61342",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61342",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61342.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61342.patch",
"merged_at": null
} | The nightly version says `experimental_from_jax` is deprecated and that `Jax2TF` is the recommended way of converting Jax models to TFLite.
Added an option in the example to use `Jax2TF` for TFLite conversion using concrete functions.
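For context, the `Jax2TF` path referred to here generally follows the pattern sketched below; the placeholder `prob_model` function and the input signature are assumptions, not the example notebook's actual code:
```python
# Rough sketch of Jax2TF -> TFLite via a concrete function (placeholders only).
import jax.numpy as jnp
import tensorflow as tf
from jax.experimental import jax2tf


def prob_model(x):
    return jnp.tanh(x)  # stand-in for the real Jax model


tf_fn = tf.function(
    jax2tf.convert(prob_model, enable_xla=False),  # enable_xla=False is commonly used for TFLite compatibility
    input_signature=[tf.TensorSpec([1, 28 * 28], tf.float32)],
    autograph=False,
)

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [tf_fn.get_concrete_function()], tf_fn
)
tflite_model = converter.convert()
```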
Thanks. | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61342/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61341 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61341/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61341/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61341/events | https://github.com/tensorflow/tensorflow/issues/61341 | 1,813,567,478 | I_kwDOArmXAs5sGNf2 | 61,341 | Failed to build tensorflow on Apple silicon. | {
"login": "sun1638650145",
"id": 47380395,
"node_id": "MDQ6VXNlcjQ3MzgwMzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/47380395?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sun1638650145",
"html_url": "https://github.com/sun1638650145",
"followers_url": "https://api.github.com/users/sun1638650145/followers",
"following_url": "https://api.github.com/users/sun1638650145/following{/other_user}",
"gists_url": "https://api.github.com/users/sun1638650145/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sun1638650145/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sun1638650145/subscriptions",
"organizations_url": "https://api.github.com/users/sun1638650145/orgs",
"repos_url": "https://api.github.com/users/sun1638650145/repos",
"events_url": "https://api.github.com/users/sun1638650145/events{/privacy}",
"received_events_url": "https://api.github.com/users/sun1638650145/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473173351,
"node_id": "MDU6TGFiZWw0NzMxNzMzNTE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:build/install",
"name": "type:build/install",
"color": "159b2e",
"default": false,
"description": "Build and install issues"
},
{
"id": 1205765054,
"node_id": "MDU6TGFiZWwxMjA1NzY1MDU0",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/subtype:macOS",
"name": "subtype:macOS",
"color": "b619ea",
"default": false,
"description": "macOS Build/Installation issues"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | closed | false | {
"login": "nitins17",
"id": 29348997,
"node_id": "MDQ6VXNlcjI5MzQ4OTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/29348997?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nitins17",
"html_url": "https://github.com/nitins17",
"followers_url": "https://api.github.com/users/nitins17/followers",
"following_url": "https://api.github.com/users/nitins17/following{/other_user}",
"gists_url": "https://api.github.com/users/nitins17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nitins17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nitins17/subscriptions",
"organizations_url": "https://api.github.com/users/nitins17/orgs",
"repos_url": "https://api.github.com/users/nitins17/repos",
"events_url": "https://api.github.com/users/nitins17/events{/privacy}",
"received_events_url": "https://api.github.com/users/nitins17/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "nitins17",
"id": 29348997,
"node_id": "MDQ6VXNlcjI5MzQ4OTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/29348997?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nitins17",
"html_url": "https://github.com/nitins17",
"followers_url": "https://api.github.com/users/nitins17/followers",
"following_url": "https://api.github.com/users/nitins17/following{/other_user}",
"gists_url": "https://api.github.com/users/nitins17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nitins17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nitins17/subscriptions",
"organizations_url": "https://api.github.com/users/nitins17/orgs",
"repos_url": "https://api.github.com/users/nitins17/repos",
"events_url": "https://api.github.com/users/nitins17/events{/privacy}",
"received_events_url": "https://api.github.com/users/nitins17/received_events",
"type": "User",
"site_admin": false
},
{
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@sun1638650145 Please try to use the new version of XCode 14.3 to include an updated version of the linker ld. \r\nCould you refer to this Commit: https://github.com/tensorflow/tensorflow/commit/55939be39409283c4b35a62049fe548a2de3ce44 and let us know if it would work for you as a solved environment? Thank you!",
"@sushreebarsa Hello, I am well aware that this is not caused by `ld` because I have already addressed this issue when building previous versions (as documented in this [issue](https://github.com/tensorflow/tensorflow/issues/57914)).",
"Hi @sun1638650145 ,\r\n\r\nIf I am not wrong Xcode 14.3 resolved build issue last time in which you also involved and confirmed.Could you please confirm whether TF2.13 build fails with Xcode 14.3 also ?\r\n\r\n",
"@SuryanarayanaY Yes, after using `Xcode 14.3`, I was able to build `tensorflow 2.12` successfully. However, now when trying to build `tensorflow 2.13`, a different error occurs (the log information provided above). I'm not sure if this issue is caused by Xcode or something else.",
"@sun1638650145 ,\r\n\r\nCould you please confirm TF2.13 build was failing even with XCode 14.3 since I can see in the template you are using 14.0.3.\r\n\r\nI have tried build with 14.0.3 and replicated the reported build failure.Logas attached below for reference. I couldn't update it 14.3.x due to access related issues.\r\n\r\n```\r\n(base) suryanarayanay-macbookpro:tensorflow suryanarayanay$ bazel build //tensorflow/tools/pip_package:build_pip_package\r\nExtracting Bazel installation...\r\nStarting local Bazel server and connecting to it...\r\nINFO: Options provided by the client:\r\n Inherited 'common' options: --isatty=1 --terminal_columns=157\r\nINFO: Reading rc options for 'build' from /Users/suryanarayanay/tensorflow/.bazelrc:\r\n Inherited 'common' options: --experimental_repo_remote_exec\r\nINFO: Reading rc options for 'build' from /Users/suryanarayanay/tensorflow/.bazelrc:\r\n 'build' options: --define framework_shared_object=true --define tsl_protobuf_header_only=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library --experimental_link_static_libraries_once=false --incompatible_enforce_config_setting_visibility\r\nINFO: Reading rc options for 'build' from /Users/suryanarayanay/tensorflow/.tf_configure.bazelrc:\r\n 'build' options: --action_env PYTHON_BIN_PATH=/opt/homebrew/opt/[email protected]/bin/python3.11 --action_env PYTHON_LIB_PATH=/opt/homebrew/opt/[email protected]/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages --python_path=/opt/homebrew/opt/[email protected]/bin/python3.11\r\nINFO: Reading rc options for 'build' from /Users/suryanarayanay/tensorflow/.bazelrc:\r\n 'build' options: --deleted_packages=tensorflow/compiler/mlir/tfrt,tensorflow/compiler/mlir/tfrt/benchmarks,tensorflow/compiler/mlir/tfrt/jit/python_binding,tensorflow/compiler/mlir/tfrt/jit/transforms,tensorflow/compiler/mlir/tfrt/python_tests,tensorflow/compiler/mlir/tfrt/tests,tensorflow/compiler/mlir/tfrt/tests/ir,tensorflow/compiler/mlir/tfrt/tests/analysis,tensorflow/compiler/mlir/tfrt/tests/jit,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_tfrt,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_jitrt,tensorflow/compiler/mlir/tfrt/tests/tf_to_corert,tensorflow/compiler/mlir/tfrt/tests/tf_to_tfrt_data,tensorflow/compiler/mlir/tfrt/tests/saved_model,tensorflow/compiler/mlir/tfrt/transforms/lhlo_gpu_to_tfrt_gpu,tensorflow/core/runtime_fallback,tensorflow/core/runtime_fallback/conversion,tensorflow/core/runtime_fallback/kernel,tensorflow/core/runtime_fallback/opdefs,tensorflow/core/runtime_fallback/runtime,tensorflow/core/runtime_fallback/util,tensorflow/core/tfrt/eager,tensorflow/core/tfrt/eager/backends/cpu,tensorflow/core/tfrt/eager/backends/gpu,tensorflow/core/tfrt/eager/core_runtime,tensorflow/core/tfrt/eager/cpp_tests/core_runtime,tensorflow/core/tfrt/gpu,tensorflow/core/tfrt/run_handler_thread_pool,tensorflow/core/tfrt/runtime,tensorflow/core/tfrt/saved_model,tensorflow/core/tfrt/graph_executor,tensorflow/core/tfrt/saved_model/tests,tensorflow/core/tfrt/tpu,tensorflow/core/tfrt/utils\r\nINFO: Found applicable config definition build:short_logs in file /Users/suryanarayanay/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING\r\nINFO: Found applicable config 
definition build:v2 in file /Users/suryanarayanay/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1\r\nINFO: Found applicable config definition build:macos in file /Users/suryanarayanay/tensorflow/.bazelrc: --apple_platform_type=macos --copt=-DGRPC_BAZEL_BUILD --copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++17 --host_cxxopt=-std=c++17\r\nWARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/tensorflow/runtime/archive/0aaa6e679847a4eeb407136e7b0bcef93ec652e6.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found\r\nWARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/llvm/llvm-project/archive/a52054cfa29d665c43141c66c20a7b8f7a96b546.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found\r\nWARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/google/benchmark/archive/f7547e29ccaed7b64ef4f7495ecfff1c9f6f3d03.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found\r\nWARNING: Download from https://mirror.bazel.build/github.com/bazelbuild/rules_cc/archive/081771d4a0e9d7d3aa0eed2ef389fa4700dfb23e.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found\r\nWARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/google/boringssl/archive/c00d7ca810e93780bd0c8ee4eea28f4f2ea4bcdc.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found\r\nWARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/google/XNNPACK/archive/7adae8e6ded8fff33d92212ca1028d2419cd34d4.zip failed: class java.io.FileNotFoundException GET returned 404 Not Found\r\nWARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/openxla/stablehlo/archive/458bdd95771e9861e6488868e315a1b0340058ba.zip failed: class java.io.FileNotFoundException GET returned 404 Not Found\r\nWARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/pybind/pybind11_abseil/archive/2c4932ed6f6204f1656e245838f4f5eae69d2e29.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found\r\nINFO: Analyzed target //tensorflow/tools/pip_package:build_pip_package (606 packages loaded, 43912 targets configured).\r\nINFO: Found 1 target...\r\nERROR: /Users/suryanarayanay/tensorflow/tensorflow/BUILD:1128:21: declared output 'tensorflow/libtensorflow_framework.2.dylib' was not created by genrule. 
This is probably because the genrule actually didn't create this output, or because the output was a directory and the genrule was run remotely (note that only the contents of declared file outputs are copied from genrules run remotely)\r\nERROR: /Users/suryanarayanay/tensorflow/tensorflow/BUILD:1128:21: Executing genrule //tensorflow:libtensorflow_framework.2.dylib_sym failed: not all outputs were created or valid\r\nrealpath: illegal option -- -\r\nusage: realpath [-q] [path ...]\r\nTarget //tensorflow/tools/pip_package:build_pip_package failed to build\r\nUse --verbose_failures to see the command lines of failed build steps.\r\nINFO: Elapsed time: 448.591s, Critical Path: 28.99s\r\nINFO: 6015 processes: 2038 internal, 3977 local.\r\nFAILED: Build did NOT complete successfully\r\n(base) suryanarayanay-macbookpro:tensorflow suryanarayanay$ ld -v\r\n@(#)PROGRAM:ld PROJECT:ld64-857.1\r\nBUILD 23:13:29 May 7 2023\r\nconfigured to support archs: armv6 armv7 armv7s arm64 arm64e arm64_32 i386 x86_64 x86_64h armv6m armv7k armv7m armv7em\r\nLTO support using: LLVM version 14.0.3, (clang-1403.0.22.14.1) (static support for 29, runtime is 29)\r\nTAPI support using: Apple TAPI version 14.0.3 (tapi-1403.0.5.1)\r\n(base) suryanarayanay-macbookpro:tensorflow suryanarayanay$ \r\n```\r\n\r\n\r\n\r\n",
"@SuryanarayanaY Yes, I can confirm that my current Xcode and Command Line Tools versions are `14.3.1`, and they include clang version `14.0.3`. (There is currently no clang `14.3`.)",
"@nitins17 , Could you please have a look into the issue. Thanks!",
"It is likely still picking up the macOS version of realpath. See https://github.com/tensorflow/tensorflow/issues/60088#issuecomment-1499794693 ",
"@sushreebarsa @SuryanarayanaY @nitins17 Thank you for your assistance. It was indeed due to the use of `realpath` that caused this compilation error. I have now resolved this issue.",
"It is necessary to install `coreutils` and configure environment variables before proceeding with the official tutorial. I have placed the my tutorial [here](https://github.com/sun1638650145/Libraries-and-Extensions-for-TensorFlow-for-Apple-Silicon/blob/main/tutorials/tensorflow/tensorflow.md), hoping it can also assist those in need.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61341\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61341\">No</a>\n"
] | 2023-07-20T09:33:43 | 2023-08-07T02:23:05 | 2023-08-07T02:23:02 | NONE | null | null | null | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
tf 2.13
### Custom code
No
### OS platform and distribution
macOS 13.4.1
### Mobile device
None
### Python version
3.11
### Bazel version
5.3.0-homebrew
### GCC/compiler version
Apple clang version 14.0.3 (clang-1403.0.22.14.1)
### CUDA/cuDNN version
None
### GPU model and memory
None
### Current behavior?
After failing to build tensorflow using the default options, I attempted the solution suggested in this issue (https://github.com/tensorflow/tensorflow/issues/60179). However, I found that installing `coreutils` directly still resulted in the same problem.
### Standalone code to reproduce the issue
Default settings used for all options.
```shell
bazel build //tensorflow/tools/pip_package:build_pip_package
```
### Relevant log output
```shell
INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=80
INFO: Reading rc options for 'build' from /Users/sunruiqi/Desktop/tensorflow-2.13.0/.bazelrc:
Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /Users/sunruiqi/Desktop/tensorflow-2.13.0/.bazelrc:
'build' options: --define framework_shared_object=true --define tsl_protobuf_header_only=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library --experimental_link_static_libraries_once=false --incompatible_enforce_config_setting_visibility
INFO: Reading rc options for 'build' from /Users/sunruiqi/Desktop/tensorflow-2.13.0/.tf_configure.bazelrc:
'build' options: --action_env PYTHON_BIN_PATH=/Users/sunruiqi/miniconda3/envs/tensorflow-macos/bin/python3 --action_env PYTHON_LIB_PATH=/Users/sunruiqi/miniconda3/envs/tensorflow-macos/lib/python3.11/site-packages --python_path=/Users/sunruiqi/miniconda3/envs/tensorflow-macos/bin/python3
INFO: Reading rc options for 'build' from /Users/sunruiqi/Desktop/tensorflow-2.13.0/.bazelrc:
'build' options: --deleted_packages=tensorflow/compiler/mlir/tfrt,tensorflow/compiler/mlir/tfrt/benchmarks,tensorflow/compiler/mlir/tfrt/jit/python_binding,tensorflow/compiler/mlir/tfrt/jit/transforms,tensorflow/compiler/mlir/tfrt/python_tests,tensorflow/compiler/mlir/tfrt/tests,tensorflow/compiler/mlir/tfrt/tests/ir,tensorflow/compiler/mlir/tfrt/tests/analysis,tensorflow/compiler/mlir/tfrt/tests/jit,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_tfrt,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_jitrt,tensorflow/compiler/mlir/tfrt/tests/tf_to_corert,tensorflow/compiler/mlir/tfrt/tests/tf_to_tfrt_data,tensorflow/compiler/mlir/tfrt/tests/saved_model,tensorflow/compiler/mlir/tfrt/transforms/lhlo_gpu_to_tfrt_gpu,tensorflow/core/runtime_fallback,tensorflow/core/runtime_fallback/conversion,tensorflow/core/runtime_fallback/kernel,tensorflow/core/runtime_fallback/opdefs,tensorflow/core/runtime_fallback/runtime,tensorflow/core/runtime_fallback/util,tensorflow/core/tfrt/eager,tensorflow/core/tfrt/eager/backends/cpu,tensorflow/core/tfrt/eager/backends/gpu,tensorflow/core/tfrt/eager/core_runtime,tensorflow/core/tfrt/eager/cpp_tests/core_runtime,tensorflow/core/tfrt/gpu,tensorflow/core/tfrt/run_handler_thread_pool,tensorflow/core/tfrt/runtime,tensorflow/core/tfrt/saved_model,tensorflow/core/tfrt/graph_executor,tensorflow/core/tfrt/saved_model/tests,tensorflow/core/tfrt/tpu,tensorflow/core/tfrt/utils,tensorflow/core/tfrt/utils/debug
INFO: Found applicable config definition build:short_logs in file /Users/sunruiqi/Desktop/tensorflow-2.13.0/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /Users/sunruiqi/Desktop/tensorflow-2.13.0/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:macos in file /Users/sunruiqi/Desktop/tensorflow-2.13.0/.bazelrc: --apple_platform_type=macos --copt=-DGRPC_BAZEL_BUILD --copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++17 --host_cxxopt=-std=c++17
INFO: Analyzed target //tensorflow/tools/pip_package:build_pip_package (611 packages loaded, 37637 targets configured).
INFO: Found 1 target...
ERROR: /Users/sunruiqi/Desktop/tensorflow-2.13.0/tensorflow/BUILD:1134:21: declared output 'tensorflow/libtensorflow_framework.2.dylib' was not created by genrule. This is probably because the genrule actually didn't create this output, or because the output was a directory and the genrule was run remotely (note that only the contents of declared file outputs are copied from genrules run remotely)
ERROR: /Users/sunruiqi/Desktop/tensorflow-2.13.0/tensorflow/BUILD:1134:21: Executing genrule //tensorflow:libtensorflow_framework.2.dylib_sym failed: not all outputs were created or valid
realpath: illegal option -- -
usage: realpath [-q] [path ...]
Target //tensorflow/tools/pip_package:build_pip_package failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 438.472s, Critical Path: 37.63s
INFO: 5437 processes: 1278 internal, 4159 local.
FAILED: Build did NOT complete successfully
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61341/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61340 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61340/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61340/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61340/events | https://github.com/tensorflow/tensorflow/issues/61340 | 1,813,268,737 | I_kwDOArmXAs5sFEkB | 61,340 | `tf.data.Dataset` only supports Python-style iteration in eager mode or within tf.function. | {
"login": "lmx666-gif",
"id": 52613795,
"node_id": "MDQ6VXNlcjUyNjEzNzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/52613795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lmx666-gif",
"html_url": "https://github.com/lmx666-gif",
"followers_url": "https://api.github.com/users/lmx666-gif/followers",
"following_url": "https://api.github.com/users/lmx666-gif/following{/other_user}",
"gists_url": "https://api.github.com/users/lmx666-gif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lmx666-gif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lmx666-gif/subscriptions",
"organizations_url": "https://api.github.com/users/lmx666-gif/orgs",
"repos_url": "https://api.github.com/users/lmx666-gif/repos",
"events_url": "https://api.github.com/users/lmx666-gif/events{/privacy}",
"received_events_url": "https://api.github.com/users/lmx666-gif/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1114343535,
"node_id": "MDU6TGFiZWwxMTE0MzQzNTM1",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:data",
"name": "comp:data",
"color": "0052cc",
"default": false,
"description": "tf.data related issues"
},
{
"id": 2691123225,
"node_id": "MDU6TGFiZWwyNjkxMTIzMjI1",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:tf.function",
"name": "comp:tf.function",
"color": "0052cc",
"default": false,
"description": "tf.function related issues"
}
] | closed | false | {
"login": "tilakrayal",
"id": 81610181,
"node_id": "MDQ6VXNlcjgxNjEwMTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/81610181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tilakrayal",
"html_url": "https://github.com/tilakrayal",
"followers_url": "https://api.github.com/users/tilakrayal/followers",
"following_url": "https://api.github.com/users/tilakrayal/following{/other_user}",
"gists_url": "https://api.github.com/users/tilakrayal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tilakrayal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tilakrayal/subscriptions",
"organizations_url": "https://api.github.com/users/tilakrayal/orgs",
"repos_url": "https://api.github.com/users/tilakrayal/repos",
"events_url": "https://api.github.com/users/tilakrayal/events{/privacy}",
"received_events_url": "https://api.github.com/users/tilakrayal/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "tilakrayal",
"id": 81610181,
"node_id": "MDQ6VXNlcjgxNjEwMTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/81610181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tilakrayal",
"html_url": "https://github.com/tilakrayal",
"followers_url": "https://api.github.com/users/tilakrayal/followers",
"following_url": "https://api.github.com/users/tilakrayal/following{/other_user}",
"gists_url": "https://api.github.com/users/tilakrayal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tilakrayal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tilakrayal/subscriptions",
"organizations_url": "https://api.github.com/users/tilakrayal/orgs",
"repos_url": "https://api.github.com/users/tilakrayal/repos",
"events_url": "https://api.github.com/users/tilakrayal/events{/privacy}",
"received_events_url": "https://api.github.com/users/tilakrayal/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@lmx666-gif,\r\nI tried to execute the mentioned code and it was failing with the different error. Kindly find the gist of it [here](https://colab.research.google.com/gist/tilakrayal/5f35a72261875affb691a5549eaa5ca3/untitled1250.ipynb) and provide the complete code, dependencies and the tensorflow version you are using which helps to analyse the issue in an effective way. \r\n\r\n**tfds** is compatible with both eager and graph modes, but that doesn't mean they can be used the same way in either context.\r\n If you want a tensor representation of the inputs, you can use **outputs = dataset.make_one_shot_iterator().get_next().**\r\nAlso when you apply the `tf.function` decorator to a Python function that uses `tf.data.Dataset`, Tensorflow 2.0 will automatically convert the Python function into a graph that can be optimized for performance. This can lead to significant performance improvements, especially when working with large datasets.\r\nThank you!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"Closing this issue as stale. Please reopen if this is still a valid request. Thank you!"
] | 2023-07-20T06:34:55 | 2023-10-21T01:01:58 | 2023-10-21T01:01:57 | NONE | null | null | null | The first is the map function:
def map_function(example):
    feature_map = {"wav_raw": tf.io.FixedLenFeature([], tf.string)}
    parsed_example = tf.io.parse_single_example(example, features=feature_map)
    wav_slice = tf.io.decode_raw(parsed_example["wav_raw"], out_type=tf.float64)
    wav_slice = tf.cast(wav_slice, tf.float32) / 2 ** 15
    return wav_slice

The second one is the training process:

for epoch in range(args.num_epochs):
    trainset = tf.data.TFRecordDataset(args.trainset_tfrecords_path)
    trainset = trainset.map(map_func=map_function,
                            num_parallel_calls=num_cpus)  # num_parallel_calls should be the number of cpu cores
    # trainset = trainset.shuffle(buffer_size=args.batch_size * 200, reshuffle_each_iteration=True)
    trainset = trainset.batch(batch_size=args.batch_size)
    trainset = trainset.prefetch(buffer_size=args.batch_size)
    # train_loss for each epoch
    train_loss_epoch = []
    train_loss = 0.0
    # record the train time for each epoch
    start = time.time()
    # mask parameters
    # use the EMA model to choose the mask indices
    binary_mask = RandomMaskingGenerator(input_size, frame_length, mask_ratio)
    # bmr: 0 means masked, 1 means unmasked
    # bm_T: 1 means masked, 0 means unmasked
    # bm, bm_T = binary_mask()
    for step, _input in enumerate(trainset):
        bm, bm_T, _ = binary_mask.random_mask(_input.shape[0], alpha_e_max)  # .totally_random_mask(_input.shape[0])
        print("_input", _input)
        # print("bm_shape", bm.shape)
        # print("bm_T_shape", bm_T.shape)
        loss_value = train_step(_input, _input * bm, bm_T)
        loss_float = float(loss_value)
        train_loss_epoch.append(loss_float)
        # Calculate the accumulated train loss value
        train_loss += loss_float
    # average train loss for each epoch
    train_loss /= (step + 1)
    train_loss_all.append(train_loss)
    # print log
    log = "train epoch {}/{}, train_loss = {:.06f}, time = {:.06f}"

The third one is the train_step function:

@tf.function
def train_step(_input, _input_mask, bm_T):
    with tf.GradientTape() as tape:
        enc_output, batch_mean, batch_var = sem_enc(_input_mask)
        # feed the encoder output into the semantic decoder
        print("main:", enc_output)
        _output = sem_dec([enc_output, batch_mean, batch_var])
        loss_value = mse_loss(tf.multiply(_input, bm_T), _output)
        tf.print(loss_value)
        loss_whole = loss_value
    grads = tape.gradient(loss_whole, weights_all)  # compute gradients
    optimizer.apply_gradients(zip(grads, weights_all))  # update parameters
    return loss_whole

The _output generated by sem_dec() is the value I wanted to convert from a KerasTensor to a tf.Tensor; I can send you the whole code if you need it, this is my email: [email protected]. Thank you for your reply!
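For context on the error in the title, here is a minimal, self-contained sketch (not the reporter's actual pipeline; the toy dataset and the accumulation function are illustrative placeholders) of the Python-style iteration that `tf.data.Dataset` supports in eager mode and inside a `tf.function`:

```python
import tensorflow as tf

# Toy stand-in for the TFRecord pipeline above (hypothetical data).
dataset = tf.data.Dataset.from_tensor_slices(tf.random.normal([32, 8])).batch(4)

# Eager mode (the TF2 default): plain Python iteration works directly.
for batch in dataset:
    print(batch.shape)

# Inside tf.function: the same for-loop is traced into a graph loop by AutoGraph.
@tf.function
def sum_batches(ds):
    total = tf.constant(0.0)
    for batch in ds:
        total += tf.reduce_sum(batch)
    return total

print(sum_batches(dataset))
```

Outside those two contexts the dataset cannot be iterated with a plain for loop, which is what the quoted error message refers to.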
_Originally posted by @lmx666-gif in https://github.com/tensorflow/tensorflow/issues/61307#issuecomment-1643164557_
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61340/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61339 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61339/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61339/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61339/events | https://github.com/tensorflow/tensorflow/pull/61339 | 1,813,222,448 | PR_kwDOArmXAs5V9elY | 61,339 | PR https://github.com/tensorflow/tensorflow/pull/61309: Fix MSVC compile errors | {
"login": "johnnkp",
"id": 22496821,
"node_id": "MDQ6VXNlcjIyNDk2ODIx",
"avatar_url": "https://avatars.githubusercontent.com/u/22496821?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnnkp",
"html_url": "https://github.com/johnnkp",
"followers_url": "https://api.github.com/users/johnnkp/followers",
"following_url": "https://api.github.com/users/johnnkp/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnkp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnnkp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnkp/subscriptions",
"organizations_url": "https://api.github.com/users/johnnkp/orgs",
"repos_url": "https://api.github.com/users/johnnkp/repos",
"events_url": "https://api.github.com/users/johnnkp/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnnkp/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 987666414,
"node_id": "MDU6TGFiZWw5ODc2NjY0MTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/ready%20to%20pull",
"name": "ready to pull",
"color": "2cd643",
"default": false,
"description": "PR ready for merge process"
},
{
"id": 1169364458,
"node_id": "MDU6TGFiZWwxMTY5MzY0NDU4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:S",
"name": "size:S",
"color": "adafea",
"default": false,
"description": "CL Change Size: Small"
},
{
"id": 1478826728,
"node_id": "MDU6TGFiZWwxNDc4ODI2NzI4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:core",
"name": "comp:core",
"color": "024391",
"default": false,
"description": "issues related to core part of tensorflow"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-07-20T06:03:09 | 2023-07-25T18:53:56 | 2023-07-25T18:53:56 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61339",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61339",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61339.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61339.patch",
"merged_at": "2023-07-25T18:53:56"
} | This is a finalized PR of https://github.com/tensorflow/tensorflow/pull/61309. Add `int tf_min/max()` overloading, fix operator errors and casting errors. | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61339/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61338 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61338/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61338/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61338/events | https://github.com/tensorflow/tensorflow/issues/61338 | 1,813,211,134 | I_kwDOArmXAs5sE2f- | 61,338 | How to get raw buffer pointer from python tf.Tensor | {
"login": "zhumakhan",
"id": 26182270,
"node_id": "MDQ6VXNlcjI2MTgyMjcw",
"avatar_url": "https://avatars.githubusercontent.com/u/26182270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhumakhan",
"html_url": "https://github.com/zhumakhan",
"followers_url": "https://api.github.com/users/zhumakhan/followers",
"following_url": "https://api.github.com/users/zhumakhan/following{/other_user}",
"gists_url": "https://api.github.com/users/zhumakhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhumakhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhumakhan/subscriptions",
"organizations_url": "https://api.github.com/users/zhumakhan/orgs",
"repos_url": "https://api.github.com/users/zhumakhan/repos",
"events_url": "https://api.github.com/users/zhumakhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhumakhan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473173272,
"node_id": "MDU6TGFiZWw0NzMxNzMyNzI=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:feature",
"name": "type:feature",
"color": "159b2e",
"default": false,
"description": "Feature requests"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1478826728,
"node_id": "MDU6TGFiZWwxNDc4ODI2NzI4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:core",
"name": "comp:core",
"color": "024391",
"default": false,
"description": "issues related to core part of tensorflow"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | closed | false | {
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @zhumakhan ,\r\n\r\nAFAIK, tf.Tensor has no method for getting pointer directly. But we have method called `ref` which can return a hashable reference object to the Tensor. Could you please check the source [here](https://www.tensorflow.org/api_docs/python/tf/Tensor#ref) for more details and let us know whether this can solve your purpose.\r\n\r\nThanks!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further."
] | 2023-07-20T05:53:22 | 2023-08-04T01:51:45 | 2023-08-04T01:51:45 | NONE | null | null | null | ### Issue type
Feature Request
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
2.13
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
In the C++ API, tensorflow::Tensor has a `data()` method which returns a pointer to the underlying memory buffer, while in the Python API tf.Tensor does not provide a way to get a raw data pointer. Is there any solution or workaround for this?
### Standalone code to reproduce the issue
```shell
tensorflow::Tensor.data()
tf.Tensor
```
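Not an official raw-pointer API, but for reference, a minimal sketch of two common workarounds on the Python side, both assuming a CPU-resident eager tensor:

```python
import tensorflow as tf

t = tf.constant([1.0, 2.0, 3.0])

# Workaround 1: the NumPy view of a CPU eager tensor usually shares the
# underlying buffer, so its ctypes address can act as a raw pointer for
# C-level interop (no lifetime guarantees beyond the array/tensor objects).
addr = t.numpy().ctypes.data
print(hex(addr))

# Workaround 2: hand the buffer to another framework or library through the
# DLPack protocol instead of exposing a Python-visible pointer.
capsule = tf.experimental.dlpack.to_dlpack(t)
```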
### Relevant log output
_No response_ | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61338/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61337 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61337/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61337/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61337/events | https://github.com/tensorflow/tensorflow/issues/61337 | 1,813,024,096 | I_kwDOArmXAs5sEI1g | 61,337 | AttributeError: cython_sources when installing tflite-model-maker | {
"login": "ljmerza",
"id": 4085765,
"node_id": "MDQ6VXNlcjQwODU3NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4085765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ljmerza",
"html_url": "https://github.com/ljmerza",
"followers_url": "https://api.github.com/users/ljmerza/followers",
"following_url": "https://api.github.com/users/ljmerza/following{/other_user}",
"gists_url": "https://api.github.com/users/ljmerza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ljmerza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ljmerza/subscriptions",
"organizations_url": "https://api.github.com/users/ljmerza/orgs",
"repos_url": "https://api.github.com/users/ljmerza/repos",
"events_url": "https://api.github.com/users/ljmerza/events{/privacy}",
"received_events_url": "https://api.github.com/users/ljmerza/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473173351,
"node_id": "MDU6TGFiZWw0NzMxNzMzNTE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:build/install",
"name": "type:build/install",
"color": "159b2e",
"default": false,
"description": "Build and install issues"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
},
{
"id": 5664422260,
"node_id": "LA_kwDOArmXAs8AAAABUaA5dA",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TFLiteModelMaker",
"name": "TFLiteModelMaker",
"color": "257569",
"default": false,
"description": "TFLite Model Maker related issues"
}
] | open | false | {
"login": "lu-wang-g",
"id": 47436172,
"node_id": "MDQ6VXNlcjQ3NDM2MTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/47436172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lu-wang-g",
"html_url": "https://github.com/lu-wang-g",
"followers_url": "https://api.github.com/users/lu-wang-g/followers",
"following_url": "https://api.github.com/users/lu-wang-g/following{/other_user}",
"gists_url": "https://api.github.com/users/lu-wang-g/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lu-wang-g/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lu-wang-g/subscriptions",
"organizations_url": "https://api.github.com/users/lu-wang-g/orgs",
"repos_url": "https://api.github.com/users/lu-wang-g/repos",
"events_url": "https://api.github.com/users/lu-wang-g/events{/privacy}",
"received_events_url": "https://api.github.com/users/lu-wang-g/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lu-wang-g",
"id": 47436172,
"node_id": "MDQ6VXNlcjQ3NDM2MTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/47436172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lu-wang-g",
"html_url": "https://github.com/lu-wang-g",
"followers_url": "https://api.github.com/users/lu-wang-g/followers",
"following_url": "https://api.github.com/users/lu-wang-g/following{/other_user}",
"gists_url": "https://api.github.com/users/lu-wang-g/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lu-wang-g/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lu-wang-g/subscriptions",
"organizations_url": "https://api.github.com/users/lu-wang-g/orgs",
"repos_url": "https://api.github.com/users/lu-wang-g/repos",
"events_url": "https://api.github.com/users/lu-wang-g/events{/privacy}",
"received_events_url": "https://api.github.com/users/lu-wang-g/received_events",
"type": "User",
"site_admin": false
},
{
"login": "pkgoogle",
"id": 132095473,
"node_id": "U_kgDOB9-d8Q",
"avatar_url": "https://avatars.githubusercontent.com/u/132095473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pkgoogle",
"html_url": "https://github.com/pkgoogle",
"followers_url": "https://api.github.com/users/pkgoogle/followers",
"following_url": "https://api.github.com/users/pkgoogle/following{/other_user}",
"gists_url": "https://api.github.com/users/pkgoogle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pkgoogle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pkgoogle/subscriptions",
"organizations_url": "https://api.github.com/users/pkgoogle/orgs",
"repos_url": "https://api.github.com/users/pkgoogle/repos",
"events_url": "https://api.github.com/users/pkgoogle/events{/privacy}",
"received_events_url": "https://api.github.com/users/pkgoogle/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"facing this on macOS too. Any solutions for this?",
"@ljmerza There are a few dependencies like numpy that need a specific version installation. Did you try to upgrade the numpy version by using` pip install numpy==1.23.4`. Could you also use colab fallback as a workaround?\r\nThank you!\r\n",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"@sushreebarsa I tried your fix and I'm still running into same error",
"Hi @ljmerza @temilolafaith \r\n\r\nThere is a known issue of tflite model maker installation if you are using python >=3.10. Please use Python 3.9 or [Mediapipe Model Maker](https://developers.google.com/mediapipe/solutions/model_maker) as a workaround.\r\n\r\nThanks.",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"> Hi @ljmerza @temilolafaith\r\n> \r\n> There is a known issue of tflite model maker installation if you are using python >=3.10. Please use Python 3.9 or [Mediapipe Model Maker](https://developers.google.com/mediapipe/solutions/model_maker) as a workaround.\r\n> \r\n> Thanks.\r\n\r\nAlso mediapipe-model-maker is broken. On windows I get this error during setup when it starts to get PyYAML dependency\r\n\r\n```\r\nDownloading PyYAML-5.4.1.tar.gz (175 kB)\r\n ---------------------------------------- 175.1/175.1 kB 2.6 MB/s eta 0:00:00\r\n Installing build dependencies ... done\r\n Getting requirements to build wheel ... error\r\n error: subprocess-exited-with-error\r\n\r\n × Getting requirements to build wheel did not run successfully.\r\n\r\n...\r\n\\Lib\\site-packages\\setuptools\\_distutils\\cmd.py\", line 107, in __getattr__\r\n raise AttributeError(attr)\r\n AttributeError: cython_sources\r\n [end of output]\r\n```",
"Hi @JohnFarl \r\n\r\nCould you please try again as I was successfully able to install mediapipe model maker on ubuntu(colab). Please find the [gist](https://colab.research.google.com/gist/pjpratik/dbb294b5537ee15bde72ca1471f0d834/61337.ipynb).\r\n\r\nThanks.",
"> Hi @JohnFarl\r\n> \r\n> Could you please try again as I was successfully able to install mediapipe model maker on ubuntu(colab). Please find the [gist](https://colab.research.google.com/gist/pjpratik/dbb294b5537ee15bde72ca1471f0d834/61337.ipynb).\r\n> \r\n> Thanks.\r\n\r\nI confirm that mediapipe model maker installation fails on Windows with the error mentioned above. (Python 3.11)",
"Hi @ljmerza, I was able to install if I downgraded Python to 3.9.17, can you try that out to see if you are able to continue that way?\r\n\r\nIf you are using conda you can do so like this:\r\n```py\r\nconda create -n your_env_name python=3.9\r\nconda activate your_env_name\r\npip install -q tflite-model-maker\r\n```",
"> Hi @ljmerza @temilolafaith\r\n> \r\n> There is a known issue of tflite model maker installation if you are using python >=3.10. Please use Python 3.9 or [Mediapipe Model Maker](https://developers.google.com/mediapipe/solutions/model_maker) as a workaround.\r\n> \r\n> Thanks.\r\n\r\nDowngraded from 3.10.9 to Python 3.9.6. \r\nStill experiencing the same error.\r\n\r\n% python3 --version\r\nPython 3.9.6",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61337\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61337\">No</a>\n",
"Problem is still here unsolved. This automatic issue close should be disabled or set to a reasonable amount of time. ",
"Hi @lu-wang-g, assigning this to you to consolidate tflite-model-maker issues. Thanks.",
"I got the same issue and was able to fix by using this [hack](https://discuss.python.org/t/getting-requirements-to-build-wheel-did-not-run-successfully-exit-code-1/30365/2):\r\n\r\n```bash\r\necho \"Cython<3\" > cython_constraint.txt\r\n$ PIP_CONSTRAINT=cython_constraint.txt pip install \"tflite-model-maker\"\r\n```",
"I was able to solve that dependency issue by installing `cython` and `pyyaml` before installing the model maker, based on [that answer on Stackoverflow](https://stackoverflow.com/a/77491847/1181162), with:\r\n```\r\n$ pip install \"cython<3.0.0\" wheel\r\n$ pip install \"pyyaml==5.4.1\" // pip install \"pyyaml<6.0\" probably also works\r\n```\r\n\r\nUntil the next dependency conflict...",
"What ended up working for me was a combination of @bduyng and @vpmalley's solutions.\r\n\r\nThis is what I did, in order\r\n```\r\npip install \"cython<3.0.0\" wheel // succeeded\r\n\r\npip install \"pyyaml==5.4.1\" // failed with the same error\r\n\r\necho \"Cython<3\" > cython_constraint.txt\r\nPIP_CONSTRAINT=cython_constraint.txt pip install \"pyyaml==5.4.1\" // succeeded\r\n\r\nPIP_CONSTRAINT=cython_constraint.txt pip install \"tflite-model-maker\" // failed with the same error\r\n\r\npip install \"tflite-model-maker\" // worked!\r\n```"
] | 2023-07-20T02:21:10 | 2024-01-24T01:09:12 | null | NONE | null | null | null | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
v2.13.0-rc2-7-g1cb1a030a62 2.13.0
### Custom code
Yes
### OS platform and distribution
ubuntu 20
### Mobile device
_No response_
### Python version
3.10.6
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
When trying to install `tflite-model-maker` I get an error:
```
running egg_info
writing lib3/PyYAML.egg-info/PKG-INFO
writing dependency_links to lib3/PyYAML.egg-info/dependency_links.txt
writing top-level names to lib3/PyYAML.egg-info/top_level.txt
Traceback (most recent call last):
File "/home/cubxi/.local/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/home/cubxi/.local/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/home/cubxi/.local/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "/tmp/pip-build-env-koysxeoc/overlay/local/lib/python3.10/dist-packages/setuptools/build_meta.py", line 341, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
File "/tmp/pip-build-env-koysxeoc/overlay/local/lib/python3.10/dist-packages/setuptools/build_meta.py", line 323, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-koysxeoc/overlay/local/lib/python3.10/dist-packages/setuptools/build_meta.py", line 338, in run_setup
exec(code, locals())
File "<string>", line 271, in <module>
File "/tmp/pip-build-env-koysxeoc/overlay/local/lib/python3.10/dist-packages/setuptools/__init__.py", line 107, in setup
return distutils.core.setup(**attrs)
File "/tmp/pip-build-env-koysxeoc/overlay/local/lib/python3.10/dist-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
File "/tmp/pip-build-env-koysxeoc/overlay/local/lib/python3.10/dist-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/tmp/pip-build-env-koysxeoc/overlay/local/lib/python3.10/dist-packages/setuptools/_distutils/dist.py", line 969, in run_commands
self.run_command(cmd)
File "/tmp/pip-build-env-koysxeoc/overlay/local/lib/python3.10/dist-packages/setuptools/dist.py", line 1234, in run_command
super().run_command(command)
File "/tmp/pip-build-env-koysxeoc/overlay/local/lib/python3.10/dist-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-koysxeoc/overlay/local/lib/python3.10/dist-packages/setuptools/command/egg_info.py", line 314, in run
self.find_sources()
File "/tmp/pip-build-env-koysxeoc/overlay/local/lib/python3.10/dist-packages/setuptools/command/egg_info.py", line 322, in find_sources
mm.run()
File "/tmp/pip-build-env-koysxeoc/overlay/local/lib/python3.10/dist-packages/setuptools/command/egg_info.py", line 551, in run
self.add_defaults()
File "/tmp/pip-build-env-koysxeoc/overlay/local/lib/python3.10/dist-packages/setuptools/command/egg_info.py", line 589, in add_defaults
sdist.add_defaults(self)
File "/tmp/pip-build-env-koysxeoc/overlay/local/lib/python3.10/dist-packages/setuptools/command/sdist.py", line 104, in add_defaults
super().add_defaults()
File "/tmp/pip-build-env-koysxeoc/overlay/local/lib/python3.10/dist-packages/setuptools/_distutils/command/sdist.py", line 251, in add_defaults
self._add_defaults_ext()
File "/tmp/pip-build-env-koysxeoc/overlay/local/lib/python3.10/dist-packages/setuptools/_distutils/command/sdist.py", line 336, in _add_defaults_ext
self.filelist.extend(build_ext.get_source_files())
File "<string>", line 201, in get_source_files
File "/tmp/pip-build-env-koysxeoc/overlay/local/lib/python3.10/dist-packages/setuptools/_distutils/cmd.py", line 107, in __getattr__
raise AttributeError(attr)
AttributeError: cython_sources
```
### Standalone code to reproduce the issue
```shell
run `pip install -q tflite-model-maker`
```
### Relevant log output
_No response_ | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61337/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61337/timeline | null | reopened | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61336 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61336/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61336/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61336/events | https://github.com/tensorflow/tensorflow/issues/61336 | 1,812,966,508 | I_kwDOArmXAs5sD6xs | 61,336 | Using F1 Score for Model Checkpoint throws an error | {
"login": "kurkurzz",
"id": 64152220,
"node_id": "MDQ6VXNlcjY0MTUyMjIw",
"avatar_url": "https://avatars.githubusercontent.com/u/64152220?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kurkurzz",
"html_url": "https://github.com/kurkurzz",
"followers_url": "https://api.github.com/users/kurkurzz/followers",
"following_url": "https://api.github.com/users/kurkurzz/following{/other_user}",
"gists_url": "https://api.github.com/users/kurkurzz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kurkurzz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kurkurzz/subscriptions",
"organizations_url": "https://api.github.com/users/kurkurzz/orgs",
"repos_url": "https://api.github.com/users/kurkurzz/repos",
"events_url": "https://api.github.com/users/kurkurzz/events{/privacy}",
"received_events_url": "https://api.github.com/users/kurkurzz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 390482148,
"node_id": "MDU6TGFiZWwzOTA0ODIxNDg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/awaiting%20review",
"name": "awaiting review",
"color": "bc3869",
"default": false,
"description": "Pull request awaiting review"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 1097545817,
"node_id": "MDU6TGFiZWwxMDk3NTQ1ODE3",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:apis",
"name": "comp:apis",
"color": "0052cc",
"default": false,
"description": "Highlevel API related issues"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | open | false | {
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @kurkurzz ,\r\n\r\nPlease provide the complete code snippet to reproduce the issue reported. I've tried to reproduce the issue [here](https://colab.research.google.com/gist/Varsha-anjanappa/6855e9a19e74be64edca3d3d45a85121/61336_v1.ipynb), please provide the dataset or dummy data.\r\n\r\nThank you!!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"I am having the same issue but with tracking MSE. This is also seen by others (https://stackoverflow.com/questions/76701617/the-following-arguments-are-not-supported-with-the-native-keras-format-opti). Appears to be related to ModelCheckpoint and passing an \"options\" argument which should no longer be present in the latest version of Keras.",
"Hello,\r\nThis issue occurs when you try to use the F1Score metric with the default value for the weight argument in a multi-class or multi-label problem. This works only for binary classification problems. To make this work, you need to add 'macro' or 'weighted' depending on your data class imbalance in the F1Score instance during model compilation. \r\nThis should fix the issue. ",
"Hi @kurkurzz ,\r\n\r\nIn the current implementation,for F1-score metric if we go with default value for argument `average = None` and the use case is either `multi-class classification` or `multi-label classification` then this problem occurs. With all other options like `average = 'weighted'` or `'micro'` or `'macro'` it works as intended. Attached [gist](https://colab.sandbox.google.com/gist/SuryanarayanaY/b183243a24924ebe56c63c7622288587/f1_score.ipynb) for reference.\r\n\r\nIf `average=None`, no averaging is performed and `result()` will return the score for each class which is causing the error.",
"Hi @kurkurzz ,\r\n\r\nSince monitor = val_f1_score returns f1_score for each class when averaging=None, it is not possible to `save_best_only` model as it it ambiguous to compare the arrays of multiple values which is best.Hence the error but ambiguous.\r\n\r\nFor this case I am trying to propose a PR to fall back to `save_best_only = False` instead of raising an exception."
] | 2023-07-20T01:10:25 | 2024-01-30T09:55:59 | null | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.13
### Custom code
Yes
### OS platform and distribution
Google Colab
### Mobile device
_No response_
### Python version
3.10
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
T4 15GB
### Current behavior?
I was using `tf.keras.metrics.F1Score` in `model.compile`. It works well: it returns f1_score and val_f1_score after every epoch. However, the problem arises if I also include `tf.keras.callbacks.ModelCheckpoint` in `model.fit`; I get an error after the first training epoch.
The error is `The following argument(s) are not supported with the native Keras format: ['options']` if I use `monitor='val_loss'`, and `ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()` if I use `monitor='val_f1_score'`.
Additional context: I am training a multi-label classification model. This might be the cause.
### Standalone code to reproduce the issue
```shell
import os

import tensorflow as tf
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras import models, layers
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
def create_model():
base_model = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
model = models.Sequential()
model.add(base_model)
model.add(layers.GlobalAveragePooling2D())
model.add(layers.Dense(10, activation='sigmoid'))
return model
model = create_model()
model_checkpoint = tf.keras.callbacks.ModelCheckpoint(
filepath=os.path.join('v5.keras'),
monitor='val_f1_score',
mode='max',
save_best_only=True
)
model.compile(
loss='binary_crossentropy',
optimizer=tf.keras.optimizers.Adam(learning_rate=0.00003),
metrics = [ tf.keras.metrics.F1Score(threshold=0.5)]
)
history = model.fit(
X_train,
y_train,
batch_size=64,
epochs=1,
validation_data=(X_test, y_test),
callbacks=[model_checkpoint]
)
```
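Based on the workaround discussed in the comment thread (with `average=None` the metric returns one F1 value per class, which `save_best_only` cannot compare), here is a minimal sketch of the adjusted setup; the tiny model and the checkpoint path are placeholders, not the reporter's code:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="sigmoid")])

model.compile(
    loss="binary_crossentropy",
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.00003),
    # 'macro' (or 'weighted') reduces the per-class F1 scores to one scalar,
    # so ModelCheckpoint can compare epochs when monitoring val_f1_score.
    metrics=[tf.keras.metrics.F1Score(average="macro", threshold=0.5)],
)

model_checkpoint = tf.keras.callbacks.ModelCheckpoint(
    filepath="v5.keras",          # placeholder path
    monitor="val_f1_score",
    mode="max",
    save_best_only=True,
)
```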
### Relevant log output
_No response_ | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61336/timeline | null | null | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61335 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61335/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61335/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61335/events | https://github.com/tensorflow/tensorflow/pull/61335 | 1,812,951,662 | PR_kwDOArmXAs5V8ka5 | 61,335 | [oneDNN] Fix failing resnet50 benchmark tests with V2 | {
"login": "kanvi-nervana",
"id": 42224278,
"node_id": "MDQ6VXNlcjQyMjI0Mjc4",
"avatar_url": "https://avatars.githubusercontent.com/u/42224278?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kanvi-nervana",
"html_url": "https://github.com/kanvi-nervana",
"followers_url": "https://api.github.com/users/kanvi-nervana/followers",
"following_url": "https://api.github.com/users/kanvi-nervana/following{/other_user}",
"gists_url": "https://api.github.com/users/kanvi-nervana/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kanvi-nervana/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kanvi-nervana/subscriptions",
"organizations_url": "https://api.github.com/users/kanvi-nervana/orgs",
"repos_url": "https://api.github.com/users/kanvi-nervana/repos",
"events_url": "https://api.github.com/users/kanvi-nervana/events{/privacy}",
"received_events_url": "https://api.github.com/users/kanvi-nervana/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 390482148,
"node_id": "MDU6TGFiZWwzOTA0ODIxNDg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/awaiting%20review",
"name": "awaiting review",
"color": "bc3869",
"default": false,
"description": "Pull request awaiting review"
},
{
"id": 987666414,
"node_id": "MDU6TGFiZWw5ODc2NjY0MTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/ready%20to%20pull",
"name": "ready to pull",
"color": "2cd643",
"default": false,
"description": "PR ready for merge process"
},
{
"id": 1104829434,
"node_id": "MDU6TGFiZWwxMTA0ODI5NDM0",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:mkl",
"name": "comp:mkl",
"color": "0052cc",
"default": false,
"description": "MKL related issues"
},
{
"id": 1169364458,
"node_id": "MDU6TGFiZWwxMTY5MzY0NDU4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:S",
"name": "size:S",
"color": "adafea",
"default": false,
"description": "CL Change Size: Small"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "penpornk",
"id": 38085909,
"node_id": "MDQ6VXNlcjM4MDg1OTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/38085909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/penpornk",
"html_url": "https://github.com/penpornk",
"followers_url": "https://api.github.com/users/penpornk/followers",
"following_url": "https://api.github.com/users/penpornk/following{/other_user}",
"gists_url": "https://api.github.com/users/penpornk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/penpornk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/penpornk/subscriptions",
"organizations_url": "https://api.github.com/users/penpornk/orgs",
"repos_url": "https://api.github.com/users/penpornk/repos",
"events_url": "https://api.github.com/users/penpornk/events{/privacy}",
"received_events_url": "https://api.github.com/users/penpornk/received_events",
"type": "User",
"site_admin": false
},
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-07-20T00:49:26 | 2023-07-20T19:14:49 | 2023-07-20T19:14:49 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61335",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61335",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61335.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61335.patch",
"merged_at": "2023-07-20T19:14:49"
} | This PR fixes the following 2 benchmark tests with v2
//tensorflow/python/eager/benchmarks/resnet50:resnet50_test_cpu
//tensorflow/python/eager/benchmarks/resnet50:hvp_test_cpu
The failure started after this commit: https://github.com/tensorflow/tensorflow/commit/b44d7918070a2a192645af7b0c96b0c4e399161e | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61335/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61334 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61334/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61334/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61334/events | https://github.com/tensorflow/tensorflow/pull/61334 | 1,812,945,340 | PR_kwDOArmXAs5V8jBP | 61,334 | [oneDNN] Fix for swish and mish fusion | {
"login": "sachinmuradi",
"id": 43043975,
"node_id": "MDQ6VXNlcjQzMDQzOTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/43043975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinmuradi",
"html_url": "https://github.com/sachinmuradi",
"followers_url": "https://api.github.com/users/sachinmuradi/followers",
"following_url": "https://api.github.com/users/sachinmuradi/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinmuradi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinmuradi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinmuradi/subscriptions",
"organizations_url": "https://api.github.com/users/sachinmuradi/orgs",
"repos_url": "https://api.github.com/users/sachinmuradi/repos",
"events_url": "https://api.github.com/users/sachinmuradi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinmuradi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 390482148,
"node_id": "MDU6TGFiZWwzOTA0ODIxNDg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/awaiting%20review",
"name": "awaiting review",
"color": "bc3869",
"default": false,
"description": "Pull request awaiting review"
},
{
"id": 987666414,
"node_id": "MDU6TGFiZWw5ODc2NjY0MTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/ready%20to%20pull",
"name": "ready to pull",
"color": "2cd643",
"default": false,
"description": "PR ready for merge process"
},
{
"id": 1097545273,
"node_id": "MDU6TGFiZWwxMDk3NTQ1Mjcz",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:grappler",
"name": "comp:grappler",
"color": "0052cc",
"default": false,
"description": "Grappler related issues"
},
{
"id": 1169364458,
"node_id": "MDU6TGFiZWwxMTY5MzY0NDU4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:S",
"name": "size:S",
"color": "adafea",
"default": false,
"description": "CL Change Size: Small"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-07-20T00:42:11 | 2023-07-24T19:45:43 | 2023-07-24T19:45:43 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61334",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61334",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61334.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61334.patch",
"merged_at": "2023-07-24T19:45:43"
} | With the current pattern matching for Swish fusion in [remapper.cc ](https://github.com/tensorflow/tensorflow/blob/1cd941acb2ccd4582a52a223a11ebc8288c27fae/tensorflow/core/grappler/optimizers/remapper.cc#L1834), ops are fused into a 'Swish' op if one of the inputs to 'Mul' comes from the same op as the input to 'Sigmoid'. However, in the case reported in https://github.com/tensorflow/tensorflow/issues/60941

the ops should not be fused, because in this case the 'Split' op has two outputs. The fix for this corner case is to make sure that not only the input ops match, but the tensors match as well. The same issue exists in the 'Mish' fusion, so this PR also includes a fix for 'Mish' fusion. | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61334/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61332 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61332/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61332/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61332/events | https://github.com/tensorflow/tensorflow/pull/61332 | 1,812,826,223 | PR_kwDOArmXAs5V8IXr | 61,332 | [oneDNN] Add 2 new patterns for layernorm fusion | {
"login": "kanvi-nervana",
"id": 42224278,
"node_id": "MDQ6VXNlcjQyMjI0Mjc4",
"avatar_url": "https://avatars.githubusercontent.com/u/42224278?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kanvi-nervana",
"html_url": "https://github.com/kanvi-nervana",
"followers_url": "https://api.github.com/users/kanvi-nervana/followers",
"following_url": "https://api.github.com/users/kanvi-nervana/following{/other_user}",
"gists_url": "https://api.github.com/users/kanvi-nervana/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kanvi-nervana/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kanvi-nervana/subscriptions",
"organizations_url": "https://api.github.com/users/kanvi-nervana/orgs",
"repos_url": "https://api.github.com/users/kanvi-nervana/repos",
"events_url": "https://api.github.com/users/kanvi-nervana/events{/privacy}",
"received_events_url": "https://api.github.com/users/kanvi-nervana/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 390482148,
"node_id": "MDU6TGFiZWwzOTA0ODIxNDg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/awaiting%20review",
"name": "awaiting review",
"color": "bc3869",
"default": false,
"description": "Pull request awaiting review"
},
{
"id": 987666414,
"node_id": "MDU6TGFiZWw5ODc2NjY0MTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/ready%20to%20pull",
"name": "ready to pull",
"color": "2cd643",
"default": false,
"description": "PR ready for merge process"
},
{
"id": 1097545273,
"node_id": "MDU6TGFiZWwxMDk3NTQ1Mjcz",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:grappler",
"name": "comp:grappler",
"color": "0052cc",
"default": false,
"description": "Grappler related issues"
},
{
"id": 1169365682,
"node_id": "MDU6TGFiZWwxMTY5MzY1Njgy",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:L",
"name": "size:L",
"color": "adafea",
"default": false,
"description": "CL Change Size: Large"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "penpornk",
"id": 38085909,
"node_id": "MDQ6VXNlcjM4MDg1OTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/38085909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/penpornk",
"html_url": "https://github.com/penpornk",
"followers_url": "https://api.github.com/users/penpornk/followers",
"following_url": "https://api.github.com/users/penpornk/following{/other_user}",
"gists_url": "https://api.github.com/users/penpornk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/penpornk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/penpornk/subscriptions",
"organizations_url": "https://api.github.com/users/penpornk/orgs",
"repos_url": "https://api.github.com/users/penpornk/repos",
"events_url": "https://api.github.com/users/penpornk/events{/privacy}",
"received_events_url": "https://api.github.com/users/penpornk/received_events",
"type": "User",
"site_admin": false
},
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @ezhulenev, Can you please take a look at this PR? Thank you!"
] | 2023-07-19T22:29:34 | 2023-08-24T18:44:52 | 2023-08-24T18:44:51 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61332",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61332",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61332.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61332.patch",
"merged_at": "2023-08-24T18:44:51"
} | co-author: @ustcuna
The following pattern is seen in 3 models. It looks similar to the InstanceNorm pattern, but it is actually LayerNorm based on the reduction axis. Under the right conditions, this pattern will be fused as LayerNorm to improve performance.

With this change we saw a ~20% performance improvement for the 3 models.
These are the repo links for 2 of the 3 models:
BERT_LARGE : https://github.com/mlperf/training/tree/master/language_model/tensorflow/bert
BERT_BASE : https://github.com/google-research/bert
The other pattern is seen in another customer model and brings a 10-20% improvement:
 | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61332/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61331 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61331/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61331/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61331/events | https://github.com/tensorflow/tensorflow/pull/61331 | 1,812,715,749 | PR_kwDOArmXAs5V7wIG | 61,331 | [oneDNN] Add BFloat16 Specialization Functor for Mean Op | {
"login": "akhilgoe",
"id": 114951738,
"node_id": "U_kgDOBtoGOg",
"avatar_url": "https://avatars.githubusercontent.com/u/114951738?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akhilgoe",
"html_url": "https://github.com/akhilgoe",
"followers_url": "https://api.github.com/users/akhilgoe/followers",
"following_url": "https://api.github.com/users/akhilgoe/following{/other_user}",
"gists_url": "https://api.github.com/users/akhilgoe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akhilgoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akhilgoe/subscriptions",
"organizations_url": "https://api.github.com/users/akhilgoe/orgs",
"repos_url": "https://api.github.com/users/akhilgoe/repos",
"events_url": "https://api.github.com/users/akhilgoe/events{/privacy}",
"received_events_url": "https://api.github.com/users/akhilgoe/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 390482148,
"node_id": "MDU6TGFiZWwzOTA0ODIxNDg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/awaiting%20review",
"name": "awaiting review",
"color": "bc3869",
"default": false,
"description": "Pull request awaiting review"
},
{
"id": 987666414,
"node_id": "MDU6TGFiZWw5ODc2NjY0MTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/ready%20to%20pull",
"name": "ready to pull",
"color": "2cd643",
"default": false,
"description": "PR ready for merge process"
},
{
"id": 1097545273,
"node_id": "MDU6TGFiZWwxMDk3NTQ1Mjcz",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:grappler",
"name": "comp:grappler",
"color": "0052cc",
"default": false,
"description": "Grappler related issues"
},
{
"id": 1169364458,
"node_id": "MDU6TGFiZWwxMTY5MzY0NDU4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:S",
"name": "size:S",
"color": "adafea",
"default": false,
"description": "CL Change Size: Small"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @ezhulenev, can you please take a look at this PR? Thanks!",
"Hi @ezhulenev, Can you please take a look at this PR? Thanks!",
"Hi @ezhulenev - this PR got reverted, can you please point me to the error so we can fix it? Thanks."
] | 2023-07-19T20:54:36 | 2023-08-31T23:24:34 | 2023-08-28T22:06:01 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61331",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61331",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61331.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61331.patch",
"merged_at": "2023-08-28T22:06:01"
} | Currently, the BFloat16 Mean op accumulates in BFloat16, which may result in incorrect output. This prevents Mean from being used at the lower BFloat16 precision. This PR:
1. Just like the existing implementation of the Sum op, ensures BFloat16 Mean accumulation happens in FP32 by adding a Casting Specialization registration.
2. Adds Benchmark and Kernel tests to verify the implementation.
3. Moves Mean from the Deny List back to the Infer List and fixes a typo in the Infer List initialization. | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61331/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61330 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61330/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61330/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61330/events | https://github.com/tensorflow/tensorflow/issues/61330 | 1,812,318,866 | I_kwDOArmXAs5sBcqS | 61,330 | Tesla v100 Tensorflow CUDA Support | {
"login": "anand-shubham",
"id": 47324013,
"node_id": "MDQ6VXNlcjQ3MzI0MDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/47324013?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anand-shubham",
"html_url": "https://github.com/anand-shubham",
"followers_url": "https://api.github.com/users/anand-shubham/followers",
"following_url": "https://api.github.com/users/anand-shubham/following{/other_user}",
"gists_url": "https://api.github.com/users/anand-shubham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anand-shubham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anand-shubham/subscriptions",
"organizations_url": "https://api.github.com/users/anand-shubham/orgs",
"repos_url": "https://api.github.com/users/anand-shubham/repos",
"events_url": "https://api.github.com/users/anand-shubham/events{/privacy}",
"received_events_url": "https://api.github.com/users/anand-shubham/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473184161,
"node_id": "MDU6TGFiZWw0NzMxODQxNjE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:support",
"name": "type:support",
"color": "159b2e",
"default": false,
"description": "Support issues"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1097547538,
"node_id": "MDU6TGFiZWwxMDk3NTQ3NTM4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:gpu",
"name": "comp:gpu",
"color": "0052cc",
"default": false,
"description": "GPU related issues"
},
{
"id": 2477739347,
"node_id": "MDU6TGFiZWwyNDc3NzM5MzQ3",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.4",
"name": "TF 2.4",
"color": "5319e7",
"default": false,
"description": "for issues related to TF 2.4"
}
] | closed | false | {
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @anand-shubham ,\r\n\r\nPlease find the tested configurations of CUDA and cuDNN [here](https://www.tensorflow.org/install/source#gpu).\r\n\r\n\r\ntensorflow-2.4.0 | 3.6-3.8 | GCC 7.3.1 | Bazel 3.1.0 | 8.0 | 11.0\r\n-- | -- | -- | -- | -- | --\r\n\r\nFor TF2.4v you should use CUDA 11.0 and cuDNN 8.0 and GCC versions should be 7.3.1.\r\n\r\nPlease upgrade GCC to 7.3.1 and downgrade CUDA and cuDNN to above mentioned configurations because we can't assure backward compatibility of latest CUDA/cuDNN for older TF versions.\r\n\r\nPlease do above changes and let us know if still having problem.\r\n\r\nThanks!\r\n\r\n",
"Hi @SuryanarayanaY,\r\n\r\nWhile installing the Nvidia driver we have an option to select the CUDA TOOLKIT version. Does that affect the CUDA, cuDNN we install later?\r\nAs I have already tried with the For TF2.4v you should use CUDA 11.0 and cuDNN 8.0.\r\n\r\nThe GPU installed is Tesla V100 Volta Architecture. \r\n\r\nCurrent RHEL Version :Red Hat Enterprise Linux Server release 7.9 (Maipo) \r\n\r\nCurrent GCC Verison: gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44) .",
"Hi @anand-shubham ,\r\n\r\nAs per official documentation it recommends to install CUDA and cuDNN toolkits using Conda explicitly. As you are using TF2.4 version which is quite older versions which we are not supporting now.Also official documentation keeping only the instructions for latest TF release which you can find [here](https://www.tensorflow.org/install/pip#step-by-step_instructions). Please note that the attached instructions are applicable to latest TF version which is Tf2.13 now.\r\n\r\nI am attaching the instructions which I have used for Tf2.11 .\r\n\r\n```\r\nconda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0\r\nexport LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/\r\nmkdir -p $CONDA_PREFIX/etc/conda/activate.d\r\necho 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/' > $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh\r\n```\r\nYou can try replacing with `cudatoolkit=11.0 cudnn=8.0` and follow same instructions for path setting mentioned above.\r\n\r\nFirst you need to install GPU driver and verify it using `nvidia-smi` command and then follow the instructions mentioned above for CUDA/cuDNN toolkit installation and path setting.\r\n\r\nCould you also please check your GPU compatibility from Nvidia website may be from [here](https://developer.nvidia.com/cuda-gpus). \r\n\r\nBut we recommend to use latest TF versions that can be better supported by us. Thanks!\r\n",
"2023-07-24 18:29:10.026158: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:428] Loaded cuDNN version 8100\r\n2023-07-24 18:29:40.514393: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:219] failed to create cublas handle: cublas error\r\n2023-07-24 18:29:40.514447: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:221] Failure to initialize cublas may be due to OOM (cublas needs some free memory when you initialize it, and your deep-learning framework may have preallocated more than its fair share), or may be because this binary was not built with support for the GPU in your machine.\r\n2023-07-24 18:29:40.514769: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at conv_ops_fused_impl.h:621 : NOT_FOUND: No algorithm worked! Error messages:\r\n Profiling failure on CUDNN engine 1#TC: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 1: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 0#TC: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 0: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 2#TC: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 2: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 4#TC: UNKNOWN: CUDNN_STATUS_INTERNAL_ERROR\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 4: UNKNOWN: CUDNN_STATUS_INTERNAL_ERROR\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 6#TC: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 6: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 5#TC: UNKNOWN: CUDNN_STATUS_INTERNAL_ERROR\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 5: UNKNOWN: CUDNN_STATUS_INTERNAL_ERROR\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 7#TC: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 7: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\nTraceback (most recent call last):\r\n File \"/data01/PatternClassClassification/Model_2_PatternClass_imgSize150x150.py\", line 366, in <module>\r\n history = stn_model.fit(\r\n File \"/data01/Software_installations/Anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/utils/traceback_utils.py\", line 70, in error_handler\r\n raise e.with_traceback(filtered_tb) from None\r\n File \"/data01/Software_installations/Anaconda3/envs/gpu_env/lib/python3.10/site-packages/tensorflow/python/eager/execute.py\", 
line 52, in quick_execute\r\n tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,\r\ntensorflow.python.framework.errors_impl.NotFoundError: Graph execution error:\r\n\r\nDetected at node 'model_1/conv2d_5/Relu' defined at (most recent call last):\r\n File \"/data01/PatternClassClassification/Model_2.py\", line 366, in <module>\r\n history = stn_model.fit(\r\n File \"/data01/Software_installations/Anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/utils/traceback_utils.py\", line 65, in error_handler\r\n return fn(*args, **kwargs)\r\n File \"/data01/Software_installations/Anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/engine/training.py\", line 1650, in fit\r\n tmp_logs = self.train_function(iterator)\r\n File \"/data01/Software_installations/Anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/engine/training.py\", line 1249, in train_function\r\n return step_function(self, iterator)\r\n File \"/data01/Software_installations/Anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/engine/training.py\", line 1233, in step_function\r\n outputs = model.distribute_strategy.run(run_step, args=(data,))\r\n File \"/data01/Software_installations/Anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/engine/training.py\", line 1222, in run_step\r\n outputs = model.train_step(data)\r\n File \"/data01/Software_installations/Anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/engine/training.py\", line 1023, in train_step\r\n y_pred = self(x, training=True)\r\n File \"/data01/Software_installations/Anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/utils/traceback_utils.py\", line 65, in error_handler\r\n return fn(*args, **kwargs)\r\n File \"/data01/Software_installations/Anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/engine/training.py\", line 561, in __call__\r\n return super().__call__(*args, **kwargs)\r\n File \"/data01/Software_installations/Anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/utils/traceback_utils.py\", line 65, in error_handler\r\n return fn(*args, **kwargs)\r\n File \"/data01/Software_installations/Anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/engine/base_layer.py\", line 1132, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\n File \"/data01/Software_installations/Anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/utils/traceback_utils.py\", line 96, in error_handler\r\n return fn(*args, **kwargs)\r\n File \"/data01/Software_installations/Anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/engine/functional.py\", line 511, in call\r\n return self._run_internal_graph(inputs, training=training, mask=mask)\r\n File \"/data01/Software_installations/Anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/engine/functional.py\", line 668, in _run_internal_graph\r\n outputs = node.layer(*args, **kwargs)\r\n File \"/data01/Software_installations/Anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/utils/traceback_utils.py\", line 65, in error_handler\r\n return fn(*args, **kwargs)\r\n File \"/data01/Software_installations/Anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/engine/base_layer.py\", line 1132, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\n File \"/data01/Software_installations/Anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/utils/traceback_utils.py\", line 96, in error_handler\r\n return fn(*args, **kwargs)\r\n File 
\"/data01/Software_installations/Anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/layers/convolutional/base_conv.py\", line 314, in call\r\n return self.activation(outputs)\r\n File \"/data01/Software_installations/Anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/activations.py\", line 317, in relu\r\n return backend.relu(\r\n File \"/data01/Software_installations/Anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/backend.py\", line 5369, in relu\r\n x = tf.nn.relu(x)\r\nNode: 'model_1/conv2d_5/Relu'\r\nNo algorithm worked! Error messages:\r\n Profiling failure on CUDNN engine 1#TC: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 1: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 0#TC: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 0: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 2#TC: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 2: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 4#TC: UNKNOWN: CUDNN_STATUS_INTERNAL_ERROR\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 4: UNKNOWN: CUDNN_STATUS_INTERNAL_ERROR\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 6#TC: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 6: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 5#TC: UNKNOWN: CUDNN_STATUS_INTERNAL_ERROR\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 5: UNKNOWN: CUDNN_STATUS_INTERNAL_ERROR\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 7#TC: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n Profiling failure on CUDNN engine 7: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED\r\nin tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5150): 'status'\r\n [[{{node model_1/conv2d_5/Relu}}]] [Op:__inference_train_function_1992]\r\n2023-07-24 18:29:40.809535: W tensorflow/core/kernels/data/generator_dataset_op.cc:108] Error occurred when finalizing GeneratorDataset iterator: FAILED_PRECONDITION: Python interpreter state is not initialized. The process may be terminated.\r\n [[{{node PyFunc}}]]",
"> 2023-07-24 18:29:40.514447: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:221] Failure to initialize cublas may be due to OOM (cublas needs some free memory when you initialize it, and your deep-learning framework may have preallocated more than its fair share), or may be because this binary was not built with support for the GPU in your machine.\r\n\r\n@anand-shubham , Could you please refer to the above error and comment. This seems not an issue with tensorflow.",
"WARNING:tensorflow:`period` argument is deprecated. Please use `save_freq` to specify the frequency in number of batches seen.\r\nEpoch 1/200\r\n2023-07-26 16:31:45.274459: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:428] Loaded cuDNN version 8100\r\n2023-07-26 16:33:46.507694: I tensorflow/tsl/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory\r\n2023-07-26 16:33:46.508174: I tensorflow/tsl/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory\r\n2023-07-26 16:33:46.508206: W tensorflow/compiler/xla/stream_executor/gpu/asm_compiler.cc:85] Couldn't get ptxas version string: INTERNAL: Couldn't invoke ptxas --version\r\n2023-07-26 16:33:46.508519: I tensorflow/tsl/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory\r\n2023-07-26 16:33:46.508591: W tensorflow/compiler/xla/stream_executor/gpu/redzone_allocator.cc:318] INTERNAL: Failed to launch ptxas\r\nRelying on driver to perform ptx compilation.\r\nModify $PATH to customize ptxas location.\r\nThis message will be only logged once.\r\n2023-07-26 16:38:08.228703: I tensorflow/compiler/xla/service/service.cc:173] XLA service 0x1e177620 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\r\n2023-07-26 16:38:08.228760: I tensorflow/compiler/xla/service/service.cc:181] StreamExecutor device (0): GRID V100L-8Q, Compute Capability 7.0\r\n2023-07-26 16:38:08.234554: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:268] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.\r\n2023-07-26 16:38:08.256595: W tensorflow/compiler/xla/service/gpu/nvptx_helper.cc:56] Can't find libdevice directory ${CUDA_DIR}/nvvm/libdevice. This may result in compilation or runtime failures, if the program we try to run uses routines from libdevice.\r\nSearched for CUDA in the following directories:\r\n ./cuda_sdk_lib\r\n /usr/local/cuda-11.2\r\n /usr/local/cuda\r\n .\r\nYou can choose the search directory by setting xla_gpu_cuda_data_dir in HloModule's DebugOptions. For most apps, setting the environment variable XLA_FLAGS=--xla_gpu_cuda_data_dir=/path/to/cuda will work.\r\n2023-07-26 16:38:08.257645: W tensorflow/compiler/xla/service/gpu/llvm_gpu_backend/gpu_backend_lib.cc:326] libdevice is required by this HLO module but was not found at ./libdevice.10.bc\r\n2023-07-26 16:38:08.257856: I tensorflow/compiler/jit/xla_compilation_cache.cc:477] Compiled cluster using XLA! 
This line is logged at most once for the lifetime of the process.\r\n2023-07-26 16:38:08.257985: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_ops.cc:446 : INTERNAL: libdevice not found at ./libdevice.10.bc\r\n2023-07-26 16:38:08.280213: W tensorflow/compiler/xla/service/gpu/llvm_gpu_backend/gpu_backend_lib.cc:326] libdevice is required by this HLO module but was not found at ./libdevice.10.bc\r\n2023-07-26 16:38:08.280541: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_ops.cc:446 : INTERNAL: libdevice not found at ./libdevice.10.bc\r\n2023-07-26 16:38:09.127454: W tensorflow/compiler/xla/service/gpu/llvm_gpu_backend/gpu_backend_lib.cc:326] libdevice is required by this HLO module but was not found at ./libdevice.10.bc\r\n2023-07-26 16:38:09.127784: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_ops.cc:446 : INTERNAL: libdevice not found at ./libdevice.10.bc\r\n2023-07-26 16:38:09.149194: W tensorflow/compiler/xla/service/gpu/llvm_gpu_backend/gpu_backend_lib.cc:326] libdevice is required by this HLO module but was not found at ./libdevice.10.bc\r\n2023-07-26 16:38:09.149491: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_ops.cc:446 : INTERNAL: libdevice not found at ./libdevice.10.bc\r\n2023-07-26 16:39:35.064225: W tensorflow/compiler/xla/service/gpu/llvm_gpu_backend/gpu_backend_lib.cc:326] libdevice is required by this HLO module but was not found at ./libdevice.10.bc\r\n2023-07-26 16:39:35.064572: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_ops.cc:446 : INTERNAL: libdevice not found at ./libdevice.10.bc\r\n2023-07-26 16:39:35.085963: W tensorflow/compiler/xla/service/gpu/llvm_gpu_backend/gpu_backend_lib.cc:326] libdevice is required by this HLO module but was not found at ./libdevice.10.bc\r\n2023-07-26 16:39:35.086272: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_ops.cc:446 : INTERNAL: libdevice not found at ./libdevice.10.bc\r\n2023-07-26 16:39:47.420330: W tensorflow/compiler/xla/service/gpu/llvm_gpu_backend/gpu_backend_lib.cc:326] libdevice is required by this HLO module but was not found at ./libdevice.10.bc\r\n2023-07-26 16:39:47.420685: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_ops.cc:446 : INTERNAL: libdevice not found at ./libdevice.10.bc\r\n2023-07-26 16:39:47.442196: W tensorflow/compiler/xla/service/gpu/llvm_gpu_backend/gpu_backend_lib.cc:326] libdevice is required by this HLO module but was not found at ./libdevice.10.bc\r\n2023-07-26 16:39:47.442501: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_ops.cc:446 : INTERNAL: libdevice not found at ./libdevice.10.bc\r\nTraceback (most recent call last):\r\n File \"/data01/PatternClassClassification/test_gpu1.py\", line 71, in <module>\r\n history = model.fit(\r\n File \"/data01/Software_installations/anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/utils/traceback_utils.py\", line 70, in error_handler\r\n raise e.with_traceback(filtered_tb) from None\r\n File \"/data01/Software_installations/anaconda3/envs/gpu_env/lib/python3.10/site-packages/tensorflow/python/eager/execute.py\", line 52, in quick_execute\r\n tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,\r\ntensorflow.python.framework.errors_impl.InternalError: Graph execution error:\r\n\r\nDetected at node 'StatefulPartitionedCall_6' defined at (most recent call last):\r\n File \"/data01/PatternClassClassification/test_gpu1.py\", 
line 71, in <module>\r\n history = model.fit(\r\n File \"/data01/Software_installations/anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/utils/traceback_utils.py\", line 65, in error_handler\r\n return fn(*args, **kwargs)\r\n File \"/data01/Software_installations/anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/engine/training.py\", line 1650, in fit\r\n tmp_logs = self.train_function(iterator)\r\n File \"/data01/Software_installations/anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/engine/training.py\", line 1249, in train_function\r\n return step_function(self, iterator)\r\n File \"/data01/Software_installations/anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/engine/training.py\", line 1233, in step_function\r\n outputs = model.distribute_strategy.run(run_step, args=(data,))\r\n File \"/data01/Software_installations/anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/engine/training.py\", line 1222, in run_step\r\n outputs = model.train_step(data)\r\n File \"/data01/Software_installations/anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/engine/training.py\", line 1027, in train_step\r\n self.optimizer.minimize(loss, self.trainable_variables, tape=tape)\r\n File \"/data01/Software_installations/anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py\", line 527, in minimize\r\n self.apply_gradients(grads_and_vars)\r\n File \"/data01/Software_installations/anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py\", line 1140, in apply_gradients\r\n return super().apply_gradients(grads_and_vars, name=name)\r\n File \"/data01/Software_installations/anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py\", line 634, in apply_gradients\r\n iteration = self._internal_apply_gradients(grads_and_vars)\r\n File \"/data01/Software_installations/anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py\", line 1166, in _internal_apply_gradients\r\n return tf.__internal__.distribute.interim.maybe_merge_call(\r\n File \"/data01/Software_installations/anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py\", line 1216, in _distributed_apply_gradients_fn\r\n distribution.extended.update(\r\n File \"/data01/Software_installations/anaconda3/envs/gpu_env/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py\", line 1211, in apply_grad_to_update_var\r\n return self._update_step_xla(grad, var, id(self._var_key(var)))\r\nNode: 'StatefulPartitionedCall_6'\r\nlibdevice not found at ./libdevice.10.bc\r\n [[{{node StatefulPartitionedCall_6}}]] [Op:__inference_train_function_1141]\r\n2023-07-26 16:39:47.722164: W tensorflow/core/kernels/data/generator_dataset_op.cc:108] Error occurred when finalizing GeneratorDataset iterator: FAILED_PRECONDITION: Python interpreter state is not initialized. The process may be terminated.\r\n [[{{node PyFunc}}]]\r\n\r\n\r\n\r\nThe code runs with the CPU version of Tensorflow\r\n\r\nThe model is compiled and I can see the Architecture as the output, but when the training is about to start the program crashes",
"Hi @anand-shubham ,\r\n\r\nSince you are using TF2.4v which is not supported anymore, can you please try with any of latest TF versions and let us know if issue still persists on latest versions ?\r\n\r\nAlso we officially supports Ubuntu instructions only which may work for other linux distros also. I can see from your inputs its not problem with Ubuntu right ? In that case it may not be possible to look into this issue unless this is a problem with latest TF versions also.\r\n\r\nPlease try with latest TF version and let us know the outcome. Thanks!",
"hi @SuryanarayanaY ,\r\n\r\nThe above output is with TF2.11v",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61330\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61330\">No</a>\n",
"@SuryanarayanaY after installing CUDA cuDNN by installing without conda the CNN has worked, but there is no speedup is that a Tensorflow issue?\r\n\r\nThe ETA for 1 epoch is much higher than what I get when running on CPU.",
"Hi @anand-shubham ,\r\n\r\nCould you please submit a code snippet along with logs to test it on Ubuntu?\r\n\r\nThanks!\r\n\r\n",
"Also it seems now issue is quite different than earlier.If earlier issue resolved and current problem is different it would be better to create a new ticket if you are ok with it.\r\n\r\nThanks!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61330\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61330\">No</a>\n"
] | 2023-07-19T16:43:13 | 2023-08-16T01:46:44 | 2023-08-16T01:46:42 | NONE | null | null | null | ### Issue type
Support
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.4
### Custom code
Yes
### OS platform and distribution
RHEL 7.9
### Mobile device
_No response_
### Python version
3.6
### Bazel version
_No response_
### GCC/compiler version
4.3
### CUDA/cuDNN version
11/8, 10.1/7.6
### GPU model and memory
Tesla V100 2GB Vram
### Current behavior?
Attempting to fetch value instead of handling error Internal: failed to get device attribute 13 for device 0: CUDA_ERROR_UNKNOWN: unknown error.
NVIDIA-SMI gives the following output:| NVIDIA-SMI 450.51.05 Driver Version: 450.51.05 CUDA Version: 11.0
nvcc -V gives the following output:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Fri_Feb__8_19:08:17_PST_2019
Cuda compilation tools, release 10.1, V10.1.105
### Standalone code to reproduce the issue
```shell
Doesn't happen on Windows or Ubuntu systems.
```
### Relevant log output
_No response_ | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61330/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61330/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61329 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61329/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61329/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61329/events | https://github.com/tensorflow/tensorflow/issues/61329 | 1,811,975,812 | I_kwDOArmXAs5sAI6E | 61,329 | LSTM support for quantisation aware training. | {
"login": "TayyabaZainab0807",
"id": 36931312,
"node_id": "MDQ6VXNlcjM2OTMxMzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/36931312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TayyabaZainab0807",
"html_url": "https://github.com/TayyabaZainab0807",
"followers_url": "https://api.github.com/users/TayyabaZainab0807/followers",
"following_url": "https://api.github.com/users/TayyabaZainab0807/following{/other_user}",
"gists_url": "https://api.github.com/users/TayyabaZainab0807/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TayyabaZainab0807/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TayyabaZainab0807/subscriptions",
"organizations_url": "https://api.github.com/users/TayyabaZainab0807/orgs",
"repos_url": "https://api.github.com/users/TayyabaZainab0807/repos",
"events_url": "https://api.github.com/users/TayyabaZainab0807/events{/privacy}",
"received_events_url": "https://api.github.com/users/TayyabaZainab0807/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1661751498,
"node_id": "MDU6TGFiZWwxNjYxNzUxNDk4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TFLiteConverter",
"name": "TFLiteConverter",
"color": "bfdadc",
"default": false,
"description": "For issues related to TFLite converter"
}
] | closed | false | {
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@TayyabaZainab0807 Could you please have a look at the post-training quantization. As per this [thread](https://discuss.tensorflow.org/t/quantizing-lstm-layers/1876) of TF forum unfortunately, Quantization aware training is not supports LSTM/RNN layers yet. It is in the process to be implemented. Thank you!\r\n\r\n",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further."
] | 2023-07-19T13:39:46 | 2023-08-08T01:51:21 | 2023-08-08T01:51:21 | NONE | null | null | null | I wonder if quantization-aware training has support for LSTM? | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61329/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61328 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61328/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61328/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61328/events | https://github.com/tensorflow/tensorflow/issues/61328 | 1,811,895,049 | I_kwDOArmXAs5r_1MJ | 61,328 | Random predictions with intel-tensorflow when OMP_THREAD_LIMIT is set | {
"login": "fcouziniedevy",
"id": 53623276,
"node_id": "MDQ6VXNlcjUzNjIzMjc2",
"avatar_url": "https://avatars.githubusercontent.com/u/53623276?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fcouziniedevy",
"html_url": "https://github.com/fcouziniedevy",
"followers_url": "https://api.github.com/users/fcouziniedevy/followers",
"following_url": "https://api.github.com/users/fcouziniedevy/following{/other_user}",
"gists_url": "https://api.github.com/users/fcouziniedevy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fcouziniedevy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fcouziniedevy/subscriptions",
"organizations_url": "https://api.github.com/users/fcouziniedevy/orgs",
"repos_url": "https://api.github.com/users/fcouziniedevy/repos",
"events_url": "https://api.github.com/users/fcouziniedevy/events{/privacy}",
"received_events_url": "https://api.github.com/users/fcouziniedevy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 5244472328,
"node_id": "LA_kwDOArmXAs8AAAABOJhMCA",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/subtype:cpu-intel",
"name": "subtype:cpu-intel",
"color": "507561",
"default": false,
"description": "To track windows cpu issues"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | open | false | {
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @fcouziniedevy ,\r\n\r\nI'm unable to replicate your problem. The predictions using OMP_NUM_THREAD is constant and not random. Please refer the [gist](https://colab.research.google.com/gist/Varsha-anjanappa/ae9f5662513c39ace7c3a078e74d5bff/61328.ipynb).\r\n\r\nThank you!!",
"Hi,\r\nsorry my post was not consistent: the problematic env variable is *OMP_THREAD_LIMIT* not *OMP_NUM_THREADS*. I used the correct variable everywhere in my post except in the example code... I edited my first post and by modifiing your example colab (replacing \"OMP_NUM_THREAD\" by \"OMP_THREAD_LIMIT\"), I can reproduce the problem."
] | 2023-07-19T12:58:52 | 2023-08-29T20:41:44 | null | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
The problem arises when using *intel-tensorflow*
### Source
binary
### TensorFlow version
unknown, 2.13.0 (package intel_tensorflow-2.13.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl)
### Custom code
No
### OS platform and distribution
Linux Ubuntu 20.04
### Python version
3.8, 3.9, 3.10, 3.11
### Current behavior?
This problem only happens when using the *intel-tensorflow* package, which uses the MKL library. When the environment variable OMP_THREAD_LIMIT is set, the predictions of some standard models become random when running on CPU.
During the run, a warning from OMP is shown:
```
OMP: Warning #96: Cannot form a team with 36 threads, using 12 instead.
OMP: Hint Consider unsetting KMP_DEVICE_THREAD_LIMIT (KMP_ALL_THREADS), KMP_TEAMS_THREAD_LIMIT, and OMP_THREAD_LIMIT (if any are set).
```
This warning suggests unsetting the variable _OMP_THREAD_LIMIT_, but I think randomness/instability in model predictions deserves more than a warning with a "Hint".
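As an illustration of one possible mitigation (an assumption on my side, not verified against MKL's OpenMP internals), TensorFlow's own thread pools could be capped so they never exceed the OpenMP limit:
```python
import os

import tensorflow as tf

# Sketch: keep TensorFlow's thread pools within OMP_THREAD_LIMIT.
# (Assumption: the truncated thread team is what makes results unstable.)
limit = int(os.environ.get("OMP_THREAD_LIMIT", "0"))
if limit > 0:
    # These calls must happen before TensorFlow executes any op.
    tf.config.threading.set_intra_op_parallelism_threads(limit)
    tf.config.threading.set_inter_op_parallelism_threads(1)
```
If the instability really comes from the truncated OpenMP team, pinning both pools at least makes the thread configuration consistent between runs.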
To reproduce the problem:
* Create an environment with *intel-tensorflow* installed
* Copy the code in the following section into a file *test_script.py*
* Run the following commands:
* `python test_script.py`
* `OMP_THREAD_LIMIT=2 python test_script.py` (edited)
### Standalone code to reproduce the issue
```python
import tensorflow as tf
if __name__ == "__main__":
    with tf.device("/cpu:0"):
model = tf.keras.applications.efficientnet.EfficientNetB0()
img = tf.ones((1, 224, 224, 3))
pred = model(img)
print(f"Result: {pred[0, :5]}")
```
### Relevant log output
```shell
# Run without OMP_THREAD_LIMIT
...
Result: [0.00066389 0.00075261 0.00108045 0.00210853 0.00316559]
# Run with OMP_THREAD_LIMIT
...
OMP: Warning #96: Cannot form a team with 6 threads, using 2 instead.
OMP: Hint Consider unsetting KMP_DEVICE_THREAD_LIMIT (KMP_ALL_THREADS), KMP_TEAMS_THREAD_LIMIT, and OMP_THREAD_LIMIT (if any are set).
Result: [0.00030835 0.00062532 0.00057299 0.00088017 0.00140722]
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61328/timeline | null | null | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61327 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61327/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61327/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61327/events | https://github.com/tensorflow/tensorflow/issues/61327 | 1,811,873,824 | I_kwDOArmXAs5r_wAg | 61,327 | tensorflow/core/common_runtime/gpu/gpu_util.cc:293] GPU->CPU Memcpy failed | {
"login": "HAMZA12337",
"id": 66895798,
"node_id": "MDQ6VXNlcjY2ODk1Nzk4",
"avatar_url": "https://avatars.githubusercontent.com/u/66895798?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HAMZA12337",
"html_url": "https://github.com/HAMZA12337",
"followers_url": "https://api.github.com/users/HAMZA12337/followers",
"following_url": "https://api.github.com/users/HAMZA12337/following{/other_user}",
"gists_url": "https://api.github.com/users/HAMZA12337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HAMZA12337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HAMZA12337/subscriptions",
"organizations_url": "https://api.github.com/users/HAMZA12337/orgs",
"repos_url": "https://api.github.com/users/HAMZA12337/repos",
"events_url": "https://api.github.com/users/HAMZA12337/events{/privacy}",
"received_events_url": "https://api.github.com/users/HAMZA12337/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1097547538,
"node_id": "MDU6TGFiZWwxMDk3NTQ3NTM4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:gpu",
"name": "comp:gpu",
"color": "0052cc",
"default": false,
"description": "GPU related issues"
},
{
"id": 1548890241,
"node_id": "MDU6TGFiZWwxNTQ4ODkwMjQx",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%201.15",
"name": "TF 1.15",
"color": "e99695",
"default": false,
"description": "for issues seen on TF 1.15"
}
] | closed | false | {
"login": "tilakrayal",
"id": 81610181,
"node_id": "MDQ6VXNlcjgxNjEwMTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/81610181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tilakrayal",
"html_url": "https://github.com/tilakrayal",
"followers_url": "https://api.github.com/users/tilakrayal/followers",
"following_url": "https://api.github.com/users/tilakrayal/following{/other_user}",
"gists_url": "https://api.github.com/users/tilakrayal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tilakrayal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tilakrayal/subscriptions",
"organizations_url": "https://api.github.com/users/tilakrayal/orgs",
"repos_url": "https://api.github.com/users/tilakrayal/repos",
"events_url": "https://api.github.com/users/tilakrayal/events{/privacy}",
"received_events_url": "https://api.github.com/users/tilakrayal/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "tilakrayal",
"id": 81610181,
"node_id": "MDQ6VXNlcjgxNjEwMTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/81610181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tilakrayal",
"html_url": "https://github.com/tilakrayal",
"followers_url": "https://api.github.com/users/tilakrayal/followers",
"following_url": "https://api.github.com/users/tilakrayal/following{/other_user}",
"gists_url": "https://api.github.com/users/tilakrayal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tilakrayal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tilakrayal/subscriptions",
"organizations_url": "https://api.github.com/users/tilakrayal/orgs",
"repos_url": "https://api.github.com/users/tilakrayal/repos",
"events_url": "https://api.github.com/users/tilakrayal/events{/privacy}",
"received_events_url": "https://api.github.com/users/tilakrayal/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@HAMZA12337,\r\nWe see that you are using tf version 1.15, 1.x is not actively supported, please update to 2.x and let us know if you are facing the same issue.\r\n\r\nAnd also you have provided the code which is suitable for 1.x version. Could you please try to migrate your TensorFlow code from TensorFlow 1.x to TensorFlow 2 with the help of this official document.\r\nhttps://www.tensorflow.org/guide/migrate\r\n\r\nAlso please try to install the tensorflow latest stable version by referring to the official build documentation.\r\nhttps://www.tensorflow.org/install\r\n\r\nThank you!",
"But i don't now why i am using tensorflow 1.15 with gpu == geforce gtx 1070 it's work but with gpu rtx 4070 no",
"@HAMZA12337,\r\n Its unlikely for TF 1.x version to receive any bug fixes. There is a high possibility that this was fixed with later TF versions. Perhaps you can use latest tf versions for your case. \r\nhttps://www.tensorflow.org/install\r\n\r\nCould you please try to migrate your TensorFlow code from TensorFlow 1.x to TensorFlow 2 with the help of this official document. Also **tf.compat.v1.Session** API was designed for TensorFlow v1. See the [TensorFlow v1 to TensorFlow v2 migration guide](https://www.tensorflow.org/guide/migrate) for instructions on how to migrate the rest of your code.\r\n\r\nThank you!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61327\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61327\">No</a>\n"
] | 2023-07-19T12:46:52 | 2023-08-12T01:45:50 | 2023-08-12T01:45:47 | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
1.15
### Custom code
Yes
### OS platform and distribution
Windows
### Mobile device
_No response_
### Python version
3.7
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
cuda version = 10.0
cudnn=7.6.4
_No response_
### GPU model and memory
Geforce RTX 4070 TI 12 GB
_No response_
### Current behavior?
I am using a GeForce RTX 4070 Ti 12 GB GPU.
I added the following to my training file:
config1 = tf.compat.v1.ConfigProto()
config1.gpu_options.allow_growth = True
session = tf.compat.v1.Session(config=config1)
But nothing changes; I still get the error below.
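For reference, a minimal sketch of the equivalent memory-growth setting with the TF 2.x API (assuming a migration to TF 2.x, as suggested in the comments; this alone does not explain why cuBLAS fails on the RTX 4070 under CUDA 10.0):
```python
import tensorflow as tf

# TF 2.x equivalent of ConfigProto(gpu_options.allow_growth=True).
# Must run before the GPU is initialized, i.e. before any op touches it.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
```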
### Standalone code to reproduce the issue
```shell
2023-07-19 13:40:37.744821: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
2023-07-19 13:40:37.744955: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll
2023-07-19 13:40:37.745047: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_100.dll
2023-07-19 13:40:37.745166: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_100.dll
2023-07-19 13:40:37.745290: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_100.dll
2023-07-19 13:40:37.745374: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_100.dll
2023-07-19 13:40:37.745493: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2023-07-19 13:40:37.745596: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2023-07-19 13:40:37.745706: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2023-07-19 13:40:37.745789: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
2023-07-19 13:40:37.745866: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
2023-07-19 13:40:37.746009: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10400 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 4070 Ti, pci bus id: 0000:01:00.0, compute capability: 8.9)
WARNING:tensorflow:From C:\Users\user\AppData\Roaming\Python\Python37\site-packages\keras\backend\tensorflow_backend.py:300: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
WARNING:tensorflow:From C:\Users\user\AppData\Roaming\Python\Python37\site-packages\keras\backend\tensorflow_backend.py:308: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.
2023-07-19 13:40:40.254879: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2023-07-19 13:43:12.728939: W tensorflow/stream_executor/cuda/redzone_allocator.cc:312] Internal: Invoking ptxas not supported on Windows
Relying on driver to perform ptx compilation. This message will be only logged once.
2023-07-19 13:43:12.932855: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll
2023-07-19 13:43:21.066511: E tensorflow/stream_executor/cuda/cuda_blas.cc:428] failed to run cuBLAS routine: CUBLAS_STATUS_EXECUTION_FAILED
Exception: Blas GEMM launch failed : a.shape=(2, 2048), b.shape=(2, 36), m=2048, n=36, k=2
[[node gradients_1/dense_regress_10/MatMul_grad/MatMul_1 (defined at C:\Users\user\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]
Original stack trace for 'gradients_1/dense_regress_10/MatMul_grad/MatMul_1':
File "c:/Users/user/Desktop/Binarios/keras_frcnn-master-atelier-B/keras_frcnn-master/train_frcnn_kitti.py", line 262, in <module>
train_kitti()
File "c:/Users/user/Desktop/Binarios/keras_frcnn-master-atelier-B/keras_frcnn-master/train_frcnn_kitti.py", line 205, in train_kitti
[Y1[:, sel_samples, :], Y2[:, sel_samples, :]])
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\keras\engine\training.py", line 1620, in train_on_batch
self._make_train_function()
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\keras\engine\training.py", line 1002, in _make_train_function
self.total_loss)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\keras\optimizers.py", line 381, in get_updates
grads = self.get_gradients(loss, params)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\keras\optimizers.py", line 47, in get_gradients
grads = K.gradients(loss, params)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\keras\backend\tensorflow_backend.py", line 2138, in gradients
return tf.gradients(loss, variables, colocate_gradients_with_ops=True)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\ops\gradients_impl.py", line 158, in gradients
unconnected_gradients)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\ops\gradients_util.py", line 679, in _GradientsHelper
lambda: grad_fn(op, *out_grads))
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\ops\gradients_util.py", line 350, in _MaybeCompile
return grad_fn() # Exit early
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\ops\gradients_util.py", line 679, in <lambda>
lambda: grad_fn(op, *out_grads))
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\ops\math_grad.py", line 1586, in _MatMulGrad
grad_b = gen_math_ops.mat_mul(a, grad, transpose_a=True)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\ops\gen_math_ops.py", line 6136, in mat_mul
name=name)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\framework\op_def_library.py", line 794, in _apply_op_helper
op_def=op_def)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\framework\ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\framework\ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\framework\ops.py", line 1748, in __init__
self._traceback = tf_stack.extract_stack()
...which was originally created as op 'dense_regress_10/MatMul', defined at:
File "c:/Users/user/Desktop/Binarios/keras_frcnn-master-atelier-B/keras_frcnn-master/train_frcnn_kitti.py", line 262, in <module>
train_kitti()
File "c:/Users/user/Desktop/Binarios/keras_frcnn-master-atelier-B/keras_frcnn-master/train_frcnn_kitti.py", line 88, in train_kitti
classifier = nn.classifier(shared_layers, roi_input, cfg.num_rois, nb_classes=len(classes_count), trainable=True)
File "c:\Users\user\Desktop\Binarios\keras_frcnn-master-atelier-B\keras_frcnn-master\keras_frcnn\resnet.py", line 270, in classifier
name='dense_regress_{}'.format(nb_classes))(out)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\keras\engine\topology.py", line 578, in __call__
output = self.call(inputs, **kwargs)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\keras\layers\wrappers.py", line 177, in call
y = self.layer.call(inputs) # (num_samples * timesteps, ...)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\keras\layers\core.py", line 840, in call
output = K.dot(inputs, self.kernel)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\keras\backend\tensorflow_backend.py", line 848, in dot
out = tf.matmul(x, y)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\util\dispatch.py", line 180, in wrapper
return target(*args, **kwargs)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\ops\math_ops.py", line 2754, in matmul
a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\ops\gen_math_ops.py", line 6136, in mat_mul
name=name)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\framework\op_def_library.py", line 794, in _apply_op_helper
op_def=op_def)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\framework\ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\framework\ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "C:\Users\user\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\framework\ops.py", line 1748, in __init__
self._traceback = tf_stack.extract_stack()
2023-07-19 13:43:21.297408: I tensorflow/stream_executor/stream.cc:1990] [stream=0000029EFE927EC0,impl=0000029EB937CFB0] did not wait for [stream=0000029EFE926CC0,impl=0000029EB937CF20]
2023-07-19 13:43:21.297815: I tensorflow/stream_executor/stream.cc:4925] [stream=0000029EFE927EC0,impl=0000029EB937CFB0] did not memcpy device-to-host; source: 00000007129B6500
2023-07-19 13:43:21.298246: F tensorflow/core/common_runtime/gpu/gpu_util.cc:293] GPU->CPU Memcpy failed
2023-07-19 13:43:21.298255: I tensorflow/stream_executor/stream.cc:1990] [stream=0000029EFE927EC0,impl=0000029EB937CFB0] did not wait for [stream=0000029EFE926CC0,impl=0000029EB937CF20]
```
### Relevant log output
_No response_ | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61327/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61326 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61326/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61326/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61326/events | https://github.com/tensorflow/tensorflow/issues/61326 | 1,811,869,931 | I_kwDOArmXAs5r_vDr | 61,326 | Could not load dynamic library 'libcublasLt.so.12'; dlerror: libcublasLt.so.12: cannot open shared object file: No such file or directory | {
"login": "jagiella",
"id": 4679834,
"node_id": "MDQ6VXNlcjQ2Nzk4MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4679834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jagiella",
"html_url": "https://github.com/jagiella",
"followers_url": "https://api.github.com/users/jagiella/followers",
"following_url": "https://api.github.com/users/jagiella/following{/other_user}",
"gists_url": "https://api.github.com/users/jagiella/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jagiella/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jagiella/subscriptions",
"organizations_url": "https://api.github.com/users/jagiella/orgs",
"repos_url": "https://api.github.com/users/jagiella/repos",
"events_url": "https://api.github.com/users/jagiella/events{/privacy}",
"received_events_url": "https://api.github.com/users/jagiella/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
}
] | closed | false | {
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Okay I fixed it! I needed to install another flavor of the libcudnn matching my cuda version:\r\n\r\n```\r\nsudo apt-get install libcudnn8=8.9.3.28-1+cuda11.8\r\n```",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61326\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61326\">No</a>\n"
] | 2023-07-19T12:44:33 | 2023-07-19T13:02:39 | 2023-07-19T13:02:37 | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
v2.13.0-rc2-7-g1cb1a030a62 2.13.0
### Custom code
No
### OS platform and distribution
Linux Ubuntu 23.10
### Mobile device
_No response_
### Python version
3.11
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
11.8.0-1/8.9.3.28-1+cuda12.1
### GPU model and memory
NVIDIA GeForce GTX 960M, 4096MiB
### Current behavior?
Running the MobileNet model from Keras (as bundled with TensorFlow) leads to the following error:
```
Could not load library libcublasLt.so.12. Error: libcublasLt.so.12: cannot open shared object file: No such file or directory
Abgebrochen (Speicherabzug geschrieben)
```
### Standalone code to reproduce the issue
```shell
python -c 'from tensorflow.keras.applications.mobilenet import MobileNet; import numpy as np; m = MobileNet(); m.predict(np.zeros((32,224,224,3)))'
```
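One way to check which CUDA and cuDNN versions the installed wheel was built against, to compare with the libraries actually present on the system (a diagnostic sketch; the keys shown are those reported by GPU builds and may be absent on CPU-only wheels):
```python
import tensorflow as tf

# Print the CUDA/cuDNN versions this wheel was compiled against, to compare
# with the libraries installed on the system (e.g. libcublasLt).
info = tf.sysconfig.get_build_info()
print(info.get("cuda_version"), info.get("cudnn_version"))
```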
### Relevant log output
```shell
Could not load library libcublasLt.so.12. Error: libcublasLt.so.12: cannot open shared object file: No such file or directory
Abgebrochen (Speicherabzug geschrieben)
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61326/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61325 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61325/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61325/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61325/events | https://github.com/tensorflow/tensorflow/issues/61325 | 1,811,597,505 | I_kwDOArmXAs5r-sjB | 61,325 | How can I profile "Inference" by Profiler, and view performance profile by tensorboard | {
"login": "Pan17WJ",
"id": 8042317,
"node_id": "MDQ6VXNlcjgwNDIzMTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8042317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pan17WJ",
"html_url": "https://github.com/Pan17WJ",
"followers_url": "https://api.github.com/users/Pan17WJ/followers",
"following_url": "https://api.github.com/users/Pan17WJ/following{/other_user}",
"gists_url": "https://api.github.com/users/Pan17WJ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pan17WJ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pan17WJ/subscriptions",
"organizations_url": "https://api.github.com/users/Pan17WJ/orgs",
"repos_url": "https://api.github.com/users/Pan17WJ/repos",
"events_url": "https://api.github.com/users/Pan17WJ/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pan17WJ/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 284285184,
"node_id": "MDU6TGFiZWwyODQyODUxODQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:tensorboard",
"name": "comp:tensorboard",
"color": "0052cc",
"default": false,
"description": "Tensorboard related issues"
},
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473184161,
"node_id": "MDU6TGFiZWw0NzMxODQxNjE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:support",
"name": "type:support",
"color": "159b2e",
"default": false,
"description": "Support issues"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 4032183365,
"node_id": "LA_kwDOArmXAs7wVjxF",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.9",
"name": "TF 2.9",
"color": "1CF842",
"default": false,
"description": "Issues found in the TF 2.9 release (or RCs)"
}
] | closed | false | {
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@Pan17WJ Could you please have a look at this [doc](https://www.tensorflow.org/tfx/serving/tensorboard) on Tensor-board profiling and try with the latest TF version 2.13 ?Thank you!",
"It doesn't seem to work. It generates the following file. When I opens the browser http://localhost:6006 by tensorboard, the display is\" No dashboards are active for the current data set. \"\r\n\r\n├── events.out.tfevents.1690428266.wj.profile-empty\r\n└── plugins\r\n └── profile\r\n └── 2023_07_27_11_24_26\r\n ├── wj.input_pipeline.pb\r\n ├── wj.kernel_stats.pb\r\n ├── wj.memory_profile.json.gz\r\n ├── wj.overview_page.pb\r\n ├── wj.tensorflow_stats.pb\r\n ├── wj.trace.json.gz\r\n └── wj.xplane.pb",
"@Pan17WJ Is it failing for the latest TF v2.13 as well? If so please provide the complete standalone code to replicate this issue? Thank you!\r\n",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61325\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61325\">No</a>\n"
] | 2023-07-19T09:54:26 | 2023-08-12T01:45:53 | 2023-08-12T01:45:49 | NONE | null | null | null | ### Issue type
Support
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf 2.9.3
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 20.04
### Mobile device
_No response_
### Python version
3.7
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I want to profile "Inference" with the Profiler.
Profiling training works fine, but when I try to profile inference, the generated profiling log is empty and there are no active dashboards for the current data set.
Where can I find a tutorial about analyzing the performance of inference?
### Standalone code to reproduce the issue
```python
saved_model_loaded = tf.saved_model.load(FLAGS.weights, tags=[tag_constants.SERVING])
infer = saved_model_loaded.signatures['serving_default']
batch_data = tf.constant(images_data)
options = tf.profiler.experimental.ProfilerOptions(host_tracer_level = 3, python_tracer_level = 1, device_tracer_level = 1)
tf.profiler.experimental.start('logdir', options)
pred_bbox = infer(batch_data)
tf.profiler.experimental.stop()
```
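A variant sketch that records named inference steps for the trace viewer, reusing `infer`, `batch_data`, and `options` from the snippet above (the warm-up call and the loop count are assumptions, added only so one-time graph tracing does not dominate the profile):
```python
# Warm-up call (assumption: the first call includes one-time tracing cost).
_ = infer(batch_data)

tf.profiler.experimental.start('logdir', options)
for step in range(5):
    # Record each inference call as a named step in the trace viewer.
    with tf.profiler.experimental.Trace('inference', step_num=step, _r=1):
        pred_bbox = infer(batch_data)
tf.profiler.experimental.stop()
```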
### Relevant log output
_No response_ | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61325/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61324 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61324/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61324/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61324/events | https://github.com/tensorflow/tensorflow/issues/61324 | 1,811,567,720 | I_kwDOArmXAs5r-lRo | 61,324 | tensorflow/core/grappler/optimizers/meta_optimizer.cc:954] layout failed: INVALID_ARGUMENT: Size of values 0 does not match size of permutation 4 @ fanin shape inmodel_6/dropout_12/dropout/SelectV2-2-TransposeNHWCToNCHW-LayoutOptimizer | {
"login": "Armanasq",
"id": 60850934,
"node_id": "MDQ6VXNlcjYwODUwOTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/60850934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Armanasq",
"html_url": "https://github.com/Armanasq",
"followers_url": "https://api.github.com/users/Armanasq/followers",
"following_url": "https://api.github.com/users/Armanasq/following{/other_user}",
"gists_url": "https://api.github.com/users/Armanasq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Armanasq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Armanasq/subscriptions",
"organizations_url": "https://api.github.com/users/Armanasq/orgs",
"repos_url": "https://api.github.com/users/Armanasq/repos",
"events_url": "https://api.github.com/users/Armanasq/events{/privacy}",
"received_events_url": "https://api.github.com/users/Armanasq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1478826728,
"node_id": "MDU6TGFiZWwxNDc4ODI2NzI4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:core",
"name": "comp:core",
"color": "024391",
"default": false,
"description": "issues related to core part of tensorflow"
},
{
"id": 5206407904,
"node_id": "LA_kwDOArmXAs8AAAABNlN64A",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.12",
"name": "TF 2.12",
"color": "c5def5",
"default": false,
"description": "For issues related to Tensorflow 2.12"
}
] | closed | false | {
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @Armanasq,\r\n\r\nIn order to expedite the trouble-shooting process, please provide the complete code snippet to reproduce the issue reported here. I've tried to reproduce the issue [here](https://colab.research.google.com/gist/Varsha-anjanappa/55f59c3611c314f93550473980130af8/61324.ipynb), could you please take a look and let me know if i'm missing something . \r\n\r\nPlease find a similar issue [here](https://github.com/tensorflow/tensorflow/issues/34499) and let us know if it helps.\r\n\r\nThank you!!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"Same issue",
"Hi @YanSte ,\r\n\r\nPlease share colab link or simple standalone code with supporting files to reproduce the issue in our environment.It helps us in localizing the issue faster.\r\n\r\nThanks!",
"Here are the projects:\r\n[Rice Classification](https://www.kaggle.com/code/yannicksteph/cnn-cv-rice-classification )\r\n[MNIST Classification](https://www.kaggle.com/code/yannicksteph/cnn-cv-mnist-classification)\r\n\r\nAlways at the first epoch, I have:\r\n`2023-08-09 13:10:29.937931: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:954] layout failed: INVALID_ARGUMENT: Size of values 0 does not match size of permutation 4 @ fanin shape insequential_2/dropout/dropout/SelectV2-2-TransposeNHWCToNCHW-LayoutOptimizer`\r\n\r\nIt seems that adding a Dropout after a MaxPooling2D, Conv2D creates this problem.\r\n\r\n`TF version 1.7.0-rc1`\r\n\r\nDo you need more information?",
"I got the same issue. \r\n[One notebook](https://www.kaggle.com/code/tsrdjan/second-try#Train) shows the issue, while [the other](https://www.kaggle.com/code/tsrdjan/kaggle-xy-h5-drugi-pokusaj#6.-Build-Deep-Learning-Model) does not.\r\n\r\nBoth of them are transfer learning. What is interesting to me is that I got this issue only when I try to use EfficientNets. At first, it was working fine (like it did in the second notebook, `kaggle-xy.h5-drugi-pokusaj`), but my newer attempts always have this issue. I tried to replicate architecture from 2nd notebook but it still doesn't remove the issue. So, as far as I can tell, this maybe has something to do with the input and processing part.\r\n\r\nThe 2nd notebook does have `Dropout` after pre-trained part, `GlobalAveragePooling2D` and `BatchNormalization` (in this order) and it was not an issue in my case.",
"> I got the same issue. [One notebook](https://www.kaggle.com/code/tsrdjan/second-try#Train) shows the issue, while [the other](https://www.kaggle.com/code/tsrdjan/kaggle-xy-h5-drugi-pokusaj#6.-Build-Deep-Learning-Model) does not.\r\n> \r\n> Both of them are transfer learning. What is interesting to me is that I got this issue only when I try to use EfficientNets. At first, it was working fine (like it did in the second notebook, `kaggle-xy.h5-drugi-pokusaj`), but my newer attempts always have this issue. I tried to replicate architecture from 2nd notebook but it still doesn't remove the issue. So, as far as I can tell, this maybe has something to do with the input and processing part.\r\n> \r\n> The 2nd notebook does have `Dropout` after pre-trained part, `GlobalAveragePooling2D` and `BatchNormalization` (in this order) and it was not an issue in my case.\r\n\r\nUPDATE: I managed to fix the issue by recreating the architecture and adding `training=False`.\r\n\r\n**First version**:\r\n```Python\r\nimport tensorflow as tf\r\nfrom tensorflow.keras.applications.efficientnet_v2 import EfficientNetV2M\r\nfrom tensorflow.keras.models import Model\r\nfrom tensorflow.keras.layers import Dense, GlobalAveragePooling2D, BatchNormalization, Dropout\r\nfrom tensorflow.keras.optimizers import Adam\r\nfrom tensorflow.keras import regularizers\r\n\r\nbase_model = EfficientNetV2M(weights='imagenet', include_top=False, input_shape=(380, 380, 3), include_preprocessing=True)\r\n\r\nbase_model.trainable = False\r\n\r\ntop_dropout_rate = 0.2\r\nx = base_model.output\r\nx = GlobalAveragePooling2D()(x)\r\nx = Dense(256, activation='relu', kernel_regularizer=regularizers.l2(0.05))(x)\r\nx = BatchNormalization()(x)\r\nx = Dropout(top_dropout_rate, name=\"top_dropout\")(x)\r\nx = Dense(1, activation='sigmoid', name=\"prediction\")(x)\r\n\r\nmodel = tf.keras.Model(inputs=base_model.input, outputs=x)\r\n```\r\n\r\n**Second version**:\r\n```Python\r\nfrom tensorflow.keras.applications.efficientnet_v2 import EfficientNetV2M, preprocess_input\r\nfrom tensorflow.keras.models import Model\r\nfrom tensorflow.keras.layers import Dense, GlobalAveragePooling2D, BatchNormalization, Dropout, Input\r\nfrom tensorflow.keras.optimizers import Adam\r\nfrom tensorflow.keras import regularizers\r\n\r\nbase_model = EfficientNetV2M(weights='imagenet', include_top=False, input_shape=input_shape, include_preprocessing=True)\r\nbase_model.trainable = False\r\n\r\ntop_dropout_rate = 0.2\r\ninput_tensor = Input(shape=input_shape)\r\npreprocessed_input = preprocess_input(input_tensor)\r\n\r\nx = base_model(preprocessed_input, training=False)\r\nx = GlobalAveragePooling2D()(x)\r\nx = Dense(256, activation='relu', kernel_regularizer=regularizers.l2(0.05))(x)\r\nx = BatchNormalization()(x)\r\nx = Dropout(top_dropout_rate, name=\"top_dropout\")(x)\r\nx = Dense(1, activation='sigmoid', name=\"prediction\")(x)\r\n\r\nmodel = Model(inputs=input_tensor, outputs=x)\r\n```\r\n\r\nThe final version similar to 2nd, but with `x = base_model(preprocessed_input)` changed to `x = base_model(preprocessed_input, training=False)`. Adding this fixed the issue in second version.\r\n\r\nThe information for these changes was found in [keras's documentation for transfer learning](https://keras.io/guides/transfer_learning/).",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"Confirmed the error even after the modification by @TodorovicSrdjan (similar to version 2, using MobileNet V2 for transfer learning). Here the code:\r\n\r\n```python\r\n # Create the base model from the pre-trained MobileNet V2\r\n base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,\r\n include_top=False,\r\n weights='imagenet')\r\n base_model.trainable = False\r\n data_augmentation = tf.keras.Sequential(\r\n [\r\n tf.keras.layers.RandomFlip(\"horizontal\"),\r\n tf.keras.layers.RandomRotation(0.2),\r\n ]\r\n )\r\n\r\n inputs = tf.keras.Input(shape=IMG_SHAPE)\r\n\r\n x = data_augmentation(inputs) # Apply random data augmentation\r\n x = tf.keras.applications.mobilenet_v2.preprocess_input(x)\r\n x = base_model(x, training=False)\r\n x = tf.keras.layers.Conv2D(filters=32, kernel_size=3, activation='relu')(x)\r\n x = tf.keras.layers.Dropout(0.2)(x)\r\n x = tf.keras.layers.GlobalAveragePooling2D()(x)\r\n x = tf.keras.layers.Dense(units=N_CLASSES, activation='softmax')(x)\r\n model = tf.keras.models.Model(inputs=inputs, outputs=x)\r\n``` \r\n\r\nAnd the output:\r\n\r\n```\r\nModel: \"model\"\r\n_________________________________________________________________\r\n Layer (type) Output Shape Param #\r\n=================================================================\r\n input_2 (InputLayer) [(None, 160, 160, 3)] 0\r\n\r\n sequential (Sequential) (None, 160, 160, 3) 0\r\n\r\n mobilenetv2_1.00_160 (Func (None, 5, 5, 1280) 2257984\r\n tional)\r\n\r\n conv2d (Conv2D) (None, 3, 3, 32) 368672\r\n\r\n dropout (Dropout) (None, 3, 3, 32) 0\r\n\r\n global_average_pooling2d ( (None, 32) 0\r\n GlobalAveragePooling2D)\r\n\r\n dense (Dense) (None, 7) 231\r\n\r\n=================================================================\r\nTotal params: 2626887 (10.02 MB)\r\nTrainable params: 368903 (1.41 MB)\r\nNon-trainable params: 2257984 (8.61 MB)\r\n_________________________________________________________________\r\nNumber of trainable weights = 4\r\nEpoch 1/3\r\n2023-08-21 12:39:38.540515: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:954] layout failed: INVALID_ARGUMENT: Size of values 0 does not match size of permutation 4 @ fanin shape inmodel/dropout/dropout/SelectV2-2-TransposeNHWCToNCHW-LayoutOptimizer\r\n```\r\n\r\nTF 2.13.0 on cuDNN 8.6.0 and CUDA 12.2 on Linux x86_64.\r\n\r\n**Update: same code does not generate any error on MacBookPro M1 Pro with TF 2.13.0 and Metal backend**",
"@Varsha-anjanappa This bug appears in many open issues, including [61687](https://github.com/tensorflow/tensorflow/issues/61687), has there been any progress on fixing the issue?",
"@meekus-fischer I think it is related to adding dropout after the 2D CNN. after removing them the error has been fixed. However I am not sure. I decided to use PyTorch.",
"Same issue here on TF 2.14 with a custom model, with layers like below as others have mentioned previously:\r\n\r\n```\r\ntf.keras.layers.Conv2D(64, 3, activation='relu'),\r\ntf.keras.layers.MaxPooling2D(),\r\ntf.keras.layers.Dropout(0.4),\r\n```\r\n\r\n",
"I have similar/same error: \"2023-11-03 17:20:43.612803: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:961] layout failed: INVALID_ARGUMENT: Size of values 0 does not match size of permutation 4 @ fanin shape insequential_10/dropout_31/dropout/SelectV2-2-TransposeNHWCToNCHW-LayoutOptimizer\"\r\n\r\nI can confirm it goes away when I comment out the dropout after the MaxPooling2D layers.\r\n\r\n```\r\n\t\tmodel.add(Conv2D(64, (3, 3), padding=\"same\"))\r\n\t\tmodel.add(Activation(\"relu\"))\r\n\t\tmodel.add(BatchNormalization(axis=chanDim))\r\n\t\tmodel.add(Conv2D(64, (3, 3), padding=\"same\"))\r\n\t\tmodel.add(Activation(\"relu\"))\r\n\t\tmodel.add(BatchNormalization(axis=chanDim))\r\n\t\tmodel.add(MaxPooling2D(pool_size=(2, 2)))\r\n\t\tmodel.add(Dropout(0.25)) <--- commenting out this line and the error goes away\r\n```",
"See edumotya's response in #34499, as he suggests disabling the optimizer. This appears to have worked around the issue for me, although there's still something to be understood or resolved in the optimizer and/or Dropout's channel-first vs channel-last approach when the optimizer is enabled.",
"> Confirmed the error even after the modification by @TodorovicSrdjan (similar to version 2, using MobileNet V2 for transfer learning). Here the code:\r\n> \r\n> ```python\r\n> # Create the base model from the pre-trained MobileNet V2\r\n> base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,\r\n> include_top=False,\r\n> weights='imagenet')\r\n> base_model.trainable = False\r\n> data_augmentation = tf.keras.Sequential(\r\n> [\r\n> tf.keras.layers.RandomFlip(\"horizontal\"),\r\n> tf.keras.layers.RandomRotation(0.2),\r\n> ]\r\n> )\r\n> \r\n> inputs = tf.keras.Input(shape=IMG_SHAPE)\r\n> \r\n> x = data_augmentation(inputs) # Apply random data augmentation\r\n> x = tf.keras.applications.mobilenet_v2.preprocess_input(x)\r\n> x = base_model(x, training=False)\r\n> x = tf.keras.layers.Conv2D(filters=32, kernel_size=3, activation='relu')(x)\r\n> x = tf.keras.layers.Dropout(0.2)(x)\r\n> x = tf.keras.layers.GlobalAveragePooling2D()(x)\r\n> x = tf.keras.layers.Dense(units=N_CLASSES, activation='softmax')(x)\r\n> model = tf.keras.models.Model(inputs=inputs, outputs=x)\r\n> ```\r\n> \r\n> And the output:\r\n> \r\n> ```\r\n> Model: \"model\"\r\n> _________________________________________________________________\r\n> Layer (type) Output Shape Param #\r\n> =================================================================\r\n> input_2 (InputLayer) [(None, 160, 160, 3)] 0\r\n> \r\n> sequential (Sequential) (None, 160, 160, 3) 0\r\n> \r\n> mobilenetv2_1.00_160 (Func (None, 5, 5, 1280) 2257984\r\n> tional)\r\n> \r\n> conv2d (Conv2D) (None, 3, 3, 32) 368672\r\n> \r\n> dropout (Dropout) (None, 3, 3, 32) 0\r\n> \r\n> global_average_pooling2d ( (None, 32) 0\r\n> GlobalAveragePooling2D)\r\n> \r\n> dense (Dense) (None, 7) 231\r\n> \r\n> =================================================================\r\n> Total params: 2626887 (10.02 MB)\r\n> Trainable params: 368903 (1.41 MB)\r\n> Non-trainable params: 2257984 (8.61 MB)\r\n> _________________________________________________________________\r\n> Number of trainable weights = 4\r\n> Epoch 1/3\r\n> 2023-08-21 12:39:38.540515: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:954] layout failed: INVALID_ARGUMENT: Size of values 0 does not match size of permutation 4 @ fanin shape inmodel/dropout/dropout/SelectV2-2-TransposeNHWCToNCHW-LayoutOptimizer\r\n> ```\r\n> \r\n> TF 2.13.0 on cuDNN 8.6.0 and CUDA 12.2 on Linux x86_64.\r\n> \r\n> **Update: same code does not generate any error on MacBookPro M1 Pro with TF 2.13.0 and Metal backend**\r\n\r\nHi @zzambia , I have tested your code and it seems working fine on colab though. Please check the [gist](https://colab.research.google.com/gist/SuryanarayanaY/7d76bd7471d3f17da6dedc5785d9c533/61324.ipynb) and confirm .\r\n\r\nHi @Armanasq , Could you please confirm whether this is still an issue. If so please submit a minimal code snippet to reproduce the issue. \r\n\r\nThanks!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61324\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61324\">No</a>\n",
"using `tf.nn.dropout(x, 0.5)` throws `E tensorflow/core/grappler/optimizers/meta_optimizer.cc:961] layout failed: INVALID_ARGUMENT: Size of values 0 does not match size of permutation 4 @ fanin shape inencoder_decoder_1/decoder_1/decoder_block_1/dropout/SelectV2-2-TransposeNHWCToNCHW-LayoutOptimizer`\r\n\r\nHowever, replacing it with `tf.keras.layers.Dropout(0.5)` resolved the issue for me. \r\n\r\nTF version: 2.17.0"
] | 2023-07-19T09:36:07 | 2024-02-23T15:18:18 | 2024-02-13T01:47:20 | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
2.12.0
### Custom code
Yes
### OS platform and distribution
Kaggle
### Mobile device
_No response_
### Python version
3.10.12
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I am trying to train the model below
```
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import *
# Set the input shape of the images (adjust based on the input image size)
input_shape = (128, 128, 3) # Adjust based on the input image size
# Set the number of segmentation classes
n_classes = 1 # Number of segmentation classes
# Define the model architecture
inputs = Input(shape=input_shape) # Define the input layer with the specified input shape
# Encoder
conv1 = Conv2D(64, (3, 3), activation='relu', padding='same')(inputs) # First convolutional layer with 64 filters
conv1 = BatchNormalization()(conv1) # Apply batch normalization to normalize the activations
conv1 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv1) # Second convolutional layer with 64 filters
conv1 = BatchNormalization()(conv1) # Apply batch normalization to normalize the activations
pool1 = MaxPooling2D((2, 2))(conv1) # Max pooling layer with a pool size of (2, 2)
conv2 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool1) # Convolutional layer with 128 filters
conv2 = BatchNormalization()(conv2) # Apply batch normalization to normalize the activations
conv2 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv2) # Convolutional layer with 128 filters
conv2 = BatchNormalization()(conv2) # Apply batch normalization to normalize the activations
pool2 = MaxPooling2D((2, 2))(conv2) # Max pooling layer with a pool size of (2, 2)
conv3 = Conv2D(256, (3, 3), activation='relu', padding='same')(pool2) # Convolutional layer with 256 filters
conv3 = BatchNormalization()(conv3) # Apply batch normalization to normalize the activations
conv3 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv3) # Convolutional layer with 256 filters
conv3 = BatchNormalization()(conv3) # Apply batch normalization to normalize the activations
pool3 = MaxPooling2D((2, 2))(conv3) # Max pooling layer with a pool size of (2, 2)
conv4 = Conv2D(512, (3, 3), activation='relu', padding='same')(pool3) # Convolutional layer with 512 filters
conv4 = BatchNormalization()(conv4) # Apply batch normalization to normalize the activations
conv4 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv4) # Convolutional layer with 512 filters
conv4 = BatchNormalization()(conv4) # Apply batch normalization to normalize the activations
drop4 = Dropout(0.5)(conv4) # Apply dropout regularization with a rate of 0.5
pool4 = MaxPooling2D((2, 2))(drop4) # Max pooling layer with a pool size of (2, 2)
conv5 = Conv2D(1024, (3, 3), activation='relu', padding='same')(pool4) # Convolutional layer with 1024 filters
conv5 = BatchNormalization()(conv5) # Apply batch normalization to normalize the activations
conv5 = Conv2D(1024, (3, 3), activation='relu', padding='same')(conv5) # Convolutional layer with 1024 filters
conv5 = BatchNormalization()(conv5) # Apply batch normalization to normalize the activations
drop5 = Dropout(0.5)(conv5) # Apply dropout regularization with a rate of 0.5
# Decoder
up6 = concatenate([UpSampling2D((2, 2))(drop5), conv4], axis=-1) # Upsampling layer with a scale factor of (2, 2)
conv6 = Conv2D(512, (3, 3), activation='relu', padding='same')(up6) # Convolutional layer with 512 filters
conv6 = BatchNormalization()(conv6) # Apply batch normalization to normalize the activations
conv6 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv6) # Convolutional layer with 512 filters
conv6 = BatchNormalization()(conv6) # Apply batch normalization to normalize the activations
up7 = concatenate([UpSampling2D((2, 2))(conv6), conv3], axis=-1) # Upsampling layer with a scale factor of (2, 2)
conv7 = Conv2D(256, (3, 3), activation='relu', padding='same')(up7) # Convolutional layer with 256 filters
conv7 = BatchNormalization()(conv7) # Apply batch normalization to normalize the activations
conv7 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv7) # Convolutional layer with 256 filters
conv7 = BatchNormalization()(conv7) # Apply batch normalization to normalize the activations
up8 = concatenate([UpSampling2D((2, 2))(conv7), conv2], axis=-1) # Upsampling layer with a scale factor of (2, 2)
conv8 = Conv2D(128, (3, 3), activation='relu', padding='same')(up8) # Convolutional layer with 128 filters
conv8 = BatchNormalization()(conv8) # Apply batch normalization to normalize the activations
conv8 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv8) # Convolutional layer with 128 filters
conv8 = BatchNormalization()(conv8) # Apply batch normalization to normalize the activations
up9 = concatenate([UpSampling2D((2, 2))(conv8), conv1], axis=-1) # Upsampling layer with a scale factor of (2, 2)
conv9 = Conv2D(64, (3, 3), activation='relu', padding='same')(up9) # Convolutional layer with 64 filters
conv9 = BatchNormalization()(conv9) # Apply batch normalization to normalize the activations
conv9 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv9) # Convolutional layer with 64 filters
conv9 = BatchNormalization()(conv9) # Apply batch normalization to normalize the activations
outputs = Conv2D(n_classes, (1, 1), activation='sigmoid')(conv9) # Output layer; with n_classes=1 and binary_crossentropy, sigmoid (not softmax) is the appropriate activation
# Create the model
model = Model(inputs=inputs, outputs=outputs)
# Print the model summary
model.summary()
# Set the optimizer for the model
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)  # 'lr' is deprecated; use 'learning_rate'
# Compile the model with loss function and metrics
model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])
images_path = '/kaggle/input/coco-2014-dataset-for-yolov3/coco2014/images/val2014'
masks_path = '/kaggle/working/mask_val_2014'
batch_size = 8
val_generator = CustomDataGenerator(images_path, masks_path, batch_size)
```
Also, to ensure that all the inputs and outputs have the proper shapes, I run the code below:
```
def print_preprocessed_image_shapes(model, generator):
    """
    Print the shapes of the preprocessed images and masks produced by the generator.
    Args:
        model (tf.keras.Model): The trained model.
        generator (CustomDataGenerator): Instance of the CustomDataGenerator class.
    """
    for i in range(len(generator)):
        # Get a batch of preprocessed images and masks from the generator
        batch_images, batch_masks = generator[i]
        # Print the shapes of the preprocessed images
        for image in batch_images:
            print(f"Shape of preprocessed image: {image.shape}")
        # Print the shapes of the preprocessed masks
        for mask in batch_masks:
            print(f"Shape of preprocessed mask: {mask.shape}")
# Print the shapes of the preprocessed images and masks
print_preprocessed_image_shapes(model, val_generator)
```
As a result of this error, the model cannot be trained.
### Standalone code to reproduce the issue
```shell
# Fit the model with the training generator
train_steps = len(os.listdir( "/kaggle/working/mask_train_2014/"))/batch_size
model.fit(train_generator,validation_data = val_generator, steps_per_epoch = train_steps , epochs=20)
```
### Relevant log output
```shell
tensorflow/core/grappler/optimizers/meta_optimizer.cc:954] layout failed: INVALID_ARGUMENT: Size of values 0 does not match size of permutation 4 @ fanin shape inmodel_5/dropout_10/dropout/SelectV2-2-TransposeNHWCToNCHW-LayoutOptimizer
```
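A possible workaround, not a confirmed fix for the underlying bug: the failure appears to come from grappler's layout optimizer (the `TransposeNHWCToNCHW-LayoutOptimizer` node in the log), and that single pass can be switched off before calling `model.fit`. This is a minimal sketch; `model`, `train_generator`, `val_generator` and `batch_size` are the objects defined above.
```python
import os
import tensorflow as tf

# Disable grappler's layout optimizer so no NHWC -> NCHW transposes are inserted.
# Intended to avoid the "Size of values 0 does not match size of permutation 4"
# error reported above, possibly at the cost of some GPU performance.
tf.config.optimizer.set_experimental_options({"layout_optimizer": False})

train_steps = len(os.listdir("/kaggle/working/mask_train_2014/")) // batch_size
model.fit(train_generator, validation_data=val_generator,
          steps_per_epoch=train_steps, epochs=20)
```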
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61324/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61323 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61323/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61323/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61323/events | https://github.com/tensorflow/tensorflow/issues/61323 | 1,811,559,409 | I_kwDOArmXAs5r-jPx | 61,323 | TensorFlow Lite Converter wraps unpack operator with dequantize/quantize | {
"login": "willisacs-arm",
"id": 112401409,
"node_id": "U_kgDOBrMcAQ",
"avatar_url": "https://avatars.githubusercontent.com/u/112401409?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/willisacs-arm",
"html_url": "https://github.com/willisacs-arm",
"followers_url": "https://api.github.com/users/willisacs-arm/followers",
"following_url": "https://api.github.com/users/willisacs-arm/following{/other_user}",
"gists_url": "https://api.github.com/users/willisacs-arm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/willisacs-arm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/willisacs-arm/subscriptions",
"organizations_url": "https://api.github.com/users/willisacs-arm/orgs",
"repos_url": "https://api.github.com/users/willisacs-arm/repos",
"events_url": "https://api.github.com/users/willisacs-arm/events{/privacy}",
"received_events_url": "https://api.github.com/users/willisacs-arm/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 1661751498,
"node_id": "MDU6TGFiZWwxNjYxNzUxNDk4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TFLiteConverter",
"name": "TFLiteConverter",
"color": "bfdadc",
"default": false,
"description": "For issues related to TFLite converter"
},
{
"id": 2671351731,
"node_id": "MDU6TGFiZWwyNjcxMzUxNzMx",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/ModelOptimizationToolkit",
"name": "ModelOptimizationToolkit",
"color": "BFD629",
"default": false,
"description": "TF Model Optimization Toolkit"
},
{
"id": 5206407904,
"node_id": "LA_kwDOArmXAs8AAAABNlN64A",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.12",
"name": "TF 2.12",
"color": "c5def5",
"default": false,
"description": "For issues related to Tensorflow 2.12"
}
] | open | false | {
"login": "abattery",
"id": 3203059,
"node_id": "MDQ6VXNlcjMyMDMwNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3203059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abattery",
"html_url": "https://github.com/abattery",
"followers_url": "https://api.github.com/users/abattery/followers",
"following_url": "https://api.github.com/users/abattery/following{/other_user}",
"gists_url": "https://api.github.com/users/abattery/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abattery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abattery/subscriptions",
"organizations_url": "https://api.github.com/users/abattery/orgs",
"repos_url": "https://api.github.com/users/abattery/repos",
"events_url": "https://api.github.com/users/abattery/events{/privacy}",
"received_events_url": "https://api.github.com/users/abattery/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "abattery",
"id": 3203059,
"node_id": "MDQ6VXNlcjMyMDMwNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3203059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abattery",
"html_url": "https://github.com/abattery",
"followers_url": "https://api.github.com/users/abattery/followers",
"following_url": "https://api.github.com/users/abattery/following{/other_user}",
"gists_url": "https://api.github.com/users/abattery/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abattery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abattery/subscriptions",
"organizations_url": "https://api.github.com/users/abattery/orgs",
"repos_url": "https://api.github.com/users/abattery/repos",
"events_url": "https://api.github.com/users/abattery/events{/privacy}",
"received_events_url": "https://api.github.com/users/abattery/received_events",
"type": "User",
"site_admin": false
},
{
"login": "pkgoogle",
"id": 132095473,
"node_id": "U_kgDOB9-d8Q",
"avatar_url": "https://avatars.githubusercontent.com/u/132095473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pkgoogle",
"html_url": "https://github.com/pkgoogle",
"followers_url": "https://api.github.com/users/pkgoogle/followers",
"following_url": "https://api.github.com/users/pkgoogle/following{/other_user}",
"gists_url": "https://api.github.com/users/pkgoogle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pkgoogle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pkgoogle/subscriptions",
"organizations_url": "https://api.github.com/users/pkgoogle/orgs",
"repos_url": "https://api.github.com/users/pkgoogle/repos",
"events_url": "https://api.github.com/users/pkgoogle/events{/privacy}",
"received_events_url": "https://api.github.com/users/pkgoogle/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@willisacs-arm Did you try with the latest TF version 2.13 and let us know the outcome?\r\nHave a look at [this](https://www.tensorflow.org/lite/performance/post_training_quantization) reference for more information on quantization. Thank you!",
"Hi yes the behaviour is the same for all versions above 2.12",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61323\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61323\">No</a>\n",
"accidentally closed issue",
"Hi @willisacs-arm \r\n\r\nI did observe that it does not add Quantize/Dequantize stubs for intermittent runs.\r\n\r\n@pkgoogle Could you please look into this issue?\r\n\r\nPlease find the reproducible [gist](https://colab.research.google.com/gist/pjpratik/bcd8711087797fe3874bf56073cb27d5/61323.ipynb) in TF 2.13.\r\n\r\nThanks.",
"Hi @willisacs-arm, so currently the quantization is unable to quantize the unpack operator, so it actually processes that op in the original bit size. Would you like a feature request to make it quantizable? Are you working with a system that requires 8-bit operations Only?",
"Hi, just to reiterate, the quantization sometimes works already if one just tries it enough times, so this doesn't seem to require a new feature, unless the fact that the quantization sometimes works is the bug here. It also works in patches before 2.12.0. \r\n\r\nYes I am working with a system that requires int 16 or int 8 activations. ",
"Also if I understand it correctly the documentation says that the converter should throw an error if an operation cannot be quantized?\r\n\r\nhttps://www.tensorflow.org/lite/performance/post_training_quantization#integer_only",
"Hi @willisacs-arm, yeah that documentation is incorrect, we have not finalized on intended behavior so we have not updated it to prevent further confusion. That being said, \"the quantization sometimes works already if one just tries it enough times\" doesn't sound good, is there any way you can provide us information on when it starts working? like... What is your model where this happens (especially which ops are included), and after how many times does it start working? If you can show us the converted state after each time, that'll also be great. Thanks!",
"> Hi @willisacs-arm\r\n> \r\n> I did observe that it does not add Quantize/Dequantize stubs for intermittent runs.\r\n> \r\n> @pkgoogle Could you please look into this issue?\r\n> \r\n> Please find the reproducible [gist](https://colab.research.google.com/gist/pjpratik/bcd8711087797fe3874bf56073cb27d5/61323.ipynb) in TF 2.13.\r\n> \r\n> Thanks.\r\n\r\nAh alright, thanks. \r\n\r\nThe model is the one that was provided in the first comment in this issue, it has an unpack/unstack operator. It has also been provided neatly by pjpratik in the gist quoted in this reply. To reproduce the issue, run the notebook/gist and then repeat the last 2 steps until a quantized operator without dequantize/quantize ops appear in the Netron window. \r\n\r\nThere seems to just be a random chance every time one runs the converter to get a functioning quantized unstack operator. A speculation would be that there's a memory overwrite somewhere. The behaviour is present even when running the converter on a single core and while keeping the representative dataset constant. The model file to convert was also kept constant. ",
"I was able to reproduce the somewhat random behavior after 7 attempts with the attached gist. @abattery, can you please take a look? Thanks.",
"did this has solution? i got this promblem too"
] | 2023-07-19T09:31:11 | 2024-06-03T10:12:51 | null | NONE | null | null | null | ### System information
- Linux Ubuntu 20.04
- TensorFlow installed from: pip source
- TensorFlow versions: 2.12.0-current master
### Code
```
import tensorflow as tf
from tensorflow.keras.layers import Layer, Input
from tensorflow.keras.models import Model
import numpy as np
class UnstackLayer(Layer):
    def __init__(self, **kwargs):
        super(UnstackLayer, self).__init__(**kwargs)
    def call(self, inputs):
        unstacked = tf.unstack(inputs, axis=1)
        # only the last output is used as input to the add operator
        return unstacked[-1]
input_tensor = Input(shape=(4, 16, 32))
x = UnstackLayer()(input_tensor)
output_tensor = tf.add(x, 1)
model = Model(inputs=input_tensor, outputs=output_tensor)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
def representative_data_gen():
    for input_value in [np.random.randn(1, 4, 16, 32).astype(np.float32) for _ in range(10)]:
        yield [input_value]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.target_spec.supported_types = {tf.int8}
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()
# Save the TFLite model to a .tflite file
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
```
### Failure after conversion
Input Model:

Output Model:

In TF 2.11 and below, no dequantize/quantize ops appear after conversion, which is the expected behaviour.
### Other info
The conversion started failing with TF 2.12.0. Note that the conversion succeeds intermittently when converting the same network many times, but on average it fails. This intermittent behaviour is still present if one runs the converter on a single core and keeps the representative dataset constant. Similar issues seem to be present when converting split operators as well.
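For checking whether a given conversion run produced the extra dequantize/quantize pair without opening the model in Netron, a minimal sketch like the following can be used. It assumes the `tflite_model` bytes produced by the script above; the analyzer report still has to be read by eye for DEQUANTIZE/QUANTIZE ops around UNPACK.
```python
import tensorflow as tf

# Print the op-level structure of the converted flatbuffer; a "clean" conversion
# shows UNPACK consuming/producing int8 tensors with no DEQUANTIZE/QUANTIZE pair.
tf.lite.experimental.Analyzer.analyze(model_content=tflite_model)

# A second, rougher check: list the tensor dtypes the interpreter sees.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
for detail in interpreter.get_tensor_details():
    print(detail["name"], detail["dtype"])
```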
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61323/timeline | null | reopened | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61322 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61322/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61322/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61322/events | https://github.com/tensorflow/tensorflow/issues/61322 | 1,811,535,020 | I_kwDOArmXAs5r-dSs | 61,322 | Tensorflow and onednn logs | {
"login": "akote123",
"id": 133775732,
"node_id": "U_kgDOB_lBdA",
"avatar_url": "https://avatars.githubusercontent.com/u/133775732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akote123",
"html_url": "https://github.com/akote123",
"followers_url": "https://api.github.com/users/akote123/followers",
"following_url": "https://api.github.com/users/akote123/following{/other_user}",
"gists_url": "https://api.github.com/users/akote123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akote123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akote123/subscriptions",
"organizations_url": "https://api.github.com/users/akote123/orgs",
"repos_url": "https://api.github.com/users/akote123/repos",
"events_url": "https://api.github.com/users/akote123/events{/privacy}",
"received_events_url": "https://api.github.com/users/akote123/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473184161,
"node_id": "MDU6TGFiZWw0NzMxODQxNjE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:support",
"name": "type:support",
"color": "159b2e",
"default": false,
"description": "Support issues"
},
{
"id": 1104829434,
"node_id": "MDU6TGFiZWwxMTA0ODI5NDM0",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:mkl",
"name": "comp:mkl",
"color": "0052cc",
"default": false,
"description": "MKL related issues"
}
] | open | false | {
"login": "TensorFlow-MKL",
"id": 44416303,
"node_id": "MDQ6VXNlcjQ0NDE2MzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/44416303?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TensorFlow-MKL",
"html_url": "https://github.com/TensorFlow-MKL",
"followers_url": "https://api.github.com/users/TensorFlow-MKL/followers",
"following_url": "https://api.github.com/users/TensorFlow-MKL/following{/other_user}",
"gists_url": "https://api.github.com/users/TensorFlow-MKL/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TensorFlow-MKL/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TensorFlow-MKL/subscriptions",
"organizations_url": "https://api.github.com/users/TensorFlow-MKL/orgs",
"repos_url": "https://api.github.com/users/TensorFlow-MKL/repos",
"events_url": "https://api.github.com/users/TensorFlow-MKL/events{/privacy}",
"received_events_url": "https://api.github.com/users/TensorFlow-MKL/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "TensorFlow-MKL",
"id": 44416303,
"node_id": "MDQ6VXNlcjQ0NDE2MzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/44416303?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TensorFlow-MKL",
"html_url": "https://github.com/TensorFlow-MKL",
"followers_url": "https://api.github.com/users/TensorFlow-MKL/followers",
"following_url": "https://api.github.com/users/TensorFlow-MKL/following{/other_user}",
"gists_url": "https://api.github.com/users/TensorFlow-MKL/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TensorFlow-MKL/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TensorFlow-MKL/subscriptions",
"organizations_url": "https://api.github.com/users/TensorFlow-MKL/orgs",
"repos_url": "https://api.github.com/users/TensorFlow-MKL/repos",
"events_url": "https://api.github.com/users/TensorFlow-MKL/events{/privacy}",
"received_events_url": "https://api.github.com/users/TensorFlow-MKL/received_events",
"type": "User",
"site_admin": false
},
{
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @akote123 , could you share the original verbose log, thanks!",
"Hi @huiyan2021, I have uploaded here [vit_intel_log.zip](https://github.com/tensorflow/tensorflow/files/12105129/vit_intel_log.zip) \r\n",
"I guess the reason is that _mklops are logged by <<, where the outputs are fully buffered if they are redirected to a file, while oneDNN verbose always flush stdout immediately: https://github.com/search?q=repo%3Aoneapi-src%2FoneDNN+fflush&type=code",
"@huiyan2021 , one more observation what I found is some ops are common_runtime/eager/execute.cc and some are in common_runtime/executor.cc , here I am not able to understand why there is two path for execution ",
"Hi @akote123,\r\n\r\nfrom the log I can see you are using xla, so for ops can not be jitted, they come to common_runtime/eager/execute.cc, otherwise they come to common_runtime/executor.cc\r\n\r\nalso, suggest that you use [trace viewer](https://www.tensorflow.org/guide/profiler#trace_viewer) to trace executions. ",
"@huiyan2021 , do we have file location tensorflow source code where checking happen whether to go common_runtime/eager/execute.cc or the other one.\r\n",
"@akote123 , you can refer to this article: https://whatdhack.medium.com/tensorflow-graph-graphdef-grappler-xla-mlir-llvm-etc-615191e96ebc, see XLA Flow part and call stack",
"@huiyan2021 , In tensorflow the single model can go in both XLA and oneDNN or is it like either it will use XLA or oneDNN only\r\n",
"Both. There may be different scenarios: \r\n1. Some parts of the model go in XLA path, some parts go in oneDNN path.\r\n2. Intel recently submitted a [pilot PR](https://github.com/tensorflow/tensorflow/pull/61237) to accelerate XLA’s Dot op with oneDNN.\r\n\r\nYou can refer to this RFC: https://docs.google.com/document/d/1ZzMcrjxITJeN2IjjgbzUjHh-4W1YgDUus3j25Dvn9ng/edit",
"@huiyan2021 ,Thank you for the pointers I will got through it .\r\nActually for pretrained models how we can enable XLA both for inference and transfer learning",
"same as training, you can refer to https://www.tensorflow.org/xla#enable_xla_for_tensorflow_models"
] | 2023-07-19T09:17:40 | 2023-11-01T02:54:00 | null | NONE | null | null | null | Hi,
I am trying to analyse the call flow between TensorFlow and oneDNN. I set the environment variables as follows:
export ONEDNN_VERBOSE=1
export TF_CPP_MAX_VLOG_LEVEL=1
export OMP_NUM_THREADS=1
I collect the logs and try to map each _Mkl op to its oneDNN primitive, but the oneDNN and MKL entries appear in the log in a seemingly random order; for example, after 10 MKL op calls I see 20 oneDNN calls.
Is there a flag that needs to be set so that the two logs line up correctly, or do we need to map them manually?
[filtered_intel_log.txt](https://github.com/tensorflow/tensorflow/files/12093285/filtered_intel_log.txt)
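One practical detail, offered as a suggestion rather than a confirmed cause of the misordering: the variables need to be in the environment before TensorFlow (and with it oneDNN) is loaded, and redirecting stdout and stderr into a single file can interleave the two logs out of order, since one side may flush immediately while the other is buffered. A minimal sketch, with the actual workload left as a placeholder:
```python
import os

# TF_CPP_MAX_VLOG_LEVEL is read when the TensorFlow library is first loaded,
# so set these knobs before `import tensorflow`.
os.environ["ONEDNN_VERBOSE"] = "1"
os.environ["TF_CPP_MAX_VLOG_LEVEL"] = "1"
os.environ["OMP_NUM_THREADS"] = "1"

import tensorflow as tf  # noqa: E402

# ... build and run the model to be traced here, then map the two logs offline.
```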
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61322/timeline | null | null | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61321 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61321/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61321/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61321/events | https://github.com/tensorflow/tensorflow/issues/61321 | 1,811,458,501 | I_kwDOArmXAs5r-KnF | 61,321 | Android GPU Delegate Error while using groups in Conv2d | {
"login": "HoshinoAris",
"id": 69342739,
"node_id": "MDQ6VXNlcjY5MzQyNzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/69342739?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HoshinoAris",
"html_url": "https://github.com/HoshinoAris",
"followers_url": "https://api.github.com/users/HoshinoAris/followers",
"following_url": "https://api.github.com/users/HoshinoAris/following{/other_user}",
"gists_url": "https://api.github.com/users/HoshinoAris/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HoshinoAris/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HoshinoAris/subscriptions",
"organizations_url": "https://api.github.com/users/HoshinoAris/orgs",
"repos_url": "https://api.github.com/users/HoshinoAris/repos",
"events_url": "https://api.github.com/users/HoshinoAris/events{/privacy}",
"received_events_url": "https://api.github.com/users/HoshinoAris/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 750616506,
"node_id": "MDU6TGFiZWw3NTA2MTY1MDY=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:lite",
"name": "comp:lite",
"color": "0052cc",
"default": false,
"description": "TF Lite related issues"
},
{
"id": 2671339633,
"node_id": "MDU6TGFiZWwyNjcxMzM5NjMz",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TFLiteGpuDelegate",
"name": "TFLiteGpuDelegate",
"color": "F71F04",
"default": false,
"description": "TFLite Gpu delegate issue"
}
] | closed | false | {
"login": "pjpratik",
"id": 118897289,
"node_id": "U_kgDOBxY6iQ",
"avatar_url": "https://avatars.githubusercontent.com/u/118897289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pjpratik",
"html_url": "https://github.com/pjpratik",
"followers_url": "https://api.github.com/users/pjpratik/followers",
"following_url": "https://api.github.com/users/pjpratik/following{/other_user}",
"gists_url": "https://api.github.com/users/pjpratik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pjpratik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjpratik/subscriptions",
"organizations_url": "https://api.github.com/users/pjpratik/orgs",
"repos_url": "https://api.github.com/users/pjpratik/repos",
"events_url": "https://api.github.com/users/pjpratik/events{/privacy}",
"received_events_url": "https://api.github.com/users/pjpratik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "pjpratik",
"id": 118897289,
"node_id": "U_kgDOBxY6iQ",
"avatar_url": "https://avatars.githubusercontent.com/u/118897289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pjpratik",
"html_url": "https://github.com/pjpratik",
"followers_url": "https://api.github.com/users/pjpratik/followers",
"following_url": "https://api.github.com/users/pjpratik/following{/other_user}",
"gists_url": "https://api.github.com/users/pjpratik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pjpratik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjpratik/subscriptions",
"organizations_url": "https://api.github.com/users/pjpratik/orgs",
"repos_url": "https://api.github.com/users/pjpratik/repos",
"events_url": "https://api.github.com/users/pjpratik/events{/privacy}",
"received_events_url": "https://api.github.com/users/pjpratik/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@stellaly As the error message indicates, please call invoke on the same thread where the GPU delegate is created as noted [here](https://www.tensorflow.org/lite/performance/gpu#android) and use the latest version of TFLiteGPUDelegate.\r\nThank you!",
"> @stellaly As the error message indicates, please call invoke on the same thread where the GPU delegate is created as noted [here](https://www.tensorflow.org/lite/performance/gpu#android) and use the latest version of TFLiteGPUDelegate. Thank you!\r\n\r\nOf cource I call invoke on the same thread where the GPU delegate is created\r\nHere is my Android code\r\n```kotlin\r\n val modelloader = ModelLoader(context)\r\n \r\n val options = Interpreter.Options()\r\n options.addDelegate(GpuDelegate())\r\n// options.addDelegate(NnApiDelegate())\r\n\r\n val tflite = Interpreter(modelloader.loadMappedFile(\"model/tf_test.tflite\"), options)\r\n val inputShape0 = tflite.getInputTensorFromSignature(\"x\", \"compress_stage0\").shape()\r\n val inputShape1 = tflite.getInputTensorFromSignature(\"lambda_rd\", \"compress_stage0\").shape()\r\n \r\n val inputTensor0 = TensorBuffer.createFixedSize(\r\n inputShape0, DataType.FLOAT32\r\n )\r\n val inputTensor1 = TensorBuffer.createFixedSize(\r\n inputShape1, DataType.FLOAT32\r\n )\r\n\r\n val outputTensor = TensorBuffer.createFixedSize(\r\n tflite.getOutputTensorFromSignature(\"output_0\", \"compress_stage0\").shape(),\r\n DataType.FLOAT32\r\n )\r\n\r\n val inputsMap = mapOf(\"x\" to inputTensor0.buffer, \"lambda_rd\" to inputTensor1.buffer)\r\n val outputsMap = mapOf(\"output_0\" to outputTensor.buffer)\r\n tflite.runSignature(\r\n inputsMap,\r\n outputsMap,\r\n \"compress_stage0\"\r\n )\r\n```",
"Hi @stellaly \r\n\r\nCan you please specify which version of TFLite you are using?\r\n\r\nFor Android 12 and above , please add the following lines in manifest file to detect GPU and let us know if it resolves the issue.\r\n\r\n```\r\n<uses-library android:name=\"libOpenCL.so\"\r\n android:required=\"false\"/>\r\n\r\n<uses-library android:name=\"libOpenCL-pixel.so\"\r\n android:required=\"false\"/>\r\n```\r\nThanks.",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further."
] | 2023-07-19T08:39:43 | 2023-08-04T01:51:54 | 2023-08-04T01:51:54 | NONE | null | null | null | **System information**
- Android 13:
- TensorFlow installed from binary:
**use GPU Delegate**
**use model**
```python
self.g_a0 = tf.keras.layers.Conv2D(
N,
kernel_size=5,
strides=2,
padding="same",
# data_format="channels_first",
)
self.g_a1_test1 = tf.keras.layers.Conv2D(
N,
3,
padding="same",
groups=N,
# data_format="channels_first",
)
```
**error on android**
```
java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: Can not open OpenCL library on this device - undefined symbol: clGetCommandBufferInfoKHR
Falling back to OpenGL
TfLiteGpuDelegate Init: No shader implementation for split
TfLiteGpuDelegate Prepare: delegate is not initialized
Node number 2 (TfLiteGpuDelegateV2) failed to prepare.
Restored original execution plan after delegate application failure.
```
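The `No shader implementation for split` line suggests the grouped convolution is lowered to a SPLIT op that the OpenGL fallback cannot handle. A possible model-side workaround, sketched under the assumption that the intent is one filter per channel: when `groups` equals both the number of filters and the number of input channels, the layer computes the same thing as a depthwise convolution, which the GPU delegate does support.
```python
import tensorflow as tf

N = 192  # assumed channel count standing in for the N used above

# Equivalent to Conv2D(N, 3, padding="same", groups=N) on an N-channel input:
# one 3x3 filter per channel, so TFLite sees DEPTHWISE_CONV_2D instead of
# SPLIT + per-group CONV_2D + CONCATENATION.
layer = tf.keras.layers.DepthwiseConv2D(kernel_size=3, depth_multiplier=1, padding="same")

x = tf.random.normal([1, 16, 16, N])
print(layer(x).shape)  # (1, 16, 16, N)
```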
**use another model**
```python
self.g_a0 = tf.keras.layers.Conv2D(
N,
kernel_size=5,
strides=2,
padding="same",
# data_format="channels_first",
)
self.g_a1_test1 = tf.keras.layers.Conv2D(
N,
3,
padding="same",
# groups=N,
# data_format="channels_first",
)
```
No errors on Android | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61321/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61320 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61320/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61320/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61320/events | https://github.com/tensorflow/tensorflow/pull/61320 | 1,811,201,352 | PR_kwDOArmXAs5V2jMx | 61,320 | refactor: _postprocess_flat_outputs() | {
"login": "arjun-234",
"id": 103405661,
"node_id": "U_kgDOBinYXQ",
"avatar_url": "https://avatars.githubusercontent.com/u/103405661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arjun-234",
"html_url": "https://github.com/arjun-234",
"followers_url": "https://api.github.com/users/arjun-234/followers",
"following_url": "https://api.github.com/users/arjun-234/following{/other_user}",
"gists_url": "https://api.github.com/users/arjun-234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arjun-234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arjun-234/subscriptions",
"organizations_url": "https://api.github.com/users/arjun-234/orgs",
"repos_url": "https://api.github.com/users/arjun-234/repos",
"events_url": "https://api.github.com/users/arjun-234/events{/privacy}",
"received_events_url": "https://api.github.com/users/arjun-234/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1133285679,
"node_id": "MDU6TGFiZWwxMTMzMjg1Njc5",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:xla",
"name": "comp:xla",
"color": "0052cc",
"default": false,
"description": "XLA"
},
{
"id": 1169364458,
"node_id": "MDU6TGFiZWwxMTY5MzY0NDU4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:S",
"name": "size:S",
"color": "adafea",
"default": false,
"description": "CL Change Size: Small"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Did this show up in a performance profile or why did you decide to optimize this code?\r\n\r\nReadability is always subjective and one could argue that list comprehensions are easier to read.",
"Hey @sherhut, Thanks for your feedback! The code optimization aimed to improve efficiency by consolidating loops. While I didn't run a performance profile, I believe it can enhance execution. Regarding readability, I'll consider list comprehensions. Open to other suggestions too :)",
"Hey @sherhut,\r\nCould you please review it once you have a moment :)",
"Hi @sherhut, Can you please review this PR ? Thank you!",
"I agree this looks like a reasonable optimization on paper. But without seeing it improve something on a performance profile, it is hard to judge whether it is worth it. Readability is always subjective and I find both versions equally easy to read, if not preferring the original."
] | 2023-07-19T05:48:28 | 2023-09-12T07:15:47 | 2023-09-12T07:15:44 | NONE | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61320",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61320",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61320.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61320.patch",
"merged_at": null
} | In this refactored version, I have changed the code to consolidate two loops into one loop for the same method, improving efficiency and readability. | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61320/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61319 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61319/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61319/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61319/events | https://github.com/tensorflow/tensorflow/issues/61319 | 1,811,168,902 | I_kwDOArmXAs5r9D6G | 61,319 | Converted(quantized) model of simple dense neural network returns same repeated output | {
"login": "sinban04",
"id": 15901475,
"node_id": "MDQ6VXNlcjE1OTAxNDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/15901475?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sinban04",
"html_url": "https://github.com/sinban04",
"followers_url": "https://api.github.com/users/sinban04/followers",
"following_url": "https://api.github.com/users/sinban04/following{/other_user}",
"gists_url": "https://api.github.com/users/sinban04/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sinban04/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sinban04/subscriptions",
"organizations_url": "https://api.github.com/users/sinban04/orgs",
"repos_url": "https://api.github.com/users/sinban04/repos",
"events_url": "https://api.github.com/users/sinban04/events{/privacy}",
"received_events_url": "https://api.github.com/users/sinban04/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 1463677878,
"node_id": "MDU6TGFiZWwxNDYzNjc3ODc4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:performance",
"name": "type:performance",
"color": "159b2e",
"default": false,
"description": "Performance Issue"
},
{
"id": 1661751498,
"node_id": "MDU6TGFiZWwxNjYxNzUxNDk4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TFLiteConverter",
"name": "TFLiteConverter",
"color": "bfdadc",
"default": false,
"description": "For issues related to TFLite converter"
},
{
"id": 3531398540,
"node_id": "LA_kwDOArmXAs7SfN2M",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.7",
"name": "TF 2.7",
"color": "77237D",
"default": false,
"description": "Issues related to TF 2.7.0"
}
] | closed | false | {
"login": "pjpratik",
"id": 118897289,
"node_id": "U_kgDOBxY6iQ",
"avatar_url": "https://avatars.githubusercontent.com/u/118897289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pjpratik",
"html_url": "https://github.com/pjpratik",
"followers_url": "https://api.github.com/users/pjpratik/followers",
"following_url": "https://api.github.com/users/pjpratik/following{/other_user}",
"gists_url": "https://api.github.com/users/pjpratik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pjpratik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjpratik/subscriptions",
"organizations_url": "https://api.github.com/users/pjpratik/orgs",
"repos_url": "https://api.github.com/users/pjpratik/repos",
"events_url": "https://api.github.com/users/pjpratik/events{/privacy}",
"received_events_url": "https://api.github.com/users/pjpratik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "pjpratik",
"id": 118897289,
"node_id": "U_kgDOBxY6iQ",
"avatar_url": "https://avatars.githubusercontent.com/u/118897289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pjpratik",
"html_url": "https://github.com/pjpratik",
"followers_url": "https://api.github.com/users/pjpratik/followers",
"following_url": "https://api.github.com/users/pjpratik/following{/other_user}",
"gists_url": "https://api.github.com/users/pjpratik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pjpratik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjpratik/subscriptions",
"organizations_url": "https://api.github.com/users/pjpratik/orgs",
"repos_url": "https://api.github.com/users/pjpratik/repos",
"events_url": "https://api.github.com/users/pjpratik/events{/privacy}",
"received_events_url": "https://api.github.com/users/pjpratik/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@sinban04 Could you please try to rerun the model using the latest TF version as you're using an older one?\r\nHave you gone through [this](https://www.tensorflow.org/lite/performance/post_training_quantization#full_integer_quantization) part already?\r\nThank you!",
"@sushreebarsa \r\nSure, I already did try those in the link you gave. I wrote the code with the content in the link in the first place. \r\n\r\n> Could you please try to rerun the model using the latest TF version as you're using an older one?\r\n\r\nIn addition, I tried with the latest version as i said above (sorry, the content text is too long)\r\n\r\n> I tried other tensorflow versions (2.13.0, or nightly: 2.14.0-dev20230706) but still same as before.\r\n\r\n",
"@sinban04 Thank you for the kind response!\r\n@pjpratik Could you please have a look at this?\r\nThank you!",
"Hi @sinban04 \r\n\r\nI have tried to reproduce the issue with a toy model similar to the one provided and observed that model outputs different results when inputs change for full integer quantization and dynamic range quantization. Am I missing something here?\r\n\r\nPlease find the [gist](https://colab.research.google.com/gist/pjpratik/e969599feef00d0a7f00240a39caf0bb/61319.ipynb) and let us know if it helps.\r\n\r\nThanks.",
"@pjpratik \r\nOkay, at first, thank you for your attention. \r\nI saw your gist code and even on your example, we can see the output is converging to a single value around 55~57.\r\nTo be clear, I wrote the code on the colab and the same problem is reproduced. \r\nPlz check this out: [mygist](https://colab.research.google.com/drive/1A-y90n8MLFqS71BGmPoZuEWIR7gBbB71?usp=sharing)\r\nYou would grasp the problem much better :)\r\n\r\nFYI, my original dataset is 501 line, and 24KB",
"Hi @sinban04 \r\n\r\nThanks for the gist. \r\n\r\nI did observe similar output for the full integer quantization but for float32 input/output I did observe different outputs in your gist as well.\r\n>I saw your gist code and even on your example, we can see the output is converging to a single value around 55~57.\r\n\r\nYes, the reason is the quantization effect on the input/output. Please check this [Quantization scheme](https://www.tensorflow.org/lite/performance/quantization_spec) used by TFLite for 8 bit quantization.\r\n\r\n",
"@pjpratik \r\n \r\n> but for float32 input/output I did observe different outputs in your gist as well.\r\n\r\nI tried as well after your comment. It's making different out for now, \r\nbut with all my dataset, it returns same floating value as i summarized the issue above\r\nAnd as you see, with integer only mode, it returns same output regardless of the input.\r\n\r\nFrom my experience, I assume that \r\nwhen the model is too complex (I suppose my model is not that complex though)\r\nduring quantization, the model weights are kinda quantized even not to function the proper prediction task.\r\n\r\nSo the simpler model you made returned some outputs (slightly different) but my model (more complex) did not.\r\nFor the same reason, with float16 quantization, \r\nwhen the model weights kinda \"burst\", it becomes impotent not to make proper predictions. \r\n(Y = Wx+b and it seems the weights are converging to 0, and the same output comes from bias)\r\n\r\nDo you think my theory is valid ? And there's no way to fix this problem for now ?\r\nFurthermore, i'm not sure what kind of model is appropriate for quantizaton and not (model compatibility for quantization)\r\nYou guys, Do you have any information about this quantization problem ?",
"Hi @sinban04 \r\n\r\nThanks for the information.\r\n\r\nThe 8-bit quantization approximates floating point values using the following formula.\r\n\r\n$$real value = (int8 value - zero point) \\times scale$$\r\n\r\nAnd as per https://arxiv.org/abs/1712.05877, TFLite uses a single set of\r\nquantization parameters(scale and zero point)for all values within each activations array and within each weights array; separate arrays use separate quantization parameters.\r\n\r\nThe quantization scales are computed offline such that relative accuracy is maintained (Sec 2.2 of https://arxiv.org/abs/1712.05877), the Table 4.1 provides results of ResNet on ImageNet: Floating-point vs quantized network accuracy for various network depths.\r\n\r\nThanks.\r\n\r\n\r\n",
"@pjpratik \r\nThank you for the paper reference :) \r\nI have one more question about my case. \r\nAs you can see in my first comment (main content at the top), i printed each layer of quantized model for debugging.\r\nI expected the values of weight are all in range between [-128, 127] or [0, 256] as it's quantized in 8bit. (unsigned, or signed)\r\nBut, some values out of range exist at some layer. \r\n\r\n```\r\n[ 2110 2151 -1352 1999 1340 -833 -1006 -657 -1454 -641 2383 2034\r\n 2358 -577 -1184 2455 2270 2831 -626 -1301 2957 2355 -665 -1340\r\n...\r\n -1140 2348 2233 2103 2280 2138 2450 2377 -1008 2026 1958 1892\r\n -1075 -1108 1946 -1078 2522 1899 1416 1882]\r\n```\r\nis it normal ? or is there some exceptional case ?\r\nOr i got something wrong ?\r\n",
"Hi @sinban04 \r\n\r\nI see the weights are converted into int8 as intended and are in range [-128,127] when checked with netron.\r\n\r\nI have attached the weights and also the [gist](https://colab.research.google.com/gist/pjpratik/baa2c163df1bf564d126b69a815a7467/61319.ipynb).\r\n\r\n[FC3.txt](https://github.com/tensorflow/tensorflow/files/12247682/FC3.txt)\r\n[FC2.txt](https://github.com/tensorflow/tensorflow/files/12247683/FC2.txt)\r\n[FC1.txt](https://github.com/tensorflow/tensorflow/files/12247684/FC1.txt)\r\n\r\nThanks.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61319\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61319\">No</a>\n"
] | 2023-07-19T05:21:21 | 2023-08-07T03:52:08 | 2023-08-07T03:52:05 | NONE | null | null | null | ### 1. System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 22.04.2 LTS
- TensorFlow installation (pip package or built from source):
```bash
export condaPip=`which -a pip | grep $condaEnvName`
$condaPip install tensorflow==2.7.0
$condaPip install numpy
$condaPip install pandas
$condaPip install scikit-learn
$condaPip install statsmodels
$condaPip install matplotlib
$condaPip install future
$condaPip install onnx
$condaPip install torchviz
$condaPip install mpi
$condaPip install torch
$condaPip install tqdm
$condaPip install pydot
$condaPip install ipympl
$condaPip install seaborn
$condaPip install tabulate
$condaPip install xgboost
$condaPip install catboost
$condaPip install bokeh
```
Installed using conda
- TensorFlow library (version, if pip package or github SHA, if built from source):
Listed above
### 2. Code
```python
dnn_model = keras.Sequential([
    normalizer,
    layers.Dense(512, activation='relu', input_dim=x_train.shape[1]),
    layers.Dense(512, activation='relu'),
    layers.Dense(1, activation='sigmoid')
])
```
https://www.tensorflow.org/lite/performance/post_training_quantization
My model code is as simple as shown above.
I used **post-training quantization with tf.lite.TFLiteConverter** (float32 to int8),
specifically integer-only post-training quantization (https://www.tensorflow.org/lite/performance/post_training_quantization#integer_only).
(Input dim: 8)
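A minimal sketch of the integer-only conversion path referred to above; the representative-dataset generator and `x_train` are assumed placeholders rather than the exact code used, and the uint8 input/output types simply match the uint8 results shown below.
```python
import numpy as np
import tensorflow as tf

def representative_data_gen():
    # x_train is assumed: roughly 100 samples with the real 8-feature input.
    for sample in x_train[:100]:
        yield [np.asarray(sample, dtype=np.float32).reshape(1, 8)]

converter = tf.lite.TFLiteConverter.from_keras_model(dnn_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
```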
### 3. Failure after conversion
#### Reset input shape
After the conversion, the model's input shape was distorted, and I solved that problem by resetting the input shape:
```python
interpreter.resize_tensor_input(input_details['index'], (1,8))
```
Inference then runs, but the model returns the same output for every input, like this:
```
[array([[76]], dtype=uint8), array([[76]], dtype=uint8), array([[76]], dtype=uint8), array([[76]], ...
```
I googled cases like mine; some pointed to the number of training epochs or the model structure.
Nothing was wrong with the training epochs, so I suspect there is some model-compatibility issue with quantization.
I also tried other TensorFlow versions (2.13.0, and nightly 2.14.0-dev20230706), but the result is the same.
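One way to interpret the repeated uint8 value is to map it back to the float scale with the output tensor's quantization parameters (scale and zero point); this is a diagnostic sketch, assuming the `interpreter` above has already been invoked, and it does not by itself explain why the value never changes.
```python
import numpy as np

out = interpreter.get_output_details()[0]
scale, zero_point = out["quantization"]

raw = interpreter.get_tensor(out["index"])   # e.g. array([[76]], dtype=uint8)
dequantized = (raw.astype(np.float32) - zero_point) * scale
print(scale, zero_point, dequantized)
```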
#### Quantization w/ floating value
https://www.tensorflow.org/lite/performance/post_training_quantization#integer_with_float_fallback_using_default_float_inputoutput
I tried quantization with float32 input/output, but it likewise returns a single repeated float value:
```
[array([[0.36328125]], dtype=float32), array([[0.36328125]], dtype=float32), array([[0.36328125]], dtype=float32), array([[0.36328125]], dtype=float32), array([[0.36328125]], dtype=float32), ...
```
#### Quantized Model Weights
I printed out the quantized model's weights, but I am not sure why this happens.
I'm afraid most of the model weights go to 0 during conversion, which makes the model useless,
so the identical output comes from the model bias alone.
(You can see the quantized model's weights below.)
Do you have any idea why this happens, or any tips?
### 5. (optional) Any other info / logs
#### The predicted output of original model (Expected)
```
[0.04892164 0.05425507 0.34756148 0.12509596 0.05663526 0.05041704
0.01185179 0.33582878 0.0572255 0.07754967 0.20722428 0.208343
0.48504433 0.10513872 0.05141798 0.03368005 0.09833255 0.2034252
0.109099 0.48486084 0.23009542 0.08055633 0.06971437 0.14443058
0.12913615 0.03327829 0.06535947 0.4671367 0.20984942 0.35980904
...
0.23956934 0.23102319 0.10548833 0.06846321 0.16444126 0.04730341
0.47125074 0.1931791 0.16075167 0.06618178 0.06408405 0.50926745
0.12504464 0.73206306 0.22954968 0.00386357 0.19742006 0.09496576
0.04637471 0.06708863 0.37460512 0.17932612 0.10157627 0.04863232
0.04412654 0.50302374 0.06885636 0.21528003 0.09171575 0.0635072
0.08805934 0.3634451 0.57593703 0.09678486]
```
#### Quantized model weights
<details>
<summary> Click to watch </summary>
```
Quantized weights
[[ 0 161 1 199 181 127 0 0]]
[[-128 127 -126 -126 -126 -127 -127 -127]]
[[ 127 -128 -128 -128 -128 -128 -128 -128]]
[[ 80 23 -77 ... 0 7 115]
[ 74 55 -27 ... 70 56 103]
[ 112 -89 43 ... -18 -112 -91]
...
[ 45 104 -4 ... -57 91 47]
[ 43 97 -94 ... 19 113 -43]
[ 110 50 110 ... 34 18 3]]
[ 57 -20 195 271 152 140 164 230 186 199 -129 -9 -25 204
192 -11 183 -83 210 137 166 13 -21 36 248 243 -118 128
-114 224 221 85 221 134 -142 -60 108 194 118 222 183 39
201 104 -130 51 7 -39 16 246 224 103 -103 -43 -36 198
156 -87 224 175 76 -105 -54 97 -69 62 -26 -169 238 63
99 235 32 -130 -98 152 247 -56 145 191 -122 161 88 -119
-2 -17 104 158 147 123 221 189 236 170 266 191 210 162
-43 193 -94 208 -94 71 -130 97 -95 -29 181 244 183 -120
72 -40 -106 -151 41 125 210 -92 84 -14 187 1 141 -94
43 141 195 201 -7 227 178 8 67 243 97 229 200 38
195 197 203 26 186 94 170 -138 210 209 49 -150 -146 202
187 199 -91 196 95 216 222 203 213 237 167 -39 187 180
183 -80 -28 170 -62 243 69 48 -57 178 241 213 247 42
207 131 -111 -117 149 -114 -131 152 143 -36 -124 -81 -141 126
259 127 89 -4 124 -63 229 192 191 -4 -79 -46 35 -99
195 77 -62 185 194 30 -75 192 130 35 179 -155 -96 -7
178 29 177 -105 197 39 237 176 171 201 24 199 -70 -149
236 220 211 -165 165 248 -100 -71 -152 -123 77 183 159 -72
186 37 -127 159 183 211 171 -120 38 -7 183 208 -28 -101
174 176 234 204 182 165 238 145 232 159 256 50 -59 77
-31 109 -3 -117 41 48 140 131 203 235 117 106 80 197
32 166 141 183 203 -62 59 183 -36 -54 188 28 221 -108
-136 -79 190 161 199 111 62 153 -12 139 205 247 233 154
209 222 -130 27 209 -103 -83 240 130 210 184 138 -104 72
173 28 14 206 -80 119 177 124 158 200 242 230 83 156
242 7 190 210 -94 114 210 224 225 -42 232 -34 -67 -39
230 165 199 224 -134 163 156 -7 136 134 176 2 209 73
40 192 -109 216 227 201 -6 157 250 81 142 19 218 187
-52 13 236 29 -24 243 90 135 143 174 189 218 146 239
173 230 211 211 139 248 178 217 210 206 -86 136 -65 198
201 218 238 192 223 -125 -178 203 16 -146 -162 -116 198 220
232 179 105 -51 137 -111 -35 -154 266 -38 -88 -123 189 -95
-98 237 159 -63 186 72 71 -25 182 168 239 216 171 168
253 -33 223 -148 109 186 198 -157 229 198 210 100 155 -130
112 210 111 -114 59 190 224 50 177 -119 144 -95 -110 182
225 183 56 223 119 171 155 200 220 156 -73 178 44 200
194 -6 -65 220 18 81 -20 -43]
[[ 23 24 -26 ... 36 -24 37]
[ 48 40 -87 ... 44 -32 -21]
[ -2 16 -38 ... -101 -28 104]
...
[ 64 89 2 ... -37 95 -48]
[ 47 -14 107 ... -85 -23 -106]
[ 12 31 -60 ... 63 -71 -85]]
[ 2110 2151 -1352 1999 1340 -833 -1006 -657 -1454 -641 2383 2034
2358 -577 -1184 2455 2270 2831 -626 -1301 2957 2355 -665 -1340
2138 -948 1411 -747 -690 -1268 2206 -575 2340 2376 70 2465
-530 -668 -1109 951 2015 2042 -825 -976 2430 -682 -1268 -174
-1176 1994 -440 -954 -727 -1087 -1017 -1290 -968 2100 1938 -783
1915 -175 2582 -340 1964 -1330 2090 1963 -688 -1356 -22 1897
2157 2447 1994 2207 -593 -1213 -1291 2120 2198 -1067 -918 2017
1983 2399 2494 2201 1857 -1121 2406 2163 -48 -895 -1257 -1288
2175 2481 2058 2170 3646 -714 -1171 1959 2715 2042 2605 -928
-473 2099 2286 2298 1995 1968 2472 -356 -901 971 2610 -796
2199 2135 1871 2212 1927 -972 2225 -975 -820 -932 -1070 2459
-755 -721 1895 2021 -967 1987 -689 -1477 2076 -1138 2072 -1029
-413 1856 -649 2216 2087 2194 1955 1996 2144 1895 3004 -780
1739 -1404 -586 -1151 1895 -940 -1537 3030 -282 832 2043 1985
2074 1936 2167 -1049 2218 -531 -1049 -843 -1189 -541 2679 -911
2163 1518 -842 -1254 -966 1975 -1307 -983 -1360 -761 2085 -1137
2005 -1255 2499 2207 -797 -1105 2200 1923 1895 -885 -633 1901
1926 -1021 2195 1965 -1151 -952 -502 -1146 2204 -1306 2188 1859
-1117 1888 -1144 1930 2082 -522 -1029 326 2102 -1007 2078 2298
2038 1992 -1017 1646 -1298 1881 -1000 1916 2775 -558 2738 -1283
-715 -1176 -1282 2161 2398 1922 -746 -1193 2281 -1019 2200 2021
2053 1871 -1253 -621 -1009 -1205 2219 -925 -1342 0 2216 -885
-1417 -1225 -1295 2147 -984 2133 1986 2767 -1077 1961 2159 -1142
-1275 259 2182 -1005 -987 -1213 2103 2014 2043 -1087 -1509 -1196
-1394 -303 -1050 -995 2462 2664 2005 -1157 -931 2737 -1013 -1521
-751 -934 2335 2780 -1111 2000 1935 2166 2061 2373 2191 -682
-1478 1882 -505 2093 1916 -1253 -304 2630 1891 -1145 -1167 1923
1887 1956 2147 2966 -332 -270 336 -1002 -1138 -676 2056 -930
-520 2277 1894 -970 -1346 -768 1858 2171 2364 2105 -795 2191
-999 2027 -1141 -381 -1358 -732 1929 -469 -1206 -1283 1889 2446
-1072 -1082 2227 2127 -1278 2183 -1148 1875 2052 2225 -1003 -741
-1108 -1066 -1231 2373 1970 2116 2023 -234 1028 2289 1832 -1344
-649 -1129 1993 505 625 -932 -1135 -1160 -355 -1130 2307 -1242
-1087 2216 1848 -836 -150 2590 1926 2043 -896 -810 1922 -1031
-869 2524 -476 -228 76 -1179 -469 2511 2012 -1650 1855 2989
2400 2430 -155 -1171 -961 -1158 1823 -1036 -1244 2838 2554 2139
1925 -972 -403 364 -1263 2226 2255 2024 2200 2205 -1223 1982
1809 -921 2116 2142 -843 2195 -212 -1501 2415 -676 -607 2274
2172 2157 -1212 -1000 2007 2753 -1065 -1048 -1231 1937 2695 -1240
-953 -1246 -608 1932 1148 -1232 2039 2225 -1069 2009 2291 2003
2003 -1369 2226 -1174 2264 -890 2128 1888 3043 -734 -719 -1385
-1140 2348 2233 2103 2280 2138 2450 2377 -1008 2026 1958 1892
-1075 -1108 1946 -1078 2522 1899 1416 1882]
[[ -67 -52 50 -61 -66 54 107 16 83 32 -22 -59 -30 113
95 -30 -123 -93 1 55 -5 -24 28 58 -77 60 -67 22
59 21 -99 15 -18 -33 60 -62 111 22 10 -51 -74 -54
36 1 -27 16 103 108 59 -37 53 80 9 100 62 78
112 -94 -100 93 -96 23 -87 19 -86 88 -91 -102 97 20
32 -59 -88 -98 -11 -106 90 18 5 -41 -67 16 9 -54
-38 -70 -119 -102 -89 110 -16 -75 88 5 5 8 -36 -55
-108 -49 -12 31 45 -115 -70 -120 -71 39 43 -124 -44 -27
-21 -62 -127 77 105 -52 -30 46 -79 -46 -83 -64 -28 91
-80 116 99 52 99 -9 38 116 -87 -63 64 -93 71 82
-108 70 -48 111 31 -109 40 -62 -45 -92 -115 -73 -112 -118
-7 48 -11 27 85 86 -59 11 40 -31 25 -88 -83 -116
-99 -39 -37 36 -12 112 65 79 60 1 -51 73 -34 -55
28 53 98 -122 16 85 59 111 -60 40 -125 19 -78 -35
55 12 -26 -87 -96 85 16 -100 -45 94 -109 -9 18 5
61 64 -93 17 -5 -124 110 -41 77 -67 -74 69 105 -68
-28 32 -20 -18 -81 -57 38 -84 14 -53 48 -105 -52 54
-92 78 40 89 24 -49 -106 -90 33 34 -111 74 -9 -72
-93 -87 55 49 10 23 -93 31 4 -60 -41 57 18 34
93 -96 3 -79 -47 -21 114 -36 -13 97 66 72 -7 89
103 61 -36 -120 -57 111 7 83 38 10 37 17 -96 -12
-57 48 7 -75 8 31 16 34 -122 -12 62 -118 -71 -85
-47 -121 -108 34 78 -86 79 -56 -90 60 73 -10 -31 30
92 -99 -80 -119 -34 -55 63 81 69 19 78 70 -47 113
16 -97 -61 16 26 66 -103 -110 -31 -43 108 -78 81 -123
62 61 66 71 -113 34 29 57 -117 -22 93 21 -59 -118
102 -81 35 -61 -98 -58 74 44 16 38 70 -36 -66 -49
-34 18 -2 -48 -103 23 23 32 -35 -94 -68 64 24 114
58 83 -30 22 6 -72 -116 58 44 -32 -31 -89 69 62
-70 47 48 -106 66 73 117 16 108 -101 -114 97 -75 -15
-27 -95 39 58 23 44 -74 77 79 -12 -8 -30 -65 76
49 -98 68 -36 -31 -26 -120 -30 17 -100 -56 89 -43 -49
59 -99 83 36 -37 10 100 -18 -119 -64 111 42 -96 -6
30 87 20 -86 -48 5 92 13 96 -31 -36 112 -112 -112
71 -81 -109 -101 -36 72 -37 54 -89 69 -55 -67 -14 7
89 113 52 -24 -25 -51 -57 -13 -24 -73 91 -104 -49 -51
112 110 -60 25 -46 -77 -72 -9]]
[-960]
[[ 0 -110 -2 74 -75 127 0 0]]
[[ 80 -116 4 33 -75 127 0 0]]
[[-64 -8 50 9 0 0 0 0]]
[[ 0 -110 -2 74 -75 127 0 0 96 33 50 9 0 0
0 0 105 98 47 112 121 116 104 111 110 51 46 56
47 115 105 116 -80 -2 96 9 0 0 0 0 112 40
55 9 0 0 0 0 0 0 0 0 0 0 0 0
98 55 -4 74 -75 127 0 0 80 -116 4 33 -75 127
0 0 0 -110 124 40 -2 127 0 0 0 0 0 0
0 0 0 0 -30 27 -5 74 -75 127 0 0 -64 -8
50 9 0 0 0 0 -64 -8 50 9 0 0 0 0
0 0 0 0 0 0 0 0 -14 81 -4 74 -75 127
0 0 -64 -87 58 45 -75 127 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 -14 27
-5 74 -75 127 0 0 0 -34 106 42 -75 127 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 106 55 -4 74 -75 127 0 0 -80 35 19 41
-75 127 0 0 120 126 -2 74 -75 127 0 0 0 0
0 0 0 0 0 0 -22 27 -5 74 -75 127 0 0
-32 73 -109 8 0 0 0 0 2 0 0 0 0 0
0 0 116 102 46 69 110 113 117 101 117 101 84 80
85 69 109 98 101 100 100 105 110 103 73 110 116 101
103 101 114 66 97 116 99 104 0 103 101 115 47 107
-127 3 0 0 0 0 0 0 48 -86 50 9 0 0
0 0 -48 54 29 9 0 0 0 0 0 0 0 0
0 0 0 0 97 3 0 0 0 0 0 0 -48 48
50 9 0 0 0 0 -32 -100 1 -57 -75 127 0 0
0 0 0 0 0 0 0 0 65 3 0 0 0 0
0 0 -128 88 62 9 0 0 0 0 0 87 27 9
0 0 0 0 16 -120 -109 8 0 0 0 0 88 1
1 75 -75 127 0 0 80 50 -73 7 0 0 0 0
2 0 0 0 3 0 0 0 120 126 -2 74 -75 127
0 0 -32 49 -73 7 0 0 0 0 0 -110 -2 74
-75 127 0 0 0 50 -73 7 0 0 0 0 109 101
47 99 99 47 97 110 97 99 111 110 100 97 51 47
-80 -2 96 9 0 0 0 0 112 40 55 9 0 0
0 0 0 0 0 0 0 0 0 0 82 55 -4 74
-75 127 0 0 80 -116 4 33 -75 127 0 0 0 -110
124 40 -2 127 0 0 0 0 0 0 0 0 0 0
-30 27 -5 74 -75 127 0 0]]
[[ -64 -8 50 9 0 0 0 0 -64 -8 50 9 0 0
0 0 0 0 0 0 0 0 0 0 -22 81 -4 74
-75 127 0 0 -64 -87 58 45 -75 127 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
-14 27 -5 74 -75 127 0 0 0 -34 106 42 -75 127
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 90 55 -4 74 -75 127 0 0 16 36
19 41 -75 127 0 0 120 126 -2 74 -75 127 0 0
0 0 0 0 0 0 0 0 -22 27 -5 74 -75 127
0 0 40 74 -109 8 0 0 0 0 9 0 0 0
0 0 0 0 116 102 46 69 110 113 117 101 117 101
84 80 85 69 109 98 101 100 100 105 110 103 82 97
103 103 101 100 84 101 110 115 111 114 66 97 116 99
104 0 -31 1 0 0 0 0 0 0 -64 36 33 9
0 0 0 0 -64 17 48 9 0 0 0 0 0 0
0 0 0 0 0 0 -127 1 0 0 0 0 0 0
-128 25 50 9 0 0 0 0 -32 -100 1 -57 -75 127
0 0 0 0 0 0 0 0 0 0 97 1 0 0
0 0 0 0 96 6 31 9 0 0 0 0 -80 -49
86 9 0 0 0 0 16 -120 -109 8 0 0 0 0
80 1 1 75 -75 127 0 0 -16 51 -73 7 0 0
0 0 2 0 0 0 3 0 0 0 120 126 -2 74
-75 127 0 0 -128 51 -73 7 0 0 0 0 0 -110
-2 74 -75 127 0 0 -96 51 -73 7 0 0 0 0
98 47 112 121 116 104 111 110 51 46 56 47 115 105
116 101 -80 -2 96 9 0 0 0 0 112 40 55 9
0 0 0 0 0 0 0 0 0 0 0 0 66 55
-4 74 -75 127 0 0 80 -116 4 33 -75 127 0 0
0 -110 124 40 -2 127 0 0 0 0 0 0 0 0
0 0 -30 27 -5 74 -75 127 0 0 -64 -8 50 9
0 0 0 0 -64 -8 50 9 0 0 0 0 0 0
0 0 0 0 0 0 -30 81 -4 74 -75 127 0 0
-64 -87 58 45 -75 127 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 -14 27 -5 74
-75 127 0 0 0 -34 106 42 -75 127 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
74 55 -4 74 -75 127 0 0]]
[[0]]
[[80]]
```
</details>
#### Debugger
- I used the quantization debugger (a minimal sketch of the workflow follows below)
- https://www.tensorflow.org/lite/performance/quantization_debugger

| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61319/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61318 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61318/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61318/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61318/events | https://github.com/tensorflow/tensorflow/issues/61318 | 1,810,994,972 | I_kwDOArmXAs5r8Zcc | 61,318 | tf-nightly prints WARNING:tensorflow during import tensorflow | {
"login": "lutzroeder",
"id": 438516,
"node_id": "MDQ6VXNlcjQzODUxNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/438516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lutzroeder",
"html_url": "https://github.com/lutzroeder",
"followers_url": "https://api.github.com/users/lutzroeder/followers",
"following_url": "https://api.github.com/users/lutzroeder/following{/other_user}",
"gists_url": "https://api.github.com/users/lutzroeder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lutzroeder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lutzroeder/subscriptions",
"organizations_url": "https://api.github.com/users/lutzroeder/orgs",
"repos_url": "https://api.github.com/users/lutzroeder/repos",
"events_url": "https://api.github.com/users/lutzroeder/events{/privacy}",
"received_events_url": "https://api.github.com/users/lutzroeder/received_events",
"type": "User",
"site_admin": true
} | [
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 1097543484,
"node_id": "MDU6TGFiZWwxMDk3NTQzNDg0",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:runtime",
"name": "comp:runtime",
"color": "0052cc",
"default": false,
"description": "c++ runtime, performance issues (cpu)"
},
{
"id": 1478826728,
"node_id": "MDU6TGFiZWwxNDc4ODI2NzI4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:core",
"name": "comp:core",
"color": "024391",
"default": false,
"description": "issues related to core part of tensorflow"
}
] | closed | false | {
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lutzroeder ,\r\n\r\nI have replicated the reported behaviour and attached logs below.\r\n\r\n```\r\n(bazel) suryanarayanay@surya-ubuntu20:~$ python\r\nPython 3.9.16 (main, Mar 8 2023, 14:00:05) \r\n[GCC 11.2.0] :: Anaconda, Inc. on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import os\r\n>>> os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\r\n>>> import tensorflow\r\nWARNING:tensorflow:From /home/suryanarayanay/miniconda3/envs/bazel/lib/python3.9/site-packages/tensorflow/python/ops/distributions/distribution.py:259: ReparameterizationType.__init__ (from tensorflow.python.ops.distributions.distribution) is deprecated and will be removed after 2019-01-01.\r\nInstructions for updating:\r\nThe TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.\r\nWARNING:tensorflow:From /home/suryanarayanay/miniconda3/envs/bazel/lib/python3.9/site-packages/tensorflow/python/ops/distributions/bernoulli.py:165: RegisterKL.__init__ (from tensorflow.python.ops.distributions.kullback_leibler) is deprecated and will be removed after 2019-01-01.\r\nInstructions for updating:\r\nThe TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.\r\n```",
"@lutzroeder ,\r\n\r\nPlease feel free to raise a pull request if you are willing to contribute. Thanks!",
"These are deprecation warnings to make sure you update to APIs which won't get deleted on the next release. It is not an issue.",
"@mihaimaruseac the output shows when running `import tensorflow` and clutters logs even if none of the APIs mentioned are used. The default method to disable this unnecessary log output does not work. That is an issue.",
"Hi, \r\n\r\nYou can silence the command line output temporarily while importing the library and then enable it back again.\r\nBelow is the code which silences the warning from tensorflow library.\r\n\r\n```\r\nimport os\r\nimport sys\r\n\r\n\r\n# silence command-line output temporarily\r\nsys.stdout, sys.stderr = os.devnull, os.devnull\r\n\r\nimport tensorflow as tf\r\n\r\n# unsilence command-line output\r\nsys.stdout, sys.stderr = sys.__stdout__, sys.__stderr__\r\n```\r\n",
"Running this before `import tensorflow` worked when building from main, turning off log output is not the right answer.\r\n```\r\nimport logging\r\nlogging.getLogger('tensorflow').setLevel(logging.ERROR)\r\n```\r\nIs it possible to fix the deprecation warning to only show if these types are actually used? Why is the warning now showing for all users of TensorFlow.",
"@lutzroeder this does not work for me:\r\n\r\n\r\n\r\nI am using TensorFlow 2.14.0rc0.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61318\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61318\">No</a>\n",
"@salcc this is for @mihaimaruseac to investigate. Closing issue given no follow up from maintainers. "
] | 2023-07-19T01:55:26 | 2023-12-23T21:52:57 | 2023-12-23T21:46:11 | CONTRIBUTOR | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.14.0-dev20230706
### Custom code
Yes
### Python version
3.11
### Current behavior?
`import tensorflow` prints warnings which cannot be disabled with `os.environ['TF_CPP_MIN_LOG_LEVEL']`.
### Standalone code to reproduce the issue
```shell
python -m venv tf
source tf/bin/activate
pip install tf-nightly
python
>>> import os
>>> os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
>>> import tensorflow
WARNING:tensorflow:From tf/lib/python3.11/site-packages/tensorflow/python/ops/distributions/distribution.py:259: ReparameterizationType.__init__ (from tensorflow.python.ops.distributions.distribution) is deprecated and will be removed after 2019-01-01.
Instructions for updating:
The TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.
WARNING:tensorflow:From tf/lib/python3.11/site-packages/tensorflow/python/ops/distributions/bernoulli.py:165: RegisterKL.__init__ (from tensorflow.python.ops.distributions.kullback_leibler) is deprecated and will be removed after 2019-01-01.
Instructions for updating:
The TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61318/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61318/timeline | null | not_planned | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61317 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61317/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61317/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61317/events | https://github.com/tensorflow/tensorflow/issues/61317 | 1,810,224,531 | I_kwDOArmXAs5r5dWT | 61,317 | snapshot op wrongly changes data fingerprint | {
"login": "maciejskorski",
"id": 31315784,
"node_id": "MDQ6VXNlcjMxMzE1Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/31315784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maciejskorski",
"html_url": "https://github.com/maciejskorski",
"followers_url": "https://api.github.com/users/maciejskorski/followers",
"following_url": "https://api.github.com/users/maciejskorski/following{/other_user}",
"gists_url": "https://api.github.com/users/maciejskorski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maciejskorski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maciejskorski/subscriptions",
"organizations_url": "https://api.github.com/users/maciejskorski/orgs",
"repos_url": "https://api.github.com/users/maciejskorski/repos",
"events_url": "https://api.github.com/users/maciejskorski/events{/privacy}",
"received_events_url": "https://api.github.com/users/maciejskorski/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 1114343535,
"node_id": "MDU6TGFiZWwxMTE0MzQzNTM1",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:data",
"name": "comp:data",
"color": "0052cc",
"default": false,
"description": "tf.data related issues"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | open | false | {
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@maciejskorski I tried to replicate the issue on colab using TF v2.12, please find the attached [gist](https://colab.research.google.com/gist/sushreebarsa/a371a4c340eabea40555a380de1bf2b2/61317.ipynb) here. I have got the output as ; `1889`. Could you please confirm the same?\r\nThank you!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61317\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61317\">No</a>\n",
"@SuryanarayanaY This issue is replicating [here](https://colab.sandbox.google.com/gist/SuryanarayanaY/e5973644e0b87a3cda3680a24b5956ac/copy-of-61317.ipynb) in the colab gist. Thank you!",
"@sushreebarsa In your case the snaphost was created twice in different dirs, right?\r\nKindly note that I reported the issue under `tensorflow==2.11` while as of now Colab has `2.13` (let us not rely on Colab exclusively or track the version).",
"Hi @maciejskorski ,\r\n\r\nI have tested with TF2.13v and observed different fingerprints. Also the fingerprints count changes with `batch_size` and increases with no of runs. Each fingerprint have Two shard files and one metadata file. Attached [gist](https://colab.sandbox.google.com/gist/SuryanarayanaY/d69956adb47cb0355d00d483fde98a02/61317_2-13v.ipynb) for reference.\r\n\r\nWill escalate to concern SME and come back."
] | 2023-07-18T15:50:06 | 2023-10-12T12:52:59 | null | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
v2.12.0-rc1-12-g0db597d0d75
### Custom code
Yes
### OS platform and distribution
Linux 4e51bcd72cb8 5.15.109 (Colab)
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
When constructing a tabular dataset from a given file, I see the snapshot fingerprint changing with repeated attempts, even though the pipeline and the data don't change.
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
traffic_volume_csv_gz = tf.keras.utils.get_file(
'Metro_Interstate_Traffic_Volume.csv.gz',
"https://archive.ics.uci.edu/ml/machine-learning-databases/00492/Metro_Interstate_Traffic_Volume.csv.gz",
cache_dir='.', cache_subdir='traffic'
)
ds = tf.data.experimental.make_csv_dataset(
traffic_volume_csv_gz,
batch_size=256,
label_name='traffic_volume',
num_epochs=1,
compression_type="GZIP"
)
ds = ds.enumerate()
ds = ds.snapshot('ds.tfsnap')
ds = ds.map(lambda i,x: x).repeat(10)
for i,_ in enumerate(ds):
pass
print(i)
```
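As an illustration of the observed behaviour, a minimal sketch (not part of the original report; the helper name is hypothetical) that lists the fingerprint subdirectories `ds.snapshot()` created, so repeated runs can be compared:
```python
import os

def list_snapshot_fingerprints(snapshot_dir="ds.tfsnap"):
    """Return the fingerprint subdirectories created by ds.snapshot()."""
    if not os.path.isdir(snapshot_dir):
        return []
    return sorted(
        d for d in os.listdir(snapshot_dir)
        if os.path.isdir(os.path.join(snapshot_dir, d))
    )

# Expected: one fingerprint reused across identical runs.
# Observed (see the log output below): a new fingerprint directory per run.
print(list_snapshot_fingerprints())
```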
### Relevant log output
In a Colab notebook, several runs of this code generate multiple snapshot fingerprint directories:
```console
ls ds.tfsnap
11819476836996993959 2128571372330446365 8272381159243496395
14899783750259964653 3924588669394259065 9335410099383931136
``` | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61317/timeline | null | reopened | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61316 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61316/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61316/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61316/events | https://github.com/tensorflow/tensorflow/issues/61316 | 1,810,199,701 | I_kwDOArmXAs5r5XSV | 61,316 | import tensorflow error | {
"login": "vinayk114",
"id": 96284609,
"node_id": "U_kgDOBb0vwQ",
"avatar_url": "https://avatars.githubusercontent.com/u/96284609?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vinayk114",
"html_url": "https://github.com/vinayk114",
"followers_url": "https://api.github.com/users/vinayk114/followers",
"following_url": "https://api.github.com/users/vinayk114/following{/other_user}",
"gists_url": "https://api.github.com/users/vinayk114/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vinayk114/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vinayk114/subscriptions",
"organizations_url": "https://api.github.com/users/vinayk114/orgs",
"repos_url": "https://api.github.com/users/vinayk114/repos",
"events_url": "https://api.github.com/users/vinayk114/events{/privacy}",
"received_events_url": "https://api.github.com/users/vinayk114/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473173351,
"node_id": "MDU6TGFiZWw0NzMxNzMzNTE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:build/install",
"name": "type:build/install",
"color": "159b2e",
"default": false,
"description": "Build and install issues"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1188421838,
"node_id": "MDU6TGFiZWwxMTg4NDIxODM4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/subtype:windows",
"name": "subtype:windows",
"color": "b619ea",
"default": false,
"description": "Windows Build/Installation Issues"
},
{
"id": 5206407904,
"node_id": "LA_kwDOArmXAs8AAAABNlN64A",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.12",
"name": "TF 2.12",
"color": "c5def5",
"default": false,
"description": "For issues related to Tensorflow 2.12"
}
] | closed | false | {
"login": "Varsha-anjanappa",
"id": 137163810,
"node_id": "U_kgDOCCz0Ig",
"avatar_url": "https://avatars.githubusercontent.com/u/137163810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Varsha-anjanappa",
"html_url": "https://github.com/Varsha-anjanappa",
"followers_url": "https://api.github.com/users/Varsha-anjanappa/followers",
"following_url": "https://api.github.com/users/Varsha-anjanappa/following{/other_user}",
"gists_url": "https://api.github.com/users/Varsha-anjanappa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Varsha-anjanappa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Varsha-anjanappa/subscriptions",
"organizations_url": "https://api.github.com/users/Varsha-anjanappa/orgs",
"repos_url": "https://api.github.com/users/Varsha-anjanappa/repos",
"events_url": "https://api.github.com/users/Varsha-anjanappa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Varsha-anjanappa/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Varsha-anjanappa",
"id": 137163810,
"node_id": "U_kgDOCCz0Ig",
"avatar_url": "https://avatars.githubusercontent.com/u/137163810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Varsha-anjanappa",
"html_url": "https://github.com/Varsha-anjanappa",
"followers_url": "https://api.github.com/users/Varsha-anjanappa/followers",
"following_url": "https://api.github.com/users/Varsha-anjanappa/following{/other_user}",
"gists_url": "https://api.github.com/users/Varsha-anjanappa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Varsha-anjanappa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Varsha-anjanappa/subscriptions",
"organizations_url": "https://api.github.com/users/Varsha-anjanappa/orgs",
"repos_url": "https://api.github.com/users/Varsha-anjanappa/repos",
"events_url": "https://api.github.com/users/Varsha-anjanappa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Varsha-anjanappa/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @vinayk114 \r\n\r\nThere is numpy dependency, please check if you've installed correct numpy version .\r\nhttps://github.com/tensorflow/tensorflow/blob/0db597d0d758aba578783b5bf46c889700a45085/tensorflow/tools/pip_package/setup.py#L97\r\nYou can install numpy by running **pip install numpy==1.22**. The compatible version for tensorflow v2.12 is numpy >= 1.22, <1.24.\r\nPlease let us know if that helps?\r\n\r\nThank you!!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61316\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61316\">No</a>\n"
] | 2023-07-18T15:38:04 | 2023-08-03T01:51:02 | 2023-08-03T01:50:59 | NONE | null | null | null | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.12.0
### Custom code
Yes
### OS platform and distribution
windows 10
### Mobile device
_No response_
### Python version
3.10
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Not able to import TensorFlow; `import tensorflow` fails with the TypeError shown in the log output below.
### Standalone code to reproduce the issue
```shell
import tensorflow
```
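A quick environment check may help here, since (as noted in the comments) TF 2.12 declares a numpy dependency of >= 1.22 and < 1.24; this is a minimal sketch, not part of the original report:
```python
# Check whether the installed numpy falls inside the range TF 2.12 declares.
# An out-of-range numpy is a common cause of the bfloat16 TypeError shown in
# the log output below.
import numpy as np

major_minor = tuple(int(x) for x in np.__version__.split(".")[:2])
print("numpy", np.__version__)
print("within TF 2.12's declared range (>= 1.22, < 1.24):",
      (1, 22) <= major_minor < (1, 24))
```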
### Relevant log output
```shell
TypeError Traceback (most recent call last)
Cell In [69], line 1
----> 1 import tensorflow
File ~\AppData\Roaming\Python\Python39\site-packages\tensorflow\__init__.py:37
34 import sys as _sys
35 import typing as _typing
---> 37 from tensorflow.python.tools import module_util as _module_util
38 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader
40 # Make sure code inside the TensorFlow codebase can use tf2.enabled() at import.
File ~\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\__init__.py:42
37 from tensorflow.python.eager import context
39 # pylint: enable=wildcard-import
40
41 # Bring in subpackages.
---> 42 from tensorflow.python import data
43 from tensorflow.python import distribute
44 # from tensorflow.python import keras
File ~\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\data\__init__.py:21
15 """`tf.data.Dataset` API for input pipelines.
16
17 See [Importing Data](https://tensorflow.org/guide/data) for an overview.
18 """
20 # pylint: disable=unused-import
---> 21 from tensorflow.python.data import experimental
22 from tensorflow.python.data.ops.dataset_ops import AUTOTUNE
23 from tensorflow.python.data.ops.dataset_ops import Dataset
File ~\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\data\experimental\__init__.py:97
15 """Experimental API for building input pipelines.
16
17 This module contains experimental `Dataset` sources and transformations that can
(...)
93 @@UNKNOWN_CARDINALITY
94 """
96 # pylint: disable=unused-import
---> 97 from tensorflow.python.data.experimental import service
98 from tensorflow.python.data.experimental.ops.batching import dense_to_ragged_batch
99 from tensorflow.python.data.experimental.ops.batching import dense_to_sparse_batch
File ~\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\data\experimental\service\__init__.py:419
1 # Copyright 2020 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
(...)
13 # limitations under the License.
14 # ==============================================================================
15 """API for using the tf.data service.
16
17 This module contains:
(...)
416 job of ParameterServerStrategy).
417 """
--> 419 from tensorflow.python.data.experimental.ops.data_service_ops import distribute
420 from tensorflow.python.data.experimental.ops.data_service_ops import from_dataset_id
421 from tensorflow.python.data.experimental.ops.data_service_ops import register_dataset
File ~\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\data\experimental\ops\data_service_ops.py:22
20 from tensorflow.core.protobuf import data_service_pb2
21 from tensorflow.python import tf2
---> 22 from tensorflow.python.data.experimental.ops import compression_ops
23 from tensorflow.python.data.experimental.service import _pywrap_server_lib
24 from tensorflow.python.data.experimental.service import _pywrap_utils
File ~\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\data\experimental\ops\compression_ops.py:16
1 # Copyright 2020 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
(...)
13 # limitations under the License.
14 # ==============================================================================
15 """Ops for compressing and uncompressing dataset elements."""
---> 16 from tensorflow.python.data.util import structure
17 from tensorflow.python.ops import gen_experimental_dataset_ops as ged_ops
20 def compress(element):
File ~\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\data\util\structure.py:22
18 import itertools
20 import wrapt
---> 22 from tensorflow.python.data.util import nest
23 from tensorflow.python.framework import composite_tensor
24 from tensorflow.python.framework import ops
File ~\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\data\util\nest.py:34
1 # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
(...)
13 # limitations under the License.
14 # ==============================================================================
16 """## Functions for working with arbitrarily nested sequences of elements.
17
18 NOTE(mrry): This fork of the `tensorflow.python.util.nest` module
(...)
31 arrays.
32 """
---> 34 from tensorflow.python.framework import sparse_tensor as _sparse_tensor
35 from tensorflow.python.util import _pywrap_utils
36 from tensorflow.python.util import nest
File ~\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\framework\sparse_tensor.py:25
23 from tensorflow.python import tf2
24 from tensorflow.python.framework import composite_tensor
---> 25 from tensorflow.python.framework import constant_op
26 from tensorflow.python.framework import dtypes
27 from tensorflow.python.framework import ops
File ~\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\framework\constant_op.py:25
23 from tensorflow.core.framework import types_pb2
24 from tensorflow.python.eager import context
---> 25 from tensorflow.python.eager import execute
26 from tensorflow.python.framework import dtypes
27 from tensorflow.python.framework import op_callbacks
File ~\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\eager\execute.py:21
19 from tensorflow.python import pywrap_tfe
20 from tensorflow.python.eager import core
---> 21 from tensorflow.python.framework import dtypes
22 from tensorflow.python.framework import ops
23 from tensorflow.python.framework import tensor_shape
File ~\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\framework\dtypes.py:37
34 from tensorflow.core.function import trace_type
35 from tensorflow.tools.docs import doc_controls
---> 37 _np_bfloat16 = _pywrap_bfloat16.TF_bfloat16_type()
38 _np_float8_e4m3fn = _pywrap_float8.TF_float8_e4m3fn_type()
39 _np_float8_e5m2 = _pywrap_float8.TF_float8_e5m2_type()
TypeError: Unable to convert function return value to a Python type! The signature was
() -> handle
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61316/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61315 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61315/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61315/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61315/events | https://github.com/tensorflow/tensorflow/issues/61315 | 1,810,120,992 | I_kwDOArmXAs5r5EEg | 61,315 | ImportError: undefined symbol after install | {
"login": "helmunt1998",
"id": 93800286,
"node_id": "U_kgDOBZdHXg",
"avatar_url": "https://avatars.githubusercontent.com/u/93800286?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/helmunt1998",
"html_url": "https://github.com/helmunt1998",
"followers_url": "https://api.github.com/users/helmunt1998/followers",
"following_url": "https://api.github.com/users/helmunt1998/following{/other_user}",
"gists_url": "https://api.github.com/users/helmunt1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/helmunt1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/helmunt1998/subscriptions",
"organizations_url": "https://api.github.com/users/helmunt1998/orgs",
"repos_url": "https://api.github.com/users/helmunt1998/repos",
"events_url": "https://api.github.com/users/helmunt1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/helmunt1998/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473173351,
"node_id": "MDU6TGFiZWw0NzMxNzMzNTE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:build/install",
"name": "type:build/install",
"color": "159b2e",
"default": false,
"description": "Build and install issues"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1205615612,
"node_id": "MDU6TGFiZWwxMjA1NjE1NjEy",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/subtype:%20ubuntu/linux",
"name": "subtype: ubuntu/linux",
"color": "b619ea",
"default": false,
"description": "Ubuntu/Linux Build/Installation Issues"
}
] | closed | false | {
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"From the template it looks like you are installing **TensorFlow** (TF) prebuilt binaries:\n * For TF-GPU - See point 1\n * For TF-CPU - See point 2\n-----------------------------------------------------------------------------------------------\n**1. Installing **TensorFlow-GPU** (TF) prebuilt binaries**\n\nMake sure you are using compatible TF and CUDA versions. Please refer following TF version and CUDA version compatibility table.\n| TF | CUDA |\n| :-------------: | :-------------: |\n| 2.5.0 | 11.2 |\n| 2.4.0 | 11.0 |\n| 2.1.0 - 2.3.0 | 10.1 |\n| 1.13.1 - 2.0 | 10.0 |\n| 1.5.0 - 1.12.0 | 9.0 |\n\n * If you have above configuration and using _**Windows**_ platform -\n * Try adding the CUDA, CUPTI, and cuDNN installation directories to the %PATH% environment variable.\n * Refer [windows setup guide](https://www.tensorflow.org/install/gpu#windows_setup).\n * If you have above configuration and using _**Ubuntu/Linux**_ platform -\n * Try adding the CUDA, CUPTI, and cuDNN installation directories to the $LD_LIBRARY_PATH environment variable.\n * Refer [linux setup guide](https://www.tensorflow.org/install/gpu#linux_setup).\n * If error still persists then, apparently your CPU model does not support AVX instruction sets.\n * Refer [hardware requirements](https://www.tensorflow.org/install/pip#hardware-requirements).\n\n-----------------------------------------------------------------------------------------------\n**2. Installing **TensorFlow** (TF) CPU prebuilt binaries**\n\n*TensorFlow release binaries version 1.6 and higher are prebuilt with AVX instruction sets.*\n\nTherefore on any CPU that does not have these instruction sets, either CPU or GPU version of TF will fail to load.\nApparently, your CPU model does not support AVX instruction sets. You can still use TensorFlow with the alternatives given below:\n\n * Try Google Colab to use TensorFlow.\n * The easiest way to use TF will be to switch to [google colab](https://colab.sandbox.google.com/notebooks/welcome.ipynb#recent=true). You get pre-installed latest stable TF version. Also you can use ```pip install``` to install any other preferred TF version.\n * It has an added advantage since you can you easily switch to different hardware accelerators (cpu, gpu, tpu) as per the task.\n * All you need is a good internet connection and you are all set.\n * Try to build TF from sources by changing CPU optimization flags.\n\n*Please let us know if this helps.*\n",
"@helmunt1998 You are using an older version of TF which is not actively supported. Could you upgrade to the latest TF version and follow this migration [document](https://www.tensorflow.org/guide/migrate) to know more about this. The older version might show these issues as many apis are being deprecated. Please let us know if that help?\r\nThank you!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61315\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61315\">No</a>\n"
] | 2023-07-18T15:04:06 | 2023-08-03T01:51:09 | 2023-08-03T01:51:02 | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
1.7.0
### Custom code
No
### OS platform and distribution
Linux-4.14.0-xilinx-v2018.3-armv7l-with-pynqlinux-v2.6-WFH
### Mobile device
no
### Python version
3.6.5
### Bazel version
0.10.0- (@non-git)
### GCC/compiler version
GCC 7.3.0
### CUDA/cuDNN version
no
### GPU model and memory
_No response_
### Current behavior?
I'm facing an ImportError (undefined symbol) when trying to import TensorFlow after successfully compiling it from source and installing TensorFlow 1.7.0 on a 32-bit architecture.
### Standalone code to reproduce the issue
```shell
> Build configurations for Tensorflow:
build --action_env PYTHON_BIN_PATH="/usr/bin/python3"
build --action_env PYTHON_LIB_PATH="/usr/local/lib/python3.6/dist-packages"
build --force_python=py3
build --host_force_python=py3
build --python_path="/usr/bin/python3"
build:gcp --define with_gcp_support=true
build:hdfs --define with_hdfs_support=true
build:s3 --define with_s3_support=true
build:kafka --define with_kafka_support=true
build:xla --define with_xla_support=true
build:gdr --define with_gdr_support=true
build:verbs --define with_verbs_support=true
build --action_env TF_NEED_OPENCL_SYCL="0"
build --action_env TF_NEED_CUDA="0"
build --define grpc_no_ares=true
build:opt --copt=-march=native
build:opt --host_copt=-march=native
build:opt --define with_default_optimizations=true
build --copt=-DGEMMLOWP_ALLOW_SLOW_SCALAR_FALLBACK
build --host_copt=-DGEMMLOWP_ALLOW_SLOW_SCALAR_FALLBACK
> After bazel build:
sudo python3 -m pip install /tmp/tensorflow_pkg/tensorflow-1.7.0-cp36-cp36m-linux_armv7l.whl
> Importing in python3
import tensorflow as tf
```
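To see what the unresolved symbol in the log output below refers to, it can be demangled; a minimal sketch, assuming binutils' `c++filt` is available on the PATH (not part of the original report):
```python
# Demangle the undefined symbol from the ImportError. It resolves to the CPU
# concat kernel instantiated for bfloat16, which appears to be missing from the
# built _pywrap_tensorflow_internal.so.
import subprocess

SYMBOL = (
    "_ZN10tensorflow9ConcatCPUINS_8bfloat16EEEvPNS_10DeviceBaseERKSt6vectorISt10"
    "unique_ptrINS_6TTypesIT_Li2EiE11ConstMatrixESt14default_deleteIS9_EESaISC_EE"
    "PNS8_6MatrixE"
)

result = subprocess.run(["c++filt", SYMBOL], stdout=subprocess.PIPE)
print(result.stdout.decode().strip())
# -> void tensorflow::ConcatCPU<tensorflow::bfloat16>(tensorflow::DeviceBase*, ...)
```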
### Relevant log output
```shell
xilinx@pynq:/tmp/tensorflow_pkg$ python3 -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/usr/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/usr/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: /usr/local/lib/python3.6/dist-packages/tensorflow/python/_pywrap_tensorflow_internal.so: undefined symbol: _ZN10tensorflow9ConcatCPUINS_8bfloat16EEEvPNS_10DeviceBaseERKSt6vectorISt10unique_ptrINS_6TTypesIT_Li2EiE11ConstMatrixESt14default_deleteIS9_EESaISC_EEPNS8_6MatrixE
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python3.6/dist-packages/tensorflow/__init__.py", line 24, in <module>
from tensorflow.python import * # pylint: disable=redefined-builtin
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/usr/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/usr/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: /usr/local/lib/python3.6/dist-packages/tensorflow/python/_pywrap_tensorflow_internal.so: undefined symbol: _ZN10tensorflow9ConcatCPUINS_8bfloat16EEEvPNS_10DeviceBaseERKSt6vectorISt10unique_ptrINS_6TTypesIT_Li2EiE11ConstMatrixESt14default_deleteIS9_EESaISC_EEPNS8_6MatrixE
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/install_sources#common_installation_problems
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61315/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61314 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61314/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61314/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61314/events | https://github.com/tensorflow/tensorflow/issues/61314 | 1,810,106,203 | I_kwDOArmXAs5r5Adb | 61,314 | TensorFlow 2.13 distributed training fail | {
"login": "nikita-savelyevv",
"id": 23343961,
"node_id": "MDQ6VXNlcjIzMzQzOTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/23343961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikita-savelyevv",
"html_url": "https://github.com/nikita-savelyevv",
"followers_url": "https://api.github.com/users/nikita-savelyevv/followers",
"following_url": "https://api.github.com/users/nikita-savelyevv/following{/other_user}",
"gists_url": "https://api.github.com/users/nikita-savelyevv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikita-savelyevv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikita-savelyevv/subscriptions",
"organizations_url": "https://api.github.com/users/nikita-savelyevv/orgs",
"repos_url": "https://api.github.com/users/nikita-savelyevv/repos",
"events_url": "https://api.github.com/users/nikita-savelyevv/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikita-savelyevv/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 996845227,
"node_id": "MDU6TGFiZWw5OTY4NDUyMjc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:dist-strat",
"name": "comp:dist-strat",
"color": "0052cc",
"default": false,
"description": "Distribution Strategy related issues"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | open | false | {
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I have the same issue\r\nsource binary\r\ntensorflow 2.13.0\r\nPython 3.9.1\r\nCUDA 12.1.1-1\r\nCUDnn 8.9.1.23\r\nGPU 3x NVIDIA V100",
"Hi @nikita-savelyevv ,\r\n\r\nFrom your attached code snippet i have changed this line:\r\n`train_dataset = mnist_test.cache().shuffle(10000).batch(batch_size)\r\n`\r\n to :\r\n`train_dataset = mnist_train.cache().shuffle(10000).batch(batch_size)\r\n`\r\nand then executed the code on a GCP VM with 4 GPUs. The logs are attached below.\r\n\r\n```\r\n(bazel) suryanarayanay@surya-ubuntu20:~$ python 61314_r3.py\r\n2023-07-19 08:52:48.833519: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:8893] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\n2023-07-19 08:52:48.833729: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n2023-07-19 08:52:48.837974: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\n2023-07-19 08:52:49.192706: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\r\nTo enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n2023-07-19 08:52:50.862937: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\r\nWARNING:tensorflow:From /home/suryanarayanay/miniconda3/envs/bazel/lib/python3.9/site-packages/tensorflow/python/ops/distributions/distribution.py:259: ReparameterizationType.__init__ (from tensorflow.python.ops.distributions.distribution) is deprecated and will be removed after 2019-01-01.\r\nInstructions for updating:\r\nThe TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.\r\nWARNING:tensorflow:From /home/suryanarayanay/miniconda3/envs/bazel/lib/python3.9/site-packages/tensorflow/python/ops/distributions/bernoulli.py:165: RegisterKL.__init__ (from tensorflow.python.ops.distributions.kullback_leibler) is deprecated and will be removed after 2019-01-01.\r\nInstructions for updating:\r\nThe TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.\r\n2023-07-19 08:53:01.959096: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:01.961067: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:01.962833: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. 
See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:01.964631: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:02.295809: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:02.297743: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:02.299515: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:02.301298: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:02.303146: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:02.304774: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:02.306338: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:02.307986: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:03.493607: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. 
See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:03.495717: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:03.497548: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:03.499358: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:03.501174: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:03.502817: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:03.504433: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:03.506064: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:03.507816: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:03.509473: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:03.511157: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. 
See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:03.512805: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:06.737342: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:06.739414: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:06.741220: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:06.743084: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:06.744986: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:06.746639: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:06.748217: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:06.749874: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:06.751576: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. 
See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:06.753158: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1831] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 13623 MB memory: -> device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5\r\n2023-07-19 08:53:06.753544: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:06.755104: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1831] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 13623 MB memory: -> device: 1, name: Tesla T4, pci bus id: 0000:00:05.0, compute capability: 7.5\r\n2023-07-19 08:53:06.755482: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:06.757076: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1831] Created device /job:localhost/replica:0/task:0/device:GPU:2 with 13623 MB memory: -> device: 2, name: Tesla T4, pci bus id: 0000:00:06.0, compute capability: 7.5\r\n2023-07-19 08:53:06.757437: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 08:53:06.759072: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1831] Created device /job:localhost/replica:0/task:0/device:GPU:3 with 13623 MB memory: -> device: 3, name: Tesla T4, pci bus id: 0000:00:07.0, compute capability: 7.5\r\nDevices: 1\r\n2023-07-19 08:53:07.299239: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:552] The `assert_cardinality` transformation is currently not handled by the auto-shard rewrite and will be removed.\r\n2023-07-19 08:53:10.842723: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3a569dd0 initialized for platform CUDA (this does not guarantee that XLA will be used). 
Devices:\r\n2023-07-19 08:53:10.842919: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla T4, Compute Capability 7.5\r\n2023-07-19 08:53:10.842983: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (1): Tesla T4, Compute Capability 7.5\r\n2023-07-19 08:53:10.843030: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (2): Tesla T4, Compute Capability 7.5\r\n2023-07-19 08:53:10.843085: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (3): Tesla T4, Compute Capability 7.5\r\n2023-07-19 08:53:10.944601: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:269] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.\r\n2023-07-19 08:53:11.479846: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:440] Loaded cuDNN version 8600\r\n2023-07-19 08:53:11.729654: I tensorflow/tsl/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory\r\n2023-07-19 08:53:11.981977: I ./tensorflow/compiler/jit/device_compiler.h:186] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.\r\n938/938 [==============================] - 10s 6ms/step - loss: 10.1223 - accuracy: 0.8286 \r\nDevices: 2\r\n2023-07-19 08:53:19.515161: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:552] The `assert_cardinality` transformation is currently not handled by the auto-shard rewrite and will be removed.\r\n2023-07-19 08:53:47.328426: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 2 of 10000\r\n2023-07-19 08:53:52.679706: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 3 of 10000\r\n2023-07-19 08:54:08.411572: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 5 of 10000\r\n2023-07-19 08:54:20.882652: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 6 of 10000\r\n2023-07-19 08:54:34.713750: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 7 of 10000\r\n2023-07-19 08:54:54.214257: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 8 of 10000\r\n2023-07-19 08:55:09.061208: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 9 of 10000\r\n2023-07-19 08:55:14.850822: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 10 of 10000\r\n2023-07-19 08:55:20.150592: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 11 of 10000\r\n2023-07-19 08:55:25.772313: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 12 of 10000\r\n2023-07-19 08:55:31.710363: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 13 of 10000\r\n2023-07-19 08:55:38.318281: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 14 of 10000\r\n2023-07-19 08:55:44.011665: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 15 of 10000\r\n2023-07-19 08:55:54.076764: I 
tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 17 of 10000\r\n2023-07-19 08:56:07.181624: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 19 of 10000\r\n2023-07-19 08:56:14.434589: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 20 of 10000\r\n2023-07-19 08:56:29.020860: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 22 of 10000\r\n2023-07-19 08:56:35.686768: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 23 of 10000\r\n2023-07-19 08:56:47.350548: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 25 of 10000\r\n2023-07-19 08:56:53.206492: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 26 of 10000\r\n2023-07-19 08:57:03.189459: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 28 of 10000\r\n2023-07-19 08:57:13.485880: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 30 of 10000\r\n2023-07-19 08:57:24.014117: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 32 of 10000\r\n2023-07-19 08:57:38.654104: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 34 of 10000\r\n2023-07-19 08:57:45.836962: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 35 of 10000\r\n2023-07-19 08:57:53.215866: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 36 of 10000\r\n2023-07-19 08:58:03.464771: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 38 of 10000\r\n2023-07-19 08:58:18.851690: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 39 of 10000\r\n2023-07-19 08:58:37.214024: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 40 of 10000\r\n2023-07-19 08:58:55.288479: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 41 of 10000\r\n2023-07-19 08:59:06.636724: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 42 of 10000\r\n2023-07-19 08:59:21.521155: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 43 of 10000\r\n2023-07-19 08:59:34.989589: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 44 of 10000\r\n2023-07-19 08:59:48.707579: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 45 of 10000\r\n2023-07-19 09:00:05.466870: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 46 of 10000\r\n2023-07-19 09:00:22.107982: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 47 of 10000\r\n2023-07-19 09:00:37.668419: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] 
Filling up shuffle buffer (this may take a while): 48 of 10000\r\n2023-07-19 09:00:51.022583: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 49 of 10000\r\n2023-07-19 09:01:04.442588: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 50 of 10000\r\n2023-07-19 09:01:41.646110: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 51 of 10000\r\n2023-07-19 09:02:27.555847: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 52 of 10000\r\n2023-07-19 09:03:03.864518: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 53 of 10000\r\n\r\n\r\n```\r\n\r\nWhen `devices=1` the code executes fine for me. When `devices>=2` for filling shuffle buffer its taking very much time. This should not be the case.There seems some problem wrt performance but not sure whether your reported behaviour able to replicable or not since I have stopped as the code taking too much time. The code used is attached as [gist](https://colab.sandbox.google.com/gist/SuryanarayanaY/e631267c482a7cc91df962f49892351f/61314_r3.ipynb) here.",
"I have also tested the same code snippet attached by @nikita-savelyevv and found program hangs when devices=2 started executing. Logas attached below.\r\n\r\n```\r\n(bazel) suryanarayanay@surya-ubuntu20:~$ python 61314_r2.py\r\n2023-07-19 09:15:25.741329: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:8893] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\n2023-07-19 09:15:25.741552: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n2023-07-19 09:15:25.746410: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\n2023-07-19 09:15:26.157582: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\r\nTo enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n2023-07-19 09:15:27.903770: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\r\nWARNING:tensorflow:From /home/suryanarayanay/miniconda3/envs/bazel/lib/python3.9/site-packages/tensorflow/python/ops/distributions/distribution.py:259: ReparameterizationType.__init__ (from tensorflow.python.ops.distributions.distribution) is deprecated and will be removed after 2019-01-01.\r\nInstructions for updating:\r\nThe TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.\r\nWARNING:tensorflow:From /home/suryanarayanay/miniconda3/envs/bazel/lib/python3.9/site-packages/tensorflow/python/ops/distributions/bernoulli.py:165: RegisterKL.__init__ (from tensorflow.python.ops.distributions.kullback_leibler) is deprecated and will be removed after 2019-01-01.\r\nInstructions for updating:\r\nThe TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.\r\n2023-07-19 09:15:38.960161: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:38.962045: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:38.963767: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. 
See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:38.965520: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:39.264719: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:39.266613: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:39.268353: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:39.270109: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:39.271916: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:39.273529: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:39.275073: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:39.276628: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:40.428372: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. 
See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:40.430244: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:40.432022: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:40.433705: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:40.435508: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:40.437049: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:40.438592: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:40.440161: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:40.441733: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:40.443276: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:40.444815: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. 
See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:40.446383: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:43.793004: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:43.794994: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:43.796831: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:43.798663: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:43.800458: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:43.802054: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:43.803632: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:43.805221: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:43.806810: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. 
See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:43.808346: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1831] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 13623 MB memory: -> device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5\r\n2023-07-19 09:15:43.808735: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:43.810249: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1831] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 13623 MB memory: -> device: 1, name: Tesla T4, pci bus id: 0000:00:05.0, compute capability: 7.5\r\n2023-07-19 09:15:43.810625: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:43.812202: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1831] Created device /job:localhost/replica:0/task:0/device:GPU:2 with 13623 MB memory: -> device: 2, name: Tesla T4, pci bus id: 0000:00:06.0, compute capability: 7.5\r\n2023-07-19 09:15:43.812576: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-19 09:15:43.814108: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1831] Created device /job:localhost/replica:0/task:0/device:GPU:3 with 13623 MB memory: -> device: 3, name: Tesla T4, pci bus id: 0000:00:07.0, compute capability: 7.5\r\nDevices: 1\r\n2023-07-19 09:15:44.326961: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:552] The `assert_cardinality` transformation is currently not handled by the auto-shard rewrite and will be removed.\r\n2023-07-19 09:15:47.663646: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3a7b33e0 initialized for platform CUDA (this does not guarantee that XLA will be used). 
Devices:\r\n2023-07-19 09:15:47.663778: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla T4, Compute Capability 7.5\r\n2023-07-19 09:15:47.663798: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (1): Tesla T4, Compute Capability 7.5\r\n2023-07-19 09:15:47.663810: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (2): Tesla T4, Compute Capability 7.5\r\n2023-07-19 09:15:47.663821: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (3): Tesla T4, Compute Capability 7.5\r\n2023-07-19 09:15:47.778929: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:269] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.\r\n2023-07-19 09:15:48.326253: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:440] Loaded cuDNN version 8600\r\n2023-07-19 09:15:48.567179: I tensorflow/tsl/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory\r\n2023-07-19 09:15:48.771324: I ./tensorflow/compiler/jit/device_compiler.h:186] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.\r\n157/157 [==============================] - 5s 2ms/step - loss: 26.1179 - accuracy: 0.6900\r\nDevices: 2\r\n2023-07-19 09:15:50.748191: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:552] The `assert_cardinality` transformation is currently not handled by the auto-shard rewrite and will be removed.\r\n##Hangs here\r\n```\r\n\r\nTested code attached here as colab [gist](https://colab.sandbox.google.com/gist/SuryanarayanaY/94b25afc6c081078a6a7446ee5020db4/61314_r2.ipynb).\r\n\r\nThanks!",
"@SuryanarayanaY Thanks for reaching out! I used `mnist_test` intentionally to slightly speed up the reproduction.\r\n\r\nI agree with your results. For me, when order of devices is set to `1, 2, 3`, the case `devices=2` also hangs as you describe. For the order `1, 3, 2`, the case `devices=2` produces the error I've attached in the ticket.\r\n\r\nSince the machine you run the code on has 4 GPUs, I would suppose that setting the order to something like `1, 4, 3, 2` would also lead to the error I attached.\r\n\r\nAnyway, I would assume that these two problems (hanging and throwing error) are related and may have the same cause.",
"Adding to distributed training hanging with `tensorflow==2.13.1`\r\n\r\nSmall fashion mnist example to reproduce jit_compiled model fails to train and hangs:\r\n\r\n```\r\nimport tensorflow as tf\r\nfrom keras import Model\r\nfrom keras.layers import Dense, Dropout, Flatten, Input\r\nfrom keras.utils import set_random_seed\r\n\r\n\r\ndef get_model() -> Model:\r\n set_random_seed(42)\r\n inp = Input(shape=(28, 28))\r\n inp = tf.expand_dims(inp, axis=-1)\r\n flt = Flatten()(inp)\r\n hdn = Dense(32, activation=\"relu\")(flt)\r\n drp = Dropout(0.2)(hdn)\r\n out = Dense(10)(drp)\r\n model = Model(inputs=inp, outputs=out, name=\"mnist_model\")\r\n print(model.summary())\r\n\r\n return model\r\n\r\n\r\ndef main():\r\n strategy = tf.distribute.MirroredStrategy()\r\n print(\"Number of devices: {}\".format(strategy.num_replicas_in_sync))\r\n assert (\r\n strategy.num_replicas_in_sync > 1\r\n ), \"strategy.num_replicas_in_sync must be greater than 1 or else problem will not be shown\"\r\n with strategy.scope():\r\n model = get_model()\r\n model.compile(\r\n optimizer=\"adam\",\r\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\r\n metrics=[\"accuracy\"],\r\n jit_compile=True, # FIXME: jit compiled model will fail to hang during fit\r\n )\r\n\r\n fashion_mnist = tf.keras.datasets.fashion_mnist\r\n (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()\r\n\r\n train_images = train_images / 255.0\r\n test_images = test_images / 255.0\r\n model.fit(train_images, train_labels, epochs=5, batch_size=1024)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\n```\r\n2023-07-25 09:31:22.920814: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:995] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-25 09:31:22.922395: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 13576 MB memory: -> device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5\r\n2023-07-25 09:31:22.923073: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:995] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. 
See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\r\n2023-07-25 09:31:22.924679: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 13576 MB memory: -> device: 1, name: Tesla T4, pci bus id: 0000:00:05.0, compute capability: 7.5\r\nNumber of devices: 2\r\nModel: \"mnist_model\"\r\n_________________________________________________________________\r\n Layer (type) Output Shape Param # \r\n=================================================================\r\n input_2 (InputLayer) [(None, 28, 28, 1)] 0 \r\n \r\n flatten (Flatten) (None, 784) 0 \r\n \r\n dense (Dense) (None, 32) 25120 \r\n \r\n dropout (Dropout) (None, 32) 0 \r\n \r\n dense_1 (Dense) (None, 10) 330 \r\n \r\n=================================================================\r\nTotal params: 25450 (99.41 KB)\r\nTrainable params: 25450 (99.41 KB)\r\nNon-trainable params: 0 (0.00 Byte)\r\n_________________________________________________________________\r\nNone\r\nEpoch 1/5\r\n2023-07-25 09:31:25.593085: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x296a66b0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\r\n2023-07-25 09:31:25.593171: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla T4, Compute Capability 7.5\r\n2023-07-25 09:31:25.593209: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (1): Tesla T4, Compute Capability 7.5\r\n2023-07-25 09:31:25.641984: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:255] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.\r\n2023-07-25 09:31:25.662430: W tensorflow/compiler/tf2xla/kernels/random_ops.cc:57] Warning: Using tf.random.uniform with XLA compilation will ignore seeds; consider using tf.random.stateless_uniform instead if reproducible behavior is desired. mnist_model/dropout/dropout/random_uniform/RandomUniform\r\n2023-07-25 09:31:26.958144: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:432] Loaded cuDNN version 8600\r\n2023-07-25 09:31:26.984048: I tensorflow/tsl/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory\r\n2023-07-25 09:31:27.345626: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:432] Loaded cuDNN version 8600\r\n2023-07-25 09:31:28.252749: I ./tensorflow/compiler/jit/device_compiler.h:186] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.\r\n......hang here and no more log lines\r\n````",
"Same issue here! Using 4GPU for distributed training on Ubuntu 22.04 with Tensorflow 2.13 hangs at the “compiled cluster using the XLA” line. Issue solved by downgrading to 2.12 ",
"Same for me with RTX 8000 and A6000 setup in MirrorStrategy with NCCL and Hierarchical CrossDeviceOps. I get a huge block of `tensorflow/core/framework/local_rendezvous.cc:405 Local rendezvous recv item cancelled.` before first epoch, but it doesn't hang up for me and I can successfully train the model. I was a bit too excited with 2.13 fixing the \"placeholder tensor\" warning from 2.12.\r\n\r\nWould be nice to get some feedback from the team if reproducible.",
"`Device /job:localhost/replica:0/task:0/device:GPU:1 is joining a group with size 2, but that group has size 3 (group_key=1)` means that when you are running the function with 2 GPUs, the collective op from previous function call with 3 GPUs might still be pending.\r\n\r\nGenerally it's not a good idea to create multiple `tf.dist.Strategy` in sequence in a production job, as they will share the same collective key and is very likely to cause arbitrary collapse between multiple all-reduces. For this case, try to reset context at the beginning of each test case. Example: https://github.com/tensorflow/tensorflow/blob/2a7efd891d3b16ef82b462d76fd9e61d111bf901/tensorflow/python/distribute/mirrored_strategy_test.py#L355",
"I am facing similar issue described by @xinyu-dev. Ubuntu 22.04 and Tensorflow 2.13.0, but running from docker image using `gcr.io/deeplearning-platform-release/tf2-gpu.2-13.py310:m111` as a base, on Vertex AI, with 4 x T4 GPUs. I trained with mirror strategy, which defaults to NCCLAllReduce. The training hangs with 100% GPU and memory utilization. I turned on `NCCL_DEBUG=INFO`, and here is what I have in my logs:\r\n\r\n```\r\nINFO 2023-09-13T15:46:26.702940522Z [resource.labels.taskName: workerpool0-0] NCCL INFO Bootstrap : Using eth0:10.128.0.57<0>\r\nINFO 2023-09-13T15:46:26.702990425Z [resource.labels.taskName: workerpool0-0] NCCL INFO NET/Plugin: Failed to find ncclNetPlugin_v6 symbol.\r\nINFO 2023-09-13T15:46:26.703005755Z [resource.labels.taskName: workerpool0-0] NCCL INFO NET/Plugin: Loaded net plugin FastSocket (v4)\r\nINFO 2023-09-13T15:46:26.703019842Z [resource.labels.taskName: workerpool0-0] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin_v6 symbol.\r\nINFO 2023-09-13T15:46:26.703028102Z [resource.labels.taskName: workerpool0-0] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin symbol (v4 or v5).\r\nINFO 2023-09-13T15:46:26.703034293Z [resource.labels.taskName: workerpool0-0] NCCL INFO cudaDriverVersion 11040\r\nINFO 2023-09-13T15:46:26.703040462Z [resource.labels.taskName: workerpool0-0] NCCL version 2.13.4+cudaCUDA_MAJOR.CUDA_MINOR\r\nINFO 2023-09-13T15:46:26.893955808Z [resource.labels.taskName: workerpool0-0] NCCL INFO NET/FastSocket : Tx CPU start: -2\r\nINFO 2023-09-13T15:46:26.894000335Z [resource.labels.taskName: workerpool0-0] NCCL INFO NET/FastSocket : Rx CPU start: -2\r\nINFO 2023-09-13T15:46:26.894009348Z [resource.labels.taskName: workerpool0-0] NCCL INFO NET/FastSocket : Flow placement enabled.\r\nINFO 2023-09-13T15:46:26.894015649Z [resource.labels.taskName: workerpool0-0] NCCL INFO NET/FastSocket : queue skip: 0\r\nINFO 2023-09-13T15:46:26.894021171Z [resource.labels.taskName: workerpool0-0] NCCL INFO NET/FastSocket : Using [0]eth0:10.128.0.57<0> [1]veth46fef7a:fe80::ec4e:4ff:fe68:e6aa%veth46fef7a<0>\r\nINFO 2023-09-13T15:46:26.894045883Z [resource.labels.taskName: workerpool0-0] NCCL INFO NET/FastSocket plugin initialized\r\nINFO 2023-09-13T15:46:26.894052051Z [resource.labels.taskName: workerpool0-0] NCCL INFO Using network FastSocket\r\nINFO 2023-09-13T15:46:26.894058304Z [resource.labels.taskName: workerpool0-0] NCCL INFO Using network FastSocket\r\nINFO 2023-09-13T15:46:26.894064036Z [resource.labels.taskName: workerpool0-0] NCCL INFO Using network FastSocket\r\nINFO 2023-09-13T15:46:26.894069715Z [resource.labels.taskName: workerpool0-0] NCCL INFO Using network FastSocket\r\nINFO 2023-09-13T15:46:26.894075386Z [resource.labels.taskName: workerpool0-0] NCCL INFO PXN Disabled as plugin is v4\r\nINFO 2023-09-13T15:46:26.894079843Z [resource.labels.taskName: workerpool0-0] NCCL INFO Trees [0] 2/-1/-1->1->0 [1] 2/-1/-1->1->0\r\nINFO 2023-09-13T15:46:26.894084842Z [resource.labels.taskName: workerpool0-0] NCCL INFO P2P Chunksize set to 131072\r\nINFO 2023-09-13T15:46:26.894089473Z [resource.labels.taskName: workerpool0-0] NCCL INFO Channel 00/02 : 0 1 2 3\r\nINFO 2023-09-13T15:46:26.894094426Z [resource.labels.taskName: workerpool0-0] NCCL INFO Channel 01/02 : 0 1 2 3\r\nINFO 2023-09-13T15:46:26.894101796Z [resource.labels.taskName: workerpool0-0] NCCL INFO Trees [0] 3/-1/-1->2->1 [1] 3/-1/-1->2->1\r\nINFO 2023-09-13T15:46:26.894108081Z [resource.labels.taskName: workerpool0-0] NCCL INFO P2P Chunksize set to 131072\r\nINFO 
2023-09-13T15:46:26.894112965Z [resource.labels.taskName: workerpool0-0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1\r\nINFO 2023-09-13T15:46:26.894118387Z [resource.labels.taskName: workerpool0-0] NCCL INFO P2P Chunksize set to 131072\r\nINFO 2023-09-13T15:46:26.894124008Z [resource.labels.taskName: workerpool0-0] NCCL INFO Trees [0] -1/-1/-1->3->2 [1] -1/-1/-1->3->2\r\nINFO 2023-09-13T15:46:26.894129976Z [resource.labels.taskName: workerpool0-0] NCCL INFO P2P Chunksize set to 131072\r\n```",
 Same issue here!">
"> Same issue here! Using 4GPU for distributed training on Ubuntu 22.04 with Tensorflow 2.13 hangs at the “compiled cluster using the XLA” line. Issue solved by downgrading to 2.12\r\n\r\nYou may refer to my issue #62234.\r\n\r\nCurrently, using RING instead of NCCL is a temporary workaround (https://github.com/edwardyehuang/iSeg/blob/master/utils/distribution_utils.py).\r\n\r\nAnother workaround (2.13 only atm): use `conda install -c conda-forge tensorflow-gpu` instead of docker or pip.\r\n\r\nBesides, if anyone has a tf-nightly GPU wheel from Mar 17 or April 27, please share it with me so I can test whether the pull request https://github.com/tensorflow/tensorflow/pull/60001 or https://github.com/tensorflow/tensorflow/pull/59424 causes this issue.\r\n",
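 Same issue here!">
A minimal sketch of the "avoid NCCL" workaround mentioned above, for a single-node MirroredStrategy like the one in the reproducer; the linked distribution_utils.py may configure this differently, and the device list here is only illustrative.

```python
import tensorflow as tf

# Replace the default NcclAllReduce with a non-NCCL cross-device reduction.
# HierarchicalCopyAllReduce and ReductionToOneDevice are public alternatives;
# which one is faster depends on the GPU topology.
strategy = tf.distribute.MirroredStrategy(
    devices=["/gpu:0", "/gpu:1"],
    cross_device_ops=tf.distribute.HierarchicalCopyAllReduce(),
)
```

For `MultiWorkerMirroredStrategy`, the analogous knob would be `tf.distribute.experimental.CommunicationOptions(implementation=tf.distribute.experimental.CommunicationImplementation.RING)`.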
 > Same issue here!">
"> > Same issue here! Using 4GPU for distributed training on Ubuntu 22.04 with Tensorflow 2.13 hangs at the “compiled cluster using the XLA” line. Issue solved by downgrading to 2.12\r\n> \r\n> You may refer to my issue in #62234 .\r\n> \r\n> Currently, using RING instead of NCCL is a temporary workaround (https://github.com/edwardyehuang/iSeg/blob/master/utils/distribution_utils.py).\r\n> \r\n> Another workaround (2.13 only atm), use `conda install -c conda-forge tensorflow-gpu`, instead docker or pip.\r\n> \r\n> Besides, if anyone has a tf-nightly GPU wheel on Mar 17 and April 27, please share it with me so I can test it and see if the pull request #60001 or #59424 cause this issue.\r\n\r\nAnother thing worth noting: why can the third-party (conda-forge) conda build avoid this issue? (The TensorFlow in the docker image is installed directly from pip.)",
"same issue here. anyone finds solutions?",
"Upgrade the NVIDIA driver >= 545 and the issue should be addressed",
"I got a similar error on the NVIDIA GPU while using tensorflow-federated and nest_asyncio packages. The error appeared when I updated tensorflow-federated package version from 0.38.0 to 0.73.0. I tried updating nest_asyncio, but it didn't help. So I just muted that message. \r\n\r\n`import os`\r\n`os.environ[\"TF_CPP_MIN_LOG_LEVEL\"] = \"1\"`\r\n\r\n[Check](https://stackoverflow.com/questions/76912213/tf2-13-local-rendezvous-recv-item-cancelled) for details.",
"Similar error. Fixed it by removing the steps_per_epoch argument from model.fit() and model.evaluate()\r\n\r\nimport sys\r\nfrom matplotlib import pyplot\r\nfrom keras.utils import to_categorical\r\nfrom keras.models import Sequential\r\nfrom keras.layers import Conv2D\r\nfrom keras.layers import MaxPooling2D\r\nfrom keras.layers import Dense\r\nfrom keras.layers import Flatten\r\nfrom keras.optimizers import SGD\r\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\r\nimport tensorflow as tf\r\nimport numpy as np\r\n\r\nphysical_devices = tf.config.list_physical_devices('GPU')\r\ntry:\r\n tf.config.experimental.set_memory_growth(physical_devices[0], True)\r\nexcept:\r\n # Invalid device or cannot modify virtual devices once initialized.\r\n pass\r\n\r\n # define cnn model\r\ndef define_model():\r\n model = Sequential()\r\n model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same', input_shape=(200, 200, 3)))\r\n model.add(MaxPooling2D((2, 2)))\r\n model.add(Flatten())\r\n model.add(Dense(128, activation='relu', kernel_initializer='he_uniform'))\r\n model.add(Dense(1, activation='sigmoid'))\r\n # compile model\r\n opt = SGD(learning_rate=0.001, momentum=0.9)\r\n model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy'])\r\n return model\r\n\r\n# create data generator\r\ndatagen = ImageDataGenerator(rescale=1.0/255.0)\r\nmodel = define_model()\r\n\r\n# prepare iterators\r\ntrain_it = datagen.flow_from_directory('/workspace/workspace/cats_and_dogs_data/dogs-vs-cats/train/',\r\n class_mode='binary', batch_size=64, target_size=(200, 200))\r\ntest_it = datagen.flow_from_directory('/workspace/workspace/cats_and_dogs_data/dogs-vs-cats/test1/',\r\n class_mode='binary', batch_size=64, target_size=(200, 200))\r\n\r\n# fit model\r\nhistory = model.fit(train_it, validation_data=test_it, epochs=20, verbose=1)\r\n\r\n\r\n# evaluate model\r\n_, acc = model.evaluate(test_it, verbose=1)\r\nprint('> %.3f' % (acc * 100.0))\r\n\r\n\r\n"
] | 2023-07-18T14:56:01 | 2024-04-22T21:18:00 | null | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
2.13.0
### Custom code
No
### OS platform and distribution
Linux Ubuntu 20.04.3
### Mobile device
Linux Ubuntu 20.04.3
### Python version
3.8.10
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
CUDA 11.7, cuDNN 8.6
### GPU model and memory
3x NVIDIA GeForce RTX 3090
### Current behavior?
When trying to run multiple distributed trainings one after another, one of them fails with a `Collective ops is aborted by: ...` error.
The reproducer attached to this issue produces the following error:
```
Collective ops is aborted by: Device /job:localhost/replica:0/task:0/device:GPU:1 is joining a group with size2, but that group has size 3 (group_key=1)
The error could be from a previous operation. Restart your program to reset.
[[{{node CollectiveReduceV2}}]] [Op:__inference_train_function_5585]
```
When run with TF 2.12 there is no such error.
The original code where I encountered this problem results in the following error:
```
E Collective ops is aborted by: Shape mismatch in the collective instance 100. Op at device /job:localhost/replica:0/task:0/device:GPU:1 expected shape [517169] but another member in the group expected shape [516734]. This is likely due to different input shapes at different members of the collective op.
E The error could be from a previous operation. Restart your program to reset.
E [[{{node CollectiveReduceV2}}]] [Op:__inference_train_function_49105]
```
but I wasn't able to reproduce this with a small code snippet.
### Standalone code to reproduce the issue
```shell
import pytest
import tensorflow as tf
import tensorflow_datasets as tfds
@pytest.mark.parametrize("devices", [1, 3, 2])
def test_distributed_fit(devices):
datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
if devices == 1:
strategy = tf.distribute.OneDeviceStrategy("/gpu:0")
else:
strategy = tf.distribute.MirroredStrategy([f"/gpu:{i}" for i in range(devices)])
batch_size = 64 * strategy.num_replicas_in_sync
train_dataset = mnist_test.cache().shuffle(10000).batch(batch_size)
with strategy.scope():
model = tf.keras.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10)
])
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
model.fit(train_dataset, epochs=1)
if __name__ == '__main__':
test_distributed_fit(1)
test_distributed_fit(3)
test_distributed_fit(2)
```
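A possible workaround, sketched below under the assumption that each strategy configuration may run in its own process: a fresh process gets a fresh TensorFlow runtime, so the collective-op groups created by one `MirroredStrategy` cannot collide with those of the next run. The `run_in_subprocess` helper and the `spawn` start method are illustrative choices that would replace the reproducer's `__main__` block; they are not part of the original report.
```python
# Workaround sketch (assumption: each configuration may run in a separate process).
# Spawning a fresh interpreter per run gives each MirroredStrategy its own
# collective-op groups, so group_key/size mismatches across runs cannot occur.
import multiprocessing as mp

def run_in_subprocess(devices):  # illustrative helper, not part of the original reproducer
    ctx = mp.get_context("spawn")  # 'spawn' avoids forking an already-initialized GPU runtime
    p = ctx.Process(target=test_distributed_fit, args=(devices,))
    p.start()
    p.join()
    assert p.exitcode == 0

if __name__ == '__main__':
    for devices in (1, 3, 2):
        run_in_subprocess(devices)
```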
### Relevant log output
```shell
/home/nsavel/venvs/nncf_tf_213/bin/python /home/nsavel/workspace/nncf_tf_213/reproducer.py
2023-07-18 16:47:21.693862: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-07-18 16:47:21.722428: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:7630] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-07-18 16:47:21.722456: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-07-18 16:47:21.722481: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-07-18 16:47:21.728124: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-07-18 16:47:22.211027: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
WARNING:tensorflow:From /home/nsavel/venvs/nncf_tf_213/lib/python3.8/site-packages/tensorflow/python/ops/distributions/distribution.py:259: ReparameterizationType.__init__ (from tensorflow.python.ops.distributions.distribution) is deprecated and will be removed after 2019-01-01.
Instructions for updating:
The TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.
WARNING:tensorflow:From /home/nsavel/venvs/nncf_tf_213/lib/python3.8/site-packages/tensorflow/python/ops/distributions/bernoulli.py:165: RegisterKL.__init__ (from tensorflow.python.ops.distributions.kullback_leibler) is deprecated and will be removed after 2019-01-01.
Instructions for updating:
The TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.
2023-07-18 16:47:24.321508: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1833] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 22292 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:17:00.0, compute capability: 8.6
2023-07-18 16:47:24.322042: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1833] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 22292 MB memory: -> device: 1, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:65:00.0, compute capability: 8.6
2023-07-18 16:47:24.322425: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1833] Created device /job:localhost/replica:0/task:0/device:GPU:2 with 22292 MB memory: -> device: 2, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:b3:00.0, compute capability: 8.6
2023-07-18 16:47:24.602273: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:552] The `assert_cardinality` transformation is currently not handled by the auto-shard rewrite and will be removed.
2023-07-18 16:47:25.946425: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fcf358b4470 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2023-07-18 16:47:25.946450: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA GeForce RTX 3090, Compute Capability 8.6
2023-07-18 16:47:25.946455: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (1): NVIDIA GeForce RTX 3090, Compute Capability 8.6
2023-07-18 16:47:25.946458: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (2): NVIDIA GeForce RTX 3090, Compute Capability 8.6
2023-07-18 16:47:25.950178: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:269] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2023-07-18 16:47:26.074588: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:434] Loaded cuDNN version 8600
2023-07-18 16:47:26.171621: I ./tensorflow/compiler/jit/device_compiler.h:186] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.
157/157 [==============================] - 2s 5ms/step - loss: 25.9054 - accuracy: 0.6873
2023-07-18 16:47:27.474184: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:552] The `assert_cardinality` transformation is currently not handled by the auto-shard rewrite and will be removed.
2023-07-18 16:47:30.690312: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:434] Loaded cuDNN version 8600
2023-07-18 16:47:30.822607: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:434] Loaded cuDNN version 8600
53/53 [==============================] - 3s 7ms/step - loss: 43.9234 - accuracy: 0.5655
2023-07-18 16:47:31.372876: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:552] The `assert_cardinality` transformation is currently not handled by the auto-shard rewrite and will be removed.
2023-07-18 16:47:32.398894: E tensorflow/core/common_runtime/base_collective_executor.cc:249] BaseCollectiveExecutor::StartAbort INTERNAL: Device /job:localhost/replica:0/task:0/device:GPU:1 is joining a group with size2, but that group has size 3 (group_key=1)
2023-07-18 16:47:32.398950: I tensorflow/core/framework/local_rendezvous.cc:421] Local rendezvous recv item cancelled. Key hash: 7416489994643074752
2023-07-18 16:47:32.399024: I tensorflow/core/framework/local_rendezvous.cc:421] Local rendezvous recv item cancelled. Key hash: 1224112818691547746
2023-07-18 16:47:32.399044: I tensorflow/core/framework/local_rendezvous.cc:421] Local rendezvous recv item cancelled. Key hash: 10338356286700713842
2023-07-18 16:47:32.399063: I tensorflow/core/framework/local_rendezvous.cc:421] Local rendezvous recv item cancelled. Key hash: 6809993284794892577
2023-07-18 16:47:32.399081: I tensorflow/core/framework/local_rendezvous.cc:421] Local rendezvous recv item cancelled. Key hash: 12460047264292639245
2023-07-18 16:47:32.399097: I tensorflow/core/framework/local_rendezvous.cc:421] Local rendezvous recv item cancelled. Key hash: 8051515006773529005
Traceback (most recent call last):
File "/home/nsavel/workspace/nncf_tf_213/reproducer.py", line 35, in <module>
test_distributed_fit(2)
File "/home/nsavel/workspace/nncf_tf_213/reproducer.py", line 29, in test_distributed_fit
model.fit(train_dataset, epochs=1)
File "/home/nsavel/venvs/nncf_tf_213/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/nsavel/venvs/nncf_tf_213/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 53, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InternalError: Graph execution error:
Detected at node CollectiveReduceV2 defined at (most recent call last):
File "/home/nsavel/workspace/nncf_tf_213/reproducer.py", line 35, in <module>
test_distributed_fit(2)
File "/home/nsavel/workspace/nncf_tf_213/reproducer.py", line 35, in <module>
test_distributed_fit(2)
File "/home/nsavel/workspace/nncf_tf_213/reproducer.py", line 29, in test_distributed_fit
model.fit(train_dataset, epochs=1)
File "/home/nsavel/workspace/nncf_tf_213/reproducer.py", line 35, in <module>
test_distributed_fit(2)
File "/home/nsavel/workspace/nncf_tf_213/reproducer.py", line 29, in test_distributed_fit
model.fit(train_dataset, epochs=1)
File "/home/nsavel/venvs/nncf_tf_213/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/home/nsavel/workspace/nncf_tf_213/reproducer.py", line 35, in <module>
test_distributed_fit(2)
File "/home/nsavel/workspace/nncf_tf_213/reproducer.py", line 29, in test_distributed_fit
model.fit(train_dataset, epochs=1)
File "/home/nsavel/venvs/nncf_tf_213/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/home/nsavel/venvs/nncf_tf_213/lib/python3.8/site-packages/keras/src/engine/training.py", line 1782, in fit
tmp_logs = self.train_function(iterator)
File "/home/nsavel/workspace/nncf_tf_213/reproducer.py", line 35, in <module>
test_distributed_fit(2)
File "/home/nsavel/workspace/nncf_tf_213/reproducer.py", line 29, in test_distributed_fit
model.fit(train_dataset, epochs=1)
File "/home/nsavel/venvs/nncf_tf_213/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/home/nsavel/venvs/nncf_tf_213/lib/python3.8/site-packages/keras/src/engine/training.py", line 1782, in fit
tmp_logs = self.train_function(iterator)
File "/home/nsavel/venvs/nncf_tf_213/lib/python3.8/site-packages/keras/src/engine/training.py", line 1376, in train_function
return step_function(self, iterator)
File "/home/nsavel/workspace/nncf_tf_213/reproducer.py", line 35, in <module>
test_distributed_fit(2)
File "/home/nsavel/workspace/nncf_tf_213/reproducer.py", line 29, in test_distributed_fit
model.fit(train_dataset, epochs=1)
File "/home/nsavel/venvs/nncf_tf_213/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/home/nsavel/venvs/nncf_tf_213/lib/python3.8/site-packages/keras/src/engine/training.py", line 1782, in fit
tmp_logs = self.train_function(iterator)
File "/home/nsavel/venvs/nncf_tf_213/lib/python3.8/site-packages/keras/src/engine/training.py", line 1376, in train_function
return step_function(self, iterator)
File "/home/nsavel/venvs/nncf_tf_213/lib/python3.8/site-packages/keras/src/engine/training.py", line 1359, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/home/nsavel/workspace/nncf_tf_213/reproducer.py", line 35, in <module>
test_distributed_fit(2)
File "/home/nsavel/workspace/nncf_tf_213/reproducer.py", line 29, in test_distributed_fit
model.fit(train_dataset, epochs=1)
File "/home/nsavel/venvs/nncf_tf_213/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/home/nsavel/venvs/nncf_tf_213/lib/python3.8/site-packages/keras/src/engine/training.py", line 1782, in fit
tmp_logs = self.train_function(iterator)
File "/home/nsavel/venvs/nncf_tf_213/lib/python3.8/site-packages/keras/src/engine/training.py", line 1376, in train_function
return step_function(self, iterator)
File "/home/nsavel/venvs/nncf_tf_213/lib/python3.8/site-packages/keras/src/engine/training.py", line 1359, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/home/nsavel/venvs/nncf_tf_213/lib/python3.8/site-packages/keras/src/optimizers/utils.py", line 175, in _all_reduce_sum_fn
return distribution.extended.batch_reduce_to(
Collective ops is aborted by: Device /job:localhost/replica:0/task:0/device:GPU:1 is joining a group with size2, but that group has size 3 (group_key=1)
The error could be from a previous operation. Restart your program to reset.
[[{{node CollectiveReduceV2}}]] [Op:__inference_train_function_5585]
Process finished with exit code 1
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61314/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61314/timeline | null | null | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61313 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61313/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61313/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61313/events | https://github.com/tensorflow/tensorflow/issues/61313 | 1,810,071,643 | I_kwDOArmXAs5r44Bb | 61,313 | snapshotting failure on dataset made from generator | {
"login": "maciejskorski",
"id": 31315784,
"node_id": "MDQ6VXNlcjMxMzE1Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/31315784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maciejskorski",
"html_url": "https://github.com/maciejskorski",
"followers_url": "https://api.github.com/users/maciejskorski/followers",
"following_url": "https://api.github.com/users/maciejskorski/following{/other_user}",
"gists_url": "https://api.github.com/users/maciejskorski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maciejskorski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maciejskorski/subscriptions",
"organizations_url": "https://api.github.com/users/maciejskorski/orgs",
"repos_url": "https://api.github.com/users/maciejskorski/repos",
"events_url": "https://api.github.com/users/maciejskorski/events{/privacy}",
"received_events_url": "https://api.github.com/users/maciejskorski/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 1114343535,
"node_id": "MDU6TGFiZWwxMTE0MzQzNTM1",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:data",
"name": "comp:data",
"color": "0052cc",
"default": false,
"description": "tf.data related issues"
},
{
"id": 5206407904,
"node_id": "LA_kwDOArmXAs8AAAABNlN64A",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.12",
"name": "TF 2.12",
"color": "c5def5",
"default": false,
"description": "For issues related to Tensorflow 2.12"
}
] | open | false | {
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
},
{
"login": "wilsingosti",
"id": 93937952,
"node_id": "U_kgDOBZlhIA",
"avatar_url": "https://avatars.githubusercontent.com/u/93937952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wilsingosti",
"html_url": "https://github.com/wilsingosti",
"followers_url": "https://api.github.com/users/wilsingosti/followers",
"following_url": "https://api.github.com/users/wilsingosti/following{/other_user}",
"gists_url": "https://api.github.com/users/wilsingosti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wilsingosti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wilsingosti/subscriptions",
"organizations_url": "https://api.github.com/users/wilsingosti/orgs",
"repos_url": "https://api.github.com/users/wilsingosti/repos",
"events_url": "https://api.github.com/users/wilsingosti/events{/privacy}",
"received_events_url": "https://api.github.com/users/wilsingosti/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@sachinprasadhs I was able to replicate this issue on colab using TF v[2.12](https://colab.research.google.com/gist/sushreebarsa/6bbdf455ce7f0ee9a8c21451e72ad619/61313.ipynb) ,[2.13](https://colab.research.google.com/gist/sushreebarsa/5b2c607a0a6cd2be0d368bad7c384cc6/61313.ipynb) and [tf-nightly](https://colab.research.google.com/gist/sushreebarsa/dddddf65fbceffcc20c8a9c434dc9d37/61313.ipynb#scrollTo=boqcOlPBfwc7). Please find the attached gists. Thank you!",
"Hi. I am a developer looking to get into open source. Have been using Tensorflow for a long time and also thinking of contributing to the repository. I had some questions to ask regarding this issue. \r\n- According to assesment is this a good issue to start on. \r\n- If yes, is there some guide on how I can test my changes on code for the same code, locally. "
] | 2023-07-18T14:37:30 | 2023-08-01T07:44:38 | null | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
v2.12.0-rc1-12-g0db597d0d75
### Custom code
Yes
### OS platform and distribution
Linux 4e51bcd72cb8 5.15.109 (Colab)
### Mobile device
_No response_
### Python version
3.10
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
A simple snapshotting op with sharding fails when applied to a generator-based dataset.
### Standalone code to reproduce the issue
```python
import numpy as np
import tensorflow as tf
np.random.seed(1234)
IMG_SHAPE = (224,224,3)
def gen_img(shape=IMG_SHAPE):
while True:
img = np.random.randint(0,256,size=IMG_SHAPE)
lab = np.random.randint(0,10)
yield (img,lab)
ds = tf.data.Dataset.from_generator(
gen_img,
output_signature=(
tf.TensorSpec(shape=IMG_SHAPE, dtype=tf.int32),
tf.TensorSpec(shape=(), dtype=tf.int32)
)
)
ds = ds.take(int(1e3)).batch(32)
ds = ds.enumerate()
ds = ds.snapshot('./my_cached_dataset', shard_func = lambda i,x: i%10)
ds = ds.map(lambda i,x: x).repeat(2) # error disappears under 1 epoch !
for i,(img,lab) in enumerate(ds):
pass
```
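For comparison, a possible workaround sketch, assuming the goal is simply to materialize the generator-backed pipeline on disk: `Dataset.save()`/`Dataset.load()` (available since TF 2.9) can stand in for `snapshot()` here. The directory name below is illustrative.
```python
# Workaround sketch (assumption: plain on-disk materialization is acceptable instead
# of snapshot()). Reuses gen_img and IMG_SHAPE from the reproducer above.
batched = tf.data.Dataset.from_generator(
    gen_img,
    output_signature=(
        tf.TensorSpec(shape=IMG_SHAPE, dtype=tf.int32),
        tf.TensorSpec(shape=(), dtype=tf.int32),
    ),
).take(int(1e3)).batch(32)

batched.save('./my_saved_dataset')  # write every batch to disk once
cached = tf.data.Dataset.load('./my_saved_dataset').repeat(2)  # read it back for 2 epochs

for img, lab in cached:
    pass
```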
### Relevant log output
```shell
ResourceExhaustedError: {{function_node __wrapped__IteratorGetNext_output_types_2_device_/job:localhost/replica:0/task:0/device:CPU:0}} Output buffer(size: 262144 bytes) too small. Should be larger than 19267611 bytes. [Op:IteratorGetNext]
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61313/timeline | null | null | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61312 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61312/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61312/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61312/events | https://github.com/tensorflow/tensorflow/issues/61312 | 1,809,876,624 | I_kwDOArmXAs5r4IaQ | 61,312 | Linking an Android library with TFLite GPU using CMake causes undefined symbol errors | {
"login": "GoldFeniks",
"id": 14744013,
"node_id": "MDQ6VXNlcjE0NzQ0MDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/14744013?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GoldFeniks",
"html_url": "https://github.com/GoldFeniks",
"followers_url": "https://api.github.com/users/GoldFeniks/followers",
"following_url": "https://api.github.com/users/GoldFeniks/following{/other_user}",
"gists_url": "https://api.github.com/users/GoldFeniks/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GoldFeniks/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GoldFeniks/subscriptions",
"organizations_url": "https://api.github.com/users/GoldFeniks/orgs",
"repos_url": "https://api.github.com/users/GoldFeniks/repos",
"events_url": "https://api.github.com/users/GoldFeniks/events{/privacy}",
"received_events_url": "https://api.github.com/users/GoldFeniks/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 390482148,
"node_id": "MDU6TGFiZWwzOTA0ODIxNDg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/awaiting%20review",
"name": "awaiting review",
"color": "bc3869",
"default": false,
"description": "Pull request awaiting review"
},
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473173351,
"node_id": "MDU6TGFiZWw0NzMxNzMzNTE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:build/install",
"name": "type:build/install",
"color": "159b2e",
"default": false,
"description": "Build and install issues"
},
{
"id": 750616506,
"node_id": "MDU6TGFiZWw3NTA2MTY1MDY=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:lite",
"name": "comp:lite",
"color": "0052cc",
"default": false,
"description": "TF Lite related issues"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | open | false | {
"login": "alankelly",
"id": 5112267,
"node_id": "MDQ6VXNlcjUxMTIyNjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5112267?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alankelly",
"html_url": "https://github.com/alankelly",
"followers_url": "https://api.github.com/users/alankelly/followers",
"following_url": "https://api.github.com/users/alankelly/following{/other_user}",
"gists_url": "https://api.github.com/users/alankelly/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alankelly/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alankelly/subscriptions",
"organizations_url": "https://api.github.com/users/alankelly/orgs",
"repos_url": "https://api.github.com/users/alankelly/repos",
"events_url": "https://api.github.com/users/alankelly/events{/privacy}",
"received_events_url": "https://api.github.com/users/alankelly/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "alankelly",
"id": 5112267,
"node_id": "MDQ6VXNlcjUxMTIyNjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5112267?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alankelly",
"html_url": "https://github.com/alankelly",
"followers_url": "https://api.github.com/users/alankelly/followers",
"following_url": "https://api.github.com/users/alankelly/following{/other_user}",
"gists_url": "https://api.github.com/users/alankelly/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alankelly/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alankelly/subscriptions",
"organizations_url": "https://api.github.com/users/alankelly/orgs",
"repos_url": "https://api.github.com/users/alankelly/repos",
"events_url": "https://api.github.com/users/alankelly/events{/privacy}",
"received_events_url": "https://api.github.com/users/alankelly/received_events",
"type": "User",
"site_admin": false
},
{
"login": "pkgoogle",
"id": 132095473,
"node_id": "U_kgDOB9-d8Q",
"avatar_url": "https://avatars.githubusercontent.com/u/132095473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pkgoogle",
"html_url": "https://github.com/pkgoogle",
"followers_url": "https://api.github.com/users/pkgoogle/followers",
"following_url": "https://api.github.com/users/pkgoogle/following{/other_user}",
"gists_url": "https://api.github.com/users/pkgoogle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pkgoogle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pkgoogle/subscriptions",
"organizations_url": "https://api.github.com/users/pkgoogle/orgs",
"repos_url": "https://api.github.com/users/pkgoogle/repos",
"events_url": "https://api.github.com/users/pkgoogle/events{/privacy}",
"received_events_url": "https://api.github.com/users/pkgoogle/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hey @GoldFeniks,\r\n\r\nI've got the same problem when building TFLite 2.13.0 with CMake. I manage to fix it by editing `tensorflow/lite/CMakeLists.txt`. \r\n\r\nBefore `if(TFLITE_ENABLE_GPU)` I added the following lines:\r\n```\r\npopulate_tflite_source_vars(\"core/async/interop\" TFLITE_CORE_ASYNC_INTEROP_SRCS)\r\npopulate_tflite_source_vars(\"core/async/interop/c\" TFLITE_CORE_ASYNC_INTEROP_C_SRCS)\r\npopulate_tflite_source_vars(\"delegates/utils\" TFLITE_DELEGATES_UTILS_SRCS)\r\npopulate_tflite_source_vars(\"async\" TFLITE_ASYNC_SRCS)\r\n```\r\nThen, after `set(_ALL_TFLITE_SRCS` I added the following lines:\r\n```\r\n${TFLITE_CORE_ASYNC_INTEROP_SRCS}\r\n${TFLITE_CORE_ASYNC_INTEROP_C_SRCS}\r\n${TFLITE_DELEGATES_UTILS_SRCS}\r\n${TFLITE_ASYNC_SRCS}\r\n```",
"@williamdias Thank you! Interesting, looks like some sources got lost somewhere ",
"@GoldFeniks Could you please let us know if the issue has been resolved for you ?\r\nThank you!",
"@sushreebarsa It helped resolve the TFLite related unresolved symbols, but AHardwareBuffer symbols still cannot be found",
"@GoldFeniks, as for AHardwareBuffer symbols, try adding the following flag to `cmake`command:\r\n```\r\n-DANDROID_PLATFORM=\"26\"\r\n```",
"Adding `-DANDROID_PLATFORM=\"26\"` (i.e. [setting minSdkVersion to 26](https://developer.android.com/ndk/guides/cmake#android_platform)) makes the build incompatible with any Android versions below 8.0 (API level 26), doesn't it?\r\n\r\nIn the docs the minimal supported version is still 19 or 21 for most of the modules: https://www.tensorflow.org/lite/android/development\r\nIs it changed?\r\n",
"@williamdias I have specified that as follows (see the [build.sh](https://github.com/GoldFeniks/tflite_link_issue/blob/986e965c6288282953f698b8f31cea78c057793f/build.sh#L7))\r\n```\r\n-DANDROID_PLATFORM=android-26\r\n```\r\nchanging it to just `\"26\"` has no effect.",
"Hey @AntonMalyshev, yes. It will drop compatibility to versions below 8.0. I think `AHardwareBuffer` symbols were introduced in API 26. I tried to build for API 21, 22, 23, 24, 25 and failed in all of them. Just worked with API 26.\r\n\r\n@GoldFeniks, here's my `cmake` config command:\r\n```\r\ncmake \\\r\n -DCMAKE_BUILD_TYPE=\"release\" \\\r\n -DCMAKE_TOOLCHAIN_FILE=\"$ANDROID_NDK_HOME/build/cmake/android.toolchain.cmake\" \\\r\n -DANDROID_PLATFORM=\"26\" \\\r\n -DANDROID_ABI=\"arm64-v8a\" \\\r\n -DTFLITE_ENABLE_GPU=ON \\\r\n -DXNNPACK_ENABLE_ARM_BF16=OFF \\\r\n ../../tensorflow/lite\r\n ```\r\nI am using NDK `21.4.7075529` instead of `25`. Also, I had to disable `XNNPACK_ENABLE_ARM_BF16` as [advised here](https://github.com/google/XNNPACK/issues/4775).",
"@williamdias Thank you, tensorflow does in fact build with such settings, but trying to link to such tensorflow binary with anything that uses GPU delegate results in the linking errors.",
"@GoldFeniks, hum, I was able to use the binary and run models on GPU. What errors are you getting? The only downside is that I had to drop support to Android < 8.0 (API 26).",
"@williamdias Interesting. Are you building static or dynamic library?\r\nIn the sample I'm trying to link to a static tensorflow binary through cmake as follows\r\n```cmake\r\ncmake_minimum_required(VERSION 3.26)\r\n\r\nproject(tflite_link_issue C CXX)\r\n\r\nset(CMAKE_CXX_STANDARD 17)\r\n\r\nadd_subdirectory(tensorflow/tensorflow/lite)\r\n\r\nadd_library(gpu SHARED gpu.hpp gpu.cpp)\r\ntarget_link_libraries(gpu tensorflow-lite)\r\n```\r\n\r\nAnd I only call these functions in `gpu.cpp`\r\n```c++\r\nTfLiteGpuDelegateOptionsV2 options = TfLiteGpuDelegateOptionsV2Default();\r\nconst auto tf_delegate = tflite::Interpreter::TfLiteDelegatePtr(TfLiteGpuDelegateV2Create(&options), TfLiteGpuDelegateV2Delete);\r\n```\r\n\r\nWhich gives me a bunch of no symbol errors for `AHardwareBuffer_*` (see the issue header).",
"@GoldFeniks, I am building static tensorflow-lite and then another static lib on top of it.\r\n\r\nTry to check if the the symbols are present in `tensorflow-lite.a`. Use [nm command](https://man7.org/linux/man-pages/man1/nm.1.html).",
"Turns out `AHardwareBuffer_*` functions require linking to `libandroid`. So adding \r\n```cmake\r\nfind_library(android-lib android REQUIRED)\r\n```\r\nand changing `target_link_libraries` to\r\n```cmake\r\ntarget_link_libraries(gpu tensorflow-lite ${android-lib})\r\n```\r\nfixes the problem.",
"@williamdias Thank you for the pointers.\r\n\r\n@GoldFeniks Thanks for the PR. The issue will be closed once PR #61381 is merged.",
"Hi @alankelly, it seems like the PR needs a review so I'm assigning this to you for now. Thanks!"
] | 2023-07-18T12:56:34 | 2024-01-31T19:30:15 | null | NONE | null | null | null | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
2.13
### Custom code
Yes
### OS platform and distribution
Linux 6.3.1, EndeavourOS
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
clang version 14.0.7
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Linking an Android library with libtensorflow-lite.a using CMake with the GPU delegate enabled causes undefined symbol errors.
### Standalone code to reproduce the issue
Please find a minimal test case [here](https://github.com/GoldFeniks/tflite_link_issue).
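In outline, the test case builds a small shared library that creates a GPU delegate and links it against the static `tensorflow-lite` target. A minimal sketch, reconstructed from the CMake configuration quoted in the comments above, including the `libandroid` link that those comments report as resolving the `AHardwareBuffer_*` errors (not independently verified here):
```cmake
# Sketch reconstructed from the comments above; target and file names follow the linked test case.
cmake_minimum_required(VERSION 3.26)
project(tflite_link_issue C CXX)

set(CMAKE_CXX_STANDARD 17)

add_subdirectory(tensorflow/tensorflow/lite)

# libandroid provides the AHardwareBuffer_* symbols required by the GPU delegate.
find_library(android-lib android REQUIRED)

add_library(gpu SHARED gpu.hpp gpu.cpp)
target_link_libraries(gpu tensorflow-lite ${android-lib})
```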
### Relevant log output
```shell
ld: error: undefined symbol: tflite::delegates::BackendAsyncKernelInterface::BackendAsyncKernelInterface()
>>> referenced by delegate.cc:705 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:705)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::CreateAsyncRegistration()::$_3::__invoke(TfLiteContext*, char const*, unsigned long)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> did you mean: tflite::delegates::BackendAsyncKernelInterface::~BackendAsyncKernelInterface()
>>> defined in: tensorflow/tensorflow/lite/libtensorflow-lite.a(delegate.cc.o)
ld: error: undefined symbol: kTfLiteSyncTypeNoSyncObj
>>> referenced by string.h:61 (/opt/android-ndk/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/bits/fortify/string.h:61)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::CreateAsyncRegistration()::$_3::__invoke(TfLiteContext*, char const*, unsigned long)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by string.h:61 (/opt/android-ndk/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/bits/fortify/string.h:61)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::CreateAsyncRegistration()::$_3::__invoke(TfLiteContext*, char const*, unsigned long)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
ld: error: undefined symbol: TfLiteAttributeMapIsBufferAttributeMap
>>> referenced by delegate.cc:1058 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1058)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::RegisterBuffer(TfLiteOpaqueContext*, TfLiteIoType, TfLiteBackendBuffer const*, TfLiteAttributeMap const*, int)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:908 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:908)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:909 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:909)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced 1 more times
ld: error: undefined symbol: tflite::delegates::utils::ReadBufferAttrs(TfLiteAttributeMap const*)
>>> referenced by delegate.cc:1061 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1061)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::RegisterBuffer(TfLiteOpaqueContext*, TfLiteIoType, TfLiteBackendBuffer const*, TfLiteAttributeMap const*, int)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:925 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:925)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
ld: error: undefined symbol: TfLiteBackendBufferGetPtr
>>> referenced by delegate.cc:1087 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1087)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::RegisterBuffer(TfLiteOpaqueContext*, TfLiteIoType, TfLiteBackendBuffer const*, TfLiteAttributeMap const*, int)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
ld: error: undefined symbol: AHardwareBuffer_acquire
>>> referenced by delegate.cc:787 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:787)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::RegisterBuffer(TfLiteOpaqueContext*, TfLiteIoType, TfLiteBackendBuffer const*, TfLiteAttributeMap const*, int)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
ld: error: undefined symbol: AHardwareBuffer_describe
>>> referenced by delegate.cc:803 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:803)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::RegisterBuffer(TfLiteOpaqueContext*, TfLiteIoType, TfLiteBackendBuffer const*, TfLiteAttributeMap const*, int)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:803 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:803)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::EvalImpl(TfLiteContext*, TfLiteNode*, TfLiteExecutionTask*)::$_10::operator()(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::EvalImpl(TfLiteContext*, TfLiteNode*, TfLiteExecutionTask*)::LockedAHWBs*, std::__ndk1::vector<long, std::__ndk1::allocator<long> > const&, absl::lts_20230125::Status (tflite::gpu::InferenceRunner::*)(int, std::__ndk1::variant<std::__ndk1::monostate, tflite::gpu::OpenGlBuffer, tflite::gpu::OpenGlTexture, tflite::gpu::CpuMemory, tflite::gpu::OpenClBuffer, tflite::gpu::OpenClTexture, tflite::gpu::VulkanBuffer, tflite::gpu::VulkanTexture>)) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
ld: error: undefined symbol: AHardwareBuffer_release
>>> referenced by delegate.cc:795 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:795)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::RegisterBuffer(TfLiteOpaqueContext*, TfLiteIoType, TfLiteBackendBuffer const*, TfLiteAttributeMap const*, int)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:795 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:795)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::RegisterBuffer(TfLiteOpaqueContext*, TfLiteIoType, TfLiteBackendBuffer const*, TfLiteAttributeMap const*, int)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:795 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:795)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::Acquire(AHardwareBuffer*)::'lambda'(AHardwareBuffer*)::__invoke(AHardwareBuffer*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
ld: error: undefined symbol: tflite::delegates::utils::WriteBufferAttrs(tflite::delegates::utils::BufferAttributes const&, TfLiteAttributeMap*)
>>> referenced by delegate.cc:927 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:927)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:927 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:927)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:927 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:927)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced 2 more times
ld: error: undefined symbol: TfLiteAttributeMapIsSyncAttributeMap
>>> referenced by delegate.cc:933 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:933)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:934 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:934)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:941 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:941)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced 1 more times
ld: error: undefined symbol: tflite::delegates::utils::ReadSyncAttrs(TfLiteAttributeMap const*)
>>> referenced by delegate.cc:950 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:950)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:983 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:983)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::SetAttributes(TfLiteOpaqueContext*, TfLiteOpaqueNode*, int, TfLiteAttributeMap const*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
ld: error: undefined symbol: tflite::delegates::utils::WriteSyncAttrs(tflite::delegates::utils::SyncAttributes const&, TfLiteAttributeMap*)
>>> referenced by delegate.cc:952 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:952)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:954 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:954)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::ReconcileRestrictions(TfLiteOpaqueContext const*, TfLiteOpaqueNode const*, int, TfLiteAttributeMap const*, TfLiteAttributeMap*, TfLiteAttributeMap*) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
ld: error: undefined symbol: TfLiteSynchronizationGetPtr
>>> referenced by delegate.cc:1256 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1256)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::Eval(TfLiteOpaqueContext*, TfLiteOpaqueNode*, TfLiteExecutionTask*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
ld: error: undefined symbol: tflite::delegates::utils::WaitForAllFds(absl::lts_20230125::Span<int const>)
>>> referenced by delegate.cc:1268 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1268)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::Eval(TfLiteOpaqueContext*, TfLiteOpaqueNode*, TfLiteExecutionTask*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
ld: error: undefined symbol: tflite::delegates::utils::ConvertToTfLiteStatus(absl::lts_20230125::Status)
>>> referenced by delegate.cc:1308 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1308)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::Eval(TfLiteOpaqueContext*, TfLiteOpaqueNode*, TfLiteExecutionTask*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:1289 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1289)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::EvalImpl(TfLiteContext*, TfLiteNode*, TfLiteExecutionTask*)::$_10::operator()(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::EvalImpl(TfLiteContext*, TfLiteNode*, TfLiteExecutionTask*)::LockedAHWBs*, std::__ndk1::vector<long, std::__ndk1::allocator<long> > const&, absl::lts_20230125::Status (tflite::gpu::InferenceRunner::*)(int, std::__ndk1::variant<std::__ndk1::monostate, tflite::gpu::OpenGlBuffer, tflite::gpu::OpenGlTexture, tflite::gpu::CpuMemory, tflite::gpu::OpenClBuffer, tflite::gpu::OpenClTexture, tflite::gpu::VulkanBuffer, tflite::gpu::VulkanTexture>)) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
ld: error: undefined symbol: AHardwareBuffer_unlock
>>> referenced by delegate.cc:1212 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1212)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::Eval(TfLiteOpaqueContext*, TfLiteOpaqueNode*, TfLiteExecutionTask*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:1212 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1212)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::Eval(TfLiteOpaqueContext*, TfLiteOpaqueNode*, TfLiteExecutionTask*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
>>> referenced by delegate.cc:1212 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1212)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::EvalImpl(TfLiteContext*, TfLiteNode*, TfLiteExecutionTask*)::LockedAHWBs::~LockedAHWBs()) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
ld: error: undefined symbol: TfLiteSynchronizationSetPtr
>>> referenced by delegate.cc:1328 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1328)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::Eval(TfLiteOpaqueContext*, TfLiteOpaqueNode*, TfLiteExecutionTask*)) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
ld: error: undefined symbol: AHardwareBuffer_lock
>>> referenced by delegate.cc:1185 (tensorflow/tensorflow/lite/delegates/gpu/delegate.cc:1185)
>>> delegate.cc.o:(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::EvalImpl(TfLiteContext*, TfLiteNode*, TfLiteExecutionTask*)::$_10::operator()(tflite::gpu::(anonymous namespace)::DelegateAsyncKernel::EvalImpl(TfLiteContext*, TfLiteNode*, TfLiteExecutionTask*)::LockedAHWBs*, std::__ndk1::vector<long, std::__ndk1::allocator<long> > const&, absl::lts_20230125::Status (tflite::gpu::InferenceRunner::*)(int, std::__ndk1::variant<std::__ndk1::monostate, tflite::gpu::OpenGlBuffer, tflite::gpu::OpenGlTexture, tflite::gpu::CpuMemory, tflite::gpu::OpenClBuffer, tflite::gpu::OpenClTexture, tflite::gpu::VulkanBuffer, tflite::gpu::VulkanTexture>)) const) in archive tensorflow/tensorflow/lite/libtensorflow-lite.a
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61312/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61312/timeline | null | null | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61311 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61311/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61311/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61311/events | https://github.com/tensorflow/tensorflow/issues/61311 | 1,809,776,026 | I_kwDOArmXAs5r3v2a | 61,311 | Building TFLite for Android with CMake requires Android 26 | {
"login": "GoldFeniks",
"id": 14744013,
"node_id": "MDQ6VXNlcjE0NzQ0MDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/14744013?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GoldFeniks",
"html_url": "https://github.com/GoldFeniks",
"followers_url": "https://api.github.com/users/GoldFeniks/followers",
"following_url": "https://api.github.com/users/GoldFeniks/following{/other_user}",
"gists_url": "https://api.github.com/users/GoldFeniks/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GoldFeniks/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GoldFeniks/subscriptions",
"organizations_url": "https://api.github.com/users/GoldFeniks/orgs",
"repos_url": "https://api.github.com/users/GoldFeniks/repos",
"events_url": "https://api.github.com/users/GoldFeniks/events{/privacy}",
"received_events_url": "https://api.github.com/users/GoldFeniks/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473173351,
"node_id": "MDU6TGFiZWw0NzMxNzMzNTE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:build/install",
"name": "type:build/install",
"color": "159b2e",
"default": false,
"description": "Build and install issues"
},
{
"id": 750616506,
"node_id": "MDU6TGFiZWw3NTA2MTY1MDY=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:lite",
"name": "comp:lite",
"color": "0052cc",
"default": false,
"description": "TF Lite related issues"
},
{
"id": 4989164230,
"node_id": "LA_kwDOArmXAs8AAAABKWCaxg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/Android",
"name": "Android",
"color": "e99695",
"default": false,
"description": ""
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | open | false | {
"login": "terryheo",
"id": 2908505,
"node_id": "MDQ6VXNlcjI5MDg1MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2908505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/terryheo",
"html_url": "https://github.com/terryheo",
"followers_url": "https://api.github.com/users/terryheo/followers",
"following_url": "https://api.github.com/users/terryheo/following{/other_user}",
"gists_url": "https://api.github.com/users/terryheo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/terryheo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/terryheo/subscriptions",
"organizations_url": "https://api.github.com/users/terryheo/orgs",
"repos_url": "https://api.github.com/users/terryheo/repos",
"events_url": "https://api.github.com/users/terryheo/events{/privacy}",
"received_events_url": "https://api.github.com/users/terryheo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "terryheo",
"id": 2908505,
"node_id": "MDQ6VXNlcjI5MDg1MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2908505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/terryheo",
"html_url": "https://github.com/terryheo",
"followers_url": "https://api.github.com/users/terryheo/followers",
"following_url": "https://api.github.com/users/terryheo/following{/other_user}",
"gists_url": "https://api.github.com/users/terryheo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/terryheo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/terryheo/subscriptions",
"organizations_url": "https://api.github.com/users/terryheo/orgs",
"repos_url": "https://api.github.com/users/terryheo/repos",
"events_url": "https://api.github.com/users/terryheo/events{/privacy}",
"received_events_url": "https://api.github.com/users/terryheo/received_events",
"type": "User",
"site_admin": false
},
{
"login": "pkgoogle",
"id": 132095473,
"node_id": "U_kgDOB9-d8Q",
"avatar_url": "https://avatars.githubusercontent.com/u/132095473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pkgoogle",
"html_url": "https://github.com/pkgoogle",
"followers_url": "https://api.github.com/users/pkgoogle/followers",
"following_url": "https://api.github.com/users/pkgoogle/following{/other_user}",
"gists_url": "https://api.github.com/users/pkgoogle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pkgoogle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pkgoogle/subscriptions",
"organizations_url": "https://api.github.com/users/pkgoogle/orgs",
"repos_url": "https://api.github.com/users/pkgoogle/repos",
"events_url": "https://api.github.com/users/pkgoogle/events{/privacy}",
"received_events_url": "https://api.github.com/users/pkgoogle/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @GoldFeniks \r\n\r\nAs this issue is being tracked in #61312 , can we close this as duplicate?\r\n\r\nThanks.",
"Hello @pjpratik,\r\n\r\nI would argue that it's a separate issue.\r\n\r\nIn the docs the minimal supported version is still 19 or 21 for most of the modules: https://www.tensorflow.org/lite/android/development",
"Hi, @pjpratik !\r\nThis is a separate issue. Seems like tensorflow 2.13 includes `AHardwareBuffer_*` functions, which are only available starting with Android 26, though the minimal version reported in documentation is Android 19. Does gpu delegate require Android 26 starting with tensorflow 2.13?",
"Thanks for the clarification @AntonMalyshev @GoldFeniks \r\n\r\nI was able to reproduce this. \r\n<img width=\"567\" alt=\"image\" src=\"https://github.com/tensorflow/tensorflow/assets/118897289/72eb0045-45f1-4928-9bb9-0200c27063d2\">\r\n\r\n@pkgoogle Can we have update about this for >=TF2.13?\r\n\r\nThanks.",
"I run into the same issues on master and nightly, I run into a different issue on r2.12 actually.\r\n\r\nMy reproduce steps:\r\n1. Install Android NDK (in my case via Android Studio)\r\n2. Setup ANDROID_NDK variable to point to Android NDK directory\r\n\r\n3. clone repo, make build dir\r\n```sh\r\ngit clone https://github.com/tensorflow/tensorflow.git\r\ncd tensorflow\r\nmkdir build\r\ncd build\r\n```\r\n\r\n4. run commands in problem description:\r\n```sh\r\ncmake -DCMAKE_TOOLCHAIN_FILE=${ANDROID_NDK}/build/cmake/android.toolchain.cmake -DANDROID_PLATFORM=android-19 -DANDROID_ABI=arm64-v8a -DCMAKE_ANDROID_NDK_VERSION=25 -DTFLITE_ENABLE_GPU=ON -DCMAKE_BUILD_TYPE=Release ../tensorflow/lite/\r\nmake\r\n```\r\n\r\nHi @terryheo, can you please take a look? Thanks."
] | 2023-07-18T11:54:50 | 2023-07-31T22:06:28 | null | NONE | null | null | null | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.13
### Custom code
Yes
### OS platform and distribution
Linux 6.3.1, EndeavourOS
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
clang version 14.0.7
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Building libtensorflow-lite.a with CMake with the GPU delegate enabled requires `AHardwareBuffer_*` functions that are only available at Android API level 26+, even though the minSdkVersion is stated to be 19 for tensorflow-lite-gpu [here](https://www.tensorflow.org/lite/android/development#minimum_android_sdk_versions_for_libraries). Tested on branches r2.13, nightly and master. Branch r2.12 builds without issues.
### Standalone code to reproduce the issue
```shell
cmake -DCMAKE_TOOLCHAIN_FILE=${ANDROID_NDK}/build/cmake/android.toolchain.cmake -DANDROID_PLATFORM=android-19 -DANDROID_ABI=arm64-v8a -DCMAKE_ANDROID_NDK_VERSION=25 -DTFLITE_ENABLE_GPU=ON -DCMAKE_BUILD_TYPE=Release ../tensorflow/lite/
make
```
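As discussed in the comments, the `AHardwareBuffer_*` APIs used by the GPU delegate were introduced at API level 26, so the same configuration reportedly builds once the platform level is raised, at the cost of dropping support for Android < 8.0. A sketch of such an invocation (only the `ANDROID_PLATFORM` flag changes; not independently verified here):
```shell
cmake -DCMAKE_TOOLCHAIN_FILE=${ANDROID_NDK}/build/cmake/android.toolchain.cmake \
      -DANDROID_PLATFORM=android-26 \
      -DANDROID_ABI=arm64-v8a \
      -DCMAKE_ANDROID_NDK_VERSION=25 \
      -DTFLITE_ENABLE_GPU=ON \
      -DCMAKE_BUILD_TYPE=Release \
      ../tensorflow/lite/
make
```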
### Relevant log output
```shell
tensorflow/lite/delegates/gpu/delegate.cc:787:7: error: 'AHardwareBuffer_acquire' is unavailable: introduced in Android 26
AHardwareBuffer_acquire(ahwb);
^
/opt/android-ndk/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/android/hardware_buffer.h:386:6: note: 'AHardwareBuffer_acquire' has been explicitly marked unavailable here
void AHardwareBuffer_acquire(AHardwareBuffer* _Nonnull buffer) __INTRODUCED_IN(26);
^
tensorflow/lite/delegates/gpu/delegate.cc:795:9: error: 'AHardwareBuffer_release' is unavailable: introduced in Android 26
AHardwareBuffer_release(b);
^
/opt/android-ndk/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/android/hardware_buffer.h:394:6: note: 'AHardwareBuffer_release' has been explicitly marked unavailable here
void AHardwareBuffer_release(AHardwareBuffer* _Nonnull buffer) __INTRODUCED_IN(26);
^
tensorflow/lite/delegates/gpu/delegate.cc:803:7: error: 'AHardwareBuffer_describe' is unavailable: introduced in Android 26
AHardwareBuffer_describe(uptr_ahwb.get(), &desc_ahwb);
^
/opt/android-ndk/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/android/hardware_buffer.h:402:6: note: 'AHardwareBuffer_describe' has been explicitly marked unavailable here
void AHardwareBuffer_describe(const AHardwareBuffer* _Nonnull buffer,
^
tensorflow/lite/delegates/gpu/delegate.cc:1185:18: error: 'AHardwareBuffer_lock' is unavailable: introduced in Android 26
return AHardwareBuffer_lock(buffer, this->usage_, -1 /* fence */,
^
/opt/android-ndk/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/android/hardware_buffer.h:457:5: note: 'AHardwareBuffer_lock' has been explicitly marked unavailable here
int AHardwareBuffer_lock(AHardwareBuffer* _Nonnull buffer, uint64_t usage, int32_t fence,
^
tensorflow/lite/delegates/gpu/delegate.cc:1212:24: error: 'AHardwareBuffer_unlock' is unavailable: introduced in Android 26
return AHardwareBuffer_unlock(buffer, nullptr /* fence */);
^
/opt/android-ndk/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/android/hardware_buffer.h:479:5: note: 'AHardwareBuffer_unlock' has been explicitly marked unavailable here
int AHardwareBuffer_unlock(AHardwareBuffer* _Nonnull buffer, int32_t* _Nullable fence)
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61311/timeline | null | null | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61310 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61310/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61310/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61310/events | https://github.com/tensorflow/tensorflow/issues/61310 | 1,809,320,031 | I_kwDOArmXAs5r2Ahf | 61,310 | xla_cpu_gpu_device: MSVC compile errors | {
"login": "johnnkp",
"id": 22496821,
"node_id": "MDQ6VXNlcjIyNDk2ODIx",
"avatar_url": "https://avatars.githubusercontent.com/u/22496821?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnnkp",
"html_url": "https://github.com/johnnkp",
"followers_url": "https://api.github.com/users/johnnkp/followers",
"following_url": "https://api.github.com/users/johnnkp/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnkp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnnkp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnkp/subscriptions",
"organizations_url": "https://api.github.com/users/johnnkp/orgs",
"repos_url": "https://api.github.com/users/johnnkp/repos",
"events_url": "https://api.github.com/users/johnnkp/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnnkp/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473173351,
"node_id": "MDU6TGFiZWw0NzMxNzMzNTE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:build/install",
"name": "type:build/install",
"color": "159b2e",
"default": false,
"description": "Build and install issues"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1188421838,
"node_id": "MDU6TGFiZWwxMTg4NDIxODM4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/subtype:windows",
"name": "subtype:windows",
"color": "b619ea",
"default": false,
"description": "Windows Build/Installation Issues"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | closed | false | {
"login": "kenfranko",
"id": 56562020,
"node_id": "MDQ6VXNlcjU2NTYyMDIw",
"avatar_url": "https://avatars.githubusercontent.com/u/56562020?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kenfranko",
"html_url": "https://github.com/kenfranko",
"followers_url": "https://api.github.com/users/kenfranko/followers",
"following_url": "https://api.github.com/users/kenfranko/following{/other_user}",
"gists_url": "https://api.github.com/users/kenfranko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kenfranko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kenfranko/subscriptions",
"organizations_url": "https://api.github.com/users/kenfranko/orgs",
"repos_url": "https://api.github.com/users/kenfranko/repos",
"events_url": "https://api.github.com/users/kenfranko/events{/privacy}",
"received_events_url": "https://api.github.com/users/kenfranko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "kenfranko",
"id": 56562020,
"node_id": "MDQ6VXNlcjU2NTYyMDIw",
"avatar_url": "https://avatars.githubusercontent.com/u/56562020?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kenfranko",
"html_url": "https://github.com/kenfranko",
"followers_url": "https://api.github.com/users/kenfranko/followers",
"following_url": "https://api.github.com/users/kenfranko/following{/other_user}",
"gists_url": "https://api.github.com/users/kenfranko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kenfranko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kenfranko/subscriptions",
"organizations_url": "https://api.github.com/users/kenfranko/orgs",
"repos_url": "https://api.github.com/users/kenfranko/repos",
"events_url": "https://api.github.com/users/kenfranko/events{/privacy}",
"received_events_url": "https://api.github.com/users/kenfranko/received_events",
"type": "User",
"site_admin": false
},
{
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"The priority of this issue is higher than https://github.com/tensorflow/tensorflow/issues/60397. Some errors of that issue may be disappeared under `clang-cl`.",
"I know that 2.14 has applied fixes to `msvc_wrapper_for_nvcc.py`. It at least works up to 13000 compile actions with the following changes to `msvc_wrapper_for_nvcc.py`:\r\n\r\n```\r\ndef InvokeNvcc(argv, log=False):\r\n\r\n # ...\r\n nvccopts += nvcc_compiler_options\r\n nvccopts += undefines\r\n # above is unchanged\r\n # nvccopts += defines\r\n # below is unchanged\r\n nvccopts += m_options\r\n nvccopts += fatbin_options\r\n # ...\r\n```\r\n\r\nThen, the compilation stop at `tensorflow/compiler/xla/service/gpu/hlo_op_profiles.h` because the pb string is too long over MSVC limitation.",
"I also tried to setup LLVM 17.0.0-rc1 and use `clang-cl` instead. However it has errors such as `constexpr variable 'kRepHeaderSize' must be initialized by a constant expression`. Seems `clang-cl` isn't ready for Windows build yet.",
"Hi @johnnkp, can you please try LLVM 15.0.7. and refer to https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/ci_build/windows/cpu/pip/build_tf_windows_clang-cl.sh",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61310\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61310\">No</a>\n"
] | 2023-07-18T07:28:01 | 2023-11-16T01:49:28 | 2023-11-16T01:49:25 | CONTRIBUTOR | null | null | null | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
2.13.0
### Custom code
Yes
### OS platform and distribution
Windows 10 22H2
### Mobile device
_No response_
### Python version
Anaconda 2023.07-1
### Bazel version
6.2.1
### GCC/compiler version
Visual Studio 2022 (build tools 14.36) + msys2-x86_64-20230718
### CUDA/cuDNN version
CUDA 11.8 + CUDNN 8.6.0 + TensorRT 8.5.3
### GPU model and memory
GTX 750 Ti 2GB
### Current behavior?
This issue requires two fixes.
First, `external/local_config_cuda/crosstool/windows/msvc_wrapper_for_nvcc.py` adds too many unrelated include paths to the compilation command. The related scripts or header dependencies need to be fixed. A minimal working command looks like:
`nvcc -v -c -x=c++ -std=c++17 tensorflow/compiler/jit/xla_cpu_device.cc -I .,bazel-out/x64_windows-opt/bin,external/eigen_archive,
external/com_google_absl,external/com_google_protobuf/src,external/farmhash_archive/src,external/llvm-project/llvm/include,
external/llvm-raw/llvm/include,external/llvm-project/mlir/include,bazel-out/x64_windows-opt/bin/external/llvm-project/mlir/include,
external/tf_runtime/include -o bazel-out/x64_windows-opt/bin/tensorflow/compiler/jit/_objs/xla_cpu_device/xla_cpu_device.obj`
Second, `nvcc` passes the above command to `cl.exe`, and MSVC then throws some syntax errors and stops. If I use the `clang-cl` shipped with Visual Studio instead, some warnings appear but the `.obj` compiles successfully. Following the Linux build's migration to Clang, I think the host compiler path in `msvc_wrapper_for_nvcc.py` could be changed to `clang-cl` to avoid these syntax errors.
### Standalone code to reproduce the issue
```shell
1. download https://github.com/tensorflow/tensorflow/archive/refs/tags/v2.13.0.zip and extract
2. comment out Windows CUDA build rejection code in configure.py
3. run `python configure.py` to configure Windows CUDA build
4. run `bazel build --config=opt --define=no_tensorflow_py_deps=true //tensorflow/tools/pip_package:build_pip_package`
```
### Relevant log output
```shell
ERROR: E:/tensorflow-2.13.0-createprocessw/tensorflow/compiler/jit/BUILD:113:11: Compiling tensorflow/compiler/jit/xla_cpu_device.cc failed: (Exit -1): python.exe failed: error exe
cuting command (from target //tensorflow/compiler/jit:xla_cpu_device)
cd /d E:/_bazel_tensorflow/4zvk5ci6/execroot/org_tensorflow
SET CUDA_TOOLKIT_PATH=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.8
SET INCLUDE=C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\include;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.3
6.32532\ATLMFC\include;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\VS\include;C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\ucrt;C:\Program F
iles (x86)\Windows Kits\10\\include\10.0.22621.0\\um;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\shared;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621
.0\\winrt;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\cppwinrt
SET LIB=C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\ATLMFC\lib\x64;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\1
4.36.32532\lib\x64;C:\Program Files (x86)\Windows Kits\10\lib\10.0.22621.0\ucrt\x64;C:\Program Files (x86)\Windows Kits\10\\lib\10.0.22621.0\\um\x64
SET PATH=C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\bin\HostX64\x64;C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\IDE\V
C\VCPackages;C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\IDE\CommonExtensions\Microsoft\TestWindow;C:\Program Files\Microsoft Visual Studio\2022\Community\Commo
n7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer;C:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Current\bin\Roslyn;C:\Program Files\Microsoft Visual
Studio\2022\Community\Team Tools\Performance Tools\x64;C:\Program Files\Microsoft Visual Studio\2022\Community\Team Tools\Performance Tools;C:\Program Files (x86)\Windows Kits\10\b
in\10.0.22621.0\\x64;C:\Program Files (x86)\Windows Kits\10\bin\\x64;C:\Program Files\Microsoft Visual Studio\2022\Community\\MSBuild\Current\Bin\amd64;C:\Windows\Microsoft.NET\Fra
mework64\v4.0.30319;C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\IDE\;C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\Tools\;;C:\Windows\system32
;C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\IDE\CommonExtensions\Microsoft\CMake\CMake\bin;C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\IDE\
CommonExtensions\Microsoft\CMake\Ninja;C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\IDE\VC\Linux\bin\ConnectionManagerExe
SET PWD=/proc/self/cwd
SET PYTHON_BIN_PATH=C:/Users/tensorflow/anaconda3/python.exe
SET PYTHON_LIB_PATH=C:/Users/tensorflow/anaconda3/lib/site-packages
SET RUNFILES_MANIFEST_ONLY=1
SET TEMP=C:\msys64\tmp
SET TF2_BEHAVIOR=1
SET TF_CUDA_COMPUTE_CAPABILITIES=3.5,5.0,7.0
SET TMP=C:\msys64\tmp
C:\Users\tensorflow\anaconda3\python.exe -B external/local_config_cuda/crosstool/windows/msvc_wrapper_for_nvcc.py /nologo /DCOMPILER_MSVC /DNOMINMAX /D_WIN32_WINNT=0x0600 /D_CRT_
SECURE_NO_DEPRECATE /D_CRT_SECURE_NO_WARNINGS /D_SILENCE_STDEXT_HASH_DEPRECATION_WARNINGS /bigobj /Zm500 /J /Gy /GF /EHsc /wd4351 /wd4291 /wd4250 /wd4996 /I. /Ibazel-out/x64_window
s-opt/bin /Iexternal/eigen_archive /Ibazel-out/x64_windows-opt/bin/external/eigen_archive /Iexternal/com_google_absl /Ibazel-out/x64_windows-opt/bin/external/com_google_absl /Iexte
rnal/nsync /Ibazel-out/x64_windows-opt/bin/external/nsync /Iexternal/double_conversion /Ibazel-out/x64_windows-opt/bin/external/double_conversion /Iexternal/com_google_protobuf /Ib
azel-out/x64_windows-opt/bin/external/com_google_protobuf /Iexternal/llvm-project /Ibazel-out/x64_windows-opt/bin/external/llvm-project /Iexternal/llvm_terminfo /Ibazel-out/x64_win
dows-opt/bin/external/llvm_terminfo /Iexternal/llvm_zlib /Ibazel-out/x64_windows-opt/bin/external/llvm_zlib /Iexternal/gif /Ibazel-out/x64_windows-opt/bin/external/gif /Iexternal/l
ibjpeg_turbo /Ibazel-out/x64_windows-opt/bin/external/libjpeg_turbo /Iexternal/com_googlesource_code_re2 /Ibazel-out/x64_windows-opt/bin/external/com_googlesource_code_re2 /Iextern
al/farmhash_archive /Ibazel-out/x64_windows-opt/bin/external/farmhash_archive /Iexternal/fft2d /Ibazel-out/x64_windows-opt/bin/external/fft2d /Iexternal/highwayhash /Ibazel-out/x64
_windows-opt/bin/external/highwayhash /Iexternal/zlib /Ibazel-out/x64_windows-opt/bin/external/zlib /Iexternal/snappy /Ibazel-out/x64_windows-opt/bin/external/snappy /Iexternal/loc
al_config_cuda /Ibazel-out/x64_windows-opt/bin/external/local_config_cuda /Iexternal/local_config_rocm /Ibazel-out/x64_windows-opt/bin/external/local_config_rocm /Iexternal/local_c
onfig_tensorrt /Ibazel-out/x64_windows-opt/bin/external/local_config_tensorrt /Iexternal/cudnn_frontend_archive /Ibazel-out/x64_windows-opt/bin/external/cudnn_frontend_archive /Iex
ternal/curl /Ibazel-out/x64_windows-opt/bin/external/curl /Iexternal/boringssl /Ibazel-out/x64_windows-opt/bin/external/boringssl /Iexternal/jsoncpp_git /Ibazel-out/x64_windows-opt
/bin/external/jsoncpp_git /Iexternal/com_github_grpc_grpc /Ibazel-out/x64_windows-opt/bin/external/com_github_grpc_grpc /Iexternal/upb /Ibazel-out/x64_windows-opt/bin/external/upb
/Iexternal/mkl_dnn_v1 /Ibazel-out/x64_windows-opt/bin/external/mkl_dnn_v1 /Iexternal/stablehlo /Ibazel-out/x64_windows-opt/bin/external/stablehlo /Iexternal/tf_runtime /Ibazel-out/
x64_windows-opt/bin/external/tf_runtime /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/BuiltinAttributeInterfacesIncGen /Ibazel-out/x64_windows-opt/bi
n/external/llvm-project/mlir/_virtual_includes/BuiltinAttributesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/BuiltinDialectBytecodeGen /Ibaze
l-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/BuiltinDialectIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/BuiltinLoca
tionAttributesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/BuiltinOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtu
al_includes/BuiltinTypeInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/BuiltinTypesIncGen /Ibazel-out/x64_windows-opt/bin/external/llv
m-project/mlir/_virtual_includes/CallOpInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/CastOpInterfacesIncGen /Ibazel-out/x64_windows-
opt/bin/external/llvm-project/mlir/_virtual_includes/FunctionInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/InferTypeOpInterfaceIncGe
n /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/OpAsmInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/Reg
ionKindInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/SideEffectInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project
/mlir/_virtual_includes/SymbolInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/TensorEncodingIncGen /Ibazel-out/x64_windows-opt/bin/ext
ernal/llvm-project/mlir/_virtual_includes/ArithBaseIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ArithCanonicalizationIncGen /Ibazel-out/x64_w
indows-opt/bin/external/llvm-project/mlir/_virtual_includes/ArithOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ArithOpsInterfacesIncGen /Ib
azel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/InferIntRangeInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/
VectorInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ControlFlowInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-projec
t/mlir/_virtual_includes/ControlFlowOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/FuncIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-
project/mlir/_virtual_includes/AsmParserTokenKinds /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/QuantOpsIncGen /Ibazel-out/x64_windows-opt/bin/exter
nal/llvm-project/mlir/_virtual_includes/LoopLikeInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/Mem2RegInterfacesIncGen /Ibazel-out/x64
_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/DialectUtilsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ViewLikeInterfaceIncGe
n /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/PDLOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/PDLTypesInc
Gen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/PDLInterpOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/Con
versionPassIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/TransformsPassIncGen /Ibazel-out/x64_windows-opt/bin/external/local_config_cuda/cuda/
_virtual_includes/cuda_headers_virtual /Ibazel-out/x64_windows-opt/bin/external/local_config_tensorrt/_virtual_includes/tensorrt_headers /Ibazel-out/x64_windows-opt/bin/external/lo
cal_config_cuda/cuda/_virtual_includes/cudnn_header /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/DerivedAttributeOpInterfaceIncGen /Ibazel-out/x64_w
indows-opt/bin/external/llvm-project/mlir/_virtual_includes/MLProgramAttributesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/MLProgramOpsIncGe
n /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/MLProgramTypesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/Run
timeVerifiableOpInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/cudnn_frontend_archive/_virtual_includes/cudnn_frontend /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler
/xla/mlir_hlo/_virtual_includes/mlir_hlo /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/canonicalize_inc_gen /Ibazel-out/x64_windows-opt/bin/ten
sorflow/compiler/xla/mlir_hlo/_virtual_includes/convert_op_folder /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/hlo_ops_attrs_inc_gen /Ibazel-o
ut/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/hlo_ops_common /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/hlo_ops_
enums_inc_gen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/hlo_ops_inc_gen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_v
irtual_includes/hlo_ops_pattern_inc_gen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/hlo_ops_typedefs_inc_gen /Ibazel-out/x64_windows-opt/bin/
external/llvm-project/mlir/_virtual_includes/ComplexAttributesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ComplexBaseIncGen /Ibazel-out/x64_
windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ComplexOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/LLVMDialectInterfaceIncGe
n /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/LLVMIntrinsicOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/L
LVMOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/LLVMTypesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includ
es/CopyOpInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/MemRefBaseIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_v
irtual_includes/MemRefOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ShapedOpInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-
project/mlir/_virtual_includes/DestinationStyleOpInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ValueBoundsOpInterfaceIncGen /Ibazel-o
ut/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/MLIRShapeCanonicalizationIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/Sha
peOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/AffineMemoryOpInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_
virtual_includes/AffineOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ParallelCombiningOpInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/ext
ernal/llvm-project/mlir/_virtual_includes/TensorOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/TilingInterfaceIncGen /Ibazel-out/x64_windows
-opt/bin/external/llvm-project/mlir/_virtual_includes/SparseTensorAttrDefsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/SparseTensorOpsIncGen
/Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/SparseTensorTypesIncGen /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/base /Ibaz
el-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/base_attr_interfaces_inc_gen /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/broadcast_utils /I
bazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/chlo_ops /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/chlo_attrs_inc_gen /Ibazel-out/x64_
windows-opt/bin/external/stablehlo/_virtual_includes/chlo_enums_inc_gen /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/chlo_ops_inc_gen /Ibazel-out/x64_window
s-opt/bin/external/stablehlo/_virtual_includes/stablehlo_assembly_format /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/stablehlo_type_inference /Ibazel-out/x
64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/mhlo_passes /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/chlo_legalize_t
o_hlo /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/chlo_legalize_to_hlo_inc_gen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_h
lo/_virtual_includes/map_chlo_to_hlo_op /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/AllocationOpInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/ext
ernal/llvm-project/mlir/_virtual_includes/BufferizableOpInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/BufferizationBaseIncGen /Ibazel
-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/BufferizationEnumsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/Bufferiz
ationOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/SCFDeviceMappingInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/m
lir/_virtual_includes/SCFIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/SCFPassIncGen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/m
lir_hlo/_virtual_includes/hlo_legalize_to_stablehlo /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/map_stablehlo_to_hlo_op /Ibazel-out/x64_windo
ws-opt/bin/external/stablehlo/_virtual_includes/stablehlo_ops /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/stablehlo_attrs_inc_gen /Ibazel-out/x64_windows-o
pt/bin/external/stablehlo/_virtual_includes/stablehlo_enums_inc_gen /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/stablehlo_ops_inc_gen /Ibazel-out/x64_windo
ws-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/legalize_to_linalg_utils /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/map_mhlo_t
o_scalar_op /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/MathBaseIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes
/MathOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/MaskableOpInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_vi
rtual_includes/MaskingOpInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/VectorOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-p
roject/mlir/_virtual_includes/LinalgEnumsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/LinalgInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/
external/llvm-project/mlir/_virtual_includes/LinalgOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/LinalgStructuredOpsIncGen /Ibazel-out/x64_
windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/legalize_to_standard_inc_gen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/l
hlo /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/lhlo_ops_inc_gen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_in
cludes/lhlo_ops_structs_inc_gen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/lhlo_structured_interface /Ibazel-out/x64_windows-opt/bin/tensorf
low/compiler/xla/mlir_hlo/_virtual_includes/lhlo_structured_interface_inc_gen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/lower_complex_inc_g
en /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/map_hlo_to_lhlo_op /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_i
ncludes/mhlo_pass_inc_gen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/mhlo_rng_utils /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/
mlir_hlo/_virtual_includes/mhlo_scatter_gather_utils /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/shape_component_analysis /Ibazel-out/x64_win
dows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/stablehlo_legalize_to_hlo /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/thlo /I
bazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/gml_st /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/gml_st_op
s_inc_gen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/thlo_ops_inc_gen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virt
ual_includes/thlo_bufferizable_op_interface /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/type_conversion /Ibazel-out/x64_windows-opt/bin/exter
nal/llvm-project/mlir/_virtual_includes/BufferizationPassIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/FuncTransformsPassIncGen /Ibazel-out/x6
4_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/unfuse_batch_norm /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/AffinePassIncGen
/Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ArithPassIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/DLTIBaseI
ncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/GPUBaseIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/GPUOps
IncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/LinalgPassIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/Me
mRefPassIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/NVGPUIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes
/NVGPUPassIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/TensorPassIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_i
ncludes/VectorEnumsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/VectorPassIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_
virtual_includes/X86VectorIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ShapeTransformsPassIncGen /Ibazel-out/x64_windows-opt/bin/tensorflow/c
ompiler/xla/mlir_hlo/_virtual_includes/lhlo_gpu /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/lhlo_gpu_ops_attrdefs_inc_gen /Ibazel-out/x64_win
dows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/lhlo_gpu_ops_dialect_inc_gen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/lhlo
_gpu_ops_enums_inc_gen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/lhlo_gpu_ops_inc_gen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/x
la/mlir_hlo/_virtual_includes/lhlo_gpu_ops_ops /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/stablehlo_passes /Ibazel-out/x64_windows-opt/bin/external/stable
hlo/_virtual_includes/stablehlo_pass_inc_gen /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/version /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtua
l_includes/vhlo_ops /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/vhlo_attr_interfaces_inc_gen /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_in
cludes/vhlo_attrs_inc_gen /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/vhlo_enums_inc_gen /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includ
es/vhlo_op_interfaces_inc_gen /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/vhlo_ops_inc_gen /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_incl
udes/vhlo_types /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/vhlo_type_interfaces_inc_gen /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includ
es/vhlo_types_inc_gen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/hlo_dialect_registration /Ibazel-out/x64_windows-opt/bin/external/stablehlo
/_virtual_includes/register /Ibazel-out/x64_windows-opt/bin/external/llvm-project/llvm/_virtual_includes/NVPTXCodeGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/llvm/_vi
rtual_includes/NVPTXCommonTableGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/llvm/_virtual_includes/NVPTXInfo /Ibazel-out/x64_windows-opt/bin/external/llvm-project/llvm
/_virtual_includes/NVPTXUtilsAndDesc /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/AsyncOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-proje
ct/mlir/_virtual_includes/GPUPassIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/LLVMConversionIncGen /Ibazel-out/x64_windows-opt/bin/external/l
lvm-project/mlir/_virtual_includes/LLVMPassIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/NVVMOpsIncGen /Ibazel-out/x64_windows-opt/bin/externa
l/llvm-project/mlir/_virtual_includes/LLVMIntrinsicConversionIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/OpenMPInterfacesIncGen /Ibazel-out/
x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/OpenMPOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/OpenMPTypeInterfacesIn
cGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ROCDLConversionIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes
/ROCDLOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/AMXIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/
ArmNeonIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ArmSVEIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes
/NVVMConversionIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ShapeToStandardGen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_h
lo/_virtual_includes/transforms_passes /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/deallocation /Ibazel-out/x64_windows-opt/bin/tensorflow/co
mpiler/xla/mlir_hlo/_virtual_includes/deallocation_ops_inc_gen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/deallocation_utils /Ibazel-out/x64
_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/deallocation_passes /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/deallocat
ion_passes_inc_gen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/gml_st_bufferizable_op_interface /Ibazel-out/x64_windows-opt/bin/tensorflow/co
mpiler/xla/mlir_hlo/_virtual_includes/gml_st_passes /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/gml_st_passes_inc_gen /Ibazel-out/x64_windows
-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/gml_st_transforms /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/transforms_passes_i
nc_gen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/userange_analysis /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_incl
udes/TransformDialectEnumsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/TransformDialectIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-p
roject/mlir/_virtual_includes/TransformDialectInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/TransformDialectMatchInterfacesIncGen /I
bazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/TransformOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/Transform
TypesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/TransformDialectTransformsIncGen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/ml
ir_hlo/_virtual_includes/all_passes /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/lmhlo_pass_inc_gen /Ibazel-out/x64_windows-opt/bin/tensorflow
/compiler/xla/mlir_hlo/_virtual_includes/lmhlo_passes /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/map_lmhlo_to_scalar_op /Ibazel-out/x64_wind
ows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/map_lhlo_to_hlo_op /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/thlo_passes /Ib
azel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/thlo_passes_inc_gen /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includ
es/transforms_gpu_passes /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/mlir_hlo/_virtual_includes/gpu_transforms_passes_inc_gen /Ibazel-out/x64_windows-opt/bin/external/l
lvm-project/mlir/_virtual_includes/GPUToNVVMGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/AMDGPUIncGen /Ibazel-out/x64_windows-opt/bin/external/l
lvm-project/mlir/_virtual_includes/GPUToROCDLTGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/AMXConversionIncGen /Ibazel-out/x64_windows-opt/bin/e
xternal/llvm-project/mlir/_virtual_includes/ArmNeonConversionIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ArmSVEConversionIncGen /Ibazel-out/
x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/OpenACCOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/OpenACCTypeInterfaces
IncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/OpenACCTypesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/
X86VectorConversionIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/llvm/_virtual_includes/X86CodeGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/llvm/_virtua
l_includes/X86CommonTableGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/llvm/_virtual_includes/X86Info /Ibazel-out/x64_windows-opt/bin/external/llvm-project/llvm/_virtua
l_includes/X86UtilsAndDesc /Ibazel-out/x64_windows-opt/bin/external/llvm-project/llvm/_virtual_includes/JITLinkTableGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/llvm/_
virtual_includes/X86DisassemblerInternalHeaders /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/SPIRVAttrUtilsGen /Ibazel-out/x64_windows-opt/bin/exter
nal/llvm-project/mlir/_virtual_includes/SPIRVAttributesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/SPIRVAvailabilityIncGen /Ibazel-out/x64_w
indows-opt/bin/external/llvm-project/mlir/_virtual_includes/SPIRVCanonicalizationIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/SPIRVOpsIncGen
/Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/SPIRVSerializationGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/Inde
xEnumsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/IndexOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_include
s/TosaDialectIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/TosaInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_vi
rtual_includes/TosaPassIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/SparseTensorPassIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-proj
ect/mlir/_virtual_includes/AsyncPassIncGen /Ithird_party/eigen3/mkl_include /Ibazel-out/x64_windows-opt/bin/third_party/eigen3/mkl_include /Iexternal/eigen_archive /Ibazel-out/x64_
windows-opt/bin/external/eigen_archive /Iexternal/nsync/public /Ibazel-out/x64_windows-opt/bin/external/nsync/public /Iexternal/com_google_protobuf/src /Ibazel-out/x64_windows-opt/
bin/external/com_google_protobuf/src /Iexternal/llvm-project/llvm/include /Ibazel-out/x64_windows-opt/bin/external/llvm-project/llvm/include /Iexternal/llvm-project/mlir/include /I
bazel-out/x64_windows-opt/bin/external/llvm-project/mlir/include /Itensorflow/compiler/mlir/tensorflow/include /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/mlir/tensorflow/i
nclude /Iexternal/gif /Ibazel-out/x64_windows-opt/bin/external/gif /Iexternal/gif/windows /Ibazel-out/x64_windows-opt/bin/external/gif/windows /Iexternal/farmhash_archive/src /Ibaz
el-out/x64_windows-opt/bin/external/farmhash_archive/src /Iexternal/zlib /Ibazel-out/x64_windows-opt/bin/external/zlib /Iexternal/local_config_cuda/cuda /Ibazel-out/x64_windows-opt
/bin/external/local_config_cuda/cuda /Iexternal/local_config_cuda/cuda/cuda/include /Ibazel-out/x64_windows-opt/bin/external/local_config_cuda/cuda/cuda/include /Iexternal/local_co
nfig_rocm/rocm /Ibazel-out/x64_windows-opt/bin/external/local_config_rocm/rocm /Iexternal/local_config_rocm/rocm/rocm/include /Ibazel-out/x64_windows-opt/bin/external/local_config_
rocm/rocm/rocm/include /Iexternal/local_config_rocm/rocm/rocm/include/rocrand /Ibazel-out/x64_windows-opt/bin/external/local_config_rocm/rocm/rocm/include/rocrand /Iexternal/local_
config_rocm/rocm/rocm/include/roctracer /Ibazel-out/x64_windows-opt/bin/external/local_config_rocm/rocm/rocm/include/roctracer /Iexternal/curl/include /Ibazel-out/x64_windows-opt/b
in/external/curl/include /Iexternal/boringssl/src/include /Ibazel-out/x64_windows-opt/bin/external/boringssl/src/include /Iexternal/jsoncpp_git/include /Ibazel-out/x64_windows-opt/
bin/external/jsoncpp_git/include /Iexternal/com_github_grpc_grpc/include /Ibazel-out/x64_windows-opt/bin/external/com_github_grpc_grpc/include /Iexternal/com_github_grpc_grpc/src/c
ore/ext/upb-generated /Ibazel-out/x64_windows-opt/bin/external/com_github_grpc_grpc/src/core/ext/upb-generated /Iexternal/com_github_grpc_grpc/third_party/address_sorting/include /
Ibazel-out/x64_windows-opt/bin/external/com_github_grpc_grpc/third_party/address_sorting/include /Iexternal/mkl_dnn_v1/include /Ibazel-out/x64_windows-opt/bin/external/mkl_dnn_v1/i
nclude /Iexternal/mkl_dnn_v1/src /Ibazel-out/x64_windows-opt/bin/external/mkl_dnn_v1/src /Iexternal/mkl_dnn_v1/src/common /Ibazel-out/x64_windows-opt/bin/external/mkl_dnn_v1/src/co
mmon /Iexternal/mkl_dnn_v1/src/common/ittnotify /Ibazel-out/x64_windows-opt/bin/external/mkl_dnn_v1/src/common/ittnotify /Iexternal/mkl_dnn_v1/src/cpu /Ibazel-out/x64_windows-opt/b
in/external/mkl_dnn_v1/src/cpu /Iexternal/mkl_dnn_v1/src/cpu/gemm /Ibazel-out/x64_windows-opt/bin/external/mkl_dnn_v1/src/cpu/gemm /Iexternal/mkl_dnn_v1/src/cpu/x64/xbyak /Ibazel-o
ut/x64_windows-opt/bin/external/mkl_dnn_v1/src/cpu/x64/xbyak /Itensorflow/compiler/xla/translate/hlo_to_mhlo/include /Ibazel-out/x64_windows-opt/bin/tensorflow/compiler/xla/transla
te/hlo_to_mhlo/include /Iexternal/tf_runtime/include /Ibazel-out/x64_windows-opt/bin/external/tf_runtime/include /Iexternal/tf_runtime/third_party/llvm_derived/include /Ibazel-out/
x64_windows-opt/bin/external/tf_runtime/third_party/llvm_derived/include /Iexternal/llvm-project/llvm/lib/Target/NVPTX /Ibazel-out/x64_windows-opt/bin/external/llvm-project/llvm/li
b/Target/NVPTX /Iexternal/llvm-project/llvm/lib/Target/X86 /Ibazel-out/x64_windows-opt/bin/external/llvm-project/llvm/lib/Target/X86 /Iexternal/llvm-project/mlir/lib/Conversion/Fun
cToSPIRV /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/lib/Conversion/FuncToSPIRV /Iexternal/llvm-project/mlir/lib/Conversion/MathToSPIRV /Ibazel-out/x64_windows-opt/b
in/external/llvm-project/mlir/lib/Conversion/MathToSPIRV /Iexternal/llvm-project/mlir/lib/Conversions/GPUToSPIRV /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/lib/Conv
ersions/GPUToSPIRV /Iexternal/llvm-project/mlir/lib/Conversion/MemRefToSPIRV /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/lib/Conversion/MemRefToSPIRV /Iexternal/llvm
-project/mlir/lib/Conversion/TensorToLinalg /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/lib/Conversion/TensorToLinalg /Iexternal/llvm-project/mlir/lib/Conversion/Ten
sorToSPIRV /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/lib/Conversion/TensorToSPIRV /Iexternal/llvm-project/mlir/lib/Conversion/TosaToArith /Ibazel-out/x64_windows-o
pt/bin/external/llvm-project/mlir/lib/Conversion/TosaToArith /Iexternal/llvm-project/mlir/lib/Conversion/TosaToLinalg /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/lib
/Conversion/TosaToLinalg /Iexternal/llvm-project/mlir/lib/Conversion/TosaToSCF /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/lib/Conversion/TosaToSCF /Iexternal/llvm-p
roject/mlir/lib/Conversion/TosaToTensor /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/lib/Conversion/TosaToTensor /DEIGEN_MPL2_ONLY /DEIGEN_MAX_ALIGN_BYTES=64 /D_CRT_S
ECURE_NO_DEPRECATE /D_CRT_SECURE_NO_WARNINGS /D_CRT_NONSTDC_NO_DEPRECATE /D_CRT_NONSTDC_NO_WARNINGS /D_SCL_SECURE_NO_DEPRECATE /D_SCL_SECURE_NO_WARNINGS /DUNICODE /D_UNICODE /DLTDL
_SHLIB_EXT=".dll" /DLLVM_PLUGIN_EXT=".dll" /DLLVM_NATIVE_ARCH="X86" /DLLVM_NATIVE_ASMPARSER=LLVMInitializeX86AsmParser /DLLVM_NATIVE_ASMPRINTER=LLVMInitializeX86AsmPrinter /DLLVM_N
ATIVE_DISASSEMBLER=LLVMInitializeX86Disassembler /DLLVM_NATIVE_TARGET=LLVMInitializeX86Target /DLLVM_NATIVE_TARGETINFO=LLVMInitializeX86TargetInfo /DLLVM_NATIVE_TARGETMC=LLVMInitia
lizeX86TargetMC /DLLVM_NATIVE_TARGETMCA=LLVMInitializeX86TargetMCA /DLLVM_HOST_TRIPLE="x86_64-pc-win32" /DLLVM_DEFAULT_TARGET_TRIPLE="x86_64-pc-win32" /DLLVM_VERSION_MAJOR=17 /DLLV
M_VERSION_MINOR=0 /DLLVM_VERSION_PATCH=0 /DLLVM_VERSION_STRING="17.0.0git" /D__STDC_LIMIT_MACROS /D__STDC_CONSTANT_MACROS /D__STDC_FORMAT_MACROS /DBLAKE3_USE_NEON=0 /DBLAKE3_NO_AVX
2 /DBLAKE3_NO_AVX512 /DBLAKE3_NO_SSE2 /DBLAKE3_NO_SSE41 /DTF_USE_SNAPPY /DTF_ENABLE_ACTIVITY_WATCHER /DCURL_STATICLIB /DGRPC_ARES=0 /DTENSORFLOW_USE_CUSTOM_CONTRACTION_KERNEL /DTEN
SORFLOW_USE_MKLDNN_CONTRACTION_KERNEL /DEIGEN_USE_AVX512_GEMM_KERNELS=0 /DGOOGLE_CUDA=1 /DEIGEN_ALTIVEC_USE_CUSTOM_PACK=0 /DEIGEN_NEON_GEBP_NR=4 /DTF_LLVM_X86_AVAILABLE=1 /DBAZEL_C
URRENT_REPOSITORY="" /showIncludes /O2 /DNDEBUG /W0 /Zc:__cplusplus /D_USE_MATH_DEFINES /d2ReducedOptimizeHugeFunctions -DWIN32_LEAN_AND_MEAN -DNOGDI /Zc:preprocessor /d2ReducedOpt
imizeHugeFunctions /arch:AVX2 /std:c++17 /Fobazel-out/x64_windows-opt/bin/tensorflow/compiler/jit/_objs/xla_cpu_device/xla_cpu_device.obj /c tensorflow/compiler/jit/xla_cpu_device.
cc
# Configuration: 65bceb0453d201701ee7f6753c2bb4140d61507260ca636610d54b27f4c27251
# Execution platform: @local_execution_config_platform//:platform
Action failed to execute: java.io.IOException: ERROR: src/main/native/windows/process.cc(165): CreateProcessWithExplicitHandles("C:\Users\tensorflow\anaconda3\python.exe" -B extern
al/local_config_cuda/crosstool/windows/msvc_wrapper_for_nvcc.py /nologo /DCOMPILER_MSVC /DNOMINMAX /D_WIN32_WINNT=0x0600 /D_CRT_SECURE_NO_DEPRECATE /D_CRT_SECURE_NO_WARNINGS /D_SIL
ENCE_STDEXT_HASH_DEPRECATION_WARNINGS /bigobj /Zm500 /J /Gy /GF /EHsc /wd4351 /wd4291 /wd4250 /wd4996 /I. /Ibazel-out/x64_windows-opt/bin /Iexternal/eigen_archive /Ibazel-out/x64_w
indows-opt/bin/external/eigen_archive /Iexternal/com_google_absl /Ibazel-out/x64_wi(...)): command is longer than CreateProcessW's limit (32767 characters)
Target //tensorflow/compiler/jit:jit failed to build
INFO: Elapsed time: 768.468s, Critical Path: 94.60s
INFO: 5 processes: 5 internal.
FAILED: Build did NOT complete successfully
```
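A side note on the `hlo_op_profiles.h` failure mentioned in one of the comments above (the embedded pb string exceeding MSVC's limits): MSVC caps both the length of a single string literal (error C2026) and, separately, the total length of a literal assembled from adjacent pieces (roughly 64 KB, error C1091), so very large generated text protos generally have to be emitted as independent literals joined at run time, or as a raw byte array. The snippet below is only a generic sketch of the run-time-join approach with placeholder contents; it is not TensorFlow's actual generated header.

```cpp
// Generic sketch with placeholder data -- not TensorFlow's generated code.
// Each literal stays far below MSVC's per-literal limit, and because the pieces
// are joined at run time the total size is not bounded by the compiler either.
#include <cstdio>
#include <string>

inline const std::string& EmbeddedTextProto() {
  static const std::string kText = std::string() +
      "first chunk of the generated text proto ... " +
      "second chunk ... " +
      "last chunk";
  return kText;
}

int main() {
  std::printf("embedded text proto is %zu bytes\n", EmbeddedTextProto().size());
  return 0;
}
```

Alternatively, emitting the data as a raw `unsigned char` array initializer sidesteps the string-literal limits entirely, at the cost of a much larger generated file.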
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61310/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61309 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61309/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61309/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61309/events | https://github.com/tensorflow/tensorflow/pull/61309 | 1,809,211,884 | PR_kwDOArmXAs5Vv1r9 | 61,309 | segment_reduction_ops_gpu: Fix MSVC compile errors | {
"login": "johnnkp",
"id": 22496821,
"node_id": "MDQ6VXNlcjIyNDk2ODIx",
"avatar_url": "https://avatars.githubusercontent.com/u/22496821?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnnkp",
"html_url": "https://github.com/johnnkp",
"followers_url": "https://api.github.com/users/johnnkp/followers",
"following_url": "https://api.github.com/users/johnnkp/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnkp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnnkp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnkp/subscriptions",
"organizations_url": "https://api.github.com/users/johnnkp/orgs",
"repos_url": "https://api.github.com/users/johnnkp/repos",
"events_url": "https://api.github.com/users/johnnkp/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnnkp/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1169365494,
"node_id": "MDU6TGFiZWwxMTY5MzY1NDk0",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:M",
"name": "size:M",
"color": "adafea",
"default": false,
"description": "CL Change Size: Medium"
},
{
"id": 1178505529,
"node_id": "MDU6TGFiZWwxMTc4NTA1NTI5",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/prtype:bugfix",
"name": "prtype:bugfix",
"color": "159b2e",
"default": false,
"description": "PR to fix a bug"
},
{
"id": 1478826728,
"node_id": "MDU6TGFiZWwxNDc4ODI2NzI4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:core",
"name": "comp:core",
"color": "024391",
"default": false,
"description": "issues related to core part of tensorflow"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).\n\nView this [failed invocation](https://github.com/tensorflow/tensorflow/pull/61309/checks?check_run_id=15122111588) of the CLA check for more information.\n\nFor the most up to date status, view the checks section at the bottom of the pull request.",
"Hi @johnnkp Can you please sign CLA. Thank you!",
"> Hi @johnnkp Can you please sign CLA. Thank you!\r\n\r\nHi @johnnkp CLA looks good now. Thank you!",
"https://github.com/tensorflow/tensorflow/issues/61310 is related to this PR. Some changes may be unnecessary under `clang-cl`."
] | 2023-07-18T06:21:51 | 2023-07-20T06:08:17 | 2023-07-20T05:46:06 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61309",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61309",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61309.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61309.patch",
"merged_at": null
} | This is a draft PR to fix https://github.com/tensorflow/tensorflow/issues/60397. More Windows-specific macros may need to be added. | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61309/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61308 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61308/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61308/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61308/events | https://github.com/tensorflow/tensorflow/issues/61308 | 1,809,032,102 | I_kwDOArmXAs5r06Om | 61,308 | Question about tensorflow <2.11 and protobuf compatibility | {
"login": "DManowitz",
"id": 66927103,
"node_id": "MDQ6VXNlcjY2OTI3MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/66927103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DManowitz",
"html_url": "https://github.com/DManowitz",
"followers_url": "https://api.github.com/users/DManowitz/followers",
"following_url": "https://api.github.com/users/DManowitz/following{/other_user}",
"gists_url": "https://api.github.com/users/DManowitz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DManowitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DManowitz/subscriptions",
"organizations_url": "https://api.github.com/users/DManowitz/orgs",
"repos_url": "https://api.github.com/users/DManowitz/repos",
"events_url": "https://api.github.com/users/DManowitz/events{/privacy}",
"received_events_url": "https://api.github.com/users/DManowitz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473173351,
"node_id": "MDU6TGFiZWw0NzMxNzMzNTE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:build/install",
"name": "type:build/install",
"color": "159b2e",
"default": false,
"description": "Build and install issues"
},
{
"id": 1188421838,
"node_id": "MDU6TGFiZWwxMTg4NDIxODM4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/subtype:windows",
"name": "subtype:windows",
"color": "b619ea",
"default": false,
"description": "Windows Build/Installation Issues"
},
{
"id": 4511033337,
"node_id": "LA_kwDOArmXAs8AAAABDODn-Q",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.10",
"name": "TF 2.10",
"color": "C15088",
"default": false,
"description": ""
}
] | open | false | {
"login": "vam-google",
"id": 25311427,
"node_id": "MDQ6VXNlcjI1MzExNDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/25311427?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vam-google",
"html_url": "https://github.com/vam-google",
"followers_url": "https://api.github.com/users/vam-google/followers",
"following_url": "https://api.github.com/users/vam-google/following{/other_user}",
"gists_url": "https://api.github.com/users/vam-google/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vam-google/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vam-google/subscriptions",
"organizations_url": "https://api.github.com/users/vam-google/orgs",
"repos_url": "https://api.github.com/users/vam-google/repos",
"events_url": "https://api.github.com/users/vam-google/events{/privacy}",
"received_events_url": "https://api.github.com/users/vam-google/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "vam-google",
"id": 25311427,
"node_id": "MDQ6VXNlcjI1MzExNDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/25311427?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vam-google",
"html_url": "https://github.com/vam-google",
"followers_url": "https://api.github.com/users/vam-google/followers",
"following_url": "https://api.github.com/users/vam-google/following{/other_user}",
"gists_url": "https://api.github.com/users/vam-google/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vam-google/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vam-google/subscriptions",
"organizations_url": "https://api.github.com/users/vam-google/orgs",
"repos_url": "https://api.github.com/users/vam-google/repos",
"events_url": "https://api.github.com/users/vam-google/events{/privacy}",
"received_events_url": "https://api.github.com/users/vam-google/received_events",
"type": "User",
"site_admin": false
},
{
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-07-18T03:57:22 | 2023-07-20T21:26:21 | null | NONE | null | null | null | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
2.10.1
### Custom code
No
### OS platform and distribution
Windows 10
### Mobile device
N/A
### Python version
3.8
### Bazel version
N/A
### GCC/compiler version
N/A
### CUDA/cuDNN version
N/A
### GPU model and memory
N/A
### Current behavior?
I'd like to do some work with tensorflow on native Windows with GPU support, so I've installed tensorflow 2.10.1 via pip. According to the [setup.py](https://github.com/tensorflow/tensorflow/blob/v2.10.1/tensorflow/tools/pip_package/setup.py) for that version, the protobuf dependency is `protobuf >= 3.9.2, < 3.20`. However, I'm mostly using conda/mamba to handle my package management, and having protobuf limited to <3.20 is limiting my options on a number of other packages, particularly with regard to the upcoming EOL of openssl 1.1.1, so I decided to look into this a bit further.
The comment above this dependency says `Protobuf 3.20 results in linker errors on Windows`. Is this supposed to be the error described in #53234? Based on the discussion in this issue, that appears to be the case. However, if so, I'm not sure why the upper pin of protobuf <3.20 would have been added, as the initial issue described in #53234 appears to have been an attempt to upgrade protobuf to **3.19.0**, not **3.20**. I tried installing all protobuf v3.20.x versions on top of my tensorflow v2.10.1 install, and I was at least able to import tensorflow, although I realize that this might not be the situation that causes corruption, if that does happen with these versions.
Also, in the comment above this dependency, it states `Protobuf 4.0 is binary incompatible with what C++ TF uses.` I verified that I would get an error message when I installed any protobuf v4 package via pip after installing tensorflow. However, if I installed tensorflow (including protobuf 3.19.6) via pip, but then installed protobuf 4.23.3 via conda, I was able to import tensorflow without getting any error. Do you have any idea why this might be? I verified that protobuf was reporting as version 4.23.3 in my python interpreter in this situation.
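For reference, the check described above can be reproduced with a minimal snippet along these lines (a sketch only; which protobuf build gets reported depends on whether the pip or the conda package wins on `sys.path`):

```python
import tensorflow as tf          # import TF first, as in the scenario described above
from google import protobuf

# Version of the protobuf runtime the interpreter actually resolved.
print("protobuf:", protobuf.__version__)

# If this point is reached without a descriptor/compatibility error,
# the installed protobuf was at least importable alongside this TF build.
print("tensorflow:", tf.__version__)
```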
Finally, I have tried contacting several conda-forge package maintainers about the possibility of building some additional packages with support for protobuf <3.20, but I have generally not found maintainers willing to add that support.
### Standalone code to reproduce the issue
```shell
N/A
```
### Relevant log output
```shell
N/A
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61308/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61308/timeline | null | null | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61307 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61307/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61307/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61307/events | https://github.com/tensorflow/tensorflow/issues/61307 | 1,808,882,111 | I_kwDOArmXAs5r0Vm_ | 61,307 | Conversion between KerasTensor and tf.tensor | {
"login": "lmx666-gif",
"id": 52613795,
"node_id": "MDQ6VXNlcjUyNjEzNzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/52613795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lmx666-gif",
"html_url": "https://github.com/lmx666-gif",
"followers_url": "https://api.github.com/users/lmx666-gif/followers",
"following_url": "https://api.github.com/users/lmx666-gif/following{/other_user}",
"gists_url": "https://api.github.com/users/lmx666-gif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lmx666-gif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lmx666-gif/subscriptions",
"organizations_url": "https://api.github.com/users/lmx666-gif/orgs",
"repos_url": "https://api.github.com/users/lmx666-gif/repos",
"events_url": "https://api.github.com/users/lmx666-gif/events{/privacy}",
"received_events_url": "https://api.github.com/users/lmx666-gif/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1097545817,
"node_id": "MDU6TGFiZWwxMDk3NTQ1ODE3",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:apis",
"name": "comp:apis",
"color": "0052cc",
"default": false,
"description": "Highlevel API related issues"
},
{
"id": 2477739347,
"node_id": "MDU6TGFiZWwyNDc3NzM5MzQ3",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.4",
"name": "TF 2.4",
"color": "5319e7",
"default": false,
"description": "for issues related to TF 2.4"
}
] | closed | false | {
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@lmx666-gif It happens for the older version of TF . Could you try to upgrade it using the following;\r\n```\r\npip install tensorflow --upgrade\r\n```\r\nOtherwise you can also run the following and restart the kernel\r\nimport tensorflow as tf\r\ntf.enable_eager_execution(). \r\n\r\nPlease let us know if it helps?\r\nThank you!",
"Hello, neither of two methods can turn a kerasTensor into a TF.tensor. I found \"tf.compat.v1.disable_eager_execution()\" could let KerasTensor changed into TF.Tensor, however, this problem:**RuntimeError: `tf.data.Dataset` only supports Python-style iteration in eager mode or within tf.function.**",
"@lmx666-gif Thank you for the response!\r\nCould you please share the standalone code to replicate the issue here. \r\nThank you!",
"Thank you for your response!\r\n\r\nThe first is the map function:\r\n def map_function(example):\r\n\r\n feature_map = {\"wav_raw\": tf.io.FixedLenFeature([], tf.string)}\r\n parsed_example = tf.io.parse_single_example(example, features=feature_map)\r\n\r\n wav_slice = tf.io.decode_raw(parsed_example[\"wav_raw\"], out_type=tf.float64)\r\n wav_slice = tf.cast(wav_slice, tf.float32) / 2 ** 15\r\n\r\n return wav_slice\r\n\r\nThe second one is the training process:\r\n\r\n for epoch in range(args.num_epochs):\r\n \r\n trainset = tf.data.TFRecordDataset(args.trainset_tfrecords_path)\r\n trainset = trainset.map(map_func=map_function,\r\n num_parallel_calls=num_cpus) # num_parallel_calls should be number of cpu cores\r\n\r\n #trainset = trainset.shuffle(buffer_size=args.batch_size * 200, reshuffle_each_iteration=True)\r\n trainset = trainset.batch(batch_size=args.batch_size)\r\n trainset = trainset.prefetch(buffer_size=args.batch_size)\r\n\r\n\r\n # train_loss for each epoch\r\n train_loss_epoch = []\r\n train_loss = 0.0\r\n\r\n # record the train time for each epoch\r\n start = time.time()\r\n # MASK参数\r\n\r\n #EMA_MODEL来选择mask的index\r\n\r\n binary_mask = RandomMaskingGenerator(input_size,frame_length,mask_ratio)\r\n # bmr:0为掩码,1为未掩码;\r\n # bm_T:1为掩码,0为未掩码;\r\n # bm,bm_T = binary_mask()\r\n \r\n\r\n for step, _input in enumerate(trainset):\r\n bm, bm_T, _ = binary_mask.random_mask(_input.shape[0],alpha_e_max)#.totally_random_mask(_input.shape[0])\r\n print(\"_input\",_input)\r\n # print(\"bm_shape\", bm.shape)\r\n # print(\"bm_T_shape\", bm_T.shape)\r\n loss_value = train_step(_input,_input*bm,bm_T)\r\n loss_float = float(loss_value)\r\n\r\n train_loss_epoch.append(loss_float)\r\n\r\n # Calculate the accumulated train loss value\r\n train_loss += loss_float\r\n\r\n\r\n\r\n # average train loss for each epoch\r\n train_loss /= (step + 1)\r\n train_loss_all.append(train_loss)\r\n\r\n # print log\r\n log = \"train epoch {}/{}, train_loss = {:.06f}, time = {:.06f}\"\r\nThe third one is the train_step function:\r\n\r\n @tf.function\r\n def train_step(_input,_input_mask,bm_T):\r\n with tf.GradientTape() as tape:\r\n enc_output, batch_mean, batch_var = sem_enc(_input_mask)\r\n #输入进去semantic decoder\r\n print(\"main:\",enc_output)\r\n _output = sem_dec([enc_output, batch_mean, batch_var])\r\n loss_value = mse_loss(tf.multiply(_input, bm_T), _output)\r\n tf.print(loss_value)\r\n loss_whole = loss_value\r\n\r\n grads = tape.gradient(loss_whole, weights_all) # compute gradients\r\n optimizer.apply_gradients(zip(grads, weights_all)) # update parameters\r\n\r\n return loss_whole\r\n\r\n\r\n\r\nThe _ouput generated by sem_dec() is the value I wanted change it from kerasTensor to TF.tensor; I could send you the whole codes, if you need, this is my email: [email protected]. Thank you for your reply!",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61307\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61307\">No</a>\n",
"In addition, before the new question, I found that the tensor in my model is kerasTensor; however, the tensor returned to my main function is chaned into tf.tensor; Rediculously!",
"@lmx666-gif Could you share the complete code and other dependencies to replicate this issue. I have faced different issues [here](https://colab.research.google.com/gist/sushreebarsa/372264e42bbe4b6f0731f9a29f08ebf2/untitled1250.ipynb). Thank you!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61307\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61307\">No</a>\n"
] | 2023-07-18T01:29:12 | 2023-08-08T01:51:29 | 2023-08-08T01:51:26 | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf2.4
### Custom code
Yes
### OS platform and distribution
Windows 10
### Mobile device
_No response_
### Python version
3.8
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
KerasTensor does not have something like numpy(); how should the conversion be done?
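(As an illustrative sketch of the underlying distinction, not specific to this project's code: a KerasTensor produced while building a functional model is symbolic and has no concrete value, so only eager tensors support `.numpy()`.)

```python
import tensorflow as tf

# Symbolic KerasTensor: created while defining a model; it has no .numpy() value.
x = tf.keras.Input(shape=(1,))
y = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(x, y)

# Eager tf.Tensor: produced by calling the model on concrete data; .numpy() works here.
out = model(tf.constant([[1.0]]))
print(type(out), out.numpy())
```

In other words, concrete values only become available once the model is called on real data; the symbolic tensors used to define the model cannot be converted directly.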
### Standalone code to reproduce the issue
```shell
File "C:/Users/KM Group/Desktop/lmx/SemanticCompression-Speech/DeepSC-S-main/random_mask_training.py", line 73, in <module>
sem_dec = sem_dec_model(frame_length, stride_length, args)
File "C:\Users\KM Group\Desktop\lmx\SemanticCompression-Speech\DeepSC-S-main\model_tfnn.py", line 191, in sem_dec_model
_output = sem_dec(_intput, batch_mean, batch_var)
File "C:\Users\KM Group\Desktop\lmx\SemanticCompression-Speech\DeepSC-S-main\model_tfnn.py", line 142, in __call__
_input = tf.convert_to_tensor(keras.backend.get_value(_input))
File "C:\Users\KM Group\Anaconda3\envs\speech-SC\lib\site-packages\tensorflow\python\keras\backend.py", line 3615, in get_value
return x.numpy()
AttributeError: 'KerasTensor' object has no attribute 'numpy'
```
### Relevant log output
_No response_ | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61307/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61307/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61306 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61306/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61306/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61306/events | https://github.com/tensorflow/tensorflow/issues/61306 | 1,808,796,007 | I_kwDOArmXAs5r0Aln | 61,306 | TensorFlow distributed training works for at most 2 GPUs | {
"login": "aeave",
"id": 77916424,
"node_id": "MDQ6VXNlcjc3OTE2NDI0",
"avatar_url": "https://avatars.githubusercontent.com/u/77916424?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aeave",
"html_url": "https://github.com/aeave",
"followers_url": "https://api.github.com/users/aeave/followers",
"following_url": "https://api.github.com/users/aeave/following{/other_user}",
"gists_url": "https://api.github.com/users/aeave/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aeave/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aeave/subscriptions",
"organizations_url": "https://api.github.com/users/aeave/orgs",
"repos_url": "https://api.github.com/users/aeave/repos",
"events_url": "https://api.github.com/users/aeave/events{/privacy}",
"received_events_url": "https://api.github.com/users/aeave/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 996845227,
"node_id": "MDU6TGFiZWw5OTY4NDUyMjc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:dist-strat",
"name": "comp:dist-strat",
"color": "0052cc",
"default": false,
"description": "Distribution Strategy related issues"
},
{
"id": 5206407904,
"node_id": "LA_kwDOArmXAs8AAAABNlN64A",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.12",
"name": "TF 2.12",
"color": "c5def5",
"default": false,
"description": "For issues related to Tensorflow 2.12"
}
] | closed | false | {
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@aeave If you want to use multiple GPUs, you have to manually specify what tensors to put on each GPU.\r\nCould you please refer to [this](https://www.tensorflow.org/guide/distributed_training) document which explains in detail about the distribution strategy and multi-gpu. Thank you!",
"Hello! Thank you for responding. I followed the steps at the following link when writing the code example: https://www.tensorflow.org/guide/distributed_training#use_tfdistributestrategy_with_keras_modelfit\r\n\r\nI was thinking that the `tf.distribute.MirroredStrategy` API would automatically divide each batch evenly across multiple GPUs. This is the snippet of documentation from the TensorFlow website that led me to believe this:\r\n\r\n \r\nWhen running the code with two GPUs passed into the `tf.distribute.MirroredStrategy()` constructor, I am able to monitor utilization of both of them during the model's training. During training, the two GPUs that I have selected increase in utilization, so the code seems to be working as intended for the case with only two GPUs. However, the code will error if I ask it to use more than two GPUs. If I do not pass in the `gpus` list at all into the `tf.distribute.MirroredStrategy()` constructor, I expect the code to automatically detect all 8 GPUs on the system. `strategy.num_replicas_in_sync` returns 8 in this case, so I assume it is working. However, the same error as with the case of passing only 3 GPUs into the constructor appears in the case with 8 GPUs as well. Thank you for taking the time to help me out!",
"Hi @aeave ,\r\n\r\nWhen using distribution strategy we need to create and compile the model in the strategy scope itself. Also try adding the argument `drop_remainder=True` to the` batch()` method . So please change code like below.\r\n\r\n```\r\nimport tensorflow as tf\r\ngpus = [\"/gpu:0\", \"/gpu:1\", \"/gpu:2\"]\r\nstrategy = tf.distribute.MirroredStrategy(gpus)\r\nwith strategy.scope():\r\n model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])\r\n model.compile(loss=\"mse\", optimizer=\"sgd\")\r\ndataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(16,drop_remainder=True)\r\nmodel.fit(dataset)\r\n```\r\nPlease execute the above code and let us know if it works.\r\n\r\nThanks!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61306\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61306\">No</a>\n"
] | 2023-07-17T23:41:39 | 2023-08-10T01:54:24 | 2023-08-10T01:54:22 | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
2.12.0
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 22.04
### Mobile device
_No response_
### Python version
3.9.16
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
CUDA 11.8, cuDNN 8.6.0.163
### GPU model and memory
_No response_
### Current behavior?
I have 8 NVIDIA H100s on this system. Below is the output of `nvidia-smi`.

The following code works as intended if `gpus` includes at most 2 GPUs.

In the "Relevant log output" section, I have the error log from running the code with 3 GPUs in the `gpus` list.
Please let me know if you need any additional information!
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
gpus = ["/gpu:0", "/gpu:1", "/gpu:2"]
strategy = tf.distribute.MirroredStrategy(gpus)
with strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(loss="mse", optimizer="sgd")
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(16)
model.fit(dataset)
```
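(Side note, as a hedged sketch for narrowing this down: confirming the devices and the replica count the strategy picks up, independently of `fit`, can help separate a device-detection problem from the XLA/cuDNN compilation failure shown in the log below.)

```python
import tensorflow as tf

# GPUs visible to TensorFlow on this machine.
print(tf.config.list_physical_devices("GPU"))

# With no explicit device list, the replica count should match the number of GPUs above.
strategy = tf.distribute.MirroredStrategy()
print("replicas in sync:", strategy.num_replicas_in_sync)
```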
### Relevant log output
```shell
2023-07-17 23:28:31.469519: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-07-17 23:28:31.516738: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI AVX512_BF16 AVX_VNNI AMX_TILE AMX_INT8 AMX_BF16 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-07-17 23:28:31.972313: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-07-17 23:28:35.178825: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2048] TensorFlow was not built with CUDA kernel binaries compatible with compute capability 9.0. CUDA kernels will be jit-compiled from PTX, which could take 30 minutes or longer.
2023-07-17 23:28:35.181183: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2048] TensorFlow was not built with CUDA kernel binaries compatible with compute capability 9.0. CUDA kernels will be jit-compiled from PTX, which could take 30 minutes or longer.
2023-07-17 23:28:35.183528: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2048] TensorFlow was not built with CUDA kernel binaries compatible with compute capability 9.0. CUDA kernels will be jit-compiled from PTX, which could take 30 minutes or longer.
2023-07-17 23:28:35.185905: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2048] TensorFlow was not built with CUDA kernel binaries compatible with compute capability 9.0. CUDA kernels will be jit-compiled from PTX, which could take 30 minutes or longer.
2023-07-17 23:28:35.188234: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2048] TensorFlow was not built with CUDA kernel binaries compatible with compute capability 9.0. CUDA kernels will be jit-compiled from PTX, which could take 30 minutes or longer.
2023-07-17 23:28:35.190559: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2048] TensorFlow was not built with CUDA kernel binaries compatible with compute capability 9.0. CUDA kernels will be jit-compiled from PTX, which could take 30 minutes or longer.
2023-07-17 23:28:35.192895: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2048] TensorFlow was not built with CUDA kernel binaries compatible with compute capability 9.0. CUDA kernels will be jit-compiled from PTX, which could take 30 minutes or longer.
2023-07-17 23:28:35.195198: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2048] TensorFlow was not built with CUDA kernel binaries compatible with compute capability 9.0. CUDA kernels will be jit-compiled from PTX, which could take 30 minutes or longer.
2023-07-17 23:28:36.609484: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2048] TensorFlow was not built with CUDA kernel binaries compatible with compute capability 9.0. CUDA kernels will be jit-compiled from PTX, which could take 30 minutes or longer.
2023-07-17 23:28:36.610685: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2048] TensorFlow was not built with CUDA kernel binaries compatible with compute capability 9.0. CUDA kernels will be jit-compiled from PTX, which could take 30 minutes or longer.
2023-07-17 23:28:36.611908: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2048] TensorFlow was not built with CUDA kernel binaries compatible with compute capability 9.0. CUDA kernels will be jit-compiled from PTX, which could take 30 minutes or longer.
2023-07-17 23:28:36.613166: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2048] TensorFlow was not built with CUDA kernel binaries compatible with compute capability 9.0. CUDA kernels will be jit-compiled from PTX, which could take 30 minutes or longer.
2023-07-17 23:28:36.614370: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2048] TensorFlow was not built with CUDA kernel binaries compatible with compute capability 9.0. CUDA kernels will be jit-compiled from PTX, which could take 30 minutes or longer.
2023-07-17 23:28:36.615584: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2048] TensorFlow was not built with CUDA kernel binaries compatible with compute capability 9.0. CUDA kernels will be jit-compiled from PTX, which could take 30 minutes or longer.
2023-07-17 23:28:36.616799: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2048] TensorFlow was not built with CUDA kernel binaries compatible with compute capability 9.0. CUDA kernels will be jit-compiled from PTX, which could take 30 minutes or longer.
2023-07-17 23:28:36.617975: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2048] TensorFlow was not built with CUDA kernel binaries compatible with compute capability 9.0. CUDA kernels will be jit-compiled from PTX, which could take 30 minutes or longer.
2023-07-17 23:28:36.680264: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1635] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 78938 MB memory: -> device: 0, name: NVIDIA H100 80GB HBM3, pci bus id: 0000:19:00.0, compute capability: 9.0
2023-07-17 23:28:36.682112: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1635] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 78938 MB memory: -> device: 1, name: NVIDIA H100 80GB HBM3, pci bus id: 0000:3b:00.0, compute capability: 9.0
2023-07-17 23:28:36.683772: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1635] Created device /job:localhost/replica:0/task:0/device:GPU:2 with 78938 MB memory: -> device: 2, name: NVIDIA H100 80GB HBM3, pci bus id: 0000:4c:00.0, compute capability: 9.0
2023-07-17 23:28:36.685499: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1635] Created device /job:localhost/replica:0/task:0/device:GPU:3 with 78938 MB memory: -> device: 3, name: NVIDIA H100 80GB HBM3, pci bus id: 0000:5d:00.0, compute capability: 9.0
2023-07-17 23:28:36.687179: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1635] Created device /job:localhost/replica:0/task:0/device:GPU:4 with 78938 MB memory: -> device: 4, name: NVIDIA H100 80GB HBM3, pci bus id: 0000:9b:00.0, compute capability: 9.0
2023-07-17 23:28:36.688884: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1635] Created device /job:localhost/replica:0/task:0/device:GPU:5 with 78938 MB memory: -> device: 5, name: NVIDIA H100 80GB HBM3, pci bus id: 0000:bb:00.0, compute capability: 9.0
2023-07-17 23:28:36.690581: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1635] Created device /job:localhost/replica:0/task:0/device:GPU:6 with 78938 MB memory: -> device: 6, name: NVIDIA H100 80GB HBM3, pci bus id: 0000:cb:00.0, compute capability: 9.0
2023-07-17 23:28:36.692292: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1635] Created device /job:localhost/replica:0/task:0/device:GPU:7 with 78938 MB memory: -> device: 7, name: NVIDIA H100 80GB HBM3, pci bus id: 0000:db:00.0, compute capability: 9.0
2023-07-17 23:28:39.806560: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_1' with dtype float and shape [1]
[[{{node Placeholder/_1}}]]
2023-07-17 23:28:39.806723: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_0' with dtype float and shape [1]
[[{{node Placeholder/_0}}]]
2023-07-17 23:28:39.807452: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:786] AUTO sharding policy will apply DATA sharding policy as it failed to apply FILE sharding policy because of the following reason: Found an unshardable source dataset: name: "TensorDataset/_2"
op: "TensorDataset"
input: "Placeholder/_0"
input: "Placeholder/_1"
attr {
key: "Toutput_types"
value {
list {
type: DT_FLOAT
type: DT_FLOAT
}
}
}
attr {
key: "_cardinality"
value {
i: 1
}
}
attr {
key: "metadata"
value {
s: "\n\017TensorDataset:0"
}
}
attr {
key: "output_shapes"
value {
list {
shape {
dim {
size: 1
}
}
shape {
dim {
size: 1
}
}
}
}
}
experimental_type {
type_id: TFT_PRODUCT
args {
type_id: TFT_DATASET
args {
type_id: TFT_PRODUCT
args {
type_id: TFT_TENSOR
args {
type_id: TFT_FLOAT
}
}
args {
type_id: TFT_TENSOR
args {
type_id: TFT_FLOAT
}
}
}
}
}
2023-07-17 23:28:39.823305: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_1' with dtype float and shape [1]
[[{{node Placeholder/_1}}]]
2023-07-17 23:28:39.823469: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_1' with dtype float and shape [1]
[[{{node Placeholder/_1}}]]
2023-07-17 23:28:40.010671: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_1' with dtype float and shape [1]
[[{{node Placeholder/_1}}]]
2023-07-17 23:28:40.010843: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_0' with dtype float and shape [1]
[[{{node Placeholder/_0}}]]
2023-07-17 23:28:48.599915: I tensorflow/compiler/xla/service/service.cc:169] XLA service 0x7fe600008ff0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2023-07-17 23:28:48.599956: I tensorflow/compiler/xla/service/service.cc:177] StreamExecutor device (0): NVIDIA H100 80GB HBM3, Compute Capability 9.0
2023-07-17 23:28:48.599964: I tensorflow/compiler/xla/service/service.cc:177] StreamExecutor device (1): NVIDIA H100 80GB HBM3, Compute Capability 9.0
2023-07-17 23:28:48.599971: I tensorflow/compiler/xla/service/service.cc:177] StreamExecutor device (2): NVIDIA H100 80GB HBM3, Compute Capability 9.0
2023-07-17 23:28:48.599977: I tensorflow/compiler/xla/service/service.cc:177] StreamExecutor device (3): NVIDIA H100 80GB HBM3, Compute Capability 9.0
2023-07-17 23:28:48.599983: I tensorflow/compiler/xla/service/service.cc:177] StreamExecutor device (4): NVIDIA H100 80GB HBM3, Compute Capability 9.0
2023-07-17 23:28:48.599988: I tensorflow/compiler/xla/service/service.cc:177] StreamExecutor device (5): NVIDIA H100 80GB HBM3, Compute Capability 9.0
2023-07-17 23:28:48.599994: I tensorflow/compiler/xla/service/service.cc:177] StreamExecutor device (6): NVIDIA H100 80GB HBM3, Compute Capability 9.0
2023-07-17 23:28:48.600003: I tensorflow/compiler/xla/service/service.cc:177] StreamExecutor device (7): NVIDIA H100 80GB HBM3, Compute Capability 9.0
2023-07-17 23:28:48.636269: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:429] Could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED
2023-07-17 23:28:48.636335: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:438] Possibly insufficient driver version: 535.54.3
2023-07-17 23:28:48.649063: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:429] Could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED
2023-07-17 23:28:48.649131: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:438] Possibly insufficient driver version: 535.54.3
2023-07-17 23:28:48.711026: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:424] Loaded cuDNN version 8600
2023-07-17 23:28:48.712282: E tensorflow/compiler/xla/status_macros.cc:57] INTERNAL: RET_CHECK failure (tensorflow/compiler/xla/service/gpu/gpu_compiler.cc:618) dnn != nullptr
*** Begin stack trace ***
tsl::CurrentStackTrace[abi:cxx11]()
xla::status_macros::MakeErrorStream::Impl::GetStatus()
xla::gpu::GpuCompiler::OptimizeHloModule(xla::HloModule*, stream_executor::StreamExecutor*, stream_executor::DeviceMemoryAllocator*, xla::gpu::GpuTargetConfig const&, xla::AutotuneResults const*)
xla::gpu::GpuCompiler::RunHloPasses(std::unique_ptr<xla::HloModule, std::default_delete<xla::HloModule> >, stream_executor::StreamExecutor*, xla::Compiler::CompileOptions const&)
xla::Service::BuildExecutable(xla::HloModuleProto const&, std::unique_ptr<xla::HloModuleConfig, std::default_delete<xla::HloModuleConfig> >, xla::Backend*, stream_executor::StreamExecutor*, xla::Compiler::CompileOptions const&, bool)
xla::LocalService::CompileExecutables(xla::XlaComputation const&, absl::lts_20220623::Span<xla::Shape const* const>, xla::ExecutableBuildOptions const&)
xla::LocalClient::Compile(xla::XlaComputation const&, absl::lts_20220623::Span<xla::Shape const* const>, xla::ExecutableBuildOptions const&)
tensorflow::XlaDeviceCompilerClient::BuildExecutable(tensorflow::XlaCompiler::Options const&, tensorflow::XlaCompilationResult const&)
tensorflow::DeviceCompiler<xla::LocalExecutable, xla::LocalClient>::CompileStrict(tensorflow::DeviceCompilationClusterSignature const&, tensorflow::XlaCompiler::CompileOptions const&, tensorflow::XlaCompiler::Options const&, std::vector<tensorflow::XlaArgument, std::allocator<tensorflow::XlaArgument> > const&, tensorflow::NameAttrList const&, tensorflow::DeviceCompilationCache<xla::LocalExecutable>::Value, tensorflow::DeviceCompiler<xla::LocalExecutable, xla::LocalClient>::CompileScope, tensorflow::OpKernelContext*, tensorflow::DeviceCompilationProfiler*, tsl::mutex*)
tensorflow::DeviceCompiler<xla::LocalExecutable, xla::LocalClient>::CompileImpl(tensorflow::XlaCompiler::CompileOptions const&, tensorflow::XlaCompiler::Options const&, tensorflow::NameAttrList const&, std::vector<tensorflow::XlaArgument, std::allocator<tensorflow::XlaArgument> > const&, tensorflow::DeviceCompiler<xla::LocalExecutable, xla::LocalClient>::CompileScope, tensorflow::DeviceCompileMode, tensorflow::OpKernelContext*, tensorflow::DeviceCompilationProfiler*, tensorflow::XlaCompilationResult const**, xla::LocalExecutable**)
tensorflow::XlaLocalLaunchBase::ComputeAsync(tensorflow::OpKernelContext*, std::function<void ()>)
tensorflow::BaseGPUDevice::ComputeAsync(tensorflow::AsyncOpKernel*, tensorflow::OpKernelContext*, std::function<void ()>)
Eigen::ThreadPoolTempl<tsl::thread::EigenEnvironment>::WorkerLoop(int)
std::_Function_handler<void (), tsl::thread::EigenEnvironment::CreateThread(std::function<void ()>)::{lambda()#1}>::_M_invoke(std::_Any_data const&)
*** End stack trace ***
2023-07-17 23:28:48.712483: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_ops.cc:362 : INTERNAL: RET_CHECK failure (tensorflow/compiler/xla/service/gpu/gpu_compiler.cc:618) dnn != nullptr
2023-07-17 23:28:48.712506: I tensorflow/core/common_runtime/executor.cc:1197] [/job:localhost/replica:0/task:0/device:GPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INTERNAL: RET_CHECK failure (tensorflow/compiler/xla/service/gpu/gpu_compiler.cc:618) dnn != nullptr
[[{{node update_0_1/StatefulPartitionedCall}}]]
2023-07-17 23:28:48.727426: E tensorflow/compiler/xla/status_macros.cc:57] INTERNAL: RET_CHECK failure (tensorflow/compiler/xla/service/gpu/gpu_compiler.cc:618) dnn != nullptr
*** Begin stack trace ***
tsl::CurrentStackTrace[abi:cxx11]()
xla::status_macros::MakeErrorStream::Impl::GetStatus()
xla::gpu::GpuCompiler::OptimizeHloModule(xla::HloModule*, stream_executor::StreamExecutor*, stream_executor::DeviceMemoryAllocator*, xla::gpu::GpuTargetConfig const&, xla::AutotuneResults const*)
xla::gpu::GpuCompiler::RunHloPasses(std::unique_ptr<xla::HloModule, std::default_delete<xla::HloModule> >, stream_executor::StreamExecutor*, xla::Compiler::CompileOptions const&)
xla::Service::BuildExecutable(xla::HloModuleProto const&, std::unique_ptr<xla::HloModuleConfig, std::default_delete<xla::HloModuleConfig> >, xla::Backend*, stream_executor::StreamExecutor*, xla::Compiler::CompileOptions const&, bool)
xla::LocalService::CompileExecutables(xla::XlaComputation const&, absl::lts_20220623::Span<xla::Shape const* const>, xla::ExecutableBuildOptions const&)
xla::LocalClient::Compile(xla::XlaComputation const&, absl::lts_20220623::Span<xla::Shape const* const>, xla::ExecutableBuildOptions const&)
tensorflow::XlaDeviceCompilerClient::BuildExecutable(tensorflow::XlaCompiler::Options const&, tensorflow::XlaCompilationResult const&)
tensorflow::DeviceCompiler<xla::LocalExecutable, xla::LocalClient>::CompileStrict(tensorflow::DeviceCompilationClusterSignature const&, tensorflow::XlaCompiler::CompileOptions const&, tensorflow::XlaCompiler::Options const&, std::vector<tensorflow::XlaArgument, std::allocator<tensorflow::XlaArgument> > const&, tensorflow::NameAttrList const&, tensorflow::DeviceCompilationCache<xla::LocalExecutable>::Value, tensorflow::DeviceCompiler<xla::LocalExecutable, xla::LocalClient>::CompileScope, tensorflow::OpKernelContext*, tensorflow::DeviceCompilationProfiler*, tsl::mutex*)
tensorflow::DeviceCompiler<xla::LocalExecutable, xla::LocalClient>::CompileImpl(tensorflow::XlaCompiler::CompileOptions const&, tensorflow::XlaCompiler::Options const&, tensorflow::NameAttrList const&, std::vector<tensorflow::XlaArgument, std::allocator<tensorflow::XlaArgument> > const&, tensorflow::DeviceCompiler<xla::LocalExecutable, xla::LocalClient>::CompileScope, tensorflow::DeviceCompileMode, tensorflow::OpKernelContext*, tensorflow::DeviceCompilationProfiler*, tensorflow::XlaCompilationResult const**, xla::LocalExecutable**)
tensorflow::XlaLocalLaunchBase::ComputeAsync(tensorflow::OpKernelContext*, std::function<void ()>)
tensorflow::BaseGPUDevice::ComputeAsync(tensorflow::AsyncOpKernel*, tensorflow::OpKernelContext*, std::function<void ()>)
Eigen::ThreadPoolTempl<tsl::thread::EigenEnvironment>::WorkerLoop(int)
std::_Function_handler<void (), tsl::thread::EigenEnvironment::CreateThread(std::function<void ()>)::{lambda()#1}>::_M_invoke(std::_Any_data const&)
*** End stack trace ***
2023-07-17 23:28:48.727590: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_ops.cc:362 : INTERNAL: RET_CHECK failure (tensorflow/compiler/xla/service/gpu/gpu_compiler.cc:618) dnn != nullptr
2023-07-17 23:28:48.727609: I tensorflow/core/common_runtime/executor.cc:1197] [/job:localhost/replica:0/task:0/device:GPU:1] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INTERNAL: RET_CHECK failure (tensorflow/compiler/xla/service/gpu/gpu_compiler.cc:618) dnn != nullptr
[[{{node update_1_1/StatefulPartitionedCall}}]]
2023-07-17 23:28:48.728128: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:429] Could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED
2023-07-17 23:28:48.728185: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:438] Possibly insufficient driver version: 535.54.3
2023-07-17 23:28:48.740946: W tensorflow/compiler/xla/stream_executor/gpu/asm_compiler.cc:231] Falling back to the CUDA driver for PTX compilation; ptxas does not support CC 9.0
2023-07-17 23:28:48.740965: W tensorflow/compiler/xla/stream_executor/gpu/asm_compiler.cc:234] Used ptxas at ptxas
2023-07-17 23:28:48.742906: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:429] Could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED
2023-07-17 23:28:48.742961: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:438] Possibly insufficient driver version: 535.54.3
2023-07-17 23:28:48.770994: I ./tensorflow/compiler/jit/device_compiler.h:180] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.
2023-07-17 23:28:48.806579: E tensorflow/compiler/xla/status_macros.cc:57] INTERNAL: RET_CHECK failure (tensorflow/compiler/xla/service/gpu/gpu_compiler.cc:618) dnn != nullptr
*** Begin stack trace ***
tsl::CurrentStackTrace[abi:cxx11]()
xla::status_macros::MakeErrorStream::Impl::GetStatus()
xla::gpu::GpuCompiler::OptimizeHloModule(xla::HloModule*, stream_executor::StreamExecutor*, stream_executor::DeviceMemoryAllocator*, xla::gpu::GpuTargetConfig const&, xla::AutotuneResults const*)
xla::gpu::GpuCompiler::RunHloPasses(std::unique_ptr<xla::HloModule, std::default_delete<xla::HloModule> >, stream_executor::StreamExecutor*, xla::Compiler::CompileOptions const&)
xla::Service::BuildExecutable(xla::HloModuleProto const&, std::unique_ptr<xla::HloModuleConfig, std::default_delete<xla::HloModuleConfig> >, xla::Backend*, stream_executor::StreamExecutor*, xla::Compiler::CompileOptions const&, bool)
xla::LocalService::CompileExecutables(xla::XlaComputation const&, absl::lts_20220623::Span<xla::Shape const* const>, xla::ExecutableBuildOptions const&)
xla::LocalClient::Compile(xla::XlaComputation const&, absl::lts_20220623::Span<xla::Shape const* const>, xla::ExecutableBuildOptions const&)
tensorflow::XlaDeviceCompilerClient::BuildExecutable(tensorflow::XlaCompiler::Options const&, tensorflow::XlaCompilationResult const&)
tensorflow::DeviceCompiler<xla::LocalExecutable, xla::LocalClient>::CompileStrict(tensorflow::DeviceCompilationClusterSignature const&, tensorflow::XlaCompiler::CompileOptions const&, tensorflow::XlaCompiler::Options const&, std::vector<tensorflow::XlaArgument, std::allocator<tensorflow::XlaArgument> > const&, tensorflow::NameAttrList const&, tensorflow::DeviceCompilationCache<xla::LocalExecutable>::Value, tensorflow::DeviceCompiler<xla::LocalExecutable, xla::LocalClient>::CompileScope, tensorflow::OpKernelContext*, tensorflow::DeviceCompilationProfiler*, tsl::mutex*)
tensorflow::DeviceCompiler<xla::LocalExecutable, xla::LocalClient>::CompileImpl(tensorflow::XlaCompiler::CompileOptions const&, tensorflow::XlaCompiler::Options const&, tensorflow::NameAttrList const&, std::vector<tensorflow::XlaArgument, std::allocator<tensorflow::XlaArgument> > const&, tensorflow::DeviceCompiler<xla::LocalExecutable, xla::LocalClient>::CompileScope, tensorflow::DeviceCompileMode, tensorflow::OpKernelContext*, tensorflow::DeviceCompilationProfiler*, tensorflow::XlaCompilationResult const**, xla::LocalExecutable**)
tensorflow::XlaLocalLaunchBase::ComputeAsync(tensorflow::OpKernelContext*, std::function<void ()>)
tensorflow::BaseGPUDevice::ComputeAsync(tensorflow::AsyncOpKernel*, tensorflow::OpKernelContext*, std::function<void ()>)
Eigen::ThreadPoolTempl<tsl::thread::EigenEnvironment>::WorkerLoop(int)
std::_Function_handler<void (), tsl::thread::EigenEnvironment::CreateThread(std::function<void ()>)::{lambda()#1}>::_M_invoke(std::_Any_data const&)
*** End stack trace ***
2023-07-17 23:28:48.806809: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_ops.cc:362 : INTERNAL: RET_CHECK failure (tensorflow/compiler/xla/service/gpu/gpu_compiler.cc:618) dnn != nullptr
2023-07-17 23:28:48.818268: E tensorflow/compiler/xla/status_macros.cc:57] INTERNAL: RET_CHECK failure (tensorflow/compiler/xla/service/gpu/gpu_compiler.cc:618) dnn != nullptr
*** Begin stack trace ***
tsl::CurrentStackTrace[abi:cxx11]()
xla::status_macros::MakeErrorStream::Impl::GetStatus()
xla::gpu::GpuCompiler::OptimizeHloModule(xla::HloModule*, stream_executor::StreamExecutor*, stream_executor::DeviceMemoryAllocator*, xla::gpu::GpuTargetConfig const&, xla::AutotuneResults const*)
xla::gpu::GpuCompiler::RunHloPasses(std::unique_ptr<xla::HloModule, std::default_delete<xla::HloModule> >, stream_executor::StreamExecutor*, xla::Compiler::CompileOptions const&)
xla::Service::BuildExecutable(xla::HloModuleProto const&, std::unique_ptr<xla::HloModuleConfig, std::default_delete<xla::HloModuleConfig> >, xla::Backend*, stream_executor::StreamExecutor*, xla::Compiler::CompileOptions const&, bool)
xla::LocalService::CompileExecutables(xla::XlaComputation const&, absl::lts_20220623::Span<xla::Shape const* const>, xla::ExecutableBuildOptions const&)
xla::LocalClient::Compile(xla::XlaComputation const&, absl::lts_20220623::Span<xla::Shape const* const>, xla::ExecutableBuildOptions const&)
tensorflow::XlaDeviceCompilerClient::BuildExecutable(tensorflow::XlaCompiler::Options const&, tensorflow::XlaCompilationResult const&)
tensorflow::DeviceCompiler<xla::LocalExecutable, xla::LocalClient>::CompileStrict(tensorflow::DeviceCompilationClusterSignature const&, tensorflow::XlaCompiler::CompileOptions const&, tensorflow::XlaCompiler::Options const&, std::vector<tensorflow::XlaArgument, std::allocator<tensorflow::XlaArgument> > const&, tensorflow::NameAttrList const&, tensorflow::DeviceCompilationCache<xla::LocalExecutable>::Value, tensorflow::DeviceCompiler<xla::LocalExecutable, xla::LocalClient>::CompileScope, tensorflow::OpKernelContext*, tensorflow::DeviceCompilationProfiler*, tsl::mutex*)
tensorflow::DeviceCompiler<xla::LocalExecutable, xla::LocalClient>::CompileImpl(tensorflow::XlaCompiler::CompileOptions const&, tensorflow::XlaCompiler::Options const&, tensorflow::NameAttrList const&, std::vector<tensorflow::XlaArgument, std::allocator<tensorflow::XlaArgument> > const&, tensorflow::DeviceCompiler<xla::LocalExecutable, xla::LocalClient>::CompileScope, tensorflow::DeviceCompileMode, tensorflow::OpKernelContext*, tensorflow::DeviceCompilationProfiler*, tensorflow::XlaCompilationResult const**, xla::LocalExecutable**)
tensorflow::XlaLocalLaunchBase::ComputeAsync(tensorflow::OpKernelContext*, std::function<void ()>)
tensorflow::BaseGPUDevice::ComputeAsync(tensorflow::AsyncOpKernel*, tensorflow::OpKernelContext*, std::function<void ()>)
Eigen::ThreadPoolTempl<tsl::thread::EigenEnvironment>::WorkerLoop(int)
std::_Function_handler<void (), tsl::thread::EigenEnvironment::CreateThread(std::function<void ()>)::{lambda()#1}>::_M_invoke(std::_Any_data const&)
*** End stack trace ***
2023-07-17 23:28:48.818437: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_ops.cc:362 : INTERNAL: RET_CHECK failure (tensorflow/compiler/xla/service/gpu/gpu_compiler.cc:618) dnn != nullptr
2023-07-17 23:28:48.858099: I tensorflow/core/common_runtime/executor.cc:1197] [/job:localhost/replica:0/task:0/device:GPU:2] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INTERNAL: RET_CHECK failure (tensorflow/compiler/xla/service/gpu/gpu_compiler.cc:618) dnn != nullptr
[[{{node update_0_1/StatefulPartitionedCall}}]]
[[GroupCrossDeviceControlEdges_1/Identity_7/_166]]
Traceback (most recent call last):
File "/home/user4/project/src/test.py", line 8, in <module>
model.fit(dataset)
File "/home/user4/miniconda3/envs/tf/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/user4/miniconda3/envs/tf/lib/python3.9/site-packages/tensorflow/python/eager/execute.py", line 52, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InternalError: Graph execution error:
Detected at node 'update_1_1/StatefulPartitionedCall' defined at (most recent call last):
File "/home/user4/project/src/test.py", line 8, in <module>
model.fit(dataset)
File "/home/user4/miniconda3/envs/tf/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/home/user4/miniconda3/envs/tf/lib/python3.9/site-packages/keras/engine/training.py", line 1685, in fit
tmp_logs = self.train_function(iterator)
File "/home/user4/miniconda3/envs/tf/lib/python3.9/site-packages/keras/engine/training.py", line 1284, in train_function
return step_function(self, iterator)
File "/home/user4/miniconda3/envs/tf/lib/python3.9/site-packages/keras/engine/training.py", line 1268, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/home/user4/miniconda3/envs/tf/lib/python3.9/site-packages/keras/optimizers/optimizer.py", line 1250, in _distributed_apply_gradients_fn
distribution.extended.update(
File "/home/user4/miniconda3/envs/tf/lib/python3.9/site-packages/keras/optimizers/optimizer.py", line 1245, in apply_grad_to_update_var
return self._update_step_xla(grad, var, id(self._var_key(var)))
Node: 'update_1_1/StatefulPartitionedCall'
Detected at node 'update_0_1/StatefulPartitionedCall' defined at (most recent call last):
File "/home/user4/project/src/test.py", line 8, in <module>
model.fit(dataset)
File "/home/user4/miniconda3/envs/tf/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/home/user4/miniconda3/envs/tf/lib/python3.9/site-packages/keras/engine/training.py", line 1685, in fit
tmp_logs = self.train_function(iterator)
File "/home/user4/miniconda3/envs/tf/lib/python3.9/site-packages/keras/engine/training.py", line 1284, in train_function
return step_function(self, iterator)
File "/home/user4/miniconda3/envs/tf/lib/python3.9/site-packages/keras/engine/training.py", line 1268, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/home/user4/miniconda3/envs/tf/lib/python3.9/site-packages/keras/optimizers/optimizer.py", line 1250, in _distributed_apply_gradients_fn
distribution.extended.update(
File "/home/user4/miniconda3/envs/tf/lib/python3.9/site-packages/keras/optimizers/optimizer.py", line 1245, in apply_grad_to_update_var
return self._update_step_xla(grad, var, id(self._var_key(var)))
Node: 'update_0_1/StatefulPartitionedCall'
Detected at node 'update_0_1/StatefulPartitionedCall' defined at (most recent call last):
File "/home/user4/project/src/test.py", line 8, in <module>
model.fit(dataset)
File "/home/user4/miniconda3/envs/tf/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/home/user4/miniconda3/envs/tf/lib/python3.9/site-packages/keras/engine/training.py", line 1685, in fit
tmp_logs = self.train_function(iterator)
File "/home/user4/miniconda3/envs/tf/lib/python3.9/site-packages/keras/engine/training.py", line 1284, in train_function
return step_function(self, iterator)
File "/home/user4/miniconda3/envs/tf/lib/python3.9/site-packages/keras/engine/training.py", line 1268, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/home/user4/miniconda3/envs/tf/lib/python3.9/site-packages/keras/optimizers/optimizer.py", line 1250, in _distributed_apply_gradients_fn
distribution.extended.update(
File "/home/user4/miniconda3/envs/tf/lib/python3.9/site-packages/keras/optimizers/optimizer.py", line 1245, in apply_grad_to_update_var
return self._update_step_xla(grad, var, id(self._var_key(var)))
Node: 'update_0_1/StatefulPartitionedCall'
3 root error(s) found.
(0) INTERNAL: RET_CHECK failure (tensorflow/compiler/xla/service/gpu/gpu_compiler.cc:618) dnn != nullptr
[[{{node update_1_1/StatefulPartitionedCall}}]]
(1) INTERNAL: RET_CHECK failure (tensorflow/compiler/xla/service/gpu/gpu_compiler.cc:618) dnn != nullptr
[[{{node update_0_1/StatefulPartitionedCall}}]]
[[GroupCrossDeviceControlEdges_1/Identity_7/_166]]
(2) INTERNAL: RET_CHECK failure (tensorflow/compiler/xla/service/gpu/gpu_compiler.cc:618) dnn != nullptr
[[{{node update_0_1/StatefulPartitionedCall}}]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_1810]
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61306/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61306/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61305 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61305/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61305/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61305/events | https://github.com/tensorflow/tensorflow/pull/61305 | 1,808,606,068 | PR_kwDOArmXAs5VtxtU | 61,305 | Interface for XLA Outside Compilation in TPU VMs | {
"login": "LionOfJewdah",
"id": 19232265,
"node_id": "MDQ6VXNlcjE5MjMyMjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/19232265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LionOfJewdah",
"html_url": "https://github.com/LionOfJewdah",
"followers_url": "https://api.github.com/users/LionOfJewdah/followers",
"following_url": "https://api.github.com/users/LionOfJewdah/following{/other_user}",
"gists_url": "https://api.github.com/users/LionOfJewdah/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LionOfJewdah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LionOfJewdah/subscriptions",
"organizations_url": "https://api.github.com/users/LionOfJewdah/orgs",
"repos_url": "https://api.github.com/users/LionOfJewdah/repos",
"events_url": "https://api.github.com/users/LionOfJewdah/events{/privacy}",
"received_events_url": "https://api.github.com/users/LionOfJewdah/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1173072136,
"node_id": "MDU6TGFiZWwxMTczMDcyMTM2",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:XL",
"name": "size:XL",
"color": "adafea",
"default": false,
"description": "CL Change Size:Extra Large"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-07-17T20:51:12 | 2023-07-17T21:31:14 | 2023-07-17T21:31:14 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61305",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61305",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61305.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61305.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61305/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61305/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61304 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61304/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61304/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61304/events | https://github.com/tensorflow/tensorflow/pull/61304 | 1,808,560,101 | PR_kwDOArmXAs5VtncR | 61,304 | Interface for XLA Outside Compilation in TPU VMs | {
"login": "LionOfJewdah",
"id": 19232265,
"node_id": "MDQ6VXNlcjE5MjMyMjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/19232265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LionOfJewdah",
"html_url": "https://github.com/LionOfJewdah",
"followers_url": "https://api.github.com/users/LionOfJewdah/followers",
"following_url": "https://api.github.com/users/LionOfJewdah/following{/other_user}",
"gists_url": "https://api.github.com/users/LionOfJewdah/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LionOfJewdah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LionOfJewdah/subscriptions",
"organizations_url": "https://api.github.com/users/LionOfJewdah/orgs",
"repos_url": "https://api.github.com/users/LionOfJewdah/repos",
"events_url": "https://api.github.com/users/LionOfJewdah/events{/privacy}",
"received_events_url": "https://api.github.com/users/LionOfJewdah/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Whoops, first PR from Github. I only need to merge my own commit 😅"
] | 2023-07-17T20:24:47 | 2023-07-17T20:51:40 | 2023-07-17T20:51:40 | CONTRIBUTOR | null | true | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61304",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61304",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61304.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61304.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61304/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61304/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61303 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61303/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61303/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61303/events | https://github.com/tensorflow/tensorflow/issues/61303 | 1,808,454,217 | I_kwDOArmXAs5rytJJ | 61,303 | Build failure | {
"login": "morgandu",
"id": 11281451,
"node_id": "MDQ6VXNlcjExMjgxNDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/11281451?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/morgandu",
"html_url": "https://github.com/morgandu",
"followers_url": "https://api.github.com/users/morgandu/followers",
"following_url": "https://api.github.com/users/morgandu/following{/other_user}",
"gists_url": "https://api.github.com/users/morgandu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/morgandu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/morgandu/subscriptions",
"organizations_url": "https://api.github.com/users/morgandu/orgs",
"repos_url": "https://api.github.com/users/morgandu/repos",
"events_url": "https://api.github.com/users/morgandu/events{/privacy}",
"received_events_url": "https://api.github.com/users/morgandu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 473173351,
"node_id": "MDU6TGFiZWw0NzMxNzMzNTE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:build/install",
"name": "type:build/install",
"color": "159b2e",
"default": false,
"description": "Build and install issues"
}
] | closed | false | {
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61303\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61303\">No</a>\n"
] | 2023-07-17T19:26:44 | 2023-07-17T19:54:01 | 2023-07-17T19:53:58 | NONE | null | null | null | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
tf 2.11.0
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 22.04
### Mobile device
_No response_
### Python version
3.10
### Bazel version
5.4.0
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
The build was successful until last Friday. When I repeated the previously successful build, I ran into the error shown in the `Relevant log output` section below.
### Standalone code to reproduce the issue
```shell
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
ARG SAXML_BUILD_SOURCE=github
ARG BASE_IMAGE=ubuntu:22.04
FROM ${BASE_IMAGE} AS base-build-image
ARG SAXML_VERSION_GIT_BRANCH=main
ARG SAXML_GIT_COMMIT=HEAD
ARG PYTHON_VERSION=3.10
ARG DEBIAN_FRONTEND=noninteractive
ENV PYTHONUNBUFFERED TRUE
RUN apt-get -qq update && \
apt-get -qq install -y --no-install-recommends \
apt-transport-https \
automake \
build-essential \
ca-certificates \
curl \
git \
gnupg \
libcurl3-dev \
libfreetype6-dev \
libpng-dev \
libtool \
libzmq3-dev \
mlocate \
patch \
pkg-config \
software-properties-common \
sudo \
swig \
unzip \
wget \
zip \
zlib1g-dev && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Install python
RUN add-apt-repository ppa:deadsnakes/ppa && \
apt-get -qq update && \
apt-get -qq install -y --no-install-recommends \
python3-pip \
python${PYTHON_VERSION} \
python${PYTHON_VERSION}-dev \
python${PYTHON_VERSION}-venv && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* && \
python${PYTHON_VERSION} -m pip install -q pip --upgrade && \
update-alternatives --install /usr/bin/python3 python3 /usr/bin/python${PYTHON_VERSION} 0 && \
update-alternatives --install /usr/bin/python python /usr/bin/python${PYTHON_VERSION} 0
# Install google-cloud-cli
RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" >> /etc/apt/sources.list.d/google-cloud-sdk.list
RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
RUN apt-get -qq update && \
apt-get -qq install -y --no-install-recommends \
google-cloud-cli && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Install bazel
ARG BAZEL_VERSION=5.4.0
RUN mkdir /bazel && \
cd /bazel && \
curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/${BAZEL_VERSION}/bazel-${BAZEL_VERSION}-installer-linux-x86_64.sh && \
chmod +x bazel-*.sh && \
./bazel-${BAZEL_VERSION}-installer-linux-x86_64.sh && \
cd / && \
rm -f /bazel/bazel-${BAZEL_VERSION}-installer-linux-x86_64.sh
# Install numpy
RUN pip install --no-cache-dir --no-deps numpy
RUN rm -rf /root/.cache/pip
FROM base-build-image as base-build-from-github
ONBUILD ARG SAXML_GIT_COMMIT
# Download saxml sources (optionally at specific commit)
ONBUILD WORKDIR /saxml
ONBUILD RUN curl -sSL --retry 5 https://github.com/google/saxml/tarball/${SAXML_GIT_COMMIT} | tar --strip-components=1 -xzf -
FROM base-build-image as base-build-from-local
# Copy saxml local repo
ONBUILD COPY local_saxml_repo /saxml
ONBUILD WORKDIR /saxml
FROM base-build-from-${SAXML_BUILD_SOURCE} AS build-source-image
ARG SAXML_BUILD_SOURCE
RUN echo "Building SAXML from: ${SAXML_BUILD_SOURCE}"
FROM build-source-image AS build-admin-server-image
WORKDIR /saxml
RUN bazel build --color=yes --curses=yes saxml/bin:admin_config && \
cp bazel-bin/saxml/bin/admin_config_/admin_config /usr/bin/admin_config
```
```
docker build \
--pull \
--target runtime-admin-server-image \
-t sax-admin-server \
-f Dockerfile . \
--build-arg=SAXML_BUILD_SOURCE=github \
--build-arg=SAXML_VERSION_GIT_BRANCH=main \
--build-arg=SAXML_GIT_COMMIT=HEAD ;
```
### Relevant log output
```shell
#18 [build-admin-server-image 2/10] RUN bazel build --color=yes --curses=yes saxml/bin:admin_config && cp bazel-bin/saxml/bin/admin_config_/admin_config /usr/bin/admin_config
#18 0.618 Extracting Bazel installation...
#18 2.311 Starting local Bazel server and connecting to it...
DEBUG: Rule 'python3_10_x86_64-unknown-linux-gnu' indicated that a canonical reproducible form can be obtained by dropping arguments ["ignore_root_user_error"]
DEBUG: Repository python3_10_x86_64-unknown-linux-gnu instantiated at:
#18 6.580 /saxml/WORKSPACE:27:27: in <toplevel>
#18 6.580 /root/.cache/bazel/_bazel_root/edfec97661350df226696afb5a35c874/external/rules_python/python/repositories.bzl:366:26: in python_register_toolchains
#18 6.580 Repository rule python_repository defined at:
#18 6.580 /root/.cache/bazel/_bazel_root/edfec97661350df226696afb5a35c874/external/rules_python/python/repositories.bzl:269:36: in <toplevel>
DEBUG: Rule 'org_tensorflow' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "99c732b92b1b37fc243a559e02f9aef5671771e272758aa4aec7f34dc92dac48"
DEBUG: Repository org_tensorflow instantiated at:
#18 28.14 /saxml/WORKSPACE:356:13: in <toplevel>
#18 28.14 Repository rule http_archive defined at:
#18 28.14 /root/.cache/bazel/_bazel_root/edfec97661350df226696afb5a35c874/external/bazel_tools/tools/build_defs/repo/http.bzl:355:31: in <toplevel>
ERROR: Traceback (most recent call last):
#18 28.21 File "/root/.cache/bazel/_bazel_root/edfec97661350df226696afb5a35c874/external/org_tensorflow/third_party/py/python_configure.bzl", line 288, column 42, in <toplevel>
#18 28.21 remote_python_configure = repository_rule(
#18 28.21 Error in repository_rule: in call to repository_rule(), parameter 'remotable' is experimental and thus unavailable with the current flags. It may be enabled by setting --experimental_repo_remote_exec
INFO: Repository tf_runtime instantiated at:
#18 28.27 /saxml/WORKSPACE:363:14: in <toplevel>
#18 28.27 /root/.cache/bazel/_bazel_root/edfec97661350df226696afb5a35c874/external/org_tensorflow/tensorflow/workspace3.bzl:18:15: in workspace
#18 28.27 /root/.cache/bazel/_bazel_root/edfec97661350df226696afb5a35c874/external/org_tensorflow/third_party/tf_runtime/workspace.bzl:12:20: in repo
#18 28.27 /root/.cache/bazel/_bazel_root/edfec97661350df226696afb5a35c874/external/org_tensorflow/third_party/repo.bzl:136:21: in tf_http_archive
#18 28.27 Repository rule _tf_http_archive defined at:
#18 28.27 /root/.cache/bazel/_bazel_root/edfec97661350df226696afb5a35c874/external/org_tensorflow/third_party/repo.bzl:89:35: in <toplevel>
INFO: Repository io_bazel_rules_closure instantiated at:
#18 28.31 /saxml/WORKSPACE:363:14: in <toplevel>
#18 28.31 /root/.cache/bazel/_bazel_root/edfec97661350df226696afb5a35c874/external/org_tensorflow/tensorflow/workspace3.bzl:8:17: in workspace
#18 28.31 Repository rule http_archive defined at:
#18 28.31 /root/.cache/bazel/_bazel_root/edfec97661350df226696afb5a35c874/external/bazel_tools/tools/build_defs/repo/http.bzl:355:31: in <toplevel>
ERROR: error loading package '': at /root/.cache/bazel/_bazel_root/edfec97661350df226696afb5a35c874/external/org_tensorflow/tensorflow/workspace2.bzl:10:6: initialization of module 'third_party/py/python_configure.bzl' failed
INFO: Elapsed time: 27.731s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
#18 28.39 Fetching @llvm-raw; Restarting.
#18 28.39 Fetching https://storage.googleapis.com/...8ee0471b67a40403df940149.tar.gz
#18 28.39
#18 ERROR: process "/bin/sh -c bazel build --color=yes --curses=yes saxml/bin:admin_config && cp bazel-bin/saxml/bin/admin_config_/admin_config /usr/bin/admin_config" did not complete successfully: exit code: 1
------
> [build-admin-server-image 2/10] RUN bazel build --color=yes --curses=yes saxml/bin:admin_config && cp bazel-bin/saxml/bin/admin_config_/admin_config /usr/bin/admin_config:
#18 28.39
#18 28.31 /root/.cache/bazel/_bazel_root/edfec97661350df226696afb5a35c874/external/bazel_tools/tools/build_defs/repo/http.bzl:355:31: in <toplevel>
ERROR: error loading package '': at /root/.cache/bazel/_bazel_root/edfec97661350df226696afb5a35c874/external/org_tensorflow/tensorflow/workspace2.bzl:10:6: initialization of module 'third_party/py/python_configure.bzl' failed
#18 28.36 Loading: 0 packages loaded
#18 28.36 Fetching @llvm-raw; Restarting.
#18 28.36 Fetching https://storage.googleapis.com/...8ee0471b67a40403df940149.tar.gz
------
Dockerfile:117
--------------------
116 |
117 | >>> RUN bazel build --color=yes --curses=yes saxml/bin:admin_config && \
118 | >>> cp bazel-bin/saxml/bin/admin_config_/admin_config /usr/bin/admin_config
119 | RUN bazel build --color=yes --curses=yes saxml/bin:admin_server && \
--------------------
ERROR: failed to solve: process "/bin/sh -c bazel build --color=yes --curses=yes saxml/bin:admin_config && cp bazel-bin/saxml/bin/admin_config_/admin_config /usr/bin/admin_config" did not complete successfully: exit code: 1
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61303/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61303/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61302 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61302/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61302/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61302/events | https://github.com/tensorflow/tensorflow/pull/61302 | 1,808,451,366 | PR_kwDOArmXAs5VtSqQ | 61,302 | Tfl-to-TOSA lowering for 'tfl.range' and 'tfl.shape' ops | {
"login": "rafaelubalmw",
"id": 133914350,
"node_id": "U_kgDOB_te7g",
"avatar_url": "https://avatars.githubusercontent.com/u/133914350?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafaelubalmw",
"html_url": "https://github.com/rafaelubalmw",
"followers_url": "https://api.github.com/users/rafaelubalmw/followers",
"following_url": "https://api.github.com/users/rafaelubalmw/following{/other_user}",
"gists_url": "https://api.github.com/users/rafaelubalmw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafaelubalmw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafaelubalmw/subscriptions",
"organizations_url": "https://api.github.com/users/rafaelubalmw/orgs",
"repos_url": "https://api.github.com/users/rafaelubalmw/repos",
"events_url": "https://api.github.com/users/rafaelubalmw/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafaelubalmw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1169365682,
"node_id": "MDU6TGFiZWwxMTY5MzY1Njgy",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:L",
"name": "size:L",
"color": "adafea",
"default": false,
"description": "CL Change Size: Large"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"A couple of broad comments. It's not clear these belong in a TFL to TOSA pass, as neither of them are generating TOSA, rather shape/arith/tensor/etc. They could be separated out into a separate pass doing TFL to tensor (or similar).\r\n\r\nIn addition, I'm working to propose DIM/tosa.dim as a new operator inside TOSA. Then the tfl.shape could be legalized with pure TOSA ops using tosa.dim instead of going through the shape dialect. Some additional work would go into TOSA so ops such as RESHAPE could take the calculated shape.",
"@jpienaar @eric-k256 Just pinging on this PR. Thanks!",
"Hi @rafaelubalmw Can you please resolve conflicts? Thank you!",
"We seem to be clearer now about what to do with ops that produce dialects other than TOSA. I'll wait until this PR (https://github.com/tensorflow/tensorflow/pull/61660) is merged and will rearrange this according to the new file structure.",
"Hi @rafaelubalmw Any update on this PR? Please. Thank you!",
"Hi @rafaelubalmw Can you please resolve conflicts? Thank you!",
"This PR is stale because it has been open for 14 days with no activity. It will be closed if no further activity occurs. Thank you.",
"Hi @rafaelubalmw I'm going to go ahead and close this PR, because it seems to have stalled. If you're still interested in pursing this (and responding to my comments), please feel free to reopen! Thank you for your contribution!\r\n"
] | 2023-07-17T19:24:32 | 2024-01-19T08:34:16 | 2024-01-19T08:34:10 | NONE | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61302",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61302",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61302.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61302.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61302/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61302/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61301 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61301/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61301/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61301/events | https://github.com/tensorflow/tensorflow/issues/61301 | 1,808,193,282 | I_kwDOArmXAs5rxtcC | 61,301 | Model checkpoint to tflite model | {
"login": "MaheshwariAnkit",
"id": 118751480,
"node_id": "U_kgDOBxQA-A",
"avatar_url": "https://avatars.githubusercontent.com/u/118751480?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaheshwariAnkit",
"html_url": "https://github.com/MaheshwariAnkit",
"followers_url": "https://api.github.com/users/MaheshwariAnkit/followers",
"following_url": "https://api.github.com/users/MaheshwariAnkit/following{/other_user}",
"gists_url": "https://api.github.com/users/MaheshwariAnkit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MaheshwariAnkit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MaheshwariAnkit/subscriptions",
"organizations_url": "https://api.github.com/users/MaheshwariAnkit/orgs",
"repos_url": "https://api.github.com/users/MaheshwariAnkit/repos",
"events_url": "https://api.github.com/users/MaheshwariAnkit/events{/privacy}",
"received_events_url": "https://api.github.com/users/MaheshwariAnkit/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1097543484,
"node_id": "MDU6TGFiZWwxMDk3NTQzNDg0",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:runtime",
"name": "comp:runtime",
"color": "0052cc",
"default": false,
"description": "c++ runtime, performance issues (cpu)"
},
{
"id": 1661751498,
"node_id": "MDU6TGFiZWwxNjYxNzUxNDk4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TFLiteConverter",
"name": "TFLiteConverter",
"color": "bfdadc",
"default": false,
"description": "For issues related to TFLite converter"
},
{
"id": 5206407904,
"node_id": "LA_kwDOArmXAs8AAAABNlN64A",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.12",
"name": "TF 2.12",
"color": "c5def5",
"default": false,
"description": "For issues related to Tensorflow 2.12"
}
] | closed | false | {
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@MaheshwariAnkit Once you have loaded your TensorFlow Lite model into memory, you can perform inference using the tflite::Interpreter::Invoke() method:\r\n```\r\ninterpreter->Invoke();\r\n```\r\nPlease have a look at [this](https://www.tensorflow.org/lite/guide/inference) guide to know more on TFlite inference.\r\nThank you!\r\n",
"I also need some help on this.. i have tflite model and after retraining using signature runner i got model.ckpt.. now i want to merge model.ckpt and model.tflite file.. is there any available method for this? \r\n\r\n@sushreebarsa -> above link/guide does not having any information for similar problem. ",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further."
] | 2023-07-17T17:02:43 | 2023-08-09T01:52:18 | 2023-08-09T01:52:18 | NONE | null | null | null | ### 1. System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux
- TensorFlow installation (pip package or built from source): built from source
- TensorFlow library (version, if pip package or github SHA, if built from source):2.12
I am using TFLite (C++) and doing on-device training using the signature runner, saving the model.ckpt file.
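For reference, when the checkpoint is copied back to a host machine, the conversion can be driven from Python roughly as sketched below. This is only a sketch: `build_original_model()` and the file paths are placeholders, and it assumes model.ckpt is a standard TensorFlow checkpoint that matches the original model definition.

```python
import tensorflow as tf


def build_original_model():
    # Placeholder: stands in for the real architecture that produced model.ckpt.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1),
    ])


model = build_original_model()

# Assumes model.ckpt is a standard TensorFlow checkpoint compatible with this model.
model.load_weights("/data/model_save/model.ckpt")  # placeholder path

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("model_updated.tflite", "wb") as f:
    f.write(tflite_model)
```

That route goes through the Python converter on the host, so it does not answer the C++ part of the question below.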
I want to convert model.ckpt to a TFLite model for inference. Is there any possible method using C++? | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61301/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61300 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61300/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61300/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61300/events | https://github.com/tensorflow/tensorflow/pull/61300 | 1,808,079,924 | PR_kwDOArmXAs5VsAVY | 61,300 | Fix TOSA legalization for RSQRT operator | {
"login": "jamwar01",
"id": 70524219,
"node_id": "MDQ6VXNlcjcwNTI0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/70524219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamwar01",
"html_url": "https://github.com/jamwar01",
"followers_url": "https://api.github.com/users/jamwar01/followers",
"following_url": "https://api.github.com/users/jamwar01/following{/other_user}",
"gists_url": "https://api.github.com/users/jamwar01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamwar01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamwar01/subscriptions",
"organizations_url": "https://api.github.com/users/jamwar01/orgs",
"repos_url": "https://api.github.com/users/jamwar01/repos",
"events_url": "https://api.github.com/users/jamwar01/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamwar01/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 390482148,
"node_id": "MDU6TGFiZWwzOTA0ODIxNDg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/awaiting%20review",
"name": "awaiting review",
"color": "bc3869",
"default": false,
"description": "Pull request awaiting review"
},
{
"id": 987666414,
"node_id": "MDU6TGFiZWw5ODc2NjY0MTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/ready%20to%20pull",
"name": "ready to pull",
"color": "2cd643",
"default": false,
"description": "PR ready for merge process"
},
{
"id": 1169365494,
"node_id": "MDU6TGFiZWwxMTY5MzY1NDk0",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:M",
"name": "size:M",
"color": "adafea",
"default": false,
"description": "CL Change Size: Medium"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"> Its a little bit difficult to evaluate pure on code, was there numerical evaluations done here additionally? Some stats as to them?\r\n\r\nFor an int8 quantized DETR model, there was sparse -1 discrepancies when comparing the output tensors of tosa.rsqrt and tfl.rsqrt. This patch fixes the reported issue and passes any rsqrt unit tests.\r\n\r\nIt was necessary to more closely mimic the TFLite kernel's approach to calculating the rsqrt in order to ensure consistency between the two for any input (i.e. perform operations in the quantized type as opposed to in the dequantized type, as previously).",
"Hi @jpienaar Can you please assist on above comments from @jamwar01. Thank you!"
] | 2023-07-17T15:55:03 | 2023-09-15T21:11:25 | 2023-09-15T21:11:25 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61300",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61300",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61300.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61300.patch",
"merged_at": "2023-09-15T21:11:25"
} | More closely match tflite kernel's behaviour
Change-Id: Ibe1a6e24c069beddfb46cf40c665d346b88cf097 | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61300/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61300/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61299 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61299/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61299/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61299/events | https://github.com/tensorflow/tensorflow/issues/61299 | 1,807,840,710 | I_kwDOArmXAs5rwXXG | 61,299 | Parse output of `mobile_ssd_v2_float_coco.tflite` | {
"login": "caiotoledo-lunasystems",
"id": 92656601,
"node_id": "U_kgDOBYXT2Q",
"avatar_url": "https://avatars.githubusercontent.com/u/92656601?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/caiotoledo-lunasystems",
"html_url": "https://github.com/caiotoledo-lunasystems",
"followers_url": "https://api.github.com/users/caiotoledo-lunasystems/followers",
"following_url": "https://api.github.com/users/caiotoledo-lunasystems/following{/other_user}",
"gists_url": "https://api.github.com/users/caiotoledo-lunasystems/gists{/gist_id}",
"starred_url": "https://api.github.com/users/caiotoledo-lunasystems/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/caiotoledo-lunasystems/subscriptions",
"organizations_url": "https://api.github.com/users/caiotoledo-lunasystems/orgs",
"repos_url": "https://api.github.com/users/caiotoledo-lunasystems/repos",
"events_url": "https://api.github.com/users/caiotoledo-lunasystems/events{/privacy}",
"received_events_url": "https://api.github.com/users/caiotoledo-lunasystems/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473184161,
"node_id": "MDU6TGFiZWw0NzMxODQxNjE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:support",
"name": "type:support",
"color": "159b2e",
"default": false,
"description": "Support issues"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 750616506,
"node_id": "MDU6TGFiZWw3NTA2MTY1MDY=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:lite",
"name": "comp:lite",
"color": "0052cc",
"default": false,
"description": "TF Lite related issues"
},
{
"id": 4829271983,
"node_id": "LA_kwDOArmXAs8AAAABH9jXrw",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.11",
"name": "TF 2.11",
"color": "46B4D7",
"default": false,
"description": "Issues related to TF 2.11"
}
] | closed | false | {
"login": "pjpratik",
"id": 118897289,
"node_id": "U_kgDOBxY6iQ",
"avatar_url": "https://avatars.githubusercontent.com/u/118897289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pjpratik",
"html_url": "https://github.com/pjpratik",
"followers_url": "https://api.github.com/users/pjpratik/followers",
"following_url": "https://api.github.com/users/pjpratik/following{/other_user}",
"gists_url": "https://api.github.com/users/pjpratik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pjpratik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjpratik/subscriptions",
"organizations_url": "https://api.github.com/users/pjpratik/orgs",
"repos_url": "https://api.github.com/users/pjpratik/repos",
"events_url": "https://api.github.com/users/pjpratik/events{/privacy}",
"received_events_url": "https://api.github.com/users/pjpratik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "pjpratik",
"id": 118897289,
"node_id": "U_kgDOBxY6iQ",
"avatar_url": "https://avatars.githubusercontent.com/u/118897289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pjpratik",
"html_url": "https://github.com/pjpratik",
"followers_url": "https://api.github.com/users/pjpratik/followers",
"following_url": "https://api.github.com/users/pjpratik/following{/other_user}",
"gists_url": "https://api.github.com/users/pjpratik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pjpratik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjpratik/subscriptions",
"organizations_url": "https://api.github.com/users/pjpratik/orgs",
"repos_url": "https://api.github.com/users/pjpratik/repos",
"events_url": "https://api.github.com/users/pjpratik/events{/privacy}",
"received_events_url": "https://api.github.com/users/pjpratik/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @caiotoledo-lunasystems \r\n\r\nThe [TensorFlow Lite Object Detection](https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/android#tensorflow-lite-object-detection-android-demo) Demo example uses different models trained on COCO dataset. \r\n\r\nIn the example, [TensorFlow Lite Task Library](https://www.tensorflow.org/lite/api_docs/python/tflite_support/task/vision) is used to parse the output which provides simple interface to use.\r\n\r\nPlease check the [post process](https://github.com/tensorflow/tflite-support/blob/51d72eb189660da222c477b960861c0d98bfa6fc/tensorflow_lite_support/cc/task/vision/object_detector.cc#L600) function for the reference which is used by library to parse the output. Also, please check the [demo example](https://github.com/tensorflow/tflite-support/tree/master/tensorflow_lite_support/examples/task/vision/desktop) for C++ Vision Task APIs.\r\n\r\nThanks.\r\n\r\n",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"@pjpratik,\r\n\r\nThanks for the support!",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61299\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61299\">No</a>\n"
] | 2023-07-17T13:53:12 | 2023-07-27T08:08:26 | 2023-07-27T08:08:23 | NONE | null | null | null | ### Issue type
Support
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
v2.11.1
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 20.04
### Mobile device
Android
### Python version
_No response_
### Bazel version
6.2.0
### GCC/compiler version
12
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I'm trying to use the model [mobile_ssd_v2_float_coco.tflite](https://storage.googleapis.com/download.tensorflow.org/models/tflite/gpu/mobile_ssd_v2_float_coco.tflite) in a C++ application, and I'm able to execute the inference and get the results.
Based on the Netron app I see that its output is:

But I couldn't find example code showing how to parse this output.
I tried to look into https://github.com/tensorflow/tensorflow/issues/29054 and https://github.com/tensorflow/tensorflow/issues/40298 but the output of the model is different from the one provided [here](https://storage.googleapis.com/download.tensorflow.org/models/tflite/gpu/mobile_ssd_v2_float_coco.tflite).
Do you have any example code available in Java, Python, or even better in C++ to parse this model output?
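For reference, a minimal Python sketch for inspecting the model's outputs is shown below (the C++ interpreter API exposes the same information). It only dumps the output tensor names and shapes and runs one inference on a dummy image; it does not assume a particular output layout. If the outputs turn out to be raw box encodings plus per-class scores rather than post-processed detections, they still need anchor decoding and non-maximum suppression, which is what the Task Library post-process function handles.

```python
import numpy as np
import tensorflow as tf

# Placeholder path to the downloaded model file.
interpreter = tf.lite.Interpreter(model_path="mobile_ssd_v2_float_coco.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# First step: see what the model actually exposes before writing a parser.
for d in output_details:
    print(d["index"], d["name"], d["shape"], d["dtype"])

# Run one inference on a dummy image of the expected input size.
_, h, w, _ = input_details[0]["shape"]
dummy = np.random.rand(1, h, w, 3).astype(input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

outputs = [interpreter.get_tensor(d["index"]) for d in output_details]
print([o.shape for o in outputs])
```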
### Standalone code to reproduce the issue
```shell
No example code is available to parse the output of mobile_ssd_v2_float_coco.tflite.
```
### Relevant log output
_No response_ | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61299/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61298 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61298/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61298/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61298/events | https://github.com/tensorflow/tensorflow/issues/61298 | 1,807,655,395 | I_kwDOArmXAs5rvqHj | 61,298 | tensorflow keras model.predict() is not thread safe | {
"login": "henghamao",
"id": 4012446,
"node_id": "MDQ6VXNlcjQwMTI0NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4012446?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/henghamao",
"html_url": "https://github.com/henghamao",
"followers_url": "https://api.github.com/users/henghamao/followers",
"following_url": "https://api.github.com/users/henghamao/following{/other_user}",
"gists_url": "https://api.github.com/users/henghamao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/henghamao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/henghamao/subscriptions",
"organizations_url": "https://api.github.com/users/henghamao/orgs",
"repos_url": "https://api.github.com/users/henghamao/repos",
"events_url": "https://api.github.com/users/henghamao/events{/privacy}",
"received_events_url": "https://api.github.com/users/henghamao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 1097546578,
"node_id": "MDU6TGFiZWwxMDk3NTQ2NTc4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:keras",
"name": "comp:keras",
"color": "0052cc",
"default": false,
"description": "Keras related issues"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | closed | false | {
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@henghamao I have tried to replicate the code in both TF v[2.12](https://colab.research.google.com/gist/sushreebarsa/e16d74cbeafeed276592a84d5034591c/61298.ipynb) and [2.13](https://colab.research.google.com/gist/sushreebarsa/c469631cfe2252acf1a871f92496cf8a/61298.ipynb#scrollTo=7oZEGiY_t-0o). I haven't faced the issue reported here. Could you find the attached gists and confirm the same? Please let me know if i am missing something to replicate the issue.\r\nThank you!",
"Thanks for looking at the issue.\r\nThe issue happend with Python 3.11, and we seldom observed the issue with Python 3.8.\r\nIn addition, the model prediction were execueted with multi-thread.\r\nHere is an example to show the major process:\r\n```\r\nimport threading\r\nimport numpy as np\r\nimport tensorflow as tf\r\nfrom models.model_attention import AttentionModel\r\nfrom models.model_attention_three_category import AttentionThreeCategoryModel\r\n\r\n\r\nclass PredictTask(object):\r\n def __init__(self, thread_num):\r\n self.thread_num = thread_num\r\n self.result = []\r\n # modified to the local model path\r\n self.model1 = AttentionThreeCategoryModel(40, 20, 10)\r\n self.model1.load_model('/data/model_save')\r\n self.model2 = AttentionModel(40, 20, 1)\r\n self.model2.load_model('/data/model_save')\r\n\r\n self.sl = []\r\n for i in range(8000):\r\n # generated data to predict\r\n self.sl.append(np.random.random((1, 40, 20)))\r\n return\r\n\r\n def start(self):\r\n pool = []\r\n for i in range(min(self.thread_num, len(self.sl))):\r\n t = threading.Thread(target=self.predict_thread, args=(i,))\r\n pool.append(t)\r\n t.start()\r\n for t in pool:\r\n t.join()\r\n return\r\n\r\n def predict_thread(self, tid):\r\n loop = tid\r\n cnt = 0\r\n print('Predict thread %d starting...' % tid)\r\n while loop < len(self.sl):\r\n cnt += 1\r\n try:\r\n self.predict(self.sl[loop])\r\n if cnt % 10 == 0:\r\n print(\"Thread %d, predict progress: %d/%d\" % (tid, loop, len(self.sl)))\r\n loop += self.thread_num\r\n except Exception as e:\r\n print('Fail to predict stock: %s'%self.sl[loop].stock_sym)\r\n print(e)\r\n continue\r\n \r\n print('Predict thread %d completed, finished task number %d.' % (tid, cnt)) \r\n\r\n def predict(self, x):\r\n try:\r\n pred1 = self.model2.model.predict(x)\r\n pred2 = self.model1.model.predict(x)\r\n except Exception as ex:\r\n print(ex)\r\n exit(0)\r\n return (pred1, pred2) \r\n\r\ndef main():\r\n p = PredictTask(8)\r\n p.start()\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```",
"@henghamao You should use multi processing to utilize all your resources as multi threading is not recommended mostly. Loading several models in several threads have its own overhead. Could you try increasing the batch size? \r\n\r\nThank you!",
"In the code, we didnot load models in several threads. We loaded the model in the main process, and used multi-thread to do the predictions.\r\nAnd we found out a simple workaround is to retry model prediction while the exception raised. With this method, we could get all correct prediction results, though we still observed exception raised sometimes.\r\n```\r\ndef predict(self, x, tf_server=False, port=8501, model_path='', step=0):\r\n if tf_server:\r\n return self.predict_tf_server_grpc(x, port, step)\r\n pred = None\r\n fail_cnt = 0\r\n max_fail = 5\r\n while fail_cnt < max_fail:\r\n try:\r\n if not self.model_trained:\r\n print('try to load model ...\\n')\r\n self.load_model(model_path)\r\n self.model_trained = True\r\n if step == 0:\r\n pred = self.model.predict(x)\r\n else:\r\n pred = self.model.predict(x, steps=step)\r\n break\r\n except Exception as ex:\r\n print(ex)\r\n fail_cnt += 1\r\n return pred\r\n```",
"@henghamao Could you please downgrade the python version and retry the training as it is not replicating the reported error there. TF v2.13 is also compatible with that. Thank you!",
"Downgrade python to 3.8, we did not observe the issue.",
"Hi, \r\n\r\nThanks for reporting the issue.\r\n\r\nTensorflow in general is not considered as Thread safe, since it has the [GIL](https://wiki.python.org/moin/GlobalInterpreterLock) concept, which prevents threads from executing Python code simultaneously in a single process.\r\n\r\nEven if your code has executed successfully in other python version, it does not guarantee or makes a valid use case for Tensorflow to work thread safe.",
"Thanks for the clarifications.\r\nFor multi-thread model predictions, we would use tf serving.\r\nAnd we could close the issue.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61298\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61298\">No</a>\n"
] | 2023-07-17T12:19:06 | 2023-08-11T16:56:23 | 2023-08-11T16:56:19 | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
tf 2.13.0
### Custom code
Yes
### OS platform and distribution
Linux CentOS 7.9
### Mobile device
_No response_
### Python version
3.11.4
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
We executed model.predict() in multiple threads, and sometimes the code raised the exception: 'Functional' object has no attribute 'predict_function'.
### Standalone code to reproduce the issue
```python
def predict(self, x, tf_server=False, port=8501, model_path='', step=0):
if tf_server:
return self.predict_tf_server_grpc(x, port, step)
pred = None
try:
if not self.model_trained:
print('try to load model ...\n')
self.load_model(model_path)
self.model_trained = True
if step == 0:
pred = self.model.predict(x)
else:
pred = self.model.predict(x, steps=step)
except Exception as ex:
print(ex)
return pred
```
When the exception was raised, we executed "self.model.predict(x)" again in the debug window, and it returned the correct prediction results.
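A minimal workaround sketch (not from the original report), assuming the goal is simply to share one Keras model safely across prediction threads: guard `predict` with a lock so threads cannot race while Keras lazily builds the model's `predict_function`. The model path below is a placeholder.
```python
import threading

import numpy as np
import tensorflow as tf

# Illustrative shared model and lock; the path is a placeholder.
model = tf.keras.models.load_model("/data/model_save")
predict_lock = threading.Lock()


def thread_safe_predict(x: np.ndarray) -> np.ndarray:
    # Serialize predictions so worker threads do not race on the
    # shared model's lazily created predict_function.
    with predict_lock:
        return model.predict(x, verbose=0)
```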
### Relevant log output
_No response_ | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61298/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61298/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61297 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61297/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61297/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61297/events | https://github.com/tensorflow/tensorflow/issues/61297 | 1,807,374,244 | I_kwDOArmXAs5rulek | 61,297 | CTC Loss errors on TPU | {
"login": "sronen71",
"id": 4361027,
"node_id": "MDQ6VXNlcjQzNjEwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4361027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sronen71",
"html_url": "https://github.com/sronen71",
"followers_url": "https://api.github.com/users/sronen71/followers",
"following_url": "https://api.github.com/users/sronen71/following{/other_user}",
"gists_url": "https://api.github.com/users/sronen71/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sronen71/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sronen71/subscriptions",
"organizations_url": "https://api.github.com/users/sronen71/orgs",
"repos_url": "https://api.github.com/users/sronen71/repos",
"events_url": "https://api.github.com/users/sronen71/events{/privacy}",
"received_events_url": "https://api.github.com/users/sronen71/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 1097541661,
"node_id": "MDU6TGFiZWwxMDk3NTQxNjYx",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:tpus",
"name": "comp:tpus",
"color": "0052cc",
"default": false,
"description": "tpu, tpuestimator"
},
{
"id": 5206407904,
"node_id": "LA_kwDOArmXAs8AAAABNlN64A",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.12",
"name": "TF 2.12",
"color": "c5def5",
"default": false,
"description": "For issues related to Tensorflow 2.12"
}
] | open | false | {
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Correction: it doesn't crash on TPU, only nonterminating errors and nan loss as described.\r\n(the log above was wrong due to setting jit_compile=True while on TPU).\r\n\r\nCorrected error log is:\r\n\r\nWARNING:tensorflow:From /usr/local/lib/python3.8/site-packages/tensorflow/python/ops/ctc_ops.py:1512: alias_inplace_add (from tensorflow.python.ops.inplace_ops) is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nPrefer tf.tensor_scatter_nd_add, which offers the same functionality with well-defined read-write semantics.\r\n\r\nWARNING:tensorflow:From /usr/local/lib/python3.8/site-packages/tensorflow/python/ops/ctc_ops.py:1512: alias_inplace_add (from tensorflow.python.ops.inplace_ops) is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nPrefer tf.tensor_scatter_nd_add, which offers the same functionality with well-defined read-write semantics.\r\n\r\nWARNING:tensorflow:From /usr/local/lib/python3.8/site-packages/tensorflow/python/ops/ctc_ops.py:1495: alias_inplace_update (from tensorflow.python.ops.inplace_ops) is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nPrefer tf.tensor_scatter_nd_update, which offers the same functionality with well-defined read-write semantics.\r\n\r\nWARNING:tensorflow:From /usr/local/lib/python3.8/site-packages/tensorflow/python/ops/ctc_ops.py:1495: alias_inplace_update (from tensorflow.python.ops.inplace_ops) is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nPrefer tf.tensor_scatter_nd_update, which offers the same functionality with well-defined read-write semantics.\r\n2023-07-17 09:38:42.402571: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:954] model_pruner failed: INVALID_ARGUMENT: Graph does not contain terminal node AssignAddVariableOp.\r\n2023-07-17 09:38:42.770611: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:954] model_pruner failed: INVALID_ARGUMENT: Graph does not contain terminal node AssignAddVariableOp.\r\n\r\n 98/100 [============================>.] - ETA: 0s - loss: nan\r\n\r\n2023-07-17 09:38:58.284868: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:954] model_pruner failed: INVALID_ARGUMENT: Graph does not contain terminal node AssignAddVariableOp.\r\n2023-07-17 09:38:58.523500: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:954] model_pruner failed: INVALID_ARGUMENT: Graph does not contain terminal node AssignAddVariableOp.\r\n\r\n100/100 [==============================] - 33s 138ms/step - loss: nan - val_loss: nan\r\n",
"Hi @sronen71,\r\n\r\nI have tried to replicate the reported behaviour in Google Colab but fails due to import issues as per attached gist. Could you please provide the code snippet in the form of colab [gist](https://colab.research.google.com/gist/SuryanarayanaY/2cb59dcb173284c2a5b7c35feecec687/61297_ctc-example.ipynb) that will help us largely.\r\n\r\nAlso please have a look on the TPU supported Ops of Tensorflow [here](https://cloud.google.com/tpu/docs/tensorflow-ops). Please check whether any Op being used in the model is not implemented on TPU ? Thanks!",
"Hi @SuryanarayanaY , thanks for having a look at it. It fails because Kaggle notebook has also access to data directories (visible in the right bar in edit mode), while the colab doesn't contain the data folder. The data can be downloaded from kaggle, but I don't know if colab supports uploading a data folder. One possible work around is to generate in colab random matrices for x_train,y_train,x_val,y_val of the right shape and type, and see if same behavior occurs. I can do it soon. \r\nThe errors do seem to indicate an unsupported TPU operator located in tensorflow tf.nn.ctc_loss. However according to tensorflow documentation it should support TPU. \r\nThe supported operators list [here](https://cloud.google.com/tpu/docs/tensorflow-ops) is very long. Also the text says it's not exhaustive. I'm not able to diagnose with it.",
"I replaced the Kaggle image data with random matrices with fixed seed so it can be easily tested in Colab. ( @SuryanarayanaY )\r\nThe Colab gist is [here](https://colab.research.google.com/gist/sronen71/9bea9743ed40a80400496b5c1c4a80d3/61297_ctc-example.ipynb)\r\nThe Kaggle Notebook is [here: ](https://www.kaggle.com/code/shaironen/ctc-example/notebook) \r\nResults:\r\nOn Kaggle, model.train(...) gives nan loss and prints some errors. TPU v3-8\r\nIn Colab, it runs fine with finite loss and no errors. TPU v2.\r\nBoth use TF==12.0\r\nIssue is not reproduced in Colab.\r\nThe TPU versions are different though. With a free colab account, I'm not able to get TPU v3-8.\r\n",
"Hi @sronen71 ,\r\n\r\nThanks for the reproducible code snippet. I can confirm on colab the code is working fine with finite loss and same can be verified in attached [colab-gist](https://colab.research.google.com/gist/SuryanarayanaY/d22b581bd876fdfdfa6340c6bb2f90e1/61297_ctc-example-2.ipynb). Whereas on Kaggle notebook the loss becoming nan with INVALID_ARGUMENT error as per your Kaggle gist.\r\n\r\nWe need to check whether the behaviour is due to TPU versions or with the Kaggle environment itself. We would let you know after hearing from concerned team. Thanks!",
"@SuryanarayanaY One core difference between Kaggle/Colab TPUs is that Colab is using the legacy \"remote\" TPU setup, while Kaggle has migrated to TPU VMs.",
"@SuryanarayanaY @djherbis \r\nCheck this new implementation:\r\nhttps://www.kaggle.com/competitions/asl-fingerspelling/discussion/426504#2356239",
"> @SuryanarayanaY @djherbis \n> Check this new implementation:\n> https://www.kaggle.com/competitions/asl-fingerspelling/discussion/426504#2356239\n\nThat's awesome! Have you filed a PR to tensorflow to try and add it?",
"No. It's not mine. I'm not familiar with the code details. "
] | 2023-07-17T09:34:24 | 2023-07-24T14:49:28 | null | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.12
### Custom code
Yes
### OS platform and distribution
Kaggle notebook
### Mobile device
_No response_
### Python version
3.10
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
A Keras model with LSTM + CTC loss runs normally on GPU,
but on a TPU VM it prints grappler errors. The errors usually don't stop execution, but the loss is nan.
It sometimes also crashes with a core dump, though not consistently.
I created a public kaggle notebook with the code producing the issue here:
https://www.kaggle.com/code/shaironen/ctc-example/notebook
The grappler errors are:
E tensorflow/core/grappler/optimizers/meta_optimizer.cc:954] model_pruner failed: INVALID_ARGUMENT: Graph does not contain terminal node AssignAddVariableOp.
2023-07-17 09:06:37.445931: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:954] model_pruner failed: INVALID_ARGUMENT: Graph does not contain terminal node AssignAddVariableOp.
E tensorflow/core/grappler/optimizers/meta_optimizer.cc:954] model_pruner failed: INVALID_ARGUMENT: Graph does not contain terminal node AssignAddVariableOp.
In addition, I read that it is recommended to use model.compile(jit_compile=True) on GPU to pre-diagnose TPU issues; with that flag it gives similar errors and terminates.
(With jit_compile=False it runs normally, but only on GPU.)
According to tf.nn.ctc_loss documentation it should work on tpu.
### Standalone code to reproduce the issue
```shell
https://www.kaggle.com/code/shaironen/ctc-example/notebook
```
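Since the reproduction above is only a notebook link, the following is a minimal self-contained sketch of the `tf.nn.ctc_loss` call pattern involved, using made-up shapes and random data (an assumption, not the notebook's actual code). It can help exercise the op itself outside the Kaggle TPU environment.
```python
import tensorflow as tf

# Illustrative shapes only, not taken from the notebook.
batch, time_steps, num_classes, max_label_len = 4, 50, 60, 10

logits = tf.random.normal([batch, time_steps, num_classes])
labels = tf.random.uniform(
    [batch, max_label_len], minval=1, maxval=num_classes, dtype=tf.int32
)
label_length = tf.fill([batch], max_label_len)
logit_length = tf.fill([batch], time_steps)

loss = tf.nn.ctc_loss(
    labels=labels,
    logits=logits,
    label_length=label_length,
    logit_length=logit_length,
    logits_time_major=False,
    blank_index=0,  # labels above are drawn from [1, num_classes)
)
print(float(tf.reduce_mean(loss)))
```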
### Relevant log output
```shell
WARNING:tensorflow:From /usr/local/lib/python3.8/site-packages/tensorflow/python/ops/ctc_ops.py:1512: alias_inplace_add (from tensorflow.python.ops.inplace_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Prefer tf.tensor_scatter_nd_add, which offers the same functionality with well-defined read-write semantics.
WARNING:tensorflow:From /usr/local/lib/python3.8/site-packages/tensorflow/python/ops/ctc_ops.py:1512: alias_inplace_add (from tensorflow.python.ops.inplace_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Prefer tf.tensor_scatter_nd_add, which offers the same functionality with well-defined read-write semantics.
WARNING:tensorflow:From /usr/local/lib/python3.8/site-packages/tensorflow/python/ops/ctc_ops.py:1495: alias_inplace_update (from tensorflow.python.ops.inplace_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Prefer tf.tensor_scatter_nd_update, which offers the same functionality with well-defined read-write semantics.
WARNING:tensorflow:From /usr/local/lib/python3.8/site-packages/tensorflow/python/ops/ctc_ops.py:1495: alias_inplace_update (from tensorflow.python.ops.inplace_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Prefer tf.tensor_scatter_nd_update, which offers the same functionality with well-defined read-write semantics.
2023-07-17 09:29:45.795306: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:954] model_pruner failed: INVALID_ARGUMENT: Graph does not contain terminal node AssignAddVariableOp.
2023-07-17 09:29:46.168913: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:954] model_pruner failed: INVALID_ARGUMENT: Graph does not contain terminal node AssignAddVariableOp.
99/100 [============================>.] - ETA: 0s - loss: nan
2023-07-17 09:30:01.881305: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:954] model_pruner failed: INVALID_ARGUMENT: Graph does not contain terminal node StatefulPartitionedCall.
2023-07-17 09:30:02.118121: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:954] model_pruner failed: INVALID_ARGUMENT: Graph does not contain terminal node StatefulPartitionedCall.
F0717 09:30:03.134464 3526 throw_delegate.cc:121] RAW: absl::container_internal::raw_hash_map<>::at
@ 0x78bd48329338 (unknown)
@ 0x78bd483271e6 (unknown)
@ 0x78bd465e0f20 (unknown)
@ 0x78bd465d6231 (unknown)
@ 0x78bd465d5b59 (unknown)
@ 0x78bd476ce3c9 (unknown)
@ 0x78bd476cf3d7 (unknown)
@ 0x78bd476ccfba (unknown)
@ 0x78bd40ef00e6 (unknown)
@ 0x78bd465d4e1f (unknown)
@ 0x78bd465d6919 (unknown)
@ 0x78bd465d662c (unknown)
@ 0x78bd46345238 (unknown)
@ 0x78bd430f4143 (unknown)
@ 0x78bd46d5f6b4 (unknown)
@ 0x78bd46d5dc50 (unknown)
@ 0x78bd46d5d54e (unknown)
@ 0x78bd430cee43 (unknown)
@ 0x78bd430de27a (unknown)
@ 0x78bd430ee05b (unknown)
@ 0x78bd430ee945 (unknown)
@ 0x78bd424648a3 (unknown)
@ 0x78bd4245e54c (unknown)
@ 0x78bd41562909 (unknown)
@ 0x78bd41563a59 (unknown)
@ 0x78bd40f32a1c TpuCompile_CompileAndBuild
@ 0x78bd515e0225 tensorflow::tpu::TpuProgramGroup::CompileAndBuild()
@ 0x78bd51530d6f tensorflow::tpu::TpuCompileOpKernelImpl::Compile()
@ 0x78bd515e34df tensorflow::tpu::TpuCompileOpKernelCommon::CompileLocallyAndFillHostCacheInternal()
@ 0x78bd515e3aa1 tensorflow::tpu::TpuCompileOpKernelCommon::CompileLocallyAndFillHostCache()
@ 0x78bd515e3c8b tensorflow::tpu::TpuCompileOpKernelCommon::ComputeInternal()::{lambda()#3}::operator()()
@ 0x78bd515e3d6c std::_Function_handler<>::_M_invoke()
@ 0x78bd515cc070 tensorflow::tpu::TpuCompilationCacheExternal::InitializeEntry()
@ 0x78bd51612e72 tensorflow::tpu::TpuCompilationCacheInterface::CompileIfKeyAbsentHelper()
@ 0x78bd5161391a tensorflow::tpu::TpuCompilationCacheInterface::CompileIfKeyAbsent()
@ 0x78bd515e4323 tensorflow::tpu::TpuCompileOpKernelCommon::ComputeInternal()
@ 0x78bd515e6fc4 tensorflow::tpu::TpuCompileOpKernelCommon::Compute()
@ 0x78bd62de6b6d tensorflow::ThreadPoolDevice::Compute()
@ 0x78bd62ecfe2c tensorflow::(anonymous namespace)::ExecutorState<>::Process()
@ 0x78bd62eb9272 std::_Function_handler<>::_M_invoke()
@ 0x78bd621b7275 Eigen::ThreadPoolTempl<>::WorkerLoop()
@ 0x78bd621b41c7 std::_Function_handler<>::_M_invoke()
@ 0x78bd62cefabf tsl::(anonymous namespace)::PThread::ThreadFn()
@ 0x78be31abeea7 start_thread
https://symbolize.stripped_domain/r/?trace=78bd48329338,78bd483271e5,78bd465e0f1f,78bd465d6230,78bd465d5b58,78bd476ce3c8,78bd476cf3d6,78bd476ccfb9,78bd40ef00e5,78bd465d4e1e,78bd465d6918,78bd465d662b,78bd46345237,78bd430f4142,78bd46d5f6b3,78bd46d5dc4f,78bd46d5d54d,78bd430cee42,78bd430de279,78bd430ee05a,78bd430ee944,78bd424648a2,78bd4245e54b,78bd41562908,78bd41563a58,78bd40f32a1b,78bd515e0224,78bd51530d6e,78bd515e34de,78bd515e3aa0,78bd515e3c8a,78bd515e3d6b,78bd515cc06f,78bd51612e71,78bd51613919,78bd515e4322,78bd515e6fc3,78bd62de6b6c,78bd62ecfe2b,78bd62eb9271,78bd621b7274,78bd621b41c6,78bd62cefabe,78be31abeea6&map=aef7fe2e538f701f46d88df9ee3b51d79ec62b1e:78bd6181e000-78bd637f9728,8f79f803f683427be94b1cfeea32716e6ef365e4:78bd48eb8000-78bd60e0d830,1278088d049ad36cb636fbbc76303cb3:78bd3cc43000-78bd484907c0
https://symbolize.stripped_domain/r/?trace=78be31b11ce1,78be31b11d5f,78bd48329337,78bd483271e5,78bd465e0f1f,78bd465d6230,78bd465d5b58,78bd476ce3c8,78bd476cf3d6,78bd476ccfb9,78bd40ef00e5,78bd465d4e1e,78bd465d6918,78bd465d662b,78bd46345237,78bd430f4142,78bd46d5f6b3,78bd46d5dc4f,78bd46d5d54d,78bd430cee42,78bd430de279,78bd430ee05a,78bd430ee944,78bd424648a2,78bd4245e54b,78bd41562908,78bd41563a58,78bd40f32a1b,78bd515e0224,78bd51530d6e,78bd515e34de,78bd515e3aa0,78bd515e3c8a&map=8f79f803f683427be94b1cfeea32716e6ef365e4:78bd48eb8000-78bd60e0d830,1278088d049ad36cb636fbbc76303cb3:78bd3cc43000-78bd484907c0
*** SIGABRT received by PID 2614 (TID 3526) on cpu 51 from PID 2614; ***
E0717 09:30:03.650583 3526 coredump_hook.cc:414] RAW: Remote crash data gathering hook invoked.
E0717 09:30:03.650603 3526 client.cc:278] RAW: Coroner client retries enabled (b/136286901), will retry for up to 30 sec.
E0717 09:30:03.650607 3526 coredump_hook.cc:512] RAW: Sending fingerprint to remote end.
E0717 09:30:03.650615 3526 coredump_socket.cc:120] RAW: Stat failed errno=2 on socket /var/google/services/logmanagerd/remote_coredump.socket
E0717 09:30:03.650619 3526 coredump_hook.cc:518] RAW: Cannot send fingerprint to Coroner: [NOT_FOUND] Missing crash reporting socket. Is the listener running?
E0717 09:30:03.650623 3526 coredump_hook.cc:580] RAW: Dumping core locally.
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61297/timeline | null | null | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61296 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61296/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61296/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61296/events | https://github.com/tensorflow/tensorflow/pull/61296 | 1,807,369,528 | PR_kwDOArmXAs5VpkZm | 61,296 | Support for jit-ed block reorder on AArch64 | {
"login": "cfRod",
"id": 65665931,
"node_id": "MDQ6VXNlcjY1NjY1OTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/65665931?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cfRod",
"html_url": "https://github.com/cfRod",
"followers_url": "https://api.github.com/users/cfRod/followers",
"following_url": "https://api.github.com/users/cfRod/following{/other_user}",
"gists_url": "https://api.github.com/users/cfRod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cfRod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cfRod/subscriptions",
"organizations_url": "https://api.github.com/users/cfRod/orgs",
"repos_url": "https://api.github.com/users/cfRod/repos",
"events_url": "https://api.github.com/users/cfRod/events{/privacy}",
"received_events_url": "https://api.github.com/users/cfRod/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 987666414,
"node_id": "MDU6TGFiZWw5ODc2NjY0MTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/ready%20to%20pull",
"name": "ready to pull",
"color": "2cd643",
"default": false,
"description": "PR ready for merge process"
},
{
"id": 1104829434,
"node_id": "MDU6TGFiZWwxMTA0ODI5NDM0",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:mkl",
"name": "comp:mkl",
"color": "0052cc",
"default": false,
"description": "MKL related issues"
},
{
"id": 1173072136,
"node_id": "MDU6TGFiZWwxMTczMDcyMTM2",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:XL",
"name": "size:XL",
"color": "adafea",
"default": false,
"description": "CL Change Size:Extra Large"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@penpornk ",
"@penpornk done!",
"Done!"
] | 2023-07-17T09:31:55 | 2023-07-21T17:24:34 | 2023-07-21T17:24:33 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61296",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61296",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61296.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61296.patch",
"merged_at": "2023-07-21T17:24:33"
} | Re-opening a new PR for commit https://github.com/tensorflow/tensorflow/pull/61123/commits/a071af0ca8a3ea1fe242094514507f649f3ab64e | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61296/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61296/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61295 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61295/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61295/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61295/events | https://github.com/tensorflow/tensorflow/pull/61295 | 1,807,343,735 | PR_kwDOArmXAs5Vpein | 61,295 | Update oneDNN reorder | {
"login": "cfRod",
"id": 65665931,
"node_id": "MDQ6VXNlcjY1NjY1OTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/65665931?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cfRod",
"html_url": "https://github.com/cfRod",
"followers_url": "https://api.github.com/users/cfRod/followers",
"following_url": "https://api.github.com/users/cfRod/following{/other_user}",
"gists_url": "https://api.github.com/users/cfRod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cfRod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cfRod/subscriptions",
"organizations_url": "https://api.github.com/users/cfRod/orgs",
"repos_url": "https://api.github.com/users/cfRod/repos",
"events_url": "https://api.github.com/users/cfRod/events{/privacy}",
"received_events_url": "https://api.github.com/users/cfRod/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 987666414,
"node_id": "MDU6TGFiZWw5ODc2NjY0MTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/ready%20to%20pull",
"name": "ready to pull",
"color": "2cd643",
"default": false,
"description": "PR ready for merge process"
},
{
"id": 1104829434,
"node_id": "MDU6TGFiZWwxMTA0ODI5NDM0",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:mkl",
"name": "comp:mkl",
"color": "0052cc",
"default": false,
"description": "MKL related issues"
},
{
"id": 1169365682,
"node_id": "MDU6TGFiZWwxMTY5MzY1Njgy",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:L",
"name": "size:L",
"color": "adafea",
"default": false,
"description": "CL Change Size: Large"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@penpornk "
] | 2023-07-17T09:20:53 | 2023-07-18T13:01:50 | 2023-07-18T13:01:50 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61295",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61295",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61295.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61295.patch",
"merged_at": "2023-07-18T13:01:50"
} | Opening this PR again, since https://github.com/tensorflow/tensorflow/pull/61123 was reverted and the dependent PR https://github.com/tensorflow/tensorflow/pull/61093 was closed. | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61295/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61295/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61294 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61294/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61294/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61294/events | https://github.com/tensorflow/tensorflow/pull/61294 | 1,807,290,173 | PR_kwDOArmXAs5VpS3v | 61,294 | [NVIDIA TF] Optimize embedding_lookup_sparse using new grad op [PART 2/3] | {
"login": "benbarsdell",
"id": 3979096,
"node_id": "MDQ6VXNlcjM5NzkwOTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3979096?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benbarsdell",
"html_url": "https://github.com/benbarsdell",
"followers_url": "https://api.github.com/users/benbarsdell/followers",
"following_url": "https://api.github.com/users/benbarsdell/following{/other_user}",
"gists_url": "https://api.github.com/users/benbarsdell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/benbarsdell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benbarsdell/subscriptions",
"organizations_url": "https://api.github.com/users/benbarsdell/orgs",
"repos_url": "https://api.github.com/users/benbarsdell/repos",
"events_url": "https://api.github.com/users/benbarsdell/events{/privacy}",
"received_events_url": "https://api.github.com/users/benbarsdell/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 390482148,
"node_id": "MDU6TGFiZWwzOTA0ODIxNDg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/awaiting%20review",
"name": "awaiting review",
"color": "bc3869",
"default": false,
"description": "Pull request awaiting review"
},
{
"id": 987666414,
"node_id": "MDU6TGFiZWw5ODc2NjY0MTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/ready%20to%20pull",
"name": "ready to pull",
"color": "2cd643",
"default": false,
"description": "PR ready for merge process"
},
{
"id": 1169365682,
"node_id": "MDU6TGFiZWwxMTY5MzY1Njgy",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:L",
"name": "size:L",
"color": "adafea",
"default": false,
"description": "CL Change Size: Large"
},
{
"id": 1478826728,
"node_id": "MDU6TGFiZWwxNDc4ODI2NzI4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:core",
"name": "comp:core",
"color": "024391",
"default": false,
"description": "issues related to core part of tensorflow"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @cantonios, would you be able to help review this PR please?"
] | 2023-07-17T08:48:15 | 2023-08-02T21:48:05 | 2023-08-02T21:48:04 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61294",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61294",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61294.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61294.patch",
"merged_at": "2023-08-02T21:48:04"
} | Sequel to https://github.com/tensorflow/tensorflow/pull/60383
(This is a rebased version of https://github.com/benbarsdell/tensorflow/pull/1)
cc @cantonios @nluehr @pjannaty | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61294/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61293 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61293/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61293/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61293/events | https://github.com/tensorflow/tensorflow/issues/61293 | 1,807,031,686 | I_kwDOArmXAs5rtR2G | 61,293 | Unable to hide TPUs | {
"login": "AakashKumarNain",
"id": 11736571,
"node_id": "MDQ6VXNlcjExNzM2NTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/11736571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AakashKumarNain",
"html_url": "https://github.com/AakashKumarNain",
"followers_url": "https://api.github.com/users/AakashKumarNain/followers",
"following_url": "https://api.github.com/users/AakashKumarNain/following{/other_user}",
"gists_url": "https://api.github.com/users/AakashKumarNain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AakashKumarNain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AakashKumarNain/subscriptions",
"organizations_url": "https://api.github.com/users/AakashKumarNain/orgs",
"repos_url": "https://api.github.com/users/AakashKumarNain/repos",
"events_url": "https://api.github.com/users/AakashKumarNain/events{/privacy}",
"received_events_url": "https://api.github.com/users/AakashKumarNain/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 1097541661,
"node_id": "MDU6TGFiZWwxMDk3NTQxNjYx",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:tpus",
"name": "comp:tpus",
"color": "0052cc",
"default": false,
"description": "tpu, tpuestimator"
},
{
"id": 5206407904,
"node_id": "LA_kwDOArmXAs8AAAABNlN64A",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.12",
"name": "TF 2.12",
"color": "c5def5",
"default": false,
"description": "For issues related to Tensorflow 2.12"
}
] | open | false | {
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@AakashKumarNain Please follow [this](https://www.tensorflow.org/guide/tpu) guide to know more about the usability of TPU using TF. I tried to replicate the issue as mentioned above, and didn't face the error reported. I was able to run the code as expected. Could you please have a look at this [gist](https://colab.research.google.com/gist/sushreebarsa/07e0472488f87c20c3a943e9b9a10946/61293.ipynb#scrollTo=bvzFrxESe2Wh) and confirm the same. In case you are using multiple TPUs you can also detect the [TPUs](https://colab.research.google.com/notebooks/tpu.ipynb#scrollTo=FpvUOuC3j27n) as below;\r\n\r\n```\r\nimport tensorflow as tf\r\nprint(\"Tensorflow version \" + tf.__version__)\r\n\r\ntry:\r\n tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection\r\n print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])\r\nexcept ValueError:\r\n raise BaseException('ERROR: Not connected to a TPU runtime; please see the previous cell in this notebook for instructions!')\r\n\r\ntf.config.experimental_connect_to_cluster(tpu)\r\ntf.tpu.experimental.initialize_tpu_system(tpu)\r\ntpu_strategy = tf.distribute.TPUStrategy(tpu)\r\n\r\n\r\n```\r\n Thank you!",
"@sushreebarsa I am not asking `How to use TPUs with TF?`. I have different problem on hand. I have a TPU cluster but I don't want TF to use it. I don't want TF to use any of the TPUs for processing. I want to hide the TPUs and move everything to CPU only. Please go through the issue again for the details",
"@AakashKumarNain Sorry for the misread. \r\n@sachinprasadhs Could you please have a look at this issue. Thank you!",
"@sachinprasadhs any suggestions?",
"`tf.config.set_visible_devices` `device_type` works for `CPU` or `GPU`, other devices will be left [unaltered](https://www.tensorflow.org/api_docs/python/tf/config/set_visible_devices#args).\r\n\r\nFor TPUs, you need to configure the logical devices as something like below.\r\n\r\n```\r\nlogical_devices = tf.config.list_logical_devices('TPU')\r\nfor logical_device in logical_devices:\r\n tf.config.set_logical_device_configuration(\r\n logical_device,\r\n [tf.config.LogicalDeviceConfiguration(), ]\r\n )\r\n```",
"Thanks for the answer but this doesn't work. Error below\r\n\r\n```\r\nValueError: Unrecognized device: LogicalDevice(name='/device:TPU:0', device_type='TPU')\r\n```",
"Could you please attach the outcome of `tf.config.list_logical_devices()`",
"I have already done that in the code snippet at the top",
"Any updates on this?",
"Team is currently looking into this.\r\nMeantime, can you please check by calling `context._reset_context()` to remove the TPUs from the logical device list. "
] | 2023-07-17T05:53:45 | 2023-08-03T18:21:40 | null | MEMBER | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
2.12
### Custom code
No
### OS platform and distribution
Kaggle Notebooks
### Mobile device
_No response_
### Python version
3.8
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Unable to hide TPUs from TensorFlow. The consequence of this is that if we want to use JAX along with TensorFlow, only one of them will be able to initialize the TPU system, and the other will fail. We won't be able to use `tfds`, `tf.image` or any TF operation per se if we can't hide TPUs from being used by TF. I want all these operations to run on CPU only, and leverage JAX for TPU. Here is the code to test it on a TPU machine:
```python
import tensorflow as tf
tf.config.set_visible_devices([], device_type="TPU_SYSTEM")
tf.config.set_visible_devices([], device_type="TPU")
print(tf.config.list_logical_devices())
# output:
# [LogicalDevice(name='/device:CPU:0', device_type='CPU'),
# LogicalDevice(name='/device:TPU_SYSTEM:0', device_type='TPU_SYSTEM'),
# LogicalDevice(name='/device:TPU:0', device_type='TPU'),
# LogicalDevice(name='/device:TPU:1', device_type='TPU'),
# LogicalDevice(name='/device:TPU:2', device_type='TPU'),
# LogicalDevice(name='/device:TPU:3', device_type='TPU'),
# LogicalDevice(name='/device:TPU:4', device_type='TPU'),
# LogicalDevice(name='/device:TPU:5', device_type='TPU'),
# LogicalDevice(name='/device:TPU:6', device_type='TPU'),
# LogicalDevice(name='/device:TPU:7', device_type='TPU')]
```
This also doesn't work:
```python
physical_devices = tf.config.list_physical_devices()
tf.config.set_visible_devices(physical_devices[0], 'CPU')
```
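A possible partial workaround sketch (an assumption, not a documented way to hide TPUs from TensorFlow): never connect to the TPU cluster from TF, and pin the ops that must run (`tf.data`, `tf.image`, etc.) to the CPU with `tf.device`. The TPU entries still show up in `tf.config.list_logical_devices()`, but the ops are placed on the CPU.
```python
import tensorflow as tf

# Do not call tf.distribute.cluster_resolver.TPUClusterResolver() or
# tf.tpu.experimental.initialize_tpu_system(); just pin ops to the CPU.
with tf.device("/CPU:0"):
    images = tf.random.uniform([8, 224, 224, 3])
    resized = tf.image.resize(images, [128, 128])

print(resized.device)  # expected to report a CPU device
```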
### Standalone code to reproduce the issue
```python
import tensorflow as tf
tf.config.set_visible_devices([], device_type="TPU_SYSTEM")
tf.config.set_visible_devices([], device_type="TPU")
print(tf.config.list_logical_devices())
# This also doesn't work:
physical_devices = tf.config.list_physical_devices()
tf.config.set_visible_devices(physical_devices[0], 'CPU')
```
### Relevant log output
_No response_ | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61293/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61293/timeline | null | null | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61292 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61292/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61292/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61292/events | https://github.com/tensorflow/tensorflow/pull/61292 | 1,806,747,381 | PR_kwDOArmXAs5VncHf | 61,292 | Prevent callback copies. Make lambda as mutable. | {
"login": "pateldeev",
"id": 42748234,
"node_id": "MDQ6VXNlcjQyNzQ4MjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/42748234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pateldeev",
"html_url": "https://github.com/pateldeev",
"followers_url": "https://api.github.com/users/pateldeev/followers",
"following_url": "https://api.github.com/users/pateldeev/following{/other_user}",
"gists_url": "https://api.github.com/users/pateldeev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pateldeev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pateldeev/subscriptions",
"organizations_url": "https://api.github.com/users/pateldeev/orgs",
"repos_url": "https://api.github.com/users/pateldeev/repos",
"events_url": "https://api.github.com/users/pateldeev/events{/privacy}",
"received_events_url": "https://api.github.com/users/pateldeev/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 987666414,
"node_id": "MDU6TGFiZWw5ODc2NjY0MTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/ready%20to%20pull",
"name": "ready to pull",
"color": "2cd643",
"default": false,
"description": "PR ready for merge process"
},
{
"id": 1169364458,
"node_id": "MDU6TGFiZWwxMTY5MzY0NDU4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:S",
"name": "size:S",
"color": "adafea",
"default": false,
"description": "CL Change Size: Small"
},
{
"id": 1478826728,
"node_id": "MDU6TGFiZWwxNDc4ODI2NzI4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:core",
"name": "comp:core",
"color": "024391",
"default": false,
"description": "issues related to core part of tensorflow"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-07-16T22:58:06 | 2023-07-17T17:38:30 | 2023-07-17T17:38:30 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61292",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61292",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61292.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61292.patch",
"merged_at": "2023-07-17T17:38:30"
} | I noticed extra copies of the callbacks.
By default, lambdas capture with const semantics. They need to be marked mutable to move the captured callback.
See https://godbolt.org/z/3Ec6zcK8r
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61292/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61291 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61291/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61291/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61291/events | https://github.com/tensorflow/tensorflow/issues/61291 | 1,806,720,946 | I_kwDOArmXAs5rsF-y | 61,291 | Issue still tittle | {
"login": "RubyRusul",
"id": 51497057,
"node_id": "MDQ6VXNlcjUxNDk3MDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/51497057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RubyRusul",
"html_url": "https://github.com/RubyRusul",
"followers_url": "https://api.github.com/users/RubyRusul/followers",
"following_url": "https://api.github.com/users/RubyRusul/following{/other_user}",
"gists_url": "https://api.github.com/users/RubyRusul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RubyRusul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RubyRusul/subscriptions",
"organizations_url": "https://api.github.com/users/RubyRusul/orgs",
"repos_url": "https://api.github.com/users/RubyRusul/repos",
"events_url": "https://api.github.com/users/RubyRusul/events{/privacy}",
"received_events_url": "https://api.github.com/users/RubyRusul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 473173272,
"node_id": "MDU6TGFiZWw0NzMxNzMyNzI=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:feature",
"name": "type:feature",
"color": "159b2e",
"default": false,
"description": "Feature requests"
},
{
"id": 1593512946,
"node_id": "MDU6TGFiZWwxNTkzNTEyOTQ2",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/invalid",
"name": "invalid",
"color": "db6f57",
"default": true,
"description": "Hacktoberfest spam PR"
},
{
"id": 2012480497,
"node_id": "MDU6TGFiZWwyMDEyNDgwNDk3",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:docs-feature",
"name": "type:docs-feature",
"color": "159b2e",
"default": false,
"description": "Doc issues for new feature, or clarifications about functionality"
}
] | closed | true | {
"login": "Varsha-anjanappa",
"id": 137163810,
"node_id": "U_kgDOCCz0Ig",
"avatar_url": "https://avatars.githubusercontent.com/u/137163810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Varsha-anjanappa",
"html_url": "https://github.com/Varsha-anjanappa",
"followers_url": "https://api.github.com/users/Varsha-anjanappa/followers",
"following_url": "https://api.github.com/users/Varsha-anjanappa/following{/other_user}",
"gists_url": "https://api.github.com/users/Varsha-anjanappa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Varsha-anjanappa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Varsha-anjanappa/subscriptions",
"organizations_url": "https://api.github.com/users/Varsha-anjanappa/orgs",
"repos_url": "https://api.github.com/users/Varsha-anjanappa/repos",
"events_url": "https://api.github.com/users/Varsha-anjanappa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Varsha-anjanappa/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Varsha-anjanappa",
"id": 137163810,
"node_id": "U_kgDOCCz0Ig",
"avatar_url": "https://avatars.githubusercontent.com/u/137163810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Varsha-anjanappa",
"html_url": "https://github.com/Varsha-anjanappa",
"followers_url": "https://api.github.com/users/Varsha-anjanappa/followers",
"following_url": "https://api.github.com/users/Varsha-anjanappa/following{/other_user}",
"gists_url": "https://api.github.com/users/Varsha-anjanappa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Varsha-anjanappa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Varsha-anjanappa/subscriptions",
"organizations_url": "https://api.github.com/users/Varsha-anjanappa/orgs",
"repos_url": "https://api.github.com/users/Varsha-anjanappa/repos",
"events_url": "https://api.github.com/users/Varsha-anjanappa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Varsha-anjanappa/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"> ### Issue type\r\n> Documentation Feature Request\r\n> \r\n> ### Have you reproduced the bug with TensorFlow Nightly?\r\n> Yes\r\n> \r\n> ### Source\r\n> source\r\n> \r\n> ### TensorFlow version\r\n> Tf2.8\r\n> \r\n> ### Custom code\r\n> Yes\r\n> \r\n> ### OS platform and distribution\r\n> _No response_\r\n> \r\n> ### Mobile device\r\n> _No response_\r\n> \r\n> ### Python version\r\n> _No response_\r\n> \r\n> ### Bazel version\r\n> _No response_\r\n> \r\n> ### GCC/compiler version\r\n> _No response_\r\n> \r\n> ### CUDA/cuDNN version\r\n> _No response_\r\n> \r\n> ### GPU model and memory\r\n> _No response_\r\n> \r\n> ### Current behavior?\r\n> What you see what I'm see\r\n> \r\n> ### Standalone code to reproduce the issue\r\n> ```shell\r\n> Us see you\r\n> ```\r\n> \r\n> ### Relevant log output\r\n> ```shell\r\n> Productive projects\r\n> ```\r\n\r\nHi",
"Please don't spam"
] | 2023-07-16T21:10:22 | 2023-07-16T21:55:31 | 2023-07-16T21:55:30 | NONE | spam | null | null | ### Issue type
Documentation Feature Request
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
Tf2.8
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
What you see what I'm see
### Standalone code to reproduce the issue
```shell
Us see you
```
### Relevant log output
```shell
Productive projects
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61291/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61291/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61290 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61290/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61290/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61290/events | https://github.com/tensorflow/tensorflow/issues/61290 | 1,806,677,893 | I_kwDOArmXAs5rr7eF | 61,290 | Add ComplexOp to TFLite | {
"login": "drubinstein",
"id": 577149,
"node_id": "MDQ6VXNlcjU3NzE0OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/577149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drubinstein",
"html_url": "https://github.com/drubinstein",
"followers_url": "https://api.github.com/users/drubinstein/followers",
"following_url": "https://api.github.com/users/drubinstein/following{/other_user}",
"gists_url": "https://api.github.com/users/drubinstein/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drubinstein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drubinstein/subscriptions",
"organizations_url": "https://api.github.com/users/drubinstein/orgs",
"repos_url": "https://api.github.com/users/drubinstein/repos",
"events_url": "https://api.github.com/users/drubinstein/events{/privacy}",
"received_events_url": "https://api.github.com/users/drubinstein/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 390482148,
"node_id": "MDU6TGFiZWwzOTA0ODIxNDg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/awaiting%20review",
"name": "awaiting review",
"color": "bc3869",
"default": false,
"description": "Pull request awaiting review"
},
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473173272,
"node_id": "MDU6TGFiZWw0NzMxNzMyNzI=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:feature",
"name": "type:feature",
"color": "159b2e",
"default": false,
"description": "Feature requests"
},
{
"id": 750616506,
"node_id": "MDU6TGFiZWw3NTA2MTY1MDY=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:lite",
"name": "comp:lite",
"color": "0052cc",
"default": false,
"description": "TF Lite related issues"
},
{
"id": 2691740411,
"node_id": "MDU6TGFiZWwyNjkxNzQwNDEx",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:lite-kernels",
"name": "comp:lite-kernels",
"color": "D97FB4",
"default": false,
"description": "TensorFlow Lite kernel issues"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | open | false | {
"login": "zichuan-wei",
"id": 10928342,
"node_id": "MDQ6VXNlcjEwOTI4MzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/10928342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zichuan-wei",
"html_url": "https://github.com/zichuan-wei",
"followers_url": "https://api.github.com/users/zichuan-wei/followers",
"following_url": "https://api.github.com/users/zichuan-wei/following{/other_user}",
"gists_url": "https://api.github.com/users/zichuan-wei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zichuan-wei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zichuan-wei/subscriptions",
"organizations_url": "https://api.github.com/users/zichuan-wei/orgs",
"repos_url": "https://api.github.com/users/zichuan-wei/repos",
"events_url": "https://api.github.com/users/zichuan-wei/events{/privacy}",
"received_events_url": "https://api.github.com/users/zichuan-wei/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "zichuan-wei",
"id": 10928342,
"node_id": "MDQ6VXNlcjEwOTI4MzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/10928342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zichuan-wei",
"html_url": "https://github.com/zichuan-wei",
"followers_url": "https://api.github.com/users/zichuan-wei/followers",
"following_url": "https://api.github.com/users/zichuan-wei/following{/other_user}",
"gists_url": "https://api.github.com/users/zichuan-wei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zichuan-wei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zichuan-wei/subscriptions",
"organizations_url": "https://api.github.com/users/zichuan-wei/orgs",
"repos_url": "https://api.github.com/users/zichuan-wei/repos",
"events_url": "https://api.github.com/users/zichuan-wei/events{/privacy}",
"received_events_url": "https://api.github.com/users/zichuan-wei/received_events",
"type": "User",
"site_admin": false
},
{
"login": "pkgoogle",
"id": 132095473,
"node_id": "U_kgDOB9-d8Q",
"avatar_url": "https://avatars.githubusercontent.com/u/132095473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pkgoogle",
"html_url": "https://github.com/pkgoogle",
"followers_url": "https://api.github.com/users/pkgoogle/followers",
"following_url": "https://api.github.com/users/pkgoogle/following{/other_user}",
"gists_url": "https://api.github.com/users/pkgoogle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pkgoogle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pkgoogle/subscriptions",
"organizations_url": "https://api.github.com/users/pkgoogle/orgs",
"repos_url": "https://api.github.com/users/pkgoogle/repos",
"events_url": "https://api.github.com/users/pkgoogle/events{/privacy}",
"received_events_url": "https://api.github.com/users/pkgoogle/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@pjpratik I was able to replicate the issue, please find the [gist](https://colab.research.google.com/gist/sushreebarsa/d81e141e46491014502142931af458bc/61290.ipynb) here. Thank you!",
"Hi @drubinstein \r\n\r\nYou might need to add in\r\n\r\nhttps://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/lite/ir/tfl_ops.cc\r\nhttps://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/lite/ir/tfl_ops.td\r\nhttps://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/lite/tests/legalize-tf.mlir\r\n\r\nCan you please refer to sample commit https://github.com/tensorflow/tensorflow/commit/ace44332389423ac161c4b04e1b1ca9ca5ce8898 which adds bitcast operator into builtin ops.\r\n\r\nThanks.",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"To keep the issue open, I'm working on PR #61359.",
"in the given [link](https://github.com/tensorflow/tensorflow/compare/master...drubinstein:tensorflow:drubinstein/add-complex?expand=1) there is no support for models with `kTfLiteFloat16` eg `real_input->type == kTfLiteFloat16'\r\n\r\nmaybe using `Eigen::half`?",
"Hi @zichuan-wei, it seems like the PR needs a review so I'm assigning to you for now, thanks!"
] | 2023-07-16T18:32:37 | 2024-01-31T19:28:23 | null | CONTRIBUTOR | null | null | null | **System information**
- OS: macOS 13.4.1
- TensorFlow installed from: binary (via pip)
- TensorFlow version: 2.13.0
**Provide the text output from tflite_convert**
```python3
loc(callsite(callsite(fused["Complex:", callsite("sequential/lambda/Complex@__inference__wrapped_model_30"("/opt/homebrew/lib/python3.11/site-packages/tensorflow/python/util/dispatch.py":1176:0) at callsite("/opt/homebrew/lib/python3.11/site-packages/tensorflow/python/util/traceback_utils.py":150:0 at callsite("/Users/drubinstein/test/complex_test/complex.py":4:0 at callsite("/opt/homebrew/lib/python3.11/site-packages/keras/src/layers/core/lambda_layer.py":212:0 at callsite("/opt/homebrew/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py":96:0 at callsite("/opt/homebrew/lib/python3.11/site-packages/keras/src/engine/base_layer.py":1150:0 at callsite("/opt/homebrew/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py":65:0 at callsite("/opt/homebrew/lib/python3.11/site-packages/keras/src/engine/sequential.py":420:0 at callsite("/opt/homebrew/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py":96:0 at "/opt/homebrew/lib/python3.11/site-packages/keras/src/engine/base_layer.py":1150:0)))))))))] at fused["PartitionedCall:", callsite("PartitionedCall@__inference_signature_wrapper_126"("/opt/homebrew/lib/python3.11/site-packages/tensorflow/python/saved_model/save.py":1313:0) at callsite("/opt/homebrew/lib/python3.11/site-packages/tensorflow/python/saved_model/save.py":1280:0 at callsite("/opt/homebrew/lib/python3.11/site-packages/tensorflow/lite/python/lite.py":1427:0 at callsite("/opt/homebrew/lib/python3.11/site-packages/tensorflow/lite/python/convert_phase.py":205:0 at callsite("/opt/homebrew/lib/python3.11/site-packages/tensorflow/lite/python/lite.py":1504:0 at callsite("/opt/homebrew/lib/python3.11/site-packages/tensorflow/lite/python/lite.py":1526:0 at callsite("/opt/homebrew/lib/python3.11/site-packages/tensorflow/lite/python/lite.py":1042:0 at callsite("/opt/homebrew/lib/python3.11/site-packages/tensorflow/lite/python/lite.py":1065:0 at "/Users/drubinstein/test/complex_test/complex.py":11:0))))))))]) at fused["PartitionedCall:", "PartitionedCall"])): error: 'tf.Complex' op is neither a custom op nor a flex op
error: failed while converting: 'main':
Some ops are not supported by the native TFLite runtime, you can enable TF kernels fallback using TF Select. See instructions: https://www.tensorflow.org/lite/guide/ops_select
TF Select ops: Complex
Details:
tf.Complex(tensor<3xf32>, tensor<3xf32>) -> (tensor<3xcomplex<f32>>) : {device = ""}
Traceback (most recent call last):
File "/Users/drubinstein/test/complex_test/complex.py", line 11, in <module>
tflite_model = converter.convert()
^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/tensorflow/lite/python/lite.py", line 1065, in wrapper
return self._convert_and_export_metrics(convert_func, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/tensorflow/lite/python/lite.py", line 1042, in _convert_and_export_metrics
result = convert_func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/tensorflow/lite/python/lite.py", line 1526, in convert
saved_model_convert_result = self._convert_as_saved_model()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/tensorflow/lite/python/lite.py", line 1507, in _convert_as_saved_model
return super(TFLiteKerasModelConverterV2, self).convert(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/tensorflow/lite/python/lite.py", line 1296, in convert
result = _convert_graphdef(
^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/tensorflow/lite/python/convert_phase.py", line 212, in wrapper
raise converter_error from None # Re-throws the exception.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/tensorflow/lite/python/convert_phase.py", line 205, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/tensorflow/lite/python/convert.py", line 918, in convert_graphdef
data = convert(
^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/tensorflow/lite/python/convert.py", line 367, in convert
raise converter_error
tensorflow.lite.python.convert_phase.ConverterError: /opt/homebrew/lib/python3.11/site-packages/tensorflow/python/util/dispatch.py:1176:0: error: 'tf.Complex' op is neither a custom op nor a flex op
<unknown>:0: note: loc(fused["PartitionedCall:", "PartitionedCall"]): called from
/opt/homebrew/lib/python3.11/site-packages/tensorflow/python/util/dispatch.py:1176:0: note: Error code: ERROR_NEEDS_FLEX_OPS
<unknown>:0: error: failed while converting: 'main':
Some ops are not supported by the native TFLite runtime, you can enable TF kernels fallback using TF Select. See instructions: https://www.tensorflow.org/lite/guide/ops_select
TF Select ops: Complex
Details:
tf.Complex(tensor<3xf32>, tensor<3xf32>) -> (tensor<3xcomplex<f32>>) : {device = ""}
```
**Standalone code to reproduce the issue**
```python3
import tensorflow as tf
model = tf.keras.models.Sequential(
[tf.keras.layers.Lambda(lambda x: tf.dtypes.complex(x[0], x[1]))]
)
print(model([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]]))
# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save the model.
with open('model.tflite', 'wb') as f:
f.write(tflite_model)
```
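
For completeness: the converter error above suggests the TF Select fallback as a stopgap while the builtin kernel is missing. A minimal sketch of that workaround (it makes the model depend on the Flex delegate at runtime, so it is not a substitute for a native `Complex` op):

```python
import tensorflow as tf

model = tf.keras.models.Sequential(
    [tf.keras.layers.Lambda(lambda x: tf.dtypes.complex(x[0], x[1]))]
)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Let unsupported ops (here tf.Complex) fall back to TF kernels via TF Select/Flex.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # prefer native TFLite kernels where available
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TensorFlow kernels otherwise
]
tflite_model = converter.convert()
```
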
**Any other info / logs**
I have [begun implementing ComplexOp in TFLite](https://github.com/tensorflow/tensorflow/compare/master...drubinstein:tensorflow:drubinstein/add-complex?expand=1), but I'm not sure if what I'm doing is correct as I can't find a guide on how to add operators. I have so far done everything I think I needed to add `Complex` as a builtin in the lite directory, but I am a little stuck on what to do with MLIR. I believe I need to:
- add `ComplexOp` as a tfl op
- legalize the `ComplexOp` pattern so TF and TFL can both lower to TOSA IR
Is this correct? Is there anything I'm missing? How do I test the converter?
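
On the last question, apart from the MLIR FileCheck tests, a rough end-to-end check from Python is the best I've come up with so far. This is only a sketch: it assumes the conversion succeeds, that `tflite_model` is the flatbuffer produced above, and that the runtime has a kernel for the complex op; the input-shape handling is hypothetical.

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.array([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]], dtype=np.float32)
# If the converted model kept a dynamic/placeholder shape, resize before running.
if list(inp["shape"]) != list(x.shape):
    interpreter.resize_tensor_input(inp["index"], x.shape)
    interpreter.allocate_tensors()

interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
got = interpreter.get_tensor(out["index"])

# Compare against the TF reference result.
np.testing.assert_allclose(got, tf.dtypes.complex(x[0], x[1]).numpy())
```
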
Thanks!
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61290/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61290/timeline | null | null | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61289 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61289/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61289/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61289/events | https://github.com/tensorflow/tensorflow/issues/61289 | 1,806,571,416 | I_kwDOArmXAs5rrheY | 61,289 | Build error with Clang 16 | {
"login": "zamazan4ik",
"id": 7355383,
"node_id": "MDQ6VXNlcjczNTUzODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7355383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zamazan4ik",
"html_url": "https://github.com/zamazan4ik",
"followers_url": "https://api.github.com/users/zamazan4ik/followers",
"following_url": "https://api.github.com/users/zamazan4ik/following{/other_user}",
"gists_url": "https://api.github.com/users/zamazan4ik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zamazan4ik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zamazan4ik/subscriptions",
"organizations_url": "https://api.github.com/users/zamazan4ik/orgs",
"repos_url": "https://api.github.com/users/zamazan4ik/repos",
"events_url": "https://api.github.com/users/zamazan4ik/events{/privacy}",
"received_events_url": "https://api.github.com/users/zamazan4ik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473173351,
"node_id": "MDU6TGFiZWw0NzMxNzMzNTE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:build/install",
"name": "type:build/install",
"color": "159b2e",
"default": false,
"description": "Build and install issues"
},
{
"id": 1205615612,
"node_id": "MDU6TGFiZWwxMjA1NjE1NjEy",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/subtype:%20ubuntu/linux",
"name": "subtype: ubuntu/linux",
"color": "b619ea",
"default": false,
"description": "Ubuntu/Linux Build/Installation Issues"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | open | false | {
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @zamazan4ik ,\r\n\r\nThanks for reporting.I have to setup required environment and then will try to replicate the issue and let you know. Thanks",
"Just to note - the issue can be easily fixed by adding the missing `cstdint` header and using `std::uint64_t` and `std::size_t` types (I did just this on my local TF copy).",
"Hi @zamazan4ik ,\r\n\r\nIf you are willing to contribute please feel free to create a Pull request. Thanks!",
"had the same issue installing 2.11.1 and 2.12.1 (only ones I tested). the solution was to add `#include <cstdint>` to [tensorflow/tsl/lib/io/cache.h](https://github.com/tensorflow/tensorflow/pull/61503/files#diff-6e5525fc067db8964fa1cccbb0b8d96523399d9afde4c023bbceda5ad15f7126) as outlined in the merged PR just above this post\r\n\r\nhad to also do this here: tensorflow/lite/kernels/internal/spectrogram.cc"
] | 2023-07-16T12:53:20 | 2023-12-03T04:28:18 | null | NONE | null | null | null | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
master branch
### Custom code
No
### OS platform and distribution
Fedora 38
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
Automatically chosen via Bazelisk
### GCC/compiler version
clang version 16.0.5 (Fedora 16.0.5-1.fc38)
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Cannot build `tfcompile` from the sources on the current master with Clang 16.0.5 (from Fedora repos).
### Standalone code to reproduce the issue
```shell
* Follow the instructions at https://www.tensorflow.org/install/source
* Configure without CUDA, ROCm, etc., but with `/usr/bin/clang`
* Run `bazelisk build tensorflow/compiler/aot:tfcompile`
```
### Relevant log output
```shell
During the build, I get the following error:
bazelisk build tensorflow/compiler/aot:tfcompile
INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=183
INFO: Reading rc options for 'build' from /home/zamazan4ik/open_source/tensorflow/.bazelrc:
Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /home/zamazan4ik/open_source/tensorflow/.bazelrc:
'build' options: --define framework_shared_object=true --define tsl_protobuf_header_only=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --features=-force_no_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library --experimental_link_static_libraries_once=false --incompatible_enforce_config_setting_visibility
INFO: Reading rc options for 'build' from /home/zamazan4ik/open_source/tensorflow/.tf_configure.bazelrc:
'build' options: --action_env PYTHON_BIN_PATH=/home/zamazan4ik/open_source/tensorflow/release_env/bin/python3 --action_env PYTHON_LIB_PATH=/home/zamazan4ik/open_source/tensorflow/release_env/lib/python3.11/site-packages --python_path=/home/zamazan4ik/open_source/tensorflow/release_env/bin/python3 --action_env CLANG_COMPILER_PATH=/usr/bin/clang-16 --repo_env=CC=/usr/bin/clang-16 --repo_env=BAZEL_COMPILER=/usr/bin/clang-16 --copt=-Wno-gnu-offsetof-extensions
INFO: Reading rc options for 'build' from /home/zamazan4ik/open_source/tensorflow/.bazelrc:
'build' options: --deleted_packages=tensorflow/core/tfrt/stubs,tensorflow/compiler/mlir/tfrt,tensorflow/compiler/mlir/tfrt/benchmarks,tensorflow/compiler/mlir/tfrt/ir,tensorflow/compiler/mlir/tfrt/ir/mlrt,tensorflow/compiler/mlir/tfrt/jit/python_binding,tensorflow/compiler/mlir/tfrt/jit/transforms,tensorflow/compiler/mlir/tfrt/python_tests,tensorflow/compiler/mlir/tfrt/tests,tensorflow/compiler/mlir/tfrt/tests/mlrt,tensorflow/compiler/mlir/tfrt/tests/ir,tensorflow/compiler/mlir/tfrt/tests/analysis,tensorflow/compiler/mlir/tfrt/tests/jit,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_tfrt,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_jitrt,tensorflow/compiler/mlir/tfrt/tests/tf_to_corert,tensorflow/compiler/mlir/tfrt/tests/tf_to_tfrt_data,tensorflow/compiler/mlir/tfrt/tests/saved_model,tensorflow/compiler/mlir/tfrt/transforms/lhlo_gpu_to_tfrt_gpu,tensorflow/compiler/mlir/tfrt/transforms/mlrt,tensorflow/core/runtime_fallback,tensorflow/core/runtime_fallback/conversion,tensorflow/core/runtime_fallback/kernel,tensorflow/core/runtime_fallback/opdefs,tensorflow/core/runtime_fallback/runtime,tensorflow/core/runtime_fallback/util,tensorflow/core/tfrt/mlrt,tensorflow/core/tfrt/mlrt/attribute,tensorflow/core/tfrt/mlrt/kernel,tensorflow/core/tfrt/mlrt/bytecode,tensorflow/core/tfrt/mlrt/interpreter,tensorflow/compiler/mlir/tfrt/translate/mlrt,tensorflow/compiler/mlir/tfrt/translate/mlrt/testdata,tensorflow/core/tfrt/gpu,tensorflow/core/tfrt/run_handler_thread_pool,tensorflow/core/tfrt/runtime,tensorflow/core/tfrt/saved_model,tensorflow/core/tfrt/graph_executor,tensorflow/core/tfrt/saved_model/tests,tensorflow/core/tfrt/tpu,tensorflow/core/tfrt/utils,tensorflow/core/tfrt/utils/debug,tensorflow/core/tfrt/saved_model/python,tensorflow/core/tfrt/graph_executor/python,tensorflow/core/tfrt/saved_model/utils
INFO: Found applicable config definition build:short_logs in file /home/zamazan4ik/open_source/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /home/zamazan4ik/open_source/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:linux in file /home/zamazan4ik/open_source/tensorflow/.bazelrc: --define=build_with_onednn_v2=true --host_copt=-w --copt=-Wno-all --copt=-Wno-extra --copt=-Wno-deprecated --copt=-Wno-deprecated-declarations --copt=-Wno-ignored-attributes --copt=-Wno-array-bounds --copt=-Wunused-result --copt=-Werror=unused-result --copt=-Wswitch --copt=-Werror=switch --copt=-Wno-error=unused-but-set-variable --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++17 --host_cxxopt=-std=c++17 --config=dynamic_kernels --experimental_guard_against_concurrent_changes
INFO: Found applicable config definition build:dynamic_kernels in file /home/zamazan4ik/open_source/tensorflow/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
DEBUG: /home/zamazan4ik/open_source/tensorflow/tensorflow/tools/toolchains/python/python_repo.bzl:21:14:
TF_PYTHON_VERSION variable was not set correctly, using default version. 3.10 Python
will be used.
To set Python version, run
export TF_PYTHON_VERSION=3.9
WARNING: Download from https://mirror.bazel.build/github.com/bazelbuild/rules_cc/archive/081771d4a0e9d7d3aa0eed2ef389fa4700dfb23e.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found
WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/abseil/abseil-cpp/archive/b971ac5250ea8de900eae9f95e06548d14cd95fe.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found
WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/google/re2/archive/03da4fc0857c285e3a26782f6bc8931c4c950df4.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found
WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/openxla/stablehlo/archive/41bad512515d609ccd3896d74bf697e7d456e1d3.zip failed: class java.io.FileNotFoundException GET returned 404 Not Found
WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/google/boringssl/archive/c00d7ca810e93780bd0c8ee4eea28f4f2ea4bcdc.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found
INFO: Analyzed target //tensorflow/compiler/aot:tfcompile (332 packages loaded, 24183 targets configured).
INFO: Found 1 target...
ERROR: /home/zamazan4ik/open_source/tensorflow/tensorflow/tsl/lib/io/BUILD:205:11: Compiling tensorflow/tsl/lib/io/cache.cc [for tool] failed: (Exit 1): clang-16 failed: error executing command (from target //tensorflow/tsl/lib/io:cache) /usr/bin/clang-16 -U_FORTIFY_SOURCE -fstack-protector -Wall -Wthread-safety -Wself-assign -Wunused-but-set-parameter -Wno-free-nonheap-object -fcolor-diagnostics -fno-omit-frame-pointer -g0 -O2 ... (remaining 66 arguments skipped)
In file included from tensorflow/tsl/lib/io/cache.cc:16:
./tensorflow/tsl/lib/io/cache.h:99:11: error: unknown type name 'uint64_t'
virtual uint64_t NewId() = 0;
^
tensorflow/tsl/lib/io/cache.cc:391:20: error: only virtual member functions can be marked 'override'
uint64_t NewId() override {
^~~~~~~~~
2 errors generated.
Target //tensorflow/compiler/aot:tfcompile failed to build
```
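
For what it's worth, the error looks like a missing `<cstdint>` include in `tensorflow/tsl/lib/io/cache.h` (recent libstdc++/Clang header cleanups stopped pulling `uint64_t` in transitively). A minimal local patch that gets me past this particular error — a sketch, not necessarily the complete upstream fix:

```cpp
// tensorflow/tsl/lib/io/cache.h -- add near the top of the header:
#include <cstdint>  // uint64_t, used by the NewId() declaration at cache.h:99
#include <cstddef>  // size_t
```
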
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61289/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61289/timeline | null | null | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61288 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61288/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61288/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61288/events | https://github.com/tensorflow/tensorflow/issues/61288 | 1,806,506,425 | I_kwDOArmXAs5rrRm5 | 61,288 | GPU Usage in tensorflow | {
"login": "SirKnightV",
"id": 71390611,
"node_id": "MDQ6VXNlcjcxMzkwNjEx",
"avatar_url": "https://avatars.githubusercontent.com/u/71390611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SirKnightV",
"html_url": "https://github.com/SirKnightV",
"followers_url": "https://api.github.com/users/SirKnightV/followers",
"following_url": "https://api.github.com/users/SirKnightV/following{/other_user}",
"gists_url": "https://api.github.com/users/SirKnightV/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SirKnightV/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SirKnightV/subscriptions",
"organizations_url": "https://api.github.com/users/SirKnightV/orgs",
"repos_url": "https://api.github.com/users/SirKnightV/repos",
"events_url": "https://api.github.com/users/SirKnightV/events{/privacy}",
"received_events_url": "https://api.github.com/users/SirKnightV/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473173351,
"node_id": "MDU6TGFiZWw0NzMxNzMzNTE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:build/install",
"name": "type:build/install",
"color": "159b2e",
"default": false,
"description": "Build and install issues"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1188421838,
"node_id": "MDU6TGFiZWwxMTg4NDIxODM4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/subtype:windows",
"name": "subtype:windows",
"color": "b619ea",
"default": false,
"description": "Windows Build/Installation Issues"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | closed | false | {
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@SirKnightV If you want to use a gpu for TF , then please follow [this](https://www.tensorflow.org/guide/gpu#:~:text=TensorFlow%20supports%20running%20computations%20on,that%20is%20visible%20to%20TensorFlow.) guide. This [doc](https://github.com/tensorflow/docs/blob/master/site/en/guide/gpu.ipynb) will give you the instruction how you can use a single gpu or multiple gpus as per your requirement on one or many machines using distribution strategies. If you want to check whether the gpu is installed or not you can run the following command;\r\n```\r\nimport tensorflow as tf\r\nprint(\"Num GPUs Available: \", len(tf.config.list_physical_devices('GPU')))\r\n \r\n```\r\nPlease have a look at this [gist](https://colab.research.google.com/gist/sushreebarsa/7ebd1315789ac41fe4b139267706aa5f/61288.ipynb) as well. \r\n\r\nThank you!",
"> @SirKnightV If you want to use a gpu for TF , then please follow [this](https://www.tensorflow.org/guide/gpu#:~:text=TensorFlow%20supports%20running%20computations%20on,that%20is%20visible%20to%20TensorFlow.) guide. This [doc](https://github.com/tensorflow/docs/blob/master/site/en/guide/gpu.ipynb) will give you the instruction how you can use a single gpu or multiple gpus as per your requirement on one or many machines using distribution strategies. If you want to check whether the gpu is installed or not you can run the following command;\r\n> \r\n> ```\r\n> import tensorflow as tf\r\n> print(\"Num GPUs Available: \", len(tf.config.list_physical_devices('GPU')))\r\n> \r\n> ```\r\n> \r\n> Please have a look at this [gist](https://colab.research.google.com/gist/sushreebarsa/7ebd1315789ac41fe4b139267706aa5f/61288.ipynb) as well.\r\n> \r\n> Thank you!\r\n\r\nHello @sushreebarsa , thanks for the answer, i have tested your code\r\n```\r\nimport tensorflow as tf\r\nprint(\"Num GPUs Available: \", len(tf.config.list_physical_devices('GPU')))\r\n```\r\nbut it says Num GPUs Available: 0\r\ni see my problem is tensorflow is not detecting my gpu, when i have followed the doc https://www.tensorflow.org/install/gpu?hl=es-419 , in the docs say install cuda v11.2 for tensorflow >= 2.5.0 , my tensorflow version is 2.13.0 , i have instaled cuDNN for version cuda v11.2 , i extracted the files inside cuda instalation, later i have added to path , i have updated my drivers with driver booster pro, but still tensorflow dont detect my gpu, what cuda version i should use then?, or docs are outdated?",
"@SirKnightV Thank you for your kind response!\r\nAs per the documentation;\r\nTensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting with TensorFlow 2.11, you will need to install [TensorFlow in WSL2](https://tensorflow.org/install/pip#windows-wsl2), or install tensorflow or tensorflow-cpu and, optionally, try the [TensorFlow-DirectML-Plugin](https://github.com/microsoft/tensorflow-directml-plugin#tensorflow-directml-plugin-).\r\nThank you!",
"> @SirKnightV Thank you for your kind response! As per the documentation; TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting with TensorFlow 2.11, you will need to install [TensorFlow in WSL2](https://tensorflow.org/install/pip#windows-wsl2), or install tensorflow or tensorflow-cpu and, optionally, try the [TensorFlow-DirectML-Plugin](https://github.com/microsoft/tensorflow-directml-plugin#tensorflow-directml-plugin-). Thank you!\r\n\r\nHello @sushreebarsa , thanks for answer me again for your patience too, i have uninstalled Tensorflow 2.13.0 and installed tensorflow 2.10 , i have installed Cuda 11.8 refer in https://www.tensorflow.org/install/pip?hl=es-419 , and same version of cudNN, tensorflow is now detecting and using my gpu, but i have a question, if i install in other pc for example, with nvidia gpu too, i install ubuntu last version for example, i can use tensorflow last version with gpu without problems? if is yes what version of cuda is requirement for do that and what cudNN version too?, i hope your answer soon, and thanks :)",
"@SirKnightV Thank you for the quick response!\r\nGlad it worked fine for you.\r\nYes, you can install the latest version without any issue and to check the compatible version please follow the[ instructions](https://www.tensorflow.org/install/source#gpu) here. Kindly let us know if the issue is resolved now and move the issue to closed status?\r\nThank you!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61288\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61288\">No</a>\n"
] | 2023-07-16T09:06:06 | 2023-08-06T01:48:30 | 2023-08-06T01:48:27 | NONE | null | null | null | ### Issue type
Documentation Feature Request
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf 2.13.0
### Custom code
Yes
### OS platform and distribution
Windows 10 Pro 22H2
### Mobile device
_No response_
### Python version
3.10.0
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
8.6.0.163
### GPU model and memory
Nvidia Geforce GTX 1050TI
### Current behavior?
I want TensorFlow to use my GPU instead of the CPU, but the GPU is not being detected.
1. I followed the tutorial on this page: https://www.tensorflow.org/install/gpu?hl=es-419
2. I installed cuda_11.2.0_460.89_win10.exe from https://developer.nvidia.com/cuda-11.2.0-download-archive?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exelocal
3. I downloaded cuDNN 8.6.0 from https://developer.nvidia.com/rdp/cudnn-archive (the version the documentation says TensorFlow supports).
4. After installing CUDA (in my case at C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2), I unzipped the cuDNN .zip archive and copied its three folders into the CUDA installation path.
5. I ran the following commands, adjusting the CUDA version to my installed one:
SET PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin;%PATH%
SET PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\extras\CUPTI\lib64;%PATH%
SET PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\include;%PATH%
SET PATH=C:\tools\cuda\bin;%PATH%
setx PATH "%PATH%;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\libnvvp"
6. I restarted the PC and ran `nvcc --version` from an administrator CMD prompt, which produced the expected output:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Mon_Nov_30_19:15:10_Pacific_Standard_Time_2020
Cuda compilation tools, release 11.2, V11.2.67
Build cuda_11.2.r11.2/compiler.29373293_0
7. I also tried the tutorial at https://www.codingforentrepreneurs.com/blog/install-tensorflow-gpu-windows-cuda-cudnn/ , but I still cannot get TensorFlow to run on the GPU. I have followed the documentation, reinstalled, and repeated the whole process 4 times, but it did not work.
8. I don't know whether I am doing something wrong or have skipped a step; I would really appreciate any help.
### Standalone code to reproduce the issue
```python
# My code is the following:
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
import tensorflow as tf
gpus = tf.config.list_physical_devices('GPU')
if gpus:
for gpu in gpus:
print("GPU:", gpu)
else:
print("No Gpu for Tensorflow")
if tf.test.is_built_with_cuda():
print("Current Tensorflow version support gpu")
else:
print("Current Version dont support gpu")
```
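
In case it helps diagnose a version mismatch, I can also print what my installed wheel was actually built against (a small sketch; the exact keys may vary between TensorFlow versions):

```python
import tensorflow as tf

# Shows which CUDA/cuDNN this TensorFlow wheel was compiled against (if any).
info = tf.sysconfig.get_build_info()
print("is_cuda_build:", info.get("is_cuda_build"))
print("cuda_version: ", info.get("cuda_version"))
print("cudnn_version:", info.get("cudnn_version"))
```
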
### Relevant log output
```shell
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 15862100159162906098
xla_global_id: -1
]
No Gpu for Tensorflow
Current Version dont support gpu
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61288/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61288/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61287 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61287/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61287/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61287/events | https://github.com/tensorflow/tensorflow/issues/61287 | 1,806,415,144 | I_kwDOArmXAs5rq7Uo | 61,287 | The return value when LayerNormalization takes zero vectors as input | {
"login": "PhyllisJi",
"id": 34181680,
"node_id": "MDQ6VXNlcjM0MTgxNjgw",
"avatar_url": "https://avatars.githubusercontent.com/u/34181680?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhyllisJi",
"html_url": "https://github.com/PhyllisJi",
"followers_url": "https://api.github.com/users/PhyllisJi/followers",
"following_url": "https://api.github.com/users/PhyllisJi/following{/other_user}",
"gists_url": "https://api.github.com/users/PhyllisJi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhyllisJi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhyllisJi/subscriptions",
"organizations_url": "https://api.github.com/users/PhyllisJi/orgs",
"repos_url": "https://api.github.com/users/PhyllisJi/repos",
"events_url": "https://api.github.com/users/PhyllisJi/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhyllisJi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1097545817,
"node_id": "MDU6TGFiZWwxMDk3NTQ1ODE3",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:apis",
"name": "comp:apis",
"color": "0052cc",
"default": false,
"description": "Highlevel API related issues"
},
{
"id": 1097546578,
"node_id": "MDU6TGFiZWwxMDk3NTQ2NTc4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:keras",
"name": "comp:keras",
"color": "0052cc",
"default": false,
"description": "Keras related issues"
},
{
"id": 5206407904,
"node_id": "LA_kwDOArmXAs8AAAABNlN64A",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.12",
"name": "TF 2.12",
"color": "c5def5",
"default": false,
"description": "For issues related to Tensorflow 2.12"
}
] | closed | false | {
"login": "Varsha-anjanappa",
"id": 137163810,
"node_id": "U_kgDOCCz0Ig",
"avatar_url": "https://avatars.githubusercontent.com/u/137163810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Varsha-anjanappa",
"html_url": "https://github.com/Varsha-anjanappa",
"followers_url": "https://api.github.com/users/Varsha-anjanappa/followers",
"following_url": "https://api.github.com/users/Varsha-anjanappa/following{/other_user}",
"gists_url": "https://api.github.com/users/Varsha-anjanappa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Varsha-anjanappa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Varsha-anjanappa/subscriptions",
"organizations_url": "https://api.github.com/users/Varsha-anjanappa/orgs",
"repos_url": "https://api.github.com/users/Varsha-anjanappa/repos",
"events_url": "https://api.github.com/users/Varsha-anjanappa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Varsha-anjanappa/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Varsha-anjanappa",
"id": 137163810,
"node_id": "U_kgDOCCz0Ig",
"avatar_url": "https://avatars.githubusercontent.com/u/137163810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Varsha-anjanappa",
"html_url": "https://github.com/Varsha-anjanappa",
"followers_url": "https://api.github.com/users/Varsha-anjanappa/followers",
"following_url": "https://api.github.com/users/Varsha-anjanappa/following{/other_user}",
"gists_url": "https://api.github.com/users/Varsha-anjanappa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Varsha-anjanappa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Varsha-anjanappa/subscriptions",
"organizations_url": "https://api.github.com/users/Varsha-anjanappa/orgs",
"repos_url": "https://api.github.com/users/Varsha-anjanappa/repos",
"events_url": "https://api.github.com/users/Varsha-anjanappa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Varsha-anjanappa/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @PhyllisJi,\r\n\r\nThere is a new parameter called momentum in Batch Normalization which makes a difference during inference.\r\n\r\nFor Batch Normalization - **During inference (i.e. when using evaluate() or predict() or when calling the layer/model with the argument training=False (which is the default), the layer normalizes its output using a moving average of the mean and standard deviation of the batches it has seen during training. That is to say, it returns gamma * (batch - self.moving_mean) / sqrt(self.moving_var+epsilon) + beta.\r\n\r\nself.moving_mean and self.moving_var are non-trainable variables that are updated each time the layer in called in training mode, as such:\r\n\r\nmoving_mean = moving_mean * momentum + mean(batch) * (1 - momentum)\r\nmoving_var = moving_var * momentum + var(batch) * (1 - momentum)\r\nAs such, the layer will only normalize its inputs during inference after having been trained on data that has similar statistics as the inference data.**\r\nCheck the [documentation](https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization) for your reference\r\n\r\nWe can see by default, training = False, when training= True, we get NAN values for batch normalization.\r\n\r\nPlease find the [gist](https://colab.research.google.com/gist/Varsha-anjanappa/0f23b826355921b6831423de356b7bf5/61287.ipynb) for your reference.\r\n\r\nThank you!!\r\n\r\n\r\n\r\n",
"> Hi @PhyllisJi,\r\n> \r\n> There is a new parameter called momentum in Batch Normalization which makes a difference during inference.\r\n> \r\n> For Batch Normalization - **During inference (i.e. when using evaluate() or predict() or when calling the layer/model with the argument training=False (which is the default), the layer normalizes its output using a moving average of the mean and standard deviation of the batches it has seen during training. That is to say, it returns gamma * (batch - self.moving_mean) / sqrt(self.moving_var+epsilon) + beta.\r\n> \r\n> self.moving_mean and self.moving_var are non-trainable variables that are updated each time the layer in called in training mode, as such:\r\n> \r\n> moving_mean = moving_mean * momentum + mean(batch) * (1 - momentum) moving_var = moving_var * momentum + var(batch) * (1 - momentum) As such, the layer will only normalize its inputs during inference after having been trained on data that has similar statistics as the inference data.** Check the [documentation](https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization) for your reference\r\n> \r\n> We can see by default, training = False, when training= True, we get NAN values for batch normalization.\r\n> \r\n> Please find the [gist](https://colab.research.google.com/gist/Varsha-anjanappa/0f23b826355921b6831423de356b7bf5/61287.ipynb) for your reference.\r\n> \r\n> Thank you!!\r\n\r\nHi, \r\nFirst, using tensorflow version 2.12.0, BatchNormalization outputs 0 regardless of whether trainable is False or True, and LayerNormalization outputs nan, and I'll try it on 2.13.0 next; Second, we want to express that epsilon is a bias term designed to prevent division by zero errors, so we don't think it can accept a value of 0.\r\n\r\n<img width=\"888\" alt=\"截屏2023-07-20 22 44 31\" src=\"https://github.com/tensorflow/tensorflow/assets/127826463/7342df1e-9d29-4dfd-a6c7-47333053a5ef\">",
"Hi @BiophiliaSWDA,\r\n\r\nPlease check the gist provided in the previous comment. We've used the parameter training and not trainable. When training = False, we get 0 values and when training= True, we get NAN values for batch normalization.\r\n\r\nThank you.\r\n",
"> Hi @BiophiliaSWDA,\r\n> \r\n> Please check the gist provided in the previous comment. We've used the parameter training and not trainable. When training = False, we get 0 values and when training= True, we get NAN values for batch normalization.\r\n> \r\n> Thank you.\r\n\r\n\r\nSorry for using the wrong parameter. But I want to emphasize these:\r\n\r\n1) When I use `training=True` for two functions, I get NAN, but when I use `training=False` I get the different output then. So `training=True` doesn't seem to be a good solution;\r\n\r\n2) The epsilon parameter, according to tensorflow's official documentation, is defined as (Small float added to variance to avoid dividing by zero. Defaults to 1e-3), `0` should not be supported as a parameter to prevent division by zero problems;\r\n\r\n3) The function we are testing is tf.keras.layers.LayerNormalization not tf.keras.layers.BatchNormalization.\r\n\r\n```python\r\n# My code is as follows:\r\nimport tensorflow as tf\r\ndata = tf.zeros(shape=(3, 4))\r\nprint(data)\r\n\r\nlayer1 = tf.keras.layers.LayerNormalization(axis=1, epsilon=0)\r\noutput1 = layer1(data, training=True)\r\nprint(\"LayerNormalization: training=True: \\n{}\".format(output1))\r\n\r\nlayer2 = tf.keras.layers.LayerNormalization(axis=1, epsilon=0)\r\noutput2 = layer2(data, training=False)\r\nprint(\"LayerNormalization: training=False: \\n{}\".format(output2))\r\n\r\nlayer3 = tf.keras.layers.BatchNormalization(axis=1, epsilon=0)\r\noutput3 = layer3(data, training=True)\r\nprint(\"BatchNormalization: training=True: \\n{}\".format(output3))\r\n\r\nlayer4 = tf.keras.layers.BatchNormalization(axis=1, epsilon=0)\r\noutput4 = layer4(data, training=False)\r\nprint(\"BatchNormalization: training=False: \\n{}\".format(output4))\r\n\r\n```\r\n\r\n```python\r\n# The output is as follows:\r\ntf.Tensor(\r\n[[0. 0. 0. 0.]\r\n [0. 0. 0. 0.]\r\n [0. 0. 0. 0.]], shape=(3, 4), dtype=float32)\r\nLayerNormalization: training=True: \r\n[[nan nan nan nan]\r\n [nan nan nan nan]\r\n [nan nan nan nan]]\r\nLayerNormalization: training=False: \r\n[[nan nan nan nan]\r\n [nan nan nan nan]\r\n [nan nan nan nan]]\r\nBatchNormalization: training=True: \r\n[[nan nan nan nan]\r\n [nan nan nan nan]\r\n [nan nan nan nan]]\r\nBatchNormalization: training=False: \r\n[[0. 0. 0. 0.]\r\n [0. 0. 0. 0.]\r\n [0. 0. 0. 0.]]\r\n```\r\n\r\n",
"Hi @BiophiliaSWDA \r\n\r\nDuring Inference BatchNormalization works different. Please check the call function in source code that takes training, mask, inputs as additional parameters.\r\n\r\nhttps://github.com/keras-team/keras/blob/b3ffea6602dbbb481e82312baa24fe657de83e11/keras/layers/normalization/batch_normalization.py#L547\r\n\r\nWhen `training=False` which is the default value for batch normalization during inference the calculations in batch normalization changes .Please refer the source code here.\r\n\r\nhttps://github.com/keras-team/keras/blob/b3ffea6602dbbb481e82312baa24fe657de83e11/keras/layers/normalization/batch_normalization.py#L645\r\n\r\nWhereas in layer normalization there is no specific implementation during inference, by default it considers as training and performs the normalization \r\nPlease refer the source code for layer normalization below :\r\n\r\nhttps://github.com/keras-team/keras/blob/b3ffea6602dbbb481e82312baa24fe657de83e11/keras/layers/normalization/layer_normalization.py#L253\r\n\r\nHere we can see mean and variance is calculated by using tf.nn.moments\r\nRefer the below link:\r\nhttps://github.com/keras-team/keras/blob/b3ffea6602dbbb481e82312baa24fe657de83e11/keras/layers/normalization/layer_normalization.py#L290C25-L290C25\r\n\r\nPlease go through the source code of [LayerNormalization](https://github.com/keras-team/keras/blob/v2.13.1/keras/layers/normalization/layer_normalization.py) and [BatchNormalization](https://github.com/keras-team/keras/blob/v2.13.1/keras/layers/normalization/batch_normalization.py) for better understanding.\r\n\r\nThank you!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61287\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61287\">No</a>\n"
] | 2023-07-16T03:39:41 | 2023-08-09T01:52:23 | 2023-08-09T01:52:20 | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
tf2.12.0
### Custom code
Yes
### OS platform and distribution
MacOs
### Mobile device
_No response_
### Python version
3.9
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
#### Output
```
tf.Tensor(
[[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]], shape=(3, 4), dtype=float32)
tf.Tensor(
[[nan nan nan nan]
[nan nan nan nan]
[nan nan nan nan]], shape=(3, 4), dtype=float32)
tf.Tensor(
[[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]], shape=(3, 4), dtype=float32)
```
#### Documentation
The documentation says `epsilon` is a small float added to the variance to avoid dividing by zero, yet when `LayerNormalization` is given a zero vector with `epsilon=0` it returns NaN, while `BatchNormalization` returns 0.
(P.S. `epsilon` is documented as a small floating-point number, but in our experiments larger floating-point values also work, so the wording seems a bit vague.)
### Standalone code to reproduce the issue
```python
# Standalone code
import tensorflow as tf
data = tf.zeros(shape=(3, 4))
print(data)
layer1 = tf.keras.layers.LayerNormalization(axis=1, epsilon=0)
output1 = layer1(data)
print(output1)
layer2 = tf.keras.layers.BatchNormalization(axis=1, epsilon=0)
output2 = layer2(data)
print(output2)
```
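For comparison, a minimal sketch (keeping the same zero-valued input, with epsilon left at its small default) that avoids the nan in LayerNormalization:
```python
import tensorflow as tf

data = tf.zeros(shape=(3, 4))
# With the default small epsilon (1e-3) the denominator sqrt(var + epsilon)
# stays positive, so the zero vector normalizes to zeros instead of nan.
layer = tf.keras.layers.LayerNormalization(axis=1, epsilon=1e-3)
print(layer(data))
```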
### Relevant log output
_No response_ | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61287/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61287/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61286 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61286/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61286/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61286/events | https://github.com/tensorflow/tensorflow/issues/61286 | 1,806,087,130 | I_kwDOArmXAs5rprPa | 61,286 | `tf.image.decode_jpeg` can not decode jpeg base64 encoded image | {
"login": "GF-Huang",
"id": 4510984,
"node_id": "MDQ6VXNlcjQ1MTA5ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4510984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GF-Huang",
"html_url": "https://github.com/GF-Huang",
"followers_url": "https://api.github.com/users/GF-Huang/followers",
"following_url": "https://api.github.com/users/GF-Huang/following{/other_user}",
"gists_url": "https://api.github.com/users/GF-Huang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GF-Huang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GF-Huang/subscriptions",
"organizations_url": "https://api.github.com/users/GF-Huang/orgs",
"repos_url": "https://api.github.com/users/GF-Huang/repos",
"events_url": "https://api.github.com/users/GF-Huang/events{/privacy}",
"received_events_url": "https://api.github.com/users/GF-Huang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 1097547147,
"node_id": "MDU6TGFiZWwxMDk3NTQ3MTQ3",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:ops",
"name": "comp:ops",
"color": "0052cc",
"default": false,
"description": "OPs related issues"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | closed | false | {
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@GF-Huang \r\nCould you have a look at the below example of what decode jpeg function does in TensorFlow;\r\n```\r\n\r\nimport tensorflow as tf\r\npath_image = \"/content/kitten.jpg\"\r\nimage_open = open(path_image, 'rb')\r\nread_image = image_open.read()\r\nimage_decode = tf.image.decode_jpeg(read_image)\r\nimage_decode\r\n```\r\nThough, I was able to replicate the issue using an image, please find the [gist](https://colab.research.google.com/gist/sushreebarsa/97b758558e63cf3ff9e0b195dadac81e/61286.ipynb) here!\r\nThank you!\r\n",
"My mistake, the `PIL.Image.tobytes()` is the internal bytes format.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61286\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61286\">No</a>\n",
"@GF-Huang Glad it is working fine for you now. Thank you!"
] | 2023-07-15T13:33:28 | 2023-07-24T04:13:14 | 2023-07-23T08:50:23 | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
2.10.1
### Custom code
Yes
### OS platform and distribution
Win11 22H2
### Mobile device
_No response_
### Python version
3.10.11
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
It raises: `InvalidArgumentError: {{function_node __wrapped__DecodeJpeg_device_/job:localhost/replica:0/task:0/device:CPU:0}} Unknown image file format. One of JPEG, PNG, GIF, BMP required. [Op:DecodeJpeg]`


### Standalone code to reproduce the issue
```shell
import base64
from PIL import Image
import tensorflow as tf
img = Image.open('xxx.jpg')
base64str = base64.b64encode(img.tobytes()).decode()
tf.image.decode_jpeg(base64str, channels=3)
```
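For comparison, a minimal sketch (assuming `xxx.jpg` is a valid JPEG on disk) that feeds the encoded file bytes instead of the base64 text:
```python
import tensorflow as tf

# decode_jpeg expects the JPEG-encoded bytes; reading the file directly
# provides exactly that, whereas PIL's tobytes() returns raw pixel data.
raw_bytes = tf.io.read_file('xxx.jpg')
image = tf.image.decode_jpeg(raw_bytes, channels=3)
print(image.shape)
```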
### Relevant log output
_No response_ | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61286/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61285 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61285/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61285/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61285/events | https://github.com/tensorflow/tensorflow/issues/61285 | 1,806,030,862 | I_kwDOArmXAs5rpdgO | 61,285 | ValueError: No gradients provided for any variable | {
"login": "innat",
"id": 17668390,
"node_id": "MDQ6VXNlcjE3NjY4Mzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/17668390?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/innat",
"html_url": "https://github.com/innat",
"followers_url": "https://api.github.com/users/innat/followers",
"following_url": "https://api.github.com/users/innat/following{/other_user}",
"gists_url": "https://api.github.com/users/innat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/innat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/innat/subscriptions",
"organizations_url": "https://api.github.com/users/innat/orgs",
"repos_url": "https://api.github.com/users/innat/repos",
"events_url": "https://api.github.com/users/innat/events{/privacy}",
"received_events_url": "https://api.github.com/users/innat/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 404586594,
"node_id": "MDU6TGFiZWw0MDQ1ODY1OTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20tensorflower",
"name": "stat:awaiting tensorflower",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from tensorflower"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 1097547147,
"node_id": "MDU6TGFiZWwxMDk3NTQ3MTQ3",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:ops",
"name": "comp:ops",
"color": "0052cc",
"default": false,
"description": "OPs related issues"
},
{
"id": 1478826728,
"node_id": "MDU6TGFiZWwxNDc4ODI2NzI4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:core",
"name": "comp:core",
"color": "024391",
"default": false,
"description": "issues related to core part of tensorflow"
},
{
"id": 5206407904,
"node_id": "LA_kwDOArmXAs8AAAABNlN64A",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.12",
"name": "TF 2.12",
"color": "c5def5",
"default": false,
"description": "For issues related to Tensorflow 2.12"
}
] | open | false | {
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @innat,\r\n\r\nThanks for reaching us. I have replicated the reported behaviour and attaching [gist](https://colab.research.google.com/gist/SuryanarayanaY/dfa90a5e94ff3f5d136bc2d27b72e6f0/61285_-grqad_of_pytorch_tf.ipynb) here for reference. Gradients will return None when `target` and `sources` not connected i.e `target` is not a `f(sources)` or the `target` is a constant.\r\n\r\nIn this case initially we have `model. trainable_variables` as `sources` and `target` is some `f(sources)` for first gradient calculation. Where as in second gradient calculation the `target` is function of first gradient values i.e `f(first_gradients)` but `source` is initial `model. trainable_variables`. I even tried passing first gradients as sources instead of `model.trainable_variables` for the second gradient, even then the gradients are returning None.\r\n\r\nThe reason for this behaviour to be checked whether lack of implementation or there is other way.Will dig more and come back or will escalate to concern Engineer. Thanks!\r\n\r\n",
"@SuryanarayanaY Hi. \r\nCan we take this as an incomplete feature of gradient tape? Could you please confirm, I like to close this issue anyway? ",
"@innat ,\r\n\r\nIt seems incomplete implementation. IMO, this can be potential feature request if not a bug. I will escalate the dev team to have a look into this.\r\n\r\nSimilar issue #61199 \r\n\r\n",
"A gentle reminder on this topic."
] | 2023-07-15T11:35:56 | 2023-08-26T01:52:02 | null | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf 2.12
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I have run PyTorch code that computes the gradient of a gradient w.r.t. some computation, and it works just fine. Now I want to translate the PyTorch code into TensorFlow, but I get errors.
## Standalone code to reproduce the issue
Here is the reproducible code. [Gist](https://colab.research.google.com/drive/1GPhctZNrXynrCQ0qNbLyMDmuixQtC0fw?usp=sharing).
The above Colab is small and quickly reproduces both the PyTorch and TensorFlow runs. PyTorch runs as expected but TensorFlow doesn't. Below is the main spot to look at:
**Main Part**
In PyTorch,
```python
ran_model = Random()
model = Model()
ran_optim = torch.optim.SGD(
    ran_model.parameters(), lr=0.01
)

# parameters() is a generator, so materialize it once to reuse below
model_params = list(model.parameters())
loss_mod = model.forward(x)
loss_rand = model.forward(y)

model_grad = torch.autograd.grad(loss_mod, model_params)
rand_grad = torch.autograd.grad(
    loss_rand,
    model_params,
    create_graph=True
)

loss = some_method(model_grad, rand_grad)
ran_model.zero_grad()
loss.backward()
ran_optim.step()
```
In `pytorch`, the above `create_graph=True` is crucial.
In TensorFlow, I tried
```python
ran_model = Random()
ran_optim = tf.keras.optimizers.SGD()

model = Model()
model.build(input_shape=(1, 784))
optim = tf.keras.optimizers.SGD(0.01)
model_params = model.trainable_variables

with tf.GradientTape(persistent=True) as tape:
    tape.watch(ran_model.trainable_variables)
    loss_mod = tf.reduce_mean(tf.math.log(model(x)[:, i]))
    loss_rand = tf.reduce_mean(tf.math.log(model(y)[:, i]))

grads_mod = tape.gradient(loss_mod, model_params)
grads_rand = tape.gradient(loss_rand, model_params)

loss = some_method(grads_mod, grads_rand)
ran_model_grads = tape.gradient(loss, ran_model.trainable_variables)
ran_optim.apply_gradients(
    zip(ran_model_grads, ran_model.trainable_variables)
)
```
The `tf` code gives the following error.
```yaml
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-3-01562609cda8> in <cell line: 33>()
44 loss += tf.reduce_sum(tf.stack([a, b], axis=0))
45 ran_model_grads = tape.gradient(loss, ran_model.trainable_variables)
---> 46 ran_optim.apply_gradients(zip(ran_model_grads, ran_model.trainable_variables))
47
48
3 frames
/usr/local/lib/python3.10/dist-packages/keras/optimizers/utils.py in filter_empty_gradients(grads_and_vars)
75 if not filtered:
76 variable = ([v.name for _, v in grads_and_vars],)
---> 77 raise ValueError(
78 f"No gradients provided for any variable: {variable}. "
79 f"Provided `grads_and_vars` is {grads_and_vars}."
ValueError: No gradients provided for any variable: (['Variable:0'],). Provided `grads_and_vars` is ((None, <tf.Variable 'Variable:0' shape=(10, 1, 784) dtype=float32, numpy=
```
- This is probably because the target (`loss`) and the source (`ran_model.trainable_variables`) are not connected, so `ran_model_grads` comes back as `None`. As mentioned in this [doc](https://www.tensorflow.org/guide/autodiff),
> When a **target** is not connected to a **source**, the gradient will return `None`
- In PyTorch, `create_graph=True` is what makes the later gradient-of-the-gradient possible. I tried the TensorFlow equivalent with nested tapes to compute the [grad-of-grad](https://www.tensorflow.org/guide/advanced_autodiff#example_input_gradient_regularization), but it didn't work either (shown below); the reason is probably the same as before: source and target are not connected.
```python
for i in range(5):
    with tf.GradientTape() as tape1:
        loss_mod = tf.reduce_mean(tf.math.log(model(x)[:, i]))
    grads_mod = tape1.gradient(loss_mod, model_params)

    with tf.GradientTape() as tape3:
        with tf.GradientTape() as tape2:
            loss_rand = tf.reduce_mean(tf.math.log(model(y)[:, i]))
        grads_rand = tape2.gradient(loss_rand, model_params)
        loss = 0
        for a, b in zip(grads_mod, grads_rand):
            loss += tf.reduce_sum(tf.stack([a, b], axis=0))

[ISSUE] > ran_model_grads = tape3.gradient(loss, ran_model.trainable_variables)
    ran_optim.apply_gradients(zip(ran_model_grads, ran_model.trainable_variables))
```
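For reference, grad-of-grad itself does work in TensorFlow with nested tapes on a toy scalar example (sketch below, unrelated to the models above), so the problem seems to be how the two models are connected rather than the nesting itself:
```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as outer_tape:
    with tf.GradientTape() as inner_tape:
        y = x ** 3
    dy_dx = inner_tape.gradient(y, x)       # 3 * x**2 -> 27.0
d2y_dx2 = outer_tape.gradient(dy_dx, x)     # 6 * x    -> 18.0
print(dy_dx.numpy(), d2y_dx2.numpy())
```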
But in this case, how to resolve this in TensorFlow? | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61285/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61285/timeline | null | null | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61284 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61284/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61284/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61284/events | https://github.com/tensorflow/tensorflow/issues/61284 | 1,805,872,053 | I_kwDOArmXAs5ro2u1 | 61,284 | `tf.math.reduce_sum`'s `name` property doesn't change in `model.compile` | {
"login": "ubless607",
"id": 5417257,
"node_id": "MDQ6VXNlcjU0MTcyNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5417257?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ubless607",
"html_url": "https://github.com/ubless607",
"followers_url": "https://api.github.com/users/ubless607/followers",
"following_url": "https://api.github.com/users/ubless607/following{/other_user}",
"gists_url": "https://api.github.com/users/ubless607/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ubless607/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ubless607/subscriptions",
"organizations_url": "https://api.github.com/users/ubless607/orgs",
"repos_url": "https://api.github.com/users/ubless607/repos",
"events_url": "https://api.github.com/users/ubless607/events{/privacy}",
"received_events_url": "https://api.github.com/users/ubless607/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1097547147,
"node_id": "MDU6TGFiZWwxMDk3NTQ3MTQ3",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:ops",
"name": "comp:ops",
"color": "0052cc",
"default": false,
"description": "OPs related issues"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | closed | false | {
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@ubless607 \r\nIn order to expedite the trouble-shooting process, please provide the complete code snippet to reproduce the issue reported here. Please share all the dependencies [here](https://colab.research.google.com/gist/sushreebarsa/97272f7c7ea8dfcce3a2842680c7fe6e/61284.ipynb). Thank you!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61284\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61284\">No</a>\n"
] | 2023-07-15T04:12:59 | 2023-08-06T01:48:32 | 2023-08-06T01:48:29 | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
tf 2.13
### Custom code
No
### OS platform and distribution
Linux Ubuntu 18.04.6 LTS
### Mobile device
_No response_
### Python version
3.10
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
```
...
out = tf.math.reduce_sum(tf.cast(position > 0.5, tf.float32), axis=-1, keepdims=True, name='final')
model = models.Model(inputs=input_, outputs=[position, out])
model.summary()
=============================================================
...
tf.math.reduce_sum (TFOpLambda (None, 1) 0 ['tf.cast[0][0]'] )
==============================================================
```
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
from tensorflow.keras import layers, models

input_ = layers.Input(shape=(3,))
position = layers.Dense(5, 'sigmoid', name='position')(input_)
out = tf.math.reduce_sum(tf.cast(position > 0.5, tf.float32), axis=-1, keepdims=True, name='final')
model = models.Model(inputs=input_, outputs=[position, out])
model.summary()
```
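For what it's worth, a sketch of a possible workaround (my assumption, not verified against Keras internals): wrapping the op in a named `Lambda` layer so the name appears in `model.summary()`:
```python
import tensorflow as tf
from tensorflow.keras import layers, models

input_ = layers.Input(shape=(3,))
position = layers.Dense(5, 'sigmoid', name='position')(input_)
# A named Lambda layer carries its own layer name, unlike the auto-generated
# TFOpLambda wrapper created for tf.math.reduce_sum.
out = layers.Lambda(
    lambda t: tf.math.reduce_sum(tf.cast(t > 0.5, tf.float32), axis=-1, keepdims=True),
    name='final',
)(position)
model = models.Model(inputs=input_, outputs=[position, out])
model.summary()
```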
### Relevant log output
_No response_ | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61284/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61283 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61283/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61283/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61283/events | https://github.com/tensorflow/tensorflow/issues/61283 | 1,805,649,207 | I_kwDOArmXAs5roAU3 | 61,283 | Tensor dimension mismatch when `tf.keras.Input` is used as input | {
"login": "YaoJiayi",
"id": 82156730,
"node_id": "MDQ6VXNlcjgyMTU2NzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/82156730?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YaoJiayi",
"html_url": "https://github.com/YaoJiayi",
"followers_url": "https://api.github.com/users/YaoJiayi/followers",
"following_url": "https://api.github.com/users/YaoJiayi/following{/other_user}",
"gists_url": "https://api.github.com/users/YaoJiayi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YaoJiayi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YaoJiayi/subscriptions",
"organizations_url": "https://api.github.com/users/YaoJiayi/orgs",
"repos_url": "https://api.github.com/users/YaoJiayi/repos",
"events_url": "https://api.github.com/users/YaoJiayi/events{/privacy}",
"received_events_url": "https://api.github.com/users/YaoJiayi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 750616506,
"node_id": "MDU6TGFiZWw3NTA2MTY1MDY=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:lite",
"name": "comp:lite",
"color": "0052cc",
"default": false,
"description": "TF Lite related issues"
},
{
"id": 1661751498,
"node_id": "MDU6TGFiZWwxNjYxNzUxNDk4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TFLiteConverter",
"name": "TFLiteConverter",
"color": "bfdadc",
"default": false,
"description": "For issues related to TFLite converter"
}
] | closed | false | {
"login": "Varsha-anjanappa",
"id": 137163810,
"node_id": "U_kgDOCCz0Ig",
"avatar_url": "https://avatars.githubusercontent.com/u/137163810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Varsha-anjanappa",
"html_url": "https://github.com/Varsha-anjanappa",
"followers_url": "https://api.github.com/users/Varsha-anjanappa/followers",
"following_url": "https://api.github.com/users/Varsha-anjanappa/following{/other_user}",
"gists_url": "https://api.github.com/users/Varsha-anjanappa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Varsha-anjanappa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Varsha-anjanappa/subscriptions",
"organizations_url": "https://api.github.com/users/Varsha-anjanappa/orgs",
"repos_url": "https://api.github.com/users/Varsha-anjanappa/repos",
"events_url": "https://api.github.com/users/Varsha-anjanappa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Varsha-anjanappa/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Varsha-anjanappa",
"id": 137163810,
"node_id": "U_kgDOCCz0Ig",
"avatar_url": "https://avatars.githubusercontent.com/u/137163810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Varsha-anjanappa",
"html_url": "https://github.com/Varsha-anjanappa",
"followers_url": "https://api.github.com/users/Varsha-anjanappa/followers",
"following_url": "https://api.github.com/users/Varsha-anjanappa/following{/other_user}",
"gists_url": "https://api.github.com/users/Varsha-anjanappa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Varsha-anjanappa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Varsha-anjanappa/subscriptions",
"organizations_url": "https://api.github.com/users/Varsha-anjanappa/orgs",
"repos_url": "https://api.github.com/users/Varsha-anjanappa/repos",
"events_url": "https://api.github.com/users/Varsha-anjanappa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Varsha-anjanappa/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @YaoJiayi \r\n\r\nYou can expand the dimension by using expand_dims:\r\nx = np.expand_dims(x,axis=0)\r\n\r\nYou can also refer to a similar issue [#34720](https://github.com/tensorflow/tensorflow/issues/34720)\r\n\r\nPlease have a look at the gist [here](https://colab.research.google.com/gist/Varsha-anjanappa/7de4c251401c8ccd18169b02e4173b4c/61283.ipynb)\r\n\r\nThank you!\r\n",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61283\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61283\">No</a>\n"
] | 2023-07-14T22:13:11 | 2023-08-02T01:49:41 | 2023-08-02T01:49:38 | NONE | null | null | null | ### 1. System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 20.04
- TensorFlow installation (pip package or built from source): pip
- TensorFlow library (version, if pip package or github SHA, if built from source): 2.14.0-dev20230602
### 2. Code
This is the minimized code to reproduce the issue:
```python
import tensorflow as tf
import numpy as np
input_shape = [1, 2]
x1 = tf.keras.Input(shape=input_shape, dtype="float32")
class Model(tf.keras.Model):
    def __init__(self):
        super(Model, self).__init__()
        self.w1 = tf.Variable([[3., 4.], [5., 6.]])
        self.b1 = tf.Variable([7., 8.])

    @tf.function(input_signature=[tf.TensorSpec(x1.shape, x1.dtype)])
    def call(self, x1):
        return tf.matmul(x1, self.w1) + self.b1

m = Model()
converter = tf.lite.TFLiteConverter.from_keras_model(m)
tflite_model = converter.convert()

def _evaluateTFLiteModel(tflite_model, input_data):
    interpreter = tf.lite.Interpreter(model_content=tflite_model)
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    print(f'Keras input shape: {input_data[0].shape}')  # print keras input shape
    print(f'Lite input shape: {input_details[0]["shape"]}')  # print lite input shape

    for i in range(len(input_data)):
        interpreter.set_tensor(input_details[i]['index'], input_data[i])

    interpreter.invoke()

    output_data = [interpreter.get_tensor(output_details[i]['index'])
                   for i in range(len(output_details))]
    return output_data

x = tf.constant([1., 2.], shape=input_shape)
actual_value = _evaluateTFLiteModel(tflite_model, [x])
```
### 3. Failure after conversion
Output
```
Keras input shape: (1, 2)
Lite input shape: [1 1 2]
```
Error Message:
```
ValueError: Cannot set tensor: Dimension mismatch. Got 2 but expected 3 for input 0.
```
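Note: `tf.keras.Input(shape=...)` treats `shape` as the per-sample shape and prepends a batch dimension, which presumably is why the converted model expects `[1, 1, 2]`. A minimal sketch of two possible workarounds (untested against this exact nightly build):
```python
import numpy as np

# Option 1: add the batch dimension to the input data before set_tensor.
x_batched = np.expand_dims(x.numpy(), axis=0)   # shape (1, 1, 2) matches [1 1 2]

# Option 2: declare only the per-sample shape when building the signature,
# e.g. tf.keras.Input(shape=(2,)), so the lite input becomes [1, 2].
```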
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61283/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61283/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61282 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61282/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61282/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61282/events | https://github.com/tensorflow/tensorflow/issues/61282 | 1,805,626,758 | I_kwDOArmXAs5rn62G | 61,282 | AttributeError: module 'tensorflow.saved_model' has no attribute 'builder' | {
"login": "GF-Huang",
"id": 4510984,
"node_id": "MDQ6VXNlcjQ1MTA5ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4510984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GF-Huang",
"html_url": "https://github.com/GF-Huang",
"followers_url": "https://api.github.com/users/GF-Huang/followers",
"following_url": "https://api.github.com/users/GF-Huang/following{/other_user}",
"gists_url": "https://api.github.com/users/GF-Huang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GF-Huang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GF-Huang/subscriptions",
"organizations_url": "https://api.github.com/users/GF-Huang/orgs",
"repos_url": "https://api.github.com/users/GF-Huang/repos",
"events_url": "https://api.github.com/users/GF-Huang/events{/privacy}",
"received_events_url": "https://api.github.com/users/GF-Huang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473184161,
"node_id": "MDU6TGFiZWw0NzMxODQxNjE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:support",
"name": "type:support",
"color": "159b2e",
"default": false,
"description": "Support issues"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1097545817,
"node_id": "MDU6TGFiZWwxMDk3NTQ1ODE3",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:apis",
"name": "comp:apis",
"color": "0052cc",
"default": false,
"description": "Highlevel API related issues"
},
{
"id": 4511033337,
"node_id": "LA_kwDOArmXAs8AAAABDODn-Q",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.10",
"name": "TF 2.10",
"color": "C15088",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@GF-Huang [Migrate the SavedModel workflow](https://www.tensorflow.org/guide/migrate/saved_model), will guide you to migrate TF1 saved model to TF2 compatible. Please have a look at this [link](https://www.tensorflow.org/guide/migrate/saved_model#tensorflow_1_load_a_savedmodel_with_tfsaved_modelload) to export the model from the saved model.\r\n Thank you!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61282\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61282\">No</a>\n"
] | 2023-07-14T21:48:59 | 2023-08-03T01:51:13 | 2023-08-03T01:51:11 | NONE | null | null | null | ### Issue type
Support
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
v2.10.0-76-gfdfc646704c 2.10.1
### Custom code
Yes
### OS platform and distribution
Win11 22H2
### Mobile device
_No response_
### Python version
3.10.11
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
The docs (https://www.tensorflow.org/tfx/serving/serving_basic#train_and_export_tensorflow_model) show me:

But I can't find it in my code:

### Standalone code to reproduce the issue
```shell
import tensorflow as tf
tf.saved_model.builder.SavedModelBuilder
```
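For context, a sketch of what I believe are the current equivalents (the builder API looks TF1-only, so in TF2 it appears to live under `tf.compat.v1`; the export path below is just an example):
```python
import tensorflow as tf

# TF1-style builder, still reachable through the compat module:
builder = tf.compat.v1.saved_model.builder.SavedModelBuilder('/tmp/export_dir')

# TF2-native export of a Keras model or tf.Module:
# tf.saved_model.save(model, '/tmp/export_dir')
```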
### Relevant log output
_No response_ | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61282/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61281 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61281/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61281/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61281/events | https://github.com/tensorflow/tensorflow/issues/61281 | 1,805,398,599 | I_kwDOArmXAs5rnDJH | 61,281 | `MeanAbsoluteError` returns "nan" when given empty tensors | {
"login": "FGRCL",
"id": 35940434,
"node_id": "MDQ6VXNlcjM1OTQwNDM0",
"avatar_url": "https://avatars.githubusercontent.com/u/35940434?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FGRCL",
"html_url": "https://github.com/FGRCL",
"followers_url": "https://api.github.com/users/FGRCL/followers",
"following_url": "https://api.github.com/users/FGRCL/following{/other_user}",
"gists_url": "https://api.github.com/users/FGRCL/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FGRCL/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FGRCL/subscriptions",
"organizations_url": "https://api.github.com/users/FGRCL/orgs",
"repos_url": "https://api.github.com/users/FGRCL/repos",
"events_url": "https://api.github.com/users/FGRCL/events{/privacy}",
"received_events_url": "https://api.github.com/users/FGRCL/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1097546578,
"node_id": "MDU6TGFiZWwxMDk3NTQ2NTc4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:keras",
"name": "comp:keras",
"color": "0052cc",
"default": false,
"description": "Keras related issues"
},
{
"id": 1478826728,
"node_id": "MDU6TGFiZWwxNDc4ODI2NzI4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:core",
"name": "comp:core",
"color": "024391",
"default": false,
"description": "issues related to core part of tensorflow"
}
] | closed | false | {
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "SuryanarayanaY",
"id": 116063290,
"node_id": "U_kgDOBur8Og",
"avatar_url": "https://avatars.githubusercontent.com/u/116063290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuryanarayanaY",
"html_url": "https://github.com/SuryanarayanaY",
"followers_url": "https://api.github.com/users/SuryanarayanaY/followers",
"following_url": "https://api.github.com/users/SuryanarayanaY/following{/other_user}",
"gists_url": "https://api.github.com/users/SuryanarayanaY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuryanarayanaY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuryanarayanaY/subscriptions",
"organizations_url": "https://api.github.com/users/SuryanarayanaY/orgs",
"repos_url": "https://api.github.com/users/SuryanarayanaY/repos",
"events_url": "https://api.github.com/users/SuryanarayanaY/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuryanarayanaY/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @FGRCL ,\r\n\r\nI have gone through the issue and replicated the reported behaviour and attached [gist](https://colab.research.google.com/gist/SuryanarayanaY/fff1cc34b602cdc91dcafb5f3a0051ba/61281_nightly.ipynb) for reference. What I observed is even with Numpy for empty arrays Numpy also generating `nan` only which is same as TF behaviour with empty Tensors of shape=(0,) .\r\n\r\nThis is a different case which needs to be discussed. Tensorflow implemented numpy like behaviour for empty tensors.\r\n\r\nCan you check the numpy behaviour and comment. AFAIK it's possible to implement exception of ValueError if the input is empty tensor/array. But we need to take Dev team confirmation on this.\r\n\r\nThanks!\r\n",
"Hi @FGRCL ,\r\n\r\nAFAIK by default `reset_state()` will be called at the end of each epoch/step during training. So in that case this should not be a problem. WDYT ?",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"Hi @SuryanarayanaY \r\n\r\nSorry for the delay. I think the bug I'm perceiving here is not necessarily how the MAE is calculated by Tensorflow when the metric is called as in the following way: `tf.keras.metrics.mean_absolute_error(y_true, y_pred)`, which is perfectly consistent with numpy, but rather the way that the `MeanAbsoluteError` class accumulates the mae when called with `.update_state()`. The way I conceptualize it, `MeanAbsoluteError` starts out with 0 data points and returns `0.0`, therefore adding “more” 0 data points, i.e. `mae.update_state([])`, should retain that behaviour of returning `0.0`.\r\n\r\n`reset_state()` doesn't necessarily solve the issue. For example, if batch no. 25 in an epoch of 100 batches passes an empty tensor to `update_state()`, then the metric's `result()` will return `nan` until the end of the epoch. If you were then to graph the MAE w.r.t. epochs, that graph would only be `nan` for all epochs. \r\n\r\nThe particular use case in which I encountered this bug is the following. I needed to plot the MAE for samples where the target value is above or below a certain threshold. Naturally, some batches contain none of these samples, resulting in an empty tensor.\r\n\r\nLet me know if I'm not understanding correctly the way these classes are intended to work.",
"Hi @FGRCL ,\r\n\r\nApologies.This got slipped from my TODO list.This seems change in API design and needs to be addressed in [Keras](https://github.com/keras-team/keras/issues) repo. Would you mind opening this issue at Keras repo? Thanks!\r\n\r\nAdding [gist](https://colab.research.google.com/gist/SuryanarayanaY/bd2485edcd0e91fda89654ee9083214b/61281_keras3.ipynb#scrollTo=rzgr0KGAtXCP) with Keras3 for reference.",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61281\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61281\">No</a>\n"
] | 2023-07-14T19:06:49 | 2024-02-07T01:46:47 | 2024-02-07T01:46:41 | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
v1.12.1-96880-g4499c968316 2.14.0-dev20230714
### Custom code
No
### OS platform and distribution
macOS 13.2.1
### Mobile device
v
### Python version
3.11.4
### Bazel version
n/a
### GCC/compiler version
n/a
### CUDA/cuDNN version
n/a
### GPU model and memory
n/a
### Current behavior?
When calling `MeanAbsoluteError.update_state(y_true, y_pred)`, if `y_true` and `y_pred` happen to be empty tensors of size 0, all subsequent calls to `MeanAbsoluteError.result()` will yield "nan" until the internal state of the metric is reset.
Since calling `MeanAbsoluteError.result()` before any data has been passed returns 0.0, I would expect empty tensors to be ignored when passed to `MeanAbsoluteError.update_state(y_true, y_pred)`.
I've only tested this issue with `MeanAbsoluteError`, but it seems likely that it would happen with other metrics.
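A condensed sketch of the behaviour (the same thing the tests below check, just inline):
```python
import tensorflow as tf

metric = tf.keras.metrics.MeanAbsoluteError()
print(metric.result().numpy())                          # 0.0 before any update
metric.update_state(tf.constant([]), tf.constant([]))   # empty batch
print(metric.result().numpy())                          # nan until reset_state()
```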
### Standalone code to reproduce the issue
```shell
from unittest import TestCase
from tensorflow import constant, float32, reduce_mean
from tensorflow import ragged
from tensorflow.python.keras.metrics import MeanAbsoluteError
class TestEmptyTensorMetrics(TestCase):
    def test_no_data(self):
        metric = MeanAbsoluteError()
        result = metric.result()
        expected = constant(0.0)
        self.assertEqual(expected, result)

    def test_empty_array(self):
        metric = MeanAbsoluteError()
        y_true = constant([])
        y_pred = constant([])
        metric.update_state(y_true, y_pred)
        result = metric.result()
        expected = constant(0.0)
        self.assertEqual(expected, result)

    def test_multiple_batches(self):
        metric = MeanAbsoluteError()
        y_true_batches = constant([
            [39, 22, 73],
            [22, 50, 23]
        ], dtype=float32)
        y_pred_batches = constant([
            [80, 59, 52],
            [87, 8, 38],
        ], dtype=float32)
        for y_true, y_pred in zip(y_true_batches, y_pred_batches):
            metric.update_state(y_true, y_pred)
        result = metric.result()
        expected = reduce_mean(abs(y_true_batches - y_pred_batches))
        self.assertAlmostEqual(expected.numpy(), result.numpy(), 5)

    def test_multiple_batches_with_empty_array(self):
        metric = MeanAbsoluteError()
        y_true_batches = ragged.constant([
            [39, 22, 73],
            [],
            [22, 50, 23]
        ], dtype=float32)
        y_pred_batches = ragged.constant([
            [80, 59, 52],
            [],
            [87, 8, 38],
        ], dtype=float32)
        for y_true, y_pred in zip(y_true_batches, y_pred_batches):
            metric.update_state(y_true, y_pred)
        result = metric.result()
        expected = reduce_mean(abs(y_true_batches - y_pred_batches).flat_values)
        self.assertAlmostEqual(expected.numpy(), result.numpy(), 5)
```
### Relevant log output
```shell
2023-07-14 14:54:43.650532: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
WARNING:tensorflow:From /Users/francois/.local/share/virtualenvs/sandbox-w3BOTv5B/lib/python3.11/site-packages/tensorflow/python/ops/distributions/distribution.py:259: ReparameterizationType.__init__ (from tensorflow.python.ops.distributions.distribution) is deprecated and will be removed after 2019-01-01.
Instructions for updating:
The TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.
WARNING:tensorflow:From /Users/francois/.local/share/virtualenvs/sandbox-w3BOTv5B/lib/python3.11/site-packages/tensorflow/python/ops/distributions/bernoulli.py:165: RegisterKL.__init__ (from tensorflow.python.ops.distributions.kullback_leibler) is deprecated and will be removed after 2019-01-01.
Instructions for updating:
The TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.
test_empty_array (nanbug.TestEmptyTensorMetrics.test_empty_array) ... FAIL
test_multiple_batches (nanbug.TestEmptyTensorMetrics.test_multiple_batches) ... ok
test_multiple_batches_with_empty_array (nanbug.TestEmptyTensorMetrics.test_multiple_batches_with_empty_array) ... FAIL
test_no_data (nanbug.TestEmptyTensorMetrics.test_no_data) ... ok
======================================================================
FAIL: test_empty_array (nanbug.TestEmptyTensorMetrics.test_empty_array)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/francois/Documents/sandbox/nanbug.py", line 26, in test_empty_array
self.assertEqual(expected, result)
AssertionError: <tf.Tensor: shape=(), dtype=float32, numpy=0.0> != <tf.Tensor: shape=(), dtype=float32, numpy=nan>
======================================================================
FAIL: test_multiple_batches_with_empty_array (nanbug.TestEmptyTensorMetrics.test_multiple_batches_with_empty_array)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/francois/Documents/sandbox/nanbug.py", line 64, in test_multiple_batches_with_empty_array
self.assertAlmostEqual(expected.numpy(), result.numpy(), 5)
AssertionError: 36.833332 != nan within 5 places (nan difference)
----------------------------------------------------------------------
Ran 4 tests in 0.104s
FAILED (failures=2)
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61281/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61281/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61280 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61280/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61280/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61280/events | https://github.com/tensorflow/tensorflow/pull/61280 | 1,804,736,930 | PR_kwDOArmXAs5Vgylh | 61,280 | Promote IRFFT2D to TFLite Builtin ops | {
"login": "drubinstein",
"id": 577149,
"node_id": "MDQ6VXNlcjU3NzE0OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/577149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drubinstein",
"html_url": "https://github.com/drubinstein",
"followers_url": "https://api.github.com/users/drubinstein/followers",
"following_url": "https://api.github.com/users/drubinstein/following{/other_user}",
"gists_url": "https://api.github.com/users/drubinstein/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drubinstein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drubinstein/subscriptions",
"organizations_url": "https://api.github.com/users/drubinstein/orgs",
"repos_url": "https://api.github.com/users/drubinstein/repos",
"events_url": "https://api.github.com/users/drubinstein/events{/privacy}",
"received_events_url": "https://api.github.com/users/drubinstein/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 750616506,
"node_id": "MDU6TGFiZWw3NTA2MTY1MDY=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:lite",
"name": "comp:lite",
"color": "0052cc",
"default": false,
"description": "TF Lite related issues"
},
{
"id": 1169365494,
"node_id": "MDU6TGFiZWwxMTY5MzY1NDk0",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:M",
"name": "size:M",
"color": "adafea",
"default": false,
"description": "CL Change Size: Medium"
}
] | open | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Can you give advice on how to add IRFFT2D for the converter? I can't find a guide and I realize this PR is missing the MLIR implementation for the TF->TFLite conversion. ",
"All mlir and lite tests should be passing now",
"Bump. I've noticed that other people are making changes to similar files. Before I go and resolve the conflicts/regenerate the schema, is there any plan to review this PR in the near term?",
"Hi @drubinstein Can you please resolve conflicts? Thank you!",
"Thanks @gbaned , like I asked in #61359 , are there any major changes in TFLite I should be aware of before I go and resolve these conflicts? These conflicts are due to the developers adding new TFLite builtins related to HLO committing changes faster than I can get a review.",
"This PR is stale because it has been open for 14 days with no activity. It will be closed if no further activity occurs. Thank you.",
"Bump",
"Hi @drubinstein Can you please resolve conflicts? Thank you!"
] | 2023-07-14T11:44:00 | 2024-06-07T16:09:49 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61280",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61280",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61280.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61280.patch",
"merged_at": null
} | IRFFT2D has been implemented in TFLite for a couple of years now. In a lot of my use cases, the models I work with have both an RFFT2D (already builtin) and an IRFFT2D (not builtin) step, so having IRFFT2D promoted to a built-in op would be massively useful for me. In this PR, I removed IRFFT2D from the custom ops list and changed it to a builtin op.
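For reference, a rough sketch of the kind of graph this affects. This is illustrative only; the model, shapes, and converter flags below are my assumptions and are not part of this PR.

```python
import tensorflow as tf

class SpectralRoundTrip(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([1, 64, 64], tf.float32)])
    def __call__(self, x):
        spec = tf.signal.rfft2d(x)      # lowers to the RFFT2D builtin
        return tf.signal.irfft2d(spec)  # handled as a custom op before this PR

module = SpectralRoundTrip()
concrete_fn = module.__call__.get_concrete_function()
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_fn], module)
converter.allow_custom_ops = True  # only needed while IRFFT2D is still a custom op
tflite_model = converter.convert()
```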
Thanks! | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61280/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61280/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61279 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61279/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61279/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61279/events | https://github.com/tensorflow/tensorflow/issues/61279 | 1,804,459,881 | I_kwDOArmXAs5rjd9p | 61,279 | can't save compiled model as .tf | {
"login": "Imacder",
"id": 101330867,
"node_id": "U_kgDOBgovsw",
"avatar_url": "https://avatars.githubusercontent.com/u/101330867?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Imacder",
"html_url": "https://github.com/Imacder",
"followers_url": "https://api.github.com/users/Imacder/followers",
"following_url": "https://api.github.com/users/Imacder/following{/other_user}",
"gists_url": "https://api.github.com/users/Imacder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Imacder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Imacder/subscriptions",
"organizations_url": "https://api.github.com/users/Imacder/orgs",
"repos_url": "https://api.github.com/users/Imacder/repos",
"events_url": "https://api.github.com/users/Imacder/events{/privacy}",
"received_events_url": "https://api.github.com/users/Imacder/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1097545817,
"node_id": "MDU6TGFiZWwxMDk3NTQ1ODE3",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:apis",
"name": "comp:apis",
"color": "0052cc",
"default": false,
"description": "Highlevel API related issues"
},
{
"id": 1097546578,
"node_id": "MDU6TGFiZWwxMDk3NTQ2NTc4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:keras",
"name": "comp:keras",
"color": "0052cc",
"default": false,
"description": "Keras related issues"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | closed | false | {
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@Imacder Keras separates the concerns of saving your model architecture and saving your model weights.\r\nThe model structure can be described and saved using two different formats: JSON and YAML. You can save a model with model.save() or [keras.models.save_model()](https://www.tensorflow.org/api_docs/python/tf/keras/saving/save_model) (which is equivalent). You can load it back with [keras.models.load_model()](https://www.tensorflow.org/api_docs/python/tf/keras/saving/load_model). Please make sure to follow this guide here to save a compiled model, \r\nThank you!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61279\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61279\">No</a>\n"
] | 2023-07-14T08:34:09 | 2023-08-03T01:51:18 | 2023-08-03T01:51:15 | NONE | null | null | null | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.13.0
### Custom code
Yes
### OS platform and distribution
Mac mini m1
### Mobile device
_No response_
### Python version
3.9
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
m1 gpu and 8GB Ram
### Current behavior?
can't save a compiled model as a .tf file.
### Standalone code to reproduce the issue
```shell
import tensorflow as tf


class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
    def __init__(self, d_model, f=0.5, warmup_steps=4_000):
        super().__init__()
        self.d_model = d_model
        self.d_model = tf.cast(self.d_model, tf.float32)
        self.warmup_steps = warmup_steps
        self.f = f

    def __call__(self, step):
        step = tf.cast(step, dtype=tf.float32)
        arg1 = tf.math.rsqrt(step)
        arg2 = step * (self.warmup_steps ** -1.5)
        return self.f * tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)

    def get_config(self):
        config = {
            "d_model": self.d_model,
            "warmup_steps": self.warmup_steps,
            "f": self.f
        }
        return config


model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    tf.keras.layers.Dense(10)
])

learning_rate = CustomSchedule(1024)
optimizer = tf.keras.optimizers.legacy.Adam(learning_rate, beta_1=0.9, beta_2=0.98,
                                            epsilon=1e-5)

model.compile(
    optimizer=optimizer,
    loss="mse", metrics=['accuracy'])
model.save("model.tf")
```
### Relevant log output
```shell
/opt/homebrew/Caskroom/miniconda/base/envs/pythonProject/bin/python /Users/albert/PycharmProjects/pythonProject/AI/test.py
Traceback (most recent call last):
File "/Users/albert/PycharmProjects/pythonProject/AI/test.py", line 40, in <module>
model.save("model.tf")
File "/opt/homebrew/Caskroom/miniconda/base/envs/pythonProject/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/opt/homebrew/Caskroom/miniconda/base/envs/pythonProject/lib/python3.10/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/opt/homebrew/Caskroom/miniconda/base/envs/pythonProject/lib/python3.10/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
TypeError: Unable to serialize 1024.0 to JSON. Unrecognized type <class 'tensorflow.python.framework.ops.EagerTensor'>.
```
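A minimal sketch of one possible workaround (this reflects my reading of the traceback, not an official fix): `get_config` returns the casted `d_model`, which is an EagerTensor by that point, and the Keras JSON saver cannot serialize it. Keeping a plain Python number for the config and a separate tensor for the math avoids the error.

```python
import tensorflow as tf

class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
    def __init__(self, d_model, f=0.5, warmup_steps=4_000):
        super().__init__()
        self.d_model = d_model                          # plain Python number, JSON-safe
        self._d_model_f = tf.cast(d_model, tf.float32)  # tensor used only for the math
        self.warmup_steps = warmup_steps
        self.f = f

    def __call__(self, step):
        step = tf.cast(step, dtype=tf.float32)
        arg1 = tf.math.rsqrt(step)
        arg2 = step * (self.warmup_steps ** -1.5)
        return self.f * tf.math.rsqrt(self._d_model_f) * tf.math.minimum(arg1, arg2)

    def get_config(self):
        # Only JSON-serializable Python types here, no EagerTensors.
        return {"d_model": self.d_model, "warmup_steps": self.warmup_steps, "f": self.f}
```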
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61279/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61278 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61278/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61278/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61278/events | https://github.com/tensorflow/tensorflow/pull/61278 | 1,804,147,297 | PR_kwDOArmXAs5Vexyy | 61,278 | [NextPluggableDevice] Variant datatype support in MakeTensorFromProto | {
"login": "jzhoulon",
"id": 6346853,
"node_id": "MDQ6VXNlcjYzNDY4NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6346853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jzhoulon",
"html_url": "https://github.com/jzhoulon",
"followers_url": "https://api.github.com/users/jzhoulon/followers",
"following_url": "https://api.github.com/users/jzhoulon/following{/other_user}",
"gists_url": "https://api.github.com/users/jzhoulon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jzhoulon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jzhoulon/subscriptions",
"organizations_url": "https://api.github.com/users/jzhoulon/orgs",
"repos_url": "https://api.github.com/users/jzhoulon/repos",
"events_url": "https://api.github.com/users/jzhoulon/events{/privacy}",
"received_events_url": "https://api.github.com/users/jzhoulon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 390482148,
"node_id": "MDU6TGFiZWwzOTA0ODIxNDg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/awaiting%20review",
"name": "awaiting review",
"color": "bc3869",
"default": false,
"description": "Pull request awaiting review"
},
{
"id": 987666414,
"node_id": "MDU6TGFiZWw5ODc2NjY0MTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/ready%20to%20pull",
"name": "ready to pull",
"color": "2cd643",
"default": false,
"description": "PR ready for merge process"
},
{
"id": 1169365494,
"node_id": "MDU6TGFiZWwxMTY5MzY1NDk0",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:M",
"name": "size:M",
"color": "adafea",
"default": false,
"description": "CL Change Size: Medium"
},
{
"id": 1478826728,
"node_id": "MDU6TGFiZWwxNDc4ODI2NzI4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:core",
"name": "comp:core",
"color": "024391",
"default": false,
"description": "issues related to core part of tensorflow"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@jyingl3 can you help to have a look? thanks"
] | 2023-07-14T04:32:34 | 2023-08-18T10:30:02 | 2023-08-18T10:30:02 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61278",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61278",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61278.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61278.patch",
"merged_at": "2023-08-18T10:30:01"
} | This PR adds Variant datatype support in MakeTensorFromProto. Without it, converting a Tensor to an XLA tensor fails (DataTypeToPrimitiveType), since XLA has no primitive type for variant. | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61278/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61277 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61277/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61277/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61277/events | https://github.com/tensorflow/tensorflow/pull/61277 | 1,803,880,386 | PR_kwDOArmXAs5Vd32f | 61,277 | [oneDNN] Enable Caching Scaled Bias in QuantizedMatmul in oneDNN 3.x | {
"login": "mahmoud-abuzaina",
"id": 24963061,
"node_id": "MDQ6VXNlcjI0OTYzMDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/24963061?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mahmoud-abuzaina",
"html_url": "https://github.com/mahmoud-abuzaina",
"followers_url": "https://api.github.com/users/mahmoud-abuzaina/followers",
"following_url": "https://api.github.com/users/mahmoud-abuzaina/following{/other_user}",
"gists_url": "https://api.github.com/users/mahmoud-abuzaina/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mahmoud-abuzaina/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mahmoud-abuzaina/subscriptions",
"organizations_url": "https://api.github.com/users/mahmoud-abuzaina/orgs",
"repos_url": "https://api.github.com/users/mahmoud-abuzaina/repos",
"events_url": "https://api.github.com/users/mahmoud-abuzaina/events{/privacy}",
"received_events_url": "https://api.github.com/users/mahmoud-abuzaina/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 390482148,
"node_id": "MDU6TGFiZWwzOTA0ODIxNDg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/awaiting%20review",
"name": "awaiting review",
"color": "bc3869",
"default": false,
"description": "Pull request awaiting review"
},
{
"id": 987666414,
"node_id": "MDU6TGFiZWw5ODc2NjY0MTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/ready%20to%20pull",
"name": "ready to pull",
"color": "2cd643",
"default": false,
"description": "PR ready for merge process"
},
{
"id": 1104829434,
"node_id": "MDU6TGFiZWwxMTA0ODI5NDM0",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:mkl",
"name": "comp:mkl",
"color": "0052cc",
"default": false,
"description": "MKL related issues"
},
{
"id": 1169365682,
"node_id": "MDU6TGFiZWwxMTY5MzY1Njgy",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:L",
"name": "size:L",
"color": "adafea",
"default": false,
"description": "CL Change Size: Large"
}
] | closed | false | {
"login": "penpornk",
"id": 38085909,
"node_id": "MDQ6VXNlcjM4MDg1OTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/38085909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/penpornk",
"html_url": "https://github.com/penpornk",
"followers_url": "https://api.github.com/users/penpornk/followers",
"following_url": "https://api.github.com/users/penpornk/following{/other_user}",
"gists_url": "https://api.github.com/users/penpornk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/penpornk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/penpornk/subscriptions",
"organizations_url": "https://api.github.com/users/penpornk/orgs",
"repos_url": "https://api.github.com/users/penpornk/repos",
"events_url": "https://api.github.com/users/penpornk/events{/privacy}",
"received_events_url": "https://api.github.com/users/penpornk/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "penpornk",
"id": 38085909,
"node_id": "MDQ6VXNlcjM4MDg1OTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/38085909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/penpornk",
"html_url": "https://github.com/penpornk",
"followers_url": "https://api.github.com/users/penpornk/followers",
"following_url": "https://api.github.com/users/penpornk/following{/other_user}",
"gists_url": "https://api.github.com/users/penpornk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/penpornk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/penpornk/subscriptions",
"organizations_url": "https://api.github.com/users/penpornk/orgs",
"repos_url": "https://api.github.com/users/penpornk/repos",
"events_url": "https://api.github.com/users/penpornk/events{/privacy}",
"received_events_url": "https://api.github.com/users/penpornk/received_events",
"type": "User",
"site_admin": false
},
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thank you for reviewing the PR, I have addressed your comments. For the code refactoring, yes, we will create a separate PR after this is merged.",
"(Just for future reference)\r\nAll 3 failures seem unrelated.\r\n\r\n[AMD ROCm](http://ml-ci.amd.com:21096/blue/organizations/jenkins/tensorflow%2Fgithub-prs-upstream-master%2FAMD-ROCm-Community-CI-Build/detail/PR-61277/2/pipeline):\r\n```\r\nERROR: /home/jenkins/workspace/ROCm-Community-CI-Build_PR-61277/bazel-ci_build-cache/.cache/bazel/_bazel_jenkins/eab0d61a99b6696edb3d2aff87b585e8/external/local_config_rocm/rocm/build_defs.bzl:52:21: syntax error at '%': expected expression\r\n```\r\n\r\n[Android Demo App](https://source.cloud.google.com/results/invocations/59ef3d64-af14-40b4-ac96-61b9a252fe6f/log):\r\n```\r\n__main__.UserInputError: Invalid CLANG_COMPILER_PATH setting was provided 10 times in a row. Assuming to be a scripting mistake.\r\n```\r\n\r\n[MacOS CPU Python3.9](https://source.cloud.google.com/results/invocations/5b0792f6-6cb7-4c69-b182-9b098a96fda6/log):\r\n```\r\n...\r\n[libprotobuf ERROR external/com_google_protobuf/src/google/protobuf/descriptor_database.cc:642] File already exists in database: tensorflow/compiler/xla/service/hlo.proto\r\n[libprotobuf FATAL external/com_google_protobuf/src/google/protobuf/descriptor.cc:1986] CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size): \r\nlibc++abi: terminating with uncaught exception of type google::protobuf::FatalException: CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size): \r\n...\r\n==================== Test output for //tensorflow/python/data/experimental/kernel_tests:tf_record_writer_test:\r\n[libprotobuf ERROR external/com_google_protobuf/src/google/protobuf/descriptor_database.cc:642] File already exists in database: tensorflow/compiler/xla/service/hlo.proto\r\n[libprotobuf FATAL external/com_google_protobuf/src/google/protobuf/descriptor.cc:1986] CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size): \r\nlibc++abi: terminating with uncaught exception of type google::protobuf::FatalException: CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size): \r\n================================================================================\r\n==================== Test output for //tensorflow/python/data/experimental/kernel_tests:tf_record_writer_test:\r\n[libprotobuf ERROR external/com_google_protobuf/src/google/protobuf/descriptor_database.cc:642] File already exists in database: tensorflow/compiler/xla/service/hlo.proto\r\n[libprotobuf FATAL external/com_google_protobuf/src/google/protobuf/descriptor.cc:1986] CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size): \r\nlibc++abi: terminating with uncaught exception of type google::protobuf::FatalException: CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size): \r\n================================================================================\r\n...\r\n```"
] | 2023-07-13T23:08:16 | 2023-07-21T20:53:30 | 2023-07-21T20:53:30 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61277",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61277",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61277.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61277.patch",
"merged_at": "2023-07-21T20:53:30"
} | - oneDNN v3.x requires Bias to be passed as float in QuantizedMatMul. There are some models that have Bias as QINT32. So we convert that Bias to float with proper scaling. This PR caches the scaled bias in the first iteration to avoid doing that conversion in subsequent iterations.
- This PR also enables weight caching for MatMul op with oneDNN v3.x.
- This PR does not change common (non-oneDNN) TF code. | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61277/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61276 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61276/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61276/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61276/events | https://github.com/tensorflow/tensorflow/pull/61276 | 1,803,848,121 | PR_kwDOArmXAs5Vdwts | 61,276 | return assert_shapes for debugging.assert_shapes | {
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 987666414,
"node_id": "MDU6TGFiZWw5ODc2NjY0MTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/ready%20to%20pull",
"name": "ready to pull",
"color": "2cd643",
"default": false,
"description": "PR ready for merge process"
},
{
"id": 1097547147,
"node_id": "MDU6TGFiZWwxMDk3NTQ3MTQ3",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:ops",
"name": "comp:ops",
"color": "0052cc",
"default": false,
"description": "OPs related issues"
},
{
"id": 1169364259,
"node_id": "MDU6TGFiZWwxMTY5MzY0MjU5",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:XS",
"name": "size:XS",
"color": "adafea",
"default": false,
"description": "CL Change Size: Extra Small"
},
{
"id": 1178505529,
"node_id": "MDU6TGFiZWwxMTc4NTA1NTI5",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/prtype:bugfix",
"name": "prtype:bugfix",
"color": "159b2e",
"default": false,
"description": "PR to fix a bug"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @sachinprasadhs Any update on this PR? Please. Thank you!",
"This PR is stale because it has been open for 14 days with no activity. It will be closed if no further activity occurs. Thank you.",
"Hi @sachinprasadhs Any update on this PR? Please. Thank you!",
"This PR is stale because it has been open for 14 days with no activity. It will be closed if no further activity occurs. Thank you.",
"Hi @sachinprasadhs Any update on this PR? Please. Thank you!",
"This PR is stale because it has been open for 14 days with no activity. It will be closed if no further activity occurs. Thank you.",
"Hi @sachinprasadhs Any update on this PR? Please. Thank you!",
"Hi @sachinprasadhs Any update on this PR? Please. Thank you!"
] | 2023-07-13T22:30:25 | 2023-11-10T21:03:24 | 2023-11-10T21:03:22 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61276",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61276",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61276.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61276.patch",
"merged_at": "2023-11-10T21:03:22"
} | Return the assertion created by `tf.debugging.assert_shapes`.
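A short usage sketch of what returning the assertion enables (illustrative; the tensors and shape names below are made up):

```python
import tensorflow as tf

x = tf.ones([2, 3])
y = tf.ones([3, 4])

# With this change the assertion is returned to the caller, so it can be kept
# around (e.g. for control dependencies in graph mode) instead of being lost.
assert_op = tf.debugging.assert_shapes([(x, ("N", "Q")), (y, ("Q", "D"))])
```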
Fixes: https://github.com/tensorflow/tensorflow/issues/61163 | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61276/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61276/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61275 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61275/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61275/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61275/events | https://github.com/tensorflow/tensorflow/issues/61275 | 1,803,815,885 | I_kwDOArmXAs5rhAvN | 61,275 | Files are missing in Windows | {
"login": "zarat",
"id": 6713390,
"node_id": "MDQ6VXNlcjY3MTMzOTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6713390?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zarat",
"html_url": "https://github.com/zarat",
"followers_url": "https://api.github.com/users/zarat/followers",
"following_url": "https://api.github.com/users/zarat/following{/other_user}",
"gists_url": "https://api.github.com/users/zarat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zarat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zarat/subscriptions",
"organizations_url": "https://api.github.com/users/zarat/orgs",
"repos_url": "https://api.github.com/users/zarat/repos",
"events_url": "https://api.github.com/users/zarat/events{/privacy}",
"received_events_url": "https://api.github.com/users/zarat/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473173351,
"node_id": "MDU6TGFiZWw0NzMxNzMzNTE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:build/install",
"name": "type:build/install",
"color": "159b2e",
"default": false,
"description": "Build and install issues"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1188421838,
"node_id": "MDU6TGFiZWwxMTg4NDIxODM4",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/subtype:windows",
"name": "subtype:windows",
"color": "b619ea",
"default": false,
"description": "Windows Build/Installation Issues"
},
{
"id": 5508003926,
"node_id": "LA_kwDOArmXAs8AAAABSE14Vg",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/TF%202.13",
"name": "TF 2.13",
"color": "B13ACB",
"default": false,
"description": "For issues related to Tensorflow 2.13"
}
] | closed | false | {
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sushreebarsa",
"id": 84765720,
"node_id": "MDQ6VXNlcjg0NzY1NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/84765720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushreebarsa",
"html_url": "https://github.com/sushreebarsa",
"followers_url": "https://api.github.com/users/sushreebarsa/followers",
"following_url": "https://api.github.com/users/sushreebarsa/following{/other_user}",
"gists_url": "https://api.github.com/users/sushreebarsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushreebarsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushreebarsa/subscriptions",
"organizations_url": "https://api.github.com/users/sushreebarsa/orgs",
"repos_url": "https://api.github.com/users/sushreebarsa/repos",
"events_url": "https://api.github.com/users/sushreebarsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushreebarsa/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"The error you are encountering indicates that the C code is unable to find the required TensorFlow C header files. To resolve this issue, you need to ensure that the necessary header files and libraries are correctly included in your build environment.\r\n\r\nHere are the steps to fix the issue:\r\n\r\n1. Make sure you have installed TensorFlow C library and its development package on your system.\r\n\r\n2. Set the appropriate include and library paths during compilation. The exact commands depend on your operating system and the way you have installed TensorFlow C. For example, if you are using GCC on Linux and installed TensorFlow C using `apt`, you can use the following command:\r\n\r\n```bash\r\ngcc main.c -o main -ltensorflow\r\n```\r\n\r\nThis command will link the TensorFlow C library with your code. If you installed TensorFlow C manually or in a different location, you might need to provide the correct include and library paths explicitly.\r\n\r\n3. If you still encounter issues, ensure that you have the necessary permissions to access the TensorFlow C library and header files. Sometimes, file permissions can cause problems during compilation.\r\n\r\nBy following these steps, you should be able to compile and run the code without the \"No such file or directory\" error.",
"@SankeethYadav Thank you for your response here.\r\n@zarat Could you please mention the steps you have followed ?\r\nMake sure that you have referred to the steps correctly as per the documentation and the provided path is correct. This [link](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/lib_package/README.md) provides you the instruction to install C library from source code. Could you check the header files are also included? \r\n\r\nFYKI, TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting with TensorFlow 2.11, you will need to install [TensorFlow in WSL2](https://tensorflow.org/install/pip#windows-wsl2), or install tensorflow or tensorflow-cpu and, optionally, try the [TensorFlow-DirectML-Plugin](https://github.com/microsoft/tensorflow-directml-plugin#tensorflow-directml-plugin-)\r\n\r\n\r\nThank you! ",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61275\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61275\">No</a>\n"
] | 2023-07-13T21:56:35 | 2023-08-03T01:51:21 | 2023-08-03T01:51:18 | NONE | null | null | null | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
2.13.0
### Custom code
No
### OS platform and distribution
Windows 10 x86_64
### Mobile device
_No response_
### Python version
3.10.8
### Bazel version
_No response_
### GCC/compiler version
gcc (MinGW.org GCC-6.3.0-1) 6.3.0
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
The file "tensorflow/c/tf_buffer.h" is missing.
### Standalone code to reproduce the issue
```shell
#include <stdio.h>
#include <tensorflow/c/c_api.h>
int main() {
  printf("Hello from TensorFlow C library version %s\n", TF_Version());
  return 0;
}
```
### Relevant log output
```shell
In file included from main.c:2:0:
include/tensorflow/c/c_api.h:23:36: fatal error: tensorflow/c/tf_buffer.h: No such file or directory
#include "tensorflow/c/tf_buffer.h"
^
```
| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61275/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61275/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61274 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61274/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61274/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61274/events | https://github.com/tensorflow/tensorflow/issues/61274 | 1,803,659,363 | I_kwDOArmXAs5rgahj | 61,274 | Error installing/importing TF model into Node red | {
"login": "brandonlaerdal",
"id": 139502937,
"node_id": "U_kgDOCFClWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/139502937?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brandonlaerdal",
"html_url": "https://github.com/brandonlaerdal",
"followers_url": "https://api.github.com/users/brandonlaerdal/followers",
"following_url": "https://api.github.com/users/brandonlaerdal/following{/other_user}",
"gists_url": "https://api.github.com/users/brandonlaerdal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brandonlaerdal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brandonlaerdal/subscriptions",
"organizations_url": "https://api.github.com/users/brandonlaerdal/orgs",
"repos_url": "https://api.github.com/users/brandonlaerdal/repos",
"events_url": "https://api.github.com/users/brandonlaerdal/events{/privacy}",
"received_events_url": "https://api.github.com/users/brandonlaerdal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473184161,
"node_id": "MDU6TGFiZWw0NzMxODQxNjE=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:support",
"name": "type:support",
"color": "159b2e",
"default": false,
"description": "Support issues"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 1093464312,
"node_id": "MDU6TGFiZWwxMDkzNDY0MzEy",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:others",
"name": "type:others",
"color": "159b2e",
"default": false,
"description": "issues not falling in bug, perfromance, support, build and install or feature"
}
] | closed | false | {
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sachinprasadhs",
"id": 73069040,
"node_id": "MDQ6VXNlcjczMDY5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/73069040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinprasadhs",
"html_url": "https://github.com/sachinprasadhs",
"followers_url": "https://api.github.com/users/sachinprasadhs/followers",
"following_url": "https://api.github.com/users/sachinprasadhs/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinprasadhs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinprasadhs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinprasadhs/subscriptions",
"organizations_url": "https://api.github.com/users/sachinprasadhs/orgs",
"repos_url": "https://api.github.com/users/sachinprasadhs/repos",
"events_url": "https://api.github.com/users/sachinprasadhs/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinprasadhs/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi, \r\n\r\nI don't we have any distributions done for Node Red and we don't provide any support outside of our releases published in our Github repo, https://pypi.org/project/tensorflow/.\r\n\r\nFrom the `Node Red` document, I was able to find the way to Install Tensorflow using community build. \r\nFor more details follow https://flows.nodered.org/node/node-red-contrib-tensorflow",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61274\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61274\">No</a>\n"
] | 2023-07-13T19:42:52 | 2023-07-30T01:52:03 | 2023-07-30T01:52:00 | NONE | null | null | null | Hello, I am trying to get TensorFlow to install in my Node-RED instance. It seems like there is an install issue, and I keep getting the error pictured below. I have also tried importing a model from Teachable Machine and I get the same error. Please help.


| {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61274/timeline | null | completed | false |
https://api.github.com/repos/tensorflow/tensorflow/issues/61272 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61272/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61272/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61272/events | https://github.com/tensorflow/tensorflow/pull/61272 | 1,803,381,221 | PR_kwDOArmXAs5VcKHJ | 61,272 | Fix ambiguity in use of overloaded functions in XLA | {
"login": "elfringham",
"id": 10442001,
"node_id": "MDQ6VXNlcjEwNDQyMDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/10442001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elfringham",
"html_url": "https://github.com/elfringham",
"followers_url": "https://api.github.com/users/elfringham/followers",
"following_url": "https://api.github.com/users/elfringham/following{/other_user}",
"gists_url": "https://api.github.com/users/elfringham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elfringham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elfringham/subscriptions",
"organizations_url": "https://api.github.com/users/elfringham/orgs",
"repos_url": "https://api.github.com/users/elfringham/repos",
"events_url": "https://api.github.com/users/elfringham/events{/privacy}",
"received_events_url": "https://api.github.com/users/elfringham/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 987666414,
"node_id": "MDU6TGFiZWw5ODc2NjY0MTQ=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/ready%20to%20pull",
"name": "ready to pull",
"color": "2cd643",
"default": false,
"description": "PR ready for merge process"
},
{
"id": 1133285679,
"node_id": "MDU6TGFiZWwxMTMzMjg1Njc5",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:xla",
"name": "comp:xla",
"color": "0052cc",
"default": false,
"description": "XLA"
},
{
"id": 1169364259,
"node_id": "MDU6TGFiZWwxMTY5MzY0MjU5",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/size:XS",
"name": "size:XS",
"color": "adafea",
"default": false,
"description": "CL Change Size: Extra Small"
}
] | closed | false | {
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gbaned",
"id": 48215717,
"node_id": "MDQ6VXNlcjQ4MjE1NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48215717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaned",
"html_url": "https://github.com/gbaned",
"followers_url": "https://api.github.com/users/gbaned/followers",
"following_url": "https://api.github.com/users/gbaned/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaned/subscriptions",
"organizations_url": "https://api.github.com/users/gbaned/orgs",
"repos_url": "https://api.github.com/users/gbaned/repos",
"events_url": "https://api.github.com/users/gbaned/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaned/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @elfringham Can you please resolve conflicts? Thank you!",
"Hi @elfringham Can you please resolve conflicts? Thank you!",
"@gbaned conflicts resolved. Sorry for the delay but I was out last week.",
"(Just for future reference)\r\n[AMD ROCm](http://ml-ci.amd.com:21096/blue/organizations/jenkins/tensorflow%2Fgithub-prs-upstream-master%2FAMD-ROCm-Community-CI-Build/detail/PR-61272/3/pipeline) failures are unrelated\r\n```\r\nERROR: /workspace/tensorflow/compiler/xla/service/gpu/BUILD:853:11: in deps attribute of cc_library rule //tensorflow/compiler/xla/service/gpu:gpu_executable: Label '//tensorflow/tsl/platform:random' is duplicated\r\n\r\nERROR: /workspace/tensorflow/compiler/xla/service/gpu/BUILD:853:11: Analysis of target '//tensorflow/compiler/xla/service/gpu:gpu_executable' failed\r\n```\r\n\r\n[Py+CPP Ubuntu GPU](https://source.cloud.google.com/results/invocations/f1a3c9a8-fd2f-40f7-85ac-660ac357df79/log) has 1 test timed out, unrelated to this PR.\r\n```\r\n//tensorflow/python/kernel_tests/linalg:normalize_op_test_gpu TIMEOUT in 1 out of 20 in 452.4s\r\n```"
] | 2023-07-13T16:34:20 | 2023-08-22T14:08:37 | 2023-07-25T14:13:34 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/pulls/61272",
"html_url": "https://github.com/tensorflow/tensorflow/pull/61272",
"diff_url": "https://github.com/tensorflow/tensorflow/pull/61272.diff",
"patch_url": "https://github.com/tensorflow/tensorflow/pull/61272.patch",
"merged_at": "2023-07-25T14:13:34"
} | Cast ambiguous parameter so that it is not ambiguous and so gcc will compile it. | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61272/timeline | null | null | true |
https://api.github.com/repos/tensorflow/tensorflow/issues/61271 | https://api.github.com/repos/tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow/issues/61271/labels{/name} | https://api.github.com/repos/tensorflow/tensorflow/issues/61271/comments | https://api.github.com/repos/tensorflow/tensorflow/issues/61271/events | https://github.com/tensorflow/tensorflow/issues/61271 | 1,802,982,930 | I_kwDOArmXAs5rd1YS | 61,271 | TFlite running interpreter->invoke() has failed - Segmentation fault | {
"login": "HemiFate",
"id": 87158423,
"node_id": "MDQ6VXNlcjg3MTU4NDIz",
"avatar_url": "https://avatars.githubusercontent.com/u/87158423?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HemiFate",
"html_url": "https://github.com/HemiFate",
"followers_url": "https://api.github.com/users/HemiFate/followers",
"following_url": "https://api.github.com/users/HemiFate/following{/other_user}",
"gists_url": "https://api.github.com/users/HemiFate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HemiFate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HemiFate/subscriptions",
"organizations_url": "https://api.github.com/users/HemiFate/orgs",
"repos_url": "https://api.github.com/users/HemiFate/repos",
"events_url": "https://api.github.com/users/HemiFate/events{/privacy}",
"received_events_url": "https://api.github.com/users/HemiFate/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 386191887,
"node_id": "MDU6TGFiZWwzODYxOTE4ODc=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stat:awaiting%20response",
"name": "stat:awaiting response",
"color": "f4b400",
"default": false,
"description": "Status - Awaiting response from author"
},
{
"id": 473172988,
"node_id": "MDU6TGFiZWw0NzMxNzI5ODg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/type:bug",
"name": "type:bug",
"color": "159b2e",
"default": false,
"description": "Bug"
},
{
"id": 474725938,
"node_id": "MDU6TGFiZWw0NzQ3MjU5Mzg=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/stale",
"name": "stale",
"color": "d4c5f9",
"default": false,
"description": "This label marks the issue/pr stale - to be closed automatically if no activity"
},
{
"id": 750616506,
"node_id": "MDU6TGFiZWw3NTA2MTY1MDY=",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:lite",
"name": "comp:lite",
"color": "0052cc",
"default": false,
"description": "TF Lite related issues"
},
{
"id": 1097543484,
"node_id": "MDU6TGFiZWwxMDk3NTQzNDg0",
"url": "https://api.github.com/repos/tensorflow/tensorflow/labels/comp:runtime",
"name": "comp:runtime",
"color": "0052cc",
"default": false,
"description": "c++ runtime, performance issues (cpu)"
}
] | closed | false | {
"login": "tilakrayal",
"id": 81610181,
"node_id": "MDQ6VXNlcjgxNjEwMTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/81610181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tilakrayal",
"html_url": "https://github.com/tilakrayal",
"followers_url": "https://api.github.com/users/tilakrayal/followers",
"following_url": "https://api.github.com/users/tilakrayal/following{/other_user}",
"gists_url": "https://api.github.com/users/tilakrayal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tilakrayal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tilakrayal/subscriptions",
"organizations_url": "https://api.github.com/users/tilakrayal/orgs",
"repos_url": "https://api.github.com/users/tilakrayal/repos",
"events_url": "https://api.github.com/users/tilakrayal/events{/privacy}",
"received_events_url": "https://api.github.com/users/tilakrayal/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "tilakrayal",
"id": 81610181,
"node_id": "MDQ6VXNlcjgxNjEwMTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/81610181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tilakrayal",
"html_url": "https://github.com/tilakrayal",
"followers_url": "https://api.github.com/users/tilakrayal/followers",
"following_url": "https://api.github.com/users/tilakrayal/following{/other_user}",
"gists_url": "https://api.github.com/users/tilakrayal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tilakrayal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tilakrayal/subscriptions",
"organizations_url": "https://api.github.com/users/tilakrayal/orgs",
"repos_url": "https://api.github.com/users/tilakrayal/repos",
"events_url": "https://api.github.com/users/tilakrayal/events{/privacy}",
"received_events_url": "https://api.github.com/users/tilakrayal/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@HemiFate,\r\nCould you please provide the complete standalone code and the tensorflow version you are trying to reproduce the issue which helps us to analyse the issue in an effective way. Thank you!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"@HemiFate,\r\nThe delegate test failure is more or less expected, though it shouldn't seg fault. In your test, did you explicitly call **TfLiteInterpreterAllocateTensors** before checking the output tensor data? Thank you!",
"This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.",
"This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.",
"Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61271\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/61271\">No</a>\n"
] | 2023-07-13T13:03:01 | 2023-08-18T01:46:32 | 2023-08-18T01:46:30 | NONE | null | null | null | In TFLite, I wrote a custom delegate in C++ and encountered an error: "Segmentation fault". This error occurs after the initialization is complete and specifically after the invocation of interpreter->invoke(). The custom delegate's Prepare function is executed, but the Eval function is not executed. | {
"url": "https://api.github.com/repos/tensorflow/tensorflow/issues/61271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/tensorflow/tensorflow/issues/61271/timeline | null | completed | false |
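The issue record above reports a segmentation fault at `interpreter->Invoke()` with a custom delegate, and the maintainer's comment asks whether `TfLiteInterpreterAllocateTensors` (the C API counterpart of `Interpreter::AllocateTensors`) was called before reading output data. A minimal, hedged sketch of the usual TFLite C++ call order — the model path and the commented-out `CreateMyDelegate()` factory are hypothetical placeholders, not taken from the reporter's code — is:

```cpp
// Sketch of the typical TFLite C++ invocation order with a custom delegate.
// Assumptions (not from the issue): the "model.tflite" path and the
// CreateMyDelegate() factory are hypothetical placeholders.
#include <cstdio>
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  if (!model) { std::fprintf(stderr, "failed to load model\n"); return 1; }

  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter) { std::fprintf(stderr, "failed to build interpreter\n"); return 1; }

  // A custom delegate would normally be applied here; CreateMyDelegate() is
  // a hypothetical factory, not a real TFLite API.
  // TfLiteDelegate* delegate = CreateMyDelegate();
  // if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) return 1;

  // AllocateTensors must succeed before Invoke and before touching tensor
  // buffers; skipping it is a common cause of segmentation faults.
  if (interpreter->AllocateTensors() != kTfLiteOk) return 1;

  // Fill inputs, e.g. via interpreter->typed_input_tensor<float>(0).
  if (interpreter->Invoke() != kTfLiteOk) return 1;

  float* output = interpreter->typed_output_tensor<float>(0);
  std::printf("first output value: %f\n", output ? output[0] : 0.0f);
  return 0;
}
```

If the delegate's Prepare runs but its Eval is never reached, checking the return statuses of `ModifyGraphWithDelegate` and `AllocateTensors` is usually the first diagnostic step before inspecting the delegate kernel itself.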