cve_id (stringlengths 13-16) | obtain_all_privilege (stringclasses 3 values) | obtain_user_privilege (stringclasses 2 values) | obtain_other_privilege (stringclasses 2 values) | user_interaction_required (stringclasses 3 values) | cvss2_vector_string (stringclasses 106 values) | cvss2_access_vector (stringclasses 4 values) | cvss2_access_complexity (stringclasses 4 values) | cvss2_authentication (stringclasses 3 values) | cvss2_confidentiality_impact (stringclasses 4 values) | cvss2_integrity_impact (stringclasses 4 values) | cvss2_availability_impact (stringclasses 4 values) | cvss2_base_score (stringclasses 50 values) | cvss3_vector_string (stringclasses 226 values) | cvss3_attack_vector (stringclasses 5 values) | cvss3_attack_complexity (stringclasses 3 values) | cvss3_privileges_required (stringclasses 4 values) | cvss3_user_interaction (stringclasses 3 values) | cvss3_scope (stringclasses 3 values) | cvss3_confidentiality_impact (stringclasses 4 values) | cvss3_integrity_impact (stringclasses 4 values) | cvss3_availability_impact (stringclasses 4 values) | cvss3_base_score (stringclasses 55 values) | cvss3_base_severity (stringclasses 5 values) | exploitability_score (stringclasses 22 values) | impact_score (stringclasses 15 values) | ac_insuf_info (stringclasses 3 values) | reference_json (stringlengths 221-23.3k) | problemtype_json (stringclasses 200 values) | severity (stringclasses 4 values) | cve_nodes (stringlengths 2-33.1k) | cve_description (stringlengths 64-1.99k) | cve_last_modified_date (stringlengths 17) | cve_published_date (stringlengths 17) | cwe_name (stringclasses 125 values) | cwe_description (stringclasses 124 values) | cwe_extended_description (stringclasses 95 values) | cwe_url (stringclasses 124 values) | cwe_is_category (int64, 0-1) | commit_author (stringlengths 0-34) | commit_author_date (stringlengths 25) | commit_msg (stringlengths 0-13.3k) | commit_hash (stringlengths 40) | commit_is_merge (stringclasses 1 value) | repo_name (stringclasses 467 values) | repo_description (stringclasses 459 values) | repo_date_created (stringclasses 467 values) | repo_date_last_push (stringclasses 467 values) | repo_homepage (stringclasses 294 values) | repo_owner (stringclasses 470 values) | repo_stars (stringclasses 406 values) | repo_forks (stringclasses 352 values) | function_name (stringlengths 3-120) | function_signature (stringlengths 6-640) | function_parameters (stringlengths 2-302) | function (stringlengths 12-114k) | function_token_count (stringlengths 1-5) | function_before_change (stringclasses 1 value) | labels (int64, 1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
CVE-2019-16778 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/db4f9717c41bccc3ce10099ab61996b246099892', 'name': 'https://github.com/tensorflow/tensorflow/commit/db4f9717c41bccc3ce10099ab61996b246099892', 'refsource': 'MISC', 'tags': ['Patch']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/security/advisory/tfsa-2019-002.md', 'name': 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/security/advisory/tfsa-2019-002.md', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-844w-j86r-4x2j', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-844w-j86r-4x2j', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-681'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '1.0.0', 'versionEndExcluding': '1.15.0', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'In TensorFlow before 1.15, a heap buffer overflow in UnsortedSegmentSum can be produced when the Index template argument is int32. In this case data_size and num_segments fields are truncated from int64 to int32 and can produce negative numbers, resulting in accessing out of bounds heap memory. This is unlikely to be exploitable and was detected and fixed internally in TensorFlow 1.15 and 2.0.'}] | 2021-10-29T15:03Z | 2019-12-16T21:15Z | Incorrect Conversion between Numeric Types | When converting from one data type to another, such as long to integer, data can be omitted or translated in a way that produces unexpected values. If the resulting values are used in a sensitive context, then dangerous behaviors may occur. | https://cwe.mitre.org/data/definitions/681.html | 0 | RJ Skerry-Ryan | 2019-07-03 15:45:01-07:00 | Fix heap buffer overflow in UnsortedSegmentSum.
When Index=int32, data_size and num_segments were truncated from int64 to int32. This truncation can produce negative numbers, which causes UnsortedSegmentFunctor to access out of bounds memory.
Also:
- Switches some indexing calculations to int64 to avoid signed integer overflow when either the input or output tensors have more than 2**31 - 1 elements.
- Fixes a range check error in the GPU kernel. The segment ID was checked against an upper bound measured in elements, not segments.
PiperOrigin-RevId: 256451663 | db4f9717c41bccc3ce10099ab61996b246099892 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::functor::UnsortedSegmentFunctor<GPUDevice,T,Index,InitialValueF,ReductionF>::operator ( ) | tensorflow::functor::UnsortedSegmentFunctor<GPUDevice,T,Index,InitialValueF,ReductionF>::operator ( )( OpKernelContext * ctx , const Index num_segments , const TensorShape & segment_ids_shape , typename TTypes<Index> :: ConstFlat segment_ids , const Index data_size , const T * data , typename TTypes<T,2> :: Tensor output) | ['ctx', 'num_segments', 'segment_ids_shape', 'segment_ids', 'data_size', 'data', 'output'] | void operator()(OpKernelContext* ctx, const Index num_segments,
const TensorShape& segment_ids_shape,
typename TTypes<Index>::ConstFlat segment_ids,
const Index data_size, const T* data,
typename TTypes<T, 2>::Tensor output) {
if (output.size() == 0) {
return;
}
// Set 'output' to initial value.
GPUDevice d = ctx->template eigen_device<GPUDevice>();
GpuLaunchConfig config = GetGpuLaunchConfig(output.size(), d);
TF_CHECK_OK(GpuLaunchKernel(
SetToValue<T>, config.block_count, config.thread_per_block, 0,
d.stream(), output.size(), output.data(), InitialValueF()()));
if (data_size == 0 || segment_ids_shape.num_elements() == 0) {
return;
}
// Launch kernel to compute unsorted segment reduction.
// Notes:
// *) 'data_size' is the total number of elements to process.
// *) 'segment_ids.shape' is a prefix of data's shape.
// *) 'input_outer_dim_size' is the total number of segments to process.
const Index input_outer_dim_size = segment_ids.dimension(0);
const Index input_inner_dim_size = data_size / input_outer_dim_size;
config = GetGpuLaunchConfig(data_size, d);
TF_CHECK_OK(
GpuLaunchKernel(UnsortedSegmentCustomKernel<T, Index, ReductionF>,
config.block_count, config.thread_per_block, 0,
d.stream(), input_outer_dim_size, input_inner_dim_size,
num_segments, segment_ids.data(), data, output.data()));
} | 231 | True | 1 |
CVE-2020-15212 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | LOW | LOW | HIGH | 8.6 | HIGH | 3.9 | 4.7 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-hx2x-85gr-wrpq', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-hx2x-85gr-wrpq', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/204945b19e44b57906c9344c0d00120eeeae178a', 'name': 'https://github.com/tensorflow/tensorflow/commit/204945b19e44b57906c9344c0d00120eeeae178a', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'In TensorFlow Lite before versions 2.2.1 and 2.3.1, models using segment sum can trigger writes outside of bounds of heap allocated buffers by inserting negative elements in the segment ids tensor. Users having access to `segment_ids_data` can alter `output_index` and then write to outside of `output_data` buffer. This might result in a segmentation fault but it can also be used to further corrupt the memory and can be chained with other vulnerabilities to create more advanced exploits. The issue is patched in commit 204945b19e44b57906c9344c0d00120eeeae178a and is released in TensorFlow versions 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that the segment ids are all positive, although this only handles the case when the segment ids are stored statically in the model. A similar validation could be done if the segment ids are generated at runtime between inference steps. If the segment ids are generated as outputs of a tensor during inference steps, then there are no possible workaround and users are advised to upgrade to patched code.'}] | 2021-08-17T13:21Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:04:23-07:00 | [tflite] Validate segment ids for segment_sum.
Segment identifiers in segment_sum should be in a 1-D tensor of same size as the first dimension of the input. The values of the tensor should be integers from {0, 1, 2, ... k-1}, where k is the first dimension of the input. The segment identifiers must not contain jumps and must be increasing.
See https://www.tensorflow.org/api_docs/python/tf/math#Segmentation as the source for these constraints.
PiperOrigin-RevId: 332510942
Change-Id: I898beaba00642c918bcd4b4d4ce893ebb190d869 | 204945b19e44b57906c9344c0d00120eeeae178a | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::builtin::segment_sum::ResizeOutputTensor | tflite::ops::builtin::segment_sum::ResizeOutputTensor( TfLiteContext * context , const TfLiteTensor * data , const TfLiteTensor * segment_ids , TfLiteTensor * output) | ['context', 'data', 'segment_ids', 'output'] | TfLiteStatus ResizeOutputTensor(TfLiteContext* context,
const TfLiteTensor* data,
const TfLiteTensor* segment_ids,
TfLiteTensor* output) {
int max_index = -1;
const int segment_id_size = segment_ids->dims->data[0];
if (segment_id_size > 0) {
max_index = segment_ids->data.i32[segment_id_size - 1];
}
const int data_rank = NumDimensions(data);
TfLiteIntArray* output_shape = TfLiteIntArrayCreate(NumDimensions(data));
output_shape->data[0] = max_index + 1;
for (int i = 1; i < data_rank; ++i) {
output_shape->data[i] = data->dims->data[i];
}
return context->ResizeTensor(context, output, output_shape);
} | 138 | True | 1 |
CVE-2020-15213 | False | False | False | False | AV:N/AC:M/Au:N/C:N/I:N/A:P | NETWORK | MEDIUM | NONE | NONE | NONE | PARTIAL | 4.3 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:C/C:N/I:N/A:L | NETWORK | HIGH | NONE | NONE | CHANGED | NONE | NONE | LOW | 4.0 | MEDIUM | 2.2 | 1.4 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-hjmq-236j-8m87', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-hjmq-236j-8m87', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/204945b19e44b57906c9344c0d00120eeeae178a', 'name': 'https://github.com/tensorflow/tensorflow/commit/204945b19e44b57906c9344c0d00120eeeae178a', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-770'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'In TensorFlow Lite before versions 2.2.1 and 2.3.1, models using segment sum can trigger a denial of service by causing an out of memory allocation in the implementation of segment sum. Since code uses the last element of the tensor holding them to determine the dimensionality of output tensor, attackers can use a very large value to trigger a large allocation. The issue is patched in commit 204945b19e44b57906c9344c0d00120eeeae178a and is released in TensorFlow versions 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to limit the maximum value in the segment ids tensor. This only handles the case when the segment ids are stored statically in the model, but a similar validation could be done if the segment ids are generated at runtime, between inference steps. However, if the segment ids are generated as outputs of a tensor during inference steps, then there are no possible workaround and users are advised to upgrade to patched code.'}] | 2021-11-18T17:28Z | 2020-09-25T19:15Z | Allocation of Resources Without Limits or Throttling | The software allocates a reusable resource or group of resources on behalf of an actor without imposing any restrictions on the size or number of resources that can be allocated, in violation of the intended security policy for that actor. | Code frequently has to work with limited resources, so programmers must be careful to ensure that resources are not consumed too quickly, or too easily. Without use of quotas, resource limits, or other protection mechanisms, it can be easy for an attacker to consume many resources by rapidly making many requests, or causing larger resources to be used than is needed. When too many resources are allocated, or if a single resource is too large, then it can prevent the code from working correctly, possibly leading to a denial of service.
| https://cwe.mitre.org/data/definitions/770.html | 0 | Mihai Maruseac | 2020-09-18 13:04:23-07:00 | [tflite] Validate segment ids for segment_sum.
Segment identifiers in segment_sum should be in a 1-D tensor of same size as the first dimension of the input. The values of the tensor should be integers from {0, 1, 2, ... k-1}, where k is the first dimension of the input. The segment identifiers must not contain jumps and must be increasing.
See https://www.tensorflow.org/api_docs/python/tf/math#Segmentation as the source for these constraints.
PiperOrigin-RevId: 332510942
Change-Id: I898beaba00642c918bcd4b4d4ce893ebb190d869 | 204945b19e44b57906c9344c0d00120eeeae178a | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::builtin::segment_sum::ResizeOutputTensor | tflite::ops::builtin::segment_sum::ResizeOutputTensor( TfLiteContext * context , const TfLiteTensor * data , const TfLiteTensor * segment_ids , TfLiteTensor * output) | ['context', 'data', 'segment_ids', 'output'] | TfLiteStatus ResizeOutputTensor(TfLiteContext* context,
const TfLiteTensor* data,
const TfLiteTensor* segment_ids,
TfLiteTensor* output) {
int max_index = -1;
const int segment_id_size = segment_ids->dims->data[0];
if (segment_id_size > 0) {
max_index = segment_ids->data.i32[segment_id_size - 1];
}
const int data_rank = NumDimensions(data);
TfLiteIntArray* output_shape = TfLiteIntArrayCreate(NumDimensions(data));
output_shape->data[0] = max_index + 1;
for (int i = 1; i < data_rank; ++i) {
output_shape->data[i] = data->dims->data[i];
}
return context->ResizeTensor(context, output, output_shape);
} | 138 | True | 1 |
CVE-2020-15214 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:P | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | PARTIAL | 6.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:C/C:L/I:L/A:H | NETWORK | HIGH | NONE | NONE | CHANGED | LOW | LOW | HIGH | 8.1 | HIGH | 2.2 | 5.3 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-p2cq-cprg-frvm', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-p2cq-cprg-frvm', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/204945b19e44b57906c9344c0d00120eeeae178a', 'name': 'https://github.com/tensorflow/tensorflow/commit/204945b19e44b57906c9344c0d00120eeeae178a', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'In TensorFlow Lite before versions 2.2.1 and 2.3.1, models using segment sum can trigger a write out bounds / segmentation fault if the segment ids are not sorted. Code assumes that the segment ids are in increasing order, using the last element of the tensor holding them to determine the dimensionality of output tensor. This results in allocating insufficient memory for the output tensor and in a write outside the bounds of the output array. This usually results in a segmentation fault, but depending on runtime conditions it can provide for a write gadget to be used in future memory corruption-based exploits. The issue is patched in commit 204945b19e44b57906c9344c0d00120eeeae178a and is released in TensorFlow versions 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that the segment ids are sorted, although this only handles the case when the segment ids are stored statically in the model. A similar validation could be done if the segment ids are generated at runtime between inference steps. If the segment ids are generated as outputs of a tensor during inference steps, then there are no possible workaround and users are advised to upgrade to patched code.'}] | 2021-08-17T13:21Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:04:23-07:00 | [tflite] Validate segment ids for segment_sum.
Segment identifiers in segment_sum should be in a 1-D tensor of same size as the first dimension of the input. The values of the tensor should be integers from {0, 1, 2, ... k-1}, where k is the first dimension of the input. The segment identifiers must not contain jumps and must be increasing.
See https://www.tensorflow.org/api_docs/python/tf/math#Segmentation as the source for these constraints.
PiperOrigin-RevId: 332510942
Change-Id: I898beaba00642c918bcd4b4d4ce893ebb190d869 | 204945b19e44b57906c9344c0d00120eeeae178a | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::builtin::segment_sum::ResizeOutputTensor | tflite::ops::builtin::segment_sum::ResizeOutputTensor( TfLiteContext * context , const TfLiteTensor * data , const TfLiteTensor * segment_ids , TfLiteTensor * output) | ['context', 'data', 'segment_ids', 'output'] | TfLiteStatus ResizeOutputTensor(TfLiteContext* context,
const TfLiteTensor* data,
const TfLiteTensor* segment_ids,
TfLiteTensor* output) {
int max_index = -1;
const int segment_id_size = segment_ids->dims->data[0];
if (segment_id_size > 0) {
max_index = segment_ids->data.i32[segment_id_size - 1];
}
const int data_rank = NumDimensions(data);
TfLiteIntArray* output_shape = TfLiteIntArrayCreate(NumDimensions(data));
output_shape->data[0] = max_index + 1;
for (int i = 1; i < data_rank; ++i) {
output_shape->data[i] = data->dims->data[i];
}
return context->ResizeTensor(context, output, output_shape);
} | 138 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:10:41-07:00 | [tflite] Test for `kTfLiteOptionalTensor` in `GetInput`.
`GetInput`, `GetVariableInput` and `GetOutput` all fail to check for the case where `node->inputs->data[index]` is the special `kTfLiteOptionalTensor` value (-1) which then causes `context->tensors[node->inputs->data[index]]` to read from invalid memory location.
This fix makes `GetInput` and related return `nullptr` in those cases, asking the caller to check for `nullptr`. This is better than having `GetOptionalInputTensor` and `GetOptionalOutputTensor` (does not exist but could be added) as using the patched `GetInput` in error would be caught by a sanitizer test in the default optimized build (due to the `-fsanitize=null` option).
PiperOrigin-RevId: 332512190
Change-Id: Iabca54da2f2de02b6ece3c38b54f76d4277d689e | 46d5b0852528ddfd614ded79bccc75589f801bd9 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::GetMutableInput | tflite::GetMutableInput( const TfLiteContext * context , const TfLiteNode * node , int index) | ['context', 'node', 'index'] | inline TfLiteTensor* GetMutableInput(const TfLiteContext* context,
const TfLiteNode* node, int index) {
if (context->tensors != nullptr) {
return &context->tensors[node->inputs->data[index]];
} else {
return context->GetTensor(context, node->inputs->data[index]);
}
} | 63 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:10:41-07:00 | [tflite] Test for `kTfLiteOptionalTensor` in `GetInput`.
`GetInput`, `GetVariableInput` and `GetOutput` all fail to check for the case where `node->inputs->data[index]` is the special `kTfLiteOptionalTensor` value (-1) which then causes `context->tensors[node->inputs->data[index]]` to read from invalid memory location.
This fix makes `GetInput` and related return `nullptr` in those cases, asking the caller to check for `nullptr`. This is better than having `GetOptionalInputTensor` and `GetOptionalOutputTensor` (does not exist but could be added) as using the patched `GetInput` in error would be caught by a sanitizer test in the default optimized build (due to the `-fsanitize=null` option).
PiperOrigin-RevId: 332512190
Change-Id: Iabca54da2f2de02b6ece3c38b54f76d4277d689e | 46d5b0852528ddfd614ded79bccc75589f801bd9 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::GetMutableInput | tflite::GetMutableInput( const TfLiteContext * context , const TfLiteNode * node , int index) | ['context', 'node', 'index'] | inline TfLiteTensor* GetMutableInput(const TfLiteContext* context,
const TfLiteNode* node, int index) {
if (context->tensors != nullptr) {
return &context->tensors[node->inputs->data[index]];
} else {
return context->GetTensor(context, node->inputs->data[index]);
}
} | 63 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:10:41-07:00 | [tflite] Test for `kTfLiteOptionalTensor` in `GetInput`.
`GetInput`, `GetVariableInput` and `GetOutput` all fail to check for the case where `node->inputs->data[index]` is the special `kTfLiteOptionalTensor` value (-1) which then causes `context->tensors[node->inputs->data[index]]` to read from invalid memory location.
This fix makes `GetInput` and related return `nullptr` in those cases, asking the caller to check for `nullptr`. This is better than having `GetOptionalInputTensor` and `GetOptionalOutputTensor` (does not exist but could be added) as using the patched `GetInput` in error would be caught by a sanitizer test in the default optimized build (due to the `-fsanitize=null` option).
PiperOrigin-RevId: 332512190
Change-Id: Iabca54da2f2de02b6ece3c38b54f76d4277d689e | 46d5b0852528ddfd614ded79bccc75589f801bd9 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::GetOutput | tflite::GetOutput( TfLiteContext * context , const TfLiteNode * node , int index) | ['context', 'node', 'index'] | TfLiteTensor* GetOutput(TfLiteContext* context, const TfLiteNode* node,
int index) {
if (context->tensors != nullptr) {
return &context->tensors[node->outputs->data[index]];
} else {
return context->GetTensor(context, node->outputs->data[index]);
}
} | 62 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:10:41-07:00 | [tflite] Test for `kTfLiteOptionalTensor` in `GetInput`.
`GetInput`, `GetVariableInput` and `GetOutput` all fail to check for the case where `node->inputs->data[index]` is the special `kTfLiteOptionalTensor` value (-1) which then causes `context->tensors[node->inputs->data[index]]` to read from invalid memory location.
This fix makes `GetInput` and related return `nullptr` in those cases, asking the caller to check for `nullptr`. This is better than having `GetOptionalInputTensor` and `GetOptionalOutputTensor` (does not exist but could be added) as using the patched `GetInput` in error would be caught by a sanitizer test in the default optimized build (due to the `-fsanitize=null` option).
PiperOrigin-RevId: 332512190
Change-Id: Iabca54da2f2de02b6ece3c38b54f76d4277d689e | 46d5b0852528ddfd614ded79bccc75589f801bd9 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::GetOutput | tflite::GetOutput( TfLiteContext * context , const TfLiteNode * node , int index) | ['context', 'node', 'index'] | TfLiteTensor* GetOutput(TfLiteContext* context, const TfLiteNode* node,
int index) {
if (context->tensors != nullptr) {
return &context->tensors[node->outputs->data[index]];
} else {
return context->GetTensor(context, node->outputs->data[index]);
}
} | 62 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:16:53-07:00 | [tflite] Make `GetOptionalInputTensor` the same as `GetInput`.
With the previous change, there is no more need for two separate APIs. We would deprecate `GetOptionalInputTensor` in the future.
PiperOrigin-RevId: 332513386
Change-Id: Id7110271c25ebd6126ad8c82a493e37e0e0756b3 | 00302787b788c5ff04cb6f62aed5a74d936e86c0 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::GetOptionalInputTensor | tflite::GetOptionalInputTensor( const TfLiteContext * context , const TfLiteNode * node , int index) | ['context', 'node', 'index'] | const TfLiteTensor* GetOptionalInputTensor(const TfLiteContext* context,
const TfLiteNode* node, int index) {
const bool use_tensor = index < node->inputs->size &&
node->inputs->data[index] != kTfLiteOptionalTensor;
if (use_tensor) {
return GetMutableInput(context, node, index);
}
return nullptr;
} | 59 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:16:53-07:00 | [tflite] Make `GetOptionalInputTensor` the same as `GetInput`.
With the previous change, there is no more need for two separate APIs. We would deprecate `GetOptionalInputTensor` in the future.
PiperOrigin-RevId: 332513386
Change-Id: Id7110271c25ebd6126ad8c82a493e37e0e0756b3 | 00302787b788c5ff04cb6f62aed5a74d936e86c0 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::GetOptionalInputTensor | tflite::GetOptionalInputTensor( const TfLiteContext * context , const TfLiteNode * node , int index) | ['context', 'node', 'index'] | const TfLiteTensor* GetOptionalInputTensor(const TfLiteContext* context,
const TfLiteNode* node, int index) {
const bool use_tensor = index < node->inputs->size &&
node->inputs->data[index] != kTfLiteOptionalTensor;
if (use_tensor) {
return GetMutableInput(context, node, index);
}
return nullptr;
} | 59 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::AddOpRegistration | tflite::AddOpRegistration() | [] | TfLiteRegistration AddOpRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
reg.custom_name = "my_add";
reg.builtin_code = tflite::BuiltinOperator_CUSTOM;
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
// Set output size to input size
const TfLiteTensor* input1 = GetInput(context, node, 0);
const TfLiteTensor* input2 = GetInput(context, node, 1);
TfLiteTensor* output = GetOutput(context, node, 0);
TF_LITE_ENSURE_EQ(context, input1->dims->size, input2->dims->size);
for (int i = 0; i < input1->dims->size; ++i) {
TF_LITE_ENSURE_EQ(context, input1->dims->data[i], input2->dims->data[i]);
}
TF_LITE_ENSURE_STATUS(context->ResizeTensor(
context, output, TfLiteIntArrayCopy(input1->dims)));
return kTfLiteOk;
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
// Copy input data to output data.
const TfLiteTensor* a0 = GetInput(context, node, 0);
TF_LITE_ENSURE(context, a0);
TF_LITE_ENSURE(context, a0->data.f);
const TfLiteTensor* a1 = GetInput(context, node, 1);
TF_LITE_ENSURE(context, a1);
TF_LITE_ENSURE(context, a1->data.f);
TfLiteTensor* out = GetOutput(context, node, 0);
TF_LITE_ENSURE(context, out);
TF_LITE_ENSURE(context, out->data.f);
int num = a0->dims->data[0];
for (int i = 0; i < num; i++) {
out->data.f[i] = a0->data.f[i] + a1->data.f[i];
}
return kTfLiteOk;
};
return reg;
} | 347 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::AddOpRegistration | tflite::AddOpRegistration() | [] | TfLiteRegistration AddOpRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
reg.custom_name = "my_add";
reg.builtin_code = tflite::BuiltinOperator_CUSTOM;
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
// Set output size to input size
const TfLiteTensor* input1 = GetInput(context, node, 0);
const TfLiteTensor* input2 = GetInput(context, node, 1);
TfLiteTensor* output = GetOutput(context, node, 0);
TF_LITE_ENSURE_EQ(context, input1->dims->size, input2->dims->size);
for (int i = 0; i < input1->dims->size; ++i) {
TF_LITE_ENSURE_EQ(context, input1->dims->data[i], input2->dims->data[i]);
}
TF_LITE_ENSURE_STATUS(context->ResizeTensor(
context, output, TfLiteIntArrayCopy(input1->dims)));
return kTfLiteOk;
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
// Copy input data to output data.
const TfLiteTensor* a0 = GetInput(context, node, 0);
TF_LITE_ENSURE(context, a0);
TF_LITE_ENSURE(context, a0->data.f);
const TfLiteTensor* a1 = GetInput(context, node, 1);
TF_LITE_ENSURE(context, a1);
TF_LITE_ENSURE(context, a1->data.f);
TfLiteTensor* out = GetOutput(context, node, 0);
TF_LITE_ENSURE(context, out);
TF_LITE_ENSURE(context, out->data.f);
int num = a0->dims->data[0];
for (int i = 0; i < num; i++) {
out->data.f[i] = a0->data.f[i] + a1->data.f[i];
}
return kTfLiteOk;
};
return reg;
} | 347 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::TestDelegate::SimpleDelegate::tflite::TestDelegate::SimpleDelegate::SimpleDelegate.FakeFusedRegistration | tflite::TestDelegate::SimpleDelegate::tflite::TestDelegate::SimpleDelegate::SimpleDelegate.FakeFusedRegistration() | [] | TfLiteRegistration FakeFusedRegistration() {
TfLiteRegistration reg = {nullptr};
reg.custom_name = "fake_fused_op";
// Different flavors of the delegate kernel's Invoke(), dependent on
// testing parameters.
if (fail_delegate_node_invoke_) {
reg.invoke = [](TfLiteContext* context,
TfLiteNode* node) -> TfLiteStatus {
return kTfLiteError;
};
} else {
reg.invoke = [](TfLiteContext* context,
TfLiteNode* node) -> TfLiteStatus {
// Copy input data to output data.
const TfLiteTensor* a0;
const TfLiteTensor* a1;
if (node->inputs->size == 2) {
a0 = GetInput(context, node, 0);
a1 = GetInput(context, node, 1);
} else {
a0 = GetInput(context, node, 0);
a1 = a0;
}
TfLiteTensor* out = GetOutput(context, node, 0);
int num = 1;
for (int i = 0; i < a0->dims->size; ++i) {
num *= a0->dims->data[i];
}
for (int i = 0; i < num; i++) {
out->data.f[i] = a0->data.f[i] + a1->data.f[i];
}
if (out->buffer_handle != kTfLiteNullBufferHandle) {
// Make the data stale so that CopyFromBufferHandle can be invoked
out->data_is_stale = true;
}
return kTfLiteOk;
};
}
// Different flavors of the delegate kernel's Prepare(), dependent on
// testing parameters.
if (automatic_shape_propagation_) {
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
      // Shapes should already be propagated by the runtime, just need to
// check.
const TfLiteTensor* input1 = GetInput(context, node, 0);
TfLiteTensor* output = GetOutput(context, node, 0);
const int input_dims_size = input1->dims->size;
TF_LITE_ENSURE(context, output->dims->size == input_dims_size);
for (int i = 0; i < input_dims_size; ++i) {
TF_LITE_ENSURE(context,
output->dims->data[i] == input1->dims->data[i]);
}
return kTfLiteOk;
};
} else if (fail_delegate_node_prepare_) {
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
return kTfLiteError;
};
} else {
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
// Set output size to input size
const TfLiteTensor* input1;
const TfLiteTensor* input2;
if (node->inputs->size == 2) {
input1 = GetInput(context, node, 0);
input2 = GetInput(context, node, 1);
} else {
input1 = GetInput(context, node, 0);
input2 = input1;
}
TfLiteTensor* output = GetOutput(context, node, 0);
TF_LITE_ENSURE_STATUS(context->ResizeTensor(
context, output, TfLiteIntArrayCopy(input1->dims)));
return kTfLiteOk;
};
}
return reg;
} | 509 | True | 1 |
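In the `invoke` lambda above, the element count written to `out` is derived only from `a0`'s dimensions, so a model whose output tensor is smaller than its inputs would push the copy loop past the output buffer. As a sketch under that assumption (stand-in tensor type, not the delegate API), the code below adds an explicit capacity check before the loop.

// Element-wise add with an explicit output-capacity check.
// SimpleTensor is a stand-in for a tensor with a flat float buffer.
#include <cstdio>
#include <vector>

struct SimpleTensor {
  std::vector<float> data;
  int num_elements() const { return static_cast<int>(data.size()); }
};

// Returns false instead of writing when the output cannot hold the result.
bool ElementwiseAdd(const SimpleTensor& a0, const SimpleTensor& a1, SimpleTensor* out) {
  const int num = a0.num_elements();
  if (a1.num_elements() < num || out->num_elements() < num) {
    return false;  // would otherwise read a1 or write out past the end
  }
  for (int i = 0; i < num; ++i) {
    out->data[i] = a0.data[i] + a1.data[i];
  }
  return true;
}

int main() {
  SimpleTensor a0{{1.f, 2.f, 3.f}}, a1{{4.f, 5.f, 6.f}};
  SimpleTensor ok_out{{0.f, 0.f, 0.f}};
  SimpleTensor short_out{{0.f}};  // deliberately too small
  std::printf("well-sized output accepted: %d\n", ElementwiseAdd(a0, a1, &ok_out));
  std::printf("undersized output rejected: %d\n", !ElementwiseAdd(a0, a1, &short_out));
  return 0;
}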
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::TestDelegate::SimpleDelegate::tflite::TestDelegate::SimpleDelegate::SimpleDelegate.FakeFusedRegistration | tflite::TestDelegate::SimpleDelegate::tflite::TestDelegate::SimpleDelegate::SimpleDelegate.FakeFusedRegistration() | [] | TfLiteRegistration FakeFusedRegistration() {
TfLiteRegistration reg = {nullptr};
reg.custom_name = "fake_fused_op";
// Different flavors of the delegate kernel's Invoke(), dependent on
// testing parameters.
if (fail_delegate_node_invoke_) {
reg.invoke = [](TfLiteContext* context,
TfLiteNode* node) -> TfLiteStatus {
return kTfLiteError;
};
} else {
reg.invoke = [](TfLiteContext* context,
TfLiteNode* node) -> TfLiteStatus {
// Copy input data to output data.
const TfLiteTensor* a0;
const TfLiteTensor* a1;
if (node->inputs->size == 2) {
a0 = GetInput(context, node, 0);
a1 = GetInput(context, node, 1);
} else {
a0 = GetInput(context, node, 0);
a1 = a0;
}
TfLiteTensor* out = GetOutput(context, node, 0);
int num = 1;
for (int i = 0; i < a0->dims->size; ++i) {
num *= a0->dims->data[i];
}
for (int i = 0; i < num; i++) {
out->data.f[i] = a0->data.f[i] + a1->data.f[i];
}
if (out->buffer_handle != kTfLiteNullBufferHandle) {
// Make the data stale so that CopyFromBufferHandle can be invoked
out->data_is_stale = true;
}
return kTfLiteOk;
};
}
// Different flavors of the delegate kernel's Prepare(), dependent on
// testing parameters.
if (automatic_shape_propagation_) {
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
      // Shapes should already be propagated by the runtime, just need to
// check.
const TfLiteTensor* input1 = GetInput(context, node, 0);
TfLiteTensor* output = GetOutput(context, node, 0);
const int input_dims_size = input1->dims->size;
TF_LITE_ENSURE(context, output->dims->size == input_dims_size);
for (int i = 0; i < input_dims_size; ++i) {
TF_LITE_ENSURE(context,
output->dims->data[i] == input1->dims->data[i]);
}
return kTfLiteOk;
};
} else if (fail_delegate_node_prepare_) {
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
return kTfLiteError;
};
} else {
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
// Set output size to input size
const TfLiteTensor* input1;
const TfLiteTensor* input2;
if (node->inputs->size == 2) {
input1 = GetInput(context, node, 0);
input2 = GetInput(context, node, 1);
} else {
input1 = GetInput(context, node, 0);
input2 = input1;
}
TfLiteTensor* output = GetOutput(context, node, 0);
TF_LITE_ENSURE_STATUS(context->ResizeTensor(
context, output, TfLiteIntArrayCopy(input1->dims)));
return kTfLiteOk;
};
}
return reg;
} | 509 | True | 1 |
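The advisory text in these records offers a workaround: a custom `Verifier` that accepts the `-1` index only for operators known to take optional inputs, while warning that such allow lists are error-prone compared to upgrading. The sketch below models that allow-list check in plain C++; the operator names and table are hypothetical, not TFLite's real schema or verifier interface.

// Hypothetical allow-list verifier for the -1 "optional tensor" sentinel.
// Operator names and the table below are illustrative, not TFLite's real schema.
#include <cstdio>
#include <set>
#include <string>
#include <vector>

struct OpRecord {
  std::string name;
  std::vector<int> inputs;   // tensor indices; -1 means "optional, absent"
  std::vector<int> outputs;
};

// Only these (hypothetical) operators are allowed to carry -1 input indices.
const std::set<std::string> kOpsWithOptionalInputs = {"FULLY_CONNECTED", "LSTM"};

bool VerifyOp(const OpRecord& op, int num_tensors) {
  for (int idx : op.outputs) {
    if (idx < 0 || idx >= num_tensors) return false;  // outputs are never optional
  }
  for (int idx : op.inputs) {
    if (idx >= num_tensors) return false;
    if (idx < 0) {
      // Only the -1 sentinel is meaningful, and only for ops that take optional inputs.
      if (idx != -1 || kOpsWithOptionalInputs.count(op.name) == 0) return false;
    }
  }
  return true;
}

int main() {
  OpRecord ok{"FULLY_CONNECTED", {0, 1, -1}, {2}};  // optional bias omitted
  OpRecord bad{"ADD", {0, -1}, {2}};                // ADD takes no optional inputs here
  std::printf("FULLY_CONNECTED with -1 accepted: %d\n", VerifyOp(ok, 3));
  std::printf("ADD with -1 rejected: %d\n", !VerifyOp(bad, 3));
  return 0;
}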
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::TestDelegate::TestDelegateWithDynamicTensors::DelegateRegistration | tflite::TestDelegate::TestDelegateWithDynamicTensors::DelegateRegistration() | [] | static TfLiteRegistration DelegateRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
// If tensors are resized, the runtime should propagate shapes
// automatically if correct flag is set. Ensure values are correct.
// Output 0 should be dynamic.
TfLiteTensor* output0 = GetOutput(context, node, 0);
TF_LITE_ENSURE(context, IsDynamicTensor(output0));
// Output 1 has the same shape as input.
const TfLiteTensor* input = GetInput(context, node, 0);
TfLiteTensor* output1 = GetOutput(context, node, 1);
TF_LITE_ENSURE(context, input->dims->size == output1->dims->size);
TF_LITE_ENSURE(context, input->dims->data[0] == output1->dims->data[0]);
return kTfLiteOk;
};
return reg;
} | 132 | True | 1 |
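The delegate `prepare` above expects output 0 to be dynamic and output 1 to mirror the input's shape. As a loose, assumption-laden sketch (stand-in buffer type, not the TFLite tensor), the code below separates the two cases: a statically shaped output is sized during a prepare step, while a dynamic output is sized only when its runtime shape is known.

// Stand-in model of the static/dynamic output split checked in the prepare above.
// ShapedBuffer is illustrative only; it is not the TFLite tensor type.
#include <cstdio>
#include <vector>

struct ShapedBuffer {
  std::vector<int> dims;
  std::vector<float> data;
};

int NumElements(const std::vector<int>& dims) {
  int n = 1;
  for (int d : dims) n *= d;
  return n;
}

// Prepare step: the statically shaped output can be sized now, from the input's dims.
void PrepareStaticOutput(const ShapedBuffer& input, ShapedBuffer* out) {
  out->dims = input.dims;
  out->data.resize(NumElements(out->dims));
}

// Eval step: a dynamic output is only sized once its runtime shape is known.
void EvalDynamicOutput(const ShapedBuffer& input, ShapedBuffer* out) {
  out->dims = input.dims;                 // shape discovered at run time
  out->data.assign(input.data.begin(), input.data.end());
}

int main() {
  ShapedBuffer input;
  input.dims = {2, 2};
  input.data = {1.f, 2.f, 3.f, 4.f};

  ShapedBuffer static_out, dynamic_out;
  PrepareStaticOutput(input, &static_out);   // sized ahead of time
  EvalDynamicOutput(input, &dynamic_out);    // sized when data arrives
  std::printf("static: %zu elements, dynamic: %zu elements\n",
              static_out.data.size(), dynamic_out.data.size());
  return 0;
}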
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::TestDelegate::TestDelegateWithDynamicTensors::DelegateRegistration | tflite::TestDelegate::TestDelegateWithDynamicTensors::DelegateRegistration() | [] | static TfLiteRegistration DelegateRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
// If tensors are resized, the runtime should propagate shapes
// automatically if correct flag is set. Ensure values are correct.
// Output 0 should be dynamic.
TfLiteTensor* output0 = GetOutput(context, node, 0);
TF_LITE_ENSURE(context, IsDynamicTensor(output0));
// Output 1 has the same shape as input.
const TfLiteTensor* input = GetInput(context, node, 0);
TfLiteTensor* output1 = GetOutput(context, node, 1);
TF_LITE_ENSURE(context, input->dims->size == output1->dims->size);
TF_LITE_ENSURE(context, input->dims->data[0] == output1->dims->data[0]);
return kTfLiteOk;
};
return reg;
} | 132 | True | 1 |
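The commit message recorded in this row explains that tflite::GetInput / tflite::GetOutput may return nullptr after the refactoring, and that the fix inserts nullptr checks on every usage. A hedged sketch of what that pattern looks like for the prepare lambda shown above, relying only on the TF_LITE_ENSURE macro already used in this test code; it is an illustration of the described pattern, not the exact upstream patch in commit e11f55585f61:

reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
  TfLiteTensor* output0 = GetOutput(context, node, 0);
  // Fail preparation gracefully instead of dereferencing a null tensor.
  TF_LITE_ENSURE(context, output0 != nullptr);
  TF_LITE_ENSURE(context, IsDynamicTensor(output0));
  const TfLiteTensor* input = GetInput(context, node, 0);
  TfLiteTensor* output1 = GetOutput(context, node, 1);
  TF_LITE_ENSURE(context, input != nullptr);
  TF_LITE_ENSURE(context, output1 != nullptr);
  TF_LITE_ENSURE(context, input->dims->size == output1->dims->size);
  TF_LITE_ENSURE(context, input->dims->data[0] == output1->dims->data[0]);
  return kTfLiteOk;
};

The same guard-before-dereference shape applies to every GetInput, GetOutput, GetTemporary and GetIntermediates call site mentioned in the commit message.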
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::TestDelegate::TestDelegateWithDynamicTensors::DynamicCopyOpRegistration | tflite::TestDelegate::TestDelegateWithDynamicTensors::DynamicCopyOpRegistration() | [] | static TfLiteRegistration DynamicCopyOpRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
// Output 0 is dynamic
TfLiteTensor* output0 = GetOutput(context, node, 0);
SetTensorToDynamic(output0);
// Output 1 has the same shape as input.
const TfLiteTensor* input = GetInput(context, node, 0);
TfLiteTensor* output1 = GetOutput(context, node, 1);
TF_LITE_ENSURE_STATUS(context->ResizeTensor(
context, output1, TfLiteIntArrayCopy(input->dims)));
return kTfLiteOk;
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
// Not implemented since this isn't required in testing.
return kTfLiteOk;
};
return reg;
} | 127 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::TestDelegate::TestDelegateWithDynamicTensors::DynamicCopyOpRegistration | tflite::TestDelegate::TestDelegateWithDynamicTensors::DynamicCopyOpRegistration() | [] | static TfLiteRegistration DynamicCopyOpRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
// Output 0 is dynamic
TfLiteTensor* output0 = GetOutput(context, node, 0);
SetTensorToDynamic(output0);
// Output 1 has the same shape as input.
const TfLiteTensor* input = GetInput(context, node, 0);
TfLiteTensor* output1 = GetOutput(context, node, 1);
TF_LITE_ENSURE_STATUS(context->ResizeTensor(
context, output1, TfLiteIntArrayCopy(input->dims)));
return kTfLiteOk;
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
// Not implemented since this isn't required in testing.
return kTfLiteOk;
};
return reg;
} | 127 | True | 1 |
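The advisory text stored in these rows describes a double indexing scheme: an operator records tensor indices, and those indices are then used to look up tensors owned by the subgraph. A hedged illustration of that pattern written against the generated TFLite flatbuffer schema (tflite::Model / SubGraph / Operator / Tensor from schema_generated.h); the function name is local to the sketch, the model pointer is assumed to be already parsed and schema-verified, and this is not the runtime's actual loading code:

#include "tensorflow/lite/schema/schema_generated.h"  // generated flatbuffer accessors

void WalkFirstOpInputTensors(const tflite::Model* model) {
  const tflite::SubGraph* subgraph = model->subgraphs()->Get(0);
  const tflite::Operator* op = subgraph->operators()->Get(0);
  for (int i = 0; i < static_cast<int>(op->inputs()->size()); ++i) {
    // First index: position i in the operator's input list yields a tensor index.
    const int32_t tensor_index = op->inputs()->Get(i);
    // Second index: that tensor index selects a tensor from the subgraph-owned
    // array. A crafted model can store the optional-tensor sentinel -1 here even
    // for operators that take no optional inputs, so tensor_index must be
    // range-checked before this lookup.
    const tflite::Tensor* tensor = subgraph->tensors()->Get(tensor_index);
    (void)tensor;
  }
}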
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::gpu::PadOperationParser::IsSupported | tflite::gpu::PadOperationParser::IsSupported( const TfLiteContext * context , const TfLiteNode * tflite_node , const TfLiteRegistration * registration) | ['context', 'tflite_node', 'registration'] | absl::Status IsSupported(const TfLiteContext* context,
const TfLiteNode* tflite_node,
const TfLiteRegistration* registration) final {
if (mirror_pad_) {
const TfLiteMirrorPaddingParams* tf_options;
RETURN_IF_ERROR(RetrieveBuiltinData(tflite_node, &tf_options));
if (tf_options->mode !=
TfLiteMirrorPaddingMode::kTfLiteMirrorPaddingReflect) {
return absl::InvalidArgumentError(
"Only Reflective padding is supported for Mirror Pad operation.");
}
}
RETURN_IF_ERROR(CheckMaxSupportedOpVersion(registration, 2));
RETURN_IF_ERROR(CheckInputsOutputs(context, tflite_node,
/*runtime_inputs=*/1, /*outputs=*/1));
RETURN_IF_ERROR(CheckTensorIsAvailable(context, tflite_node, 1));
auto pad_tensor = tflite::GetInput(context, tflite_node, 1);
if (pad_tensor->dims->size != 2) {
return absl::InvalidArgumentError(absl::StrCat(
"Invalid paddings tensor dimension: expected 2 dim, got ",
pad_tensor->dims->size, " dim"));
}
bool supported =
pad_tensor->dims->data[0] == 3 || pad_tensor->dims->data[0] == 4;
if (!supported || pad_tensor->dims->data[1] != 2) {
return absl::InvalidArgumentError(absl::StrCat(
"Invalid paddings tensor shape: expected 4x2 or 3x2, got ",
pad_tensor->dims->data[0], "x", pad_tensor->dims->data[1]));
}
return absl::OkStatus();
} | 228 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::gpu::PadOperationParser::IsSupported | tflite::gpu::PadOperationParser::IsSupported( const TfLiteContext * context , const TfLiteNode * tflite_node , const TfLiteRegistration * registration) | ['context', 'tflite_node', 'registration'] | absl::Status IsSupported(const TfLiteContext* context,
const TfLiteNode* tflite_node,
const TfLiteRegistration* registration) final {
if (mirror_pad_) {
const TfLiteMirrorPaddingParams* tf_options;
RETURN_IF_ERROR(RetrieveBuiltinData(tflite_node, &tf_options));
if (tf_options->mode !=
TfLiteMirrorPaddingMode::kTfLiteMirrorPaddingReflect) {
return absl::InvalidArgumentError(
"Only Reflective padding is supported for Mirror Pad operation.");
}
}
RETURN_IF_ERROR(CheckMaxSupportedOpVersion(registration, 2));
RETURN_IF_ERROR(CheckInputsOutputs(context, tflite_node,
/*runtime_inputs=*/1, /*outputs=*/1));
RETURN_IF_ERROR(CheckTensorIsAvailable(context, tflite_node, 1));
auto pad_tensor = tflite::GetInput(context, tflite_node, 1);
if (pad_tensor->dims->size != 2) {
return absl::InvalidArgumentError(absl::StrCat(
"Invalid paddings tensor dimension: expected 2 dim, got ",
pad_tensor->dims->size, " dim"));
}
bool supported =
pad_tensor->dims->data[0] == 3 || pad_tensor->dims->data[0] == 4;
if (!supported || pad_tensor->dims->data[1] != 2) {
return absl::InvalidArgumentError(absl::StrCat(
"Invalid paddings tensor shape: expected 4x2 or 3x2, got ",
pad_tensor->dims->data[0], "x", pad_tensor->dims->data[1]));
}
return absl::OkStatus();
} | 228 | True | 1 |
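The advisory also suggests a stop-gap for users who cannot upgrade immediately: a custom Verifier run at model-loading time so that only legitimately optional tensors carry the -1 index. Below is a hedged sketch under two assumptions: that the tflite::TfLiteVerifier hook and FlatBufferModel::VerifyAndBuildFromBuffer exist in your TensorFlow Lite version's model-loading API (they do in the 2.x-era trees this CVE covers, but check your headers), and that the conservative policy shown (reject every -1) would be replaced in practice by a per-builtin-op allow-list, which is exactly the part the advisory calls error-prone:

#include "tensorflow/lite/model.h"                    // FlatBufferModel, TfLiteVerifier
#include "tensorflow/lite/schema/schema_generated.h"  // tflite::GetModel and friends

class OptionalIndexVerifier : public tflite::TfLiteVerifier {
 public:
  // Assumes the default flatbuffer verification has already run;
  // `length` and `reporter` are unused in this sketch.
  bool Verify(const char* data, int length,
              tflite::ErrorReporter* reporter) override {
    const tflite::Model* model = tflite::GetModel(data);
    if (model == nullptr || model->subgraphs() == nullptr) return false;
    for (const tflite::SubGraph* subgraph : *model->subgraphs()) {
      if (subgraph->tensors() == nullptr || subgraph->operators() == nullptr) continue;
      const int32_t num_tensors = subgraph->tensors()->size();
      for (const tflite::Operator* op : *subgraph->operators()) {
        if (!IndicesOk(op->inputs(), num_tensors) ||
            !IndicesOk(op->outputs(), num_tensors)) {
          return false;  // reject the model instead of loading it
        }
      }
    }
    return true;
  }

 private:
  // Placeholder policy: conservatively reject every -1 and any out-of-range
  // index. A real allow-list would permit -1 only where the specific builtin
  // op treats the input as optional.
  static bool IndicesOk(const flatbuffers::Vector<int32_t>* indices,
                        int32_t num_tensors) {
    if (indices == nullptr) return true;
    for (const int32_t index : *indices) {
      if (index < 0 || index >= num_tensors) return false;
    }
    return true;
  }
};

An instance of this class would then be passed as the extra verifier to FlatBufferModel::VerifyAndBuildFromBuffer (or ...FromFile) before building the interpreter. As the advisory itself notes, upgrading to 1.15.4 / 2.0.3 / 2.1.2 / 2.2.1 / 2.3.1 remains the recommended fix.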
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::delegates::coreml::IsConvolutionOpSupported | tflite::delegates::coreml::IsConvolutionOpSupported( const TfLiteRegistration * registration , const TfLiteNode * node , TfLiteContext * context) | ['registration', 'node', 'context'] | bool IsConvolutionOpSupported(const TfLiteRegistration* registration,
const TfLiteNode* node, TfLiteContext* context) {
if (node->builtin_data == nullptr) return false;
TfLiteFusedActivation activation;
if (registration->builtin_code == kTfLiteBuiltinConv2d) {
const auto* conv_params =
reinterpret_cast<const TfLiteConvParams*>(node->builtin_data);
activation = conv_params->activation;
} else if (registration->builtin_code == kTfLiteBuiltinDepthwiseConv2d) {
const auto* depthwise_conv_params =
reinterpret_cast<const TfLiteDepthwiseConvParams*>(node->builtin_data);
activation = depthwise_conv_params->activation;
} else if (registration->builtin_code == kTfLiteBuiltinTransposeConv) {
activation = kTfLiteActNone;
} else {
TF_LITE_KERNEL_LOG(
context,
"Invalid op: op must be Conv2D, DepthwiseConv2D or TransposeConv.");
return false;
}
if (activation == kTfLiteActSignBit) {
return false;
}
const int kOutputShapeTensor = 0; // Only used for TransposeConv
const int kWeightTensor = 1;
const int kBiasTensor = 2; // Only used for non-TransposeConv
const TfLiteTensor* weights = GetInput(context, node, kWeightTensor);
const int max_kernel_size = 16384;
if (!IsConstantTensor(weights)) {
return false;
}
if (weights->dims->data[1] > max_kernel_size ||
weights->dims->data[2] > max_kernel_size) {
return false;
}
if (registration->builtin_code == kTfLiteBuiltinTransposeConv) {
if (!IsConstantTensor(GetInput(context, node, kOutputShapeTensor))) {
return false;
}
} else {
if (node->inputs->size >= kBiasTensor &&
!IsConstantTensor(GetInput(context, node, kBiasTensor))) {
return false;
}
}
return true;
} | 282 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::delegates::coreml::IsConvolutionOpSupported | tflite::delegates::coreml::IsConvolutionOpSupported( const TfLiteRegistration * registration , const TfLiteNode * node , TfLiteContext * context) | ['registration', 'node', 'context'] | bool IsConvolutionOpSupported(const TfLiteRegistration* registration,
const TfLiteNode* node, TfLiteContext* context) {
if (node->builtin_data == nullptr) return false;
TfLiteFusedActivation activation;
if (registration->builtin_code == kTfLiteBuiltinConv2d) {
const auto* conv_params =
reinterpret_cast<const TfLiteConvParams*>(node->builtin_data);
activation = conv_params->activation;
} else if (registration->builtin_code == kTfLiteBuiltinDepthwiseConv2d) {
const auto* depthwise_conv_params =
reinterpret_cast<const TfLiteDepthwiseConvParams*>(node->builtin_data);
activation = depthwise_conv_params->activation;
} else if (registration->builtin_code == kTfLiteBuiltinTransposeConv) {
activation = kTfLiteActNone;
} else {
TF_LITE_KERNEL_LOG(
context,
"Invalid op: op must be Conv2D, DepthwiseConv2D or TransposeConv.");
return false;
}
if (activation == kTfLiteActSignBit) {
return false;
}
const int kOutputShapeTensor = 0; // Only used for TransposeConv
const int kWeightTensor = 1;
const int kBiasTensor = 2; // Only used for non-TransposeConv
const TfLiteTensor* weights = GetInput(context, node, kWeightTensor);
const int max_kernel_size = 16384;
if (!IsConstantTensor(weights)) {
return false;
}
if (weights->dims->data[1] > max_kernel_size ||
weights->dims->data[2] > max_kernel_size) {
return false;
}
if (registration->builtin_code == kTfLiteBuiltinTransposeConv) {
if (!IsConstantTensor(GetInput(context, node, kOutputShapeTensor))) {
return false;
}
} else {
if (node->inputs->size >= kBiasTensor &&
!IsConstantTensor(GetInput(context, node, kBiasTensor))) {
return false;
}
}
return true;
} | 282 | True | 1 |
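The advisory stresses that the resulting reads and writes land only at a specific offset from the start of the heap-allocated tensor array, which limits (but does not remove) the attacker's leverage. A small hedged illustration of why: with C/C++ pointer arithmetic an index of -1 always resolves to the element immediately before the array base. The function is illustrative only and deliberately shows the unchecked access being discussed:

#include "tensorflow/lite/c/common.h"  // TfLiteTensor

// index == -1 resolves to base - 1, i.e. exactly sizeof(TfLiteTensor) bytes
// before the allocation: an out-of-bounds but fixed-offset access, which is
// why the advisory describes the gadgets as very limited in scope.
TfLiteTensor* TensorAtIndexUnchecked(TfLiteTensor* base, int index) {
  return &base[index];
}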
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::delegates::coreml::IsFullyConnectedOpSupported | tflite::delegates::coreml::IsFullyConnectedOpSupported( const TfLiteRegistration * registration , const TfLiteNode * node , TfLiteContext * context) | ['registration', 'node', 'context'] | bool IsFullyConnectedOpSupported(const TfLiteRegistration* registration,
const TfLiteNode* node,
TfLiteContext* context) {
if (node->builtin_data == nullptr) return false;
const auto* fc_params =
reinterpret_cast<const TfLiteFullyConnectedParams*>(node->builtin_data);
const int kInput = 0;
const int kWeights = 1;
const int kBias = 2;
if (fc_params->weights_format != kTfLiteFullyConnectedWeightsFormatDefault) {
return false;
}
const TfLiteTensor* input = GetInput(context, node, kInput);
const TfLiteTensor* weights = GetInput(context, node, kWeights);
if (!IsFloatType(input->type)) {
return false;
}
if (!IsFloatType(weights->type) || !IsConstantTensor(weights)) {
return false;
}
// Core ML 2 only supports single-batch fully connected layer, thus dimensions
// except the last one should be 1.
if (input->dims->data[input->dims->size - 1] != NumElements(input)) {
return false;
}
if (node->inputs->size > 2) {
const TfLiteTensor* bias = GetInput(context, node, kBias);
if (!IsFloatType(bias->type) || !IsConstantTensor(bias)) {
return false;
}
}
TfLiteFusedActivation activation = fc_params->activation;
if (activation == kTfLiteActSignBit) {
return false;
}
return true;
} | 236 | True | 1 |
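The commit message stored in this record says the fix consisted of inserting `nullptr` checks wherever the results of `tflite::GetInput`/`tflite::GetOutput` are used, because after the refactoring those helpers can return `nullptr` (for example for a `-1` tensor index). As a rough illustration of that pattern (not the literal patch), the fragment below shows how the tensor-fetching part of `IsFullyConnectedOpSupported` could be guarded; the include paths and the choice to treat a missing tensor as "op not supported" are assumptions.

```cpp
#include "tensorflow/lite/c/common.h"             // TfLiteContext, TfLiteNode, TfLiteTensor
#include "tensorflow/lite/kernels/kernel_util.h"  // tflite::GetInput

// Illustrative only: fetch the tensors used by the fully-connected support
// check and bail out early if either is missing. After the refactoring
// described in the commit message, GetInput may return nullptr (e.g. for a
// -1 "optional tensor" index), so the later dereferences must be guarded.
static bool FetchFullyConnectedTensors(TfLiteContext* context,
                                       const TfLiteNode* node,
                                       const TfLiteTensor** input,
                                       const TfLiteTensor** weights) {
  constexpr int kInput = 0;    // same tensor indices as in the function above
  constexpr int kWeights = 1;
  *input = tflite::GetInput(context, node, kInput);
  *weights = tflite::GetInput(context, node, kWeights);
  // Treat a missing tensor as "cannot be delegated" rather than crashing.
  return *input != nullptr && *weights != nullptr;
}
```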
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::delegates::coreml::IsFullyConnectedOpSupported | tflite::delegates::coreml::IsFullyConnectedOpSupported( const TfLiteRegistration * registration , const TfLiteNode * node , TfLiteContext * context) | ['registration', 'node', 'context'] | bool IsFullyConnectedOpSupported(const TfLiteRegistration* registration,
const TfLiteNode* node,
TfLiteContext* context) {
if (node->builtin_data == nullptr) return false;
const auto* fc_params =
reinterpret_cast<const TfLiteFullyConnectedParams*>(node->builtin_data);
const int kInput = 0;
const int kWeights = 1;
const int kBias = 2;
if (fc_params->weights_format != kTfLiteFullyConnectedWeightsFormatDefault) {
return false;
}
const TfLiteTensor* input = GetInput(context, node, kInput);
const TfLiteTensor* weights = GetInput(context, node, kWeights);
if (!IsFloatType(input->type)) {
return false;
}
if (!IsFloatType(weights->type) || !IsConstantTensor(weights)) {
return false;
}
// Core ML 2 only supports single-batch fully connected layer, thus dimensions
// except the last one should be 1.
if (input->dims->data[input->dims->size - 1] != NumElements(input)) {
return false;
}
if (node->inputs->size > 2) {
const TfLiteTensor* bias = GetInput(context, node, kBias);
if (!IsFloatType(bias->type) || !IsConstantTensor(bias)) {
return false;
}
}
TfLiteFusedActivation activation = fc_params->activation;
if (activation == kTfLiteActSignBit) {
return false;
}
return true;
} | 236 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::delegates::coreml::IsPadOpSupported | tflite::delegates::coreml::IsPadOpSupported( const TfLiteRegistration * registration , const TfLiteNode * node , TfLiteContext * context) | ['registration', 'node', 'context'] | bool IsPadOpSupported(const TfLiteRegistration* registration,
const TfLiteNode* node, TfLiteContext* context) {
  // padding is a d x 2 tensor, where d is the dimension of the input.
const TfLiteTensor* padding = GetInput(context, node, 1);
if (!IsConstantTensor(padding)) {
TF_LITE_KERNEL_LOG(context,
"%s: Only constant padding is supported for PAD.",
padding->name);
return false;
}
if (padding->dims->data[0] != 4 || padding->dims->data[1] != 2) {
TF_LITE_KERNEL_LOG(context, "%s: Only 4D inputs are supported for PAD.",
padding->name);
return false;
}
const int32_t* padding_data = GetTensorData<int32_t>(padding);
if (!(padding_data[0] == 0 && padding_data[1] == 0)) {
TF_LITE_KERNEL_LOG(
context, "%s: Padding for batch dimension is not supported in PAD.",
padding->name);
return false;
}
if (!(padding_data[6] == 0 && padding_data[7] == 0)) {
TF_LITE_KERNEL_LOG(
context, "%s: Padding for channel dimension is not supported in PAD.",
padding->name);
return false;
}
return true;
} | 182 | True | 1 |
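The description field above pins the root cause on the `-1` "optional tensor" marker being accepted as an index into the subgraph's tensor array even for operators, such as PAD here, that take no optional tensors. The toy program below is not TensorFlow code; it only illustrates, under that description, why an unchecked `-1` index resolves to memory just before a heap-allocated tensor array (the "specific offset" read/write gadget the advisory mentions) and what the corresponding bounds check looks like.

```cpp
#include <cstdio>
#include <vector>

struct Tensor { float data[4]; };  // stand-in for a real tensor struct

// Unchecked pattern described in the advisory: the operator's tensor index is
// trusted, so index == -1 would address memory just before the array.
const Tensor* ResolveUnchecked(const std::vector<Tensor>& tensors, int index) {
  return &tensors.data()[index];  // out of bounds when index == -1
}

// Checked variant: anything outside [0, tensors.size()) is rejected.
const Tensor* ResolveChecked(const std::vector<Tensor>& tensors, int index) {
  if (index < 0 || index >= static_cast<int>(tensors.size())) return nullptr;
  return &tensors.data()[index];
}

int main() {
  std::vector<Tensor> tensors(4);  // plays the role of the subgraph's tensor array
  const Tensor* t = ResolveChecked(tensors, -1);
  std::printf("checked lookup of -1 -> %s\n", t == nullptr ? "rejected" : "accepted");
  return 0;
}
```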
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::delegates::coreml::IsPadOpSupported | tflite::delegates::coreml::IsPadOpSupported( const TfLiteRegistration * registration , const TfLiteNode * node , TfLiteContext * context) | ['registration', 'node', 'context'] | bool IsPadOpSupported(const TfLiteRegistration* registration,
const TfLiteNode* node, TfLiteContext* context) {
  // padding is a d x 2 tensor, where d is the dimension of the input.
const TfLiteTensor* padding = GetInput(context, node, 1);
if (!IsConstantTensor(padding)) {
TF_LITE_KERNEL_LOG(context,
"%s: Only constant padding is supported for PAD.",
padding->name);
return false;
}
if (padding->dims->data[0] != 4 || padding->dims->data[1] != 2) {
TF_LITE_KERNEL_LOG(context, "%s: Only 4D inputs are supported for PAD.",
padding->name);
return false;
}
const int32_t* padding_data = GetTensorData<int32_t>(padding);
if (!(padding_data[0] == 0 && padding_data[1] == 0)) {
TF_LITE_KERNEL_LOG(
context, "%s: Padding for batch dimension is not supported in PAD.",
padding->name);
return false;
}
if (!(padding_data[6] == 0 && padding_data[7] == 0)) {
TF_LITE_KERNEL_LOG(
context, "%s: Padding for channel dimension is not supported in PAD.",
padding->name);
return false;
}
return true;
} | 182 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::delegates::coreml::IsReshapeOpSupported | tflite::delegates::coreml::IsReshapeOpSupported( const TfLiteRegistration * registration , const TfLiteNode * node , TfLiteContext * context , int coreml_version) | ['registration', 'node', 'context', 'coreml_version'] | bool IsReshapeOpSupported(const TfLiteRegistration* registration,
const TfLiteNode* node, TfLiteContext* context,
int coreml_version) {
if (coreml_version >= 3) {
return false;
}
if (node->inputs->size == 1) {
const auto* params =
reinterpret_cast<TfLiteReshapeParams*>(node->builtin_data);
return params->num_dimensions == 3 || params->num_dimensions == 4;
}
const int kShapeTensor = 1;
const auto* shape = GetInput(context, node, kShapeTensor);
if (shape->allocation_type != kTfLiteMmapRo) {
TF_LITE_KERNEL_LOG(context, "Reshape has non-const shape.");
return false;
}
const bool is_shape_tensor =
shape->dims->size == 1 && shape->type == kTfLiteInt32;
return is_shape_tensor &&
(shape->dims->data[0] == 3 || shape->dims->data[0] == 4);
} | 158 | True | 1 |
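The advisory text repeated in these records suggests, as a workaround, adding a custom `Verifier` at model-loading time so that only operators which genuinely accept optional inputs may carry the `-1` index. Below is a minimal sketch of the structural walk such a check would perform, assuming the generated flatbuffer accessors from `schema_generated.h` (`subgraphs()`, `operators()`, `inputs()`, `outputs()`, `tensors()`); wiring it into the loader's verifier hook is left out. This conservative version rejects `-1` everywhere, so a real implementation would need the per-operator allow-list that the advisory itself calls error-prone.

```cpp
#include "tensorflow/lite/schema/schema_generated.h"  // tflite::Model, tflite::SubGraph, tflite::Operator

// Sketch of a pre-load check in the spirit of the advisory's workaround:
// walk every operator and reject tensor indices that fall outside the
// owning subgraph's tensor array (including the -1 optional marker).
inline bool AllTensorIndicesInBounds(const tflite::Model* model) {
  if (model == nullptr || model->subgraphs() == nullptr) return false;
  for (const tflite::SubGraph* subgraph : *model->subgraphs()) {
    if (subgraph->tensors() == nullptr || subgraph->operators() == nullptr) continue;
    const int num_tensors = static_cast<int>(subgraph->tensors()->size());
    for (const tflite::Operator* op : *subgraph->operators()) {
      for (const auto* indices : {op->inputs(), op->outputs()}) {
        if (indices == nullptr) continue;
        for (int32_t index : *indices) {
          // TODO(allow-list): permit -1 only for operators/positions that are
          // documented as optional; blanket rejection is shown here.
          if (index < 0 || index >= num_tensors) return false;
        }
      }
    }
  }
  return true;
}
```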
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::delegates::coreml::IsReshapeOpSupported | tflite::delegates::coreml::IsReshapeOpSupported( const TfLiteRegistration * registration , const TfLiteNode * node , TfLiteContext * context , int coreml_version) | ['registration', 'node', 'context', 'coreml_version'] | bool IsReshapeOpSupported(const TfLiteRegistration* registration,
const TfLiteNode* node, TfLiteContext* context,
int coreml_version) {
if (coreml_version >= 3) {
return false;
}
if (node->inputs->size == 1) {
const auto* params =
reinterpret_cast<TfLiteReshapeParams*>(node->builtin_data);
return params->num_dimensions == 3 || params->num_dimensions == 4;
}
const int kShapeTensor = 1;
const auto* shape = GetInput(context, node, kShapeTensor);
if (shape->allocation_type != kTfLiteMmapRo) {
TF_LITE_KERNEL_LOG(context, "Reshape has non-const shape.");
return false;
}
const bool is_shape_tensor =
shape->dims->size == 1 && shape->type == kTfLiteInt32;
return is_shape_tensor &&
(shape->dims->data[0] == 3 || shape->dims->data[0] == 4);
} | 158 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::experimental::ctc_beam_search_decoder::Eval | tflite::ops::experimental::ctc_beam_search_decoder::Eval( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* inputs = GetInput(context, node, kInputsTensor);
const TfLiteTensor* sequence_length =
GetInput(context, node, kSequenceLengthTensor);
const CTCBeamSearchDecoderParams* option =
reinterpret_cast<CTCBeamSearchDecoderParams*>(node->user_data);
const int max_time = SizeOfDimension(inputs, 0);
const int batch_size = SizeOfDimension(inputs, 1);
const int num_classes = SizeOfDimension(inputs, 2);
const int beam_width = option->beam_width;
const int top_paths = option->top_paths;
const bool merge_repeated = option->merge_repeated;
  // Validate that each sequence length is less than or equal to max time.
for (int i = 0; i < batch_size; ++i) {
TF_LITE_ENSURE(context,
max_time >= GetTensorData<int32_t>(sequence_length)[i]);
}
// The following logic is implemented like
// tensorflow/core/kernels/ctc_decoder_ops.cc
std::vector<optimized_ops::TTypes<float>::UnalignedConstMatrix> input_list_t;
for (std::size_t t = 0; t < max_time; ++t) {
input_list_t.emplace_back(
GetTensorData<float>(inputs) + t * batch_size * num_classes, batch_size,
num_classes);
}
::tflite::experimental::ctc::CTCBeamSearchDecoder<>::DefaultBeamScorer
beam_scorer;
::tflite::experimental::ctc::CTCBeamSearchDecoder<> beam_search(
num_classes, beam_width, &beam_scorer, 1 /* batch_size */,
merge_repeated);
// Allocate temporary memory for holding chip operation data.
float* input_chip_t_data =
static_cast<float*>(malloc(num_classes * sizeof(float)));
Eigen::array<Eigen::DenseIndex, 1> dims;
dims[0] = num_classes;
optimized_ops::TTypes<float>::Flat input_chip_t(input_chip_t_data, dims);
std::vector<std::vector<std::vector<int>>> best_paths(batch_size);
std::vector<float> log_probs;
TfLiteTensor* log_probabilities = GetOutput(context, node, 3 * top_paths);
float* log_probabilities_output = GetTensorData<float>(log_probabilities);
// Assumption: the blank index is num_classes - 1
for (int b = 0; b < batch_size; ++b) {
auto& best_paths_b = best_paths[b];
best_paths_b.resize(top_paths);
for (int t = 0; t < GetTensorData<int32_t>(sequence_length)[b]; ++t) {
input_chip_t = input_list_t[t].chip(b, 0);
auto input_bi =
Eigen::Map<const Eigen::ArrayXf>(input_chip_t.data(), num_classes);
beam_search.Step(input_bi);
}
TF_LITE_ENSURE(context, beam_search.TopPaths(top_paths, &best_paths_b,
&log_probs, merge_repeated));
beam_search.Reset();
// Fill in log_probabilities output.
for (int bp = 0; bp < top_paths; ++bp) {
log_probabilities_output[b * top_paths + bp] = log_probs[bp];
}
}
free(input_chip_t_data);
return StoreAllDecodedSequences(context, best_paths, node, top_paths);
} | 525 | True | 1 |
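The same commit message applies to kernel `Eval` functions such as the CTC beam-search decoder above, which dereferences the results of `GetInput` and `GetOutput` directly. The fragment below is a sketch, meant to live in the same file as that function (it reuses its `kInputsTensor`/`kSequenceLengthTensor` constants and the `TF_LITE_ENSURE` macro already used there), of how those fetches could be guarded; treating a null tensor as a kernel error is an assumption about intent, not the literal patch.

```cpp
// Sketch: fetch and validate the tensors used by Eval before any dereference.
// With the refactoring described in the commit message, GetInput/GetOutput may
// return nullptr (e.g. for a -1 tensor index in a malformed model).
TfLiteStatus FetchDecoderTensors(TfLiteContext* context, TfLiteNode* node,
                                 int top_paths,
                                 const TfLiteTensor** inputs,
                                 const TfLiteTensor** sequence_length,
                                 TfLiteTensor** log_probabilities) {
  *inputs = GetInput(context, node, kInputsTensor);
  TF_LITE_ENSURE(context, *inputs != nullptr);
  *sequence_length = GetInput(context, node, kSequenceLengthTensor);
  TF_LITE_ENSURE(context, *sequence_length != nullptr);
  *log_probabilities = GetOutput(context, node, 3 * top_paths);
  TF_LITE_ENSURE(context, *log_probabilities != nullptr);
  return kTfLiteOk;
}
```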
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::experimental::ctc_beam_search_decoder::Eval | tflite::ops::experimental::ctc_beam_search_decoder::Eval( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* inputs = GetInput(context, node, kInputsTensor);
const TfLiteTensor* sequence_length =
GetInput(context, node, kSequenceLengthTensor);
const CTCBeamSearchDecoderParams* option =
reinterpret_cast<CTCBeamSearchDecoderParams*>(node->user_data);
const int max_time = SizeOfDimension(inputs, 0);
const int batch_size = SizeOfDimension(inputs, 1);
const int num_classes = SizeOfDimension(inputs, 2);
const int beam_width = option->beam_width;
const int top_paths = option->top_paths;
const bool merge_repeated = option->merge_repeated;
// Validate sequence length is less or equal than max time.
for (int i = 0; i < batch_size; ++i) {
TF_LITE_ENSURE(context,
max_time >= GetTensorData<int32_t>(sequence_length)[i]);
}
// The following logic is implemented like
// tensorflow/core/kernels/ctc_decoder_ops.cc
std::vector<optimized_ops::TTypes<float>::UnalignedConstMatrix> input_list_t;
for (std::size_t t = 0; t < max_time; ++t) {
input_list_t.emplace_back(
GetTensorData<float>(inputs) + t * batch_size * num_classes, batch_size,
num_classes);
}
::tflite::experimental::ctc::CTCBeamSearchDecoder<>::DefaultBeamScorer
beam_scorer;
::tflite::experimental::ctc::CTCBeamSearchDecoder<> beam_search(
num_classes, beam_width, &beam_scorer, 1 /* batch_size */,
merge_repeated);
// Allocate temporary memory for holding chip operation data.
float* input_chip_t_data =
static_cast<float*>(malloc(num_classes * sizeof(float)));
Eigen::array<Eigen::DenseIndex, 1> dims;
dims[0] = num_classes;
optimized_ops::TTypes<float>::Flat input_chip_t(input_chip_t_data, dims);
std::vector<std::vector<std::vector<int>>> best_paths(batch_size);
std::vector<float> log_probs;
TfLiteTensor* log_probabilities = GetOutput(context, node, 3 * top_paths);
float* log_probabilities_output = GetTensorData<float>(log_probabilities);
// Assumption: the blank index is num_classes - 1
for (int b = 0; b < batch_size; ++b) {
auto& best_paths_b = best_paths[b];
best_paths_b.resize(top_paths);
for (int t = 0; t < GetTensorData<int32_t>(sequence_length)[b]; ++t) {
input_chip_t = input_list_t[t].chip(b, 0);
auto input_bi =
Eigen::Map<const Eigen::ArrayXf>(input_chip_t.data(), num_classes);
beam_search.Step(input_bi);
}
TF_LITE_ENSURE(context, beam_search.TopPaths(top_paths, &best_paths_b,
&log_probs, merge_repeated));
beam_search.Reset();
// Fill in log_probabilities output.
for (int bp = 0; bp < top_paths; ++bp) {
log_probabilities_output[b * top_paths + bp] = log_probs[bp];
}
}
free(input_chip_t_data);
return StoreAllDecodedSequences(context, best_paths, node, top_paths);
} | 525 | True | 1 |
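The commit message captured in this record describes the fix pattern in words only: after every `GetInput`/`GetOutput` call, a `nullptr` check is inserted so that a flatbuffer model which wires a required input or output slot to the optional-tensor index `-1` fails validation instead of yielding the out-of-bounds read/write gadget described above. The C++ sketch below illustrates that pattern; it is not the literal patch, and the op name `example_op` plus the `kInputTensor`/`kOutputTensor` constants are hypothetical names introduced only for this illustration, assuming the TensorFlow Lite kernel headers are available in the build.

// Illustrative sketch, not the literal patch. Assumes the TensorFlow Lite
// kernel headers; example_op, kInputTensor and kOutputTensor are hypothetical.
#include "tensorflow/lite/c/common.h"
#include "tensorflow/lite/kernels/kernel_util.h"

namespace tflite {
namespace ops {
namespace custom {
namespace example_op {

constexpr int kInputTensor = 0;
constexpr int kOutputTensor = 0;

TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {
  TF_LITE_ENSURE_EQ(context, NumInputs(node), 1);
  TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);

  const TfLiteTensor* input = GetInput(context, node, kInputTensor);
  // Guard added by the hardening pass: if the model wired this slot to the
  // optional-tensor index (-1), GetInput yields nullptr and Prepare fails
  // here instead of dereferencing an out-of-bounds pointer later.
  TF_LITE_ENSURE(context, input != nullptr);

  TfLiteTensor* output = GetOutput(context, node, kOutputTensor);
  TF_LITE_ENSURE(context, output != nullptr);

  // Shape the output like the input; both pointers are known valid here.
  return context->ResizeTensor(context, output,
                               TfLiteIntArrayCopy(input->dims));
}

}  // namespace example_op
}  // namespace custom
}  // namespace ops
}  // namespace tflite

With these guards, `TF_LITE_ENSURE` reports the problem through the context and returns `kTfLiteError` from `Prepare`, so the interpreter rejects the model before `Eval` ever dereferences the tensor.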
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::experimental::ctc_beam_search_decoder::Prepare | tflite::ops::experimental::ctc_beam_search_decoder::Prepare( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {
const CTCBeamSearchDecoderParams* option =
reinterpret_cast<CTCBeamSearchDecoderParams*>(node->user_data);
const int top_paths = option->top_paths;
TF_LITE_ENSURE(context, option->beam_width >= top_paths);
TF_LITE_ENSURE_EQ(context, NumInputs(node), 2);
// The outputs should be top_paths * 3 + 1.
TF_LITE_ENSURE_EQ(context, NumOutputs(node), 3 * top_paths + 1);
const TfLiteTensor* inputs = GetInput(context, node, kInputsTensor);
TF_LITE_ENSURE_EQ(context, NumDimensions(inputs), 3);
// TensorFlow only supports float.
TF_LITE_ENSURE_EQ(context, inputs->type, kTfLiteFloat32);
const int batch_size = SizeOfDimension(inputs, 1);
const TfLiteTensor* sequence_length =
GetInput(context, node, kSequenceLengthTensor);
TF_LITE_ENSURE_EQ(context, NumDimensions(sequence_length), 1);
TF_LITE_ENSURE_EQ(context, NumElements(sequence_length), batch_size);
// TensorFlow only supports int32.
TF_LITE_ENSURE_EQ(context, sequence_length->type, kTfLiteInt32);
// Resize decoded outputs.
// Do not resize indices & values cause we don't know the values yet.
for (int i = 0; i < top_paths; ++i) {
TfLiteTensor* indices = GetOutput(context, node, i);
SetTensorToDynamic(indices);
TfLiteTensor* values = GetOutput(context, node, i + top_paths);
SetTensorToDynamic(values);
TfLiteTensor* output_shape = GetOutput(context, node, i + 2 * top_paths);
SetTensorToDynamic(output_shape);
}
// Resize log probability outputs.
TfLiteTensor* log_probability_output =
GetOutput(context, node, top_paths * 3);
TfLiteIntArray* log_probability_output_shape_array = TfLiteIntArrayCreate(2);
log_probability_output_shape_array->data[0] = batch_size;
log_probability_output_shape_array->data[1] = top_paths;
return context->ResizeTensor(context, log_probability_output,
log_probability_output_shape_array);
} | 302 | True | 1 |
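The same commit message notes that, as part of an ongoing refactoring, the accessors themselves may return `nullptr`. Later revisions of `kernel_util.h` go a step further and expose status-returning variants; the fragment below sketches that idiom under the assumption that `GetInputSafe`/`GetOutputSafe` are available in the TensorFlow Lite version being built against (the pass-through body and the op name `example_safe_op` are hypothetical).

// Sketch of the status-returning accessor idiom; assumes a TensorFlow Lite
// version whose kernel_util.h provides GetInputSafe/GetOutputSafe. The op
// name example_safe_op and the pass-through body are hypothetical.
#include <cstring>

#include "tensorflow/lite/c/common.h"
#include "tensorflow/lite/kernels/kernel_util.h"

namespace tflite {
namespace ops {
namespace custom {
namespace example_safe_op {

constexpr int kInputTensor = 0;
constexpr int kOutputTensor = 0;

TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {
  // The Safe accessors report an error through the context and return
  // kTfLiteError when the operator references an invalid tensor index such
  // as -1, so the kernel never sees a dangling pointer.
  const TfLiteTensor* input;
  TF_LITE_ENSURE_OK(context,
                    GetInputSafe(context, node, kInputTensor, &input));
  TfLiteTensor* output;
  TF_LITE_ENSURE_OK(context,
                    GetOutputSafe(context, node, kOutputTensor, &output));

  // Trivial pass-through, just to keep the sketch self-contained.
  TF_LITE_ENSURE_EQ(context, input->bytes, output->bytes);
  std::memcpy(output->data.raw, input->data.raw, input->bytes);
  return kTfLiteOk;
}

}  // namespace example_safe_op
}  // namespace custom
}  // namespace ops
}  // namespace tflite

The advantage of the Safe variants is that the index bounds check lives in one helper rather than being repeated by hand after every accessor call in every kernel.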
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::experimental::ctc_beam_search_decoder::Prepare | tflite::ops::experimental::ctc_beam_search_decoder::Prepare( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {
const CTCBeamSearchDecoderParams* option =
reinterpret_cast<CTCBeamSearchDecoderParams*>(node->user_data);
const int top_paths = option->top_paths;
TF_LITE_ENSURE(context, option->beam_width >= top_paths);
TF_LITE_ENSURE_EQ(context, NumInputs(node), 2);
// The outputs should be top_paths * 3 + 1.
TF_LITE_ENSURE_EQ(context, NumOutputs(node), 3 * top_paths + 1);
const TfLiteTensor* inputs = GetInput(context, node, kInputsTensor);
TF_LITE_ENSURE_EQ(context, NumDimensions(inputs), 3);
// TensorFlow only supports float.
TF_LITE_ENSURE_EQ(context, inputs->type, kTfLiteFloat32);
const int batch_size = SizeOfDimension(inputs, 1);
const TfLiteTensor* sequence_length =
GetInput(context, node, kSequenceLengthTensor);
TF_LITE_ENSURE_EQ(context, NumDimensions(sequence_length), 1);
TF_LITE_ENSURE_EQ(context, NumElements(sequence_length), batch_size);
// TensorFlow only supports int32.
TF_LITE_ENSURE_EQ(context, sequence_length->type, kTfLiteInt32);
// Resize decoded outputs.
// Do not resize indices & values cause we don't know the values yet.
for (int i = 0; i < top_paths; ++i) {
TfLiteTensor* indices = GetOutput(context, node, i);
SetTensorToDynamic(indices);
TfLiteTensor* values = GetOutput(context, node, i + top_paths);
SetTensorToDynamic(values);
TfLiteTensor* output_shape = GetOutput(context, node, i + 2 * top_paths);
SetTensorToDynamic(output_shape);
}
// Resize log probability outputs.
TfLiteTensor* log_probability_output =
GetOutput(context, node, top_paths * 3);
TfLiteIntArray* log_probability_output_shape_array = TfLiteIntArrayCreate(2);
log_probability_output_shape_array->data[0] = batch_size;
log_probability_output_shape_array->data[1] = top_paths;
return context->ResizeTensor(context, log_probability_output,
log_probability_output_shape_array);
} | 302 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::experimental::ctc_beam_search_decoder::StoreAllDecodedSequences | tflite::ops::experimental::ctc_beam_search_decoder::StoreAllDecodedSequences( TfLiteContext * context , const std :: vector<std::vector<std::vector<int>>> & sequences , TfLiteNode * node , int top_paths) | ['context', 'sequences', 'node', 'top_paths'] | TfLiteStatus StoreAllDecodedSequences(
TfLiteContext* context,
const std::vector<std::vector<std::vector<int>>>& sequences,
TfLiteNode* node, int top_paths) {
const int32_t batch_size = sequences.size();
std::vector<int32_t> num_entries(top_paths, 0);
// Calculate num_entries per path
for (const auto& batch_s : sequences) {
TF_LITE_ENSURE_EQ(context, batch_s.size(), top_paths);
for (int p = 0; p < top_paths; ++p) {
num_entries[p] += batch_s[p].size();
}
}
for (int p = 0; p < top_paths; ++p) {
const int32_t p_num = num_entries[p];
// Resize the decoded outputs.
TfLiteTensor* indices = GetOutput(context, node, p);
TF_LITE_ENSURE_OK(context, Resize(context, {p_num, 2}, indices));
TfLiteTensor* values = GetOutput(context, node, p + top_paths);
TF_LITE_ENSURE_OK(context, Resize(context, {p_num}, values));
TfLiteTensor* decoded_shape = GetOutput(context, node, p + 2 * top_paths);
TF_LITE_ENSURE_OK(context, Resize(context, {2}, decoded_shape));
int32_t max_decoded = 0;
int32_t offset = 0;
int32_t* indices_data = GetTensorData<int32_t>(indices);
int32_t* values_data = GetTensorData<int32_t>(values);
int32_t* decoded_shape_data = GetTensorData<int32_t>(decoded_shape);
for (int b = 0; b < batch_size; ++b) {
auto& p_batch = sequences[b][p];
int32_t num_decoded = p_batch.size();
max_decoded = std::max(max_decoded, num_decoded);
std::copy_n(p_batch.begin(), num_decoded, values_data + offset);
for (int32_t t = 0; t < num_decoded; ++t, ++offset) {
indices_data[offset * 2] = b;
indices_data[offset * 2 + 1] = t;
}
}
decoded_shape_data[0] = batch_size;
decoded_shape_data[1] = max_decoded;
}
return kTfLiteOk;
} | 399 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::experimental::ctc_beam_search_decoder::StoreAllDecodedSequences | tflite::ops::experimental::ctc_beam_search_decoder::StoreAllDecodedSequences( TfLiteContext * context , const std :: vector<std::vector<std::vector<int>>> & sequences , TfLiteNode * node , int top_paths) | ['context', 'sequences', 'node', 'top_paths'] | TfLiteStatus StoreAllDecodedSequences(
TfLiteContext* context,
const std::vector<std::vector<std::vector<int>>>& sequences,
TfLiteNode* node, int top_paths) {
const int32_t batch_size = sequences.size();
std::vector<int32_t> num_entries(top_paths, 0);
// Calculate num_entries per path
for (const auto& batch_s : sequences) {
TF_LITE_ENSURE_EQ(context, batch_s.size(), top_paths);
for (int p = 0; p < top_paths; ++p) {
num_entries[p] += batch_s[p].size();
}
}
for (int p = 0; p < top_paths; ++p) {
const int32_t p_num = num_entries[p];
// Resize the decoded outputs.
TfLiteTensor* indices = GetOutput(context, node, p);
TF_LITE_ENSURE_OK(context, Resize(context, {p_num, 2}, indices));
TfLiteTensor* values = GetOutput(context, node, p + top_paths);
TF_LITE_ENSURE_OK(context, Resize(context, {p_num}, values));
TfLiteTensor* decoded_shape = GetOutput(context, node, p + 2 * top_paths);
TF_LITE_ENSURE_OK(context, Resize(context, {2}, decoded_shape));
int32_t max_decoded = 0;
int32_t offset = 0;
int32_t* indices_data = GetTensorData<int32_t>(indices);
int32_t* values_data = GetTensorData<int32_t>(values);
int32_t* decoded_shape_data = GetTensorData<int32_t>(decoded_shape);
for (int b = 0; b < batch_size; ++b) {
auto& p_batch = sequences[b][p];
int32_t num_decoded = p_batch.size();
max_decoded = std::max(max_decoded, num_decoded);
std::copy_n(p_batch.begin(), num_decoded, values_data + offset);
for (int32_t t = 0; t < num_decoded; ++t, ++offset) {
indices_data[offset * 2] = b;
indices_data[offset * 2 + 1] = t;
}
}
decoded_shape_data[0] = batch_size;
decoded_shape_data[1] = max_decoded;
}
return kTfLiteOk;
} | 399 | True | 1 |
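The advisory text quoted in these records suggests a workaround for users who cannot upgrade: attach a custom `Verifier` to model loading so that only operators which genuinely accept optional inputs may use the `-1` index. The sketch below shows the hook with a deliberately simplified policy: it rejects any negative tensor index, which would also reject legitimate models that rely on optional inputs. It assumes the TensorFlow Lite C++ API (`tflite::TfLiteVerifier` and `FlatBufferModel::VerifyAndBuildFromFile`) plus the generated flatbuffer schema accessors; the per-operator allow-list the advisory actually recommends is more involved and, as the advisory itself notes, error-prone to maintain.

// Sketch of the workaround hook only; the policy here (reject every negative
// tensor index) is stricter than the per-operator allow-list the advisory
// describes and would also reject valid models that use optional inputs.
// Assumes the TensorFlow Lite C++ API and generated schema accessors.
#include <cstdint>

#include "tensorflow/lite/model.h"
#include "tensorflow/lite/schema/schema_generated.h"

class NoNegativeTensorIndexVerifier : public tflite::TfLiteVerifier {
 public:
  bool Verify(const char* data, int length,
              tflite::ErrorReporter* reporter) override {
    const tflite::Model* model = tflite::GetModel(data);
    if (model == nullptr || model->subgraphs() == nullptr) return false;
    for (const tflite::SubGraph* subgraph : *model->subgraphs()) {
      if (subgraph->operators() == nullptr) continue;
      for (const tflite::Operator* op : *subgraph->operators()) {
        // Check both the input and output index lists of every operator.
        auto all_non_negative = [&](const flatbuffers::Vector<int32_t>* v) {
          if (v == nullptr) return true;
          for (int32_t index : *v) {
            if (index < 0) {
              reporter->Report("Rejecting negative tensor index %d", index);
              return false;
            }
          }
          return true;
        };
        if (!all_non_negative(op->inputs()) ||
            !all_non_negative(op->outputs())) {
          return false;
        }
      }
    }
    return true;
  }
};

// Usage (the extra verifier runs in addition to the stock flatbuffer checks):
//   NoNegativeTensorIndexVerifier verifier;
//   auto model = tflite::FlatBufferModel::VerifyAndBuildFromFile(
//       "model.tflite", &verifier);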
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::custom::audio_microfrontend::Eval | tflite::ops::custom::audio_microfrontend::Eval( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {
auto* data =
reinterpret_cast<TfLiteAudioMicrofrontendParams*>(node->user_data);
FrontendReset(data->state);
const TfLiteTensor* input = GetInput(context, node, kInputTensor);
TfLiteTensor* output = GetOutput(context, node, kOutputTensor);
if (data->out_float) {
GenerateFeatures<float>(data, input, output);
} else {
GenerateFeatures<int32>(data, input, output);
}
return kTfLiteOk;
} | 99 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::custom::audio_microfrontend::Eval | tflite::ops::custom::audio_microfrontend::Eval( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {
auto* data =
reinterpret_cast<TfLiteAudioMicrofrontendParams*>(node->user_data);
FrontendReset(data->state);
const TfLiteTensor* input = GetInput(context, node, kInputTensor);
TfLiteTensor* output = GetOutput(context, node, kOutputTensor);
if (data->out_float) {
GenerateFeatures<float>(data, input, output);
} else {
GenerateFeatures<int32>(data, input, output);
}
return kTfLiteOk;
} | 99 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::custom::audio_microfrontend::Prepare | tflite::ops::custom::audio_microfrontend::Prepare( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {
auto* data =
reinterpret_cast<TfLiteAudioMicrofrontendParams*>(node->user_data);
TF_LITE_ENSURE_EQ(context, NumInputs(node), 1);
TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);
const TfLiteTensor* input = GetInput(context, node, kInputTensor);
TfLiteTensor* output = GetOutput(context, node, kOutputTensor);
TF_LITE_ENSURE_EQ(context, NumDimensions(input), 1);
TF_LITE_ENSURE_EQ(context, input->type, kTfLiteInt16);
output->type = kTfLiteInt32;
if (data->out_float) {
output->type = kTfLiteFloat32;
}
TfLiteIntArray* output_size = TfLiteIntArrayCreate(2);
int num_frames = 0;
if (input->dims->data[0] >= data->state->window.size) {
num_frames = (input->dims->data[0] - data->state->window.size) /
data->state->window.step / data->frame_stride +
1;
}
output_size->data[0] = num_frames;
output_size->data[1] = data->state->filterbank.num_channels *
(1 + data->left_context + data->right_context);
return context->ResizeTensor(context, output, output_size);
} | 239 | True | 1 |
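For illustration, here is a minimal hedged sketch (not the verbatim patch from commit e11f5558) of the `nullptr` checks the commit message above describes, applied to a Prepare function shaped like the one in this record. `GetInput`/`GetOutput` and the `TF_LITE_ENSURE*` macros are the standard TFLite helpers; the function name `PrepareWithNullChecks` and the tensor-index constants are illustrative assumptions.

#include "tensorflow/lite/c/common.h"
#include "tensorflow/lite/kernels/kernel_util.h"

namespace {

constexpr int kInputTensor = 0;   // assumed index, mirroring the op above
constexpr int kOutputTensor = 0;  // assumed index, mirroring the op above

// Sketch of a hardened Prepare: every tensor obtained from the context is
// checked for nullptr before it is dereferenced, instead of being trusted.
TfLiteStatus PrepareWithNullChecks(TfLiteContext* context, TfLiteNode* node) {
  TF_LITE_ENSURE_EQ(context, tflite::NumInputs(node), 1);
  TF_LITE_ENSURE_EQ(context, tflite::NumOutputs(node), 1);

  const TfLiteTensor* input = tflite::GetInput(context, node, kInputTensor);
  TF_LITE_ENSURE(context, input != nullptr);   // new: reject missing input
  TfLiteTensor* output = tflite::GetOutput(context, node, kOutputTensor);
  TF_LITE_ENSURE(context, output != nullptr);  // new: reject missing output

  TF_LITE_ENSURE_EQ(context, tflite::NumDimensions(input), 1);
  TF_LITE_ENSURE_EQ(context, input->type, kTfLiteInt16);

  output->type = kTfLiteInt32;
  TfLiteIntArray* output_size = TfLiteIntArrayCreate(1);
  output_size->data[0] = tflite::SizeOfDimension(input, 0);
  return context->ResizeTensor(context, output, output_size);
}

}  // namespace

The point is only the shape of the change: obtain the tensor, validate it, then use it, rather than dereferencing the result of `GetInput`/`GetOutput` unconditionally as the original Prepare above does.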
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::custom::audio_microfrontend::Prepare | tflite::ops::custom::audio_microfrontend::Prepare( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {
auto* data =
reinterpret_cast<TfLiteAudioMicrofrontendParams*>(node->user_data);
TF_LITE_ENSURE_EQ(context, NumInputs(node), 1);
TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);
const TfLiteTensor* input = GetInput(context, node, kInputTensor);
TfLiteTensor* output = GetOutput(context, node, kOutputTensor);
TF_LITE_ENSURE_EQ(context, NumDimensions(input), 1);
TF_LITE_ENSURE_EQ(context, input->type, kTfLiteInt16);
output->type = kTfLiteInt32;
if (data->out_float) {
output->type = kTfLiteFloat32;
}
TfLiteIntArray* output_size = TfLiteIntArrayCreate(2);
int num_frames = 0;
if (input->dims->data[0] >= data->state->window.size) {
num_frames = (input->dims->data[0] - data->state->window.size) /
data->state->window.step / data->frame_stride +
1;
}
output_size->data[0] = num_frames;
output_size->data[1] = data->state->filterbank.num_channels *
(1 + data->left_context + data->right_context);
return context->ResizeTensor(context, output, output_size);
} | 239 | True | 1 |
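The advisory text in this record suggests a custom `Verifier` at model-loading time as a workaround. The following is a hedged sketch of that idea, not code from the TensorFlow repository: it assumes the public `tflite::TfLiteVerifier` hook and the generated flatbuffer schema accessors, and it simply rejects any operator input/output index outside `[0, num_tensors)`, including the `-1` optional-tensor marker. A real mitigation would instead allow `-1` only for the specific operators and input slots that genuinely accept optional tensors, as the advisory notes.

#include "tensorflow/lite/core/api/error_reporter.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Rejects models whose operators reference tensor indices outside the
// subgraph's tensor array (this also rejects the -1 "optional" marker).
class RejectOutOfRangeIndexVerifier : public tflite::TfLiteVerifier {
 public:
  bool Verify(const char* data, int length,
              tflite::ErrorReporter* reporter) override {
    // Assumes the buffer has already passed flatbuffer structural
    // verification, as done by the model-loading entry points.
    (void)length;
    const tflite::Model* model = tflite::GetModel(data);
    if (model == nullptr || model->subgraphs() == nullptr) return false;
    for (const tflite::SubGraph* subgraph : *model->subgraphs()) {
      const int num_tensors =
          subgraph->tensors() ? static_cast<int>(subgraph->tensors()->size())
                              : 0;
      if (subgraph->operators() == nullptr) continue;
      for (const tflite::Operator* op : *subgraph->operators()) {
        if (!InRange(op->inputs(), num_tensors) ||
            !InRange(op->outputs(), num_tensors)) {
          reporter->Report("Operator references an out-of-range tensor index");
          return false;
        }
      }
    }
    return true;
  }

 private:
  static bool InRange(const flatbuffers::Vector<int32_t>* indices,
                      int num_tensors) {
    if (indices == nullptr) return true;
    for (int32_t index : *indices) {
      if (index < 0 || index >= num_tensors) return false;
    }
    return true;
  }
};

Such a verifier would typically be passed as the extra verifier argument of `FlatBufferModel::VerifyAndBuildFromFile`, so loading fails before any kernel dereferences a bad index; upgrading to the patched releases remains the recommended fix.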
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::TEST | tflite::TEST( BasicInterpreter , CheckResize) | ['BasicInterpreter', 'CheckResize'] | TEST(BasicInterpreter, CheckResize) {
const float floats[] = {-3., -4.};
const int32_t int32s[] = {-3, -4};
const uint8_t uint8s[] = {3, 4};
const int64_t int64s[] = {6, -7};
const int16_t int16s[] = {8, -9};
const Eigen::half float16s[] = {Eigen::half_impl::float_to_half_rtne(-3.f),
Eigen::half_impl::float_to_half_rtne(-4.f)};
struct {
TfLiteType type;
size_t size;
const char* array;
} cases[] = {
{kTfLiteFloat32, sizeof(float), reinterpret_cast<const char*>(floats)},
{kTfLiteInt32, sizeof(int32_t), reinterpret_cast<const char*>(int32s)},
{kTfLiteUInt8, sizeof(uint8_t), reinterpret_cast<const char*>(uint8s)},
{kTfLiteInt64, sizeof(int64_t), reinterpret_cast<const char*>(int64s)},
{kTfLiteInt16, sizeof(int16_t), reinterpret_cast<const char*>(int16s)},
{kTfLiteFloat16, sizeof(TfLiteFloat16),
reinterpret_cast<const char*>(float16s)},
};
for (auto test : cases) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(2), kTfLiteOk);
interpreter.SetInputs({0, 1});
interpreter.SetOutputs({});
TfLiteQuantizationParams quant;
ASSERT_EQ(
interpreter.SetTensorParametersReadWrite(0, test.type, "", {3}, quant),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadOnly(
1, test.type, "", {2}, quant, test.array, 2 * test.size),
kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensor(0, {1, 2}), kTfLiteOk);
// Resizing a mmapped tensor is not allowed and should produce an error.
ASSERT_NE(interpreter.ResizeInputTensor(1, {3}), kTfLiteOk);
// Set the tensor to be mmapped but with a buffer size that is insufficient
// to match the dimensionality.
ASSERT_NE(interpreter.SetTensorParametersReadOnly(
1, test.type, "", {2}, quant, test.array, 1 * test.size),
kTfLiteOk);
// Allocating should work since we should have our last correct array
// values in place.
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
}
}
TEST(BasicInterpreter, CheckAlignment) {
struct {
TfLiteType type;
} cases[] = {{kTfLiteFloat32}, {kTfLiteInt32}, {kTfLiteUInt8},
{kTfLiteInt64}, {kTfLiteInt16}, {kTfLiteFloat16}};
for (auto test : cases) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(4), kTfLiteOk);
for (int i = 0; i < 4; i++) {
TfLiteQuantizationParams quant;
interpreter.SetTensorParametersReadWrite(i, test.type, "", {2 * i + 1},
quant);
}
interpreter.AllocateTensors();
for (int i = 0; i < 4; i++) {
const TfLiteTensor& tensor = *interpreter.tensor(i);
ASSERT_EQ(reinterpret_cast<intptr_t>(tensor.data.raw) % 4, 0);
}
}
}
TEST(BasicInterpreter, CheckArenaAllocation) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(10), kTfLiteOk);
TfLiteQuantizationParams quant;
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
std::vector<int> sizes{2048, 4096, 1023, 2047, 1021,
2047, 1023, 2046, 0, 2048};
for (size_t i = 0; i < sizes.size(); ++i) {
interpreter.SetTensorParametersReadWrite(static_cast<int>(i), kTfLiteUInt8,
"", {sizes[i]}, quant);
}
interpreter.SetInputs({0, 1});
interpreter.SetOutputs({9, 4});
interpreter.AddNodeWithParameters({0, 1}, {2, 3}, nullptr, 0, nullptr, &reg);
interpreter.AddNodeWithParameters({2, 1}, {4, 5}, nullptr, 0, nullptr, &reg);
interpreter.AddNodeWithParameters({4, 3}, {6, 7}, nullptr, 0, nullptr, &reg);
interpreter.AddNodeWithParameters({6, 5}, {8}, nullptr, 0, nullptr, &reg);
interpreter.AddNodeWithParameters({8, 7}, {9}, nullptr, 0, nullptr, &reg);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_LT(interpreter.tensor(0)->data.raw, interpreter.tensor(1)->data.raw);
ASSERT_LT(interpreter.tensor(1)->data.raw, interpreter.tensor(3)->data.raw);
ASSERT_EQ(interpreter.tensor(3)->data.raw, interpreter.tensor(9)->data.raw);
ASSERT_LT(interpreter.tensor(3)->data.raw, interpreter.tensor(5)->data.raw);
ASSERT_LT(interpreter.tensor(5)->data.raw, interpreter.tensor(2)->data.raw);
ASSERT_EQ(interpreter.tensor(2)->data.raw, interpreter.tensor(7)->data.raw);
ASSERT_LT(interpreter.tensor(2)->data.raw, interpreter.tensor(4)->data.raw);
// #4 is the one with the largest pointer.
ASSERT_EQ(interpreter.tensor(8)->data.raw, nullptr);
}
TEST(BasicInterpreter, BufferAccess) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(1), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "", {3}, TfLiteQuantizationParams()),
kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
// Verify we get a valid pointer.
ASSERT_NE(interpreter.typed_tensor<float>(0), nullptr);
// Verify incorrect pointer is not returned.
ASSERT_EQ(interpreter.typed_tensor<int>(0), nullptr);
// Verify that raw c interface ptr matches safe interface.
ASSERT_EQ(interpreter.typed_tensor<float>(0), interpreter.tensor(0)->data.f);
}
TEST(BasicInterpreter, NoOpInterpreter) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(1), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "", {3}, TfLiteQuantizationParams()),
kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensor(interpreter.inputs()[0], {1, 2, 3}),
kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
}
TEST(BasicInterpreter, RedundantAllocateTensors) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(1), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "", {3}, TfLiteQuantizationParams()),
kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
const auto data_raw = interpreter.tensor(0)->data.raw;
ASSERT_NE(data_raw, nullptr);
// A redundant allocation request should have no impact.
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.tensor(0)->data.raw, data_raw);
}
TEST(BasicInterpreter, RedundantAllocateTensorsWithDynamicInputs) {
Interpreter interpreter;
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
ASSERT_EQ(interpreter.AddTensors(2), kTfLiteOk);
interpreter.SetInputs({0});
interpreter.SetOutputs({1});
interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr, &reg);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "", {3}, TfLiteQuantizationParams()),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
1, kTfLiteFloat32, "", {3}, TfLiteQuantizationParams()),
kTfLiteOk);
// Configure the input tensor as dynamic.
interpreter.tensor(0)->data.raw = nullptr;
interpreter.tensor(0)->allocation_type = kTfLiteDynamic;
ASSERT_EQ(interpreter.ResizeInputTensor(interpreter.inputs()[0], {1, 2, 3}),
kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_NE(interpreter.tensor(1)->data.raw, nullptr);
// Reset the output tensor's buffer.
interpreter.tensor(1)->data.raw = nullptr;
// A redundant allocation request should be honored, as the input tensor
// was marked dynamic.
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_NE(interpreter.tensor(1)->data.raw, nullptr);
}
TEST(BasicInterpreter, ResizingTensors) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(1), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "", {3}, TfLiteQuantizationParams()),
kTfLiteOk);
int t = interpreter.inputs()[0];
TfLiteTensor* tensor = interpreter.tensor(t);
ASSERT_EQ(interpreter.ResizeInputTensor(t, {1, 2, 3}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 6 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
tensor->data.f[5] = 0.123f;
// Changing from kTfLiteArenaRw to kTfLiteDynamic is quite complicated: we need
// to unset data.raw, otherwise Realloc will try to free that memory.
tensor->data.raw = nullptr;
tensor->allocation_type = kTfLiteDynamic;
ASSERT_EQ(interpreter.ResizeInputTensor(t, {1, 2, 4}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 8 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensor(t, {}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 1 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensor(t, {0}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 0);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensor(t, {1, 2, 0}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 0);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
// TODO(ahentz): We shouldn't have to force reallocation, but
// ResizeInputTensor doesn't realloc dynamic tensors. Also note that
// TfLiteTensorRealloc(tensor->bytes, tensor) is a no-op.
TfLiteTensorRealloc(9 * sizeof(float), tensor);
tensor->data.f[7] = 0.123f;
ASSERT_EQ(interpreter.ResizeInputTensor(t, {2, 2, 4}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 16 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
// TODO(ahentz): We shouldn't have to force reallocation, but
// ResizeInputTensor doesn't realloc dynamic tensors. Also note that
// TfLiteTensorRealloc(tensor->bytes, tensor) is a no-op.
TfLiteTensorRealloc(17 * sizeof(float), tensor);
tensor->data.f[15] = 0.123f;
}
TEST(BasicInterpreter, NoopResizingTensors) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(1), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "", {3}, TfLiteQuantizationParams()),
kTfLiteOk);
int t = interpreter.inputs()[0];
TfLiteTensor* tensor = interpreter.tensor(t);
ASSERT_EQ(interpreter.ResizeInputTensor(t, {1, 2, 3}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 6 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
tensor->data.f[5] = 0.123f;
// Resizing to the same size should not trigger re-allocation.
ASSERT_EQ(interpreter.ResizeInputTensor(t, {1, 2, 3}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 6 * sizeof(float));
ASSERT_NE(tensor->data.raw, nullptr);
ASSERT_EQ(tensor->data.f[5], 0.123f);
// Explicitly allocating should be a no-op, as no resize was performed.
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 6 * sizeof(float));
ASSERT_NE(tensor->data.raw, nullptr);
ASSERT_EQ(tensor->data.f[5], 0.123f);
}
TEST(BasicInterpreter, ResizingTensorsStrictInvalid) {
// Tests ResizeInputTensorStrict where `dims_signature` is not specified.
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(1), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "", {1, 1, 3}, TfLiteQuantizationParams()),
kTfLiteOk);
int t = interpreter.inputs()[0];
TfLiteTensor* tensor = interpreter.tensor(t);
ASSERT_EQ(interpreter.ResizeInputTensorStrict(t, {1, 1, 3}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 3 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
// Invalid because `dims_signature` is not specified.
ASSERT_EQ(interpreter.ResizeInputTensorStrict(t, {1, 2, 3}), kTfLiteError);
EXPECT_EQ(tensor->bytes, 3 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
// Assert that ResizeInputTensor works for this value.
ASSERT_EQ(interpreter.ResizeInputTensor(t, {1, 2, 3}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 6 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
}
TEST(BasicInterpreter, ResizingTensorsStrict) {
// Tests ResizeInputTensorStrict where `dims_signature` is specified.
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(1), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({0}), kTfLiteOk);
std::vector<int> dims_signature = {-1, -1, 3};
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "", {1, 1, 3}, TfLiteQuantizationParams(),
false, &dims_signature),
kTfLiteOk);
int t = interpreter.inputs()[0];
TfLiteTensor* tensor = interpreter.tensor(t);
ASSERT_EQ(interpreter.ResizeInputTensorStrict(t, {1, 2, 3}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 6 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensorStrict(t, {1, 2, 4}), kTfLiteError);
EXPECT_EQ(tensor->bytes, 6 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
// Assert that ResizeInputTensor works for this value.
ASSERT_EQ(interpreter.ResizeInputTensor(t, {1, 2, 4}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 8 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
}
// Simple op that does input = output.
TfLiteRegistration GetPassthroughOpRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
reg.init = [](TfLiteContext* context, const char*, size_t) -> void* {
auto* first_new_tensor = new int;
context->AddTensors(context, 2, first_new_tensor);
return first_new_tensor;
};
reg.free = [](TfLiteContext* context, void* buffer) {
delete static_cast<int*>(buffer);
};
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
auto* first_new_tensor = static_cast<int*>(node->user_data);
const TfLiteTensor* tensor0 = GetInput(context, node, 0);
TfLiteTensor* tensor1 = GetOutput(context, node, 0);
TfLiteIntArray* newSize = TfLiteIntArrayCopy(tensor0->dims);
TF_LITE_ENSURE_STATUS(context->ResizeTensor(context, tensor1, newSize));
TfLiteIntArrayFree(node->temporaries);
node->temporaries = TfLiteIntArrayCreate(2);
for (int i = 0; i < 2; ++i) {
node->temporaries->data[i] = *(first_new_tensor) + i;
}
auto setup_temporary = [&](int id) {
TfLiteTensor* tmp = &context->tensors[id];
tmp->type = kTfLiteFloat32;
tmp->allocation_type = kTfLiteArenaRw;
return context->ResizeTensor(context, tmp,
TfLiteIntArrayCopy(tensor0->dims));
};
TF_LITE_ENSURE_STATUS(setup_temporary(node->temporaries->data[0]));
TF_LITE_ENSURE_STATUS(setup_temporary(node->temporaries->data[1]));
return kTfLiteOk;
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* a0 = GetInput(context, node, 0);
auto populate = [&](int id) {
TfLiteTensor* t = &context->tensors[id];
int num = a0->dims->data[0];
for (int i = 0; i < num; i++) {
t->data.f[i] = a0->data.f[i];
}
};
populate(node->outputs->data[0]);
populate(node->temporaries->data[0]);
populate(node->temporaries->data[1]);
return kTfLiteOk;
};
return reg;
}
TEST(BasicInterpreter, OneOpInterpreter) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(2), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({1}), kTfLiteOk);
TfLiteQuantizationParams quantized;
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(0, kTfLiteFloat32, "in1",
{3}, quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(1, kTfLiteFloat32, "out0",
{3}, quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.GetInputName(0), "in1");
ASSERT_EQ(interpreter.GetOutputName(0), "out0");
TfLiteRegistration reg = GetPassthroughOpRegistration();
ASSERT_EQ(
interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr, &reg),
kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensor(0, {3}), kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
}
TEST(BasicInterpreter, ReleaseNonPersistentMemory) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(2), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({1}), kTfLiteOk);
TfLiteQuantizationParams quantized;
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(0, kTfLiteFloat32, "in1",
{3}, quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(1, kTfLiteFloat32, "out0",
{3}, quantized),
kTfLiteOk);
TfLiteRegistration reg = GetPassthroughOpRegistration();
ASSERT_EQ(
interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr, &reg),
kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensor(0, {3}), kTfLiteOk);
// AllocateTensors() hasn't been called yet, so this should be a no-op.
ASSERT_EQ(interpreter.ReleaseNonPersistentMemory(), kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
ASSERT_EQ(interpreter.ReleaseNonPersistentMemory(), kTfLiteOk);
// Invoke() now fails because non-persistent arenas have been released.
ASSERT_NE(interpreter.Invoke(), kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
// ResizeInputTensors just after ReleaseNonPersistentMemory should also need
// AllocateTensors, without causing any unexpected crashes.
ASSERT_EQ(interpreter.ReleaseNonPersistentMemory(), kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensor(0, {4}), kTfLiteOk);
ASSERT_NE(interpreter.Invoke(), kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
}
// Forcefully divides tensor allocation into three steps: one before invocation
// and two more at invocation time. This happens because we use string tensors
// and their sizes can't be determined until invocation time.
TEST(BasicInterpreter, ThreeStepAllocate) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(5), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({4}), kTfLiteOk);
TfLiteQuantizationParams quantized;
// String tensor with one string of length 3
union {
char raw_bytes[15];
struct {
int32_t num_strs;
int32_t offsets[2];
char str_data[3];
} tensor_data;
} data;
data.tensor_data = {1, {12, 15}, {'A', 'B', 'C'}};
// Read-only string tensor.
ASSERT_EQ(interpreter.SetTensorParametersReadOnly(0, kTfLiteString, "", {1},
quantized, data.raw_bytes,
sizeof(data.raw_bytes)),
kTfLiteOk);
// Read-write string tensor.
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(1, kTfLiteString, "", {1},
quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(2, kTfLiteInt32, "", {1},
quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(3, kTfLiteString, "", {1},
quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(4, kTfLiteInt32, "", {1},
quantized),
kTfLiteOk);
// String-in String-out node.
TfLiteRegistration reg_copy = {nullptr, nullptr, nullptr, nullptr};
reg_copy.invoke = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input = GetInput(context, node, 0);
TfLiteTensor* output = GetOutput(context, node, 0);
DynamicBuffer buf;
StringRef str_ref = GetString(input, 0);
buf.AddString(str_ref);
buf.WriteToTensorAsVector(output);
return kTfLiteOk;
};
// String-in Int-out node.
TfLiteRegistration reg_len = {nullptr, nullptr, nullptr, nullptr};
reg_len.prepare = [](TfLiteContext* context, TfLiteNode* node) {
TfLiteTensor* output = GetOutput(context, node, 0);
TfLiteIntArray* outputSize = TfLiteIntArrayCreate(1);
outputSize->data[0] = 1;
return context->ResizeTensor(context, output, outputSize);
};
reg_len.invoke = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* a0 = GetInput(context, node, 0);
TfLiteTensor* a1 = GetOutput(context, node, 0);
a1->data.i32[0] = a0->bytes;
return kTfLiteOk;
};
ASSERT_EQ(interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr,
&reg_copy),
kTfLiteOk);
ASSERT_EQ(interpreter.AddNodeWithParameters({1}, {2}, nullptr, 0, nullptr,
&reg_len),
kTfLiteOk);
ASSERT_EQ(interpreter.AddNodeWithParameters({0}, {3}, nullptr, 0, nullptr,
&reg_copy),
kTfLiteOk);
ASSERT_EQ(interpreter.AddNodeWithParameters({3}, {4}, nullptr, 0, nullptr,
&reg_len),
kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
ASSERT_EQ(interpreter.tensor(0)->bytes, 15);
ASSERT_NE(interpreter.tensor(0)->data.raw, nullptr);
ASSERT_EQ(interpreter.tensor(1)->bytes, 15);
ASSERT_NE(interpreter.tensor(1)->data.raw, nullptr);
ASSERT_EQ(interpreter.tensor(3)->bytes, 15);
ASSERT_NE(interpreter.tensor(4)->data.raw, nullptr);
ASSERT_EQ(interpreter.tensor(2)->bytes, 4);
ASSERT_EQ(interpreter.tensor(2)->data.i32[0], 15);
ASSERT_EQ(interpreter.tensor(4)->bytes, 4);
ASSERT_EQ(interpreter.tensor(4)->data.i32[0], 15);
}
TEST(BasicInterpreter, AllocateTwice) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(2), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({1}), kTfLiteOk);
TfLiteQuantizationParams quantized;
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(0, kTfLiteFloat32, "", {3},
quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(1, kTfLiteFloat32, "", {3},
quantized),
kTfLiteOk);
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* tensor0 = GetInput(context, node, 0);
TfLiteTensor* tensor1 = GetOutput(context, node, 0);
TfLiteIntArray* newSize = TfLiteIntArrayCopy(tensor0->dims);
return context->ResizeTensor(context, tensor1, newSize);
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* a0 = GetInput(context, node, 0);
TfLiteTensor* a1 = GetOutput(context, node, 0);
int num = a0->dims->data[0];
for (int i = 0; i < num; i++) {
a1->data.f[i] = a0->data.f[i];
}
return kTfLiteOk;
};
ASSERT_EQ(
interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr, &reg),
kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensor(0, {3}), kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
char* old_tensor0_ptr = interpreter.tensor(0)->data.raw;
char* old_tensor1_ptr = interpreter.tensor(1)->data.raw;
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
ASSERT_EQ(old_tensor0_ptr, interpreter.tensor(0)->data.raw);
ASSERT_EQ(old_tensor1_ptr, interpreter.tensor(1)->data.raw);
}
TEST(BasicInterpreter, TestNullErrorReporter) {
TestErrorReporter reporter;
Interpreter interpreter;
}
TEST(BasicInterpreter, TestCustomErrorReporter) {
TestErrorReporter reporter;
Interpreter interpreter(&reporter);
ASSERT_NE(interpreter.Invoke(), kTfLiteOk);
ASSERT_EQ(reporter.error_messages(),
"Invoke called on model that is not ready.");
ASSERT_EQ(reporter.num_calls(), 1);
}
TEST(BasicInterpreter, TestOverflow) {
TestErrorReporter reporter;
Interpreter interpreter(&reporter);
TfLiteQuantizationParams quantized;
ASSERT_EQ(interpreter.AddTensors(1), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({0}), kTfLiteOk);
// Overflow testing is pointer word size dependent.
if (sizeof(size_t) == 8) {
// #bits for bytecount = 30 + 30 + 2 = 62 < 64
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "in1", {1 << 30, 1 << 30}, quantized),
kTfLiteOk);
// #bits for element count = 30 + 30 + 2 = 62 < 64 (no overflow)
// #bits for byte count = 30 + 30 + 2 + 2 = 64 == 64 (overflow)
ASSERT_NE(
interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "in1", {1 << 30, 1 << 30, 1 << 2}, quantized),
kTfLiteOk);
EXPECT_THAT(
reporter.error_messages(),
testing::EndsWith("BytesRequired number of bytes overflowed.\n"));
// #bits for element count = 30 + 30 + 2 + 4 = 66 > 64 (overflow).
// #bits for byte count = 30 + 30 + 2 + 4 + 2 = 68 > 64 (overflow).
reporter.Reset();
ASSERT_NE(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "in1", {1 << 30, 1 << 30, 1 << 2, 1 << 4},
quantized),
kTfLiteOk);
EXPECT_THAT(
reporter.error_messages(),
testing::EndsWith("BytesRequired number of elements overflowed.\n"));
} else if (sizeof(size_t) == 4) {
// #bits for bytecount = 14 + 14 + 2 = 30 < 32
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "in1", {1 << 14, 1 << 14}, quantized),
kTfLiteOk);
// #bits for element count = 14 + 14 + 3 = 31 < 32 (no overflow).
// #bits for byte count = 14 + 14 + 3 + 2 = 33 > 32 (overflow).
ASSERT_NE(
interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "in1", {1 << 14, 1 << 14, 1 << 3}, quantized),
kTfLiteOk);
EXPECT_THAT(
reporter.error_messages(),
testing::EndsWith("BytesRequired number of bytes overflowed.\n"));
// #bits for element count = 14 + 14 + 4 = 32 == 32 (overflow).
// byte count also overflows, but we don't get to that check.
reporter.Reset();
ASSERT_NE(
interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "in1", {1 << 14, 1 << 14, 1 << 4}, quantized),
kTfLiteOk);
EXPECT_THAT(
reporter.error_messages(),
testing::EndsWith("BytesRequired number of elements overflowed.\n"));
} else {
// This test failing means that we are using a non 32/64 bit architecture.
ASSERT_TRUE(false);
}
}
TEST(BasicInterpreter, TestUseNNAPI) {
TestErrorReporter reporter;
Interpreter interpreter(&reporter);
interpreter.UseNNAPI(true);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
interpreter.UseNNAPI(false);
ASSERT_EQ(reporter.error_messages(),
"Attempting to disable NNAPI delegate after it's applied.");
}
TEST(BasicInterpreter, TestUnsupportedDelegateFunctions) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(2), kTfLiteOk);
TfLiteRegistration registration = {nullptr, nullptr, nullptr, nullptr};
// These functions are only supported inside Delegate's Prepare function.
// The test verifies that these functions return `kTfLiteError` rather than
// `kTfLiteOk`, and that they do not crash.
registration.prepare = [](TfLiteContext* context, TfLiteNode* node) {
{
TfLiteIntArray* execution_plan;
EXPECT_EQ(context->GetExecutionPlan(context, &execution_plan),
kTfLiteError);
}
{
TfLiteNode* node;
TfLiteRegistration* registration;
EXPECT_EQ(
context->GetNodeAndRegistration(context, 0, &node, &registration),
kTfLiteError);
}
{
TfLiteRegistration delegate_registration = {nullptr, nullptr, nullptr,
nullptr};
TfLiteIntArray nodes_to_replace;
nodes_to_replace.size = 0;
EXPECT_EQ(context->ReplaceNodeSubsetsWithDelegateKernels(
context, delegate_registration, &nodes_to_replace, nullptr),
kTfLiteError);
}
return kTfLiteError;
};
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr,
&registration),
kTfLiteOk);
EXPECT_EQ(interpreter.AllocateTensors(), kTfLiteError);
}
TEST(BasicInterpreter, DynamicTensorsResizeDescendants) {
// Assemble a graph with a node that has dynamically sized output (via the
// pad op), followed by a node with a standard element-wise op (negate).
Interpreter interpreter;
interpreter.AddTensors(4);
interpreter.SetInputs({0, 1});
interpreter.SetOutputs({3});
TfLiteQuantizationParams quant;
interpreter.SetTensorParametersReadWrite(0, kTfLiteFloat32, "", {2, 2, 1, 1},
quant);
interpreter.SetTensorParametersReadWrite(1, kTfLiteInt32, "", {4, 2}, quant);
interpreter.SetTensorParametersReadWrite(2, kTfLiteFloat32, "", {}, quant);
interpreter.SetTensorParametersReadWrite(3, kTfLiteFloat32, "", {}, quant);
TfLiteRegistration* pad_op = tflite::ops::builtin::Register_PADV2();
TfLiteRegistration* neg_op = tflite::ops::builtin::Register_NEG();
interpreter.AddNodeWithParameters({0, 1}, {2}, nullptr, 0, nullptr, pad_op);
interpreter.AddNodeWithParameters({2}, {3}, nullptr, 0, nullptr, neg_op);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
// Configure [[2,2],[4,4]] padding and execute the graph.
interpreter.typed_tensor<int>(1)[0] = 2;
interpreter.typed_tensor<int>(1)[1] = 2;
interpreter.typed_tensor<int>(1)[2] = 2;
interpreter.typed_tensor<int>(1)[3] = 2;
interpreter.typed_tensor<int>(1)[4] = 0;
interpreter.typed_tensor<int>(1)[5] = 0;
interpreter.typed_tensor<int>(1)[6] = 0;
interpreter.typed_tensor<int>(1)[7] = 0;
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
// Both the output and intermediate tensor sizes should reflect the output
// from the dynamic pad operation.
ASSERT_EQ(interpreter.tensor(2)->bytes, sizeof(float) * 6 * 6);
ASSERT_EQ(interpreter.tensor(3)->bytes, sizeof(float) * 6 * 6);
// Now configure [[4,4],[6,6]] padding and execute the graph.
interpreter.typed_tensor<int>(1)[0] = 4;
interpreter.typed_tensor<int>(1)[1] = 4;
interpreter.typed_tensor<int>(1)[2] = 6;
interpreter.typed_tensor<int>(1)[3] = 6;
interpreter.typed_tensor<int>(1)[4] = 0;
interpreter.typed_tensor<int>(1)[5] = 0;
interpreter.typed_tensor<int>(1)[6] = 0;
interpreter.typed_tensor<int>(1)[7] = 0;
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
// Again, the output and intermediate tensor sizes should reflect the *new*
// resize from the latest pad operation.
ASSERT_EQ(interpreter.tensor(2)->bytes, sizeof(float) * 10 * 14);
ASSERT_EQ(interpreter.tensor(3)->bytes, sizeof(float) * 10 * 14);
}
TEST(InterpreterTensorsCapacityTest, TestWithinHeadroom) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(Interpreter::kTensorsReservedCapacity),
kTfLiteOk);
TfLiteRegistration registration = {nullptr, nullptr, nullptr, nullptr};
registration.prepare = [](TfLiteContext* context, TfLiteNode* node) {
TfLiteTensor* first_tensor = context->tensors;
int new_tensor_index;
context->AddTensors(context, Interpreter::kTensorsCapacityHeadroom,
&new_tensor_index);
EXPECT_EQ(first_tensor, context->tensors);
return kTfLiteOk;
};
ASSERT_EQ(interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr,
&registration),
kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
}
TEST(InterpreterTensorsCapacityTest, TestExceedHeadroom) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(Interpreter::kTensorsReservedCapacity),
kTfLiteOk);
TfLiteRegistration registration = {nullptr, nullptr, nullptr, nullptr};
registration.prepare = [](TfLiteContext* context, TfLiteNode* node) {
TfLiteTensor* first_tensor = context->tensors;
int new_tensor_index;
// Add enough tensors to trigger buffer re-allocation.
context->AddTensors(
context,
(context->tensors_size + Interpreter::kTensorsCapacityHeadroom + 1) * 2,
&new_tensor_index);
EXPECT_NE(first_tensor, context->tensors);
return kTfLiteOk;
};
ASSERT_EQ(interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr,
&registration),
kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
}
struct TestExternalContext : public TfLiteExternalContext {
static constexpr TfLiteExternalContextType kType = kTfLiteGemmLowpContext;
static TestExternalContext* Get(TfLiteContext* context) {
return reinterpret_cast<TestExternalContext*>(
context->GetExternalContext(context, kType));
}
static void Set(TfLiteContext* context, TestExternalContext* value) {
context->SetExternalContext(context, kType, value);
}
int num_refreshes = 0;
};
TEST_F(InterpreterTest, GetSetResetExternalContexts) {
auto* context = GetInterpreterContext();
TestExternalContext external_context;
external_context.Refresh = [](TfLiteContext* context) {
auto* ptr = TestExternalContext::Get(context);
if (ptr != nullptr) {
++ptr->num_refreshes;
}
return kTfLiteOk;
};
EXPECT_EQ(TestExternalContext::Get(context), nullptr);
ASSERT_EQ(interpreter_.SetNumThreads(4), kTfLiteOk);
TestExternalContext::Set(context, &external_context);
EXPECT_EQ(TestExternalContext::Get(context), &external_context);
ASSERT_EQ(interpreter_.SetNumThreads(4), kTfLiteOk);
ASSERT_EQ(interpreter_.SetNumThreads(5), kTfLiteOk);
EXPECT_EQ(external_context.num_refreshes, 2);
// Reset refresh count to 0
external_context.num_refreshes = 0;
// Below should not call external context refresh
ASSERT_EQ(interpreter_.SetNumThreads(-2), kTfLiteError);
EXPECT_EQ(external_context.num_refreshes, 0);
ASSERT_EQ(interpreter_.SetNumThreads(-1), kTfLiteOk);
EXPECT_EQ(external_context.num_refreshes, 1);
TestExternalContext::Set(context, nullptr);
EXPECT_EQ(TestExternalContext::Get(context), nullptr);
ASSERT_EQ(interpreter_.SetNumThreads(4), kTfLiteOk);
}
struct TestCpuBackendContext : public TfLiteInternalBackendContext {
// Count the number of calls to ClearCaches for the backend context.
void ClearCaches() override { ++num_calls; }
void SetMaxNumThreads(int num_threads) override {}
int num_calls = 0;
};
TEST_F(InterpreterTest, ExternalBackendContextClearsCachesOnDelete) {
ExternalCpuBackendContext external_cpu_context;
TestCpuBackendContext* cpu_backend_context = new TestCpuBackendContext();
external_cpu_context.set_internal_backend_context(
std::unique_ptr<TfLiteInternalBackendContext>(cpu_backend_context));
{
// Create an interpreter with an external Cpu backend context and ensure
// it goes out of scope.
Interpreter interpreter;
interpreter.SetExternalContext(kTfLiteCpuBackendContext,
&external_cpu_context);
EXPECT_EQ(cpu_backend_context->num_calls, 0);
}
EXPECT_EQ(cpu_backend_context->num_calls, 1);
}
// Test fixture that allows playing with execution plans. It creates a two
// node graph that can be executed in either [0,1] order or [1,0] order.
// The CopyOp records when it is invoked in the class member run_order_
// so we can test whether the execution plan was honored.
class TestExecutionPlan : public ::testing::Test {
// Encapsulates the node ids and provides them to a C primitive data type
// Allocatable with placement new, but never destructed, so make sure this
// doesn't own any heap allocated data. This is then used as op local
// data to allow access to the test fixture data.
class CallReporting {
public:
CallReporting(int node_id, std::vector<int>* run_order)
: node_id_(node_id), run_order_(run_order) {}
void Record() { run_order_->push_back(node_id_); }
private:
// The node id for this particular node
int node_id_;
// A pointer to the global run-order
std::vector<int>* run_order_;
};
// Build a kernel registration for an op that copies its one input
// to an output
TfLiteRegistration CopyOpRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
// Set output size to input size
const TfLiteTensor* tensor0 = GetInput(context, node, 0);
TfLiteTensor* tensor1 = GetOutput(context, node, 0);
TfLiteIntArray* newSize = TfLiteIntArrayCopy(tensor0->dims);
return context->ResizeTensor(context, tensor1, newSize);
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
CallReporting* call_reporting =
static_cast<CallReporting*>(node->builtin_data);
// Copy input data to output data.
const TfLiteTensor* a0 = GetInput(context, node, 0);
TfLiteTensor* a1 = GetOutput(context, node, 0);
int num = a0->dims->data[0];
for (int i = 0; i < num; i++) {
a1->data.f[i] = a0->data.f[i];
}
call_reporting->Record();
return kTfLiteOk;
};
return reg;
}
// Adds a copy node going from tensor `input` to output tensor `output`.
// Note, input is used as the node_id. Inject run_order as op accessible
// data. Note: this is a slightly unusual way to do this, but it is
// using op functionality to avoid static global variables.
void MakeCopyNode(int input, int output) {
// Ownership of call_reporting is taken by interpreter (malloc is used due
// to nodes being a C99 interface so free() is used).
TfLiteRegistration copy_op = CopyOpRegistration();
CallReporting* call_reporting_1 =
static_cast<CallReporting*>(malloc(sizeof(CallReporting)));
new (call_reporting_1) CallReporting(input, &run_order_);
ASSERT_EQ(interpreter_.AddNodeWithParameters(
{0}, {2}, nullptr, 0, static_cast<void*>(call_reporting_1),
&copy_op),
kTfLiteOk);
ASSERT_EQ(interpreter_.ResizeInputTensor(input, {3}), kTfLiteOk);
}
void SetUp() final {
// Add two inputs and two outputs that don't depend on each other
ASSERT_EQ(interpreter_.AddTensors(4), kTfLiteOk);
interpreter_.SetInputs({0, 1});
interpreter_.SetOutputs({2, 3});
TfLiteQuantizationParams quantized;
for (int tensor_index = 0; tensor_index < 4; tensor_index++) {
ASSERT_EQ(interpreter_.SetTensorParametersReadWrite(
tensor_index, kTfLiteFloat32, "", {3}, quantized),
kTfLiteOk);
}
// Define two copy functions that also use the user_data to report that
// they were called.
// i.e. tensor[2] = copy(tensor[0]); tensor[3] = copy(tensor[1]);
// thus we can reorder the two nodes arbitrarily and still satisfy dependency
// order.
MakeCopyNode(0, 2);
MakeCopyNode(1, 3);
ASSERT_EQ(interpreter_.AllocateTensors(), kTfLiteOk);
}
protected:
Interpreter interpreter_;
// list of node_ids that were run
std::vector<int> run_order_;
};
TEST_F(TestExecutionPlan, DefaultExecutionPlan) {
// Check default order
ASSERT_EQ(interpreter_.Invoke(), kTfLiteOk);
ASSERT_EQ(run_order_, std::vector<int>({0, 1}));
}
TEST_F(TestExecutionPlan, ReversedExecutionPlan) {
// Check reversed order
interpreter_.SetExecutionPlan({1, 0});
ASSERT_EQ(interpreter_.Invoke(), kTfLiteOk);
ASSERT_EQ(run_order_, std::vector<int>({1, 0}));
}
TEST_F(TestExecutionPlan, SubsetExecutionPlan) {
// Check running only node index 1
interpreter_.SetExecutionPlan({1});
ASSERT_EQ(interpreter_.Invoke(), kTfLiteOk);
ASSERT_EQ(run_order_, std::vector<int>({1}));
}
TEST_F(TestExecutionPlan, NullExecutionPlan) {
// Check nothing executed.
interpreter_.SetExecutionPlan({});
ASSERT_EQ(interpreter_.Invoke(), kTfLiteOk);
ASSERT_EQ(run_order_, std::vector<int>());
}
TEST(TestDelegateOwnership, ProperlyDisposed) {
struct TfLiteInterpreterOwnedDelegate : public TfLiteDelegate {
TfLiteInterpreterOwnedDelegate(bool* destroyed, bool* prepared)
: destroyed(destroyed), prepared(prepared) {
flags = kTfLiteDelegateFlagsNone;
Prepare = [](TfLiteContext*, TfLiteDelegate* delegate) -> TfLiteStatus {
*static_cast<TfLiteInterpreterOwnedDelegate*>(delegate)->prepared =
true;
return kTfLiteOk;
};
}
~TfLiteInterpreterOwnedDelegate() { *destroyed = true; }
bool* destroyed;
bool* prepared;
};
// Construct a delegate with flags for indicating preparation/destruction.
bool destroyed = false;
bool prepared = false;
std::unique_ptr<TfLiteInterpreterOwnedDelegate> delegate(
new TfLiteInterpreterOwnedDelegate(&destroyed, &prepared));
{
// Create an interpreter and assemble a simple graph.
Interpreter interpreter;
TfLiteRegistration registration = {nullptr, nullptr, nullptr, nullptr};
ASSERT_EQ(interpreter.AddTensors(2), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({1}), kTfLiteOk);
ASSERT_EQ(interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr,
®istration),
kTfLiteOk);
// Pass delegate ownership to that interpreter.
ASSERT_EQ(InterpreterTest::ModifyGraphWithDelegate(&interpreter,
std::move(delegate)),
kTfLiteOk);
// The delegate should be prepared as normal, and should be preserved.
EXPECT_TRUE(prepared);
EXPECT_FALSE(destroyed);
// Interpreter interaction should not impact the delegate's validity.
interpreter.AllocateTensors();
interpreter.Invoke();
EXPECT_FALSE(destroyed);
}
// Only after the interpreter is destroyed should the delegate be destroyed.
EXPECT_TRUE(destroyed);
}
// CancellationData contains the data required to cancel a call to Invoke().
struct CancellationData {
bool is_cancelled = false;
};
// Indicates whether Invoke() has been cancelled based on the value of the
// CancellationData object passed in.
bool CheckCancellation(void* data) {
CancellationData* cancellation_data =
static_cast<struct CancellationData*>(data);
return cancellation_data->is_cancelled;
}
static struct CancellationData cancellation_data_;
// Test fixture to test cancellation within the Interpreter.
class CancellationTest : public ::testing::Test {
public:
TfLiteStatus Invoke() { return interpreter_.Invoke(); }
void Cancel() { cancellation_data_.is_cancelled = true; }
// Adds a CancelOp with input tensor `input` and output tensor `output`.
void MakeCancelNode(int input, int output) {
TfLiteRegistration op = CancelOpRegistration();
ASSERT_EQ(interpreter_.AddNodeWithParameters({input}, {output}, nullptr, 0,
nullptr, &op),
kTfLiteOk);
ASSERT_EQ(interpreter_.ResizeInputTensor(input, {3}), kTfLiteOk);
}
// Adds an OkOp with input tensor `input` and output tensor `output`.
void MakeOkNode(int input, int output) {
TfLiteRegistration op = OkOpRegistration();
ASSERT_EQ(interpreter_.AddNodeWithParameters({input}, {output}, nullptr, 0,
nullptr, &op),
kTfLiteOk);
ASSERT_EQ(interpreter_.ResizeInputTensor(input, {3}), kTfLiteOk);
}
Interpreter interpreter_;
private:
// Build the kernel registration for an op that cancels the operation.
TfLiteRegistration CancelOpRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
// Set output size to the input size in CancelOp::Prepare(). The code exists
// only to exercise the Prepare stage; the input and output tensors are not used.
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* in_tensor = GetInput(context, node, 0);
TfLiteTensor* out_tensor = GetOutput(context, node, 0);
TfLiteIntArray* new_size = TfLiteIntArrayCopy(in_tensor->dims);
return context->ResizeTensor(context, out_tensor, new_size);
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
cancellation_data_.is_cancelled = true;
return kTfLiteOk;
};
return reg;
}
// Build the kernel registration for an op that returns kTfLiteOk.
TfLiteRegistration OkOpRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
// Set output size to the input size in OkOp::Prepare(). The code exists only
// to exercise the Prepare stage; the input and output tensors are not used.
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* in_tensor = GetInput(context, node, 0);
TfLiteTensor* out_tensor = GetOutput(context, node, 0);
TfLiteIntArray* new_size = TfLiteIntArrayCopy(in_tensor->dims);
return context->ResizeTensor(context, out_tensor, new_size);
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
return kTfLiteOk;
};
return reg;
}
void SetUp() final {
cancellation_data_.is_cancelled = false;
// Set up the interpreter. Create the input and output tensors.
int num_tensors = 3;
ASSERT_EQ(interpreter_.AddTensors(num_tensors), kTfLiteOk);
interpreter_.SetInputs({0});
interpreter_.SetOutputs({2});
TfLiteQuantizationParams quantized;
for (int tensor_index = 0; tensor_index < num_tensors; tensor_index++) {
ASSERT_EQ(interpreter_.SetTensorParametersReadWrite(
tensor_index, kTfLiteFloat32, "", {3}, quantized),
kTfLiteOk);
}
interpreter_.SetCancellationFunction(&cancellation_data_,
&CheckCancellation);
}
};
TEST_F(CancellationTest, CancelBeforeInvoke) {
// Cancel prior to calling Invoke.
CancellationTest::MakeOkNode(1, 2);
ASSERT_EQ(interpreter_.AllocateTensors(), kTfLiteOk);
CancellationTest::Cancel();
TfLiteStatus invoke_error_code = CancellationTest::Invoke();
ASSERT_EQ(invoke_error_code, kTfLiteError);
}
TEST_F(CancellationTest, CancelDuringInvoke) {
// Tests a model which sets the cancellation flag in order to test that
// cancellation works between ops.
//
// The first op will set the cancellation bit to true. The second op returns
// `kTfLiteOk` if executed.
CancellationTest::MakeCancelNode(0, 1);
CancellationTest::MakeOkNode(1, 2);
ASSERT_EQ(interpreter_.AllocateTensors(), kTfLiteOk);
TfLiteStatus invoke_error_code = CancellationTest::Invoke();
ASSERT_EQ(invoke_error_code, kTfLiteError);
}
// Tests functionality related to custom memory allocations in TFLite.
class TestCustomAllocation : public ::testing::Test {
protected:
void SetUp() override {
// Simple model with two custom ops that add 2 float tensors each.
interpreter_.reset(new Interpreter);
interpreter_->AddTensors(7);
interpreter_->SetInputs({0, 1});
interpreter_->SetOutputs({3, 4, 6});
TfLiteQuantizationParams quant;
interpreter_->SetTensorParametersReadWrite(0, kTfLiteFloat32, "", {3},
quant);
interpreter_->SetTensorParametersReadWrite(1, kTfLiteFloat32, "", {3},
quant);
interpreter_->SetTensorParametersReadWrite(2, kTfLiteFloat32, "", {3},
quant);
interpreter_->SetTensorParametersReadWrite(3, kTfLiteFloat32, "", {3},
quant);
interpreter_->SetTensorParametersReadWrite(4, kTfLiteFloat32, "", {3},
quant);
interpreter_->SetTensorParametersReadWrite(5, kTfLiteFloat32, "", {3},
quant, /*is_variable=*/true);
interpreter_->SetTensorParametersReadWrite(6, kTfLiteFloat32, "", {3},
quant);
auto* add_reg = ops::builtin::Register_ADD();
TfLiteAddParams* builtin_data0 =
reinterpret_cast<TfLiteAddParams*>(malloc(sizeof(TfLiteAddParams)));
TfLiteAddParams* builtin_data1 =
reinterpret_cast<TfLiteAddParams*>(malloc(sizeof(TfLiteAddParams)));
TfLiteAddParams* builtin_data2 =
reinterpret_cast<TfLiteAddParams*>(malloc(sizeof(TfLiteAddParams)));
TfLiteAddParams* builtin_data3 =
reinterpret_cast<TfLiteAddParams*>(malloc(sizeof(TfLiteAddParams)));
builtin_data0->activation = kTfLiteActNone;
builtin_data1->activation = kTfLiteActNone;
builtin_data2->activation = kTfLiteActNone;
builtin_data3->activation = kTfLiteActNone;
interpreter_->AddNodeWithParameters({0, 0}, {2}, nullptr, 0, builtin_data0,
add_reg);
interpreter_->AddNodeWithParameters({1, 1}, {3}, nullptr, 0, builtin_data1,
add_reg);
interpreter_->AddNodeWithParameters({2, 1}, {4}, nullptr, 0, builtin_data2,
add_reg);
interpreter_->AddNodeWithParameters({0, 5}, {6}, nullptr, 0, builtin_data3,
add_reg);
interpreter_->SetVariables({5});
}
void AssignCustomAllocForTensor(int tensor_idx, int required_alignment) {
const TfLiteTensor* tensor = interpreter_->tensor(tensor_idx);
auto tensor_alloc = NewCustomAlloc(tensor->bytes, required_alignment);
ASSERT_EQ(
interpreter_->SetCustomAllocationForTensor(tensor_idx, tensor_alloc),
kTfLiteOk);
}
void VerifyInvoke() {
std::vector<float> input = {1.0f, 2.0f, 3.0f};
std::vector<float> variable = {0.0f, 1.0f, 2.0f};
std::vector<float> expected_output = {2.0f, 4.0f, 6.0f};
// typed_tensor<...> should work irrespective of custom alloc, since it
// accesses output_tensor.data.
memcpy(interpreter_->typed_tensor<float>(interpreter_->variables()[0]),
variable.data(), 3 * sizeof(float));
memcpy(interpreter_->typed_tensor<float>(0), input.data(),
3 * sizeof(float));
memcpy(interpreter_->typed_tensor<float>(1), input.data(),
3 * sizeof(float));
ASSERT_EQ(interpreter_->Invoke(), kTfLiteOk);
TfLiteTensor* output_tensor =
interpreter_->tensor(interpreter_->outputs()[0]);
for (int i = 0; i < 3; ++i) {
EXPECT_EQ(output_tensor->data.f[i], expected_output[i]) << i;
}
}
// Actual initialized allocation is more than num_bytes, to account for
// required_alignment.
TfLiteCustomAllocation NewCustomAlloc(size_t num_bytes,
int required_alignment) {
// Extra memory to ensure alignment.
char* new_alloc = new char[num_bytes + required_alignment];
char* new_underlying_buffer_aligned_ptr = reinterpret_cast<char*>(
AlignTo(required_alignment, reinterpret_cast<intptr_t>(new_alloc)));
custom_alloc_buffers_.emplace_back(new_alloc);
return TfLiteCustomAllocation(
{new_underlying_buffer_aligned_ptr, num_bytes});
}
intptr_t AlignTo(size_t alignment, intptr_t offset) {
return offset % alignment == 0 ? offset
: offset + (alignment - offset % alignment);
}
void TearDown() override {
interpreter_.reset();
custom_alloc_buffers_.clear();
}
protected:
TfLiteAddParams add_params_;
std::unique_ptr<Interpreter> interpreter_;
std::vector<std::unique_ptr<char[]>> custom_alloc_buffers_;
};
TEST_F(TestCustomAllocation, InvalidAlignment) {
const TfLiteTensor* input_tensor =
interpreter_->tensor(interpreter_->inputs()[0]);
intptr_t dummy_ptr = kDefaultTensorAlignment - 1;
TfLiteCustomAllocation input_alloc{reinterpret_cast<void*>(dummy_ptr),
input_tensor->bytes};
ASSERT_EQ(interpreter_->SetCustomAllocationForTensor(
interpreter_->inputs()[0], input_alloc),
kTfLiteError);
// Allocate tensors & Invoke should still work.
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
VerifyInvoke();
}
TEST_F(TestCustomAllocation, InsufficientBytes) {
auto input_alloc = NewCustomAlloc(4, kDefaultTensorAlignment);
ASSERT_EQ(interpreter_->SetCustomAllocationForTensor(
interpreter_->inputs()[0], input_alloc),
kTfLiteError);
// Allocate tensors & Invoke should still work.
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
VerifyInvoke();
}
TEST_F(TestCustomAllocation, CustomInputAlloc) {
// Set custom allocation for one input tensor.
AssignCustomAllocForTensor(interpreter_->inputs()[0],
/*required_alignment=*/kDefaultTensorAlignment);
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
VerifyInvoke();
}
TEST_F(TestCustomAllocation, CustomInputAlloc_MultipleAssigns) {
// Set custom allocation for one input tensor.
AssignCustomAllocForTensor(interpreter_->inputs()[0],
/*required_alignment=*/kDefaultTensorAlignment);
AssignCustomAllocForTensor(interpreter_->inputs()[0],
/*required_alignment=*/kDefaultTensorAlignment);
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
VerifyInvoke();
AssignCustomAllocForTensor(interpreter_->inputs()[0],
/*required_alignment=*/kDefaultTensorAlignment);
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
VerifyInvoke();
}
TEST_F(TestCustomAllocation, CustomInputAlloc_AllocateTensorsBefore) {
// Allocate tensors.
// Allocating now will cause TFLite to reserve some extra memory, but nothing
// should break.
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
AssignCustomAllocForTensor(interpreter_->inputs()[0],
/*required_alignment=*/kDefaultTensorAlignment);
VerifyInvoke();
}
TEST_F(TestCustomAllocation, CustomInputAndOutputAllocs) {
// Set custom allocations for all IO tensors.
AssignCustomAllocForTensor(interpreter_->inputs()[0],
/*required_alignment=*/kDefaultTensorAlignment);
AssignCustomAllocForTensor(interpreter_->inputs()[1],
/*required_alignment=*/kDefaultTensorAlignment);
AssignCustomAllocForTensor(interpreter_->outputs()[0],
/*required_alignment=*/kDefaultTensorAlignment);
AssignCustomAllocForTensor(interpreter_->outputs()[1],
/*required_alignment=*/kDefaultTensorAlignment);
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
VerifyInvoke();
}
// Ensure that custom allocs work for tensors on persistent arena as well.
TEST_F(TestCustomAllocation, CustomAlloc_VariableTensor) {
// Set custom allocation for one input tensor.
AssignCustomAllocForTensor(interpreter_->variables()[0],
/*required_alignment=*/kDefaultTensorAlignment);
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
VerifyInvoke();
AssignCustomAllocForTensor(interpreter_->variables()[0],
/*required_alignment=*/kDefaultTensorAlignment);
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
std::vector<float> input = {2.0f, 3.0f, 4.0f};
std::vector<float> variable = {1.0f, 2.0f, 3.0f};
std::vector<float> expected_output = {3.0f, 5.0f, 7.0f};
memcpy(interpreter_->typed_tensor<float>(interpreter_->variables()[0]),
variable.data(), 3 * sizeof(float));
memcpy(interpreter_->typed_tensor<float>(0), input.data(), 3 * sizeof(float));
memcpy(interpreter_->typed_tensor<float>(1), input.data(), 3 * sizeof(float));
ASSERT_EQ(interpreter_->Invoke(), kTfLiteOk);
// expected_output = input + variable
TfLiteTensor* output_tensor =
interpreter_->tensor(interpreter_->outputs()[2]);
for (int i = 0; i < 3; ++i) {
EXPECT_EQ(output_tensor->data.f[i], expected_output[i]) << i;
}
}
TEST_F(TestCustomAllocation, ResizeTensorsWithoutEnoughMemory) {
// Set custom allocations for all input tensors.
AssignCustomAllocForTensor(interpreter_->inputs()[0],
/*required_alignment=*/kDefaultTensorAlignment);
AssignCustomAllocForTensor(interpreter_->inputs()[1],
/*required_alignment=*/kDefaultTensorAlignment);
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
// Now resize tensors to double the size.
ASSERT_EQ(interpreter_->ResizeInputTensor(interpreter_->inputs()[0], {2, 3}),
kTfLiteOk);
ASSERT_EQ(interpreter_->ResizeInputTensor(interpreter_->inputs()[1], {2, 3}),
kTfLiteOk);
// Since the custom memory previously allocated isn't enough,
// AllocateTensors() will fail.
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteError);
// Interpreter should no longer be in invokable state, so expect failure.
ASSERT_EQ(interpreter_->Invoke(), kTfLiteError);
}
TEST_F(TestCustomAllocation, ResizeTensorsWithEnoughMemory) {
// Set custom allocations for all input tensors, with double the required
// memory.
const TfLiteTensor* input0_tensor =
interpreter_->tensor(interpreter_->inputs()[0]);
auto input0_alloc =
NewCustomAlloc(2 * input0_tensor->bytes, kDefaultTensorAlignment);
ASSERT_EQ(interpreter_->SetCustomAllocationForTensor(
interpreter_->inputs()[0], input0_alloc),
kTfLiteOk);
const TfLiteTensor* input1_tensor =
interpreter_->tensor(interpreter_->inputs()[1]);
auto input1_alloc =
NewCustomAlloc(2 * input1_tensor->bytes, kDefaultTensorAlignment);
ASSERT_EQ(interpreter_->SetCustomAllocationForTensor(
interpreter_->inputs()[1], input1_alloc),
kTfLiteOk);
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
// Now resize tensors to double the size.
ASSERT_EQ(interpreter_->ResizeInputTensor(interpreter_->inputs()[0], {6, 1}),
kTfLiteOk);
ASSERT_EQ(interpreter_->ResizeInputTensor(interpreter_->inputs()[1], {6, 1}),
kTfLiteOk);
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
std::vector<float> input = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f};
std::vector<float> expected_output = {2.0f, 4.0f, 6.0f, 8.0f, 10.0f, 12.0f};
TfLiteTensor* tensor = interpreter_->tensor(interpreter_->outputs()[0]);
memcpy(interpreter_->typed_tensor<float>(0), input.data(), 6 * sizeof(float));
memcpy(interpreter_->typed_tensor<float>(1), input.data(), 6 * sizeof(float));
ASSERT_EQ(interpreter_->Invoke(), kTfLiteOk);
for (int i = 0; i < 6; ++i) {
EXPECT_EQ(tensor->data.f[i], expected_output[i]) << i;
}
ASSERT_EQ(interpreter_->ResizeInputTensor(interpreter_->inputs()[0], {3, 1}),
kTfLiteOk);
ASSERT_EQ(interpreter_->ResizeInputTensor(interpreter_->inputs()[1], {3, 1}),
kTfLiteOk);
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
VerifyInvoke();
}
} // namespace | 506 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is error-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
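To make the `Verifier`-style workaround described in the advisory text above concrete, here is a minimal, hypothetical allow-list check. It deliberately does not use the real TFLite flatbuffer API: `OperatorIndices`, `input_may_be_optional`, and `VerifyTensorIndices` are illustrative names only, and the real schema types differ.

// Hypothetical sketch of the allow-list style check suggested by the advisory.
// None of these types come from the actual TFLite flatbuffer schema.
#include <cstdint>
#include <functional>
#include <vector>

constexpr int32_t kOptionalTensor = -1;  // TFLite's "tensor is absent" marker.

// Simplified stand-in for an operator's tensor references inside a subgraph.
struct OperatorIndices {
  std::vector<int32_t> inputs;   // Indices into the subgraph's tensor array.
  std::vector<int32_t> outputs;
};

// Returns false if any index is out of range, or if -1 appears at an input
// position the operator does not treat as optional. Outputs may never be -1.
bool VerifyTensorIndices(const OperatorIndices& op, int num_tensors,
                         const std::function<bool(int)>& input_may_be_optional) {
  for (int i = 0; i < static_cast<int>(op.inputs.size()); ++i) {
    const int32_t idx = op.inputs[i];
    if (idx == kOptionalTensor) {
      if (!input_may_be_optional(i)) return false;  // -1 not allowed here.
    } else if (idx < 0 || idx >= num_tensors) {
      return false;  // Would index outside the subgraph's tensor array.
    }
  }
  for (const int32_t idx : op.outputs) {
    if (idx < 0 || idx >= num_tensors) return false;  // Covers idx == -1 too.
  }
  return true;
}

A check of this shape would run once per operator at model-load time; as the advisory notes, maintaining such a per-operator allow-list is error-prone, so upgrading to a patched release remains the preferred fix.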
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
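As a rough illustration of the pattern described above (a sketch, not the literal diff), each tensor lookup in a kernel can be followed by an explicit `nullptr` check; the tensor index 0 and the bare `Prepare` signature below are placeholders rather than code from the commit:

// Illustrative only: shows the shape of the inserted checks for one
// hypothetical kernel Prepare function; real call sites and indices vary.
TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {
  const TfLiteTensor* input = GetInput(context, node, 0);
  TF_LITE_ENSURE(context, input != nullptr);  // Report an error instead of dereferencing.
  TfLiteTensor* output = GetOutput(context, node, 0);
  TF_LITE_ENSURE(context, output != nullptr);
  return context->ResizeTensor(context, output, TfLiteIntArrayCopy(input->dims));
}

The same guard applies to `GetTemporary` and `GetIntermediates`, and, where the result is unconditionally dereferenced, to `GetVariableInput` and `GetOptionalInputTensor`.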
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::TEST | tflite::TEST( BasicInterpreter , CheckResize) | ['BasicInterpreter', 'CheckResize'] | TEST(BasicInterpreter, CheckResize) {
const float floats[] = {-3., -4.};
const int32_t int32s[] = {-3, -4};
const uint8_t uint8s[] = {3, 4};
const int64_t int64s[] = {6, -7};
const int16_t int16s[] = {8, -9};
const Eigen::half float16s[] = {Eigen::half_impl::float_to_half_rtne(-3.f),
Eigen::half_impl::float_to_half_rtne(-4.f)};
struct {
TfLiteType type;
size_t size;
const char* array;
} cases[] = {
{kTfLiteFloat32, sizeof(float), reinterpret_cast<const char*>(floats)},
{kTfLiteInt32, sizeof(int32_t), reinterpret_cast<const char*>(int32s)},
{kTfLiteUInt8, sizeof(uint8_t), reinterpret_cast<const char*>(uint8s)},
{kTfLiteInt64, sizeof(int64_t), reinterpret_cast<const char*>(int64s)},
{kTfLiteInt16, sizeof(int16_t), reinterpret_cast<const char*>(int16s)},
{kTfLiteFloat16, sizeof(TfLiteFloat16),
reinterpret_cast<const char*>(float16s)},
};
for (auto test : cases) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(2), kTfLiteOk);
interpreter.SetInputs({0, 1});
interpreter.SetOutputs({});
TfLiteQuantizationParams quant;
ASSERT_EQ(
interpreter.SetTensorParametersReadWrite(0, test.type, "", {3}, quant),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadOnly(
1, test.type, "", {2}, quant, test.array, 2 * test.size),
kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensor(0, {1, 2}), kTfLiteOk);
// Resizing an mmapped tensor is not allowed and should produce an error.
ASSERT_NE(interpreter.ResizeInputTensor(1, {3}), kTfLiteOk);
// Set the tensor to be mmapped but with a buffer size that is insufficient
// to match the dimensionality.
ASSERT_NE(interpreter.SetTensorParametersReadOnly(
1, test.type, "", {2}, quant, test.array, 1 * test.size),
kTfLiteOk);
// Allocating should work since we should have our last correct array
// values in place.
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
}
}
TEST(BasicInterpreter, CheckAlignment) {
struct {
TfLiteType type;
} cases[] = {{kTfLiteFloat32}, {kTfLiteInt32}, {kTfLiteUInt8},
{kTfLiteInt64}, {kTfLiteInt16}, {kTfLiteFloat16}};
for (auto test : cases) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(4), kTfLiteOk);
for (int i = 0; i < 4; i++) {
TfLiteQuantizationParams quant;
interpreter.SetTensorParametersReadWrite(i, test.type, "", {2 * i + 1},
quant);
}
interpreter.AllocateTensors();
for (int i = 0; i < 4; i++) {
const TfLiteTensor& tensor = *interpreter.tensor(i);
ASSERT_EQ(reinterpret_cast<intptr_t>(tensor.data.raw) % 4, 0);
}
}
}
TEST(BasicInterpreter, CheckArenaAllocation) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(10), kTfLiteOk);
TfLiteQuantizationParams quant;
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
std::vector<int> sizes{2048, 4096, 1023, 2047, 1021,
2047, 1023, 2046, 0, 2048};
for (size_t i = 0; i < sizes.size(); ++i) {
interpreter.SetTensorParametersReadWrite(static_cast<int>(i), kTfLiteUInt8,
"", {sizes[i]}, quant);
}
interpreter.SetInputs({0, 1});
interpreter.SetOutputs({9, 4});
interpreter.AddNodeWithParameters({0, 1}, {2, 3}, nullptr, 0, nullptr, ®);
interpreter.AddNodeWithParameters({2, 1}, {4, 5}, nullptr, 0, nullptr, ®);
interpreter.AddNodeWithParameters({4, 3}, {6, 7}, nullptr, 0, nullptr, ®);
interpreter.AddNodeWithParameters({6, 5}, {8}, nullptr, 0, nullptr, ®);
interpreter.AddNodeWithParameters({8, 7}, {9}, nullptr, 0, nullptr, ®);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_LT(interpreter.tensor(0)->data.raw, interpreter.tensor(1)->data.raw);
ASSERT_LT(interpreter.tensor(1)->data.raw, interpreter.tensor(3)->data.raw);
ASSERT_EQ(interpreter.tensor(3)->data.raw, interpreter.tensor(9)->data.raw);
ASSERT_LT(interpreter.tensor(3)->data.raw, interpreter.tensor(5)->data.raw);
ASSERT_LT(interpreter.tensor(5)->data.raw, interpreter.tensor(2)->data.raw);
ASSERT_EQ(interpreter.tensor(2)->data.raw, interpreter.tensor(7)->data.raw);
ASSERT_LT(interpreter.tensor(2)->data.raw, interpreter.tensor(4)->data.raw);
// #4 is the one with the largest pointer.
ASSERT_EQ(interpreter.tensor(8)->data.raw, nullptr);
}
TEST(BasicInterpreter, BufferAccess) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(1), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "", {3}, TfLiteQuantizationParams()),
kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
// Verify we get a valid pointer.
ASSERT_NE(interpreter.typed_tensor<float>(0), nullptr);
// Verify incorrect pointer is not returned.
ASSERT_EQ(interpreter.typed_tensor<int>(0), nullptr);
// Verify that raw c interface ptr matches safe interface.
ASSERT_EQ(interpreter.typed_tensor<float>(0), interpreter.tensor(0)->data.f);
}
TEST(BasicInterpreter, NoOpInterpreter) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(1), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "", {3}, TfLiteQuantizationParams()),
kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensor(interpreter.inputs()[0], {1, 2, 3}),
kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
}
TEST(BasicInterpreter, RedundantAllocateTensors) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(1), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "", {3}, TfLiteQuantizationParams()),
kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
const auto data_raw = interpreter.tensor(0)->data.raw;
ASSERT_NE(data_raw, nullptr);
// A redundant allocation request should have no impact.
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.tensor(0)->data.raw, data_raw);
}
TEST(BasicInterpreter, RedundantAllocateTensorsWithDynamicInputs) {
Interpreter interpreter;
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
ASSERT_EQ(interpreter.AddTensors(2), kTfLiteOk);
interpreter.SetInputs({0});
interpreter.SetOutputs({1});
interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr, ®);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "", {3}, TfLiteQuantizationParams()),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
1, kTfLiteFloat32, "", {3}, TfLiteQuantizationParams()),
kTfLiteOk);
// Configure the input tensor as dynamic.
interpreter.tensor(0)->data.raw = nullptr;
interpreter.tensor(0)->allocation_type = kTfLiteDynamic;
ASSERT_EQ(interpreter.ResizeInputTensor(interpreter.inputs()[0], {1, 2, 3}),
kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_NE(interpreter.tensor(1)->data.raw, nullptr);
// Reset the output tensor's buffer.
interpreter.tensor(1)->data.raw = nullptr;
// A redundant allocation request should be honored, as the input tensor
// was marked dynamic.
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_NE(interpreter.tensor(1)->data.raw, nullptr);
}
TEST(BasicInterpreter, ResizingTensors) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(1), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "", {3}, TfLiteQuantizationParams()),
kTfLiteOk);
int t = interpreter.inputs()[0];
TfLiteTensor* tensor = interpreter.tensor(t);
ASSERT_EQ(interpreter.ResizeInputTensor(t, {1, 2, 3}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 6 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
tensor->data.f[5] = 0.123f;
// Changing from kTfLiteArenaRw to kTfLiteDynamic is quite complicated: we need
// to unset data.raw, otherwise Realloc will try to free that memory.
tensor->data.raw = nullptr;
tensor->allocation_type = kTfLiteDynamic;
ASSERT_EQ(interpreter.ResizeInputTensor(t, {1, 2, 4}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 8 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensor(t, {}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 1 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensor(t, {0}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 0);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensor(t, {1, 2, 0}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 0);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
// TODO(ahentz): We shouldn't have to force reallocation, but
// ResizeInputTensor doesn't realloc dynamic tensors. Also note that
// TfLiteTensorRealloc(tensor->bytes, tensor) is a no-op.
TfLiteTensorRealloc(9 * sizeof(float), tensor);
tensor->data.f[7] = 0.123f;
ASSERT_EQ(interpreter.ResizeInputTensor(t, {2, 2, 4}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 16 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
// TODO(ahentz): We shouldn't have to force reallocation, but
// ResizeInputTensor doesn't realloc dynamic tensors. Also note that
// TfLiteTensorRealloc(tensor->bytes, tensor) is a no-op.
TfLiteTensorRealloc(17 * sizeof(float), tensor);
tensor->data.f[15] = 0.123f;
}
TEST(BasicInterpreter, NoopResizingTensors) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(1), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "", {3}, TfLiteQuantizationParams()),
kTfLiteOk);
int t = interpreter.inputs()[0];
TfLiteTensor* tensor = interpreter.tensor(t);
ASSERT_EQ(interpreter.ResizeInputTensor(t, {1, 2, 3}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 6 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
tensor->data.f[5] = 0.123f;
// Resizing to the same size should not trigger re-allocation.
ASSERT_EQ(interpreter.ResizeInputTensor(t, {1, 2, 3}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 6 * sizeof(float));
ASSERT_NE(tensor->data.raw, nullptr);
ASSERT_EQ(tensor->data.f[5], 0.123f);
// Explicitly allocating should be a no-op, as no resize was performed.
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 6 * sizeof(float));
ASSERT_NE(tensor->data.raw, nullptr);
ASSERT_EQ(tensor->data.f[5], 0.123f);
}
TEST(BasicInterpreter, ResizingTensorsStrictInvalid) {
// Tests ResizeInputTensorStrict where `dims_signature` is not specified.
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(1), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "", {1, 1, 3}, TfLiteQuantizationParams()),
kTfLiteOk);
int t = interpreter.inputs()[0];
TfLiteTensor* tensor = interpreter.tensor(t);
ASSERT_EQ(interpreter.ResizeInputTensorStrict(t, {1, 1, 3}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 3 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
// Invalid because `dims_signature` is not specified.
ASSERT_EQ(interpreter.ResizeInputTensorStrict(t, {1, 2, 3}), kTfLiteError);
EXPECT_EQ(tensor->bytes, 3 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
// Assert that ResizeInputTensor works for this value.
ASSERT_EQ(interpreter.ResizeInputTensor(t, {1, 2, 3}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 6 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
}
TEST(BasicInterpreter, ResizingTensorsStrict) {
// Tests ResizeInputTensorStrict where `dims_signature` is specified.
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(1), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({0}), kTfLiteOk);
std::vector<int> dims_signature = {-1, -1, 3};
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "", {1, 1, 3}, TfLiteQuantizationParams(),
false, &dims_signature),
kTfLiteOk);
int t = interpreter.inputs()[0];
TfLiteTensor* tensor = interpreter.tensor(t);
ASSERT_EQ(interpreter.ResizeInputTensorStrict(t, {1, 2, 3}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 6 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensorStrict(t, {1, 2, 4}), kTfLiteError);
EXPECT_EQ(tensor->bytes, 6 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
// Assert that ResizeInputTensor works for this value.
ASSERT_EQ(interpreter.ResizeInputTensor(t, {1, 2, 4}), kTfLiteOk);
EXPECT_EQ(tensor->bytes, 8 * sizeof(float));
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
}
// Simple op that does input = output.
TfLiteRegistration GetPassthroughOpRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
reg.init = [](TfLiteContext* context, const char*, size_t) -> void* {
auto* first_new_tensor = new int;
context->AddTensors(context, 2, first_new_tensor);
return first_new_tensor;
};
reg.free = [](TfLiteContext* context, void* buffer) {
delete static_cast<int*>(buffer);
};
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
auto* first_new_tensor = static_cast<int*>(node->user_data);
const TfLiteTensor* tensor0 = GetInput(context, node, 0);
TfLiteTensor* tensor1 = GetOutput(context, node, 0);
TfLiteIntArray* newSize = TfLiteIntArrayCopy(tensor0->dims);
TF_LITE_ENSURE_STATUS(context->ResizeTensor(context, tensor1, newSize));
TfLiteIntArrayFree(node->temporaries);
node->temporaries = TfLiteIntArrayCreate(2);
for (int i = 0; i < 2; ++i) {
node->temporaries->data[i] = *(first_new_tensor) + i;
}
auto setup_temporary = [&](int id) {
TfLiteTensor* tmp = &context->tensors[id];
tmp->type = kTfLiteFloat32;
tmp->allocation_type = kTfLiteArenaRw;
return context->ResizeTensor(context, tmp,
TfLiteIntArrayCopy(tensor0->dims));
};
TF_LITE_ENSURE_STATUS(setup_temporary(node->temporaries->data[0]));
TF_LITE_ENSURE_STATUS(setup_temporary(node->temporaries->data[1]));
return kTfLiteOk;
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* a0 = GetInput(context, node, 0);
auto populate = [&](int id) {
TfLiteTensor* t = &context->tensors[id];
int num = a0->dims->data[0];
for (int i = 0; i < num; i++) {
t->data.f[i] = a0->data.f[i];
}
};
populate(node->outputs->data[0]);
populate(node->temporaries->data[0]);
populate(node->temporaries->data[1]);
return kTfLiteOk;
};
return reg;
}
TEST(BasicInterpreter, OneOpInterpreter) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(2), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({1}), kTfLiteOk);
TfLiteQuantizationParams quantized;
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(0, kTfLiteFloat32, "in1",
{3}, quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(1, kTfLiteFloat32, "out0",
{3}, quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.GetInputName(0), "in1");
ASSERT_EQ(interpreter.GetOutputName(0), "out0");
TfLiteRegistration reg = GetPassthroughOpRegistration();
ASSERT_EQ(
interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr, ®),
kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensor(0, {3}), kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
}
TEST(BasicInterpreter, ReleaseNonPersistentMemory) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(2), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({1}), kTfLiteOk);
TfLiteQuantizationParams quantized;
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(0, kTfLiteFloat32, "in1",
{3}, quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(1, kTfLiteFloat32, "out0",
{3}, quantized),
kTfLiteOk);
TfLiteRegistration reg = GetPassthroughOpRegistration();
ASSERT_EQ(
interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr, ®),
kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensor(0, {3}), kTfLiteOk);
// AllocateTensors() hasn't been called yet, so this should be a no-op.
ASSERT_EQ(interpreter.ReleaseNonPersistentMemory(), kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
ASSERT_EQ(interpreter.ReleaseNonPersistentMemory(), kTfLiteOk);
// Invoke() now fails because non-persistent arenas have been released.
ASSERT_NE(interpreter.Invoke(), kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
// Resizing an input tensor just after ReleaseNonPersistentMemory should also
// require AllocateTensors, without causing any unexpected crashes.
ASSERT_EQ(interpreter.ReleaseNonPersistentMemory(), kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensor(0, {4}), kTfLiteOk);
ASSERT_NE(interpreter.Invoke(), kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
}
// Forcefully divides tensor allocation in three steps: one before invocation
// and two more at invocation time. This happens because we use string tensors
// and their sizes can't be determined until invocation time.
TEST(BasicInterpreter, ThreeStepAllocate) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(5), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({4}), kTfLiteOk);
TfLiteQuantizationParams quantized;
// String tensor with one string of length 3
union {
char raw_bytes[15];
struct {
int32_t num_strs;
int32_t offsets[2];
char str_data[3];
} tensor_data;
} data;
data.tensor_data = {1, {12, 15}, {'A', 'B', 'C'}};
// Read-only string tensor.
ASSERT_EQ(interpreter.SetTensorParametersReadOnly(0, kTfLiteString, "", {1},
quantized, data.raw_bytes,
sizeof(data.raw_bytes)),
kTfLiteOk);
// Read-write string tensor.
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(1, kTfLiteString, "", {1},
quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(2, kTfLiteInt32, "", {1},
quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(3, kTfLiteString, "", {1},
quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(4, kTfLiteInt32, "", {1},
quantized),
kTfLiteOk);
// String-in String-out node.
TfLiteRegistration reg_copy = {nullptr, nullptr, nullptr, nullptr};
reg_copy.invoke = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input = GetInput(context, node, 0);
TfLiteTensor* output = GetOutput(context, node, 0);
DynamicBuffer buf;
StringRef str_ref = GetString(input, 0);
buf.AddString(str_ref);
buf.WriteToTensorAsVector(output);
return kTfLiteOk;
};
// String-in Int-out node.
TfLiteRegistration reg_len = {nullptr, nullptr, nullptr, nullptr};
reg_len.prepare = [](TfLiteContext* context, TfLiteNode* node) {
TfLiteTensor* output = GetOutput(context, node, 0);
TfLiteIntArray* outputSize = TfLiteIntArrayCreate(1);
outputSize->data[0] = 1;
return context->ResizeTensor(context, output, outputSize);
};
reg_len.invoke = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* a0 = GetInput(context, node, 0);
TfLiteTensor* a1 = GetOutput(context, node, 0);
a1->data.i32[0] = a0->bytes;
return kTfLiteOk;
};
ASSERT_EQ(interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr,
®_copy),
kTfLiteOk);
ASSERT_EQ(interpreter.AddNodeWithParameters({1}, {2}, nullptr, 0, nullptr,
®_len),
kTfLiteOk);
ASSERT_EQ(interpreter.AddNodeWithParameters({0}, {3}, nullptr, 0, nullptr,
®_copy),
kTfLiteOk);
ASSERT_EQ(interpreter.AddNodeWithParameters({3}, {4}, nullptr, 0, nullptr,
®_len),
kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
ASSERT_EQ(interpreter.tensor(0)->bytes, 15);
ASSERT_NE(interpreter.tensor(0)->data.raw, nullptr);
ASSERT_EQ(interpreter.tensor(1)->bytes, 15);
ASSERT_NE(interpreter.tensor(1)->data.raw, nullptr);
ASSERT_EQ(interpreter.tensor(3)->bytes, 15);
ASSERT_NE(interpreter.tensor(4)->data.raw, nullptr);
ASSERT_EQ(interpreter.tensor(2)->bytes, 4);
ASSERT_EQ(interpreter.tensor(2)->data.i32[0], 15);
ASSERT_EQ(interpreter.tensor(4)->bytes, 4);
ASSERT_EQ(interpreter.tensor(4)->data.i32[0], 15);
}
TEST(BasicInterpreter, AllocateTwice) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(2), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({1}), kTfLiteOk);
TfLiteQuantizationParams quantized;
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(0, kTfLiteFloat32, "", {3},
quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(1, kTfLiteFloat32, "", {3},
quantized),
kTfLiteOk);
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* tensor0 = GetInput(context, node, 0);
TfLiteTensor* tensor1 = GetOutput(context, node, 0);
TfLiteIntArray* newSize = TfLiteIntArrayCopy(tensor0->dims);
return context->ResizeTensor(context, tensor1, newSize);
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* a0 = GetInput(context, node, 0);
TfLiteTensor* a1 = GetOutput(context, node, 0);
int num = a0->dims->data[0];
for (int i = 0; i < num; i++) {
a1->data.f[i] = a0->data.f[i];
}
return kTfLiteOk;
};
ASSERT_EQ(
interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr, ®),
kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensor(0, {3}), kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
char* old_tensor0_ptr = interpreter.tensor(0)->data.raw;
char* old_tensor1_ptr = interpreter.tensor(1)->data.raw;
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
ASSERT_EQ(old_tensor0_ptr, interpreter.tensor(0)->data.raw);
ASSERT_EQ(old_tensor1_ptr, interpreter.tensor(1)->data.raw);
}
TEST(BasicInterpreter, TestNullErrorReporter) {
TestErrorReporter reporter;
Interpreter interpreter;
}
TEST(BasicInterpreter, TestCustomErrorReporter) {
TestErrorReporter reporter;
Interpreter interpreter(&reporter);
ASSERT_NE(interpreter.Invoke(), kTfLiteOk);
ASSERT_EQ(reporter.error_messages(),
"Invoke called on model that is not ready.");
ASSERT_EQ(reporter.num_calls(), 1);
}
TEST(BasicInterpreter, TestOverflow) {
TestErrorReporter reporter;
Interpreter interpreter(&reporter);
TfLiteQuantizationParams quantized;
ASSERT_EQ(interpreter.AddTensors(1), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({0}), kTfLiteOk);
// Overflow testing is pointer word size dependent.
if (sizeof(size_t) == 8) {
// #bits for bytecount = 30 + 30 + 2 = 62 < 64
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "in1", {1 << 30, 1 << 30}, quantized),
kTfLiteOk);
// #bits for element count = 30 + 30 + 2 = 62 < 64 (no overflow)
// #bits for byte count = 30 + 30 + 2 + 2 = 64 == 64 (overflow)
ASSERT_NE(
interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "in1", {1 << 30, 1 << 30, 1 << 2}, quantized),
kTfLiteOk);
EXPECT_THAT(
reporter.error_messages(),
testing::EndsWith("BytesRequired number of bytes overflowed.\n"));
// #bits for element count = 30 + 30 + 2 + 4 = 66 > 64 (overflow).
// #bits for byte count = 30 + 30 + 2 + 4 + 2 = 68 > 64 (overflow).
reporter.Reset();
ASSERT_NE(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "in1", {1 << 30, 1 << 30, 1 << 2, 1 << 4},
quantized),
kTfLiteOk);
EXPECT_THAT(
reporter.error_messages(),
testing::EndsWith("BytesRequired number of elements overflowed.\n"));
} else if (sizeof(size_t) == 4) {
// #bits for bytecount = 14 + 14 + 2 = 30 < 32
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "in1", {1 << 14, 1 << 14}, quantized),
kTfLiteOk);
// #bits for element count = 14 + 14 + 3 = 31 < 32 (no overflow).
// #bits for byte count = 14 + 14 + 3 + 2 = 33 > 32 (overflow).
ASSERT_NE(
interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "in1", {1 << 14, 1 << 14, 1 << 3}, quantized),
kTfLiteOk);
EXPECT_THAT(
reporter.error_messages(),
testing::EndsWith("BytesRequired number of bytes overflowed.\n"));
// #bits for element count = 14 + 14 + 4 = 32 == 32 (overflow).
// byte count also overflows, but we don't get to that check.
reporter.Reset();
ASSERT_NE(
interpreter.SetTensorParametersReadWrite(
0, kTfLiteFloat32, "in1", {1 << 14, 1 << 14, 1 << 4}, quantized),
kTfLiteOk);
EXPECT_THAT(
reporter.error_messages(),
testing::EndsWith("BytesRequired number of elements overflowed.\n"));
} else {
// This test failing means that we are using a non 32/64 bit architecture.
ASSERT_TRUE(false);
}
}
TEST(BasicInterpreter, TestUseNNAPI) {
TestErrorReporter reporter;
Interpreter interpreter(&reporter);
interpreter.UseNNAPI(true);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
interpreter.UseNNAPI(false);
ASSERT_EQ(reporter.error_messages(),
"Attempting to disable NNAPI delegate after it's applied.");
}
TEST(BasicInterpreter, TestUnsupportedDelegateFunctions) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(2), kTfLiteOk);
TfLiteRegistration registration = {nullptr, nullptr, nullptr, nullptr};
// These functions are only supported inside Delegate's Prepare function.
// The test verifies that these functions return `kTfLiteError` rather than
// `kTfLiteOk`, and that they do not crash.
registration.prepare = [](TfLiteContext* context, TfLiteNode* node) {
{
TfLiteIntArray* execution_plan;
EXPECT_EQ(context->GetExecutionPlan(context, &execution_plan),
kTfLiteError);
}
{
TfLiteNode* node;
TfLiteRegistration* registration;
EXPECT_EQ(
context->GetNodeAndRegistration(context, 0, &node, ®istration),
kTfLiteError);
}
{
TfLiteRegistration delegate_registration = {nullptr, nullptr, nullptr,
nullptr};
TfLiteIntArray nodes_to_replace;
nodes_to_replace.size = 0;
EXPECT_EQ(context->ReplaceNodeSubsetsWithDelegateKernels(
context, delegate_registration, &nodes_to_replace, nullptr),
kTfLiteError);
}
return kTfLiteError;
};
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr,
®istration),
kTfLiteOk);
EXPECT_EQ(interpreter.AllocateTensors(), kTfLiteError);
}
TEST(BasicInterpreter, DynamicTensorsResizeDescendants) {
// Assemble a graph with a node that has dynamically sized output (via the
// pad op), followed by a node with a standard element-wise op (negate).
Interpreter interpreter;
interpreter.AddTensors(4);
interpreter.SetInputs({0, 1});
interpreter.SetOutputs({3});
TfLiteQuantizationParams quant;
interpreter.SetTensorParametersReadWrite(0, kTfLiteFloat32, "", {2, 2, 1, 1},
quant);
interpreter.SetTensorParametersReadWrite(1, kTfLiteInt32, "", {4, 2}, quant);
interpreter.SetTensorParametersReadWrite(2, kTfLiteFloat32, "", {}, quant);
interpreter.SetTensorParametersReadWrite(3, kTfLiteFloat32, "", {}, quant);
TfLiteRegistration* pad_op = tflite::ops::builtin::Register_PADV2();
TfLiteRegistration* neg_op = tflite::ops::builtin::Register_NEG();
interpreter.AddNodeWithParameters({0, 1}, {2}, nullptr, 0, nullptr, pad_op);
interpreter.AddNodeWithParameters({2}, {3}, nullptr, 0, nullptr, neg_op);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
// Configure [[2,2],[4,4]] padding and execute the graph.
interpreter.typed_tensor<int>(1)[0] = 2;
interpreter.typed_tensor<int>(1)[1] = 2;
interpreter.typed_tensor<int>(1)[2] = 2;
interpreter.typed_tensor<int>(1)[3] = 2;
interpreter.typed_tensor<int>(1)[4] = 0;
interpreter.typed_tensor<int>(1)[5] = 0;
interpreter.typed_tensor<int>(1)[6] = 0;
interpreter.typed_tensor<int>(1)[7] = 0;
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
// Both the output and intermediate tensor sizes should reflect the output
// from the dynamic pad operation.
ASSERT_EQ(interpreter.tensor(2)->bytes, sizeof(float) * 6 * 6);
ASSERT_EQ(interpreter.tensor(3)->bytes, sizeof(float) * 6 * 6);
// Now configure [[4,4],[6,6]] padding and execute the graph.
interpreter.typed_tensor<int>(1)[0] = 4;
interpreter.typed_tensor<int>(1)[1] = 4;
interpreter.typed_tensor<int>(1)[2] = 6;
interpreter.typed_tensor<int>(1)[3] = 6;
interpreter.typed_tensor<int>(1)[4] = 0;
interpreter.typed_tensor<int>(1)[5] = 0;
interpreter.typed_tensor<int>(1)[6] = 0;
interpreter.typed_tensor<int>(1)[7] = 0;
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
// Again, the output and intermediate tensor sizes should reflect the *new*
// resize from the latest pad operation.
ASSERT_EQ(interpreter.tensor(2)->bytes, sizeof(float) * 10 * 14);
ASSERT_EQ(interpreter.tensor(3)->bytes, sizeof(float) * 10 * 14);
}
TEST(InterpreterTensorsCapacityTest, TestWithinHeadroom) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(Interpreter::kTensorsReservedCapacity),
kTfLiteOk);
TfLiteRegistration registration = {nullptr, nullptr, nullptr, nullptr};
registration.prepare = [](TfLiteContext* context, TfLiteNode* node) {
TfLiteTensor* first_tensor = context->tensors;
int new_tensor_index;
context->AddTensors(context, Interpreter::kTensorsCapacityHeadroom,
&new_tensor_index);
EXPECT_EQ(first_tensor, context->tensors);
return kTfLiteOk;
};
ASSERT_EQ(interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr,
®istration),
kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
}
TEST(InterpreterTensorsCapacityTest, TestExceedHeadroom) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(Interpreter::kTensorsReservedCapacity),
kTfLiteOk);
TfLiteRegistration registration = {nullptr, nullptr, nullptr, nullptr};
registration.prepare = [](TfLiteContext* context, TfLiteNode* node) {
TfLiteTensor* first_tensor = context->tensors;
int new_tensor_index;
// Add enough tensors to trigger buffer re-allocation.
context->AddTensors(
context,
(context->tensors_size + Interpreter::kTensorsCapacityHeadroom + 1) * 2,
&new_tensor_index);
EXPECT_NE(first_tensor, context->tensors);
return kTfLiteOk;
};
ASSERT_EQ(interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr,
®istration),
kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
}
struct TestExternalContext : public TfLiteExternalContext {
static constexpr TfLiteExternalContextType kType = kTfLiteGemmLowpContext;
static TestExternalContext* Get(TfLiteContext* context) {
return reinterpret_cast<TestExternalContext*>(
context->GetExternalContext(context, kType));
}
static void Set(TfLiteContext* context, TestExternalContext* value) {
context->SetExternalContext(context, kType, value);
}
int num_refreshes = 0;
};
TEST_F(InterpreterTest, GetSetResetExternalContexts) {
auto* context = GetInterpreterContext();
TestExternalContext external_context;
external_context.Refresh = [](TfLiteContext* context) {
auto* ptr = TestExternalContext::Get(context);
if (ptr != nullptr) {
++ptr->num_refreshes;
}
return kTfLiteOk;
};
EXPECT_EQ(TestExternalContext::Get(context), nullptr);
ASSERT_EQ(interpreter_.SetNumThreads(4), kTfLiteOk);
TestExternalContext::Set(context, &external_context);
EXPECT_EQ(TestExternalContext::Get(context), &external_context);
ASSERT_EQ(interpreter_.SetNumThreads(4), kTfLiteOk);
ASSERT_EQ(interpreter_.SetNumThreads(5), kTfLiteOk);
EXPECT_EQ(external_context.num_refreshes, 2);
// Reset refresh count to 0
external_context.num_refreshes = 0;
// Below should not call external context refresh
ASSERT_EQ(interpreter_.SetNumThreads(-2), kTfLiteError);
EXPECT_EQ(external_context.num_refreshes, 0);
ASSERT_EQ(interpreter_.SetNumThreads(-1), kTfLiteOk);
EXPECT_EQ(external_context.num_refreshes, 1);
TestExternalContext::Set(context, nullptr);
EXPECT_EQ(TestExternalContext::Get(context), nullptr);
ASSERT_EQ(interpreter_.SetNumThreads(4), kTfLiteOk);
}
struct TestCpuBackendContext : public TfLiteInternalBackendContext {
// Count the number of calls to ClearCaches for the backend context.
void ClearCaches() override { ++num_calls; }
void SetMaxNumThreads(int num_threads) override {}
int num_calls = 0;
};
TEST_F(InterpreterTest, ExternalBackendContextClearsCachesOnDelete) {
ExternalCpuBackendContext external_cpu_context;
TestCpuBackendContext* cpu_backend_context = new TestCpuBackendContext();
external_cpu_context.set_internal_backend_context(
std::unique_ptr<TfLiteInternalBackendContext>(cpu_backend_context));
{
// Create an interpreter with an external Cpu backend context and ensure
// it goes out of scope.
Interpreter interpreter;
interpreter.SetExternalContext(kTfLiteCpuBackendContext,
&external_cpu_context);
EXPECT_EQ(cpu_backend_context->num_calls, 0);
}
EXPECT_EQ(cpu_backend_context->num_calls, 1);
}
// Test fixture that allows playing with execution plans. It creates a two
// node graph that can be executed in either [0,1] order or [1,0] order.
// The CopyOp records when it is invoked in the class member run_order_
// so we can test whether the execution plan was honored.
class TestExecutionPlan : public ::testing::Test {
// Encapsulates the node ids and provides them to a C primitive data type
// Allocatable with placement new, but never destructed, so make sure this
// doesn't own any heap allocated data. This is then used as op local
// data to allow access to the test fixture data.
class CallReporting {
public:
CallReporting(int node_id, std::vector<int>* run_order)
: node_id_(node_id), run_order_(run_order) {}
void Record() { run_order_->push_back(node_id_); }
private:
// The node id for this particular node
int node_id_;
// A pointer to the global run-order
std::vector<int>* run_order_;
};
// Build a kernel registration for an op that copies its one input
// to an output
TfLiteRegistration CopyOpRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
// Set output size to input size
const TfLiteTensor* tensor0 = GetInput(context, node, 0);
TfLiteTensor* tensor1 = GetOutput(context, node, 0);
TfLiteIntArray* newSize = TfLiteIntArrayCopy(tensor0->dims);
return context->ResizeTensor(context, tensor1, newSize);
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
CallReporting* call_reporting =
static_cast<CallReporting*>(node->builtin_data);
// Copy input data to output data.
const TfLiteTensor* a0 = GetInput(context, node, 0);
TfLiteTensor* a1 = GetOutput(context, node, 0);
int num = a0->dims->data[0];
for (int i = 0; i < num; i++) {
a1->data.f[i] = a0->data.f[i];
}
call_reporting->Record();
return kTfLiteOk;
};
return reg;
}
  // Adds a copy node going from tensor `input` to output tensor `output`.
  // Note that `input` is also used as the node_id, and run_order_ is injected
  // as op-accessible data. This is a slightly unusual way to do it, but it
  // relies on op functionality to avoid static global variables.
void MakeCopyNode(int input, int output) {
// Ownership of call_reporting is taken by interpreter (malloc is used due
// to nodes being a C99 interface so free() is used).
TfLiteRegistration copy_op = CopyOpRegistration();
CallReporting* call_reporting_1 =
static_cast<CallReporting*>(malloc(sizeof(CallReporting)));
new (call_reporting_1) CallReporting(input, &run_order_);
ASSERT_EQ(interpreter_.AddNodeWithParameters(
{0}, {2}, nullptr, 0, static_cast<void*>(call_reporting_1),
©_op),
kTfLiteOk);
ASSERT_EQ(interpreter_.ResizeInputTensor(input, {3}), kTfLiteOk);
}
void SetUp() final {
// Add two inputs and two outputs that don't depend on each other
ASSERT_EQ(interpreter_.AddTensors(4), kTfLiteOk);
interpreter_.SetInputs({0, 1});
interpreter_.SetOutputs({2, 3});
TfLiteQuantizationParams quantized;
for (int tensor_index = 0; tensor_index < 4; tensor_index++) {
ASSERT_EQ(interpreter_.SetTensorParametersReadWrite(
tensor_index, kTfLiteFloat32, "", {3}, quantized),
kTfLiteOk);
}
// Define two copy functions that also use the user_data to report that
// they were called.
// i.e. tensor[2] = copy(tensor[0]); tensor[3] = copy(tensor[1]);
// thus we can reorder the two nodes arbitrary and still satisfy dependency
// order.
MakeCopyNode(0, 2);
MakeCopyNode(1, 3);
ASSERT_EQ(interpreter_.AllocateTensors(), kTfLiteOk);
}
protected:
Interpreter interpreter_;
// list of node_ids that were run
std::vector<int> run_order_;
};
TEST_F(TestExecutionPlan, DefaultExecutionPlan) {
// Check default order
ASSERT_EQ(interpreter_.Invoke(), kTfLiteOk);
ASSERT_EQ(run_order_, std::vector<int>({0, 1}));
}
TEST_F(TestExecutionPlan, ReversedExecutionPlan) {
// Check reversed order
interpreter_.SetExecutionPlan({1, 0});
ASSERT_EQ(interpreter_.Invoke(), kTfLiteOk);
ASSERT_EQ(run_order_, std::vector<int>({1, 0}));
}
TEST_F(TestExecutionPlan, SubsetExecutionPlan) {
// Check running only node index 1
interpreter_.SetExecutionPlan({1});
ASSERT_EQ(interpreter_.Invoke(), kTfLiteOk);
ASSERT_EQ(run_order_, std::vector<int>({1}));
}
TEST_F(TestExecutionPlan, NullExecutionPlan) {
// Check nothing executed.
interpreter_.SetExecutionPlan({});
ASSERT_EQ(interpreter_.Invoke(), kTfLiteOk);
ASSERT_EQ(run_order_, std::vector<int>());
}
TEST(TestDelegateOwnership, ProperlyDisposed) {
struct TfLiteInterpreterOwnedDelegate : public TfLiteDelegate {
TfLiteInterpreterOwnedDelegate(bool* destroyed, bool* prepared)
: destroyed(destroyed), prepared(prepared) {
flags = kTfLiteDelegateFlagsNone;
Prepare = [](TfLiteContext*, TfLiteDelegate* delegate) -> TfLiteStatus {
*static_cast<TfLiteInterpreterOwnedDelegate*>(delegate)->prepared =
true;
return kTfLiteOk;
};
}
~TfLiteInterpreterOwnedDelegate() { *destroyed = true; }
bool* destroyed;
bool* prepared;
};
// Construct a delegate with flags for indicating preparation/destruction.
bool destroyed = false;
bool prepared = false;
std::unique_ptr<TfLiteInterpreterOwnedDelegate> delegate(
new TfLiteInterpreterOwnedDelegate(&destroyed, &prepared));
{
// Create an interpreter and assemble a simple graph.
Interpreter interpreter;
TfLiteRegistration registration = {nullptr, nullptr, nullptr, nullptr};
ASSERT_EQ(interpreter.AddTensors(2), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({1}), kTfLiteOk);
ASSERT_EQ(interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr,
®istration),
kTfLiteOk);
// Pass delegate ownership to that interpreter.
ASSERT_EQ(InterpreterTest::ModifyGraphWithDelegate(&interpreter,
std::move(delegate)),
kTfLiteOk);
// The delegate should be prepared as normal, and should be preserved.
EXPECT_TRUE(prepared);
EXPECT_FALSE(destroyed);
// Interpreter interaction should not impact the delegate's validity.
interpreter.AllocateTensors();
interpreter.Invoke();
EXPECT_FALSE(destroyed);
}
// Only after the interpreter is destroyed should the delegate be destroyed.
EXPECT_TRUE(destroyed);
}
// CancellationData contains the data required to cancel a call to Invoke().
struct CancellationData {
bool is_cancelled = false;
};
// Indicates whether Invoke() has been cancelled based on the value of the
// CancellationData object passed in.
bool CheckCancellation(void* data) {
CancellationData* cancellation_data =
static_cast<struct CancellationData*>(data);
return cancellation_data->is_cancelled;
}
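// Kept at file scope so the capture-less op lambdas below and the test fixture
// can both reach the same cancellation flag.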
static struct CancellationData cancellation_data_;
// Test fixture to test cancellation within the Interpreter.
class CancellationTest : public ::testing::Test {
public:
TfLiteStatus Invoke() { return interpreter_.Invoke(); }
void Cancel() { cancellation_data_.is_cancelled = true; }
  // Adds a CancelOp with input tensor `input` and output tensor `output`.
void MakeCancelNode(int input, int output) {
TfLiteRegistration op = CancelOpRegistration();
ASSERT_EQ(interpreter_.AddNodeWithParameters({input}, {output}, nullptr, 0,
nullptr, &op),
kTfLiteOk);
ASSERT_EQ(interpreter_.ResizeInputTensor(input, {3}), kTfLiteOk);
}
// Adds an OkOp with input tensor `input` and output tensor `output`.
void MakeOkNode(int input, int output) {
TfLiteRegistration op = OkOpRegistration();
ASSERT_EQ(interpreter_.AddNodeWithParameters({input}, {output}, nullptr, 0,
nullptr, &op),
kTfLiteOk);
ASSERT_EQ(interpreter_.ResizeInputTensor(input, {3}), kTfLiteOk);
}
Interpreter interpreter_;
private:
// Build the kernel registration for an op that cancels the operation.
TfLiteRegistration CancelOpRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
    // Set output size to the input size in CancelOp::Prepare(). Prepare is
    // implemented only so the op has a realistic skeleton; the tensor
    // contents are never read or written by this op.
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* in_tensor = GetInput(context, node, 0);
TfLiteTensor* out_tensor = GetOutput(context, node, 0);
TfLiteIntArray* new_size = TfLiteIntArrayCopy(in_tensor->dims);
return context->ResizeTensor(context, out_tensor, new_size);
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
cancellation_data_.is_cancelled = true;
return kTfLiteOk;
};
return reg;
}
// Build the kernel registration for an op that returns kTfLiteOk.
TfLiteRegistration OkOpRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
    // Set output size to the input size in OkOp::Prepare(). Prepare is
    // implemented only so the op has a realistic skeleton; the tensor
    // contents are never read or written by this op.
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* in_tensor = GetInput(context, node, 0);
TfLiteTensor* out_tensor = GetOutput(context, node, 0);
TfLiteIntArray* new_size = TfLiteIntArrayCopy(in_tensor->dims);
return context->ResizeTensor(context, out_tensor, new_size);
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
return kTfLiteOk;
};
return reg;
}
void SetUp() final {
cancellation_data_.is_cancelled = false;
// Set up the interpreter. Create the input and output tensors.
int num_tensors = 3;
ASSERT_EQ(interpreter_.AddTensors(num_tensors), kTfLiteOk);
interpreter_.SetInputs({0});
interpreter_.SetOutputs({2});
TfLiteQuantizationParams quantized;
for (int tensor_index = 0; tensor_index < num_tensors; tensor_index++) {
ASSERT_EQ(interpreter_.SetTensorParametersReadWrite(
tensor_index, kTfLiteFloat32, "", {3}, quantized),
kTfLiteOk);
}
interpreter_.SetCancellationFunction(&cancellation_data_,
&CheckCancellation);
}
};
TEST_F(CancellationTest, CancelBeforeInvoke) {
// Cancel prior to calling Invoke.
CancellationTest::MakeOkNode(1, 2);
ASSERT_EQ(interpreter_.AllocateTensors(), kTfLiteOk);
CancellationTest::Cancel();
TfLiteStatus invoke_error_code = CancellationTest::Invoke();
ASSERT_EQ(invoke_error_code, kTfLiteError);
}
TEST_F(CancellationTest, CancelDuringInvoke) {
// Tests a model which sets the cancel in order to test cancellation works
// between ops.
//
// The first op will set the cancellation bit to true. The second op returns
// `kTfLiteOk` if executed.
CancellationTest::MakeCancelNode(0, 1);
CancellationTest::MakeOkNode(1, 2);
ASSERT_EQ(interpreter_.AllocateTensors(), kTfLiteOk);
TfLiteStatus invoke_error_code = CancellationTest::Invoke();
ASSERT_EQ(invoke_error_code, kTfLiteError);
}
// Tests functionality related to custom memory allocations in TFLite.
class TestCustomAllocation : public ::testing::Test {
protected:
void SetUp() override {
    // Simple model with four builtin ADD ops, each adding two float tensors.
interpreter_.reset(new Interpreter);
interpreter_->AddTensors(7);
interpreter_->SetInputs({0, 1});
interpreter_->SetOutputs({3, 4, 6});
TfLiteQuantizationParams quant;
interpreter_->SetTensorParametersReadWrite(0, kTfLiteFloat32, "", {3},
quant);
interpreter_->SetTensorParametersReadWrite(1, kTfLiteFloat32, "", {3},
quant);
interpreter_->SetTensorParametersReadWrite(2, kTfLiteFloat32, "", {3},
quant);
interpreter_->SetTensorParametersReadWrite(3, kTfLiteFloat32, "", {3},
quant);
interpreter_->SetTensorParametersReadWrite(4, kTfLiteFloat32, "", {3},
quant);
interpreter_->SetTensorParametersReadWrite(5, kTfLiteFloat32, "", {3},
quant, /*is_variable=*/true);
interpreter_->SetTensorParametersReadWrite(6, kTfLiteFloat32, "", {3},
quant);
auto* add_reg = ops::builtin::Register_ADD();
TfLiteAddParams* builtin_data0 =
reinterpret_cast<TfLiteAddParams*>(malloc(sizeof(TfLiteAddParams)));
TfLiteAddParams* builtin_data1 =
reinterpret_cast<TfLiteAddParams*>(malloc(sizeof(TfLiteAddParams)));
TfLiteAddParams* builtin_data2 =
reinterpret_cast<TfLiteAddParams*>(malloc(sizeof(TfLiteAddParams)));
TfLiteAddParams* builtin_data3 =
reinterpret_cast<TfLiteAddParams*>(malloc(sizeof(TfLiteAddParams)));
builtin_data0->activation = kTfLiteActNone;
builtin_data1->activation = kTfLiteActNone;
builtin_data2->activation = kTfLiteActNone;
builtin_data3->activation = kTfLiteActNone;
interpreter_->AddNodeWithParameters({0, 0}, {2}, nullptr, 0, builtin_data0,
add_reg);
interpreter_->AddNodeWithParameters({1, 1}, {3}, nullptr, 0, builtin_data1,
add_reg);
interpreter_->AddNodeWithParameters({2, 1}, {4}, nullptr, 0, builtin_data2,
add_reg);
interpreter_->AddNodeWithParameters({0, 5}, {6}, nullptr, 0, builtin_data3,
add_reg);
interpreter_->SetVariables({5});
}
void AssignCustomAllocForTensor(int tensor_idx, int required_alignment) {
const TfLiteTensor* tensor = interpreter_->tensor(tensor_idx);
auto tensor_alloc = NewCustomAlloc(tensor->bytes, required_alignment);
ASSERT_EQ(
interpreter_->SetCustomAllocationForTensor(tensor_idx, tensor_alloc),
kTfLiteOk);
}
void VerifyInvoke() {
std::vector<float> input = {1.0f, 2.0f, 3.0f};
std::vector<float> variable = {0.0f, 1.0f, 2.0f};
std::vector<float> expected_output = {2.0f, 4.0f, 6.0f};
// typed_tensor<...> should work irrespective of custom alloc, since it
// accesses output_tensor.data.
memcpy(interpreter_->typed_tensor<float>(interpreter_->variables()[0]),
variable.data(), 3 * sizeof(float));
memcpy(interpreter_->typed_tensor<float>(0), input.data(),
3 * sizeof(float));
memcpy(interpreter_->typed_tensor<float>(1), input.data(),
3 * sizeof(float));
ASSERT_EQ(interpreter_->Invoke(), kTfLiteOk);
TfLiteTensor* output_tensor =
interpreter_->tensor(interpreter_->outputs()[0]);
for (int i = 0; i < 3; ++i) {
EXPECT_EQ(output_tensor->data.f[i], expected_output[i]) << i;
}
}
  // The actual allocation is larger than num_bytes, to leave room for the
  // requested required_alignment.
TfLiteCustomAllocation NewCustomAlloc(size_t num_bytes,
int required_alignment) {
// Extra memory to ensure alignment.
char* new_alloc = new char[num_bytes + required_alignment];
char* new_underlying_buffer_aligned_ptr = reinterpret_cast<char*>(
AlignTo(required_alignment, reinterpret_cast<intptr_t>(new_alloc)));
custom_alloc_buffers_.emplace_back(new_alloc);
return TfLiteCustomAllocation(
{new_underlying_buffer_aligned_ptr, num_bytes});
}
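  // Returns `offset` rounded up to the nearest multiple of `alignment`.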
intptr_t AlignTo(size_t alignment, intptr_t offset) {
return offset % alignment == 0 ? offset
: offset + (alignment - offset % alignment);
}
void TearDown() override {
interpreter_.reset();
custom_alloc_buffers_.clear();
}
protected:
TfLiteAddParams add_params_;
std::unique_ptr<Interpreter> interpreter_;
std::vector<std::unique_ptr<char[]>> custom_alloc_buffers_;
};
TEST_F(TestCustomAllocation, InvalidAlignment) {
const TfLiteTensor* input_tensor =
interpreter_->tensor(interpreter_->inputs()[0]);
intptr_t dummy_ptr = kDefaultTensorAlignment - 1;
TfLiteCustomAllocation input_alloc{reinterpret_cast<void*>(dummy_ptr),
input_tensor->bytes};
ASSERT_EQ(interpreter_->SetCustomAllocationForTensor(
interpreter_->inputs()[0], input_alloc),
kTfLiteError);
// Allocate tensors & Invoke should still work.
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
VerifyInvoke();
}
TEST_F(TestCustomAllocation, InsufficientBytes) {
auto input_alloc = NewCustomAlloc(4, kDefaultTensorAlignment);
ASSERT_EQ(interpreter_->SetCustomAllocationForTensor(
interpreter_->inputs()[0], input_alloc),
kTfLiteError);
// Allocate tensors & Invoke should still work.
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
VerifyInvoke();
}
TEST_F(TestCustomAllocation, CustomInputAlloc) {
// Set custom allocation for one input tensor.
AssignCustomAllocForTensor(interpreter_->inputs()[0],
/*required_alignment=*/kDefaultTensorAlignment);
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
VerifyInvoke();
}
TEST_F(TestCustomAllocation, CustomInputAlloc_MultipleAssigns) {
// Set custom allocation for one input tensor.
AssignCustomAllocForTensor(interpreter_->inputs()[0],
/*required_alignment=*/kDefaultTensorAlignment);
AssignCustomAllocForTensor(interpreter_->inputs()[0],
/*required_alignment=*/kDefaultTensorAlignment);
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
VerifyInvoke();
AssignCustomAllocForTensor(interpreter_->inputs()[0],
/*required_alignment=*/kDefaultTensorAlignment);
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
VerifyInvoke();
}
TEST_F(TestCustomAllocation, CustomInputAlloc_AllocateTensorsBefore) {
// Allocate tensors.
// Allocating now will cause TFLite to reserve some extra memory, but nothing
// should break.
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
AssignCustomAllocForTensor(interpreter_->inputs()[0],
/*required_alignment=*/kDefaultTensorAlignment);
VerifyInvoke();
}
TEST_F(TestCustomAllocation, CustomInputAndOutputAllocs) {
// Set custom allocations for all IO tensors.
AssignCustomAllocForTensor(interpreter_->inputs()[0],
/*required_alignment=*/kDefaultTensorAlignment);
AssignCustomAllocForTensor(interpreter_->inputs()[1],
/*required_alignment=*/kDefaultTensorAlignment);
AssignCustomAllocForTensor(interpreter_->outputs()[0],
/*required_alignment=*/kDefaultTensorAlignment);
AssignCustomAllocForTensor(interpreter_->outputs()[1],
/*required_alignment=*/kDefaultTensorAlignment);
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
VerifyInvoke();
}
// Ensure that custom allocs work for tensors on persistent arena as well.
TEST_F(TestCustomAllocation, CustomAlloc_VariableTensor) {
// Set custom allocation for one input tensor.
AssignCustomAllocForTensor(interpreter_->variables()[0],
/*required_alignment=*/kDefaultTensorAlignment);
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
VerifyInvoke();
AssignCustomAllocForTensor(interpreter_->variables()[0],
/*required_alignment=*/kDefaultTensorAlignment);
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
std::vector<float> input = {2.0f, 3.0f, 4.0f};
std::vector<float> variable = {1.0f, 2.0f, 3.0f};
std::vector<float> expected_output = {3.0f, 5.0f, 7.0f};
memcpy(interpreter_->typed_tensor<float>(interpreter_->variables()[0]),
variable.data(), 3 * sizeof(float));
memcpy(interpreter_->typed_tensor<float>(0), input.data(), 3 * sizeof(float));
memcpy(interpreter_->typed_tensor<float>(1), input.data(), 3 * sizeof(float));
ASSERT_EQ(interpreter_->Invoke(), kTfLiteOk);
// expected_output = input + variable
TfLiteTensor* output_tensor =
interpreter_->tensor(interpreter_->outputs()[2]);
for (int i = 0; i < 3; ++i) {
EXPECT_EQ(output_tensor->data.f[i], expected_output[i]) << i;
}
}
TEST_F(TestCustomAllocation, ResizeTensorsWithoutEnoughMemory) {
// Set custom allocations for all input tensors.
AssignCustomAllocForTensor(interpreter_->inputs()[0],
/*required_alignment=*/kDefaultTensorAlignment);
AssignCustomAllocForTensor(interpreter_->inputs()[1],
/*required_alignment=*/kDefaultTensorAlignment);
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
// Now resize tensors to double the size.
ASSERT_EQ(interpreter_->ResizeInputTensor(interpreter_->inputs()[0], {2, 3}),
kTfLiteOk);
ASSERT_EQ(interpreter_->ResizeInputTensor(interpreter_->inputs()[1], {2, 3}),
kTfLiteOk);
// Since the custom memory previously allocated isn't enough,
// AllocateTensors() will fail.
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteError);
// Interpreter should no longer be in invokable state, so expect failure.
ASSERT_EQ(interpreter_->Invoke(), kTfLiteError);
}
TEST_F(TestCustomAllocation, ResizeTensorsWithEnoughMemory) {
// Set custom allocations for all input tensors, with double the required
// memory.
const TfLiteTensor* input0_tensor =
interpreter_->tensor(interpreter_->inputs()[0]);
auto input0_alloc =
NewCustomAlloc(2 * input0_tensor->bytes, kDefaultTensorAlignment);
ASSERT_EQ(interpreter_->SetCustomAllocationForTensor(
interpreter_->inputs()[0], input0_alloc),
kTfLiteOk);
const TfLiteTensor* input1_tensor =
interpreter_->tensor(interpreter_->inputs()[1]);
auto input1_alloc =
NewCustomAlloc(2 * input1_tensor->bytes, kDefaultTensorAlignment);
ASSERT_EQ(interpreter_->SetCustomAllocationForTensor(
interpreter_->inputs()[1], input1_alloc),
kTfLiteOk);
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
// Now resize tensors to double the size.
ASSERT_EQ(interpreter_->ResizeInputTensor(interpreter_->inputs()[0], {6, 1}),
kTfLiteOk);
ASSERT_EQ(interpreter_->ResizeInputTensor(interpreter_->inputs()[1], {6, 1}),
kTfLiteOk);
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
std::vector<float> input = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f};
std::vector<float> expected_output = {2.0f, 4.0f, 6.0f, 8.0f, 10.0f, 12.0f};
TfLiteTensor* tensor = interpreter_->tensor(interpreter_->outputs()[0]);
memcpy(interpreter_->typed_tensor<float>(0), input.data(), 6 * sizeof(float));
memcpy(interpreter_->typed_tensor<float>(1), input.data(), 6 * sizeof(float));
ASSERT_EQ(interpreter_->Invoke(), kTfLiteOk);
for (int i = 0; i < 6; ++i) {
EXPECT_EQ(tensor->data.f[i], expected_output[i]) << i;
}
ASSERT_EQ(interpreter_->ResizeInputTensor(interpreter_->inputs()[0], {3, 1}),
kTfLiteOk);
ASSERT_EQ(interpreter_->ResizeInputTensor(interpreter_->inputs()[1], {3, 1}),
kTfLiteOk);
ASSERT_EQ(interpreter_->AllocateTensors(), kTfLiteOk);
VerifyInvoke();
}
} // namespace | 506 | True | 1 |
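The TestCustomAllocation fixture above exercises Interpreter::SetCustomAllocationForTensor through GTest cases. As a condensed illustration of the same API pattern outside the fixture, the sketch below registers a caller-owned, aligned buffer for one tensor. The helper names (AlignPointer, CustomInputBinding) are invented for this sketch and the includes assume the usual TFLite headers, so treat it as an outline rather than a drop-in utility.

#include <cstddef>
#include <cstdint>
#include <memory>

#include "tensorflow/lite/c/common.h"
#include "tensorflow/lite/interpreter.h"

// Rounds `ptr` up to the next multiple of `alignment` (mirrors AlignTo above).
inline char* AlignPointer(char* ptr, size_t alignment) {
  const uintptr_t value = reinterpret_cast<uintptr_t>(ptr);
  const uintptr_t rem = value % alignment;
  return rem == 0 ? ptr : ptr + (alignment - rem);
}

// Owns the backing buffer handed to TFLite; it must outlive every Invoke().
struct CustomInputBinding {
  std::unique_ptr<char[]> backing;

  TfLiteStatus Bind(tflite::Interpreter* interpreter, int tensor_index,
                    size_t alignment) {
    const TfLiteTensor* tensor = interpreter->tensor(tensor_index);
    // Over-allocate so an aligned pointer can be carved out of the block.
    backing.reset(new char[tensor->bytes + alignment]);
    TfLiteCustomAllocation alloc{AlignPointer(backing.get(), alignment),
                                 tensor->bytes};
    TF_LITE_ENSURE_STATUS(
        interpreter->SetCustomAllocationForTensor(tensor_index, alloc));
    // Tensors must be (re)allocated after the custom buffer is registered.
    return interpreter->AllocateTensors();
  }
};

A test such as CustomInputAlloc above then amounts to calling Bind(&interpreter, interpreter.inputs()[0], kDefaultTensorAlignment) before writing input data and invoking.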
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::tflite::TEST.CancellationTest::CancelOpRegistration | tflite::tflite::TEST.CancellationTest::CancelOpRegistration() | [] | TfLiteRegistration CancelOpRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
// Set output size to the input size in CancelOp::Prepare(). Code exists to
// have a framework in Prepare. The input and output tensors are not used.
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* in_tensor = GetInput(context, node, 0);
TfLiteTensor* out_tensor = GetOutput(context, node, 0);
TfLiteIntArray* new_size = TfLiteIntArrayCopy(in_tensor->dims);
return context->ResizeTensor(context, out_tensor, new_size);
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
cancellation_data_.is_cancelled = true;
return kTfLiteOk;
};
return reg;
} | 116 | True | 1 |
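The commit message in this row says the fix inserts `nullptr` checks after the tensor getters, but the function column shows the pre-patch CancelOpRegistration. Below is a minimal sketch of what that hardening looks like for the prepare lambda; it assumes the TF_LITE_ENSURE macro and the kernel_util.h getters, the function name is changed to mark it as an illustration rather than the literal patch, and the cancellation side effect in invoke is omitted because it depends on test-fixture state.

#include "tensorflow/lite/c/common.h"
#include "tensorflow/lite/kernels/kernel_util.h"

// Sketch: op registration whose prepare step refuses to run on null tensors.
TfLiteRegistration CancelOpRegistrationWithChecks() {
  TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
  reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
    const TfLiteTensor* in_tensor = tflite::GetInput(context, node, 0);
    // Return kTfLiteError instead of dereferencing a null tensor pointer.
    TF_LITE_ENSURE(context, in_tensor != nullptr);
    TfLiteTensor* out_tensor = tflite::GetOutput(context, node, 0);
    TF_LITE_ENSURE(context, out_tensor != nullptr);
    TfLiteIntArray* new_size = TfLiteIntArrayCopy(in_tensor->dims);
    return context->ResizeTensor(context, out_tensor, new_size);
  };
  reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
    return kTfLiteOk;
  };
  return reg;
}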
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::tflite::TEST.CancellationTest::CancelOpRegistration | tflite::tflite::TEST.CancellationTest::CancelOpRegistration() | [] | TfLiteRegistration CancelOpRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
// Set output size to the input size in CancelOp::Prepare(). Code exists to
// have a framework in Prepare. The input and output tensors are not used.
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* in_tensor = GetInput(context, node, 0);
TfLiteTensor* out_tensor = GetOutput(context, node, 0);
TfLiteIntArray* new_size = TfLiteIntArrayCopy(in_tensor->dims);
return context->ResizeTensor(context, out_tensor, new_size);
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
cancellation_data_.is_cancelled = true;
return kTfLiteOk;
};
return reg;
} | 116 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::tflite::TEST.CancellationTest::OkOpRegistration | tflite::tflite::TEST.CancellationTest::OkOpRegistration() | [] | TfLiteRegistration OkOpRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
// Set output size to the input size in OkOp::Prepare(). Code exists to have
// a framework in Prepare. The input and output tensors are not used.
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* in_tensor = GetInput(context, node, 0);
TfLiteTensor* out_tensor = GetOutput(context, node, 0);
TfLiteIntArray* new_size = TfLiteIntArrayCopy(in_tensor->dims);
return context->ResizeTensor(context, out_tensor, new_size);
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
return kTfLiteOk;
};
return reg;
} | 110 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::tflite::TEST.CancellationTest::OkOpRegistration | tflite::tflite::TEST.CancellationTest::OkOpRegistration() | [] | TfLiteRegistration OkOpRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
// Set output size to the input size in OkOp::Prepare(). Code exists to have
// a framework in Prepare. The input and output tensors are not used.
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* in_tensor = GetInput(context, node, 0);
TfLiteTensor* out_tensor = GetOutput(context, node, 0);
TfLiteIntArray* new_size = TfLiteIntArrayCopy(in_tensor->dims);
return context->ResizeTensor(context, out_tensor, new_size);
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
return kTfLiteOk;
};
return reg;
} | 110 | True | 1 |
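The CVE text repeated in these rows centers on the special -1 index used for optional tensors: an operator's input/output lists hold indices into the subgraph-owned tensor array, and an unvalidated -1 indexes just before that array. The standalone sketch below is not TFLite code; the FakeTensor type and all names are invented to show the double-indexing pattern and the kind of loader-side check a custom verifier would add.

#include <cstdio>
#include <vector>

// Hypothetical stand-in for a subgraph-owned tensor.
struct FakeTensor {
  float* data = nullptr;
  int bytes = 0;
};

int main() {
  std::vector<FakeTensor> subgraph_tensors(4);  // owned by the "subgraph"
  // An operator references tensors by index; -1 marks an optional tensor.
  std::vector<int> op_inputs = {2, -1};

  for (int idx : op_inputs) {
    // Without this check, subgraph_tensors[idx] with idx == -1 would read one
    // element before the allocation -- the out-of-bounds gadget the CVE
    // describes. A verifier should only accept -1 where the operator actually
    // declares the input optional.
    if (idx < 0 || idx >= static_cast<int>(subgraph_tensors.size())) {
      std::printf("rejecting tensor index %d\n", idx);
      continue;
    }
    std::printf("tensor %d has %d bytes\n", idx, subgraph_tensors[idx].bytes);
  }
  return 0;
}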
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::tflite::TEST.GetPassthroughOpRegistration | tflite::tflite::TEST.GetPassthroughOpRegistration() | [] | TfLiteRegistration GetPassthroughOpRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
reg.init = [](TfLiteContext* context, const char*, size_t) -> void* {
auto* first_new_tensor = new int;
context->AddTensors(context, 2, first_new_tensor);
return first_new_tensor;
};
reg.free = [](TfLiteContext* context, void* buffer) {
delete static_cast<int*>(buffer);
};
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
auto* first_new_tensor = static_cast<int*>(node->user_data);
const TfLiteTensor* tensor0 = GetInput(context, node, 0);
TfLiteTensor* tensor1 = GetOutput(context, node, 0);
TfLiteIntArray* newSize = TfLiteIntArrayCopy(tensor0->dims);
TF_LITE_ENSURE_STATUS(context->ResizeTensor(context, tensor1, newSize));
TfLiteIntArrayFree(node->temporaries);
node->temporaries = TfLiteIntArrayCreate(2);
for (int i = 0; i < 2; ++i) {
node->temporaries->data[i] = *(first_new_tensor) + i;
}
auto setup_temporary = [&](int id) {
TfLiteTensor* tmp = &context->tensors[id];
tmp->type = kTfLiteFloat32;
tmp->allocation_type = kTfLiteArenaRw;
return context->ResizeTensor(context, tmp,
TfLiteIntArrayCopy(tensor0->dims));
};
TF_LITE_ENSURE_STATUS(setup_temporary(node->temporaries->data[0]));
TF_LITE_ENSURE_STATUS(setup_temporary(node->temporaries->data[1]));
return kTfLiteOk;
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* a0 = GetInput(context, node, 0);
auto populate = [&](int id) {
TfLiteTensor* t = &context->tensors[id];
int num = a0->dims->data[0];
for (int i = 0; i < num; i++) {
t->data.f[i] = a0->data.f[i];
}
};
populate(node->outputs->data[0]);
populate(node->temporaries->data[0]);
populate(node->temporaries->data[1]);
return kTfLiteOk;
};
return reg;
} | 455 | True | 1 |
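The commit message in the record above says that `tflite::GetInput` and `tflite::GetOutput` can now return `nullptr`, so every usage needs a guard before the tensor is dereferenced. A minimal sketch of that pattern applied to a prepare callback like the one in GetPassthroughOpRegistration, assuming the standard TF_LITE_ENSURE macro (the test code above already uses its sibling TF_LITE_ENSURE_STATUS); this illustrates the check pattern, not the literal patch:

  reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
    const TfLiteTensor* tensor0 = GetInput(context, node, 0);
    // Guard before touching tensor0->dims; TF_LITE_ENSURE reports an error
    // and returns kTfLiteError from the enclosing callback when it fails.
    TF_LITE_ENSURE(context, tensor0 != nullptr);
    TfLiteTensor* tensor1 = GetOutput(context, node, 0);
    TF_LITE_ENSURE(context, tensor1 != nullptr);
    TfLiteIntArray* new_size = TfLiteIntArrayCopy(tensor0->dims);
    return context->ResizeTensor(context, tensor1, new_size);
  };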
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::tflite::TEST.GetPassthroughOpRegistration | tflite::tflite::TEST.GetPassthroughOpRegistration() | [] | TfLiteRegistration GetPassthroughOpRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
reg.init = [](TfLiteContext* context, const char*, size_t) -> void* {
auto* first_new_tensor = new int;
context->AddTensors(context, 2, first_new_tensor);
return first_new_tensor;
};
reg.free = [](TfLiteContext* context, void* buffer) {
delete static_cast<int*>(buffer);
};
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
auto* first_new_tensor = static_cast<int*>(node->user_data);
const TfLiteTensor* tensor0 = GetInput(context, node, 0);
TfLiteTensor* tensor1 = GetOutput(context, node, 0);
TfLiteIntArray* newSize = TfLiteIntArrayCopy(tensor0->dims);
TF_LITE_ENSURE_STATUS(context->ResizeTensor(context, tensor1, newSize));
TfLiteIntArrayFree(node->temporaries);
node->temporaries = TfLiteIntArrayCreate(2);
for (int i = 0; i < 2; ++i) {
node->temporaries->data[i] = *(first_new_tensor) + i;
}
auto setup_temporary = [&](int id) {
TfLiteTensor* tmp = &context->tensors[id];
tmp->type = kTfLiteFloat32;
tmp->allocation_type = kTfLiteArenaRw;
return context->ResizeTensor(context, tmp,
TfLiteIntArrayCopy(tensor0->dims));
};
TF_LITE_ENSURE_STATUS(setup_temporary(node->temporaries->data[0]));
TF_LITE_ENSURE_STATUS(setup_temporary(node->temporaries->data[1]));
return kTfLiteOk;
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* a0 = GetInput(context, node, 0);
auto populate = [&](int id) {
TfLiteTensor* t = &context->tensors[id];
int num = a0->dims->data[0];
for (int i = 0; i < num; i++) {
t->data.f[i] = a0->data.f[i];
}
};
populate(node->outputs->data[0]);
populate(node->temporaries->data[0]);
populate(node->temporaries->data[1]);
return kTfLiteOk;
};
return reg;
} | 455 | True | 1 |
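The advisory text in these records traces the bug to the `-1` "optional tensor" index reaching the subgraph's tensor array unchecked, so the access lands a fixed offset before the start of the heap allocation. The following is a self-contained illustration of the missing bounds check in plain C++; FakeTensor and GetTensorChecked are invented names for the example and are not TensorFlow Lite internals:

  #include <cstdio>
  #include <vector>

  struct FakeTensor { float value = 0.f; };  // stand-in for a tensor record

  // Reject the -1 sentinel (and any other out-of-range value) instead of
  // using it to index the array owned by the subgraph.
  FakeTensor* GetTensorChecked(std::vector<FakeTensor>& tensors, int index) {
    if (index < 0 || index >= static_cast<int>(tensors.size())) return nullptr;
    return &tensors[index];
  }

  int main() {
    std::vector<FakeTensor> subgraph_tensors(4);
    std::printf("index 2  -> %p\n",
                static_cast<void*>(GetTensorChecked(subgraph_tensors, 2)));
    // Prints a null pointer instead of reading just before the buffer.
    std::printf("index -1 -> %p\n",
                static_cast<void*>(GetTensorChecked(subgraph_tensors, -1)));
    return 0;
  }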
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::tflite::TEST.TEST | tflite::tflite::TEST.TEST( BasicInterpreter , AllocateTwice) | ['BasicInterpreter', 'AllocateTwice'] | TEST(BasicInterpreter, AllocateTwice) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(2), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({1}), kTfLiteOk);
TfLiteQuantizationParams quantized;
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(0, kTfLiteFloat32, "", {3},
quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(1, kTfLiteFloat32, "", {3},
quantized),
kTfLiteOk);
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* tensor0 = GetInput(context, node, 0);
TfLiteTensor* tensor1 = GetOutput(context, node, 0);
TfLiteIntArray* newSize = TfLiteIntArrayCopy(tensor0->dims);
return context->ResizeTensor(context, tensor1, newSize);
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* a0 = GetInput(context, node, 0);
TfLiteTensor* a1 = GetOutput(context, node, 0);
int num = a0->dims->data[0];
for (int i = 0; i < num; i++) {
a1->data.f[i] = a0->data.f[i];
}
return kTfLiteOk;
};
ASSERT_EQ(
      interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr, &reg),
kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensor(0, {3}), kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
char* old_tensor0_ptr = interpreter.tensor(0)->data.raw;
char* old_tensor1_ptr = interpreter.tensor(1)->data.raw;
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
ASSERT_EQ(old_tensor0_ptr, interpreter.tensor(0)->data.raw);
ASSERT_EQ(old_tensor1_ptr, interpreter.tensor(1)->data.raw);
} | 422 | True | 1 |
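The commit message also singles out `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor`: their result may legitimately be `nullptr`, so a check is only inserted where the caller would otherwise dereference unconditionally. A short sketch of that caller-side handling, assuming the usual GetOptionalInputTensor helper and TF_LITE_ENSURE_EQ macro; the function name and the optional input index 2 are illustrative only:

  // Hypothetical kernel Prepare with an optional bias at input index 2.
  TfLiteStatus PrepareWithOptionalBias(TfLiteContext* context, TfLiteNode* node) {
    const TfLiteTensor* bias = GetOptionalInputTensor(context, node, 2);
    if (bias != nullptr) {
      // Present: safe to use bias->dims, bias->data.f, bias->bytes, ...
      TF_LITE_ENSURE_EQ(context, bias->type, kTfLiteFloat32);
    }
    // An absent optional input is an expected, valid outcome, so no extra
    // check is needed on the path that never dereferences the pointer.
    return kTfLiteOk;
  }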
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::tflite::TEST.TEST | tflite::tflite::TEST.TEST( BasicInterpreter , AllocateTwice) | ['BasicInterpreter', 'AllocateTwice'] | TEST(BasicInterpreter, AllocateTwice) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(2), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({1}), kTfLiteOk);
TfLiteQuantizationParams quantized;
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(0, kTfLiteFloat32, "", {3},
quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(1, kTfLiteFloat32, "", {3},
quantized),
kTfLiteOk);
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* tensor0 = GetInput(context, node, 0);
TfLiteTensor* tensor1 = GetOutput(context, node, 0);
TfLiteIntArray* newSize = TfLiteIntArrayCopy(tensor0->dims);
return context->ResizeTensor(context, tensor1, newSize);
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* a0 = GetInput(context, node, 0);
TfLiteTensor* a1 = GetOutput(context, node, 0);
int num = a0->dims->data[0];
for (int i = 0; i < num; i++) {
a1->data.f[i] = a0->data.f[i];
}
return kTfLiteOk;
};
ASSERT_EQ(
      interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr, &reg),
kTfLiteOk);
ASSERT_EQ(interpreter.ResizeInputTensor(0, {3}), kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
char* old_tensor0_ptr = interpreter.tensor(0)->data.raw;
char* old_tensor1_ptr = interpreter.tensor(1)->data.raw;
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
ASSERT_EQ(old_tensor0_ptr, interpreter.tensor(0)->data.raw);
ASSERT_EQ(old_tensor1_ptr, interpreter.tensor(1)->data.raw);
} | 422 | True | 1 |
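For builds that cannot move to the patched releases right away, the advisory's workaround is a custom Verifier run at model-loading time. The sketch below assumes the tflite::TfLiteVerifier interface and the generated flatbuffer accessors (tflite::GetModel, subgraphs(), operators(), inputs(), outputs()) behave as in the public headers; it rejects every out-of-range index outright, so a model that legitimately relies on `-1` optional inputs would need the narrower per-operator allow-list that the advisory itself calls error-prone:

  class RejectOutOfRangeTensorIndices : public tflite::TfLiteVerifier {
   public:
    bool Verify(const char* data, int length,
                tflite::ErrorReporter* reporter) override {
      const tflite::Model* model = tflite::GetModel(data);
      if (model == nullptr || model->subgraphs() == nullptr) return false;
      for (const tflite::SubGraph* subgraph : *model->subgraphs()) {
        const int num_tensors =
            subgraph->tensors() ? static_cast<int>(subgraph->tensors()->size()) : 0;
        if (subgraph->operators() == nullptr) continue;
        for (const tflite::Operator* op : *subgraph->operators()) {
          for (const auto* indices : {op->inputs(), op->outputs()}) {
            if (indices == nullptr) continue;
            for (int32_t idx : *indices) {
              // Conservative: refuse -1 (the optional sentinel) and anything
              // past the end of the subgraph's tensor array.
              if (idx < 0 || idx >= num_tensors) return false;
            }
          }
        }
      }
      return true;
    }
  };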
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::tflite::TEST.TEST | tflite::tflite::TEST.TEST( BasicInterpreter , ThreeStepAllocate) | ['BasicInterpreter', 'ThreeStepAllocate'] | TEST(BasicInterpreter, ThreeStepAllocate) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(5), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({4}), kTfLiteOk);
TfLiteQuantizationParams quantized;
// String tensor with one string of length 3
union {
char raw_bytes[15];
struct {
int32_t num_strs;
int32_t offsets[2];
char str_data[3];
} tensor_data;
} data;
data.tensor_data = {1, {12, 15}, {'A', 'B', 'C'}};
// Read only string tensor.
ASSERT_EQ(interpreter.SetTensorParametersReadOnly(0, kTfLiteString, "", {1},
quantized, data.raw_bytes,
sizeof(data.raw_bytes)),
kTfLiteOk);
// Read-write string tensor.
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(1, kTfLiteString, "", {1},
quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(2, kTfLiteInt32, "", {1},
quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(3, kTfLiteString, "", {1},
quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(4, kTfLiteInt32, "", {1},
quantized),
kTfLiteOk);
// String-in String-out node.
TfLiteRegistration reg_copy = {nullptr, nullptr, nullptr, nullptr};
reg_copy.invoke = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input = GetInput(context, node, 0);
TfLiteTensor* output = GetOutput(context, node, 0);
DynamicBuffer buf;
StringRef str_ref = GetString(input, 0);
buf.AddString(str_ref);
buf.WriteToTensorAsVector(output);
return kTfLiteOk;
};
// String-in Int-out node.
TfLiteRegistration reg_len = {nullptr, nullptr, nullptr, nullptr};
reg_len.prepare = [](TfLiteContext* context, TfLiteNode* node) {
TfLiteTensor* output = GetOutput(context, node, 0);
TfLiteIntArray* outputSize = TfLiteIntArrayCreate(1);
outputSize->data[0] = 1;
return context->ResizeTensor(context, output, outputSize);
};
reg_len.invoke = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* a0 = GetInput(context, node, 0);
TfLiteTensor* a1 = GetOutput(context, node, 0);
a1->data.i32[0] = a0->bytes;
return kTfLiteOk;
};
ASSERT_EQ(interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr,
                                              &reg_copy),
kTfLiteOk);
ASSERT_EQ(interpreter.AddNodeWithParameters({1}, {2}, nullptr, 0, nullptr,
                                              &reg_len),
kTfLiteOk);
ASSERT_EQ(interpreter.AddNodeWithParameters({0}, {3}, nullptr, 0, nullptr,
                                              &reg_copy),
kTfLiteOk);
ASSERT_EQ(interpreter.AddNodeWithParameters({3}, {4}, nullptr, 0, nullptr,
                                              &reg_len),
kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
ASSERT_EQ(interpreter.tensor(0)->bytes, 15);
ASSERT_NE(interpreter.tensor(0)->data.raw, nullptr);
ASSERT_EQ(interpreter.tensor(1)->bytes, 15);
ASSERT_NE(interpreter.tensor(1)->data.raw, nullptr);
ASSERT_EQ(interpreter.tensor(3)->bytes, 15);
ASSERT_NE(interpreter.tensor(4)->data.raw, nullptr);
ASSERT_EQ(interpreter.tensor(2)->bytes, 4);
ASSERT_EQ(interpreter.tensor(2)->data.i32[0], 15);
ASSERT_EQ(interpreter.tensor(4)->bytes, 4);
ASSERT_EQ(interpreter.tensor(4)->data.i32[0], 15);
} | 737 | True | 1 |
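Hooking such a verifier into loading would then look roughly like the following; VerifyAndBuildFromFile and the model path are assumptions about the tflite::FlatBufferModel API and the deployment, so check the exact overloads against the headers you build against:

  RejectOutOfRangeTensorIndices extra_verifier;
  std::unique_ptr<tflite::FlatBufferModel> model =
      tflite::FlatBufferModel::VerifyAndBuildFromFile("model.tflite",
                                                      &extra_verifier);
  if (model == nullptr) {
    // Either the built-in flatbuffer verification or the extra index check
    // failed; refuse to build an interpreter from this model.
  }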
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::tflite::TEST.TEST | tflite::tflite::TEST.TEST( BasicInterpreter , ThreeStepAllocate) | ['BasicInterpreter', 'ThreeStepAllocate'] | TEST(BasicInterpreter, ThreeStepAllocate) {
Interpreter interpreter;
ASSERT_EQ(interpreter.AddTensors(5), kTfLiteOk);
ASSERT_EQ(interpreter.SetInputs({0}), kTfLiteOk);
ASSERT_EQ(interpreter.SetOutputs({4}), kTfLiteOk);
TfLiteQuantizationParams quantized;
// String tensor with one string of length 3
union {
char raw_bytes[15];
struct {
int32_t num_strs;
int32_t offsets[2];
char str_data[3];
} tensor_data;
} data;
data.tensor_data = {1, {12, 15}, {'A', 'B', 'C'}};
// Read only string tensor.
ASSERT_EQ(interpreter.SetTensorParametersReadOnly(0, kTfLiteString, "", {1},
quantized, data.raw_bytes,
sizeof(data.raw_bytes)),
kTfLiteOk);
// Read-write string tensor.
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(1, kTfLiteString, "", {1},
quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(2, kTfLiteInt32, "", {1},
quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(3, kTfLiteString, "", {1},
quantized),
kTfLiteOk);
ASSERT_EQ(interpreter.SetTensorParametersReadWrite(4, kTfLiteInt32, "", {1},
quantized),
kTfLiteOk);
// String-in String-out node.
TfLiteRegistration reg_copy = {nullptr, nullptr, nullptr, nullptr};
reg_copy.invoke = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input = GetInput(context, node, 0);
TfLiteTensor* output = GetOutput(context, node, 0);
DynamicBuffer buf;
StringRef str_ref = GetString(input, 0);
buf.AddString(str_ref);
buf.WriteToTensorAsVector(output);
return kTfLiteOk;
};
// String-in Int-out node.
TfLiteRegistration reg_len = {nullptr, nullptr, nullptr, nullptr};
reg_len.prepare = [](TfLiteContext* context, TfLiteNode* node) {
TfLiteTensor* output = GetOutput(context, node, 0);
TfLiteIntArray* outputSize = TfLiteIntArrayCreate(1);
outputSize->data[0] = 1;
return context->ResizeTensor(context, output, outputSize);
};
reg_len.invoke = [](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* a0 = GetInput(context, node, 0);
TfLiteTensor* a1 = GetOutput(context, node, 0);
a1->data.i32[0] = a0->bytes;
return kTfLiteOk;
};
ASSERT_EQ(interpreter.AddNodeWithParameters({0}, {1}, nullptr, 0, nullptr,
                                              &reg_copy),
kTfLiteOk);
ASSERT_EQ(interpreter.AddNodeWithParameters({1}, {2}, nullptr, 0, nullptr,
                                              &reg_len),
kTfLiteOk);
ASSERT_EQ(interpreter.AddNodeWithParameters({0}, {3}, nullptr, 0, nullptr,
                                              &reg_copy),
kTfLiteOk);
ASSERT_EQ(interpreter.AddNodeWithParameters({3}, {4}, nullptr, 0, nullptr,
                                              &reg_len),
kTfLiteOk);
ASSERT_EQ(interpreter.AllocateTensors(), kTfLiteOk);
ASSERT_EQ(interpreter.Invoke(), kTfLiteOk);
ASSERT_EQ(interpreter.tensor(0)->bytes, 15);
ASSERT_NE(interpreter.tensor(0)->data.raw, nullptr);
ASSERT_EQ(interpreter.tensor(1)->bytes, 15);
ASSERT_NE(interpreter.tensor(1)->data.raw, nullptr);
ASSERT_EQ(interpreter.tensor(3)->bytes, 15);
ASSERT_NE(interpreter.tensor(4)->data.raw, nullptr);
ASSERT_EQ(interpreter.tensor(2)->bytes, 4);
ASSERT_EQ(interpreter.tensor(2)->data.i32[0], 15);
ASSERT_EQ(interpreter.tensor(4)->bytes, 4);
ASSERT_EQ(interpreter.tensor(4)->data.i32[0], 15);
} | 737 | True | 1 |
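Later TensorFlow Lite releases pair the refactoring with status-returning accessors, so the lookup and the guard become a single step. The GetInputSafe and GetOutputSafe names and signatures below are an assumption based on the post-refactoring kernel_util.h rather than something shown in this record:

  reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
    const TfLiteTensor* input;
    TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, 0, &input));
    TfLiteTensor* output;
    TF_LITE_ENSURE_OK(context, GetOutputSafe(context, node, 0, &output));
    // From here on both pointers are known to be non-null.
    TfLiteIntArray* new_size = TfLiteIntArrayCopy(input->dims);
    return context->ResizeTensor(context, output, new_size);
  };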
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::tflite::TEST.TestExecutionPlan::CopyOpRegistration | tflite::tflite::TEST.TestExecutionPlan::CopyOpRegistration() | [] | TfLiteRegistration CopyOpRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
// Set output size to input size
const TfLiteTensor* tensor0 = GetInput(context, node, 0);
TfLiteTensor* tensor1 = GetOutput(context, node, 0);
TfLiteIntArray* newSize = TfLiteIntArrayCopy(tensor0->dims);
return context->ResizeTensor(context, tensor1, newSize);
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
CallReporting* call_reporting =
static_cast<CallReporting*>(node->builtin_data);
// Copy input data to output data.
const TfLiteTensor* a0 = GetInput(context, node, 0);
TfLiteTensor* a1 = GetOutput(context, node, 0);
int num = a0->dims->data[0];
for (int i = 0; i < num; i++) {
a1->data.f[i] = a0->data.f[i];
}
call_reporting->Record();
return kTfLiteOk;
};
return reg;
} | 204 | True | 1 |
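The CopyOpRegistration invoke callback above dereferences both tensors inside its copy loop, which is exactly the unconditional use the commit message targets. Under the same assumptions as the earlier prepare sketch (TF_LITE_ENSURE reporting the error and returning kTfLiteError), the guarded version looks like this:

  reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
    CallReporting* call_reporting =
        static_cast<CallReporting*>(node->builtin_data);
    const TfLiteTensor* a0 = GetInput(context, node, 0);
    TF_LITE_ENSURE(context, a0 != nullptr);
    TfLiteTensor* a1 = GetOutput(context, node, 0);
    TF_LITE_ENSURE(context, a1 != nullptr);
    // Copy input data to output data only after both lookups succeeded.
    const int num = a0->dims->data[0];
    for (int i = 0; i < num; i++) {
      a1->data.f[i] = a0->data.f[i];
    }
    call_reporting->Record();
    return kTfLiteOk;
  };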
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::tflite::TEST.TestExecutionPlan::CopyOpRegistration | tflite::tflite::TEST.TestExecutionPlan::CopyOpRegistration() | [] | TfLiteRegistration CopyOpRegistration() {
TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
reg.prepare = [](TfLiteContext* context, TfLiteNode* node) {
// Set output size to input size
const TfLiteTensor* tensor0 = GetInput(context, node, 0);
TfLiteTensor* tensor1 = GetOutput(context, node, 0);
TfLiteIntArray* newSize = TfLiteIntArrayCopy(tensor0->dims);
return context->ResizeTensor(context, tensor1, newSize);
};
reg.invoke = [](TfLiteContext* context, TfLiteNode* node) {
CallReporting* call_reporting =
static_cast<CallReporting*>(node->builtin_data);
// Copy input data to output data.
const TfLiteTensor* a0 = GetInput(context, node, 0);
TfLiteTensor* a1 = GetOutput(context, node, 0);
int num = a0->dims->data[0];
for (int i = 0; i < num; i++) {
a1->data.f[i] = a0->data.f[i];
}
call_reporting->Record();
return kTfLiteOk;
};
return reg;
} | 204 | True | 1 |
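Editorial note on the patch pattern in this record: the commit message describes inserting nullptr checks wherever tflite::GetInput / tflite::GetOutput are used, because those accessors can return nullptr. The sketch below applies that pattern to a copy-style op like the test registration above. It is a minimal illustration under stated assumptions, not the upstream patch itself; the Guarded* names are invented for this note and the include paths are assumed from the TensorFlow Lite source layout.

// Minimal sketch (not the upstream patch): a copy op whose prepare/invoke
// callbacks check the tensors returned by GetInput/GetOutput before use.
#include "tensorflow/lite/c/common.h"             // TfLiteContext, TfLiteNode, ... (assumed path)
#include "tensorflow/lite/kernels/kernel_util.h"  // GetInput, GetOutput, NumElements (assumed path)

namespace {

TfLiteStatus GuardedCopyPrepare(TfLiteContext* context, TfLiteNode* node) {
  const TfLiteTensor* input = tflite::GetInput(context, node, 0);
  TfLiteTensor* output = tflite::GetOutput(context, node, 0);
  // With a malformed model (e.g. a -1 tensor index) these can be null.
  TF_LITE_ENSURE(context, input != nullptr);
  TF_LITE_ENSURE(context, output != nullptr);
  TfLiteIntArray* new_size = TfLiteIntArrayCopy(input->dims);
  return context->ResizeTensor(context, output, new_size);
}

TfLiteStatus GuardedCopyInvoke(TfLiteContext* context, TfLiteNode* node) {
  const TfLiteTensor* input = tflite::GetInput(context, node, 0);
  TfLiteTensor* output = tflite::GetOutput(context, node, 0);
  TF_LITE_ENSURE(context, input != nullptr);
  TF_LITE_ENSURE(context, output != nullptr);
  const int num = static_cast<int>(tflite::NumElements(input));
  for (int i = 0; i < num; ++i) {
    output->data.f[i] = input->data.f[i];  // float tensors assumed, as in the test op above
  }
  return kTfLiteOk;
}

}  // namespace

TfLiteRegistration GuardedCopyOpRegistration() {
  TfLiteRegistration reg = {nullptr, nullptr, nullptr, nullptr};
  reg.prepare = GuardedCopyPrepare;
  reg.invoke = GuardedCopyInvoke;
  return reg;
}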
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | Java_org_tensorflow_lite_InterpreterTest_getNativeHandleForDelegate | Java_org_tensorflow_lite_InterpreterTest_getNativeHandleForDelegate( JNIEnv * env , jclass clazz) | ['env', 'clazz'] | Java_org_tensorflow_lite_InterpreterTest_getNativeHandleForDelegate(
JNIEnv* env, jclass clazz) {
// A simple op which outputs a tensor with values of 7.
static TfLiteRegistration registration = {
.init = nullptr,
.free = nullptr,
.prepare =
[](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input = tflite::GetInput(context, node, 0);
TfLiteTensor* output = tflite::GetOutput(context, node, 0);
TfLiteIntArray* output_dims = TfLiteIntArrayCopy(input->dims);
output->type = kTfLiteFloat32;
return context->ResizeTensor(context, output, output_dims);
},
.invoke =
[](TfLiteContext* context, TfLiteNode* node) {
TfLiteTensor* output = tflite::GetOutput(context, node, 0);
std::fill(output->data.f,
output->data.f + tflite::NumElements(output), 7.0f);
return kTfLiteOk;
},
.profiling_string = nullptr,
.builtin_code = 0,
.custom_name = "",
.version = 1,
};
static TfLiteDelegate delegate = {
.data_ = nullptr,
.Prepare = [](TfLiteContext* context,
TfLiteDelegate* delegate) -> TfLiteStatus {
TfLiteIntArray* execution_plan;
TF_LITE_ENSURE_STATUS(
context->GetExecutionPlan(context, &execution_plan));
context->ReplaceNodeSubsetsWithDelegateKernels(
context, registration, execution_plan, delegate);
// Now bind delegate buffer handles for all tensors.
for (size_t i = 0; i < context->tensors_size; ++i) {
context->tensors[i].delegate = delegate;
context->tensors[i].buffer_handle = static_cast<int>(i);
}
return kTfLiteOk;
},
.CopyFromBufferHandle = nullptr,
.CopyToBufferHandle = nullptr,
.FreeBufferHandle = nullptr,
.flags = kTfLiteDelegateFlagsAllowDynamicTensors,
};
return reinterpret_cast<jlong>(&delegate);
} | 328 | True | 1 |
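The advisory text in this record pins the bug on double indexing: each operator stores integer indices into the subgraph's tensor array, and the -1 "optional tensor" marker is accepted even where it makes no sense. The fragment below is a simplified sketch of that pattern in the shape of the TFLite C API, written for this note to show why an unchecked -1 becomes a read or write at a fixed negative offset; it is not the library's actual accessor implementation, and the TfLiteNode / TfLiteContext field names used are assumptions based on the C API.

#include "tensorflow/lite/c/common.h"  // TfLiteContext, TfLiteNode, TfLiteTensor (assumed path)

// Simplified sketch of the double indexing described in the advisory:
// node->inputs holds indices into context->tensors.
const TfLiteTensor* UncheckedGetInput(const TfLiteContext* context,
                                      const TfLiteNode* node, int i) {
  const int tensor_index = node->inputs->data[i];
  // If tensor_index == -1 (the "optional tensor" marker), this evaluates to
  // &context->tensors[-1]: one TfLiteTensor *before* the heap-allocated array.
  // Reading ->data or ->dims through it is the out-of-bounds gadget.
  return &context->tensors[tensor_index];
}

// Guarded variant: reject indices that are not real tensors of this subgraph.
const TfLiteTensor* CheckedGetInput(const TfLiteContext* context,
                                    const TfLiteNode* node, int i) {
  const int tensor_index = node->inputs->data[i];
  if (tensor_index < 0 ||
      static_cast<size_t>(tensor_index) >= context->tensors_size) {
    return nullptr;  // caller must handle this, e.g. with TF_LITE_ENSURE
  }
  return &context->tensors[tensor_index];
}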
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | Java_org_tensorflow_lite_InterpreterTest_getNativeHandleForDelegate | Java_org_tensorflow_lite_InterpreterTest_getNativeHandleForDelegate( JNIEnv * env , jclass clazz) | ['env', 'clazz'] | Java_org_tensorflow_lite_InterpreterTest_getNativeHandleForDelegate(
JNIEnv* env, jclass clazz) {
// A simple op which outputs a tensor with values of 7.
static TfLiteRegistration registration = {
.init = nullptr,
.free = nullptr,
.prepare =
[](TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input = tflite::GetInput(context, node, 0);
TfLiteTensor* output = tflite::GetOutput(context, node, 0);
TfLiteIntArray* output_dims = TfLiteIntArrayCopy(input->dims);
output->type = kTfLiteFloat32;
return context->ResizeTensor(context, output, output_dims);
},
.invoke =
[](TfLiteContext* context, TfLiteNode* node) {
TfLiteTensor* output = tflite::GetOutput(context, node, 0);
std::fill(output->data.f,
output->data.f + tflite::NumElements(output), 7.0f);
return kTfLiteOk;
},
.profiling_string = nullptr,
.builtin_code = 0,
.custom_name = "",
.version = 1,
};
static TfLiteDelegate delegate = {
.data_ = nullptr,
.Prepare = [](TfLiteContext* context,
TfLiteDelegate* delegate) -> TfLiteStatus {
TfLiteIntArray* execution_plan;
TF_LITE_ENSURE_STATUS(
context->GetExecutionPlan(context, &execution_plan));
context->ReplaceNodeSubsetsWithDelegateKernels(
context, registration, execution_plan, delegate);
// Now bind delegate buffer handles for all tensors.
for (size_t i = 0; i < context->tensors_size; ++i) {
context->tensors[i].delegate = delegate;
context->tensors[i].buffer_handle = static_cast<int>(i);
}
return kTfLiteOk;
},
.CopyFromBufferHandle = nullptr,
.CopyToBufferHandle = nullptr,
.FreeBufferHandle = nullptr,
.flags = kTfLiteDelegateFlagsAllowDynamicTensors,
};
return reinterpret_cast<jlong>(&delegate);
} | 328 | True | 1 |
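The advisory's suggested workaround is a model-loading check that only lets operators which genuinely accept optional inputs use the -1 index. Below is a rough sketch of such a check over the flatbuffer schema. The tflite::Model / SubGraph / Operator accessors are assumed from the generated schema header, and OpAllowsOptionalInputs is a hypothetical allow-list the integrator would have to supply; as the advisory itself warns, this kind of allow-list is error-prone, so it is only a stop-gap next to upgrading.

#include "tensorflow/lite/schema/schema_generated.h"  // tflite::Model and friends (assumed path)

// Hypothetical allow-list supplied by the integrator: true only if this
// operator is one that legitimately takes optional (-1) inputs.
bool OpAllowsOptionalInputs(const tflite::Model& model, const tflite::Operator& op);

// Returns true if every -1 tensor index in the model is used by an operator
// that is allowed to have optional inputs, and never as an output index.
bool VerifyOptionalTensorUsage(const tflite::Model& model) {
  if (model.subgraphs() == nullptr) return true;
  for (const tflite::SubGraph* subgraph : *model.subgraphs()) {
    if (subgraph->operators() == nullptr) continue;
    for (const tflite::Operator* op : *subgraph->operators()) {
      if (op->inputs() != nullptr) {
        for (int32_t idx : *op->inputs()) {
          if (idx == -1 && !OpAllowsOptionalInputs(model, *op)) return false;
        }
      }
      if (op->outputs() != nullptr) {
        for (int32_t idx : *op->outputs()) {
          if (idx == -1) return false;  // reject -1 output indices outright in this sketch
        }
      }
    }
  }
  return true;
}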
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::custom::hashtable::EvalHashtable | tflite::ops::custom::hashtable::EvalHashtable( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus EvalHashtable(TfLiteContext* context, TfLiteNode* node) {
TF_LITE_ENSURE(context, node->user_data != nullptr);
const auto* params =
reinterpret_cast<const TfLiteHashtableParams*>(node->user_data);
// The resource id is generated based on the given table name.
const int resource_id = std::hash<std::string>{}(params->table_name);
TfLiteTensor* resource_handle_tensor =
GetOutput(context, node, kResourceHandleTensor);
auto* resource_handle_data =
GetTensorData<std::int32_t>(resource_handle_tensor);
resource_handle_data[0] = resource_id;
Subgraph* subgraph = reinterpret_cast<Subgraph*>(context->impl_);
auto& resources = subgraph->resources();
resource::CreateHashtableResourceIfNotAvailable(
&resources, resource_id, params->key_dtype, params->value_dtype);
return kTfLiteOk;
} | 140 | True | 1 |
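In the EvalHashtable kernel of this record, the pointer from GetOutput goes straight into GetTensorData and is written through, so a null or bogus tensor (for example one reached via a -1 index) becomes exactly the write gadget the advisory describes. The sketch below adds the same TF_LITE_ENSURE null check that the PrepareHashtable kernel in the adjacent records performs; it illustrates the guarded shape and is not necessarily identical to the upstream fix.

// Sketch of EvalHashtable with an explicit null check before the store.
TfLiteStatus GuardedEvalHashtable(TfLiteContext* context, TfLiteNode* node) {
  TF_LITE_ENSURE(context, node->user_data != nullptr);
  const auto* params =
      reinterpret_cast<const TfLiteHashtableParams*>(node->user_data);
  const int resource_id = std::hash<std::string>{}(params->table_name);

  TfLiteTensor* resource_handle_tensor =
      GetOutput(context, node, kResourceHandleTensor);
  // New guard: refuse to run if the output tensor could not be resolved.
  TF_LITE_ENSURE(context, resource_handle_tensor != nullptr);

  auto* resource_handle_data =
      GetTensorData<std::int32_t>(resource_handle_tensor);
  TF_LITE_ENSURE(context, resource_handle_data != nullptr);
  resource_handle_data[0] = resource_id;

  Subgraph* subgraph = reinterpret_cast<Subgraph*>(context->impl_);
  auto& resources = subgraph->resources();
  resource::CreateHashtableResourceIfNotAvailable(
      &resources, resource_id, params->key_dtype, params->value_dtype);
  return kTfLiteOk;
}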
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::custom::hashtable::EvalHashtable | tflite::ops::custom::hashtable::EvalHashtable( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus EvalHashtable(TfLiteContext* context, TfLiteNode* node) {
TF_LITE_ENSURE(context, node->user_data != nullptr);
const auto* params =
reinterpret_cast<const TfLiteHashtableParams*>(node->user_data);
// The resource id is generated based on the given table name.
const int resource_id = std::hash<std::string>{}(params->table_name);
TfLiteTensor* resource_handle_tensor =
GetOutput(context, node, kResourceHandleTensor);
auto* resource_handle_data =
GetTensorData<std::int32_t>(resource_handle_tensor);
resource_handle_data[0] = resource_id;
Subgraph* subgraph = reinterpret_cast<Subgraph*>(context->impl_);
auto& resources = subgraph->resources();
resource::CreateHashtableResourceIfNotAvailable(
&resources, resource_id, params->key_dtype, params->value_dtype);
return kTfLiteOk;
} | 140 | True | 1 |
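The commit message repeated in these records makes a distinction for tflite::GetOptionalInputTensor and tflite::GetVariableInput: a null return can be legitimate (the tensor really is optional), so a check is only added where the kernel would otherwise dereference the pointer unconditionally. The sketch below illustrates the optional-input side of that distinction; the op itself, its tensor index constants, and the bias handling are invented for this note.

// Illustrative only: how a kernel might treat optional vs. required tensors.
TfLiteStatus ExamplePrepare(TfLiteContext* context, TfLiteNode* node) {
  // Hypothetical tensor indices for this example op.
  constexpr int kInputTensor = 0;
  constexpr int kOptionalBiasTensor = 1;
  constexpr int kOutputTensor = 0;

  const TfLiteTensor* input = tflite::GetInput(context, node, kInputTensor);
  TfLiteTensor* output = tflite::GetOutput(context, node, kOutputTensor);
  // Required tensors: a null return is always an error.
  TF_LITE_ENSURE(context, input != nullptr);
  TF_LITE_ENSURE(context, output != nullptr);

  // Optional tensor: null simply means "not provided", so branch instead of
  // failing, and only touch its fields inside the non-null branch.
  const TfLiteTensor* bias =
      tflite::GetOptionalInputTensor(context, node, kOptionalBiasTensor);
  if (bias != nullptr) {
    TF_LITE_ENSURE_EQ(context, bias->type, input->type);
  }

  TfLiteIntArray* output_size = TfLiteIntArrayCopy(input->dims);
  return context->ResizeTensor(context, output, output_size);
}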
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::custom::hashtable::PrepareHashtable | tflite::ops::custom::hashtable::PrepareHashtable( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus PrepareHashtable(TfLiteContext* context, TfLiteNode* node) {
TF_LITE_ENSURE_EQ(context, NumInputs(node), 0);
TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);
TF_LITE_ENSURE(context, node->user_data != nullptr);
const auto* params =
reinterpret_cast<const TfLiteHashtableParams*>(node->user_data);
TF_LITE_ENSURE(context, !params->table_name.empty());
TF_LITE_ENSURE(context, (params->key_dtype == kTfLiteInt64 &&
params->value_dtype == kTfLiteString) ||
(params->key_dtype == kTfLiteString &&
params->value_dtype == kTfLiteInt64));
TfLiteTensor* resource_handle_tensor =
GetOutput(context, node, kResourceHandleTensor);
TF_LITE_ENSURE(context, resource_handle_tensor != nullptr);
TF_LITE_ENSURE_EQ(context, resource_handle_tensor->type, kTfLiteInt32);
TfLiteIntArray* outputSize = TfLiteIntArrayCreate(1);
outputSize->data[0] = 1;
return context->ResizeTensor(context, resource_handle_tensor, outputSize);
} | 174 | True | 1 |
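PrepareHashtable above already shows the guarded style: TF_LITE_ENSURE(context, resource_handle_tensor != nullptr) makes preparation fail with an error status instead of dereferencing a bad pointer. Where a kernel fetches several tensors, the check can be centralized in a small helper; the wrapper below is hypothetical, written for this note rather than taken from the TensorFlow Lite API.

// Hypothetical convenience wrapper (not a TFLite API): fetch an output tensor
// and report a kernel error through the context if it cannot be resolved.
TfLiteStatus GetRequiredOutput(TfLiteContext* context, TfLiteNode* node,
                               int index, TfLiteTensor** tensor) {
  *tensor = tflite::GetOutput(context, node, index);
  TF_LITE_ENSURE(context, *tensor != nullptr);
  return kTfLiteOk;
}

// Usage inside a prepare or invoke callback:
//   TfLiteTensor* out = nullptr;
//   TF_LITE_ENSURE_STATUS(GetRequiredOutput(context, node, 0, &out));
//   ... out is guaranteed non-null from here on ...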
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::custom::hashtable::PrepareHashtable | tflite::ops::custom::hashtable::PrepareHashtable( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus PrepareHashtable(TfLiteContext* context, TfLiteNode* node) {
TF_LITE_ENSURE_EQ(context, NumInputs(node), 0);
TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);
TF_LITE_ENSURE(context, node->user_data != nullptr);
const auto* params =
reinterpret_cast<const TfLiteHashtableParams*>(node->user_data);
TF_LITE_ENSURE(context, !params->table_name.empty());
TF_LITE_ENSURE(context, (params->key_dtype == kTfLiteInt64 &&
params->value_dtype == kTfLiteString) ||
(params->key_dtype == kTfLiteString &&
params->value_dtype == kTfLiteInt64));
TfLiteTensor* resource_handle_tensor =
GetOutput(context, node, kResourceHandleTensor);
TF_LITE_ENSURE(context, resource_handle_tensor != nullptr);
TF_LITE_ENSURE_EQ(context, resource_handle_tensor->type, kTfLiteInt32);
TfLiteIntArray* outputSize = TfLiteIntArrayCreate(1);
outputSize->data[0] = 1;
return context->ResizeTensor(context, resource_handle_tensor, outputSize);
} | 174 | True | 1 |
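One further defensive option, complementary to checking each accessor's return value: a kernel's prepare step can validate the node's declared tensor indices up front, which catches a -1 used where no optional tensor is expected before any data pointer is formed. The loop below is a sketch written for this note, not code from the patch series; context->tensors_size appears in the delegate example earlier in these records, while the node->inputs / node->outputs index arrays are assumed from the TFLite C API.

// Defensive sketch: fail preparation early if any tensor index declared by
// this node falls outside the subgraph's tensor array.
TfLiteStatus ValidateNodeTensorIndices(TfLiteContext* context,
                                       const TfLiteNode* node) {
  for (int i = 0; i < node->inputs->size; ++i) {
    const int idx = node->inputs->data[i];
    // Ops that genuinely accept optional inputs would need to allow -1 here.
    TF_LITE_ENSURE(context, idx >= 0);
    TF_LITE_ENSURE(context, static_cast<size_t>(idx) < context->tensors_size);
  }
  for (int i = 0; i < node->outputs->size; ++i) {
    const int idx = node->outputs->data[i];
    TF_LITE_ENSURE(context, idx >= 0);
    TF_LITE_ENSURE(context, static_cast<size_t>(idx) < context->tensors_size);
  }
  return kTfLiteOk;
}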
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::custom::hashtable::EvalHashtableFind | tflite::ops::custom::hashtable::EvalHashtableFind( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus EvalHashtableFind(TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input_resource_id_tensor =
GetInput(context, node, kInputResourceIdTensor);
int resource_id = input_resource_id_tensor->data.i32[0];
const TfLiteTensor* key_tensor = GetInput(context, node, kKeyTensor);
const TfLiteTensor* default_value_tensor =
GetInput(context, node, kDefaultValueTensor);
TfLiteTensor* output_tensor = GetOutput(context, node, 0);
Subgraph* subgraph = reinterpret_cast<Subgraph*>(context->impl_);
auto& resources = subgraph->resources();
auto* lookup = resource::GetHashtableResource(&resources, resource_id);
TF_LITE_ENSURE(context, lookup != nullptr);
TF_LITE_ENSURE_STATUS(
lookup->CheckKeyAndValueTypes(context, key_tensor, output_tensor));
auto result =
lookup->Lookup(context, key_tensor, output_tensor, default_value_tensor);
return result;
} | 160 | True | 1 |
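The record above stores the pre-patch `EvalHashtableFind` kernel, and the commit message in the same record describes the remediation as inserting `nullptr` checks on every `GetInput`/`GetOutput` usage. Below is a minimal C++ sketch of that pattern applied to this kernel; it is not the verbatim upstream patch, and the constants (`kInputResourceIdTensor`, `kKeyTensor`, `kDefaultValueTensor`) plus the `Subgraph`/resource helpers are assumed to come from the surrounding hashtable kernel file.

```cpp
// Illustrative hardening of EvalHashtableFind, following the commit message's
// "insert nullptr checks on all usages" description (not the exact patch).
// Assumes the includes, constants, and helpers of the original kernel file.
TfLiteStatus EvalHashtableFindChecked(TfLiteContext* context, TfLiteNode* node) {
  const TfLiteTensor* input_resource_id_tensor =
      GetInput(context, node, kInputResourceIdTensor);
  TF_LITE_ENSURE(context, input_resource_id_tensor != nullptr);

  const TfLiteTensor* key_tensor = GetInput(context, node, kKeyTensor);
  TF_LITE_ENSURE(context, key_tensor != nullptr);

  const TfLiteTensor* default_value_tensor =
      GetInput(context, node, kDefaultValueTensor);
  TF_LITE_ENSURE(context, default_value_tensor != nullptr);

  TfLiteTensor* output_tensor = GetOutput(context, node, 0);
  TF_LITE_ENSURE(context, output_tensor != nullptr);

  // Only dereference after the checks above have passed.
  const int resource_id = input_resource_id_tensor->data.i32[0];

  Subgraph* subgraph = reinterpret_cast<Subgraph*>(context->impl_);
  auto& resources = subgraph->resources();
  auto* lookup = resource::GetHashtableResource(&resources, resource_id);
  TF_LITE_ENSURE(context, lookup != nullptr);
  TF_LITE_ENSURE_STATUS(
      lookup->CheckKeyAndValueTypes(context, key_tensor, output_tensor));
  return lookup->Lookup(context, key_tensor, output_tensor,
                        default_value_tensor);
}
```

Later revisions of the repository wrap this pattern behind helper functions (for example `GetInputSafe`-style accessors, where available); the sketch keeps the explicit checks so the relationship to the vulnerable code above stays visible.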
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::custom::hashtable::EvalHashtableFind | tflite::ops::custom::hashtable::EvalHashtableFind( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus EvalHashtableFind(TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input_resource_id_tensor =
GetInput(context, node, kInputResourceIdTensor);
int resource_id = input_resource_id_tensor->data.i32[0];
const TfLiteTensor* key_tensor = GetInput(context, node, kKeyTensor);
const TfLiteTensor* default_value_tensor =
GetInput(context, node, kDefaultValueTensor);
TfLiteTensor* output_tensor = GetOutput(context, node, 0);
Subgraph* subgraph = reinterpret_cast<Subgraph*>(context->impl_);
auto& resources = subgraph->resources();
auto* lookup = resource::GetHashtableResource(&resources, resource_id);
TF_LITE_ENSURE(context, lookup != nullptr);
TF_LITE_ENSURE_STATUS(
lookup->CheckKeyAndValueTypes(context, key_tensor, output_tensor));
auto result =
lookup->Lookup(context, key_tensor, output_tensor, default_value_tensor);
return result;
} | 160 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::custom::hashtable::PrepareHashtableFind | tflite::ops::custom::hashtable::PrepareHashtableFind( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus PrepareHashtableFind(TfLiteContext* context, TfLiteNode* node) {
TF_LITE_ENSURE_EQ(context, NumInputs(node), 3);
TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);
const TfLiteTensor* input_resource_id_tensor =
GetInput(context, node, kInputResourceIdTensor);
TF_LITE_ENSURE_EQ(context, input_resource_id_tensor->type, kTfLiteInt32);
TF_LITE_ENSURE_EQ(context, NumDimensions(input_resource_id_tensor), 1);
TF_LITE_ENSURE_EQ(context, SizeOfDimension(input_resource_id_tensor, 0), 1);
const TfLiteTensor* default_value_tensor =
GetInput(context, node, kDefaultValueTensor);
const TfLiteTensor* key_tensor = GetInput(context, node, kKeyTensor);
TfLiteTensor* output_tensor = GetOutput(context, node, kOutputTensor);
TF_LITE_ENSURE_EQ(context, default_value_tensor->type, output_tensor->type);
TF_LITE_ENSURE(context, (key_tensor->type == kTfLiteInt64 &&
output_tensor->type == kTfLiteString) ||
(key_tensor->type == kTfLiteString &&
output_tensor->type == kTfLiteInt64));
return context->ResizeTensor(context, output_tensor,
TfLiteIntArrayCopy(key_tensor->dims));
} | 191 | True | 1 |
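The CVE description repeated in these records explains the underlying mechanism: each operator stores tensor *indices*, which are then used to index the subgraph-owned tensor array, with `-1` reserved as the marker for optional tensors. The simplified sketch below is not the actual library source; it only models that double-indexing pattern to show why an unvalidated `-1` becomes an out-of-bounds access and how a bounds/null guard closes it.

```cpp
#include "tensorflow/lite/c/common.h"

// Simplified model of the double-indexing pattern described in the CVE text.
// The real tflite::GetInput lives in kernel_util; this is only an illustration.
inline const TfLiteTensor* GetInputSketch(TfLiteContext* context,
                                          const TfLiteNode* node, int index) {
  // First index: position within the operator's input list.
  const int tensor_index = node->inputs->data[index];
  // A flatbuffer may legally carry -1 here to mark an optional tensor; without
  // this guard the second index below would read before the tensors array.
  if (tensor_index < 0 ||
      static_cast<size_t>(tensor_index) >= context->tensors_size) {
    return nullptr;
  }
  // Second index: position within the subgraph-owned tensor array.
  return &context->tensors[tensor_index];
}
```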
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::custom::hashtable::PrepareHashtableFind | tflite::ops::custom::hashtable::PrepareHashtableFind( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus PrepareHashtableFind(TfLiteContext* context, TfLiteNode* node) {
TF_LITE_ENSURE_EQ(context, NumInputs(node), 3);
TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);
const TfLiteTensor* input_resource_id_tensor =
GetInput(context, node, kInputResourceIdTensor);
TF_LITE_ENSURE_EQ(context, input_resource_id_tensor->type, kTfLiteInt32);
TF_LITE_ENSURE_EQ(context, NumDimensions(input_resource_id_tensor), 1);
TF_LITE_ENSURE_EQ(context, SizeOfDimension(input_resource_id_tensor, 0), 1);
const TfLiteTensor* default_value_tensor =
GetInput(context, node, kDefaultValueTensor);
const TfLiteTensor* key_tensor = GetInput(context, node, kKeyTensor);
TfLiteTensor* output_tensor = GetOutput(context, node, kOutputTensor);
TF_LITE_ENSURE_EQ(context, default_value_tensor->type, output_tensor->type);
TF_LITE_ENSURE(context, (key_tensor->type == kTfLiteInt64 &&
output_tensor->type == kTfLiteString) ||
(key_tensor->type == kTfLiteString &&
output_tensor->type == kTfLiteInt64));
return context->ResizeTensor(context, output_tensor,
TfLiteIntArrayCopy(key_tensor->dims));
} | 191 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::custom::hashtable::EvalHashtableImport | tflite::ops::custom::hashtable::EvalHashtableImport( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus EvalHashtableImport(TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input_resource_id_tensor =
GetInput(context, node, kInputResourceIdTensor);
const int resource_id = input_resource_id_tensor->data.i32[0];
const TfLiteTensor* key_tensor = GetInput(context, node, kKeyTensor);
const TfLiteTensor* value_tensor = GetInput(context, node, kValueTensor);
Subgraph* subgraph = reinterpret_cast<Subgraph*>(context->impl_);
auto& resources = subgraph->resources();
auto* lookup = resource::GetHashtableResource(&resources, resource_id);
TF_LITE_ENSURE(context, lookup != nullptr);
TF_LITE_ENSURE_STATUS(
lookup->CheckKeyAndValueTypes(context, key_tensor, value_tensor));
// The hashtable resource will only be initialized once, attempting to
// initialize it multiple times will be a no-op.
auto result = lookup->Import(context, key_tensor, value_tensor);
return result;
} | 146 | True | 1 |
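The commit message in these records draws a distinction for `GetVariableInput` and `GetOptionalInputTensor`, whose `nullptr` return can be legitimate: a check is only inserted where the returned pointer would otherwise be dereferenced unconditionally. The hypothetical fragment below illustrates that policy; the function name, input index, and shape check are invented for illustration and do not correspond to a specific kernel.

```cpp
// Hypothetical Prepare-style fragment illustrating the commit's policy for
// optional inputs: only guard when the code would otherwise dereference
// the pointer unconditionally. The input index 2 is an arbitrary example.
TfLiteStatus PrepareWithOptionalBias(TfLiteContext* context, TfLiteNode* node) {
  const TfLiteTensor* bias = GetOptionalInputTensor(context, node, /*index=*/2);
  if (bias != nullptr) {
    // Dereferenced only inside the null check, so no extra guard is needed.
    TF_LITE_ENSURE_EQ(context, NumDimensions(bias), 1);
  }
  // If later code always read bias (e.g. bias->data.f[0]), the policy would
  // be to add: TF_LITE_ENSURE(context, bias != nullptr); before that use.
  return kTfLiteOk;
}
```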
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::custom::hashtable::EvalHashtableImport | tflite::ops::custom::hashtable::EvalHashtableImport( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus EvalHashtableImport(TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input_resource_id_tensor =
GetInput(context, node, kInputResourceIdTensor);
const int resource_id = input_resource_id_tensor->data.i32[0];
const TfLiteTensor* key_tensor = GetInput(context, node, kKeyTensor);
const TfLiteTensor* value_tensor = GetInput(context, node, kValueTensor);
Subgraph* subgraph = reinterpret_cast<Subgraph*>(context->impl_);
auto& resources = subgraph->resources();
auto* lookup = resource::GetHashtableResource(&resources, resource_id);
TF_LITE_ENSURE(context, lookup != nullptr);
TF_LITE_ENSURE_STATUS(
lookup->CheckKeyAndValueTypes(context, key_tensor, value_tensor));
// The hashtable resource will only be initialized once, attempting to
// initialize it multiple times will be a no-op.
auto result = lookup->Import(context, key_tensor, value_tensor);
return result;
} | 146 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::custom::hashtable::PrepareHashtableImport | tflite::ops::custom::hashtable::PrepareHashtableImport( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus PrepareHashtableImport(TfLiteContext* context, TfLiteNode* node) {
TF_LITE_ENSURE_EQ(context, NumInputs(node), 3);
TF_LITE_ENSURE_EQ(context, NumOutputs(node), 0);
const TfLiteTensor* input_resource_id_tensor =
GetInput(context, node, kInputResourceIdTensor);
TF_LITE_ENSURE_EQ(context, input_resource_id_tensor->type, kTfLiteInt32);
TF_LITE_ENSURE_EQ(context, NumDimensions(input_resource_id_tensor), 1);
TF_LITE_ENSURE_EQ(context, SizeOfDimension(input_resource_id_tensor, 0), 1);
const TfLiteTensor* key_tensor = GetInput(context, node, kKeyTensor);
const TfLiteTensor* value_tensor = GetInput(context, node, kValueTensor);
TF_LITE_ENSURE(context, (key_tensor->type == kTfLiteInt64 &&
value_tensor->type == kTfLiteString) ||
(key_tensor->type == kTfLiteString &&
value_tensor->type == kTfLiteInt64));
// TODO(b/144731295): Tensorflow lookup ops support 1-D vector in storing
// values.
TF_LITE_ENSURE(context, HaveSameShapes(key_tensor, value_tensor));
return kTfLiteOk;
} | 163 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::custom::hashtable::PrepareHashtableImport | tflite::ops::custom::hashtable::PrepareHashtableImport( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus PrepareHashtableImport(TfLiteContext* context, TfLiteNode* node) {
TF_LITE_ENSURE_EQ(context, NumInputs(node), 3);
TF_LITE_ENSURE_EQ(context, NumOutputs(node), 0);
const TfLiteTensor* input_resource_id_tensor =
GetInput(context, node, kInputResourceIdTensor);
TF_LITE_ENSURE_EQ(context, input_resource_id_tensor->type, kTfLiteInt32);
TF_LITE_ENSURE_EQ(context, NumDimensions(input_resource_id_tensor), 1);
TF_LITE_ENSURE_EQ(context, SizeOfDimension(input_resource_id_tensor, 0), 1);
const TfLiteTensor* key_tensor = GetInput(context, node, kKeyTensor);
const TfLiteTensor* value_tensor = GetInput(context, node, kValueTensor);
TF_LITE_ENSURE(context, (key_tensor->type == kTfLiteInt64 &&
value_tensor->type == kTfLiteString) ||
(key_tensor->type == kTfLiteString &&
value_tensor->type == kTfLiteInt64));
// TODO(b/144731295): Tensorflow lookup ops support 1-D vector in storing
// values.
TF_LITE_ENSURE(context, HaveSameShapes(key_tensor, value_tensor));
return kTfLiteOk;
} | 163 | True | 1 |
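The record above pairs the commit message with the pre-fix body of PrepareHashtableImport, where every GetInput result is dereferenced without a check. Below is a minimal sketch of the hardening pattern the commit message describes, assuming the usual TensorFlow Lite kernel utilities (TF_LITE_ENSURE, GetInput, HaveSameShapes from tensorflow/lite/kernels/kernel_util.h); it is an illustration of the technique, not necessarily the literal patched code from commit e11f5558.

TfLiteStatus PrepareHashtableImport(TfLiteContext* context, TfLiteNode* node) {
  TF_LITE_ENSURE_EQ(context, NumInputs(node), 3);
  TF_LITE_ENSURE_EQ(context, NumOutputs(node), 0);
  const TfLiteTensor* input_resource_id_tensor =
      GetInput(context, node, kInputResourceIdTensor);
  // GetInput may now return nullptr (e.g. for a malformed tensor index), so
  // reject the model during Prepare instead of dereferencing the pointer.
  TF_LITE_ENSURE(context, input_resource_id_tensor != nullptr);
  TF_LITE_ENSURE_EQ(context, input_resource_id_tensor->type, kTfLiteInt32);
  TF_LITE_ENSURE_EQ(context, NumDimensions(input_resource_id_tensor), 1);
  TF_LITE_ENSURE_EQ(context, SizeOfDimension(input_resource_id_tensor, 0), 1);
  const TfLiteTensor* key_tensor = GetInput(context, node, kKeyTensor);
  TF_LITE_ENSURE(context, key_tensor != nullptr);
  const TfLiteTensor* value_tensor = GetInput(context, node, kValueTensor);
  TF_LITE_ENSURE(context, value_tensor != nullptr);
  TF_LITE_ENSURE(context, (key_tensor->type == kTfLiteInt64 &&
                           value_tensor->type == kTfLiteString) ||
                              (key_tensor->type == kTfLiteString &&
                               value_tensor->type == kTfLiteInt64));
  TF_LITE_ENSURE(context, HaveSameShapes(key_tensor, value_tensor));
  return kTfLiteOk;
}

TF_LITE_ENSURE reports the failure through the context and returns kTfLiteError from the enclosing function, so a hostile model is rejected at preparation time rather than reaching the out-of-bounds access.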
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::custom::hashtable::EvalHashtableSize | tflite::ops::custom::hashtable::EvalHashtableSize( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus EvalHashtableSize(TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input_resource_id_tensor =
GetInput(context, node, kInputResourceIdTensor);
int resource_id = input_resource_id_tensor->data.i32[0];
TfLiteTensor* output_tensor = GetOutput(context, node, kOutputTensor);
auto* output_data = GetTensorData<std::int64_t>(output_tensor);
Subgraph* subgraph = reinterpret_cast<Subgraph*>(context->impl_);
auto& resources = subgraph->resources();
auto* lookup = resource::GetHashtableResource(&resources, resource_id);
TF_LITE_ENSURE(context, lookup != nullptr);
output_data[0] = lookup->Size();
return kTfLiteOk;
} | 127 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::custom::hashtable::EvalHashtableSize | tflite::ops::custom::hashtable::EvalHashtableSize( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus EvalHashtableSize(TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input_resource_id_tensor =
GetInput(context, node, kInputResourceIdTensor);
int resource_id = input_resource_id_tensor->data.i32[0];
TfLiteTensor* output_tensor = GetOutput(context, node, kOutputTensor);
auto* output_data = GetTensorData<std::int64_t>(output_tensor);
Subgraph* subgraph = reinterpret_cast<Subgraph*>(context->impl_);
auto& resources = subgraph->resources();
auto* lookup = resource::GetHashtableResource(&resources, resource_id);
TF_LITE_ENSURE(context, lookup != nullptr);
output_data[0] = lookup->Size();
return kTfLiteOk;
} | 127 | True | 1 |
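The two EvalHashtableSize rows above (one per CWE classification) show the same unguarded pattern at Eval time: input_resource_id_tensor->data.i32[0] and GetTensorData on output_tensor are reached before any null check, while only the resource lookup itself is validated. A hedged sketch of the same nullptr-check pattern applied here, again assuming the standard TFLite kernel utilities rather than reproducing the exact patch:

TfLiteStatus EvalHashtableSize(TfLiteContext* context, TfLiteNode* node) {
  const TfLiteTensor* input_resource_id_tensor =
      GetInput(context, node, kInputResourceIdTensor);
  // Guard before reading data.i32[0] from the resolved tensor.
  TF_LITE_ENSURE(context, input_resource_id_tensor != nullptr);
  int resource_id = input_resource_id_tensor->data.i32[0];
  TfLiteTensor* output_tensor = GetOutput(context, node, kOutputTensor);
  // Guard before taking a typed data pointer into the output tensor.
  TF_LITE_ENSURE(context, output_tensor != nullptr);
  auto* output_data = GetTensorData<std::int64_t>(output_tensor);
  Subgraph* subgraph = reinterpret_cast<Subgraph*>(context->impl_);
  auto& resources = subgraph->resources();
  auto* lookup = resource::GetHashtableResource(&resources, resource_id);
  TF_LITE_ENSURE(context, lookup != nullptr);
  output_data[0] = lookup->Size();
  return kTfLiteOk;
}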
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::custom::hashtable::PrepareHashtableSize | tflite::ops::custom::hashtable::PrepareHashtableSize( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus PrepareHashtableSize(TfLiteContext* context, TfLiteNode* node) {
TF_LITE_ENSURE_EQ(context, NumInputs(node), 1);
TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);
const TfLiteTensor* input_resource_id_tensor =
GetInput(context, node, kInputResourceIdTensor);
TF_LITE_ENSURE_EQ(context, input_resource_id_tensor->type, kTfLiteInt32);
TF_LITE_ENSURE_EQ(context, NumDimensions(input_resource_id_tensor), 1);
TF_LITE_ENSURE_EQ(context, SizeOfDimension(input_resource_id_tensor, 0), 1);
TfLiteTensor* output_tensor = GetOutput(context, node, kOutputTensor);
TF_LITE_ENSURE(context, output_tensor != nullptr);
TF_LITE_ENSURE_EQ(context, output_tensor->type, kTfLiteInt64);
TfLiteIntArray* outputSize = TfLiteIntArrayCreate(1);
outputSize->data[0] = 1;
return context->ResizeTensor(context, output_tensor, outputSize);
} | 150 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::custom::hashtable::PrepareHashtableSize | tflite::ops::custom::hashtable::PrepareHashtableSize( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus PrepareHashtableSize(TfLiteContext* context, TfLiteNode* node) {
TF_LITE_ENSURE_EQ(context, NumInputs(node), 1);
TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);
const TfLiteTensor* input_resource_id_tensor =
GetInput(context, node, kInputResourceIdTensor);
TF_LITE_ENSURE_EQ(context, input_resource_id_tensor->type, kTfLiteInt32);
TF_LITE_ENSURE_EQ(context, NumDimensions(input_resource_id_tensor), 1);
TF_LITE_ENSURE_EQ(context, SizeOfDimension(input_resource_id_tensor, 0), 1);
TfLiteTensor* output_tensor = GetOutput(context, node, kOutputTensor);
TF_LITE_ENSURE(context, output_tensor != nullptr);
TF_LITE_ENSURE_EQ(context, output_tensor->type, kTfLiteInt64);
TfLiteIntArray* outputSize = TfLiteIntArrayCreate(1);
outputSize->data[0] = 1;
return context->ResizeTensor(context, output_tensor, outputSize);
} | 150 | True | 1 |
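PrepareHashtableSize, shown in the two rows above, already guards output_tensor with TF_LITE_ENSURE but still dereferences input_resource_id_tensor unchecked. A sketch of the corresponding one-line addition, under the same assumptions as the earlier sketches:

TfLiteStatus PrepareHashtableSize(TfLiteContext* context, TfLiteNode* node) {
  TF_LITE_ENSURE_EQ(context, NumInputs(node), 1);
  TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);
  const TfLiteTensor* input_resource_id_tensor =
      GetInput(context, node, kInputResourceIdTensor);
  // The output tensor was already guarded; the input needs the same treatment
  // before its type and shape are inspected.
  TF_LITE_ENSURE(context, input_resource_id_tensor != nullptr);
  TF_LITE_ENSURE_EQ(context, input_resource_id_tensor->type, kTfLiteInt32);
  TF_LITE_ENSURE_EQ(context, NumDimensions(input_resource_id_tensor), 1);
  TF_LITE_ENSURE_EQ(context, SizeOfDimension(input_resource_id_tensor, 0), 1);
  TfLiteTensor* output_tensor = GetOutput(context, node, kOutputTensor);
  TF_LITE_ENSURE(context, output_tensor != nullptr);
  TF_LITE_ENSURE_EQ(context, output_tensor->type, kTfLiteInt64);
  TfLiteIntArray* outputSize = TfLiteIntArrayCreate(1);
  outputSize->data[0] = 1;
  return context->ResizeTensor(context, output_tensor, outputSize);
}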
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::GetMutableInput | tflite::GetMutableInput( const TfLiteContext * context , const TfLiteNode * node , int index) | ['context', 'node', 'index'] | inline TfLiteTensor* GetMutableInput(const TfLiteContext* context,
const TfLiteNode* node, int index) {
if (index >= 0 && index < node->inputs->size) {
const int tensor_index = node->inputs->data[index];
if (tensor_index != kTfLiteOptionalTensor) {
if (context->tensors != nullptr) {
return &context->tensors[tensor_index];
} else {
return context->GetTensor(context, tensor_index);
}
}
}
return nullptr;
} | 89 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::GetMutableInput | tflite::GetMutableInput( const TfLiteContext * context , const TfLiteNode * node , int index) | ['context', 'node', 'index'] | inline TfLiteTensor* GetMutableInput(const TfLiteContext* context,
const TfLiteNode* node, int index) {
if (index >= 0 && index < node->inputs->size) {
const int tensor_index = node->inputs->data[index];
if (tensor_index != kTfLiteOptionalTensor) {
if (context->tensors != nullptr) {
return &context->tensors[tensor_index];
} else {
return context->GetTensor(context, tensor_index);
}
}
}
return nullptr;
} | 89 | True | 1 |
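GetMutableInput, shown above, is the shared helper behind the refactored accessors: an out-of-range index or the -1 kTfLiteOptionalTensor sentinel now yields nullptr instead of &context->tensors[-1], which is exactly why the commit message asks every caller to check the result. A hypothetical caller-side wrapper (not part of the TFLite API; the name and signature are invented for illustration) that converts a null result into an error status:

// Hypothetical convenience wrapper, for illustration only: resolve a required
// input tensor and turn a null result into a TfLiteStatus failure.
inline TfLiteStatus GetRequiredInput(TfLiteContext* context,
                                     const TfLiteNode* node, int index,
                                     const TfLiteTensor** tensor) {
  *tensor = GetInput(context, node, index);
  // TF_LITE_ENSURE returns kTfLiteError from this function when the check fails.
  TF_LITE_ENSURE(context, *tensor != nullptr);
  return kTfLiteOk;
}

Inside a kernel's Prepare or Eval this would be used roughly as: declare the pointer, then TF_LITE_ENSURE_OK(context, GetRequiredInput(context, node, kKeyTensor, &key_tensor)). This is similar in spirit to the status-returning safe accessors that later TensorFlow Lite revisions converged on.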
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::GetOutput | tflite::GetOutput( TfLiteContext * context , const TfLiteNode * node , int index) | ['context', 'node', 'index'] | TfLiteTensor* GetOutput(TfLiteContext* context, const TfLiteNode* node,
int index) {
if (index >= 0 && index < node->outputs->size) {
const int tensor_index = node->outputs->data[index];
if (tensor_index != kTfLiteOptionalTensor) {
if (context->tensors != nullptr) {
return &context->tensors[tensor_index];
} else {
return context->GetTensor(context, tensor_index);
}
}
}
return nullptr;
} | 88 | True | 1 |
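The GetOutput helper above is where the -1 sentinel is finally intercepted for output tensors. For readers without a TensorFlow Lite checkout, the following self-contained toy (all names invented for illustration) mimics the double-indexing scheme from the CVE description and shows why an unchecked -1 index becomes an out-of-bounds access at a fixed negative offset, while a guarded accessor degrades to nullptr that the caller must handle:

#include <cstdio>
#include <vector>

// Toy stand-ins for the flatbuffer indexing scheme: an operator stores indices
// into a subgraph-owned tensor array, and -1 marks an optional (absent) tensor.
constexpr int kOptionalTensor = -1;

struct Tensor {
  float value;
};

// Unchecked lookup: with index == -1 this addresses one element before the
// array, i.e. the limited read/write gadget described in the CVE text.
Tensor* GetTensorUnchecked(std::vector<Tensor>& tensors, int index) {
  return &tensors[index];  // undefined behaviour for index == -1
}

// Guarded lookup in the spirit of the patched helpers: the sentinel and any
// out-of-range index produce nullptr instead of an out-of-bounds pointer.
Tensor* GetTensorChecked(std::vector<Tensor>& tensors, int index) {
  if (index == kOptionalTensor) return nullptr;
  if (index < 0 || index >= static_cast<int>(tensors.size())) return nullptr;
  return &tensors[index];
}

int main() {
  std::vector<Tensor> tensors = {{1.0f}, {2.0f}};
  if (Tensor* t = GetTensorChecked(tensors, 1)) {
    std::printf("tensor 1 -> %f\n", t->value);
  }
  if (GetTensorChecked(tensors, kOptionalTensor) == nullptr) {
    std::printf("index -1 rejected instead of dereferenced\n");
  }
  // GetTensorUnchecked(tensors, kOptionalTensor) is deliberately never called:
  // that call would be the out-of-bounds access the patch removes.
  return 0;
}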
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::GetOutput | tflite::GetOutput( TfLiteContext * context , const TfLiteNode * node , int index) | ['context', 'node', 'index'] | TfLiteTensor* GetOutput(TfLiteContext* context, const TfLiteNode* node,
int index) {
if (index >= 0 && index < node->outputs->size) {
const int tensor_index = node->outputs->data[index];
if (tensor_index != kTfLiteOptionalTensor) {
if (context->tensors != nullptr) {
return &context->tensors[tensor_index];
} else {
return context->GetTensor(context, tensor_index);
}
}
}
return nullptr;
} | 88 | True | 1 |
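The GetOutput record above already special-cases kTfLiteOptionalTensor before indexing context->tensors, which is the guard the CVE-2020-15211 description calls for: a flatbuffer model may encode -1 as a tensor index, and an unguarded lookup turns that into an access just before the subgraph-owned tensor array. Below is a minimal, self-contained sketch of that double-indexing scheme with the sentinel and range checks in place; Subgraph, Node, Tensor and GetOutputChecked are stand-in names for illustration, not the real TFLite types or API.

#include <cstdint>
#include <iostream>
#include <vector>

// Stand-in types only; the real TfLiteContext/TfLiteNode/TfLiteTensor differ.
constexpr int kOptionalTensor = -1;  // mirrors the kTfLiteOptionalTensor sentinel

struct Tensor { std::vector<int32_t> data; };

struct Node {
  std::vector<int> output_indices;  // indices into the subgraph-owned tensor array
};

struct Subgraph { std::vector<Tensor> tensors; };

// Double indexing with sentinel and range checks: returns nullptr instead of
// ever computing &tensors[-1] or reading past the end of the array.
Tensor* GetOutputChecked(Subgraph& graph, const Node& node, int index) {
  if (index < 0 || index >= static_cast<int>(node.output_indices.size()))
    return nullptr;
  const int tensor_index = node.output_indices[index];
  if (tensor_index == kOptionalTensor) return nullptr;
  if (tensor_index < 0 || tensor_index >= static_cast<int>(graph.tensors.size()))
    return nullptr;
  return &graph.tensors[tensor_index];
}

int main() {
  Subgraph graph{{Tensor{{0}}}};
  Node node{{kOptionalTensor}};  // a hostile model can encode -1 here
  std::cout << (GetOutputChecked(graph, node, 0) == nullptr) << "\n";  // prints 1
}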
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::profiling::SimpleOpEval | tflite::profiling::SimpleOpEval( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus SimpleOpEval(TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input1 = tflite::GetInput(context, node, /*index=*/0);
const TfLiteTensor* input2 = tflite::GetInput(context, node, /*index=*/1);
TfLiteTensor* output = GetOutput(context, node, /*index=*/0);
int32_t* output_data = output->data.i32;
*output_data = *(input1->data.i32) + *(input2->data.i32);
return kTfLiteOk;
} | 91 | True | 1 |
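SimpleOpEval above dereferences the results of tflite::GetInput and GetOutput without checking them, which is exactly the usage pattern the commit message says the refactoring protects by inserting nullptr checks at every call site. A hedged sketch of that check-before-use pattern follows; Status, Tensor, GetTensorOrNull and AddEval are simplified stand-ins, not the real kernel API.

#include <cstdint>
#include <iostream>

// Simplified stand-ins for TfLiteStatus/TfLiteTensor; the real kernel API differs.
enum Status { kOk, kError };
struct Tensor { int32_t value; };

// Like the refactored tflite::GetInput/GetOutput, this lookup may return nullptr.
Tensor* GetTensorOrNull(Tensor* tensors, int count, int index) {
  return (index >= 0 && index < count) ? &tensors[index] : nullptr;
}

// The check-before-use pattern the commit message describes: every lookup is
// tested before its result is dereferenced.
Status AddEval(Tensor* tensors, int count) {
  Tensor* input1 = GetTensorOrNull(tensors, count, 0);
  Tensor* input2 = GetTensorOrNull(tensors, count, 1);
  Tensor* output = GetTensorOrNull(tensors, count, 2);
  if (input1 == nullptr || input2 == nullptr || output == nullptr) {
    return kError;  // reject the node instead of crashing or corrupting memory
  }
  output->value = input1->value + input2->value;
  return kOk;
}

int main() {
  Tensor t[3] = {{2}, {3}, {0}};
  std::cout << (AddEval(t, 3) == kOk ? t[2].value : -1) << "\n";  // prints 5
  std::cout << (AddEval(t, 2) == kError) << "\n";                 // prints 1
}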
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:38:44-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332517854
Change-Id: Ic27221dd1f0fbe302f311c2fe5a846ed8ff02016 | e11f55585f614645b360563072ffeb5c3eeff162 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::profiling::SimpleOpEval | tflite::profiling::SimpleOpEval( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus SimpleOpEval(TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input1 = tflite::GetInput(context, node, /*index=*/0);
const TfLiteTensor* input2 = tflite::GetInput(context, node, /*index=*/1);
TfLiteTensor* output = GetOutput(context, node, /*index=*/0);
int32_t* output_data = output->data.i32;
*output_data = *(input1->data.i32) + *(input2->data.i32);
return kTfLiteOk;
} | 91 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:44:32-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332518902
Change-Id: I92eb164a6101ac3cca66090061a9b56a97288236 | cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::testing::MockCustom::Invoke | tflite::testing::MockCustom::Invoke( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus MockCustom::Invoke(TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input = tflite::GetInput(context, node, 0);
const int32_t* input_data = input->data.i32;
const TfLiteTensor* weight = tflite::GetInput(context, node, 1);
const uint8_t* weight_data = weight->data.uint8;
TfLiteTensor* output = GetOutput(context, node, 0);
int32_t* output_data = output->data.i32;
output_data[0] =
0; // Catch output tensor sharing memory with an input tensor
output_data[0] = input_data[0] + weight_data[0];
return kTfLiteOk;
} | 116 | True | 1 |
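The advisory's suggested workaround (distinct from the patch itself) is a custom Verifier run at model-loading time that only lets operators known to accept optional inputs use the -1 index, and never for outputs. The sketch below assumes a stand-in OperatorRecord instead of the real flatbuffer accessors, and the allow-list contents are purely illustrative.

#include <iostream>
#include <set>
#include <string>
#include <vector>

// Stand-in for an operator entry parsed from a model; real code would walk the
// flatbuffer schema instead of this struct.
struct OperatorRecord {
  std::string op_name;
  std::vector<int> inputs;   // tensor indices; -1 means "optional / not present"
  std::vector<int> outputs;
};

// Illustrative allow-list only; which operators genuinely accept optional inputs
// has to come from the real op definitions, not from this sketch.
bool AllowsOptionalInputs(const std::string& op_name) {
  static const std::set<std::string> kAllowList = {"LSTM", "FULLY_CONNECTED"};
  return kAllowList.count(op_name) > 0;
}

bool VerifyOperator(const OperatorRecord& op, int num_tensors) {
  for (int idx : op.inputs) {
    if (idx == -1 && !AllowsOptionalInputs(op.op_name)) return false;
    if (idx != -1 && (idx < 0 || idx >= num_tensors)) return false;
  }
  for (int idx : op.outputs) {
    // Output tensors are never optional, so -1 is always rejected here.
    if (idx < 0 || idx >= num_tensors) return false;
  }
  return true;
}

int main() {
  OperatorRecord bad{"ADD", {0, -1}, {2}};   // -1 on an op without optional inputs
  OperatorRecord good{"ADD", {0, 1}, {2}};
  std::cout << VerifyOperator(bad, 3) << " " << VerifyOperator(good, 3) << "\n";  // 0 1
}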
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:44:32-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332518902
Change-Id: I92eb164a6101ac3cca66090061a9b56a97288236 | cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::testing::MockCustom::Invoke | tflite::testing::MockCustom::Invoke( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus MockCustom::Invoke(TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input = tflite::GetInput(context, node, 0);
const int32_t* input_data = input->data.i32;
const TfLiteTensor* weight = tflite::GetInput(context, node, 1);
const uint8_t* weight_data = weight->data.uint8;
TfLiteTensor* output = GetOutput(context, node, 0);
int32_t* output_data = output->data.i32;
output_data[0] =
0; // Catch output tensor sharing memory with an input tensor
output_data[0] = input_data[0] + weight_data[0];
return kTfLiteOk;
} | 116 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:44:32-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332518902
Change-Id: I92eb164a6101ac3cca66090061a9b56a97288236 | cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::testing::SimpleStatefulOp::Invoke | tflite::testing::SimpleStatefulOp::Invoke( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus SimpleStatefulOp::Invoke(TfLiteContext* context,
TfLiteNode* node) {
OpData* data = reinterpret_cast<OpData*>(node->user_data);
*data->invoke_count += 1;
const TfLiteTensor* input = GetInput(context, node, kInputTensor);
const uint8_t* input_data = GetTensorData<uint8_t>(input);
int size = NumElements(input->dims);
uint8_t* sorting_buffer = reinterpret_cast<uint8_t*>(
context->GetScratchBuffer(context, data->sorting_buffer));
  // Copy input data to the sorting buffer. We don't want to mutate the input
  // tensor as it might be used by another node.
for (int i = 0; i < size; i++) {
sorting_buffer[i] = input_data[i];
}
// In place insertion sort on `sorting_buffer`.
for (int i = 1; i < size; i++) {
for (int j = i; j > 0 && sorting_buffer[j] < sorting_buffer[j - 1]; j--) {
std::swap(sorting_buffer[j], sorting_buffer[j - 1]);
}
}
TfLiteTensor* median = GetOutput(context, node, kMedianTensor);
uint8_t* median_data = GetTensorData<uint8_t>(median);
TfLiteTensor* invoke_count = GetOutput(context, node, kInvokeCount);
int32_t* invoke_count_data = GetTensorData<int32_t>(invoke_count);
median_data[0] = sorting_buffer[size / 2];
invoke_count_data[0] = *data->invoke_count;
return kTfLiteOk;
} | 257 | True | 1 |
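SimpleStatefulOp::Invoke above feeds unchecked tensor pointers straight into GetTensorData. One defensive alternative, which is not the approach the commit describes (that patch inserts explicit nullptr checks at each usage site), is a null-safe view so that a failed lookup yields an empty range instead of a crash. Tensor, ByteView and SafeTensorData below are illustrative stand-ins, not TFLite API.

#include <cstdint>
#include <iostream>

// Illustrative stand-ins; the real TfLiteTensor and GetTensorData<> differ.
struct Tensor {
  uint8_t* data;
  int size;
};

// A null-safe view: a failed tensor lookup yields (nullptr, 0), so loops over
// the data simply do no work instead of dereferencing a null pointer.
struct ByteView {
  const uint8_t* data;
  int size;
};

ByteView SafeTensorData(const Tensor* t) {
  if (t == nullptr || t->data == nullptr) return {nullptr, 0};
  return {t->data, t->size};
}

int main() {
  uint8_t raw[3] = {7, 1, 4};
  Tensor ok{raw, 3};
  ByteView v = SafeTensorData(&ok);
  int sum = 0;
  for (int i = 0; i < v.size; ++i) sum += v.data[i];
  std::cout << sum << "\n";                           // prints 12
  std::cout << SafeTensorData(nullptr).size << "\n";  // prints 0, no crash
}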
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:44:32-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332518902
Change-Id: I92eb164a6101ac3cca66090061a9b56a97288236 | cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::testing::SimpleStatefulOp::Invoke | tflite::testing::SimpleStatefulOp::Invoke( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus SimpleStatefulOp::Invoke(TfLiteContext* context,
TfLiteNode* node) {
OpData* data = reinterpret_cast<OpData*>(node->user_data);
*data->invoke_count += 1;
const TfLiteTensor* input = GetInput(context, node, kInputTensor);
const uint8_t* input_data = GetTensorData<uint8_t>(input);
int size = NumElements(input->dims);
uint8_t* sorting_buffer = reinterpret_cast<uint8_t*>(
context->GetScratchBuffer(context, data->sorting_buffer));
  // Copy input data to the sorting buffer. We don't want to mutate the input
  // tensor as it might be used by another node.
for (int i = 0; i < size; i++) {
sorting_buffer[i] = input_data[i];
}
// In place insertion sort on `sorting_buffer`.
for (int i = 1; i < size; i++) {
for (int j = i; j > 0 && sorting_buffer[j] < sorting_buffer[j - 1]; j--) {
std::swap(sorting_buffer[j], sorting_buffer[j - 1]);
}
}
TfLiteTensor* median = GetOutput(context, node, kMedianTensor);
uint8_t* median_data = GetTensorData<uint8_t>(median);
TfLiteTensor* invoke_count = GetOutput(context, node, kInvokeCount);
int32_t* invoke_count_data = GetTensorData<int32_t>(invoke_count);
median_data[0] = sorting_buffer[size / 2];
invoke_count_data[0] = *data->invoke_count;
return kTfLiteOk;
} | 257 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:44:32-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332518902
Change-Id: I92eb164a6101ac3cca66090061a9b56a97288236 | cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::testing::SimpleStatefulOp::Prepare | tflite::testing::SimpleStatefulOp::Prepare( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus SimpleStatefulOp::Prepare(TfLiteContext* context,
TfLiteNode* node) {
OpData* data = reinterpret_cast<OpData*>(node->user_data);
// Make sure that the input is in uint8_t with at least 1 data entry.
const TfLiteTensor* input = tflite::GetInput(context, node, kInputTensor);
if (input->type != kTfLiteUInt8) return kTfLiteError;
if (NumElements(input->dims) == 0) return kTfLiteError;
// Allocate a temporary buffer with the same size of input for sorting.
TF_LITE_ENSURE_STATUS(context->RequestScratchBufferInArena(
context, sizeof(uint8_t) * NumElements(input->dims),
&data->sorting_buffer));
// We can interleave scratch / persistent buffer allocation.
data->invoke_count = reinterpret_cast<int*>(
context->AllocatePersistentBuffer(context, sizeof(int)));
*data->invoke_count = 0;
return kTfLiteOk;
} | 130 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:44:32-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332518902
Change-Id: I92eb164a6101ac3cca66090061a9b56a97288236 | cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::testing::SimpleStatefulOp::Prepare | tflite::testing::SimpleStatefulOp::Prepare( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus SimpleStatefulOp::Prepare(TfLiteContext* context,
TfLiteNode* node) {
OpData* data = reinterpret_cast<OpData*>(node->user_data);
// Make sure that the input is in uint8_t with at least 1 data entry.
const TfLiteTensor* input = tflite::GetInput(context, node, kInputTensor);
if (input->type != kTfLiteUInt8) return kTfLiteError;
if (NumElements(input->dims) == 0) return kTfLiteError;
// Allocate a temporary buffer with the same size of input for sorting.
TF_LITE_ENSURE_STATUS(context->RequestScratchBufferInArena(
context, sizeof(uint8_t) * NumElements(input->dims),
&data->sorting_buffer));
// We can interleave scratch / persistent buffer allocation.
data->invoke_count = reinterpret_cast<int*>(
context->AllocatePersistentBuffer(context, sizeof(int)));
*data->invoke_count = 0;
return kTfLiteOk;
} | 130 | True | 1 |
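The two records above pair the flatbuffer double-indexing description with a Prepare kernel that dereferences the pointer returned by tflite::GetInput without checking it. The following standalone C++ sketch is not TensorFlow code: Tensor, Subgraph and the two accessors are simplified stand-ins, used only to illustrate how an unvalidated -1 "optional tensor" index becomes an out-of-bounds access at a fixed negative offset, and how a guarded getter that returns nullptr instead pushes the decision back to the caller.

// Hypothetical standalone model of the double-indexing scheme described in the
// CVE-2020-15211 records above; Tensor and Subgraph are stand-ins, not TFLite types.
#include <cstdio>
#include <vector>

struct Tensor {
  float data = 0.0f;  // stand-in for the tensor's heap-allocated buffer
};

struct Subgraph {
  std::vector<Tensor> tensors;  // tensor array owned by the subgraph
  std::vector<int> op_inputs;   // operator input indices; -1 marks an optional tensor
};

// Pre-patch shape: -1 is accepted like any other index, so the lookup lands at a
// fixed offset just outside the heap allocation backing `tensors` (never call this
// with a -1 entry; it is shown only to illustrate the out-of-bounds gadget).
Tensor* GetInputUnsafe(Subgraph& g, int op_input) {
  return &g.tensors[g.op_inputs[op_input]];
}

// Guarded shape: -1 and any other out-of-range index yield nullptr, which is the
// contract the patched getter-style helpers move towards.
Tensor* GetInputChecked(Subgraph& g, int op_input) {
  const int idx = g.op_inputs[op_input];
  if (idx < 0 || idx >= static_cast<int>(g.tensors.size())) return nullptr;
  return &g.tensors[idx];
}

int main() {
  Subgraph g;
  g.tensors.resize(2);
  g.op_inputs = {0, -1};  // second input omitted via the -1 sentinel

  Tensor* t = GetInputChecked(g, 1);
  if (t == nullptr) {
    std::printf("optional tensor absent; caller must handle nullptr\n");
  }
  return 0;
}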
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:50:38-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332520146
Change-Id: I405d986cfc653aaafcfdf4162c0acbd46220b921 | fff2c8326280c07733828f990548979bdc893859 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::micro::concatenation::Eval | tflite::ops::micro::concatenation::Eval( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {
TfLiteType output_type = GetOutput(context, node, kOutputTensor)->type;
switch (output_type) { // Already know in/out types are same.
case kTfLiteFloat32:
EvalUnquantized<float>(context, node);
break;
case kTfLiteInt32:
EvalUnquantized<int32_t>(context, node);
break;
case kTfLiteUInt8:
EvalQuantizedUInt8(context, node);
break;
case kTfLiteInt8:
EvalUnquantized<int8_t>(context, node);
break;
case kTfLiteInt64:
EvalUnquantized<int64_t>(context, node);
break;
default:
TF_LITE_KERNEL_LOG(
context, "Op Concatenation does not currently support Type '%s'.",
TfLiteTypeGetName(output_type));
return kTfLiteError;
}
return kTfLiteOk;
} | 124 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:50:38-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332520146
Change-Id: I405d986cfc653aaafcfdf4162c0acbd46220b921 | fff2c8326280c07733828f990548979bdc893859 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::micro::concatenation::Eval | tflite::ops::micro::concatenation::Eval( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {
TfLiteType output_type = GetOutput(context, node, kOutputTensor)->type;
switch (output_type) { // Already know in/out types are same.
case kTfLiteFloat32:
EvalUnquantized<float>(context, node);
break;
case kTfLiteInt32:
EvalUnquantized<int32_t>(context, node);
break;
case kTfLiteUInt8:
EvalQuantizedUInt8(context, node);
break;
case kTfLiteInt8:
EvalUnquantized<int8_t>(context, node);
break;
case kTfLiteInt64:
EvalUnquantized<int64_t>(context, node);
break;
default:
TF_LITE_KERNEL_LOG(
context, "Op Concatenation does not currently support Type '%s'.",
TfLiteTypeGetName(output_type));
return kTfLiteError;
}
return kTfLiteOk;
} | 124 | True | 1 |
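The fix commits quoted in these records describe inserting nullptr checks on every use of tflite::GetInput, tflite::GetOutput and related getters; the Eval function above reads GetOutput(...)->type directly, which is exactly the shape of code the patch guards. The sketch below models that caller-side discipline with hypothetical stand-ins (FakeTensor and GetOutputMaybeNull are not the TFLite API); in the real kernels the inserted check is typically expressed with the TF_LITE_ENSURE-style macros already visible in the Prepare functions in this section.

// Minimal sketch of the caller-side nullptr discipline; FakeTensor and
// GetOutputMaybeNull are hypothetical stand-ins, not the TFLite API.
#include <cstdio>

enum TfStatus { kOk, kError };

struct FakeTensor {
  int type = 0;  // stand-in for TfLiteTensor::type
};

// Stand-in for a getter that, after the refactoring, may return nullptr.
FakeTensor* GetOutputMaybeNull(bool present, FakeTensor* storage) {
  return present ? storage : nullptr;
}

TfStatus EvalLike(bool output_present) {
  FakeTensor storage;
  FakeTensor* output = GetOutputMaybeNull(output_present, &storage);
  if (output == nullptr) {  // the inserted check; without it, `output->type` below
    std::printf("missing output tensor rejected\n");  // would dereference nullptr
    return kError;
  }
  std::printf("output type = %d\n", output->type);
  return kOk;
}

int main() {
  EvalLike(true);   // well-formed model: normal path
  EvalLike(false);  // malformed model: now fails cleanly instead of misbehaving
  return 0;
}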
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:50:38-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332520146
Change-Id: I405d986cfc653aaafcfdf4162c0acbd46220b921 | fff2c8326280c07733828f990548979bdc893859 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::micro::concatenation::Prepare | tflite::ops::micro::concatenation::Prepare( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {
// This function only checks the types. Additional shape validations are
// performed in the reference implementation called during Eval().
const TfLiteConcatenationParams* params =
reinterpret_cast<TfLiteConcatenationParams*>(node->builtin_data);
TfLiteType input_type = GetInput(context, node, 0)->type;
TfLiteType output_type = GetOutput(context, node, kOutputTensor)->type;
// Check activation and input type
TF_LITE_ENSURE_EQ(context, params->activation, kTfLiteActNone);
TF_LITE_ENSURE(context,
input_type == kTfLiteFloat32 || input_type == kTfLiteUInt8 ||
input_type == kTfLiteInt8 || input_type == kTfLiteInt32 ||
input_type == kTfLiteInt64);
// Output type must match input type
TF_LITE_ENSURE_EQ(context, output_type, input_type);
// This implementation does not support large number of input tensors
const int num_inputs = NumInputs(node);
TF_LITE_ENSURE(context, num_inputs <= kMaxInputNum);
// Shapes with dimensions >4 are not yet supported with static allocation.
for (int i = 0; i < num_inputs; ++i) {
const TfLiteTensor* input = GetInput(context, node, i);
int num_dimensions = NumDimensions(input);
if (num_dimensions > 4) {
TF_LITE_KERNEL_LOG(
context,
"Op Concatenation does not currently support num dimensions >4 "
"Tensor has %d dimensions.",
num_dimensions);
return kTfLiteError;
}
}
// Calculate OpData.
TFLITE_DCHECK(node->user_data != nullptr);
OpData* data = static_cast<OpData*>(node->user_data);
TfLiteTensor* output = GetOutput(context, node, kOutputTensor);
switch (output_type) { // Already know in/out types are same.
case kTfLiteFloat32:
case kTfLiteInt32:
case kTfLiteInt64: {
data->params.axis = CalculatePositiveAxis(params->axis, output);
data->params.inputs_count = node->inputs->size;
break;
}
case kTfLiteUInt8:
case kTfLiteInt8: {
data->params.axis = CalculatePositiveAxis(params->axis, output);
data->params.inputs_count = node->inputs->size;
float* input_scales =
reinterpret_cast<float*>(context->AllocatePersistentBuffer(
context, node->inputs->size * sizeof(float)));
int32_t* input_zero_points =
reinterpret_cast<int32_t*>(context->AllocatePersistentBuffer(
context, node->inputs->size * sizeof(int32_t)));
// Allocate persistent scale and zeropoint buffers.
// Store input scale and zero point values in OpParams:
for (int i = 0; i < node->inputs->size; ++i) {
const TfLiteTensor* t = GetInput(context, node, i);
input_scales[i] = t->params.scale;
input_zero_points[i] = t->params.zero_point;
}
data->params.input_scale = input_scales;
data->params.input_zeropoint = input_zero_points;
data->params.output_zeropoint = output->params.zero_point;
data->params.output_scale = output->params.scale;
break;
}
default:
TF_LITE_KERNEL_LOG(
context, "Op Concatenation does not currently support Type '%s'.",
TfLiteTypeGetName(output_type));
return kTfLiteError;
}
return kTfLiteOk;
} | 472 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:50:38-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332520146
Change-Id: I405d986cfc653aaafcfdf4162c0acbd46220b921 | fff2c8326280c07733828f990548979bdc893859 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::micro::concatenation::Prepare | tflite::ops::micro::concatenation::Prepare( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {
// This function only checks the types. Additional shape validations are
// performed in the reference implementation called during Eval().
const TfLiteConcatenationParams* params =
reinterpret_cast<TfLiteConcatenationParams*>(node->builtin_data);
TfLiteType input_type = GetInput(context, node, 0)->type;
TfLiteType output_type = GetOutput(context, node, kOutputTensor)->type;
// Check activation and input type
TF_LITE_ENSURE_EQ(context, params->activation, kTfLiteActNone);
TF_LITE_ENSURE(context,
input_type == kTfLiteFloat32 || input_type == kTfLiteUInt8 ||
input_type == kTfLiteInt8 || input_type == kTfLiteInt32 ||
input_type == kTfLiteInt64);
// Output type must match input type
TF_LITE_ENSURE_EQ(context, output_type, input_type);
// This implementation does not support large number of input tensors
const int num_inputs = NumInputs(node);
TF_LITE_ENSURE(context, num_inputs <= kMaxInputNum);
// Shapes with dimensions >4 are not yet supported with static allocation.
for (int i = 0; i < num_inputs; ++i) {
const TfLiteTensor* input = GetInput(context, node, i);
int num_dimensions = NumDimensions(input);
if (num_dimensions > 4) {
TF_LITE_KERNEL_LOG(
context,
"Op Concatenation does not currently support num dimensions >4 "
"Tensor has %d dimensions.",
num_dimensions);
return kTfLiteError;
}
}
// Calculate OpData.
TFLITE_DCHECK(node->user_data != nullptr);
OpData* data = static_cast<OpData*>(node->user_data);
TfLiteTensor* output = GetOutput(context, node, kOutputTensor);
switch (output_type) { // Already know in/out types are same.
case kTfLiteFloat32:
case kTfLiteInt32:
case kTfLiteInt64: {
data->params.axis = CalculatePositiveAxis(params->axis, output);
data->params.inputs_count = node->inputs->size;
break;
}
case kTfLiteUInt8:
case kTfLiteInt8: {
data->params.axis = CalculatePositiveAxis(params->axis, output);
data->params.inputs_count = node->inputs->size;
float* input_scales =
reinterpret_cast<float*>(context->AllocatePersistentBuffer(
context, node->inputs->size * sizeof(float)));
int32_t* input_zero_points =
reinterpret_cast<int32_t*>(context->AllocatePersistentBuffer(
context, node->inputs->size * sizeof(int32_t)));
// Allocate persistent scale and zeropoint buffers.
// Store input scale and zero point values in OpParams:
for (int i = 0; i < node->inputs->size; ++i) {
const TfLiteTensor* t = GetInput(context, node, i);
input_scales[i] = t->params.scale;
input_zero_points[i] = t->params.zero_point;
}
data->params.input_scale = input_scales;
data->params.input_zeropoint = input_zero_points;
data->params.output_zeropoint = output->params.zero_point;
data->params.output_scale = output->params.scale;
break;
}
default:
TF_LITE_KERNEL_LOG(
context, "Op Concatenation does not currently support Type '%s'.",
TfLiteTypeGetName(output_type));
return kTfLiteError;
}
return kTfLiteOk;
} | 472 | True | 1 |
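The advisory text in these records also suggests, as a workaround, a custom verifier at model-loading time that only accepts the -1 sentinel for operators and input positions that genuinely take optional tensors. The sketch below is a hypothetical standalone validator in that spirit, not TensorFlow's Verifier interface: OpRecord, kOptionalInputs and the op names are illustrative assumptions, and the point is the allow-list shape of the check, plus the outright rejection of -1 for outputs and of any other out-of-range index.

// Hypothetical load-time validator in the spirit of the advisory's workaround;
// OpRecord, kOptionalInputs and the op names are illustrative, not TFLite's schema.
#include <cstdio>
#include <map>
#include <set>
#include <string>
#include <vector>

struct OpRecord {
  std::string name;
  std::vector<int> input_indices;   // -1 marks an omitted optional input
  std::vector<int> output_indices;  // outputs are never optional
};

// Allow-list: for each op, which input positions may legitimately be -1.
const std::map<std::string, std::set<int>> kOptionalInputs = {
    {"FULLY_CONNECTED", {2}},  // e.g. an omittable bias input (illustrative)
};

bool ValidateOp(const OpRecord& op, int tensor_count) {
  for (size_t pos = 0; pos < op.input_indices.size(); ++pos) {
    const int idx = op.input_indices[pos];
    if (idx == -1) {
      auto it = kOptionalInputs.find(op.name);
      if (it == kOptionalInputs.end() ||
          it->second.count(static_cast<int>(pos)) == 0) {
        return false;  // -1 used where no optional input is declared
      }
    } else if (idx < 0 || idx >= tensor_count) {
      return false;  // any other out-of-range index is rejected outright
    }
  }
  for (int idx : op.output_indices) {
    if (idx < 0 || idx >= tensor_count) return false;  // no -1 outputs at all
  }
  return true;
}

int main() {
  OpRecord ok{"FULLY_CONNECTED", {0, 1, -1}, {2}};
  OpRecord bad{"CONCATENATION", {0, -1}, {2}};  // -1 where none is declared optional
  std::printf("ok=%d bad=%d\n", ValidateOp(ok, 3), ValidateOp(bad, 3));
  return 0;
}

As the advisory text itself notes, maintaining an allow-list like this is error-prone, which is why upgrading to the patched releases (1.15.4, 2.0.3, 2.1.2, 2.2.1 or 2.3.1) remains the recommended fix.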
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:56:43-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332521299
Change-Id: I29af455bcb48d0b92e58132d951a3badbd772d56 | 1970c2158b1ffa416d159d03c3370b9a462aee35 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::builtin::activations::EluEval | tflite::ops::builtin::activations::EluEval( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus EluEval(TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input = GetInput(context, node, 0);
TfLiteTensor* output = GetOutput(context, node, 0);
switch (input->type) {
case kTfLiteFloat32: {
optimized_ops::Elu(GetTensorShape(input), GetTensorData<float>(input),
GetTensorShape(output), GetTensorData<float>(output));
return kTfLiteOk;
} break;
case kTfLiteInt8: {
OpData* data = reinterpret_cast<OpData*>(node->user_data);
EvalUsingLookupTable(data, input, output);
return kTfLiteOk;
} break;
default:
TF_LITE_KERNEL_LOG(
context, "Only float32 and int8 is supported currently, got %s.",
TfLiteTypeGetName(input->type));
return kTfLiteError;
}
} | 141 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:56:43-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332521299
Change-Id: I29af455bcb48d0b92e58132d951a3badbd772d56 | 1970c2158b1ffa416d159d03c3370b9a462aee35 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::builtin::activations::EluEval | tflite::ops::builtin::activations::EluEval( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus EluEval(TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input = GetInput(context, node, 0);
TfLiteTensor* output = GetOutput(context, node, 0);
switch (input->type) {
case kTfLiteFloat32: {
optimized_ops::Elu(GetTensorShape(input), GetTensorData<float>(input),
GetTensorShape(output), GetTensorData<float>(output));
return kTfLiteOk;
} break;
case kTfLiteInt8: {
OpData* data = reinterpret_cast<OpData*>(node->user_data);
EvalUsingLookupTable(data, input, output);
return kTfLiteOk;
} break;
default:
TF_LITE_KERNEL_LOG(
context, "Only float32 and int8 is supported currently, got %s.",
TfLiteTypeGetName(input->type));
return kTfLiteError;
}
} | 141 | True | 1 |
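The commit message in this record describes the mitigation only in words; the sketch below shows what the described nullptr checks look like when applied to EluEval. It is a hedged illustration, not the verbatim patch from commit 1970c2158b1ffa416d159d03c3370b9a462aee35: the function name EluEvalChecked is invented for this example, and it assumes the helpers already in scope inside tensorflow/lite/kernels/activations.cc (GetInput, GetOutput, TF_LITE_ENSURE, TF_LITE_KERNEL_LOG, optimized_ops::Elu).

TfLiteStatus EluEvalChecked(TfLiteContext* context, TfLiteNode* node) {
  // After the refactoring, a -1 ("optional tensor") index makes GetInput /
  // GetOutput return nullptr, so reject the node instead of dereferencing
  // an out-of-bounds pointer.
  const TfLiteTensor* input = GetInput(context, node, 0);
  TF_LITE_ENSURE(context, input != nullptr);
  TfLiteTensor* output = GetOutput(context, node, 0);
  TF_LITE_ENSURE(context, output != nullptr);
  switch (input->type) {
    case kTfLiteFloat32:
      optimized_ops::Elu(GetTensorShape(input), GetTensorData<float>(input),
                         GetTensorShape(output), GetTensorData<float>(output));
      return kTfLiteOk;
    case kTfLiteInt8: {
      OpData* data = reinterpret_cast<OpData*>(node->user_data);
      EvalUsingLookupTable(data, input, output);
      return kTfLiteOk;
    }
    default:
      TF_LITE_KERNEL_LOG(context,
                         "Only float32 and int8 is supported currently, got %s.",
                         TfLiteTypeGetName(input->type));
      return kTfLiteError;
  }
}

TF_LITE_ENSURE reports through the context and returns kTfLiteError when its condition fails, so a hostile model is rejected cleanly instead of triggering the out-of-bounds read/write described in the advisory.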
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:56:43-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332521299
Change-Id: I29af455bcb48d0b92e58132d951a3badbd772d56 | 1970c2158b1ffa416d159d03c3370b9a462aee35 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::builtin::activations::EluPrepare | tflite::ops::builtin::activations::EluPrepare( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus EluPrepare(TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input = GetInput(context, node, 0);
TfLiteTensor* output = GetOutput(context, node, 0);
OpData* data = reinterpret_cast<OpData*>(node->user_data);
// Use LUT to handle quantized elu path.
if (input->type == kTfLiteInt8) {
PopulateLookupTable<int8_t>(data, input, output, [](float value) {
return value < 0.0 ? std::exp(value) - 1.0f : value;
});
}
return GenericPrepare(context, node);
} | 113 | True | 1 |
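For context on why a nullptr can appear here at all, the advisory text in these records attributes the bug to the double indexing scheme: the operator stores tensor indices, and GetInput resolves them into the subgraph-owned tensor array, so the special -1 "optional tensor" index resolves to memory just before that array. The snippet below is a deliberately simplified model of that pre-fix lookup; the name GetInputUnchecked and the flattened lookup are assumptions for illustration, while the real helpers live in tensorflow/lite/kernels/kernel_util.h.

// Simplified illustration of the double indexing described in the advisory.
const TfLiteTensor* GetInputUnchecked(const TfLiteContext* context,
                                      const TfLiteNode* node, int i) {
  const int tensor_index = node->inputs->data[i];  // may be -1 ("optional")
  return &context->tensors[tensor_index];          // -1 -> pointer before the array
}

With the refactoring referenced in the commit message, a -1 index is translated to nullptr instead, which is why callers such as EluPrepare above now need an explicit check before touching input->type or output->params.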
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:56:43-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332521299
Change-Id: I29af455bcb48d0b92e58132d951a3badbd772d56 | 1970c2158b1ffa416d159d03c3370b9a462aee35 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::builtin::activations::EluPrepare | tflite::ops::builtin::activations::EluPrepare( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus EluPrepare(TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input = GetInput(context, node, 0);
TfLiteTensor* output = GetOutput(context, node, 0);
OpData* data = reinterpret_cast<OpData*>(node->user_data);
// Use LUT to handle quantized elu path.
if (input->type == kTfLiteInt8) {
PopulateLookupTable<int8_t>(data, input, output, [](float value) {
return value < 0.0 ? std::exp(value) - 1.0f : value;
});
}
return GenericPrepare(context, node);
} | 113 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:56:43-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332521299
Change-Id: I29af455bcb48d0b92e58132d951a3badbd772d56 | 1970c2158b1ffa416d159d03c3370b9a462aee35 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::builtin::activations::GenericPrepare | tflite::ops::builtin::activations::GenericPrepare( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus GenericPrepare(TfLiteContext* context, TfLiteNode* node) {
TF_LITE_ENSURE_EQ(context, NumInputs(node), 1);
TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);
const TfLiteTensor* input = GetInput(context, node, 0);
TfLiteTensor* output = GetOutput(context, node, 0);
TF_LITE_ENSURE_TYPES_EQ(context, input->type, output->type);
return context->ResizeTensor(context, output,
TfLiteIntArrayCopy(input->dims));
} | 93 | True | 1 |
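GenericPrepare is shared by several activation kernels, so it is a natural single place for the check. The sketch below follows the same assumptions as the EluEval example (illustrative name, standard TfLite macros) and is not presented as the exact upstream diff.

TfLiteStatus GenericPrepareChecked(TfLiteContext* context, TfLiteNode* node) {
  TF_LITE_ENSURE_EQ(context, NumInputs(node), 1);
  TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);
  const TfLiteTensor* input = GetInput(context, node, 0);
  TF_LITE_ENSURE(context, input != nullptr);  // guards the -1 index case
  TfLiteTensor* output = GetOutput(context, node, 0);
  TF_LITE_ENSURE(context, output != nullptr);
  TF_LITE_ENSURE_TYPES_EQ(context, input->type, output->type);
  return context->ResizeTensor(context, output,
                               TfLiteIntArrayCopy(input->dims));
}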
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:56:43-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332521299
Change-Id: I29af455bcb48d0b92e58132d951a3badbd772d56 | 1970c2158b1ffa416d159d03c3370b9a462aee35 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::builtin::activations::GenericPrepare | tflite::ops::builtin::activations::GenericPrepare( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus GenericPrepare(TfLiteContext* context, TfLiteNode* node) {
TF_LITE_ENSURE_EQ(context, NumInputs(node), 1);
TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);
const TfLiteTensor* input = GetInput(context, node, 0);
TfLiteTensor* output = GetOutput(context, node, 0);
TF_LITE_ENSURE_TYPES_EQ(context, input->type, output->type);
return context->ResizeTensor(context, output,
TfLiteIntArrayCopy(input->dims));
} | 93 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:56:43-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332521299
Change-Id: I29af455bcb48d0b92e58132d951a3badbd772d56 | 1970c2158b1ffa416d159d03c3370b9a462aee35 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::builtin::activations::HardSwishEval | tflite::ops::builtin::activations::HardSwishEval( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus HardSwishEval(TfLiteContext* context, TfLiteNode* node) {
HardSwishData* data = static_cast<HardSwishData*>(node->user_data);
const TfLiteTensor* input = GetInput(context, node, 0);
TfLiteTensor* output = GetOutput(context, node, 0);
switch (input->type) {
case kTfLiteFloat32: {
if (kernel_type == kReference) {
reference_ops::HardSwish(
GetTensorShape(input), GetTensorData<float>(input),
GetTensorShape(output), GetTensorData<float>(output));
} else {
optimized_ops::HardSwish(
GetTensorShape(input), GetTensorData<float>(input),
GetTensorShape(output), GetTensorData<float>(output));
}
return kTfLiteOk;
} break;
case kTfLiteUInt8: {
HardSwishParams& params = data->params;
if (kernel_type == kReference) {
reference_ops::HardSwish(
params, GetTensorShape(input), GetTensorData<uint8_t>(input),
GetTensorShape(output), GetTensorData<uint8_t>(output));
} else {
optimized_ops::HardSwish(
params, GetTensorShape(input), GetTensorData<uint8_t>(input),
GetTensorShape(output), GetTensorData<uint8_t>(output));
}
return kTfLiteOk;
} break;
case kTfLiteInt8: {
HardSwishParams& params = data->params;
if (kernel_type == kReference) {
reference_ops::HardSwish(
params, GetTensorShape(input), GetTensorData<int8_t>(input),
GetTensorShape(output), GetTensorData<int8_t>(output));
} else {
optimized_ops::HardSwish(
params, GetTensorShape(input), GetTensorData<int8_t>(input),
GetTensorShape(output), GetTensorData<int8_t>(output));
}
return kTfLiteOk;
} break;
default:
TF_LITE_KERNEL_LOG(
context,
"Only float32, uint8 and int8 are supported currently, got %s.",
TfLiteTypeGetName(input->type));
return kTfLiteError;
}
} | 354 | True | 1 |
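Independently of the kernel-level checks, the advisory text quoted in these records suggests a workaround at model-loading time: an extra Verifier that rejects suspicious use of the -1 index before the interpreter runs. The sketch below is stricter than the recommended allow-list approach (it rejects any negative tensor index, which would also reject ops with legitimately optional inputs such as LSTM). The TfLiteVerifier interface is public TensorFlow Lite API, but the class name and the exact flatbuffer schema accessors used here are assumptions based on the generated bindings, not a vetted implementation.

#include "tensorflow/lite/model.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Hypothetical strict verifier: refuse any model whose operators use a
// negative tensor index for inputs or outputs.
class NoNegativeTensorIndexVerifier : public tflite::TfLiteVerifier {
 public:
  bool Verify(const char* data, int length,
              tflite::ErrorReporter* reporter) override {
    const tflite::Model* model = tflite::GetModel(data);
    if (model == nullptr || model->subgraphs() == nullptr) return false;
    for (const tflite::SubGraph* subgraph : *model->subgraphs()) {
      if (subgraph->operators() == nullptr) continue;
      for (const tflite::Operator* op : *subgraph->operators()) {
        if (HasNegativeIndex(op->inputs()) || HasNegativeIndex(op->outputs())) {
          reporter->Report("Model rejected: negative tensor index found.");
          return false;
        }
      }
    }
    return true;
  }

 private:
  static bool HasNegativeIndex(const flatbuffers::Vector<int32_t>* indices) {
    if (indices == nullptr) return false;
    for (int32_t idx : *indices) {
      if (idx < 0) return true;
    }
    return false;
  }
};

A verifier of this kind would be supplied as the extra_verifier argument to FlatBufferModel::VerifyAndBuildFromFile; as the advisory itself notes, maintaining the allow-list variant correctly is error-prone, so upgrading to a patched release (1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1) remains the preferred fix.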
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:56:43-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332521299
Change-Id: I29af455bcb48d0b92e58132d951a3badbd772d56 | 1970c2158b1ffa416d159d03c3370b9a462aee35 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::builtin::activations::HardSwishEval | tflite::ops::builtin::activations::HardSwishEval( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus HardSwishEval(TfLiteContext* context, TfLiteNode* node) {
HardSwishData* data = static_cast<HardSwishData*>(node->user_data);
const TfLiteTensor* input = GetInput(context, node, 0);
TfLiteTensor* output = GetOutput(context, node, 0);
switch (input->type) {
case kTfLiteFloat32: {
if (kernel_type == kReference) {
reference_ops::HardSwish(
GetTensorShape(input), GetTensorData<float>(input),
GetTensorShape(output), GetTensorData<float>(output));
} else {
optimized_ops::HardSwish(
GetTensorShape(input), GetTensorData<float>(input),
GetTensorShape(output), GetTensorData<float>(output));
}
return kTfLiteOk;
} break;
case kTfLiteUInt8: {
HardSwishParams& params = data->params;
if (kernel_type == kReference) {
reference_ops::HardSwish(
params, GetTensorShape(input), GetTensorData<uint8_t>(input),
GetTensorShape(output), GetTensorData<uint8_t>(output));
} else {
optimized_ops::HardSwish(
params, GetTensorShape(input), GetTensorData<uint8_t>(input),
GetTensorShape(output), GetTensorData<uint8_t>(output));
}
return kTfLiteOk;
} break;
case kTfLiteInt8: {
HardSwishParams& params = data->params;
if (kernel_type == kReference) {
reference_ops::HardSwish(
params, GetTensorShape(input), GetTensorData<int8_t>(input),
GetTensorShape(output), GetTensorData<int8_t>(output));
} else {
optimized_ops::HardSwish(
params, GetTensorShape(input), GetTensorData<int8_t>(input),
GetTensorShape(output), GetTensorData<int8_t>(output));
}
return kTfLiteOk;
} break;
default:
TF_LITE_KERNEL_LOG(
context,
"Only float32, uint8 and int8 are supported currently, got %s.",
TfLiteTypeGetName(input->type));
return kTfLiteError;
}
} | 354 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:56:43-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332521299
Change-Id: I29af455bcb48d0b92e58132d951a3badbd772d56 | 1970c2158b1ffa416d159d03c3370b9a462aee35 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::builtin::activations::HardSwishPrepare | tflite::ops::builtin::activations::HardSwishPrepare( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus HardSwishPrepare(TfLiteContext* context, TfLiteNode* node) {
TF_LITE_ENSURE_STATUS(GenericPrepare(context, node));
TfLiteTensor* output = GetOutput(context, node, 0);
if (output->type == kTfLiteUInt8 || output->type == kTfLiteInt8) {
HardSwishData* data = static_cast<HardSwishData*>(node->user_data);
HardSwishParams* params = &data->params;
const TfLiteTensor* input = GetInput(context, node, 0);
params->input_zero_point = input->params.zero_point;
params->output_zero_point = output->params.zero_point;
const float input_scale = input->params.scale;
const float hires_input_scale = (1.0f / 128.0f) * input_scale;
const float reluish_scale = 3.0f / 32768.0f;
const float output_scale = output->params.scale;
const float output_multiplier = hires_input_scale / output_scale;
int32_t output_multiplier_fixedpoint_int32;
QuantizeMultiplier(output_multiplier, &output_multiplier_fixedpoint_int32,
                       &params->output_multiplier_exponent);
DownScaleInt32ToInt16Multiplier(
output_multiplier_fixedpoint_int32,
        &params->output_multiplier_fixedpoint_int16);
TF_LITE_ENSURE(context, params->output_multiplier_exponent <= 0);
const float reluish_multiplier = hires_input_scale / reluish_scale;
int32_t reluish_multiplier_fixedpoint_int32;
QuantizeMultiplier(reluish_multiplier, &reluish_multiplier_fixedpoint_int32,
                       &params->reluish_multiplier_exponent);
DownScaleInt32ToInt16Multiplier(
reluish_multiplier_fixedpoint_int32,
        &params->reluish_multiplier_fixedpoint_int16);
}
return kTfLiteOk;
} | 239 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:56:43-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332521299
Change-Id: I29af455bcb48d0b92e58132d951a3badbd772d56 | 1970c2158b1ffa416d159d03c3370b9a462aee35 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::builtin::activations::HardSwishPrepare | tflite::ops::builtin::activations::HardSwishPrepare( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus HardSwishPrepare(TfLiteContext* context, TfLiteNode* node) {
TF_LITE_ENSURE_STATUS(GenericPrepare(context, node));
TfLiteTensor* output = GetOutput(context, node, 0);
if (output->type == kTfLiteUInt8 || output->type == kTfLiteInt8) {
HardSwishData* data = static_cast<HardSwishData*>(node->user_data);
HardSwishParams* params = &data->params;
const TfLiteTensor* input = GetInput(context, node, 0);
params->input_zero_point = input->params.zero_point;
params->output_zero_point = output->params.zero_point;
const float input_scale = input->params.scale;
const float hires_input_scale = (1.0f / 128.0f) * input_scale;
const float reluish_scale = 3.0f / 32768.0f;
const float output_scale = output->params.scale;
const float output_multiplier = hires_input_scale / output_scale;
int32_t output_multiplier_fixedpoint_int32;
QuantizeMultiplier(output_multiplier, &output_multiplier_fixedpoint_int32,
                       &params->output_multiplier_exponent);
DownScaleInt32ToInt16Multiplier(
output_multiplier_fixedpoint_int32,
        &params->output_multiplier_fixedpoint_int16);
TF_LITE_ENSURE(context, params->output_multiplier_exponent <= 0);
const float reluish_multiplier = hires_input_scale / reluish_scale;
int32_t reluish_multiplier_fixedpoint_int32;
QuantizeMultiplier(reluish_multiplier, &reluish_multiplier_fixedpoint_int32,
                       &params->reluish_multiplier_exponent);
DownScaleInt32ToInt16Multiplier(
reluish_multiplier_fixedpoint_int32,
        &params->reluish_multiplier_fixedpoint_int16);
}
return kTfLiteOk;
} | 239 | True | 1 |
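The commit message recorded in these rows describes the mitigation only in prose: every tensor obtained through `tflite::GetInput` / `tflite::GetOutput` gets a `nullptr` check before it is dereferenced. Below is a minimal sketch of that pattern applied to the `HardSwishPrepare` kernel shown above. It assumes the guard is expressed with the `TF_LITE_ENSURE` macro that this function already uses and that the snippet sits inside `tensorflow/lite/kernels/activations.cc` (so `GenericPrepare` and the needed declarations are in scope); it illustrates the described pattern and is not the verbatim patch from commit 1970c2158b1ffa416d159d03c3370b9a462aee35.

```cpp
// Sketch only: nullptr-guard pattern from the commit message applied to
// HardSwishPrepare. Meant to live in tensorflow/lite/kernels/activations.cc,
// whose existing includes provide GetInput/GetOutput, the TF_LITE_ENSURE*
// macros and GenericPrepare between them.
#include "tensorflow/lite/c/common.h"
#include "tensorflow/lite/kernels/kernel_util.h"

TfLiteStatus HardSwishPrepare(TfLiteContext* context, TfLiteNode* node) {
  TF_LITE_ENSURE_STATUS(GenericPrepare(context, node));
  TfLiteTensor* output = GetOutput(context, node, 0);
  // A crafted model can make tensor index 0 resolve to no tensor at all (the
  // flatbuffer "-1 / optional" indexing described in the CVE), so the pointer
  // is validated before output->type is read.
  TF_LITE_ENSURE(context, output != nullptr);
  if (output->type == kTfLiteUInt8 || output->type == kTfLiteInt8) {
    const TfLiteTensor* input = GetInput(context, node, 0);
    TF_LITE_ENSURE(context, input != nullptr);  // same guard on the input
    // ... quantization-parameter setup exactly as in the function above ...
  }
  return kTfLiteOk;
}
```

Failing through `TF_LITE_ENSURE` keeps the error handling idiomatic for a TFLite kernel: the macro reports the problem through the `TfLiteContext` and returns `kTfLiteError` from the preparing function instead of crashing the process.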
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:56:43-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332521299
Change-Id: I29af455bcb48d0b92e58132d951a3badbd772d56 | 1970c2158b1ffa416d159d03c3370b9a462aee35 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::builtin::activations::LeakyReluEval | tflite::ops::builtin::activations::LeakyReluEval( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus LeakyReluEval(TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input = GetInput(context, node, 0);
TfLiteTensor* output = GetOutput(context, node, 0);
const auto* params =
reinterpret_cast<TfLiteLeakyReluParams*>(node->builtin_data);
const LeakyReluOpData* data =
reinterpret_cast<LeakyReluOpData*>(node->user_data);
LeakyReluParams op_params;
switch (input->type) {
case kTfLiteFloat32: {
op_params.alpha = params->alpha;
optimized_ops::LeakyRelu(
op_params, GetTensorShape(input), GetTensorData<float>(input),
GetTensorShape(output), GetTensorData<float>(output));
return kTfLiteOk;
} break;
case kTfLiteUInt8: {
QuantizeLeakyRelu<uint8_t>(input, output, data);
return kTfLiteOk;
} break;
case kTfLiteInt8: {
QuantizeLeakyRelu<int8_t>(input, output, data);
return kTfLiteOk;
} break;
case kTfLiteInt16: {
QuantizeLeakyRelu<int16_t>(input, output, data);
return kTfLiteOk;
} break;
default:
TF_LITE_KERNEL_LOG(
context,
"Only float32, int8, int16 and uint8 is supported currently, got %s.",
TfLiteTypeGetName(input->type));
return kTfLiteError;
}
} | 218 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:56:43-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332521299
Change-Id: I29af455bcb48d0b92e58132d951a3badbd772d56 | 1970c2158b1ffa416d159d03c3370b9a462aee35 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::builtin::activations::LeakyReluEval | tflite::ops::builtin::activations::LeakyReluEval( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus LeakyReluEval(TfLiteContext* context, TfLiteNode* node) {
const TfLiteTensor* input = GetInput(context, node, 0);
TfLiteTensor* output = GetOutput(context, node, 0);
const auto* params =
reinterpret_cast<TfLiteLeakyReluParams*>(node->builtin_data);
const LeakyReluOpData* data =
reinterpret_cast<LeakyReluOpData*>(node->user_data);
LeakyReluParams op_params;
switch (input->type) {
case kTfLiteFloat32: {
op_params.alpha = params->alpha;
optimized_ops::LeakyRelu(
op_params, GetTensorShape(input), GetTensorData<float>(input),
GetTensorShape(output), GetTensorData<float>(output));
return kTfLiteOk;
} break;
case kTfLiteUInt8: {
QuantizeLeakyRelu<uint8_t>(input, output, data);
return kTfLiteOk;
} break;
case kTfLiteInt8: {
QuantizeLeakyRelu<int8_t>(input, output, data);
return kTfLiteOk;
} break;
case kTfLiteInt16: {
QuantizeLeakyRelu<int16_t>(input, output, data);
return kTfLiteOk;
} break;
default:
TF_LITE_KERNEL_LOG(
context,
"Only float32, int8, int16 and uint8 is supported currently, got %s.",
TfLiteTypeGetName(input->type));
return kTfLiteError;
}
} | 218 | True | 1 |
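The same guard pattern fits `LeakyReluEval` from the rows above: each of the two accessor calls at the top is followed by a check before `input->type` drives the `switch`. As with the previous sketch, this is an illustration of the commit message's description using `TF_LITE_ENSURE`, not the verbatim patch.

```cpp
// Sketch only: guarded preamble for LeakyReluEval inside activations.cc.
TfLiteStatus LeakyReluEval(TfLiteContext* context, TfLiteNode* node) {
  const TfLiteTensor* input = GetInput(context, node, 0);
  TF_LITE_ENSURE(context, input != nullptr);   // reject a null (-1 index) input
  TfLiteTensor* output = GetOutput(context, node, 0);
  TF_LITE_ENSURE(context, output != nullptr);  // and a null output
  // ... LeakyReluParams setup and the switch on input->type as recorded above ...
  return kTfLiteOk;
}
```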
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2020-09-18 13:56:43-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332521299
Change-Id: I29af455bcb48d0b92e58132d951a3badbd772d56 | 1970c2158b1ffa416d159d03c3370b9a462aee35 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::builtin::activations::LeakyReluPrepare | tflite::ops::builtin::activations::LeakyReluPrepare( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus LeakyReluPrepare(TfLiteContext* context, TfLiteNode* node) {
TF_LITE_ENSURE_EQ(context, NumInputs(node), 1);
TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);
const TfLiteTensor* input = GetInput(context, node, 0);
TfLiteTensor* output = GetOutput(context, node, 0);
TF_LITE_ENSURE_TYPES_EQ(context, input->type, output->type);
LeakyReluOpData* data = reinterpret_cast<LeakyReluOpData*>(node->user_data);
if (output->type == kTfLiteUInt8 || output->type == kTfLiteInt8 ||
output->type == kTfLiteInt16) {
const auto* params =
reinterpret_cast<TfLiteLeakyReluParams*>(node->builtin_data);
double alpha_multiplier =
input->params.scale * params->alpha / output->params.scale;
QuantizeMultiplier(alpha_multiplier, &data->output_multiplier_alpha,
&data->output_shift_alpha);
double identity_multiplier = input->params.scale / output->params.scale;
QuantizeMultiplier(identity_multiplier, &data->output_multiplier_identity,
&data->output_shift_identity);
}
return context->ResizeTensor(context, output,
TfLiteIntArrayCopy(input->dims));
} | 210 | True | 1 |
CVE-2020-15211 | False | False | False | False | AV:N/AC:M/Au:N/C:P/I:P/A:N | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | NONE | 5.8 | CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N | NETWORK | HIGH | NONE | NONE | UNCHANGED | LOW | LOW | NONE | 4.8 | MEDIUM | 2.2 | 2.5 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'name': 'https://github.com/tensorflow/tensorflow/commit/e11f55585f614645b360563072ffeb5c3eeff162', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'name': 'https://github.com/tensorflow/tensorflow/commit/cd31fd0ce0449a9e0f83dcad08d6ed7f1d6bef3f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'name': 'https://github.com/tensorflow/tensorflow/commit/46d5b0852528ddfd614ded79bccc75589f801bd9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'name': 'https://github.com/tensorflow/tensorflow/commit/00302787b788c5ff04cb6f62aed5a74d936e86c0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cvpc-8phh-8f45', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'name': 'https://github.com/tensorflow/tensorflow/commit/fff2c8326280c07733828f990548979bdc893859', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.3.1', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'name': 'https://github.com/tensorflow/tensorflow/commit/1970c2158b1ffa416d159d03c3370b9a462aee35', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-10/msg00065.html', 'name': 'openSUSE-SU-2020:1766', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionEndExcluding': '1.15.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.1.0', 'versionEndExcluding': '2.1.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.2.0', 'versionEndExcluding': '2.2.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:lite:*:*:*', 'versionStartIncluding': '2.3.0', 'versionEndExcluding': '2.3.1', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors that is owned by the subgraph. This results in a pattern of double array indexing when trying to get the data of each tensor. However, some operators can have some tensors be optional. To handle this scenario, the flatbuffer model uses a negative `-1` value as index for these tensors. This results in special casing during validation at model loading time. Unfortunately, this means that the `-1` index is a valid tensor index for any operator, including those that don't expect optional inputs and including for output tensors. Thus, this allows writing and reading from outside the bounds of heap allocated arrays, although only at a specific offset from the start of these arrays. This results in both read and write gadgets, albeit very limited in scope. The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83), and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value and only for the tensors that they expect to be optional. Since this allow-list type approach is erro-prone, we advise upgrading to the patched code."}] | 2021-09-16T15:45Z | 2020-09-25T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2020-09-18 13:56:43-07:00 | [tflite]: Insert `nullptr` checks when obtaining tensors.
As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages.
We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`).
PiperOrigin-RevId: 332521299
Change-Id: I29af455bcb48d0b92e58132d951a3badbd772d56 | 1970c2158b1ffa416d159d03c3370b9a462aee35 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::builtin::activations::LeakyReluPrepare | tflite::ops::builtin::activations::LeakyReluPrepare( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus LeakyReluPrepare(TfLiteContext* context, TfLiteNode* node) {
TF_LITE_ENSURE_EQ(context, NumInputs(node), 1);
TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);
const TfLiteTensor* input = GetInput(context, node, 0);
TfLiteTensor* output = GetOutput(context, node, 0);
TF_LITE_ENSURE_TYPES_EQ(context, input->type, output->type);
LeakyReluOpData* data = reinterpret_cast<LeakyReluOpData*>(node->user_data);
if (output->type == kTfLiteUInt8 || output->type == kTfLiteInt8 ||
output->type == kTfLiteInt16) {
const auto* params =
reinterpret_cast<TfLiteLeakyReluParams*>(node->builtin_data);
double alpha_multiplier =
input->params.scale * params->alpha / output->params.scale;
QuantizeMultiplier(alpha_multiplier, &data->output_multiplier_alpha,
&data->output_shift_alpha);
double identity_multiplier = input->params.scale / output->params.scale;
QuantizeMultiplier(identity_multiplier, &data->output_multiplier_identity,
&data->output_shift_identity);
}
return context->ResizeTensor(context, output,
TfLiteIntArrayCopy(input->dims));
} | 210 | True | 1 |
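Finally, `LeakyReluPrepare` already validates arity with `TF_LITE_ENSURE_EQ`, so the pointer guards slot in naturally right after the accessor calls and before `input->type` and `input->params` are touched. Again a hedged sketch of the pattern, not the released patch:

```cpp
// Sketch only: LeakyReluPrepare with the pointer guards added.
TfLiteStatus LeakyReluPrepare(TfLiteContext* context, TfLiteNode* node) {
  TF_LITE_ENSURE_EQ(context, NumInputs(node), 1);
  TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);
  const TfLiteTensor* input = GetInput(context, node, 0);
  TF_LITE_ENSURE(context, input != nullptr);
  TfLiteTensor* output = GetOutput(context, node, 0);
  TF_LITE_ENSURE(context, output != nullptr);
  TF_LITE_ENSURE_TYPES_EQ(context, input->type, output->type);
  // ... quantized-multiplier setup for the uint8/int8/int16 case as above ...
  return context->ResizeTensor(context, output, TfLiteIntArrayCopy(input->dims));
}
```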