cve_id (stringlengths 13-16) | obtain_all_privilege (stringclasses 3 values) | obtain_user_privilege (stringclasses 2 values) | obtain_other_privilege (stringclasses 2 values) | user_interaction_required (stringclasses 3 values) | cvss2_vector_string (stringclasses 106 values) | cvss2_access_vector (stringclasses 4 values) | cvss2_access_complexity (stringclasses 4 values) | cvss2_authentication (stringclasses 3 values) | cvss2_confidentiality_impact (stringclasses 4 values) | cvss2_integrity_impact (stringclasses 4 values) | cvss2_availability_impact (stringclasses 4 values) | cvss2_base_score (stringclasses 50 values) | cvss3_vector_string (stringclasses 226 values) | cvss3_attack_vector (stringclasses 5 values) | cvss3_attack_complexity (stringclasses 3 values) | cvss3_privileges_required (stringclasses 4 values) | cvss3_user_interaction (stringclasses 3 values) | cvss3_scope (stringclasses 3 values) | cvss3_confidentiality_impact (stringclasses 4 values) | cvss3_integrity_impact (stringclasses 4 values) | cvss3_availability_impact (stringclasses 4 values) | cvss3_base_score (stringclasses 55 values) | cvss3_base_severity (stringclasses 5 values) | exploitability_score (stringclasses 22 values) | impact_score (stringclasses 15 values) | ac_insuf_info (stringclasses 3 values) | reference_json (stringlengths 221-23.3k) | problemtype_json (stringclasses 200 values) | severity (stringclasses 4 values) | cve_nodes (stringlengths 2-33.1k) | cve_description (stringlengths 64-1.99k) | cve_last_modified_date (stringlengths 17-17) | cve_published_date (stringlengths 17-17) | cwe_name (stringclasses 125 values) | cwe_description (stringclasses 124 values) | cwe_extended_description (stringclasses 95 values) | cwe_url (stringclasses 124 values) | cwe_is_category (int64 0-1) | commit_author (stringlengths 0-34) | commit_author_date (stringlengths 25-25) | commit_msg (stringlengths 0-13.3k) | commit_hash (stringlengths 40-40) | commit_is_merge (stringclasses 1 value) | repo_name (stringclasses 467 values) | repo_description (stringclasses 459 values) | repo_date_created (stringclasses 467 values) | repo_date_last_push (stringclasses 467 values) | repo_homepage (stringclasses 294 values) | repo_owner (stringclasses 470 values) | repo_stars (stringclasses 406 values) | repo_forks (stringclasses 352 values) | function_name (stringlengths 3-120) | function_signature (stringlengths 6-640) | function_parameters (stringlengths 2-302) | function (stringlengths 12-114k) | function_token_count (stringlengths 1-5) | function_before_change (stringclasses 1 value) | labels (int64 1-1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
CVE-2021-41203 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:P/A:P | LOCAL | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 4.6 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 7.8 | HIGH | 1.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-7pxj-m4jf-r6h2', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-7pxj-m4jf-r6h2', 'refsource': 'CONFIRM', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/368af875869a204b4ac552b9ddda59f6a46a56ec', 'name': 'https://github.com/tensorflow/tensorflow/commit/368af875869a204b4ac552b9ddda59f6a46a56ec', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/abcced051cb1bd8fb05046ac3b6023a7ebcc4578', 'name': 'https://github.com/tensorflow/tensorflow/commit/abcced051cb1bd8fb05046ac3b6023a7ebcc4578', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/e8dc63704c88007ee4713076605c90188d66f3d2', 'name': 'https://github.com/tensorflow/tensorflow/commit/e8dc63704c88007ee4713076605c90188d66f3d2', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/b619c6f865715ca3b15ef1842b5b95edbaa710ad', 'name': 'https://github.com/tensorflow/tensorflow/commit/b619c6f865715ca3b15ef1842b5b95edbaa710ad', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-345'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndExcluding': '2.6.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.5.0', 'versionEndExcluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.4.4', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. In affected versions an attacker can trigger undefined behavior, integer overflows, segfaults and `CHECK`-fail crashes if they can change saved checkpoints from outside of TensorFlow. This is because the checkpoints loading infrastructure is missing validation for invalid file formats. The fixes will be included in TensorFlow 2.7.0. We will also cherrypick these commits on TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in supported range.'}] | 2021-11-09T15:32Z | 2021-11-05T21:15Z | Insufficient Verification of Data Authenticity | The software does not sufficiently verify the origin or authenticity of data, in a way that causes it to accept invalid data. | https://cwe.mitre.org/data/definitions/345.html | 0 | A. Unique TensorFlower | 2021-08-24 16:13:07-07:00 | Use BuildTensorShapeBase when parsing unverified TensorShapes during checkpoint loading.
This avoids crashing when the TensorShape has negative dimensions.
PiperOrigin-RevId: 392769882
Change-Id: Id1f7ae7fcf8142193556af47abfda81b13d3cce4 | b619c6f865715ca3b15ef1842b5b95edbaa710ad | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::checkpoint::TensorSliceReader::LoadShard | tensorflow::checkpoint::TensorSliceReader::LoadShard( int shard) const | ['shard'] | void TensorSliceReader::LoadShard(int shard) const {
CHECK_LT(shard, sss_.size());
if (sss_[shard] || !status_.ok()) {
return; // Already loaded, or invalid.
}
string value;
SavedTensorSlices sts;
const string fname = fnames_[shard];
VLOG(1) << "Reading meta data from file " << fname << "...";
Table* table;
Status s = open_function_(fname, &table);
if (!s.ok()) {
status_ = errors::DataLoss("Unable to open table file ", fname, ": ",
s.ToString());
return;
}
sss_[shard].reset(table);
if (!(table->Get(kSavedTensorSlicesKey, &value) &&
ParseProtoUnlimited(&sts, value))) {
status_ = errors::Internal(
"Failed to find the saved tensor slices at the beginning of the "
"checkpoint file: ",
fname);
return;
}
status_ = CheckVersions(sts.meta().versions(), TF_CHECKPOINT_VERSION,
TF_CHECKPOINT_VERSION_MIN_PRODUCER, "Checkpoint",
"checkpoint");
if (!status_.ok()) return;
for (const SavedSliceMeta& ssm : sts.meta().tensor()) {
TensorShape ssm_shape(ssm.shape());
for (const TensorSliceProto& tsp : ssm.slice()) {
TensorSlice ss_slice(tsp);
status_ = RegisterTensorSlice(ssm.name(), ssm_shape, ssm.type(), fname,
ss_slice, &tensors_);
if (!status_.ok()) return;
}
}
} | 285 | True | 1 |
CVE-2021-41203 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:P/A:P | LOCAL | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 4.6 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 7.8 | HIGH | 1.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-7pxj-m4jf-r6h2', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-7pxj-m4jf-r6h2', 'refsource': 'CONFIRM', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/368af875869a204b4ac552b9ddda59f6a46a56ec', 'name': 'https://github.com/tensorflow/tensorflow/commit/368af875869a204b4ac552b9ddda59f6a46a56ec', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/abcced051cb1bd8fb05046ac3b6023a7ebcc4578', 'name': 'https://github.com/tensorflow/tensorflow/commit/abcced051cb1bd8fb05046ac3b6023a7ebcc4578', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/e8dc63704c88007ee4713076605c90188d66f3d2', 'name': 'https://github.com/tensorflow/tensorflow/commit/e8dc63704c88007ee4713076605c90188d66f3d2', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/b619c6f865715ca3b15ef1842b5b95edbaa710ad', 'name': 'https://github.com/tensorflow/tensorflow/commit/b619c6f865715ca3b15ef1842b5b95edbaa710ad', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-345'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndExcluding': '2.6.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.5.0', 'versionEndExcluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.4.4', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. In affected versions an attacker can trigger undefined behavior, integer overflows, segfaults and `CHECK`-fail crashes if they can change saved checkpoints from outside of TensorFlow. This is because the checkpoints loading infrastructure is missing validation for invalid file formats. The fixes will be included in TensorFlow 2.7.0. We will also cherrypick these commits on TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in supported range.'}] | 2021-11-09T15:32Z | 2021-11-05T21:15Z | Insufficient Verification of Data Authenticity | The software does not sufficiently verify the origin or authenticity of data, in a way that causes it to accept invalid data. | https://cwe.mitre.org/data/definitions/345.html | 0 | A. Unique TensorFlower | 2021-08-24 17:41:37-07:00 | Add BuildTensorSlice for building from unvalidated TensorSliceProtos.
This avoids several sources of crashes and undefined behavior when loading
invalid checkpoints.
PiperOrigin-RevId: 392785704
Change-Id: Icd9713c768b882f3b58b427eddac376060696833 | e8dc63704c88007ee4713076605c90188d66f3d2 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::checkpoint::TensorSliceReader::LoadShard | tensorflow::checkpoint::TensorSliceReader::LoadShard( int shard) const | ['shard'] | void TensorSliceReader::LoadShard(int shard) const {
CHECK_LT(shard, sss_.size());
if (sss_[shard] || !status_.ok()) {
return; // Already loaded, or invalid.
}
string value;
SavedTensorSlices sts;
const string fname = fnames_[shard];
VLOG(1) << "Reading meta data from file " << fname << "...";
Table* table;
Status s = open_function_(fname, &table);
if (!s.ok()) {
status_ = errors::DataLoss("Unable to open table file ", fname, ": ",
s.ToString());
return;
}
sss_[shard].reset(table);
if (!(table->Get(kSavedTensorSlicesKey, &value) &&
ParseProtoUnlimited(&sts, value))) {
status_ = errors::Internal(
"Failed to find the saved tensor slices at the beginning of the "
"checkpoint file: ",
fname);
return;
}
status_ = CheckVersions(sts.meta().versions(), TF_CHECKPOINT_VERSION,
TF_CHECKPOINT_VERSION_MIN_PRODUCER, "Checkpoint",
"checkpoint");
if (!status_.ok()) return;
for (const SavedSliceMeta& ssm : sts.meta().tensor()) {
TensorShape ssm_shape;
status_ = TensorShape::BuildTensorShapeBase(ssm.shape(), &ssm_shape);
if (!status_.ok()) return;
for (const TensorSliceProto& tsp : ssm.slice()) {
TensorSlice ss_slice(tsp);
status_ = RegisterTensorSlice(ssm.name(), ssm_shape, ssm.type(), fname,
ss_slice, &tensors_);
if (!status_.ok()) return;
}
}
} | 305 | True | 1 |
CVE-2021-41197 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-prcg-wp5q-rv7p', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-prcg-wp5q-rv7p', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/a871989d7b6c18cdebf2fb4f0e5c5b62fbc19edf', 'name': 'https://github.com/tensorflow/tensorflow/commit/a871989d7b6c18cdebf2fb4f0e5c5b62fbc19edf', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/d81b1351da3e8c884ff836b64458d94e4a157c15', 'name': 'https://github.com/tensorflow/tensorflow/commit/d81b1351da3e8c884ff836b64458d94e4a157c15', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/issues/46890', 'name': 'https://github.com/tensorflow/tensorflow/issues/46890', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/issues/51908', 'name': 'https://github.com/tensorflow/tensorflow/issues/51908', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/7c1692bd417eb4f9b33ead749a41166d6080af85', 'name': 'https://github.com/tensorflow/tensorflow/commit/7c1692bd417eb4f9b33ead749a41166d6080af85', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-190'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndExcluding': '2.6.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.5.0', 'versionEndExcluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.4.4', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. In affected versions TensorFlow allows tensor to have a large number of dimensions and each dimension can be as large as desired. However, the total number of elements in a tensor must fit within an `int64_t`. If an overflow occurs, `MultiplyWithoutOverflow` would return a negative result. In the majority of TensorFlow codebase this then results in a `CHECK`-failure. Newer constructs exist which return a `Status` instead of crashing the binary. This is similar to CVE-2021-29584. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in supported range.'}] | 2021-11-09T14:30Z | 2021-11-05T20:15Z | Integer Overflow or Wraparound | The software performs a calculation that can produce an integer overflow or wraparound, when the logic assumes that the resulting value will always be larger than the original value. This can introduce other weaknesses when the calculation is used for resource management or execution control. 
| An integer overflow or wraparound occurs when an integer value is incremented to a value that is too large to store in the associated representation. When this occurs, the value may wrap to become a very small or negative number. While this may be intended behavior in circumstances that rely on wrapping, it can have security consequences if the wrap is unexpected. This is especially the case if the integer overflow can be triggered using user-supplied inputs. This becomes security-critical when the result is used to control looping, make a security decision, or determine the offset or size in behaviors such as memory allocation, copying, concatenation, etc.
| https://cwe.mitre.org/data/definitions/190.html | 0 | Yong Tang | 2021-08-31 16:23:54-07:00 | PR #51732: Fix crash of tf.image.crop_and_resize when input is large number
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/51732
This PR is part of the effort in #46890 where
tf.image.crop_and_resize will crash if shape consists of large number.
Signed-off-by: Yong Tang <[email protected]>
Copybara import of the project:
--
c8d87055a56d8740d27ad8bdc74a7459ede6900e by Yong Tang <[email protected]>:
Fix crash of tf.image.crop_and_resize when input is large number
This PR is part of the effort in 46890 where
tf.image.crop_and_resize will crash if shape consists of large number.
Signed-off-by: Yong Tang <[email protected]>
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/51732 from yongtang:46890-tf.image.crop_and_resize c8d87055a56d8740d27ad8bdc74a7459ede6900e
PiperOrigin-RevId: 394109830
Change-Id: If049dad0844df9353722029ee95bc76819eda1f4 | 7c1692bd417eb4f9b33ead749a41166d6080af85 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::CropAndResizeGradImageOp::ComputeAsync | tensorflow::CropAndResizeGradImageOp::ComputeAsync( OpKernelContext * context , DoneCallback done) | ['context', 'done'] | void ComputeAsync(OpKernelContext* context, DoneCallback done) override {
// The shape of 'grads' is [num_boxes, crop_height, crop_width, depth].
const Tensor& grads = context->input(0);
// The shape of 'boxes' is [num_boxes, 4].
const Tensor& boxes = context->input(1);
// The shape of 'box_index' is [num_boxes].
const Tensor& box_index = context->input(2);
// The shape of 'image_size' is [4].
const Tensor& image_size = context->input(3);
// Validate input shapes.
OP_REQUIRES_ASYNC(context, grads.dims() == 4,
errors::InvalidArgument("grads image must be 4-D",
grads.shape().DebugString()),
done);
const int crop_height = grads.dim_size(1);
const int crop_width = grads.dim_size(2);
OP_REQUIRES_ASYNC(
context, crop_height > 0 && crop_width > 0,
errors::InvalidArgument("grads dimensions must be positive"), done);
int num_boxes = 0;
OP_REQUIRES_OK_ASYNC(
context, ParseAndCheckBoxSizes(boxes, box_index, &num_boxes), done);
OP_REQUIRES_ASYNC(
context, grads.dim_size(0) == num_boxes,
errors::InvalidArgument("boxes and grads have incompatible shape"),
done);
OP_REQUIRES_ASYNC(context, image_size.dims() == 1,
errors::InvalidArgument("image_size must be 1-D",
image_size.shape().DebugString()),
done);
OP_REQUIRES_ASYNC(context, image_size.dim_size(0) == 4,
errors::InvalidArgument("image_size must have 4 elements",
image_size.shape().DebugString()),
done);
auto image_size_vec = image_size.vec<int32>();
const int batch_size = internal::SubtleMustCopy(image_size_vec(0));
const int image_height = internal::SubtleMustCopy(image_size_vec(1));
const int image_width = internal::SubtleMustCopy(image_size_vec(2));
const int depth = internal::SubtleMustCopy(image_size_vec(3));
OP_REQUIRES_ASYNC(
context, image_height > 0 && image_width > 0,
errors::InvalidArgument("image dimensions must be positive"), done);
OP_REQUIRES_ASYNC(
context, grads.dim_size(3) == depth,
errors::InvalidArgument("image_size and grads are incompatible"), done);
if (std::is_same<Device, GPUDevice>::value) {
OP_REQUIRES_ASYNC(
context, !OpDeterminismRequired(),
errors::Unimplemented(
"Deterministic GPU implementation of CropAndResizeBackpropImage"
" not available."),
done);
}
// Allocate output tensor.
Tensor* output = nullptr;
OP_REQUIRES_OK_ASYNC(
context,
context->allocate_output(
0, TensorShape({batch_size, image_height, image_width, depth}),
&output),
done);
auto compute_callback = [this, context, output]() {
const Tensor& grads = context->input(0);
const Tensor& boxes = context->input(1);
const Tensor& box_index = context->input(2);
const bool status = functor::CropAndResizeBackpropImage<Device, T>()(
context, grads.tensor<float, 4>(), boxes.tensor<float, 2>(),
box_index.tensor<int32, 1>(), output->tensor<T, 4>(), method_);
if (!status) {
context->SetStatus(errors::Internal(
"Failed to launch CropAndResizeBackpropImage kernel."));
}
};
RunIfBoxIndexIsValid<Device>(context, box_index.tensor<int32, 1>(),
batch_size, std::move(compute_callback),
std::move(done));
} | 599 | True | 1 |
CVE-2021-41197 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-prcg-wp5q-rv7p', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-prcg-wp5q-rv7p', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/a871989d7b6c18cdebf2fb4f0e5c5b62fbc19edf', 'name': 'https://github.com/tensorflow/tensorflow/commit/a871989d7b6c18cdebf2fb4f0e5c5b62fbc19edf', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/d81b1351da3e8c884ff836b64458d94e4a157c15', 'name': 'https://github.com/tensorflow/tensorflow/commit/d81b1351da3e8c884ff836b64458d94e4a157c15', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/issues/46890', 'name': 'https://github.com/tensorflow/tensorflow/issues/46890', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/issues/51908', 'name': 'https://github.com/tensorflow/tensorflow/issues/51908', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/7c1692bd417eb4f9b33ead749a41166d6080af85', 'name': 'https://github.com/tensorflow/tensorflow/commit/7c1692bd417eb4f9b33ead749a41166d6080af85', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-190'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndExcluding': '2.6.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.5.0', 'versionEndExcluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.4.4', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. In affected versions TensorFlow allows tensor to have a large number of dimensions and each dimension can be as large as desired. However, the total number of elements in a tensor must fit within an `int64_t`. If an overflow occurs, `MultiplyWithoutOverflow` would return a negative result. In the majority of TensorFlow codebase this then results in a `CHECK`-failure. Newer constructs exist which return a `Status` instead of crashing the binary. This is similar to CVE-2021-29584. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in supported range.'}] | 2021-11-09T14:30Z | 2021-11-05T20:15Z | Integer Overflow or Wraparound | The software performs a calculation that can produce an integer overflow or wraparound, when the logic assumes that the resulting value will always be larger than the original value. This can introduce other weaknesses when the calculation is used for resource management or execution control. 
| An integer overflow or wraparound occurs when an integer value is incremented to a value that is too large to store in the associated representation. When this occurs, the value may wrap to become a very small or negative number. While this may be intended behavior in circumstances that rely on wrapping, it can have security consequences if the wrap is unexpected. This is especially the case if the integer overflow can be triggered using user-supplied inputs. This becomes security-critical when the result is used to control looping, make a security decision, or determine the offset or size in behaviors such as memory allocation, copying, concatenation, etc.
| https://cwe.mitre.org/data/definitions/190.html | 0 | Yong Tang | 2021-08-31 16:23:54-07:00 | PR #51732: Fix crash of tf.image.crop_and_resize when input is large number
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/51732
This PR is part of the effort in #46890 where
tf.image.crop_and_resize will crash if shape consists of large number.
Signed-off-by: Yong Tang <[email protected]>
Copybara import of the project:
--
c8d87055a56d8740d27ad8bdc74a7459ede6900e by Yong Tang <[email protected]>:
Fix crash of tf.image.crop_and_resize when input is large number
This PR is part of the effort in 46890 where
tf.image.crop_and_resize will crash if shape consists of large number.
Signed-off-by: Yong Tang <[email protected]>
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/51732 from yongtang:46890-tf.image.crop_and_resize c8d87055a56d8740d27ad8bdc74a7459ede6900e
PiperOrigin-RevId: 394109830
Change-Id: If049dad0844df9353722029ee95bc76819eda1f4 | 7c1692bd417eb4f9b33ead749a41166d6080af85 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::CropAndResizeOp::ComputeAsync | tensorflow::CropAndResizeOp::ComputeAsync( OpKernelContext * context , DoneCallback done) | ['context', 'done'] | void ComputeAsync(OpKernelContext* context, DoneCallback done) override {
// The shape of 'image' is [batch_size, image_height, image_width,
// channels].
const Tensor& image = context->input(0);
// The shape of 'boxes' is [num_boxes, 4].
const Tensor& boxes = context->input(1);
// The shape of 'box_index' is [num_boxes].
const Tensor& box_index = context->input(2);
// The shape of 'crop_size' is [2].
const Tensor& crop_size = context->input(3);
// Validate inputs dimensions.
OP_REQUIRES_ASYNC(context, image.dims() == 4,
errors::InvalidArgument("input image must be 4-D",
image.shape().DebugString()),
done);
const int batch_size = image.dim_size(0);
const int image_height = image.dim_size(1);
const int image_width = image.dim_size(2);
const int depth = image.dim_size(3);
OP_REQUIRES_ASYNC(
context, image_height > 0 && image_width > 0,
errors::InvalidArgument("image dimensions must be positive"), done);
int num_boxes = 0;
OP_REQUIRES_OK_ASYNC(
context, ParseAndCheckBoxSizes(boxes, box_index, &num_boxes), done);
OP_REQUIRES_ASYNC(context, crop_size.dims() == 1,
errors::InvalidArgument("crop_size must be 1-D",
crop_size.shape().DebugString()),
done);
OP_REQUIRES_ASYNC(
context, crop_size.dim_size(0) == 2,
errors::InvalidArgument("crop_size must have two elements",
crop_size.shape().DebugString()),
done);
// Copy and validate crop sizes.
auto crop_size_vec = crop_size.vec<int32>();
const int crop_height = internal::SubtleMustCopy(crop_size_vec(0));
const int crop_width = internal::SubtleMustCopy(crop_size_vec(1));
OP_REQUIRES_ASYNC(
context, crop_height > 0 && crop_width > 0,
errors::InvalidArgument("crop dimensions must be positive"), done);
// Allocate output tensor.
Tensor* output = nullptr;
OP_REQUIRES_OK_ASYNC(
context,
context->allocate_output(
0, TensorShape({num_boxes, crop_height, crop_width, depth}),
&output),
done);
auto compute_callback = [this, context, output]() {
const Tensor& image = context->input(0);
const Tensor& boxes = context->input(1);
const Tensor& box_index = context->input(2);
const bool status = functor::CropAndResize<Device, T>()(
context, image.tensor<T, 4>(), boxes.tensor<float, 2>(),
box_index.tensor<int32, 1>(), method_, extrapolation_value_,
output->tensor<float, 4>());
if (!status) {
context->SetStatus(
errors::Internal("Failed to launch CropAndResizeKernel."));
}
};
RunIfBoxIndexIsValid<Device>(context, box_index.tensor<int32, 1>(),
batch_size, std::move(compute_callback),
std::move(done));
} | 514 | True | 1 |
CVE-2021-41225 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 7.8 | HIGH | 1.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/68867bf01239d9e1048f98cbad185bf4761bedd3', 'name': 'https://github.com/tensorflow/tensorflow/commit/68867bf01239d9e1048f98cbad185bf4761bedd3', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-7r94-xv9v-63jw', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-7r94-xv9v-63jw', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-908'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndExcluding': '2.6.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.5.0', 'versionEndExcluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.4.0', 'versionEndExcluding': '2.4.4', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "TensorFlow is an open source platform for machine learning. In affected versions TensorFlow's Grappler optimizer has a use of unitialized variable. If the `train_nodes` vector (obtained from the saved model that gets optimized) does not contain a `Dequeue` node, then `dequeue_node` is left unitialized. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in supported range."}] | 2021-11-10T16:55Z | 2021-11-05T23:15Z | Use of Uninitialized Resource | The software uses or accesses a resource that has not been initialized. | When a resource has not been properly initialized, the software may behave unexpectedly. This may lead to a crash or invalid memory access, but the consequences vary depending on the type of resource and how it is used within the software.
| https://cwe.mitre.org/data/definitions/908.html | 0 | Mihai Maruseac | 2021-09-29 09:20:25-07:00 | Prevent unitialized variable use in grappler.
PiperOrigin-RevId: 399702928
Change-Id: Id7e75451fbff297692dfb687f60ea04b25c96b24 | 68867bf01239d9e1048f98cbad185bf4761bedd3 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::grappler::AutoParallel::Initialize | tensorflow::grappler::AutoParallel::Initialize( const GrapplerItem & item) | ['item'] | Status AutoParallel::Initialize(const GrapplerItem& item) {
num_gpus_ = GetNumAvailableGPUs();
LOG(INFO) << "Number of GPUs: " << num_gpus_;
item_ = &item;
graph_ = item.graph;
LOG(INFO) << "Original graph size: " << graph_.node_size();
if (item.fetch.empty()) {
return Status(error::INVALID_ARGUMENT, "No fetch nodes provided.");
}
if (item.MainVariables().empty()) {
return Status(error::INVALID_ARGUMENT, "No variables provided.");
}
for (const auto& init : item.init_ops) {
VLOG(1) << "Init node: " << init;
}
for (const auto& fetch : item.fetch) {
VLOG(1) << "Fetch node: " << fetch;
}
for (const auto& var : item.MainVariables()) {
VLOG(2) << "Variable: " << var->name();
}
const std::set<string> apply_gradients_ops = {"ApplyGradientDescent",
"ApplyProximalGradientDescent",
"ApplyAdadelta",
"ApplyAdagrad",
"ApplyProximalAdagrad",
"ApplyAdagradDA",
"ApplyFtrl",
"ApplyMomentum",
"ApplyAdam",
"ApplyRMSProp",
"ApplyCenteredRMSProp"};
for (int i = 0; i < graph_.node_size(); i++) {
all_nodes_.insert(
std::make_pair(graph_.node(i).name(), graph_.mutable_node(i)));
if (apply_gradients_ops.find(graph_.node(i).op()) !=
apply_gradients_ops.end()) {
apply_gradients_nodes_.insert(graph_.node(i).name());
VLOG(2) << "Apply gradients node: " << graph_.node(i).name();
}
}
auto div_const_node = AddNodeDivConst();
all_nodes_.insert(std::make_pair(div_const_node->name(), div_const_node));
std::map<string, int> gradient_pos = {{"ApplyGradientDescent", 2},
{"ApplyProximalGradientDescent", 4},
{"ApplyAdadelta", 6},
{"ApplyAdagrad", 3},
{"ApplyProximalAdagrad", 5},
{"ApplyAdagradDA", 3},
{"ApplyFtrl", 3},
{"ApplyMomentum", 3},
{"ApplyAdam", 9},
{"ApplyRMSProp", 7},
{"ApplyCenteredRMSProp", 8}};
for (const auto& apply_gradient_node_name : apply_gradients_nodes_) {
auto apply_gradients_op = all_nodes_[apply_gradient_node_name]->op();
auto apply_gradients_node = all_nodes_[apply_gradient_node_name];
auto div_node = AddNodeDiv(
apply_gradient_node_name,
apply_gradients_node->input(gradient_pos[apply_gradients_op]),
div_const_node->name());
all_nodes_.insert(std::make_pair(div_node->name(), div_node));
*apply_gradients_node->mutable_input(gradient_pos[apply_gradients_op]) =
div_node->name();
}
LOG(INFO) << "Graph size after adding div nodes: " << all_nodes_.size();
std::vector<const NodeDef*> train_nodes;
TF_RETURN_IF_ERROR(ComputeTransitiveFanin(graph_, item.fetch, &train_nodes));
LOG(INFO) << "Number of training nodes: " << train_nodes.size();
const NodeDef* dequeue_node;
for (const auto& train_node : train_nodes) {
if (IsDequeueOp(*train_node)) {
dequeue_node = train_node;
break;
}
}
std::vector<const NodeDef*> input_nodes;
if (dequeue_node) {
LOG(INFO) << "Dequeue node: " << dequeue_node->name();
TF_RETURN_IF_ERROR(ComputeTransitiveFanin(graph_, {dequeue_node->name()},
{}, &input_nodes));
}
LOG(INFO) << "Number of input nodes: " << input_nodes.size();
std::set<string> dont_replicate_nodes;
for (const auto& variable : item.MainVariables()) {
dont_replicate_nodes.insert(variable->name());
}
for (const auto& init : item.init_ops) {
dont_replicate_nodes.insert(NodeName(init));
}
// Don't replicate all input nodes, except the dequeue node.
for (const auto& input_node : input_nodes) {
if (input_node->name() != dequeue_node->name()) {
dont_replicate_nodes.insert(input_node->name());
}
}
for (const auto& node : train_nodes) {
if (dont_replicate_nodes.find(node->name()) == dont_replicate_nodes.end()) {
replica_nodes_.insert(node->name());
}
}
LOG(INFO) << "Number of replica nodes: " << replica_nodes_.size();
for (const auto& node : all_nodes_) {
if (replica_nodes_.find(node.first) == replica_nodes_.end()) {
shared_nodes_.insert(node.first);
}
}
LOG(INFO) << "Number of shared nodes: " << shared_nodes_.size();
return Status::OK();
} | 883 | True | 1 |
CVE-2021-41223 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:N/A:P | LOCAL | LOW | NONE | PARTIAL | NONE | PARTIAL | 3.6 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | HIGH | NONE | HIGH | 7.1 | HIGH | 1.8 | 5.2 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/aab9998916c2ffbd8f0592059fad352622f89cda', 'name': 'https://github.com/tensorflow/tensorflow/commit/aab9998916c2ffbd8f0592059fad352622f89cda', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-f54p-f6jp-4rhr', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-f54p-f6jp-4rhr', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndExcluding': '2.6.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.5.0', 'versionEndExcluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.4.4', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. In affected versions the implementation of `FusedBatchNorm` kernels is vulnerable to a heap OOB access. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in supported range.'}] | 2021-11-09T15:27Z | 2021-11-05T21:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Reed Wanderman-Milne | 2021-09-29 13:00:50-07:00 | Add shape checks to FusedBatchNorm kernels.
PiperOrigin-RevId: 399755576
Change-Id: If8049fde109cc33badb5509d174b9b95aee1ea5e | aab9998916c2ffbd8f0592059fad352622f89cda | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::FusedBatchNormOpBase::ComputeWithReservedSpace | tensorflow::FusedBatchNormOpBase::ComputeWithReservedSpace( OpKernelContext * context , bool use_reserved_space) | ['context', 'use_reserved_space'] | virtual void ComputeWithReservedSpace(OpKernelContext* context,
bool use_reserved_space) {
Tensor x = context->input(0);
const Tensor& scale = context->input(1);
const Tensor& offset = context->input(2);
const Tensor& estimated_mean = context->input(3);
const Tensor& estimated_variance = context->input(4);
const Tensor* side_input = has_side_input_ ? &context->input(5) : nullptr;
OP_REQUIRES(context, x.dims() == 4 || x.dims() == 5,
errors::InvalidArgument("input must be 4 or 5-dimensional",
x.shape().DebugString()));
OP_REQUIRES(context, scale.dims() == 1,
errors::InvalidArgument("scale must be 1-dimensional",
scale.shape().DebugString()));
OP_REQUIRES(context, offset.dims() == 1,
errors::InvalidArgument("offset must be 1-dimensional",
offset.shape().DebugString()));
OP_REQUIRES(context, estimated_mean.dims() == 1,
errors::InvalidArgument("estimated_mean must be 1-dimensional",
estimated_mean.shape().DebugString()));
OP_REQUIRES(
context, estimated_variance.dims() == 1,
errors::InvalidArgument("estimated_variance must be 1-dimensional",
estimated_variance.shape().DebugString()));
bool use_reshape = (x.dims() == 5);
auto x_shape = x.shape();
TensorShape dest_shape;
if (use_reshape) {
const int64_t in_batch = GetTensorDim(x, tensor_format_, 'N');
int64_t in_planes = GetTensorDim(x, tensor_format_, '0');
int64_t in_rows = GetTensorDim(x, tensor_format_, '1');
int64_t in_cols = GetTensorDim(x, tensor_format_, '2');
const int64_t in_depth = GetTensorDim(x, tensor_format_, 'C');
dest_shape = ShapeFromFormat(tensor_format_, in_batch,
{{in_planes, in_rows * in_cols}}, in_depth);
OP_REQUIRES(context, x.CopyFrom(x, dest_shape),
errors::InvalidArgument("Error during tensor copy."));
}
const auto num_channels = GetTensorDim(x, tensor_format_, 'C');
OP_REQUIRES(
context, scale.NumElements() == num_channels,
errors::InvalidArgument("scale must have the same number of elements "
"as the channels of x, got ",
scale.NumElements(), " and ", num_channels));
OP_REQUIRES(
context, offset.NumElements() == num_channels,
errors::InvalidArgument("offset must have the same number of elements "
"as the channels of x, got ",
offset.NumElements(), " and ", num_channels));
if (estimated_mean.NumElements() != 0) {
OP_REQUIRES(context, estimated_mean.NumElements() == num_channels,
errors::InvalidArgument(
"mean must be empty or have the same number of "
"elements as the channels of x, got ",
estimated_mean.NumElements(), " and ", num_channels));
}
if (estimated_variance.NumElements() != 0) {
OP_REQUIRES(context, estimated_variance.NumElements() == num_channels,
errors::InvalidArgument(
"variance must be empty or have the same number of "
"elements as the channels of x, got ",
estimated_variance.NumElements(), " and ", num_channels));
}
if (has_side_input_) {
OP_REQUIRES(context, side_input->shape() == x.shape(),
errors::InvalidArgument(
"side_input shape must be equal to input shape: ",
side_input->shape().DebugString(),
" != ", x.shape().DebugString()));
}
if (activation_mode_ != FbnActivationMode::kIdentity) {
// NOTE(ezhulenev): This requirement is coming from implementation
// details of cudnnBatchNormalizationForwardTrainingEx.
OP_REQUIRES(
context, !is_training_ || num_channels % 4 == 0,
errors::InvalidArgument("FusedBatchNorm with activation requires "
"channel dimension to be a multiple of 4."));
}
Tensor* y = nullptr;
auto alloc_shape = use_reshape ? dest_shape : x_shape;
OP_REQUIRES_OK(context, context->forward_input_or_allocate_output(
{0}, 0, alloc_shape, &y));
Tensor* batch_mean = nullptr;
OP_REQUIRES_OK(context, context->forward_input_or_allocate_output(
{3}, 1, scale.shape(), &batch_mean));
Tensor* batch_var = nullptr;
OP_REQUIRES_OK(context, context->forward_input_or_allocate_output(
{4}, 2, scale.shape(), &batch_var));
Tensor* saved_mean = nullptr;
OP_REQUIRES_OK(context,
context->allocate_output(3, scale.shape(), &saved_mean));
Tensor* saved_maybe_inv_var = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(4, scale.shape(),
&saved_maybe_inv_var));
if (is_training_) {
functor::FusedBatchNorm<Device, T, U, true>()(
context, x, scale, offset, estimated_mean, estimated_variance,
side_input, epsilon_, exponential_avg_factor_, activation_mode_, y,
batch_mean, batch_var, saved_mean, saved_maybe_inv_var,
tensor_format_, use_reserved_space);
} else {
functor::FusedBatchNorm<Device, T, U, false>()(
context, x, scale, offset, estimated_mean, estimated_variance,
side_input, epsilon_, exponential_avg_factor_, activation_mode_, y,
batch_mean, batch_var, saved_mean, saved_maybe_inv_var,
tensor_format_, use_reserved_space);
}
if (use_reshape) {
OP_REQUIRES(context, y->CopyFrom(*y, x_shape),
errors::InvalidArgument("Error during tensor copy."));
}
} | 913 | True | 1 |
CVE-2021-41220 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:P/A:P | LOCAL | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 4.6 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 7.8 | HIGH | 1.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-gpfh-jvf9-7wg5', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-gpfh-jvf9-7wg5', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/ca38dab9d3ee66c5de06f11af9a4b1200da5ef75', 'name': 'https://github.com/tensorflow/tensorflow/commit/ca38dab9d3ee66c5de06f11af9a4b1200da5ef75', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-416'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndExcluding': '2.6.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. In affected versions the async implementation of `CollectiveReduceV2` suffers from a memory leak and a use after free. This occurs due to the asynchronous computation and the fact that objects that have been `std::move()`d from are still accessed. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, as this version is the only one that is also affected.'}] | 2021-11-10T13:16Z | 2021-11-05T23:15Z | Use After Free | Referencing memory after it has been freed can cause a program to crash, use unexpected values, or execute code. |
The use of previously-freed memory can have any number of adverse consequences, ranging from the corruption of valid data to the execution of arbitrary code, depending on the instantiation and timing of the flaw. The simplest way data corruption may occur involves the system's reuse of the freed memory. Use-after-free errors have two common and sometimes overlapping causes:
Error conditions and other exceptional circumstances.
Confusion over which part of the program is responsible for freeing the memory.
In this scenario, the memory in question is allocated to another pointer validly at some point after it has been freed. The original pointer to the freed memory is used again and points to somewhere within the new allocation. As the data is changed, it corrupts the validly used memory; this induces undefined behavior in the process.
If the newly allocated data chances to hold a class, in C++ for example, various function pointers may be scattered within the heap data. If one of these function pointers is overwritten with an address to valid shellcode, execution of arbitrary code can be achieved.
| https://cwe.mitre.org/data/definitions/416.html | 0 | Ran Chen | 2021-10-04 16:09:45-07:00 | Fix undefined behavior in CollectiveReduceV2 and others
We should not call done after it's moved.
PiperOrigin-RevId: 400838185
Change-Id: Ifc979740054b8f8c6f4d50acc89472fe60c4fdb1 | ca38dab9d3ee66c5de06f11af9a4b1200da5ef75 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::CollectiveAllToAllV3OpKernel::ComputeAsync | tensorflow::CollectiveAllToAllV3OpKernel::ComputeAsync( OpKernelContext * c , DoneCallback done) | ['c', 'done'] | void ComputeAsync(OpKernelContext* c, DoneCallback done) override {
auto col_params = new CollectiveParams();
auto done_with_cleanup = [col_params, done = std::move(done)]() {
done();
col_params->Unref();
};
core::RefCountPtr<CollectiveGroupResource> resource;
OP_REQUIRES_OK_ASYNC(c, LookupResource(c, HandleFromInput(c, 1), &resource),
done);
Tensor group_assignment = c->input(2);
OP_REQUIRES_OK_ASYNC(
c,
FillCollectiveParams(col_params, group_assignment,
ALL_TO_ALL_COLLECTIVE, resource.get()),
done);
col_params->instance.shape = c->input(0).shape();
VLOG(1) << "CollectiveAllToAll group_size " << col_params->group.group_size
<< " group_key " << col_params->group.group_key << " instance_key "
<< col_params->instance.instance_key;
// Allocate the output tensor, trying to reuse the input.
Tensor* output = nullptr;
OP_REQUIRES_OK_ASYNC(c,
c->forward_input_or_allocate_output(
{0}, 0, col_params->instance.shape, &output),
done_with_cleanup);
Run(c, col_params, std::move(done_with_cleanup));
} | 211 | True | 1 |
CVE-2021-41220 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:P/A:P | LOCAL | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 4.6 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 7.8 | HIGH | 1.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-gpfh-jvf9-7wg5', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-gpfh-jvf9-7wg5', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/ca38dab9d3ee66c5de06f11af9a4b1200da5ef75', 'name': 'https://github.com/tensorflow/tensorflow/commit/ca38dab9d3ee66c5de06f11af9a4b1200da5ef75', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-416'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndExcluding': '2.6.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. In affected versions the async implementation of `CollectiveReduceV2` suffers from a memory leak and a use after free. This occurs due to the asynchronous computation and the fact that objects that have been `std::move()`d from are still accessed. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, as this version is the only one that is also affected.'}] | 2021-11-10T13:16Z | 2021-11-05T23:15Z | Use After Free | Referencing memory after it has been freed can cause a program to crash, use unexpected values, or execute code. |
The use of previously-freed memory can have any number of adverse consequences, ranging from the corruption of valid data to the execution of arbitrary code, depending on the instantiation and timing of the flaw. The simplest way data corruption may occur involves the system's reuse of the freed memory. Use-after-free errors have two common and sometimes overlapping causes:
Error conditions and other exceptional circumstances.
Confusion over which part of the program is responsible for freeing the memory.
In this scenario, the memory in question is allocated to another pointer validly at some point after it has been freed. The original pointer to the freed memory is used again and points to somewhere within the new allocation. As the data is changed, it corrupts the validly used memory; this induces undefined behavior in the process.
If the newly allocated data chances to hold a class, in C++ for example, various function pointers may be scattered within the heap data. If one of these function pointers is overwritten with an address to valid shellcode, execution of arbitrary code can be achieved.
| https://cwe.mitre.org/data/definitions/416.html | 0 | Ran Chen | 2021-10-04 16:09:45-07:00 | Fix undefined behavior in CollectiveReduceV2 and others
We should not call done after it's moved.
PiperOrigin-RevId: 400838185
Change-Id: Ifc979740054b8f8c6f4d50acc89472fe60c4fdb1 | ca38dab9d3ee66c5de06f11af9a4b1200da5ef75 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::CollectiveInitializeCommunicatorOpKernel::CheckInputs | tensorflow::CollectiveInitializeCommunicatorOpKernel::CheckInputs( Tensor group_size_t , Tensor group_key_t) | ['group_size_t', 'group_key_t'] | Status CheckInputs(Tensor group_size_t, Tensor group_key_t) {
if (group_size_t.dims() > 0) {
return errors::Internal(
"Unexpected dimensions on input group_size. "
"It shoulbe a scalar, got tensor with shape ",
group_size_t.shape().DebugString());
}
if (group_key_t.dims() > 0) {
return errors::Internal("Unexpected dimensions on input group_key, got ",
group_key_t.shape().DebugString());
}
auto group_size = group_size_t.unaligned_flat<int32>()(0);
if (group_size <= 0) {
return errors::InvalidArgument(
"group_size must be positive integer but got ", group_size);
}
return Status::OK();
} | 111 | True | 1 |
CVE-2021-41220 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:P/A:P | LOCAL | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 4.6 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 7.8 | HIGH | 1.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-gpfh-jvf9-7wg5', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-gpfh-jvf9-7wg5', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/ca38dab9d3ee66c5de06f11af9a4b1200da5ef75', 'name': 'https://github.com/tensorflow/tensorflow/commit/ca38dab9d3ee66c5de06f11af9a4b1200da5ef75', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-416'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndExcluding': '2.6.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. In affected versions the async implementation of `CollectiveReduceV2` suffers from a memory leak and a use after free. This occurs due to the asynchronous computation and the fact that objects that have been `std::move()`d from are still accessed. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, as this version is the only one that is also affected.'}] | 2021-11-10T13:16Z | 2021-11-05T23:15Z | Use After Free | Referencing memory after it has been freed can cause a program to crash, use unexpected values, or execute code. |
The use of previously-freed memory can have any number of adverse consequences, ranging from the corruption of valid data to the execution of arbitrary code, depending on the instantiation and timing of the flaw. The simplest way data corruption may occur involves the system's reuse of the freed memory. Use-after-free errors have two common and sometimes overlapping causes:
Error conditions and other exceptional circumstances.
Confusion over which part of the program is responsible for freeing the memory.
In this scenario, the memory in question is allocated to another pointer validly at some point after it has been freed. The original pointer to the freed memory is used again and points to somewhere within the new allocation. As the data is changed, it corrupts the validly used memory; this induces undefined behavior in the process.
If the newly allocated data chances to hold a class, in C++ for example, various function pointers may be scattered within the heap data. If one of these function pointers is overwritten with an address to valid shellcode, execution of arbitrary code can be achieved.
| https://cwe.mitre.org/data/definitions/416.html | 0 | Ran Chen | 2021-10-04 16:09:45-07:00 | Fix undefined behavior in CollectiveReduceV2 and others
We should not call done after it's moved.
PiperOrigin-RevId: 400838185
Change-Id: Ifc979740054b8f8c6f4d50acc89472fe60c4fdb1 | ca38dab9d3ee66c5de06f11af9a4b1200da5ef75 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::CollectiveOpV2Kernel::FillCollectiveParams | tensorflow::CollectiveOpV2Kernel::FillCollectiveParams( CollectiveParams * col_params , CollectiveType collective_type , const Tensor & group_size , const Tensor & group_key , const Tensor & instance_key) | ['col_params', 'collective_type', 'group_size', 'group_key', 'instance_key'] | Status FillCollectiveParams(CollectiveParams* col_params,
CollectiveType collective_type,
const Tensor& group_size, const Tensor& group_key,
const Tensor& instance_key) {
if (group_size.dims() > 0) {
return errors::Internal("Unexpected dimensions on input group_size, got ",
group_size.shape().DebugString());
}
if (group_key.dims() > 0) {
return errors::Internal("Unexpected dimensions on input group_key, got ",
group_key.shape().DebugString());
}
if (instance_key.dims() > 0) {
return errors::Internal(
"Unexpected dimensions on input instance_key, got ",
instance_key.shape().DebugString());
}
col_params->name = name_;
col_params->group.device_type = device_type_;
col_params->group.group_size = group_size.unaligned_flat<int32>()(0);
if (col_params->group.group_size <= 0) {
return errors::InvalidArgument(
"group_size must be positive integer but got ",
col_params->group.group_size);
}
col_params->group.group_key = group_key.unaligned_flat<int32>()(0);
col_params->instance.type = collective_type;
col_params->instance.instance_key = instance_key.unaligned_flat<int32>()(0);
col_params->instance.data_type = data_type_;
col_params->instance.impl_details.communication_hint = communication_hint_;
col_params->instance.impl_details.timeout_seconds = timeout_seconds_;
return Status::OK();
} | 253 | True | 1 |
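
A minimal, hypothetical C++ sketch of the moved-from-callback hazard that the commit message above ("We should not call done after it's moved") describes; the names `done` and `done_with_cleanup` mirror the kernel code in this record, but the program itself is an assumption, not TensorFlow code.

#include <functional>
#include <iostream>
#include <utility>

int main() {
  std::function<void()> done = [] { std::cout << "done\n"; };

  // After this move, only `done_with_cleanup` owns a callable copy of the
  // callback; the original `done` is left in a moved-from (empty) state.
  auto done_with_cleanup = [done = std::move(done)]() {
    done();                    // invoke the captured copy
    std::cout << "cleanup\n";  // e.g. release per-call resources here
  };

  // Calling the original `done` at this point would invoke a moved-from
  // callable, which is the class of bug the commit message warns against:
  // done();  // intentionally left commented out

  done_with_cleanup();  // route every exit path through the single owner
  return 0;
}

The safe pattern is to move the callback into exactly one owning closure and send every exit path, including error paths, through that closure.
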
CVE-2021-41220 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:P/A:P | LOCAL | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 4.6 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 7.8 | HIGH | 1.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-gpfh-jvf9-7wg5', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-gpfh-jvf9-7wg5', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/ca38dab9d3ee66c5de06f11af9a4b1200da5ef75', 'name': 'https://github.com/tensorflow/tensorflow/commit/ca38dab9d3ee66c5de06f11af9a4b1200da5ef75', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-416'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndExcluding': '2.6.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. In affected versions the async implementation of `CollectiveReduceV2` suffers from a memory leak and a use after free. This occurs due to the asynchronous computation and the fact that objects that have been `std::move()`d from are still accessed. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, as this version is the only one that is also affected.'}] | 2021-11-10T13:16Z | 2021-11-05T23:15Z | Use After Free | Referencing memory after it has been freed can cause a program to crash, use unexpected values, or execute code. |
The use of previously-freed memory can have any number of adverse consequences, ranging from the corruption of valid data to the execution of arbitrary code, depending on the instantiation and timing of the flaw. The simplest way data corruption may occur involves the system's reuse of the freed memory. Use-after-free errors have two common and sometimes overlapping causes:
Error conditions and other exceptional circumstances.
Confusion over which part of the program is responsible for freeing the memory.
In this scenario, the memory in question is allocated to another pointer validly at some point after it has been freed. The original pointer to the freed memory is used again and points to somewhere within the new allocation. As the data is changed, it corrupts the validly used memory; this induces undefined behavior in the process.
If the newly allocated data chances to hold a class, in C++ for example, various function pointers may be scattered within the heap data. If one of these function pointers is overwritten with an address to valid shellcode, execution of arbitrary code can be achieved.
| https://cwe.mitre.org/data/definitions/416.html | 0 | Ran Chen | 2021-10-04 16:09:45-07:00 | Fix undefined behavior in CollectiveReduceV2 and others
We should not call done after it's moved.
PiperOrigin-RevId: 400838185
Change-Id: Ifc979740054b8f8c6f4d50acc89472fe60c4fdb1 | ca38dab9d3ee66c5de06f11af9a4b1200da5ef75 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::CollectiveReduceV2OpKernel::ComputeAsync | tensorflow::CollectiveReduceV2OpKernel::ComputeAsync( OpKernelContext * c , DoneCallback done) | ['c', 'done'] | void ComputeAsync(OpKernelContext* c, DoneCallback done) override {
auto col_params = new CollectiveParams();
auto done_with_cleanup = [col_params, done = std::move(done)]() {
done();
col_params->Unref();
};
OP_REQUIRES_OK_ASYNC(c,
FillCollectiveParams(col_params, REDUCTION_COLLECTIVE,
/*group_size*/ c->input(1),
/*group_key*/ c->input(2),
/*instance_key*/ c->input(3)),
done);
col_params->instance.shape = c->input(0).shape();
col_params->merge_op = merge_op_.get();
col_params->final_op = final_op_.get();
VLOG(1) << "CollectiveReduceV2 group_size " << col_params->group.group_size
<< " group_key " << col_params->group.group_key << " instance_key "
<< col_params->instance.instance_key;
// Allocate the output tensor, trying to reuse the input.
Tensor* output = nullptr;
OP_REQUIRES_OK_ASYNC(c,
c->forward_input_or_allocate_output(
{0}, 0, col_params->instance.shape, &output),
done_with_cleanup);
Run(c, col_params, std::move(done_with_cleanup));
} | 204 | True | 1 |
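
A second hypothetical sketch, assuming simplified stand-ins for `CollectiveParams` and `DoneCallback` rather than the real classes: the point is that every exit path of the async compute, including early error returns, must run the single cleanup closure; otherwise the ref-counted parameters leak and the moved-from `done` gets reused.

#include <functional>
#include <iostream>
#include <utility>

// Invented stand-in for a ref-counted parameters object.
struct Params {
  int refs = 1;
  void Unref() {
    if (--refs == 0) delete this;
  }
};

using DoneCallback = std::function<void()>;

void ComputeAsyncSketch(bool fail_early, DoneCallback done) {
  auto* params = new Params();
  // Single owner of both the user callback and the Unref.
  auto done_with_cleanup = [params, done = std::move(done)]() {
    done();
    params->Unref();
  };
  if (fail_early) {
    done_with_cleanup();  // never the moved-from `done`
    return;
  }
  done_with_cleanup();
}

int main() {
  ComputeAsyncSketch(true, [] { std::cout << "early exit reported\n"; });
  ComputeAsyncSketch(false, [] { std::cout << "normal completion\n"; });
  return 0;
}
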
CVE-2021-41220 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:P/A:P | LOCAL | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 4.6 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 7.8 | HIGH | 1.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-gpfh-jvf9-7wg5', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-gpfh-jvf9-7wg5', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/ca38dab9d3ee66c5de06f11af9a4b1200da5ef75', 'name': 'https://github.com/tensorflow/tensorflow/commit/ca38dab9d3ee66c5de06f11af9a4b1200da5ef75', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-416'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndExcluding': '2.6.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. In affected versions the async implementation of `CollectiveReduceV2` suffers from a memory leak and a use after free. This occurs due to the asynchronous computation and the fact that objects that have been `std::move()`d from are still accessed. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, as this version is the only one that is also affected.'}] | 2021-11-10T13:16Z | 2021-11-05T23:15Z | Use After Free | Referencing memory after it has been freed can cause a program to crash, use unexpected values, or execute code. |
The use of previously-freed memory can have any number of adverse consequences, ranging from the corruption of valid data to the execution of arbitrary code, depending on the instantiation and timing of the flaw. The simplest way data corruption may occur involves the system's reuse of the freed memory. Use-after-free errors have two common and sometimes overlapping causes:
Error conditions and other exceptional circumstances.
Confusion over which part of the program is responsible for freeing the memory.
In this scenario, the memory in question is allocated to another pointer validly at some point after it has been freed. The original pointer to the freed memory is used again and points to somewhere within the new allocation. As the data is changed, it corrupts the validly used memory; this induces undefined behavior in the process.
If the newly allocated data chances to hold a class, in C++ for example, various function pointers may be scattered within the heap data. If one of these function pointers is overwritten with an address to valid shellcode, execution of arbitrary code can be achieved.
| https://cwe.mitre.org/data/definitions/416.html | 0 | Ran Chen | 2021-10-04 16:09:45-07:00 | Fix undefined behavior in CollectiveReduceV2 and others
We should not call done after it's moved.
PiperOrigin-RevId: 400838185
Change-Id: Ifc979740054b8f8c6f4d50acc89472fe60c4fdb1 | ca38dab9d3ee66c5de06f11af9a4b1200da5ef75 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::CollectiveReduceV3OpKernel::ComputeAsync | tensorflow::CollectiveReduceV3OpKernel::ComputeAsync( OpKernelContext * c , DoneCallback done) | ['c', 'done'] | void ComputeAsync(OpKernelContext* c, DoneCallback done) override {
auto col_params = new CollectiveParams();
auto done_with_cleanup = [col_params, done = std::move(done)]() {
done();
col_params->Unref();
};
core::RefCountPtr<CollectiveGroupResource> resource;
OP_REQUIRES_OK_ASYNC(c, LookupResource(c, HandleFromInput(c, 1), &resource),
done);
Tensor group_assignment = c->input(2);
OP_REQUIRES_OK_ASYNC(
c,
FillCollectiveParams(col_params, group_assignment, REDUCTION_COLLECTIVE,
resource.get()),
done);
col_params->instance.shape = c->input(0).shape();
col_params->merge_op = merge_op_.get();
col_params->final_op = final_op_.get();
VLOG(1) << "CollectiveReduceV3 group_size " << col_params->group.group_size
<< " group_key " << col_params->group.group_key << " instance_key "
<< col_params->instance.instance_key;
// Allocate the output tensor, trying to reuse the input.
Tensor* output = nullptr;
OP_REQUIRES_OK_ASYNC(c,
c->forward_input_or_allocate_output(
{0}, 0, col_params->instance.shape, &output),
done_with_cleanup);
Run(c, col_params, std::move(done_with_cleanup));
} | 231 | True | 1 |
CVE-2021-41227 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:N/A:N | LOCAL | LOW | NONE | PARTIAL | NONE | NONE | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N | LOCAL | LOW | LOW | NONE | UNCHANGED | HIGH | NONE | NONE | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-j8c8-67vp-6mx7', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-j8c8-67vp-6mx7', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/3712a2d3455e6ccb924daa5724a3652a86f6b585', 'name': 'https://github.com/tensorflow/tensorflow/commit/3712a2d3455e6ccb924daa5724a3652a86f6b585', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1cb6bb6c2a6019417c9adaf9e6843ba75ee2580b', 'name': 'https://github.com/tensorflow/tensorflow/commit/1cb6bb6c2a6019417c9adaf9e6843ba75ee2580b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndExcluding': '2.6.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.5.0', 'versionEndExcluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.4.0', 'versionEndExcluding': '2.4.4', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. In affected versions the `ImmutableConst` operation in TensorFlow can be tricked into reading arbitrary memory contents. This is because the `tstring` TensorFlow string class has a special case for memory mapped strings but the operation itself does not offer any support for this datatype. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in supported range.'}] | 2021-11-10T13:18Z | 2021-11-05T23:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Edward Schwartz | 2021-10-05 13:40:42-07:00 | Add error checking to ImmutableConst OP that strings are not yet supported.
PiperOrigin-RevId: 401065359
Change-Id: I9dd2bd2a2c36f22f4a05153daf6ebdc4613469d2 | 1cb6bb6c2a6019417c9adaf9e6843ba75ee2580b | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::CreateTempFile | tensorflow::CreateTempFile( Env * env , float value , uint64 size , string * filename) | ['env', 'value', 'size', 'filename'] | Status CreateTempFile(Env* env, float value, uint64 size, string* filename) {
const string dir = testing::TmpDir();
*filename = io::JoinPath(dir, strings::StrCat("file_", value));
std::unique_ptr<WritableFile> file;
TF_RETURN_IF_ERROR(env->NewWritableFile(*filename, &file));
for (uint64 i = 0; i < size; ++i) {
StringPiece sp(static_cast<char*>(static_cast<void*>(&value)),
sizeof(value));
TF_RETURN_IF_ERROR(file->Append(sp));
}
TF_RETURN_IF_ERROR(file->Close());
return Status::OK();
} | 137 | True | 1 |
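
A hypothetical guard in the spirit of the commit above ("strings are not yet supported"), not the actual ImmutableConst implementation: before a memory-mapped region is reinterpreted as a flat element array, element types without a plain fixed-size layout (such as a string class with out-of-line storage) are rejected, since reinterpreting attacker-controlled bytes as such a type can steer reads to arbitrary memory. `DType` and `FixedElementSize` are invented names for illustration.

#include <cstddef>
#include <cstdint>
#include <iostream>
#include <stdexcept>

enum class DType { kFloat, kInt32, kString };

std::size_t FixedElementSize(DType dtype) {
  switch (dtype) {
    case DType::kFloat:
      return sizeof(float);
    case DType::kInt32:
      return sizeof(std::int32_t);
    case DType::kString:
      // Reject types that cannot be safely backed by raw mapped bytes.
      throw std::invalid_argument(
          "string elements are not supported for memory-mapped constants");
  }
  throw std::invalid_argument("unknown dtype");
}

int main() {
  std::cout << "float element size: " << FixedElementSize(DType::kFloat)
            << "\n";
  try {
    FixedElementSize(DType::kString);
  } catch (const std::invalid_argument& e) {
    std::cout << "rejected: " << e.what() << "\n";
  }
  return 0;
}
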
CVE-2021-41227 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:N/A:N | LOCAL | LOW | NONE | PARTIAL | NONE | NONE | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N | LOCAL | LOW | LOW | NONE | UNCHANGED | HIGH | NONE | NONE | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-j8c8-67vp-6mx7', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-j8c8-67vp-6mx7', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/3712a2d3455e6ccb924daa5724a3652a86f6b585', 'name': 'https://github.com/tensorflow/tensorflow/commit/3712a2d3455e6ccb924daa5724a3652a86f6b585', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1cb6bb6c2a6019417c9adaf9e6843ba75ee2580b', 'name': 'https://github.com/tensorflow/tensorflow/commit/1cb6bb6c2a6019417c9adaf9e6843ba75ee2580b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndExcluding': '2.6.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.5.0', 'versionEndExcluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.4.0', 'versionEndExcluding': '2.4.4', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. In affected versions the `ImmutableConst` operation in TensorFlow can be tricked into reading arbitrary memory contents. This is because the `tstring` TensorFlow string class has a special case for memory mapped strings but the operation itself does not offer any support for this datatype. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in supported range.'}] | 2021-11-10T13:18Z | 2021-11-05T23:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Edward Schwartz | 2021-10-05 13:40:42-07:00 | Add error checking to ImmutableConst OP that strings are not yet supported.
PiperOrigin-RevId: 401065359
Change-Id: I9dd2bd2a2c36f22f4a05153daf6ebdc4613469d2 | 1cb6bb6c2a6019417c9adaf9e6843ba75ee2580b | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::TEST | tensorflow::TEST( ImmutableConstantOpTest , FromFile) | ['ImmutableConstantOpTest', 'FromFile'] | TEST(ImmutableConstantOpTest, FromFile) {
const TensorShape kFileTensorShape({1000, 1});
Env* env = Env::Default();
auto root = Scope::NewRootScope().ExitOnError();
string two_file, three_file;
TF_ASSERT_OK(CreateTempFile(env, 2.0f, 1000, &two_file));
TF_ASSERT_OK(CreateTempFile(env, 3.0f, 1000, &three_file));
auto node1 = ops::ImmutableConst(root, DT_FLOAT, kFileTensorShape, two_file);
auto node2 =
ops::ImmutableConst(root, DT_FLOAT, kFileTensorShape, three_file);
auto result = ops::MatMul(root, node1, node2, ops::MatMul::TransposeB(true));
GraphDef graph_def;
TF_ASSERT_OK(root.ToGraphDef(&graph_def));
SessionOptions session_options;
session_options.config.mutable_graph_options()
->mutable_optimizer_options()
->set_opt_level(OptimizerOptions::L0);
std::unique_ptr<Session> session(NewSession(session_options));
ASSERT_TRUE(session != nullptr) << "Failed to create session";
TF_ASSERT_OK(session->Create(graph_def)) << "Can't create test graph";
std::vector<Tensor> outputs;
TF_ASSERT_OK(session->Run({}, {result.node()->name() + ":0"}, {}, &outputs));
ASSERT_EQ(outputs.size(), 1);
EXPECT_EQ(outputs.front().flat<float>()(0), 2.0f * 3.0f);
EXPECT_EQ(outputs.front().flat<float>()(1), 2.0f * 3.0f);
EXPECT_EQ(outputs.front().flat<float>()(2), 2.0f * 3.0f);
} | 340 | True | 1 |
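
A hypothetical sketch of the file layout the `CreateTempFile` helper above produces: `size` copies of the raw bytes of one float, suitable for backing a memory-mapped constant. The path and helper names below are assumptions for illustration, not TensorFlow utilities.

#include <cstddef>
#include <fstream>
#include <iostream>
#include <string>

// Write `n` copies of the 4-byte representation of `value` to `path`.
bool WriteRepeatedFloat(const std::string& path, float value, std::size_t n) {
  std::ofstream out(path, std::ios::binary);
  if (!out) return false;
  for (std::size_t i = 0; i < n; ++i) {
    out.write(reinterpret_cast<const char*>(&value), sizeof(value));
  }
  return static_cast<bool>(out);
}

int main() {
  const std::string path = "file_2.0";  // assumed temporary path
  if (!WriteRepeatedFloat(path, 2.0f, 1000)) {
    std::cerr << "failed to write " << path << "\n";
    return 1;
  }
  std::ifstream in(path, std::ios::binary);
  float first = 0.0f;
  in.read(reinterpret_cast<char*>(&first), sizeof(first));
  std::cout << "first element: " << first << "\n";  // prints 2
  return 0;
}
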
CVE-2021-41216 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:P/A:P | LOCAL | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 4.6 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 7.8 | HIGH | 1.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-3ff2-r28g-w7h9', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-3ff2-r28g-w7h9', 'refsource': 'CONFIRM', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/c79ba87153ee343401dbe9d1954d7f79e521eb14', 'name': 'https://github.com/tensorflow/tensorflow/commit/c79ba87153ee343401dbe9d1954d7f79e521eb14', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndExcluding': '2.6.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.5.0', 'versionEndExcluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.4.0', 'versionEndExcluding': '2.4.4', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. In affected versions the shape inference function for `Transpose` is vulnerable to a heap buffer overflow. This occurs whenever `perm` contains negative elements. The shape inference function does not validate that the indices in `perm` are all valid. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in supported range.'}] | 2021-11-09T15:53Z | 2021-11-05T23:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Penporn Koanantakool | 2021-10-14 19:39:00-07:00 | Make Transpose's shape inference function validate that negative `perm` values are within the tensor's rank.
PiperOrigin-RevId: 403252853
Change-Id: Ia6b31b45b237312668bb31c2c3b3c7bbce2d2610 | c79ba87153ee343401dbe9d1954d7f79e521eb14 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::TransposeShapeFn | tensorflow::TransposeShapeFn( InferenceContext * c) | ['c'] | Status TransposeShapeFn(InferenceContext* c) {
ShapeHandle input = c->input(0);
ShapeHandle perm_shape = c->input(1);
const Tensor* perm = c->input_tensor(1);
DimensionHandle perm_elems = c->NumElements(perm_shape);
// If we don't have rank information on the input or value information on
// perm we can't return any shape information, otherwise we have enough
// information to at least find the rank of the output.
if (!c->RankKnown(input) && !c->ValueKnown(perm_elems) && perm == nullptr) {
c->set_output(0, c->UnknownShape());
return Status::OK();
}
// Find our value of the rank.
int64_t rank;
if (c->RankKnown(input)) {
rank = c->Rank(input);
} else if (c->ValueKnown(perm_elems)) {
rank = c->Value(perm_elems);
} else {
rank = perm->NumElements();
}
if (!c->RankKnown(input) && rank < 2) {
// A permutation array containing a single element is ambiguous. It could
// indicate either a scalar or a 1-dimensional array, both of which the
// transpose op returns unchanged.
c->set_output(0, input);
return Status::OK();
}
std::vector<DimensionHandle> dims;
dims.resize(rank);
TF_RETURN_IF_ERROR(c->WithRank(input, rank, &input));
// Ensure that perm is a vector and has rank elements.
TF_RETURN_IF_ERROR(c->WithRank(perm_shape, 1, &perm_shape));
TF_RETURN_IF_ERROR(c->WithValue(perm_elems, rank, &perm_elems));
// If we know the rank of the input and the value of perm, we can return
// all shape information, otherwise we can only return rank information,
// but no information for the dimensions.
if (perm != nullptr) {
std::vector<int64_t> data;
if (perm->dtype() == DT_INT32) {
data = AsInt64<int32>(perm, rank);
} else {
data = AsInt64<int64_t>(perm, rank);
}
for (int32_t i = 0; i < rank; ++i) {
int64_t in_idx = data[i];
if (in_idx >= rank) {
return errors::InvalidArgument("perm dim ", in_idx,
" is out of range of input rank ", rank);
}
dims[i] = c->Dim(input, in_idx);
}
} else {
for (int i = 0; i < rank; ++i) {
dims[i] = c->UnknownDim();
}
}
c->set_output(0, c->MakeShape(dims));
return Status::OK();
} | 407 | True | 1 |
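
A hedged sketch of the validation the advisory describes for `Transpose` shape inference, written as a standalone helper rather than the real `TransposeShapeFn`: every `perm` entry is checked against the rank from below as well as from above before it is used as an index, which is the lower bound the vulnerable code above omits.

#include <cstdint>
#include <iostream>
#include <stdexcept>
#include <vector>

std::vector<int64_t> ApplyPerm(const std::vector<int64_t>& dims,
                               const std::vector<int64_t>& perm) {
  const int64_t rank = static_cast<int64_t>(dims.size());
  if (static_cast<int64_t>(perm.size()) != rank) {
    throw std::invalid_argument("perm must have `rank` elements");
  }
  std::vector<int64_t> out(rank);
  for (int64_t i = 0; i < rank; ++i) {
    const int64_t in_idx = perm[i];
    if (in_idx < 0 || in_idx >= rank) {  // the lower bound is the crucial part
      throw std::invalid_argument("perm entry out of range of input rank");
    }
    out[i] = dims[in_idx];
  }
  return out;
}

int main() {
  const std::vector<int64_t> dims = {2, 3, 5};
  for (int64_t d : ApplyPerm(dims, {2, 0, 1})) std::cout << d << " ";
  std::cout << "\n";
  try {
    ApplyPerm(dims, {-1, 0, 1});  // negative index is rejected, not used
  } catch (const std::invalid_argument& e) {
    std::cout << "rejected: " << e.what() << "\n";
  }
  return 0;
}
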
CVE-2021-41206 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:P/A:P | LOCAL | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 4.6 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 7.8 | HIGH | 1.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/68422b215e618df5ad375bcdc6d2052e9fd3080a', 'name': 'https://github.com/tensorflow/tensorflow/commit/68422b215e618df5ad375bcdc6d2052e9fd3080a', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/4d74d8a00b07441cba090a02e0dd9ed385145bf4', 'name': 'https://github.com/tensorflow/tensorflow/commit/4d74d8a00b07441cba090a02e0dd9ed385145bf4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/579261dcd446385831fe4f7457d802a59685121d', 'name': 'https://github.com/tensorflow/tensorflow/commit/579261dcd446385831fe4f7457d802a59685121d', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/e7f497570abb6b4ae5af4970620cd880e4c0c904', 'name': 'https://github.com/tensorflow/tensorflow/commit/e7f497570abb6b4ae5af4970620cd880e4c0c904', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pgcq-h79j-2f69', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pgcq-h79j-2f69', 'refsource': 'CONFIRM', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/da4aad5946be30e5f049920fa076e1f7ef021261', 'name': 'https://github.com/tensorflow/tensorflow/commit/da4aad5946be30e5f049920fa076e1f7ef021261', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/4dddb2fd0b01cdd196101afbba6518658a2c9e07', 'name': 'https://github.com/tensorflow/tensorflow/commit/4dddb2fd0b01cdd196101afbba6518658a2c9e07', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-354'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndExcluding': '2.6.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.5.0', 'versionEndExcluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.4.0', 'versionEndExcluding': '2.4.4', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "TensorFlow is an open source platform for machine learning. In affected versions several TensorFlow operations are missing validation for the shapes of the tensor arguments involved in the call. Depending on the API, this can result in undefined behavior and segfault or `CHECK`-fail related crashes but in some scenarios writes and reads from heap populated arrays are also possible. We have discovered these issues internally via tooling while working on improving/testing GPU op determinism. As such, we don't have reproducers and there will be multiple fixes for these issues. 
These fixes will be included in TensorFlow 2.7.0. We will also cherrypick these commits on TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in supported range."}] | 2021-11-09T17:56Z | 2021-11-05T22:15Z | Improper Validation of Integrity Check Value | The software does not validate or incorrectly validates the integrity check values or "checksums" of a message. This may prevent it from detecting if the data has been modified or corrupted in transmission. | Improper validation of checksums before use results in an unnecessary risk that can easily be mitigated. The protocol specification describes the algorithm used for calculating the checksum. It is then a simple matter of implementing the calculation and verifying that the calculated checksum and the received checksum match. Improper verification of the calculated checksum and the received checksum can lead to far greater consequences.
| https://cwe.mitre.org/data/definitions/354.html | 0 | Reed Wanderman-Milne | 2021-10-20 15:41:05-07:00 | Fix segfault on OOM in Conv2D.
PiperOrigin-RevId: 404655317
Change-Id: I33588dbd3f5d0fef980e3c908bf5515a9ee09ce7 | e7f497570abb6b4ae5af4970620cd880e4c0c904 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::LaunchGrouped::operator ( ) | tensorflow::LaunchGrouped::operator ( )( OpKernelContext * ctx , const Tensor & input , const Tensor & filter , int row_stride , int col_stride , int row_dilation , int col_dilation , const Padding & padding , const std :: vector<int64_t> & explicit_paddings , Tensor * output , TensorFormat data_format) | ['ctx', 'input', 'filter', 'row_stride', 'col_stride', 'row_dilation', 'col_dilation', 'padding', 'explicit_paddings', 'output', 'data_format'] | void operator()(OpKernelContext* ctx, const Tensor& input,
const Tensor& filter, int row_stride, int col_stride,
int row_dilation, int col_dilation, const Padding& padding,
const std::vector<int64_t>& explicit_paddings, Tensor* output,
TensorFormat data_format) {
DCHECK(data_format == FORMAT_NHWC)
<< "Grouped conv implementation only "
"supports NHWC tensor format for now.";
const int64_t in_depth = input.dim_size(3);
const int64_t patch_depth = filter.dim_size(2);
const int64_t num_groups = in_depth / patch_depth;
// Shuffle input/filter tensors to have group as a leading dimension.
std::array<int64_t, 5> shuffle({3, 0, 1, 2, 4});
    // Compute pre shuffle dimensions.
auto pre_shuffle = [&](const Tensor& tensor) -> std::array<int64, 5> {
return {tensor.dim_size(0), tensor.dim_size(1), tensor.dim_size(2),
num_groups, tensor.dim_size(3) / num_groups};
};
    // Compute post shuffle dimensions.
auto post_shuffle = [&](const Tensor& tensor) -> std::array<int64, 5> {
return {num_groups, tensor.dim_size(0), tensor.dim_size(1),
tensor.dim_size(2), tensor.dim_size(3) / num_groups};
};
auto& device = ctx->eigen_device<CPUDevice>();
absl::BlockingCounter shuffles_completed(2);
auto on_shuffled = [&]() { shuffles_completed.DecrementCount(); };
// Shuffle input into temporary tensor.
Tensor input_shuffled(input.dtype(), TensorShape(post_shuffle(input)));
input_shuffled.tensor<T, 5>().device(device, on_shuffled) =
input.shaped<T, 5>(pre_shuffle(input)).shuffle(shuffle);
// Shuffle filter into temporary tensor.
Tensor filter_shuffled(filter.dtype(), TensorShape(post_shuffle(filter)));
filter_shuffled.tensor<T, 5>().device(device, on_shuffled) =
filter.shaped<T, 5>(pre_shuffle(filter)).shuffle(shuffle);
// Wait for the completion of input/filter shuffles.
shuffles_completed.Wait();
// Write group convolution results into temporary output tensor.
Tensor output_shuffled(output->dtype(), TensorShape(post_shuffle(*output)));
for (int64_t i = 0; i < num_groups; ++i) {
// TODO(ezhulenev): Run this loop using `parallelFor` (regular parallelFor
// will lead to deadlock, SpatialConvolution has to use async Eigen
// assignment). This requires small changes to Eigen to support async
      // execution for tensor chipping operation.
// TODO(ezhulenev): Grouped convolution should also support 1x1 filter
// optimization.
auto input_slice = input_shuffled.tensor<T, 5>().template chip<0>(i);
auto filter_slice = filter_shuffled.tensor<T, 5>().template chip<0>(i);
auto output_slice = output_shuffled.tensor<T, 5>().template chip<0>(i);
if (padding == EXPLICIT) {
functor::SpatialConvolution<CPUDevice, T>()(
ctx->eigen_device<CPUDevice>(), output_slice, input_slice,
filter_slice, row_stride, col_stride, row_dilation, col_dilation,
static_cast<int>(explicit_paddings[2]),
static_cast<int>(explicit_paddings[3]),
static_cast<int>(explicit_paddings[4]),
static_cast<int>(explicit_paddings[5]));
} else {
functor::SpatialConvolution<CPUDevice, T>()(
ctx->eigen_device<CPUDevice>(), output_slice, input_slice,
filter_slice, row_stride, col_stride, row_dilation, col_dilation,
BrainPadding2EigenPadding(padding));
}
}
// Shuffle temporary output back into pre-shuffled shape.
std::array<int64_t, 5> rev_shuffle({1, 2, 3, 0, 4});
output->shaped<T, 5>(pre_shuffle(*output)).device(device) =
output_shuffled.tensor<T, 5>().shuffle(rev_shuffle);
} | 686 | True | 1 |
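
A hypothetical sketch related to the commit message above ("Fix segfault on OOM in Conv2D"), not the actual kernel fix: when a scratch buffer is sized from user-controlled shapes, the allocation can fail, and writing through the result without a check turns an out-of-memory condition into a crash. `FillScratch` is an invented helper.

#include <cstddef>
#include <iostream>
#include <memory>
#include <new>

bool FillScratch(std::size_t elements) {
  // Nothrow allocation returns nullptr instead of throwing on failure.
  std::unique_ptr<float[]> scratch(new (std::nothrow) float[elements]);
  if (scratch == nullptr) {
    std::cerr << "allocation of " << elements << " floats failed\n";
    return false;  // report the error rather than writing through null
  }
  for (std::size_t i = 0; i < elements; ++i) scratch[i] = 0.0f;
  return true;
}

int main() {
  std::cout << (FillScratch(1024) ? "ok\n" : "failed\n");
  return 0;
}
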
CVE-2021-41208 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:P/A:P | LOCAL | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 4.6 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 7.8 | HIGH | 1.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/5c8c9a8bfe750f9743d0c859bae112060b216f5c', 'name': 'https://github.com/tensorflow/tensorflow/commit/5c8c9a8bfe750f9743d0c859bae112060b216f5c', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-57wx-m983-2f88', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-57wx-m983-2f88', 'refsource': 'CONFIRM', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-476'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndExcluding': '2.6.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.5.0', 'versionEndExcluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.4.0', 'versionEndExcluding': '2.4.4', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "TensorFlow is an open source platform for machine learning. In affected versions the code for boosted trees in TensorFlow is still missing validation. As a result, attackers can trigger denial of service (via dereferencing `nullptr`s or via `CHECK`-failures) as well as abuse undefined behavior (binding references to `nullptr`s). An attacker can also read and write from heap buffers, depending on the API that gets used and the arguments that are passed to the call. Given that the boosted trees implementation in TensorFlow is unmaintained, it is recommend to no longer use these APIs. We will deprecate TensorFlow's boosted trees APIs in subsequent releases. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in supported range."}] | 2021-11-09T18:36Z | 2021-11-05T22:15Z | NULL Pointer Dereference | A NULL pointer dereference occurs when the application dereferences a pointer that it expects to be valid, but is NULL, typically causing a crash or exit. | NULL pointer dereference issues can occur through a number of flaws, including race conditions, and simple programming omissions.
| https://cwe.mitre.org/data/definitions/476.html | 0 | Rohan Jain | 2021-10-26 09:48:51-07:00 | Fixing security fixes in boosted trees ops
PiperOrigin-RevId: 405669548
Change-Id: Iae224d240d1779bcc02405c2fff99785644fbd0d | 5c8c9a8bfe750f9743d0c859bae112060b216f5c | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::BoostedTreesCalculateBestFeatureSplitOp::Compute | tensorflow::BoostedTreesCalculateBestFeatureSplitOp::Compute( OpKernelContext * const context) | ['context'] | void Compute(OpKernelContext* const context) override {
// node_id_range
const Tensor* node_id_range_t;
OP_REQUIRES_OK(context, context->input("node_id_range", &node_id_range_t));
OP_REQUIRES(
context, node_id_range_t->NumElements() == 2,
errors::InvalidArgument("node_id_range argument must have shape [2]"));
const auto node_id_range = node_id_range_t->vec<int32>();
const int32_t node_id_first = node_id_range(0); // inclusive
const int32_t node_id_last = node_id_range(1); // exclusive
const Tensor* stats_summary_t;
OP_REQUIRES_OK(context, context->input("stats_summary", &stats_summary_t));
OP_REQUIRES(
context, stats_summary_t->shape().dims() == 4,
errors::InvalidArgument("stats_summary argument must have rank 4"));
TTypes<float, 4>::ConstTensor stats_summary =
stats_summary_t->tensor<float, 4>();
const int32_t feature_dims = stats_summary_t->dim_size(1);
// The last bucket is for default/missing value.
const int32_t num_buckets = stats_summary_t->dim_size(2) - 1;
const int32_t logits_dim = logits_dim_;
const int32_t hessian_dim = stats_summary_t->dim_size(3) - logits_dim;
DCHECK_GT(hessian_dim, 0);
DCHECK_LE(hessian_dim, logits_dim * logits_dim);
const Tensor* l1_t;
OP_REQUIRES_OK(context, context->input("l1", &l1_t));
OP_REQUIRES(context, l1_t->NumElements() == 1,
errors::InvalidArgument("l1 argument must be a scalar"));
const auto l1 = l1_t->scalar<float>()();
DCHECK_GE(l1, 0);
if (logits_dim_ > 1) {
// Multi-class L1 regularization not supported yet.
DCHECK_EQ(l1, 0);
}
const Tensor* l2_t;
OP_REQUIRES_OK(context, context->input("l2", &l2_t));
OP_REQUIRES(context, l2_t->NumElements() == 1,
errors::InvalidArgument("l2 argument must be a scalar"));
const auto l2 = l2_t->scalar<float>()();
DCHECK_GE(l2, 0);
const Tensor* tree_complexity_t;
OP_REQUIRES_OK(context,
context->input("tree_complexity", &tree_complexity_t));
OP_REQUIRES(
context, tree_complexity_t->NumElements() == 1,
errors::InvalidArgument("tree_complexity argument must be a scalar"));
const auto tree_complexity = tree_complexity_t->scalar<float>()();
const Tensor* min_node_weight_t;
OP_REQUIRES_OK(context,
context->input("min_node_weight", &min_node_weight_t));
OP_REQUIRES(
context, min_node_weight_t->NumElements() == 1,
errors::InvalidArgument("min_node_weight argument must be a scalar"));
const auto min_node_weight = min_node_weight_t->scalar<float>()();
std::vector<int32> output_node_ids;
std::vector<float> output_gains;
std::vector<int32> output_feature_dimensions;
std::vector<int32> output_thresholds;
std::vector<Eigen::VectorXf> output_left_node_contribs;
std::vector<Eigen::VectorXf> output_right_node_contribs;
std::vector<std::string> output_split_types;
// TODO(tanzheny) parallelize the computation.
// Iterate each node and find the best gain per node.
for (int32_t node_id = node_id_first; node_id < node_id_last; ++node_id) {
float best_gain = std::numeric_limits<float>::lowest();
int32_t best_bucket = 0;
int32_t best_f_dim = 0;
string best_split_type;
Eigen::VectorXf best_contrib_for_left(logits_dim);
Eigen::VectorXf best_contrib_for_right(logits_dim);
float parent_gain;
// Including default bucket.
ConstMatrixMap stats_mat(&stats_summary(node_id, 0, 0, 0),
num_buckets + 1, logits_dim + hessian_dim);
const Eigen::VectorXf total_grad =
stats_mat.leftCols(logits_dim).colwise().sum();
const Eigen::VectorXf total_hess =
stats_mat.rightCols(hessian_dim).colwise().sum();
if (total_hess.norm() < min_node_weight) {
continue;
}
Eigen::VectorXf parent_weight(logits_dim);
CalculateWeightsAndGains(total_grad, total_hess, l1, l2, &parent_weight,
&parent_gain);
if (split_type_ == "inequality") {
CalculateBestInequalitySplit(
stats_summary, node_id, feature_dims, logits_dim, hessian_dim,
num_buckets, min_node_weight, l1, l2, &best_gain, &best_bucket,
&best_f_dim, &best_split_type, &best_contrib_for_left,
&best_contrib_for_right);
} else {
CalculateBestEqualitySplit(
stats_summary, total_grad, total_hess, node_id, feature_dims,
logits_dim, hessian_dim, num_buckets, l1, l2, &best_gain,
&best_bucket, &best_f_dim, &best_split_type, &best_contrib_for_left,
&best_contrib_for_right);
}
if (best_gain == std::numeric_limits<float>::lowest()) {
        // Do not add the node if no split is found.
continue;
}
output_node_ids.push_back(node_id);
// Remove the parent gain for the parent node.
output_gains.push_back(best_gain - parent_gain);
output_feature_dimensions.push_back(best_f_dim);
// default direction is fixed for dense splits.
// TODO(tanzheny) account for default values.
output_split_types.push_back(best_split_type);
output_thresholds.push_back(best_bucket);
output_left_node_contribs.push_back(best_contrib_for_left);
output_right_node_contribs.push_back(best_contrib_for_right);
} // for node id
const int num_nodes = output_node_ids.size();
// output_node_ids
Tensor* output_node_ids_t = nullptr;
OP_REQUIRES_OK(context, context->allocate_output("node_ids", {num_nodes},
&output_node_ids_t));
auto output_node_ids_vec = output_node_ids_t->vec<int32>();
// output_gains
Tensor* output_gains_t;
OP_REQUIRES_OK(context, context->allocate_output("gains", {num_nodes},
&output_gains_t));
auto output_gains_vec = output_gains_t->vec<float>();
// output_feature_dimensions
Tensor* output_feature_dimension_t;
OP_REQUIRES_OK(context,
context->allocate_output("feature_dimensions", {num_nodes},
&output_feature_dimension_t));
auto output_feature_dimensions_vec =
output_feature_dimension_t->vec<int32>();
// output_thresholds
Tensor* output_thresholds_t;
OP_REQUIRES_OK(context, context->allocate_output("thresholds", {num_nodes},
&output_thresholds_t));
auto output_thresholds_vec = output_thresholds_t->vec<int32>();
// output_left_node_contribs
Tensor* output_left_node_contribs_t;
OP_REQUIRES_OK(context, context->allocate_output(
"left_node_contribs", {num_nodes, logits_dim},
&output_left_node_contribs_t));
auto output_left_node_contribs_matrix =
output_left_node_contribs_t->matrix<float>();
// output_right_node_contribs
Tensor* output_right_node_contribs_t;
OP_REQUIRES_OK(context, context->allocate_output(
"right_node_contribs", {num_nodes, logits_dim},
&output_right_node_contribs_t));
auto output_right_node_contribs_matrix =
output_right_node_contribs_t->matrix<float>();
// split type
Tensor* output_split_types_t;
OP_REQUIRES_OK(
context, context->allocate_output("split_with_default_directions",
{num_nodes}, &output_split_types_t));
auto output_split_types_vec = output_split_types_t->vec<tstring>();
// Sets output tensors from vectors.
for (int i = 0; i < num_nodes; ++i) {
output_node_ids_vec(i) = output_node_ids[i];
// Adjust the gains to penalize by tree complexity.
output_gains_vec(i) = output_gains[i] - tree_complexity;
output_feature_dimensions_vec(i) = output_feature_dimensions[i];
output_thresholds_vec(i) = output_thresholds[i];
for (int j = 0; j < logits_dim; ++j) {
output_left_node_contribs_matrix(i, j) =
output_left_node_contribs[i][j];
output_right_node_contribs_matrix(i, j) =
output_right_node_contribs[i][j];
}
output_split_types_vec(i) = output_split_types[i];
}
} | 1219 | True | 1 |
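
A hypothetical sketch of why the `DCHECK`-based validation in the kernel above is insufficient for untrusted input: debug-only assertions are compiled out of release builds, so attacker-controlled values such as `hessian_dim` need an always-executed check that reports an error. `CheckedPositive` is an invented helper, not TensorFlow's fix.

#include <cassert>
#include <iostream>
#include <stdexcept>
#include <string>

int CheckedPositive(int value, const std::string& name) {
  assert(value > 0);  // debug-only: disappears when NDEBUG is defined
  if (value <= 0) {   // always-on validation of the untrusted value
    throw std::invalid_argument(name + " must be positive, got " +
                                std::to_string(value));
  }
  return value;
}

int main() {
  std::cout << CheckedPositive(3, "hessian_dim") << "\n";
  try {
    CheckedPositive(-1, "hessian_dim");
  } catch (const std::invalid_argument& e) {
    std::cout << "rejected: " << e.what() << "\n";
  }
  return 0;
}
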
CVE-2021-41208 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:P/A:P | LOCAL | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 4.6 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 7.8 | HIGH | 1.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/5c8c9a8bfe750f9743d0c859bae112060b216f5c', 'name': 'https://github.com/tensorflow/tensorflow/commit/5c8c9a8bfe750f9743d0c859bae112060b216f5c', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-57wx-m983-2f88', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-57wx-m983-2f88', 'refsource': 'CONFIRM', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-476'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndExcluding': '2.6.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.5.0', 'versionEndExcluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.4.0', 'versionEndExcluding': '2.4.4', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "TensorFlow is an open source platform for machine learning. In affected versions the code for boosted trees in TensorFlow is still missing validation. As a result, attackers can trigger denial of service (via dereferencing `nullptr`s or via `CHECK`-failures) as well as abuse undefined behavior (binding references to `nullptr`s). An attacker can also read and write from heap buffers, depending on the API that gets used and the arguments that are passed to the call. Given that the boosted trees implementation in TensorFlow is unmaintained, it is recommend to no longer use these APIs. We will deprecate TensorFlow's boosted trees APIs in subsequent releases. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in supported range."}] | 2021-11-09T18:36Z | 2021-11-05T22:15Z | NULL Pointer Dereference | A NULL pointer dereference occurs when the application dereferences a pointer that it expects to be valid, but is NULL, typically causing a crash or exit. | NULL pointer dereference issues can occur through a number of flaws, including race conditions, and simple programming omissions.
| https://cwe.mitre.org/data/definitions/476.html | 0 | Rohan Jain | 2021-10-26 09:48:51-07:00 | Fixing security fixes in boosted trees ops
PiperOrigin-RevId: 405669548
Change-Id: Iae224d240d1779bcc02405c2fff99785644fbd0d | 5c8c9a8bfe750f9743d0c859bae112060b216f5c | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::BoostedTreesCalculateBestFeatureSplitV2::Compute | tensorflow::BoostedTreesCalculateBestFeatureSplitV2::Compute( OpKernelContext * const context) | ['context'] | void Compute(OpKernelContext* const context) override {
// node_id_range
const Tensor* node_id_range_t;
OP_REQUIRES_OK(context, context->input("node_id_range", &node_id_range_t));
const auto node_id_range = node_id_range_t->vec<int32>();
OP_REQUIRES(
context, node_id_range_t->dims() == 1,
errors::InvalidArgument("node_id_range must be a rank 1 tensor, but "
"given node_id_range has dims of ",
node_id_range_t->dims()));
OP_REQUIRES(context, node_id_range_t->dim_size(0) == 2,
errors::InvalidArgument(
"node_id_range must be a rank 1 tensor with shape=[2], but "
"given node_id_range has shape ",
node_id_range_t->dim_size(0), " on its first dim"));
const int32_t node_id_first = node_id_range(0); // Inclusive.
const int32_t node_id_last = node_id_range(1); // Exclusive.
// Get stats_summaries_list.
OpInputList stats_summaries_list;
OP_REQUIRES_OK(context, context->input_list("stats_summaries_list",
&stats_summaries_list));
// Infer dimensions of a stats_summary.
DCHECK_GT(stats_summaries_list.size(), 0);
const int32_t feature_dims = stats_summaries_list[0].dim_size(1);
// The last bucket is for default/missing value.
const int32_t num_buckets = stats_summaries_list[0].dim_size(2) - 1;
const int32_t logits_dim = logits_dim_;
const int32_t hessian_dim =
stats_summaries_list[0].dim_size(3) - logits_dim;
DCHECK_GT(hessian_dim, 0);
DCHECK_LE(hessian_dim, logits_dim * logits_dim);
// Vector of stats_summaries; each element is stats for feature of shape
// [max_splits, feature_dim, num_buckets, logits_dim + hessian_dim].
std::vector<TTypes<float, 4>::ConstTensor> stats_summaries;
DCHECK_EQ(stats_summaries_list.size(), num_features_);
stats_summaries.reserve(num_features_);
for (const auto& tensor : stats_summaries_list) {
stats_summaries.emplace_back(tensor.tensor<float, 4>());
}
// Split types.
const Tensor* split_types_t;
OP_REQUIRES_OK(context, context->input("split_types", &split_types_t));
const auto split_types = split_types_t->vec<tstring>();
DCHECK_EQ(split_types.size(), num_features_);
// Validate.
for (int i = 0; i < num_features_; ++i) {
if (!(split_types(i) == kInequalitySplit ||
split_types(i) == kEqualitySplit)) {
OP_REQUIRES_OK(
context,
errors::Aborted(
"Operation received an exception: Incorrect split type"));
}
}
// Feature ids.
const Tensor* candidate_feature_ids_t;
OP_REQUIRES_OK(context, context->input("candidate_feature_ids",
&candidate_feature_ids_t));
const auto candidate_feature_ids = candidate_feature_ids_t->vec<int32>();
DCHECK_EQ(candidate_feature_ids.size(), num_features_);
// L1, L2, tree_complexity, min_node_weight.
const Tensor* l1_t;
OP_REQUIRES_OK(context, context->input("l1", &l1_t));
const auto l1 = l1_t->scalar<float>()();
DCHECK_GE(l1, 0);
if (logits_dim_ > 1) {
// Multi-class L1 regularization not supported yet.
DCHECK_EQ(l1, 0);
}
const Tensor* l2_t;
OP_REQUIRES_OK(context, context->input("l2", &l2_t));
const auto l2 = l2_t->scalar<float>()();
DCHECK_GE(l2, 0);
const Tensor* tree_complexity_t;
OP_REQUIRES_OK(context,
context->input("tree_complexity", &tree_complexity_t));
const auto tree_complexity = tree_complexity_t->scalar<float>()();
const Tensor* min_node_weight_t;
OP_REQUIRES_OK(context,
context->input("min_node_weight", &min_node_weight_t));
const auto min_node_weight = min_node_weight_t->scalar<float>()();
std::vector<int32> output_node_ids;
std::vector<float> output_gains;
std::vector<int32> output_feature_ids;
std::vector<int32> output_feature_dimensions;
std::vector<int32> output_thresholds;
std::vector<Eigen::VectorXf> output_left_node_contribs;
std::vector<Eigen::VectorXf> output_right_node_contribs;
std::vector<string> output_split_types;
// TODO(tanzheny) parallelize the computation.
// Iterate each node and find the best gain per node.
float parent_gain;
for (int32_t node_id = node_id_first; node_id < node_id_last; ++node_id) {
float best_gain = std::numeric_limits<float>::lowest();
int32_t best_bucket;
int32_t best_f_id;
int32_t best_f_dim;
string best_split_type;
Eigen::VectorXf best_contrib_for_left(logits_dim);
Eigen::VectorXf best_contrib_for_right(logits_dim);
// Sum of gradient and hessian. Compute parent gain using first feature.
ConstMatrixMap stats_mat(&stats_summaries[0](node_id, 0, 0, 0),
num_buckets + 1, // Including default bucket.
logits_dim + hessian_dim);
const Eigen::VectorXf total_grad =
stats_mat.leftCols(logits_dim).colwise().sum();
const Eigen::VectorXf total_hess =
stats_mat.rightCols(hessian_dim).colwise().sum();
if (total_hess.norm() < min_node_weight) {
continue;
}
Eigen::VectorXf unused(logits_dim);
CalculateWeightsAndGains(total_grad, total_hess, l1, l2, &unused,
&parent_gain);
for (int f_idx = 0; f_idx < num_features_; ++f_idx) {
const string split_type = split_types(f_idx);
TTypes<float, 4>::ConstTensor stats_summary = stats_summaries[f_idx];
float f_best_gain = std::numeric_limits<float>::lowest();
int32_t f_best_bucket;
int32_t f_best_f_dim;
string f_best_split_type;
Eigen::VectorXf f_best_contrib_for_left(logits_dim);
Eigen::VectorXf f_best_contrib_for_right(logits_dim);
if (split_type == kInequalitySplit) {
CalculateBestInequalitySplit(
stats_summary, node_id, feature_dims, logits_dim, hessian_dim,
num_buckets, min_node_weight, l1, l2, &f_best_gain,
&f_best_bucket, &f_best_f_dim, &f_best_split_type,
&f_best_contrib_for_left, &f_best_contrib_for_right);
} else {
CalculateBestEqualitySplit(
stats_summary, total_grad, total_hess, node_id, feature_dims,
logits_dim, hessian_dim, num_buckets, l1, l2, &f_best_gain,
&f_best_bucket, &f_best_f_dim, &f_best_split_type,
&f_best_contrib_for_left, &f_best_contrib_for_right);
}
if (f_best_gain > best_gain) {
best_gain = f_best_gain;
best_f_id = candidate_feature_ids(f_idx);
best_f_dim = f_best_f_dim;
best_split_type = f_best_split_type;
best_bucket = f_best_bucket;
best_contrib_for_left = f_best_contrib_for_left;
best_contrib_for_right = f_best_contrib_for_right;
}
} // For feature id.
if (best_gain == std::numeric_limits<float>::lowest()) {
// Do not add the node if no split is found.
continue;
}
output_node_ids.push_back(node_id);
// Remove the parent gain for the parent node.
output_gains.push_back(best_gain - parent_gain);
output_feature_ids.push_back(best_f_id);
output_feature_dimensions.push_back(best_f_dim);
// Default direction is fixed for dense splits.
// TODO(tanzheny) account for default values.
output_split_types.push_back(best_split_type);
output_thresholds.push_back(best_bucket);
output_left_node_contribs.push_back(best_contrib_for_left);
output_right_node_contribs.push_back(best_contrib_for_right);
} // for node id.
const int num_nodes = output_node_ids.size();
// output_node_ids
Tensor* output_node_ids_t = nullptr;
OP_REQUIRES_OK(context, context->allocate_output("node_ids", {num_nodes},
&output_node_ids_t));
auto output_node_ids_vec = output_node_ids_t->vec<int32>();
// output_gains
Tensor* output_gains_t;
OP_REQUIRES_OK(context, context->allocate_output("gains", {num_nodes},
&output_gains_t));
auto output_gains_vec = output_gains_t->vec<float>();
// output_feature_ids
Tensor* output_features_ids_t;
OP_REQUIRES_OK(context, context->allocate_output("feature_ids", {num_nodes},
&output_features_ids_t));
auto output_features_vec = output_features_ids_t->vec<int32>();
// output_feature_dimensions
Tensor* output_feature_dimension_t;
OP_REQUIRES_OK(context,
context->allocate_output("feature_dimensions", {num_nodes},
&output_feature_dimension_t));
auto output_feature_dimensions_vec =
output_feature_dimension_t->vec<int32>();
// output_thresholds
Tensor* output_thresholds_t;
OP_REQUIRES_OK(context, context->allocate_output("thresholds", {num_nodes},
&output_thresholds_t));
auto output_thresholds_vec = output_thresholds_t->vec<int32>();
// output_left_node_contribs
Tensor* output_left_node_contribs_t;
OP_REQUIRES_OK(context, context->allocate_output(
"left_node_contribs", {num_nodes, logits_dim},
&output_left_node_contribs_t));
auto output_left_node_contribs_matrix =
output_left_node_contribs_t->matrix<float>();
// output_right_node_contribs
Tensor* output_right_node_contribs_t;
OP_REQUIRES_OK(context, context->allocate_output(
"right_node_contribs", {num_nodes, logits_dim},
&output_right_node_contribs_t));
auto output_right_node_contribs_matrix =
output_right_node_contribs_t->matrix<float>();
// split type
Tensor* output_split_types_t;
OP_REQUIRES_OK(
context, context->allocate_output("split_with_default_directions",
{num_nodes}, &output_split_types_t));
auto output_split_types_vec = output_split_types_t->vec<tstring>();
// Sets output tensors from vectors.
for (int i = 0; i < num_nodes; ++i) {
output_node_ids_vec(i) = output_node_ids[i];
output_features_vec(i) = output_feature_ids[i];
// Adjust the gains to penalize by tree complexity.
output_gains_vec(i) = output_gains[i] - tree_complexity;
output_feature_dimensions_vec(i) = output_feature_dimensions[i];
output_thresholds_vec(i) = output_thresholds[i];
for (int j = 0; j < logits_dim; ++j) {
output_left_node_contribs_matrix(i, j) =
output_left_node_contribs[i][j];
output_right_node_contribs_matrix(i, j) =
output_right_node_contribs[i][j];
}
output_split_types_vec(i) = output_split_types[i];
}
} | 1525 | True | 1 |
CVE-2021-41208 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:P/A:P | LOCAL | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 4.6 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 7.8 | HIGH | 1.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/5c8c9a8bfe750f9743d0c859bae112060b216f5c', 'name': 'https://github.com/tensorflow/tensorflow/commit/5c8c9a8bfe750f9743d0c859bae112060b216f5c', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-57wx-m983-2f88', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-57wx-m983-2f88', 'refsource': 'CONFIRM', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-476'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndExcluding': '2.6.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.5.0', 'versionEndExcluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.4.0', 'versionEndExcluding': '2.4.4', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "TensorFlow is an open source platform for machine learning. In affected versions the code for boosted trees in TensorFlow is still missing validation. As a result, attackers can trigger denial of service (via dereferencing `nullptr`s or via `CHECK`-failures) as well as abuse undefined behavior (binding references to `nullptr`s). An attacker can also read and write from heap buffers, depending on the API that gets used and the arguments that are passed to the call. Given that the boosted trees implementation in TensorFlow is unmaintained, it is recommend to no longer use these APIs. We will deprecate TensorFlow's boosted trees APIs in subsequent releases. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in supported range."}] | 2021-11-09T18:36Z | 2021-11-05T22:15Z | NULL Pointer Dereference | A NULL pointer dereference occurs when the application dereferences a pointer that it expects to be valid, but is NULL, typically causing a crash or exit. | NULL pointer dereference issues can occur through a number of flaws, including race conditions, and simple programming omissions.
| https://cwe.mitre.org/data/definitions/476.html | 0 | Rohan Jain | 2021-10-26 09:48:51-07:00 | Fixing security fixes in boosted trees ops
PiperOrigin-RevId: 405669548
Change-Id: Iae224d240d1779bcc02405c2fff99785644fbd0d | 5c8c9a8bfe750f9743d0c859bae112060b216f5c | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::BoostedTreesCalculateBestGainsPerFeatureOp::Compute | tensorflow::BoostedTreesCalculateBestGainsPerFeatureOp::Compute( OpKernelContext * const context) | ['context'] | void Compute(OpKernelContext* const context) override {
// node_id_range
const Tensor* node_id_range_t;
OP_REQUIRES_OK(context, context->input("node_id_range", &node_id_range_t));
OP_REQUIRES(
context, node_id_range_t->dims() == 1,
errors::InvalidArgument("node_id_range must be a rank 1 tensor, but "
"given node_id_range has dims of ",
node_id_range_t->dims()));
OP_REQUIRES(context, node_id_range_t->dim_size(0) == 2,
errors::InvalidArgument(
"node_id_range must be a rank 1 tensor with shape=[2], but "
"given node_id_range has shape ",
node_id_range_t->dim_size(0), " on its first dim"));
const auto node_id_range = node_id_range_t->vec<int32>();
const int32_t node_id_first = node_id_range(0); // inclusive
const int32_t node_id_last = node_id_range(1); // exclusive
// stats_summary_list
OpInputList stats_summary_list;
OP_REQUIRES_OK(context, context->input_list("stats_summary_list",
&stats_summary_list));
const int64_t num_buckets = stats_summary_list[0].dim_size(1);
// Check for single logit: 1 gradient + 1 hessian value.
DCHECK_EQ(stats_summary_list[0].dim_size(2), 2);
std::vector<TTypes<float, 3>::ConstTensor> stats_summary;
stats_summary.reserve(stats_summary_list.size());
for (const auto& tensor : stats_summary_list) {
stats_summary.emplace_back(tensor.tensor<float, 3>());
}
const Tensor* l1_t;
OP_REQUIRES_OK(context, context->input("l1", &l1_t));
const auto l1 = l1_t->scalar<float>()();
const Tensor* l2_t;
OP_REQUIRES_OK(context, context->input("l2", &l2_t));
const auto l2 = l2_t->scalar<float>()();
const Tensor* tree_complexity_t;
OP_REQUIRES_OK(context,
context->input("tree_complexity", &tree_complexity_t));
const auto tree_complexity = tree_complexity_t->scalar<float>()();
const Tensor* min_node_weight_t;
OP_REQUIRES_OK(context,
context->input("min_node_weight", &min_node_weight_t));
const auto min_node_weight = min_node_weight_t->scalar<float>()();
// Allocate output lists of tensors:
OpOutputList output_node_ids_list;
OP_REQUIRES_OK(
context, context->output_list("node_ids_list", &output_node_ids_list));
OpOutputList output_gains_list;
OP_REQUIRES_OK(context,
context->output_list("gains_list", &output_gains_list));
OpOutputList output_thresholds_list;
OP_REQUIRES_OK(context, context->output_list("thresholds_list",
&output_thresholds_list));
OpOutputList output_left_node_contribs_list;
OP_REQUIRES_OK(context,
context->output_list("left_node_contribs_list",
&output_left_node_contribs_list));
OpOutputList output_right_node_contribs_list;
OP_REQUIRES_OK(context,
context->output_list("right_node_contribs_list",
&output_right_node_contribs_list));
// Use identity later to convert float to Eigen::Matrix type for input to
// CalculateWeightsAndGains. This op only supports single dimension logits.
Eigen::MatrixXf identity;
identity.setIdentity(1, 1);
// Get the best split info per node for each feature.
for (int feature_idx = 0; feature_idx < num_features_; ++feature_idx) {
std::vector<float> cum_grad;
std::vector<float> cum_hess;
cum_grad.reserve(num_buckets);
cum_hess.reserve(num_buckets);
std::vector<int32> output_node_ids;
std::vector<float> output_gains;
std::vector<int32> output_thresholds;
std::vector<float> output_left_node_contribs;
std::vector<float> output_right_node_contribs;
for (int node_id = node_id_first; node_id < node_id_last; ++node_id) {
// Calculate gains.
cum_grad.clear();
cum_hess.clear();
float total_grad = 0.0;
float total_hess = 0.0;
for (int bucket = 0; bucket < num_buckets; ++bucket) {
// TODO(nponomareva): Consider multi-dimensional gradients/hessians.
total_grad += stats_summary[feature_idx](node_id, bucket, 0);
total_hess += stats_summary[feature_idx](node_id, bucket, 1);
cum_grad.push_back(total_grad);
cum_hess.push_back(total_hess);
}
// Check if node has enough of average hessian.
if (total_hess < min_node_weight) {
// Do not split the node because not enough avg hessian.
continue;
}
float best_gain = std::numeric_limits<float>::lowest();
float best_bucket = 0;
float best_contrib_for_left = 0.0;
float best_contrib_for_right = 0.0;
// Parent gain.
float parent_gain;
Eigen::VectorXf unused(1);
CalculateWeightsAndGains(total_grad * identity, total_hess * identity,
l1, l2, &unused, &parent_gain);
for (int bucket = 0; bucket < num_buckets; ++bucket) {
const float cum_grad_bucket = cum_grad[bucket];
const float cum_hess_bucket = cum_hess[bucket];
// Left child.
Eigen::VectorXf contrib_for_left(1);
float gain_for_left;
CalculateWeightsAndGains(cum_grad_bucket * identity,
cum_hess_bucket * identity, l1, l2,
&contrib_for_left, &gain_for_left);
// Right child.
// use contrib_for_right.
Eigen::VectorXf contrib_for_right(1);
float gain_for_right;
CalculateWeightsAndGains((total_grad - cum_grad_bucket) * identity,
(total_hess - cum_hess_bucket) * identity,
l1, l2, &contrib_for_right, &gain_for_right);
if (GainIsLarger(gain_for_left + gain_for_right, best_gain)) {
best_gain = gain_for_left + gain_for_right;
best_bucket = bucket;
best_contrib_for_left = contrib_for_left[0];
best_contrib_for_right = contrib_for_right[0];
}
} // for bucket
output_node_ids.push_back(node_id);
// Remove the parent gain for the parent node.
output_gains.push_back(best_gain - parent_gain);
output_thresholds.push_back(best_bucket);
output_left_node_contribs.push_back(best_contrib_for_left);
output_right_node_contribs.push_back(best_contrib_for_right);
} // for node_id
const int num_nodes = output_node_ids.size();
// output_node_ids
Tensor* output_node_ids_t;
OP_REQUIRES_OK(context,
output_node_ids_list.allocate(feature_idx, {num_nodes},
&output_node_ids_t));
auto output_node_ids_vec = output_node_ids_t->vec<int32>();
// output_gains
Tensor* output_gains_t;
OP_REQUIRES_OK(context, output_gains_list.allocate(
feature_idx, {num_nodes}, &output_gains_t));
auto output_gains_vec = output_gains_t->vec<float>();
// output_thresholds
Tensor* output_thresholds_t;
OP_REQUIRES_OK(context,
output_thresholds_list.allocate(feature_idx, {num_nodes},
&output_thresholds_t));
auto output_thresholds_vec = output_thresholds_t->vec<int32>();
// output_left_node_contribs
Tensor* output_left_node_contribs_t;
OP_REQUIRES_OK(context, output_left_node_contribs_list.allocate(
feature_idx, {num_nodes, 1},
&output_left_node_contribs_t));
auto output_left_node_contribs_matrix =
output_left_node_contribs_t->matrix<float>();
// output_right_node_contribs
Tensor* output_right_node_contribs_t;
OP_REQUIRES_OK(context, output_right_node_contribs_list.allocate(
feature_idx, {num_nodes, 1},
&output_right_node_contribs_t));
auto output_right_node_contribs_matrix =
output_right_node_contribs_t->matrix<float>();
// Sets output tensors from vectors.
for (int i = 0; i < num_nodes; ++i) {
output_node_ids_vec(i) = output_node_ids[i];
// Adjust the gains to penalize by tree complexity.
output_gains_vec(i) = output_gains[i] - tree_complexity;
output_thresholds_vec(i) = output_thresholds[i];
output_left_node_contribs_matrix(i, 0) = output_left_node_contribs[i];
// This op only supports 1-dimensional logits.
output_right_node_contribs_matrix(i, 0) = output_right_node_contribs[i];
}
} // for f
} | 1165 | True | 1 |
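The kernel in the record above checks the inner stats_summary dimension only with a debug-mode `DCHECK_EQ`, so release builds skip that validation entirely. Below is a minimal standalone sketch of the safer pattern (validate untrusted dimensions and return an error instead of asserting); the struct and function names are illustrative and this is not TensorFlow's actual patch.

#include <cstdint>
#include <optional>
#include <string>
#include <vector>

// Illustrative stand-in for a tensor shape: just its dimension sizes.
struct FakeShape { std::vector<int64_t> dims; };

// Returns an error message if the inner dimension is not the expected
// gradient + hessian pair; std::nullopt means the shape is acceptable.
std::optional<std::string> ValidateStatsSummary(const FakeShape& shape) {
  if (shape.dims.size() != 3) {
    return "stats_summary must be rank 3, got rank " +
           std::to_string(shape.dims.size());
  }
  if (shape.dims[2] != 2) {  // 1 gradient + 1 hessian value expected.
    return "stats_summary inner dimension must be 2, got " +
           std::to_string(shape.dims[2]);
  }
  return std::nullopt;  // Valid: the caller may proceed.
}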
CVE-2021-41208 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:P/A:P | LOCAL | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 4.6 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 7.8 | HIGH | 1.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/5c8c9a8bfe750f9743d0c859bae112060b216f5c', 'name': 'https://github.com/tensorflow/tensorflow/commit/5c8c9a8bfe750f9743d0c859bae112060b216f5c', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-57wx-m983-2f88', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-57wx-m983-2f88', 'refsource': 'CONFIRM', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-476'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndExcluding': '2.6.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.5.0', 'versionEndExcluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.4.0', 'versionEndExcluding': '2.4.4', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "TensorFlow is an open source platform for machine learning. In affected versions the code for boosted trees in TensorFlow is still missing validation. As a result, attackers can trigger denial of service (via dereferencing `nullptr`s or via `CHECK`-failures) as well as abuse undefined behavior (binding references to `nullptr`s). An attacker can also read and write from heap buffers, depending on the API that gets used and the arguments that are passed to the call. Given that the boosted trees implementation in TensorFlow is unmaintained, it is recommend to no longer use these APIs. We will deprecate TensorFlow's boosted trees APIs in subsequent releases. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in supported range."}] | 2021-11-09T18:36Z | 2021-11-05T22:15Z | NULL Pointer Dereference | A NULL pointer dereference occurs when the application dereferences a pointer that it expects to be valid, but is NULL, typically causing a crash or exit. | NULL pointer dereference issues can occur through a number of flaws, including race conditions, and simple programming omissions.
| https://cwe.mitre.org/data/definitions/476.html | 0 | Rohan Jain | 2021-10-26 09:48:51-07:00 | Fixing security fixes in boosted trees ops
PiperOrigin-RevId: 405669548
Change-Id: Iae224d240d1779bcc02405c2fff99785644fbd0d | 5c8c9a8bfe750f9743d0c859bae112060b216f5c | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::BoostedTreesSparseAggregateStatsOp::Compute | tensorflow::BoostedTreesSparseAggregateStatsOp::Compute( OpKernelContext * const context) | ['context'] | void Compute(OpKernelContext* const context) override {
// node_ids.
const Tensor* node_ids_t;
OP_REQUIRES_OK(context, context->input("node_ids", &node_ids_t));
const auto node_ids = node_ids_t->vec<int32>();
// gradients.
const Tensor* gradients_t;
OP_REQUIRES_OK(context, context->input("gradients", &gradients_t));
const auto gradients = gradients_t->matrix<float>();
// hessians.
const Tensor* hessians_t;
OP_REQUIRES_OK(context, context->input("hessians", &hessians_t));
const auto hessians = hessians_t->matrix<float>();
// feature indices.
const Tensor* feature_indices_t;
OP_REQUIRES_OK(context,
context->input("feature_indices", &feature_indices_t));
const auto feature_indices = feature_indices_t->matrix<int32>();
// feature values.
const Tensor* feature_values_t;
OP_REQUIRES_OK(context,
context->input("feature_values", &feature_values_t));
const auto feature_values = feature_values_t->vec<int32>();
// feature shape.
const Tensor* feature_shape_t;
OP_REQUIRES_OK(context, context->input("feature_shape", &feature_shape_t));
OP_REQUIRES(context, TensorShapeUtils::IsVector(feature_shape_t->shape()),
errors::InvalidArgument(
"Input shapes should be a vector but received shapes ",
feature_shape_t->shape().DebugString()));
const auto feature_shape = feature_shape_t->vec<int32>();
const int64_t batch_size = gradients_t->dim_size(0);
const int64_t logits_dims = gradients_t->dim_size(1);
const int64_t hessians_dims = hessians_t->dim_size(1);
const int64_t stats_dims = logits_dims + hessians_dims;
const int64_t num_sparse_entries = feature_indices_t->dim_size(0);
const int32_t feature_dims = feature_shape(1);
DCHECK_LE(num_sparse_entries, batch_size * feature_dims);
// Aggregate statistics info to map.
StatsPartitionMap stats_map;
int prev_instance = 0;
int prev_f_dim = -1;
for (int i = 0; i < num_sparse_entries; ++i) {
// the instance number within a batch
const int32_t instance = feature_indices(i, 0);
DCHECK_LE(instance, batch_size);
DCHECK_GE(instance, prev_instance);
// the node id within a tree.
const int32_t node_id = node_ids(instance);
DCHECK_LE(node_id, max_splits_);
// the feature dimension.
const int32_t f_dim = feature_indices(i, 1);
DCHECK_LE(f_dim, feature_dims);
// the bucket id of the value.
const int32_t bucket_id = feature_values(i);
DCHECK_LE(bucket_id, num_buckets_);
// Add statistics for the missing entries into default bucket.
// The last bucket is default bucket.
const int missing_entry_bucket = num_buckets_;
AddRangeStats(prev_instance, instance, prev_f_dim, f_dim, &stats_map,
gradients, hessians, node_ids, feature_dims,
missing_entry_bucket, logits_dims, stats_dims);
prev_instance = instance;
prev_f_dim = f_dim;
// Add statistics for the non-missing entry into
// (cur_instance, cur_f_dim, bucket_id).
AddInstanceStatsToMap(instance, f_dim, bucket_id, logits_dims, stats_dims,
&stats_map, gradients, hessians, node_ids);
}
AddRangeStats(prev_instance, batch_size - 1, prev_f_dim, feature_dims,
&stats_map, gradients, hessians, node_ids, feature_dims,
num_buckets_, logits_dims, stats_dims);
// Serialize statistics info map to tensor output.
const int64_t num_slots = stats_map.size() * stats_dims;
Tensor* summary_indices_t = nullptr;
OP_REQUIRES_OK(context,
context->allocate_output("stats_summary_indices",
TensorShape({num_slots, 4}),
&summary_indices_t));
auto summary_indices = summary_indices_t->matrix<int32>();
Tensor* summary_values_t = nullptr;
OP_REQUIRES_OK(context, context->allocate_output("stats_summary_values",
TensorShape({num_slots}),
&summary_values_t));
auto summary_values = summary_values_t->vec<float>();
int entry_index = 0;
for (auto& iter : stats_map) {
for (int stat_dim = 0; stat_dim < stats_dims; ++stat_dim) {
summary_indices(entry_index, 0) = iter.first.node_id;
summary_indices(entry_index, 1) = iter.first.feature_dim;
summary_indices(entry_index, 2) = iter.first.bucket_id;
summary_indices(entry_index, 3) = stat_dim;
summary_values(entry_index) = iter.second[stat_dim];
++entry_index;
}
}
Tensor* summary_shape_t = nullptr;
OP_REQUIRES_OK(
context, context->allocate_output("stats_summary_shape",
TensorShape({4}), &summary_shape_t));
auto summary_shape = summary_shape_t->vec<int32>();
summary_shape(0) = max_splits_;
summary_shape(1) = feature_dims;
summary_shape(2) = num_buckets_ + 1;
summary_shape(3) = stats_dims;
} | 768 | True | 1 |
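In the sparse-aggregation kernel above, the bounds of each untrusted sparse index are checked only with `DCHECK_LE`, which is a no-op in production builds. A standalone sketch of explicit bounds checking before use, with purely illustrative names, follows; it shows a generic pattern rather than the upstream fix.

#include <cstdint>
#include <vector>

// One (instance, feature_dim, bucket) triple taken from untrusted input.
struct SparseEntry { int64_t instance, feature_dim, bucket; };

// True only if every entry lies inside the declared dense shape; callers
// should reject the whole batch with an error when this returns false.
bool EntriesInBounds(const std::vector<SparseEntry>& entries,
                     int64_t batch_size, int64_t feature_dims,
                     int64_t num_buckets) {
  for (const SparseEntry& e : entries) {
    if (e.instance < 0 || e.instance >= batch_size) return false;
    if (e.feature_dim < 0 || e.feature_dim >= feature_dims) return false;
    // bucket == num_buckets is allowed: it is the default/missing bucket.
    if (e.bucket < 0 || e.bucket > num_buckets) return false;
  }
  return true;
}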
CVE-2022-23564 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/14fea662350e7c26eb5fe1be2ac31704e5682ee6', 'name': 'https://github.com/tensorflow/tensorflow/commit/14fea662350e7c26eb5fe1be2ac31704e5682ee6', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-8rcj-c8pj-v3m3', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-8rcj-c8pj-v3m3', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-617'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. When decoding a resource handle tensor from protobuf, a TensorFlow process can encounter cases where a `CHECK` assertion is invalidated based on user controlled arguments. This allows attackers to cause denial of services in TensorFlow processes. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T17:49Z | 2022-02-04T23:15Z | Reachable Assertion | The product contains an assert() or similar statement that can be triggered by an attacker, which leads to an application exit or other behavior that is more severe than necessary. |
While assertion is good for catching logic errors and reducing the chances of reaching more serious vulnerability conditions, it can still lead to a denial of service.
For example, if a server handles multiple simultaneous connections, and an assert() occurs in one single connection that causes all other connections to be dropped, this is a reachable assertion that leads to a denial of service.
| https://cwe.mitre.org/data/definitions/617.html | 0 | Mihai Maruseac | 2021-11-06 10:22:51-07:00 | Prevent `CHECK`-fail when decoding resource handles from proto
In certain scenarios, the proto might contain tensors that have too many elements (overflow). This is a `CHECK`-fail in general, but we should prevent this, given how many CVEs caused by that we have received this year (a large fraction of 200).
PiperOrigin-RevId: 408049766
Change-Id: I2ac20b247aa8ed9110846fbdb7a0a9401f2c168c | 14fea662350e7c26eb5fe1be2ac31704e5682ee6 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::DecodeResourceHandleList | tensorflow::DecodeResourceHandleList( std :: unique_ptr<port::StringListDecoder> d , ResourceHandle * ps , int64_t n) | ['d', 'ps', 'n'] | bool DecodeResourceHandleList(std::unique_ptr<port::StringListDecoder> d,
ResourceHandle* ps, int64_t n) {
std::vector<uint32> sizes(n);
if (!d->ReadSizes(&sizes)) return false;
ResourceHandleProto proto;
for (int i = 0; i < n; ++i) {
if (!proto.ParseFromArray(d->Data(sizes[i]), sizes[i])) {
return false;
}
ps[i].FromProto(proto);
}
return true;
} | 106 | True | 1 |
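The commit note in the record above points out that a proto can declare tensors with too many elements, overflowing the element count. A standalone sketch of the usual guard, multiplying dimension sizes with explicit overflow detection before trusting the product, follows; names and types are illustrative only.

#include <cstdint>
#include <limits>
#include <optional>
#include <vector>

// Returns the total element count for the given dimension sizes, or
// std::nullopt if any dimension is negative or the product would overflow
// int64_t. Callers should treat std::nullopt as an invalid, rejectable
// shape rather than asserting on it.
std::optional<int64_t> SafeNumElements(const std::vector<int64_t>& dims) {
  int64_t total = 1;
  for (int64_t d : dims) {
    if (d < 0) return std::nullopt;
    if (d != 0 && total > std::numeric_limits<int64_t>::max() / d) {
      return std::nullopt;  // multiplication would overflow
    }
    total *= d;
  }
  return total;
}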
CVE-2022-23564 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/14fea662350e7c26eb5fe1be2ac31704e5682ee6', 'name': 'https://github.com/tensorflow/tensorflow/commit/14fea662350e7c26eb5fe1be2ac31704e5682ee6', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-8rcj-c8pj-v3m3', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-8rcj-c8pj-v3m3', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-617'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. When decoding a resource handle tensor from protobuf, a TensorFlow process can encounter cases where a `CHECK` assertion is invalidated based on user controlled arguments. This allows attackers to cause denial of services in TensorFlow processes. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T17:49Z | 2022-02-04T23:15Z | Reachable Assertion | The product contains an assert() or similar statement that can be triggered by an attacker, which leads to an application exit or other behavior that is more severe than necessary. |
While assertion is good for catching logic errors and reducing the chances of reaching more serious vulnerability conditions, it can still lead to a denial of service.
For example, if a server handles multiple simultaneous connections, and an assert() occurs in one single connection that causes all other connections to be dropped, this is a reachable assertion that leads to a denial of service.
| https://cwe.mitre.org/data/definitions/617.html | 0 | Mihai Maruseac | 2021-11-06 10:22:51-07:00 | Prevent `CHECK`-fail when decoding resource handles from proto
In certain scenarios, the proto might contain tensors that have too many elements (overflow). This is a `CHECK`-fail in general, but we should prevent this, given how many CVEs caused by that we have received this year (a large fraction of 200).
PiperOrigin-RevId: 408049766
Change-Id: I2ac20b247aa8ed9110846fbdb7a0a9401f2c168c | 14fea662350e7c26eb5fe1be2ac31704e5682ee6 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::ResourceHandle::FromProto | tensorflow::ResourceHandle::FromProto( const ResourceHandleProto & proto) | ['proto'] | void ResourceHandle::FromProto(const ResourceHandleProto& proto) {
set_device(proto.device());
set_container(proto.container());
set_name(proto.name());
set_hash_code(proto.hash_code());
set_maybe_type_name(proto.maybe_type_name());
std::vector<DtypeAndPartialTensorShape> dtypes_and_shapes;
for (const auto& dtype_and_shape : proto.dtypes_and_shapes()) {
DataType dtype = dtype_and_shape.dtype();
PartialTensorShape shape(dtype_and_shape.shape());
dtypes_and_shapes.push_back(DtypeAndPartialTensorShape{dtype, shape});
}
dtypes_and_shapes_ = std::move(dtypes_and_shapes);
} | 119 | True | 1 |
CVE-2022-23564 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/14fea662350e7c26eb5fe1be2ac31704e5682ee6', 'name': 'https://github.com/tensorflow/tensorflow/commit/14fea662350e7c26eb5fe1be2ac31704e5682ee6', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-8rcj-c8pj-v3m3', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-8rcj-c8pj-v3m3', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-617'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. When decoding a resource handle tensor from protobuf, a TensorFlow process can encounter cases where a `CHECK` assertion is invalidated based on user controlled arguments. This allows attackers to cause denial of services in TensorFlow processes. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T17:49Z | 2022-02-04T23:15Z | Reachable Assertion | The product contains an assert() or similar statement that can be triggered by an attacker, which leads to an application exit or other behavior that is more severe than necessary. |
While assertion is good for catching logic errors and reducing the chances of reaching more serious vulnerability conditions, it can still lead to a denial of service.
For example, if a server handles multiple simultaneous connections, and an assert() occurs in one single connection that causes all other connections to be dropped, this is a reachable assertion that leads to a denial of service.
| https://cwe.mitre.org/data/definitions/617.html | 0 | Mihai Maruseac | 2021-11-06 10:22:51-07:00 | Prevent `CHECK`-fail when decoding resource handles from proto
In certain scenarios, the proto might contain tensors that have too many elements (overflow). This is a `CHECK`-fail in general, but we should prevent this, given how many CVEs caused by that we have received this year (a large fraction of 200).
PiperOrigin-RevId: 408049766
Change-Id: I2ac20b247aa8ed9110846fbdb7a0a9401f2c168c | 14fea662350e7c26eb5fe1be2ac31704e5682ee6 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::ResourceHandle::ParseFromString | tensorflow::ResourceHandle::ParseFromString( const string & s) | ['s'] | bool ResourceHandle::ParseFromString(const string& s) {
ResourceHandleProto proto;
const bool status = proto.ParseFromString(s);
if (status) FromProto(proto);
return status;
} | 37 | True | 1 |
CVE-2022-23564 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/14fea662350e7c26eb5fe1be2ac31704e5682ee6', 'name': 'https://github.com/tensorflow/tensorflow/commit/14fea662350e7c26eb5fe1be2ac31704e5682ee6', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-8rcj-c8pj-v3m3', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-8rcj-c8pj-v3m3', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-617'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. When decoding a resource handle tensor from protobuf, a TensorFlow process can encounter cases where a `CHECK` assertion is invalidated based on user controlled arguments. This allows attackers to cause denial of services in TensorFlow processes. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T17:49Z | 2022-02-04T23:15Z | Reachable Assertion | The product contains an assert() or similar statement that can be triggered by an attacker, which leads to an application exit or other behavior that is more severe than necessary. |
While assertion is good for catching logic errors and reducing the chances of reaching more serious vulnerability conditions, it can still lead to a denial of service.
For example, if a server handles multiple simultaneous connections, and an assert() occurs in one single connection that causes all other connections to be dropped, this is a reachable assertion that leads to a denial of service.
| https://cwe.mitre.org/data/definitions/617.html | 0 | Mihai Maruseac | 2021-11-06 10:22:51-07:00 | Prevent `CHECK`-fail when decoding resource handles from proto
In certain scenarios, the proto might contain tensors that have too many elements (overflow). This is a `CHECK`-fail in general, but we should prevent this, given how many CVEs caused by that we have received this year (a large fraction of 200).
PiperOrigin-RevId: 408049766
Change-Id: I2ac20b247aa8ed9110846fbdb7a0a9401f2c168c | 14fea662350e7c26eb5fe1be2ac31704e5682ee6 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::ResourceHandle::ResourceHandle | tensorflow::ResourceHandle::ResourceHandle( const ResourceHandleProto & proto) | ['proto'] | ResourceHandle::ResourceHandle(const ResourceHandleProto& proto) {
FromProto(proto);
} | 16 | True | 1 |
CVE-2022-23566 | False | False | False | False | AV:N/AC:L/Au:S/C:P/I:P/A:P | NETWORK | LOW | SINGLE | PARTIAL | PARTIAL | PARTIAL | 6.5 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 8.8 | HIGH | 2.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-5qw5-89mw-wcg2', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-5qw5-89mw-wcg2', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/framework/shape_inference.h#L394', 'name': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/framework/shape_inference.h#L394', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/97282c6d0d34476b6ba033f961590b783fa184cd', 'name': 'https://github.com/tensorflow/tensorflow/commit/97282c6d0d34476b6ba033f961590b783fa184cd', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/grappler/costs/graph_properties.cc#L1132-L1141', 'name': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/grappler/costs/graph_properties.cc#L1132-L1141', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. TensorFlow is vulnerable to a heap OOB write in `Grappler`. The `set_output` function writes to an array at the specified index. Hence, this gives a malicious user a write primitive. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-10T16:05Z | 2022-02-04T23:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2021-11-08 05:48:40-08:00 | Prevent a crash due to heap OOB write in grappler.
PiperOrigin-RevId: 408318417
Change-Id: If095feb8c001e3a8ac4a85b7387b81e8309df47d | 97282c6d0d34476b6ba033f961590b783fa184cd | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::grappler::SymbolicShapeRefiner::SetUnknownShape | tensorflow::grappler::SymbolicShapeRefiner::SetUnknownShape( const NodeDef * node , int output_port) | ['node', 'output_port'] | Status SetUnknownShape(const NodeDef* node, int output_port) {
shape_inference::ShapeHandle shape =
GetUnknownOutputShape(node, output_port);
InferenceContext* ctx = GetContext(node);
if (ctx == nullptr) {
return errors::InvalidArgument("Missing context");
}
ctx->set_output(output_port, shape);
return Status::OK();
} | 65 | True | 1 |
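The advisory text in the record above describes `set_output` as writing to an array at a caller-chosen index, which becomes a write primitive when the index is out of range. A minimal standalone sketch of a bounds-checked write follows; the container and function names are illustrative, not Grappler's real API.

#include <cstddef>
#include <vector>

// Writes value at index only if the index is inside the vector; returns
// false (and leaves the vector untouched) otherwise, instead of letting
// an attacker-chosen index write past the allocation.
bool SetOutputChecked(std::vector<int>& outputs, std::size_t index, int value) {
  if (index >= outputs.size()) return false;  // reject out-of-range writes
  outputs[index] = value;
  return true;
}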
CVE-2022-23565 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/c2b31ff2d3151acb230edc3f5b1832d2c713a9e0', 'name': 'https://github.com/tensorflow/tensorflow/commit/c2b31ff2d3151acb230edc3f5b1832d2c713a9e0', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-4v5p-v5h9-6xjx', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-4v5p-v5h9-6xjx', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-617'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. An attacker can trigger denial of service via assertion failure by altering a `SavedModel` on disk such that `AttrDef`s of some operation are duplicated. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T17:38Z | 2022-02-04T23:15Z | Reachable Assertion | The product contains an assert() or similar statement that can be triggered by an attacker, which leads to an application exit or other behavior that is more severe than necessary. |
While assertion is good for catching logic errors and reducing the chances of reaching more serious vulnerability conditions, it can still lead to a denial of service.
For example, if a server handles multiple simultaneous connections, and an assert() occurs in one single connection that causes all other connections to be dropped, this is a reachable assertion that leads to a denial of service.
| https://cwe.mitre.org/data/definitions/617.html | 0 | Mihai Maruseac | 2021-11-08 10:14:10-08:00 | Remove a `DCHECK`-fail, log an error instead.
`DCHECK` in debug mode results in crashes. TensorFlow has had multiple vulnerabilities due to this.
Outside of debug mode, `DCHECK` is a no-op.
A better alternative is to report an error to the log buffer and continue. This should happen both in debug mode and in prod mode.
PiperOrigin-RevId: 408375925
Change-Id: Id5b3e19c73f3fbe0cc4bba26ca44ff9607bb6356 | c2b31ff2d3151acb230edc3f5b1832d2c713a9e0 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::RepeatedAttrDefEqual | tensorflow::RepeatedAttrDefEqual( const protobuf :: RepeatedPtrField<OpDef::AttrDef> & a1 , const protobuf :: RepeatedPtrField<OpDef::AttrDef> & a2) | ['a1', 'a2'] | bool RepeatedAttrDefEqual(
const protobuf::RepeatedPtrField<OpDef::AttrDef>& a1,
const protobuf::RepeatedPtrField<OpDef::AttrDef>& a2) {
std::unordered_map<string, const OpDef::AttrDef*> a1_set;
for (const OpDef::AttrDef& def : a1) {
DCHECK(a1_set.find(def.name()) == a1_set.end())
<< "AttrDef names must be unique, but '" << def.name()
<< "' appears more than once";
a1_set[def.name()] = &def;
}
for (const OpDef::AttrDef& def : a2) {
auto iter = a1_set.find(def.name());
if (iter == a1_set.end()) return false;
if (!AttrDefEqual(*iter->second, def)) return false;
a1_set.erase(iter);
}
if (!a1_set.empty()) return false;
return true;
} | 178 | True | 1 |
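`RepeatedAttrDefEqual` above enforces unique `AttrDef` names only through a `DCHECK`, so duplicated attributes in a doctored `SavedModel` abort debug builds. The commit message in the same record suggests reporting the problem and continuing; below is a standalone sketch of that pattern with illustrative names.

#include <string>
#include <unordered_set>
#include <vector>

// Returns false if any name appears more than once. Callers can then
// surface an error for the malformed input instead of crashing on an
// assertion failure.
bool NamesAreUnique(const std::vector<std::string>& names) {
  std::unordered_set<std::string> seen;
  for (const std::string& name : names) {
    if (!seen.insert(name).second) return false;  // duplicate found
  }
  return true;
}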
CVE-2022-23572 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/cb164786dc891ea11d3a900e90367c339305dc7b', 'name': 'https://github.com/tensorflow/tensorflow/commit/cb164786dc891ea11d3a900e90367c339305dc7b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-rww7-2gpw-fv6j', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-rww7-2gpw-fv6j', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/framework/shape_inference.cc#L168-L174', 'name': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/framework/shape_inference.cc#L168-L174', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-754'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. Under certain scenarios, TensorFlow can fail to specialize a type during shape inference. This case is covered by the `DCHECK` function however, `DCHECK` is a no-op in production builds and an assertion failure in debug builds. In the first case execution proceeds to the `ValueOrDie` line. This results in an assertion failure as `ret` contains an error `Status`, not a value. In the second case we also get a crash due to the assertion failure. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, and TensorFlow 2.6.3, as these are also affected and still in supported range.'}] | 2022-02-10T16:17Z | 2022-02-04T23:15Z | Improper Check for Unusual or Exceptional Conditions | The software does not check or incorrectly checks for unusual or exceptional conditions that are not expected to occur frequently during day to day operation of the software. |
The programmer may assume that certain events or conditions will never occur or do not need to be worried about, such as low memory conditions, lack of access to resources due to restrictive permissions, or misbehaving clients or components. However, attackers may intentionally trigger these unusual conditions, thus violating the programmer's assumptions, possibly introducing instability, incorrect behavior, or a vulnerability.
Note that this entry is not exclusively about the use of exceptions and exception handling, which are mechanisms for both checking and handling unusual or unexpected conditions.
| https://cwe.mitre.org/data/definitions/754.html | 0 | Mihai Maruseac | 2021-11-08 10:28:34-08:00 | Properly handle the case where `SpecializeType()` returns an error `Status`.
If the error case in `SpecializeType()` is reached, then we would get a crash when trying to access the value of an erroneous `StatusOr` object
PiperOrigin-RevId: 408380069
Change-Id: If3c3fc876dcf9384d5ec7a4985adc68c23ea7318 | cb164786dc891ea11d3a900e90367c339305dc7b | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::shape_inference::InferenceContext::PreInputInit | tensorflow::shape_inference::InferenceContext::PreInputInit( const OpDef & op_def , const std :: vector<const Tensor*> & input_tensors , const std :: vector<ShapeHandle> & input_tensors_as_shapes) | ['op_def', 'input_tensors', 'input_tensors_as_shapes'] | void InferenceContext::PreInputInit(
const OpDef& op_def, const std::vector<const Tensor*>& input_tensors,
const std::vector<ShapeHandle>& input_tensors_as_shapes) {
// TODO(mdan): This is also done at graph construction. Run only here instead?
const auto ret = full_type::SpecializeType(attrs_, op_def);
DCHECK(ret.status().ok()) << "while instantiating types: " << ret.status();
ret_types_ = ret.ValueOrDie();
input_tensors_ = input_tensors;
input_tensors_as_shapes_ = input_tensors_as_shapes;
construction_status_ =
NameRangesForNode(attrs_, op_def, &input_name_map_, &output_name_map_);
if (!construction_status_.ok()) return;
int num_outputs = 0;
for (const auto& e : output_name_map_) {
num_outputs = std::max(num_outputs, e.second.second);
}
outputs_.assign(num_outputs, nullptr);
output_handle_shapes_and_types_.resize(num_outputs);
} | 158 | True | 1 |
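In `PreInputInit` above, `ret.ValueOrDie()` is still reached when `SpecializeType()` returns an error, because the preceding `DCHECK` vanishes in production builds. A standalone sketch of checking the wrapped result and propagating the failure before unwrapping follows, using a tiny stand-in type rather than TensorFlow's `StatusOr`.

#include <optional>
#include <string>

// Minimal stand-in for a StatusOr-like type: either a value or an error.
struct ResultOrError {
  std::optional<int> value;
  std::string error;  // non-empty only when value is absent
};

// Propagates the error instead of unconditionally unwrapping. Returns an
// error string ("" on success) and writes the value through `out`.
std::string UnwrapOrPropagate(const ResultOrError& r, int* out) {
  if (!r.value.has_value()) {
    return "while instantiating types: " + r.error;  // bubble the failure up
  }
  *out = *r.value;
  return "";
}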
CVE-2022-23570 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/8a513cec4bec15961fbfdedcaa5376522980455c', 'name': 'https://github.com/tensorflow/tensorflow/commit/8a513cec4bec15961fbfdedcaa5376522980455c', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-9p77-mmrw-69c7', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-9p77-mmrw-69c7', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/framework/full_type_util.cc#L104-L106', 'name': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/framework/full_type_util.cc#L104-L106', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-476'}, {'lang': 'en', 'value': 'CWE-617'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. When decoding a tensor from protobuf, TensorFlow might do a null-dereference if attributes of some mutable arguments to some operations are missing from the proto. This is guarded by a `DCHECK`. However, `DCHECK` is a no-op in production builds and an assertion failure in debug builds. In the first case execution proceeds to the dereferencing of the null pointer, whereas in the second case it results in a crash due to the assertion failure. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, and TensorFlow 2.6.3, as these are also affected and still in supported range.'}] | 2022-02-10T16:07Z | 2022-02-04T23:15Z | NULL Pointer Dereference | A NULL pointer dereference occurs when the application dereferences a pointer that it expects to be valid, but is NULL, typically causing a crash or exit. | NULL pointer dereference issues can occur through a number of flaws, including race conditions, and simple programming omissions.
| https://cwe.mitre.org/data/definitions/476.html | 0 | Mihai Maruseac | 2021-11-08 10:35:47-08:00 | Prevent null dereference read in `SpecializeType()`
For some adversarial protos, the attribute for a key might not exist.
PiperOrigin-RevId: 408382090
Change-Id: Ie7eabe532c9ff280fce5dce1f6cdb93c76c2e040 | 8a513cec4bec15961fbfdedcaa5376522980455c | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::full_type::SpecializeType | tensorflow::full_type::SpecializeType( const AttrSlice & attrs , const OpDef & op_def) | ['attrs', 'op_def'] | StatusOr<FullTypeDef> SpecializeType(const AttrSlice& attrs,
const OpDef& op_def) {
FullTypeDef ft;
ft.set_type_id(TFT_PRODUCT);
for (int i = 0; i < op_def.output_arg_size(); i++) {
auto* t = ft.add_args();
*t = op_def.output_arg(i).experimental_full_type();
// Resolve dependent types. The convention for op registrations is to use
// attributes as type variables.
// See https://www.tensorflow.org/guide/create_op#type_polymorphism.
// Once the op signature can be defined entirely in FullType, this
// convention can be deprecated.
//
// Note: While this code performs some basic verifications, it generally
// assumes consistent op defs and attributes. If more complete
// verifications are needed, they should be done by separately, and in a
// way that can be reused for type inference.
for (int j = 0; j < t->args_size(); j++) {
auto* arg = t->mutable_args(i);
if (arg->type_id() == TFT_VAR) {
const auto* attr = attrs.Find(arg->s());
DCHECK(attr != nullptr);
if (attr->value_case() == AttrValue::kList) {
const auto& attr_list = attr->list();
arg->set_type_id(TFT_PRODUCT);
for (int i = 0; i < attr_list.type_size(); i++) {
map_dtype_to_tensor(attr_list.type(i), arg->add_args());
}
} else if (attr->value_case() == AttrValue::kType) {
map_dtype_to_tensor(attr->type(), arg);
} else {
return Status(error::UNIMPLEMENTED,
absl::StrCat("unknown attribute type",
attrs.DebugString(), " key=", arg->s()));
}
arg->clear_s();
}
}
}
return ft;
} | 269 | True | 1 |
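In `SpecializeType` above, `attrs.Find(arg->s())` can return `nullptr` when the proto omits the named attribute, and only a `DCHECK` guards the subsequent dereference. A standalone sketch of a null-checked lookup on a generic map follows; the names are illustrative only.

#include <string>
#include <unordered_map>

// Looks up `key` and reports failure instead of dereferencing a missing
// entry; mirrors checking a Find()-style pointer before use.
bool LookupAttr(const std::unordered_map<std::string, int>& attrs,
                const std::string& key, int* out) {
  auto it = attrs.find(key);
  if (it == attrs.end()) return false;  // attribute absent: caller reports an error
  *out = it->second;
  return true;
}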
CVE-2022-23570 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/8a513cec4bec15961fbfdedcaa5376522980455c', 'name': 'https://github.com/tensorflow/tensorflow/commit/8a513cec4bec15961fbfdedcaa5376522980455c', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-9p77-mmrw-69c7', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-9p77-mmrw-69c7', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/framework/full_type_util.cc#L104-L106', 'name': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/framework/full_type_util.cc#L104-L106', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-476'}, {'lang': 'en', 'value': 'CWE-617'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. When decoding a tensor from protobuf, TensorFlow might do a null-dereference if attributes of some mutable arguments to some operations are missing from the proto. This is guarded by a `DCHECK`. However, `DCHECK` is a no-op in production builds and an assertion failure in debug builds. In the first case execution proceeds to the dereferencing of the null pointer, whereas in the second case it results in a crash due to the assertion failure. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, and TensorFlow 2.6.3, as these are also affected and still in supported range.'}] | 2022-02-10T16:07Z | 2022-02-04T23:15Z | Reachable Assertion | The product contains an assert() or similar statement that can be triggered by an attacker, which leads to an application exit or other behavior that is more severe than necessary. |
While assertion is good for catching logic errors and reducing the chances of reaching more serious vulnerability conditions, it can still lead to a denial of service.
For example, if a server handles multiple simultaneous connections, and an assert() occurs in one single connection that causes all other connections to be dropped, this is a reachable assertion that leads to a denial of service.
| https://cwe.mitre.org/data/definitions/617.html | 0 | Mihai Maruseac | 2021-11-08 10:35:47-08:00 | Prevent null dereference read in `SpecializeType()`
For some adversarial protos, the attribute for a key might not exist.
PiperOrigin-RevId: 408382090
Change-Id: Ie7eabe532c9ff280fce5dce1f6cdb93c76c2e040 | 8a513cec4bec15961fbfdedcaa5376522980455c | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::full_type::SpecializeType | tensorflow::full_type::SpecializeType( const AttrSlice & attrs , const OpDef & op_def) | ['attrs', 'op_def'] | StatusOr<FullTypeDef> SpecializeType(const AttrSlice& attrs,
const OpDef& op_def) {
FullTypeDef ft;
ft.set_type_id(TFT_PRODUCT);
for (int i = 0; i < op_def.output_arg_size(); i++) {
auto* t = ft.add_args();
*t = op_def.output_arg(i).experimental_full_type();
// Resolve dependent types. The convention for op registrations is to use
// attributes as type variables.
// See https://www.tensorflow.org/guide/create_op#type_polymorphism.
// Once the op signature can be defined entirely in FullType, this
// convention can be deprecated.
//
// Note: While this code performs some basic verifications, it generally
// assumes consistent op defs and attributes. If more complete
// verifications are needed, they should be done by separately, and in a
// way that can be reused for type inference.
for (int j = 0; j < t->args_size(); j++) {
auto* arg = t->mutable_args(i);
if (arg->type_id() == TFT_VAR) {
const auto* attr = attrs.Find(arg->s());
DCHECK(attr != nullptr);
if (attr->value_case() == AttrValue::kList) {
const auto& attr_list = attr->list();
arg->set_type_id(TFT_PRODUCT);
for (int i = 0; i < attr_list.type_size(); i++) {
map_dtype_to_tensor(attr_list.type(i), arg->add_args());
}
} else if (attr->value_case() == AttrValue::kType) {
map_dtype_to_tensor(attr->type(), arg);
} else {
return Status(error::UNIMPLEMENTED,
absl::StrCat("unknown attribute type",
attrs.DebugString(), " key=", arg->s()));
}
arg->clear_s();
}
}
}
return ft;
} | 269 | True | 1 |
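The record above hinges on `DCHECK` being compiled out of release builds, so the "guarded" null dereference still executes in production. Below is a minimal standalone C++ sketch of that behavior and of the check-and-return pattern the fix adopts; all names are hypothetical and this is not TensorFlow code.

#include <cassert>
#include <iostream>
#include <map>
#include <string>

// DCHECK-style assertion: active in debug builds, compiled away when NDEBUG
// is defined (the usual release configuration), so it protects nothing in
// production.
#ifdef NDEBUG
#define MY_DCHECK(cond) ((void)0)
#else
#define MY_DCHECK(cond) assert(cond)
#endif

const std::string* FindAttr(const std::map<std::string, std::string>& attrs,
                            const std::string& key) {
  auto it = attrs.find(key);
  return it == attrs.end() ? nullptr : &it->second;
}

// Vulnerable pattern: in a release build the assertion vanishes and the
// dereference runs on a null pointer whenever the key is missing.
std::string GetAttrUnchecked(const std::map<std::string, std::string>& attrs,
                             const std::string& key) {
  const std::string* attr = FindAttr(attrs, key);
  MY_DCHECK(attr != nullptr);
  return *attr;  // null dereference if the attribute is absent
}

// Hardened pattern: the missing key becomes an error result the caller can
// handle, instead of relying on a debug-only assertion.
bool GetAttrChecked(const std::map<std::string, std::string>& attrs,
                    const std::string& key, std::string* value) {
  const std::string* attr = FindAttr(attrs, key);
  if (attr == nullptr) return false;
  *value = *attr;
  return true;
}

int main() {
  std::map<std::string, std::string> attrs = {{"dtype", "float32"}};
  std::cout << GetAttrUnchecked(attrs, "dtype") << '\n';  // key exists: fine

  std::string value;
  if (!GetAttrChecked(attrs, "T", &value)) {  // adversarial input: "T" missing
    std::cerr << "Could not find an attribute for key T\n";
    return 1;
  }
  std::cout << value << '\n';
  return 0;
}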
CVE-2022-23574 | False | False | False | False | AV:N/AC:L/Au:S/C:P/I:P/A:P | NETWORK | LOW | SINGLE | PARTIAL | PARTIAL | PARTIAL | 6.5 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 8.8 | HIGH | 2.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-77gp-3h4r-6428', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-77gp-3h4r-6428', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/framework/full_type_util.cc#L81-L102', 'name': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/framework/full_type_util.cc#L81-L102', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/0657c83d08845cc434175934c642299de2c0f042', 'name': 'https://github.com/tensorflow/tensorflow/commit/0657c83d08845cc434175934c642299de2c0f042', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "Tensorflow is an Open Source Machine Learning Framework. There is a typo in TensorFlow's `SpecializeType` which results in heap OOB read/write. Due to a typo, `arg` is initialized to the `i`th mutable argument in a loop where the loop index is `j`. Hence it is possible to assign to `arg` from outside the vector of arguments. Since this is a mutable proto value, it allows both read and write to outside of bounds data. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, and TensorFlow 2.6.3, as these are also affected and still in supported range."}] | 2022-02-10T15:45Z | 2022-02-04T23:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Mihai Maruseac | 2021-11-09 04:44:43-08:00 | Fix heap OOB read/write due to incorrect indexing.
PiperOrigin-RevId: 408578046
Change-Id: Ifc9ffea49e5890f55fcb2c27568611052c3ddcfa | 0657c83d08845cc434175934c642299de2c0f042 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::full_type::SpecializeType | tensorflow::full_type::SpecializeType( const AttrSlice & attrs , const OpDef & op_def) | ['attrs', 'op_def'] | StatusOr<FullTypeDef> SpecializeType(const AttrSlice& attrs,
const OpDef& op_def) {
FullTypeDef ft;
ft.set_type_id(TFT_PRODUCT);
for (int i = 0; i < op_def.output_arg_size(); i++) {
auto* t = ft.add_args();
*t = op_def.output_arg(i).experimental_full_type();
// Resolve dependent types. The convention for op registrations is to use
// attributes as type variables.
// See https://www.tensorflow.org/guide/create_op#type_polymorphism.
// Once the op signature can be defined entirely in FullType, this
// convention can be deprecated.
//
// Note: While this code performs some basic verifications, it generally
// assumes consistent op defs and attributes. If more complete
// verifications are needed, they should be done by separately, and in a
// way that can be reused for type inference.
for (int j = 0; j < t->args_size(); j++) {
auto* arg = t->mutable_args(i);
if (arg->type_id() == TFT_VAR) {
const auto* attr = attrs.Find(arg->s());
if (attr == nullptr) {
return Status(
error::INVALID_ARGUMENT,
absl::StrCat("Could not find an attribute for key ", arg->s()));
}
if (attr->value_case() == AttrValue::kList) {
const auto& attr_list = attr->list();
arg->set_type_id(TFT_PRODUCT);
for (int i = 0; i < attr_list.type_size(); i++) {
map_dtype_to_tensor(attr_list.type(i), arg->add_args());
}
} else if (attr->value_case() == AttrValue::kType) {
map_dtype_to_tensor(attr->type(), arg);
} else {
return Status(error::UNIMPLEMENTED,
absl::StrCat("unknown attribute type",
attrs.DebugString(), " key=", arg->s()));
}
arg->clear_s();
}
}
}
return ft;
} | 291 | True | 1 |
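The out-of-bounds read in this record comes from using the outer loop index `i` where the inner index `j` was intended (`t->mutable_args(i)` inside the `j` loop). A short standalone sketch of that indexing-typo class, using bounds-checked access so the mistake surfaces immediately instead of silently reading out of bounds; names are illustrative only.

#include <iostream>
#include <vector>

int main() {
  // The outer container has more elements than some inner ones, so reusing
  // the outer index `i` inside the inner loop can run past the end.
  std::vector<std::vector<int>> outputs = {{1}, {2}, {3, 4}};

  for (size_t i = 0; i < outputs.size(); ++i) {
    std::vector<int>& args = outputs[i];
    for (size_t j = 0; j < args.size(); ++j) {
      // Correct: index with `j`. The typo pattern is args[i], which is out
      // of bounds whenever i >= args.size().
      int& arg = args.at(j);  // .at() bounds-checks; operator[] would not
      arg *= 10;
    }
  }

  for (const auto& row : outputs)
    for (int x : row) std::cout << x << ' ';
  std::cout << '\n';  // prints: 10 20 30 40
  return 0;
}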
CVE-2022-23574 | False | False | False | False | AV:N/AC:L/Au:S/C:P/I:P/A:P | NETWORK | LOW | SINGLE | PARTIAL | PARTIAL | PARTIAL | 6.5 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 8.8 | HIGH | 2.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-77gp-3h4r-6428', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-77gp-3h4r-6428', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/framework/full_type_util.cc#L81-L102', 'name': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/framework/full_type_util.cc#L81-L102', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/0657c83d08845cc434175934c642299de2c0f042', 'name': 'https://github.com/tensorflow/tensorflow/commit/0657c83d08845cc434175934c642299de2c0f042', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "Tensorflow is an Open Source Machine Learning Framework. There is a typo in TensorFlow's `SpecializeType` which results in heap OOB read/write. Due to a typo, `arg` is initialized to the `i`th mutable argument in a loop where the loop index is `j`. Hence it is possible to assign to `arg` from outside the vector of arguments. Since this is a mutable proto value, it allows both read and write to outside of bounds data. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, and TensorFlow 2.6.3, as these are also affected and still in supported range."}] | 2022-02-10T15:45Z | 2022-02-04T23:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2021-11-09 04:44:43-08:00 | Fix heap OOB read/write due to incorrect indexing.
PiperOrigin-RevId: 408578046
Change-Id: Ifc9ffea49e5890f55fcb2c27568611052c3ddcfa | 0657c83d08845cc434175934c642299de2c0f042 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::full_type::SpecializeType | tensorflow::full_type::SpecializeType( const AttrSlice & attrs , const OpDef & op_def) | ['attrs', 'op_def'] | StatusOr<FullTypeDef> SpecializeType(const AttrSlice& attrs,
const OpDef& op_def) {
FullTypeDef ft;
ft.set_type_id(TFT_PRODUCT);
for (int i = 0; i < op_def.output_arg_size(); i++) {
auto* t = ft.add_args();
*t = op_def.output_arg(i).experimental_full_type();
// Resolve dependent types. The convention for op registrations is to use
// attributes as type variables.
// See https://www.tensorflow.org/guide/create_op#type_polymorphism.
// Once the op signature can be defined entirely in FullType, this
// convention can be deprecated.
//
// Note: While this code performs some basic verifications, it generally
// assumes consistent op defs and attributes. If more complete
// verifications are needed, they should be done by separately, and in a
// way that can be reused for type inference.
for (int j = 0; j < t->args_size(); j++) {
auto* arg = t->mutable_args(i);
if (arg->type_id() == TFT_VAR) {
const auto* attr = attrs.Find(arg->s());
if (attr == nullptr) {
return Status(
error::INVALID_ARGUMENT,
absl::StrCat("Could not find an attribute for key ", arg->s()));
}
if (attr->value_case() == AttrValue::kList) {
const auto& attr_list = attr->list();
arg->set_type_id(TFT_PRODUCT);
for (int i = 0; i < attr_list.type_size(); i++) {
map_dtype_to_tensor(attr_list.type(i), arg->add_args());
}
} else if (attr->value_case() == AttrValue::kType) {
map_dtype_to_tensor(attr->type(), arg);
} else {
return Status(error::UNIMPLEMENTED,
absl::StrCat("unknown attribute type",
attrs.DebugString(), " key=", arg->s()));
}
arg->clear_s();
}
}
}
return ft;
} | 291 | True | 1 |
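The same CVE recorded under CWE-787: because the typo'd index selects a mutable element, it also allows writes outside the argument list. A hedged standalone sketch of an accessor that validates the index and reports an error instead of handing out a reference to out-of-bounds memory; the helper is hypothetical, not the protobuf or TensorFlow API.

#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Refuses to return a mutable element for an out-of-range index.
bool MutableArg(std::vector<std::string>& args, std::size_t index,
                std::string** out, std::string* error) {
  if (index >= args.size()) {
    *error = "index " + std::to_string(index) + " out of range (size " +
             std::to_string(args.size()) + ")";
    return false;
  }
  *out = &args[index];
  return true;
}

int main() {
  std::vector<std::string> args = {"TFT_VAR"};
  std::string* arg = nullptr;
  std::string error;
  // Index 2 models the outer loop index leaking into the inner loop.
  if (!MutableArg(args, 2, &arg, &error)) {
    std::cerr << "rejected: " << error << '\n';  // no out-of-bounds write
    return 1;
  }
  *arg = "TFT_PRODUCT";
  return 0;
}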
CVE-2022-23576 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/b9bd6cfd1c50e6807846af9a86f9b83cafc9c8ae', 'name': 'https://github.com/tensorflow/tensorflow/commit/b9bd6cfd1c50e6807846af9a86f9b83cafc9c8ae', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/grappler/costs/op_level_cost_estimator.cc#L1598-L1617', 'name': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/grappler/costs/op_level_cost_estimator.cc#L1598-L1617', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-wm93-f238-7v37', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-wm93-f238-7v37', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-190'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The implementation of `OpLevelCostEstimator::CalculateOutputSize` is vulnerable to an integer overflow if an attacker can create an operation which would involve tensors with large enough number of elements. We can have a large enough number of dimensions in `output_shape.dim()` or just a small number of dimensions being large enough to cause an overflow in the multiplication. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-10T15:10Z | 2022-02-04T23:15Z | Integer Overflow or Wraparound | The software performs a calculation that can produce an integer overflow or wraparound, when the logic assumes that the resulting value will always be larger than the original value. This can introduce other weaknesses when the calculation is used for resource management or execution control. | An integer overflow or wraparound occurs when an integer value is incremented to a value that is too large to store in the associated representation. When this occurs, the value may wrap to become a very small or negative number. While this may be intended behavior in circumstances that rely on wrapping, it can have security consequences if the wrap is unexpected. This is especially the case if the integer overflow can be triggered using user-supplied inputs. This becomes security-critical when the result is used to control looping, make a security decision, or determine the offset or size in behaviors such as memory allocation, copying, concatenation, etc.
| https://cwe.mitre.org/data/definitions/190.html | 0 | Mihai Maruseac | 2021-11-09 14:05:53-08:00 | Prevent integer overflow in `OpLevelCostEstimator::CalculateOutputSize`.
In order to not change the API, we return a negative value in case of overflow. A better fix is to change the API to return a status instead.
PiperOrigin-RevId: 408701427
Change-Id: Idf31e7f0bf18ca824d084fdd355e1f653f145c20 | b9bd6cfd1c50e6807846af9a86f9b83cafc9c8ae | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::grappler::OpLevelCostEstimator::CalculateOutputSize | tensorflow::grappler::OpLevelCostEstimator::CalculateOutputSize( const OpInfo & op_info , bool * found_unknown_shapes) | ['op_info', 'found_unknown_shapes'] | int64_t OpLevelCostEstimator::CalculateOutputSize(const OpInfo& op_info,
bool* found_unknown_shapes) {
int64_t total_output_size = 0;
// Use float as default for calculations.
for (const auto& output : op_info.outputs()) {
DataType dt = output.dtype();
const auto& original_output_shape = output.shape();
int64_t output_size = DataTypeSize(BaseType(dt));
int num_dims = std::max(1, original_output_shape.dim_size());
auto output_shape = MaybeGetMinimumShape(original_output_shape, num_dims,
found_unknown_shapes);
for (const auto& dim : output_shape.dim()) {
output_size *= dim.size();
}
total_output_size += output_size;
VLOG(1) << "Output Size: " << output_size
<< " Total Output Size:" << total_output_size;
}
return total_output_size;
} | 142 | True | 1 |
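The overflow in `CalculateOutputSize` is the running product of dimension sizes times the dtype size. The commit message adopts a "negative result means invalid" convention; below is a standalone sketch of that convention with an explicit pre-multiplication overflow check (plain C++, hypothetical function name, not the TensorFlow helper).

#include <cstdint>
#include <iostream>
#include <limits>
#include <vector>

// Returns the element count of a shape, or -1 if any dimension is unknown or
// the product would overflow int64_t.
int64_t CheckedNumElements(const std::vector<int64_t>& dims) {
  int64_t count = 1;
  for (int64_t d : dims) {
    if (d < 0) return -1;  // unknown dimension
    if (d != 0 && count > std::numeric_limits<int64_t>::max() / d) {
      return -1;           // multiplication would overflow
    }
    count *= d;
  }
  return count;
}

int main() {
  std::cout << CheckedNumElements({4, 1024, 1024}) << '\n';         // 4194304
  std::cout << CheckedNumElements({1LL << 32, 1LL << 32}) << '\n';  // -1
  return 0;
}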
CVE-2022-23575 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/grappler/costs/op_level_cost_estimator.cc#L1552-L1558', 'name': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/grappler/costs/op_level_cost_estimator.cc#L1552-L1558', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/fcd18ce3101f245b083b30655c27b239dc72221e', 'name': 'https://github.com/tensorflow/tensorflow/commit/fcd18ce3101f245b083b30655c27b239dc72221e', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-c94w-c95p-phf8', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-c94w-c95p-phf8', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-190'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The implementation of `OpLevelCostEstimator::CalculateTensorSize` is vulnerable to an integer overflow if an attacker can create an operation which would involve a tensor with large enough number of elements. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-10T15:49Z | 2022-02-04T23:15Z | Integer Overflow or Wraparound | The software performs a calculation that can produce an integer overflow or wraparound, when the logic assumes that the resulting value will always be larger than the original value. This can introduce other weaknesses when the calculation is used for resource management or execution control. | An integer overflow or wraparound occurs when an integer value is incremented to a value that is too large to store in the associated representation. When this occurs, the value may wrap to become a very small or negative number. While this may be intended behavior in circumstances that rely on wrapping, it can have security consequences if the wrap is unexpected. This is especially the case if the integer overflow can be triggered using user-supplied inputs. This becomes security-critical when the result is used to control looping, make a security decision, or determine the offset or size in behaviors such as memory allocation, copying, concatenation, etc.
| https://cwe.mitre.org/data/definitions/190.html | 0 | Mihai Maruseac | 2021-11-09 14:54:52-08:00 | Prevent integer overflow in `OpLevelCostEstimator::CalculateTensorSize`.
In order to not change the API, we return a negative value in case of overflow. A better fix is to change the API to return a status instead.
PiperOrigin-RevId: 408713061
Change-Id: I3771475b0c72a2844a3854086966562fd33f2da5 | fcd18ce3101f245b083b30655c27b239dc72221e | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::grappler::OpLevelCostEstimator::CalculateTensorSize | tensorflow::grappler::OpLevelCostEstimator::CalculateTensorSize( const OpInfo :: TensorProperties & tensor , bool * found_unknown_shapes) | ['tensor', 'found_unknown_shapes'] | int64_t OpLevelCostEstimator::CalculateTensorSize(
const OpInfo::TensorProperties& tensor, bool* found_unknown_shapes) {
int64_t count = CalculateTensorElementCount(tensor, found_unknown_shapes);
int size = DataTypeSize(BaseType(tensor.dtype()));
VLOG(2) << "Count: " << count << " DataTypeSize: " << size;
return count * size;
} | 64 | True | 1 |
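`CalculateTensorSize` multiplies the element count by `DataTypeSize`, so the final `count * size` can wrap even when the count itself was computed safely. A short standalone sketch of that last multiplication with an overflow guard; names are illustrative.

#include <cstdint>
#include <iostream>
#include <limits>

// count * element_size with an explicit overflow check; a negative result
// signals "too large" rather than silently wrapping around.
int64_t CheckedTensorBytes(int64_t count, int64_t element_size) {
  if (count < 0 || element_size <= 0) return -1;
  if (count != 0 &&
      element_size > std::numeric_limits<int64_t>::max() / count) {
    return -1;
  }
  return count * element_size;
}

int main() {
  std::cout << CheckedTensorBytes(1000000, 4) << '\n';           // 4000000
  std::cout << CheckedTensorBytes(int64_t{1} << 62, 8) << '\n';  // -1
  return 0;
}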
CVE-2022-23581 | False | False | False | False | AV:N/AC:L/Au:N/C:N/I:N/A:P | NETWORK | LOW | NONE | NONE | NONE | PARTIAL | 5.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-fq86-3f29-px2c', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-fq86-3f29-px2c', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1fb27733f943295d874417630edd3b38b34ce082', 'name': 'https://github.com/tensorflow/tensorflow/commit/1fb27733f943295d874417630edd3b38b34ce082', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/240655511cd3e701155f944a972db71b6c0b1bb6', 'name': 'https://github.com/tensorflow/tensorflow/commit/240655511cd3e701155f944a972db71b6c0b1bb6', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/ebc1a2ffe5a7573d905e99bd0ee3568ee07c12c1', 'name': 'https://github.com/tensorflow/tensorflow/commit/ebc1a2ffe5a7573d905e99bd0ee3568ee07c12c1', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/grappler/optimizers/constant_folding.cc#L1687-L1742', 'name': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/grappler/optimizers/constant_folding.cc#L1687-L1742', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-617'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The Grappler optimizer in TensorFlow can be used to cause a denial of service by altering a `SavedModel` such that `IsSimplifiableReshape` would trigger `CHECK` failures. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-10T01:34Z | 2022-02-04T23:15Z | Reachable Assertion | The product contains an assert() or similar statement that can be triggered by an attacker, which leads to an application exit or other behavior that is more severe than necessary. |
While assertion is good for catching logic errors and reducing the chances of reaching more serious vulnerability conditions, it can still lead to a denial of service.
For example, if a server handles multiple simultaneous connections, and an assert() occurs in one single connection that causes all other connections to be dropped, this is a reachable assertion that leads to a denial of service.
| https://cwe.mitre.org/data/definitions/617.html | 0 | Mihai Maruseac | 2021-11-11 08:58:30-08:00 | Make `IsSimplifiableReshape` return `Status` instead of `bool`.
This is to allow removing `CHECK`-fails in subsequent commits.
PiperOrigin-RevId: 409160987
Change-Id: I3f050218a3832271395c4372a0b8ea05f1c03d80 | ebc1a2ffe5a7573d905e99bd0ee3568ee07c12c1 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::grappler::ConstantFolding::IsSimplifiableReshape | tensorflow::grappler::ConstantFolding::IsSimplifiableReshape( const NodeDef & node , const GraphProperties & properties) const | ['node', 'properties'] | bool ConstantFolding::IsSimplifiableReshape(
const NodeDef& node, const GraphProperties& properties) const {
if (!IsReshape(node)) {
return false;
}
CHECK_LE(2, node.input_size());
const NodeDef* new_shape = node_map_->GetNode(node.input(1));
if (!IsReallyConstant(*new_shape)) {
return false;
}
TensorVector outputs;
auto outputs_cleanup = gtl::MakeCleanup([&outputs] {
for (const auto& output : outputs) {
delete output.tensor;
}
});
Status s = EvaluateNode(*new_shape, TensorVector(), &outputs);
if (!s.ok()) {
return false;
}
CHECK_EQ(1, outputs.size());
const std::vector<OpInfo::TensorProperties>& props =
properties.GetInputProperties(node.name());
if (props.empty()) {
return false;
}
const OpInfo::TensorProperties& prop = props[0];
if (prop.dtype() == DT_INVALID) {
return false;
}
const PartialTensorShape shape(prop.shape());
if (!shape.IsFullyDefined()) {
return false;
}
PartialTensorShape new_dims;
if (outputs[0]->dtype() == DT_INT32) {
std::vector<int32> shp;
for (int i = 0; i < outputs[0]->NumElements(); ++i) {
int32_t dim = outputs[0]->flat<int32>()(i);
shp.push_back(dim);
}
TF_CHECK_OK(TensorShapeUtils::MakeShape(shp, &new_dims));
} else {
std::vector<int64_t> shp;
for (int i = 0; i < outputs[0]->NumElements(); ++i) {
int64_t dim = outputs[0]->flat<int64_t>()(i);
shp.push_back(dim);
}
TF_CHECK_OK(TensorShapeUtils::MakeShape(shp, &new_dims));
}
return shape.IsCompatibleWith(new_dims);
} | 402 | True | 1 |
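This commit only changes the validator's signature from `bool` to `Status` so later commits can turn `CHECK`-fails into returned errors. A tiny standalone sketch of that refactor shape, using a minimal stand-in status type rather than TensorFlow's own `Status`.

#include <iostream>
#include <string>
#include <utility>

// Minimal stand-in for a status type: failures carry a message instead of
// aborting the process the way a CHECK-fail does.
struct SimpleStatus {
  bool ok;
  std::string message;
  static SimpleStatus OK() { return {true, ""}; }
  static SimpleStatus Internal(std::string m) { return {false, std::move(m)}; }
};

SimpleStatus ValidateReshapeInputs(int input_size) {
  if (input_size < 2) {
    return SimpleStatus::Internal(
        "Reshape node must have at least 2 inputs but has " +
        std::to_string(input_size));
  }
  return SimpleStatus::OK();
}

int main() {
  SimpleStatus s = ValidateReshapeInputs(1);  // malformed SavedModel node
  if (!s.ok) {
    std::cerr << "optimization skipped: " << s.message << '\n';
    return 0;  // graceful degradation, no process abort
  }
  return 0;
}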
CVE-2022-23581 | False | False | False | False | AV:N/AC:L/Au:N/C:N/I:N/A:P | NETWORK | LOW | NONE | NONE | NONE | PARTIAL | 5.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-fq86-3f29-px2c', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-fq86-3f29-px2c', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1fb27733f943295d874417630edd3b38b34ce082', 'name': 'https://github.com/tensorflow/tensorflow/commit/1fb27733f943295d874417630edd3b38b34ce082', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/240655511cd3e701155f944a972db71b6c0b1bb6', 'name': 'https://github.com/tensorflow/tensorflow/commit/240655511cd3e701155f944a972db71b6c0b1bb6', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/ebc1a2ffe5a7573d905e99bd0ee3568ee07c12c1', 'name': 'https://github.com/tensorflow/tensorflow/commit/ebc1a2ffe5a7573d905e99bd0ee3568ee07c12c1', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/grappler/optimizers/constant_folding.cc#L1687-L1742', 'name': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/grappler/optimizers/constant_folding.cc#L1687-L1742', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-617'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The Grappler optimizer in TensorFlow can be used to cause a denial of service by altering a `SavedModel` such that `IsSimplifiableReshape` would trigger `CHECK` failures. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-10T01:34Z | 2022-02-04T23:15Z | Reachable Assertion | The product contains an assert() or similar statement that can be triggered by an attacker, which leads to an application exit or other behavior that is more severe than necessary. |
While assertion is good for catching logic errors and reducing the chances of reaching more serious vulnerability conditions, it can still lead to a denial of service.
For example, if a server handles multiple simultaneous connections, and an assert() occurs in one single connection that causes all other connections to be dropped, this is a reachable assertion that leads to a denial of service.
| https://cwe.mitre.org/data/definitions/617.html | 0 | Mihai Maruseac | 2021-11-11 08:58:30-08:00 | Make `IsSimplifiableReshape` return `Status` instead of `bool`.
This is to allow removing `CHECK`-fails in subsequent commits.
PiperOrigin-RevId: 409160987
Change-Id: I3f050218a3832271395c4372a0b8ea05f1c03d80 | ebc1a2ffe5a7573d905e99bd0ee3568ee07c12c1 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::grappler::ConstantFolding::SimplifyReshape | tensorflow::grappler::ConstantFolding::SimplifyReshape( const GraphProperties & properties , bool use_shape_info , NodeDef * node) | ['properties', 'use_shape_info', 'node'] | bool ConstantFolding::SimplifyReshape(const GraphProperties& properties,
bool use_shape_info, NodeDef* node) {
if (!use_shape_info || node->attr().count("T") == 0 ||
!IsSimplifiableReshape(*node, properties)) {
return false;
}
DataType output_type = node->attr().at("T").type();
node->set_op("Identity");
EraseRegularNodeAttributes(node);
(*node->mutable_attr())["T"].set_type(output_type);
*node->mutable_input(1) = AsControlDependency(node->input(1));
return true;
} | 118 | True | 1 |
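Caller side of the same change: `SimplifyReshape` only needs a yes/no answer, so once the validator returns a status, a failure simply means "leave the node untouched". A hedged standalone sketch of that control flow; all names are illustrative.

#include <iostream>
#include <string>

struct SimpleStatus {
  bool ok;
  std::string message;
};

// Stand-in validator: fails when the shape input is not a constant.
SimpleStatus IsSimplifiableReshapeSketch(bool constant_shape_input) {
  if (!constant_shape_input) return {false, "shape input is not a constant"};
  return {true, ""};
}

bool SimplifyReshapeSketch(bool use_shape_info, bool constant_shape_input) {
  if (!use_shape_info) return false;
  SimpleStatus s = IsSimplifiableReshapeSketch(constant_shape_input);
  if (!s.ok) {
    std::cerr << "not simplifiable: " << s.message << '\n';
    return false;  // keep the original node; nothing aborts
  }
  // ... rewrite the node to Identity here ...
  return true;
}

int main() {
  std::cout << SimplifyReshapeSketch(true, false) << '\n';  // 0
  std::cout << SimplifyReshapeSketch(true, true) << '\n';   // 1
  return 0;
}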
CVE-2022-23581 | False | False | False | False | AV:N/AC:L/Au:N/C:N/I:N/A:P | NETWORK | LOW | NONE | NONE | NONE | PARTIAL | 5.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-fq86-3f29-px2c', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-fq86-3f29-px2c', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1fb27733f943295d874417630edd3b38b34ce082', 'name': 'https://github.com/tensorflow/tensorflow/commit/1fb27733f943295d874417630edd3b38b34ce082', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/240655511cd3e701155f944a972db71b6c0b1bb6', 'name': 'https://github.com/tensorflow/tensorflow/commit/240655511cd3e701155f944a972db71b6c0b1bb6', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/ebc1a2ffe5a7573d905e99bd0ee3568ee07c12c1', 'name': 'https://github.com/tensorflow/tensorflow/commit/ebc1a2ffe5a7573d905e99bd0ee3568ee07c12c1', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/grappler/optimizers/constant_folding.cc#L1687-L1742', 'name': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/grappler/optimizers/constant_folding.cc#L1687-L1742', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-617'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The Grappler optimizer in TensorFlow can be used to cause a denial of service by altering a `SavedModel` such that `IsSimplifiableReshape` would trigger `CHECK` failures. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-10T01:34Z | 2022-02-04T23:15Z | Reachable Assertion | The product contains an assert() or similar statement that can be triggered by an attacker, which leads to an application exit or other behavior that is more severe than necessary. |
While assertion is good for catching logic errors and reducing the chances of reaching more serious vulnerability conditions, it can still lead to a denial of service.
For example, if a server handles multiple simultaneous connections, and an assert() occurs in one single connection that causes all other connections to be dropped, this is a reachable assertion that leads to a denial of service.
| https://cwe.mitre.org/data/definitions/617.html | 0 | Mihai Maruseac | 2021-11-11 09:16:14-08:00 | Remove `CHECK`-fails from `IsSimplifiableReshape`
PiperOrigin-RevId: 409164987
Change-Id: I58c7dd459ff348c3dbae95e00c4c5e63b30a4e65 | 1fb27733f943295d874417630edd3b38b34ce082 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::grappler::ConstantFolding::IsSimplifiableReshape | tensorflow::grappler::ConstantFolding::IsSimplifiableReshape( const NodeDef & node , const GraphProperties & properties) const | ['node', 'properties'] | Status ConstantFolding::IsSimplifiableReshape(
const NodeDef& node, const GraphProperties& properties) const {
if (!IsReshape(node)) {
return errors::Internal("Node ", node.name(), " is not a Reshape node");
}
CHECK_LE(2, node.input_size());
const NodeDef* new_shape = node_map_->GetNode(node.input(1));
if (!IsReallyConstant(*new_shape)) {
return errors::Internal("Node ", node.name(), " has shape ",
new_shape->DebugString(),
" which is not a constant");
}
TensorVector outputs;
auto outputs_cleanup = gtl::MakeCleanup([&outputs] {
for (const auto& output : outputs) {
delete output.tensor;
}
});
Status s = EvaluateNode(*new_shape, TensorVector(), &outputs);
if (!s.ok()) {
return errors::Internal("Could not evaluate node ", node.name());
}
CHECK_EQ(1, outputs.size());
const std::vector<OpInfo::TensorProperties>& props =
properties.GetInputProperties(node.name());
if (props.empty()) {
return errors::Internal("Node ", node.name(), " has no properties");
}
const OpInfo::TensorProperties& prop = props[0];
if (prop.dtype() == DT_INVALID) {
return errors::Internal("Node ", node.name(), " has property ",
prop.DebugString(), " with invalid dtype");
}
const PartialTensorShape shape(prop.shape());
if (!shape.IsFullyDefined()) {
return errors::Internal("Node ", node.name(), " has property ",
prop.DebugString(), " with shape ",
shape.DebugString(), " which is not fully defined");
}
PartialTensorShape new_dims;
if (outputs[0]->dtype() == DT_INT32) {
std::vector<int32> shp;
for (int i = 0; i < outputs[0]->NumElements(); ++i) {
int32_t dim = outputs[0]->flat<int32>()(i);
shp.push_back(dim);
}
TF_CHECK_OK(TensorShapeUtils::MakeShape(shp, &new_dims));
} else {
std::vector<int64_t> shp;
for (int i = 0; i < outputs[0]->NumElements(); ++i) {
int64_t dim = outputs[0]->flat<int64_t>()(i);
shp.push_back(dim);
}
TF_CHECK_OK(TensorShapeUtils::MakeShape(shp, &new_dims));
}
if (!shape.IsCompatibleWith(new_dims)) {
return errors::Internal("Expected shape ", shape.DebugString(),
"to be compatible with ", new_dims.DebugString());
}
return Status::OK();
} | 543 | True | 1 |
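One detail worth noting in the rewritten function: the `gtl::MakeCleanup` guard means every newly added early error return still frees the evaluated tensors. A standalone C++17 sketch of that scope-guard idea (simplified; not the gtl implementation).

#include <iostream>
#include <utility>

// Runs the stored callable when the guard goes out of scope, on every return
// path, including early error returns.
template <typename F>
class ScopeCleanup {
 public:
  explicit ScopeCleanup(F f) : f_(std::move(f)) {}
  ScopeCleanup(const ScopeCleanup&) = delete;
  ScopeCleanup& operator=(const ScopeCleanup&) = delete;
  ~ScopeCleanup() { f_(); }
 private:
  F f_;
};

bool Evaluate(bool fail_early) {
  int* buffer = new int[16];
  ScopeCleanup cleanup([buffer] { delete[] buffer; });  // always runs
  if (fail_early) return false;  // early error return; buffer still freed
  buffer[0] = 42;
  return true;
}

int main() {
  std::cout << Evaluate(true) << ' ' << Evaluate(false) << '\n';  // 0 1
  return 0;
}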
CVE-2022-23581 | False | False | False | False | AV:N/AC:L/Au:N/C:N/I:N/A:P | NETWORK | LOW | NONE | NONE | NONE | PARTIAL | 5.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-fq86-3f29-px2c', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-fq86-3f29-px2c', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1fb27733f943295d874417630edd3b38b34ce082', 'name': 'https://github.com/tensorflow/tensorflow/commit/1fb27733f943295d874417630edd3b38b34ce082', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/240655511cd3e701155f944a972db71b6c0b1bb6', 'name': 'https://github.com/tensorflow/tensorflow/commit/240655511cd3e701155f944a972db71b6c0b1bb6', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/ebc1a2ffe5a7573d905e99bd0ee3568ee07c12c1', 'name': 'https://github.com/tensorflow/tensorflow/commit/ebc1a2ffe5a7573d905e99bd0ee3568ee07c12c1', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/grappler/optimizers/constant_folding.cc#L1687-L1742', 'name': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/grappler/optimizers/constant_folding.cc#L1687-L1742', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-617'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The Grappler optimizer in TensorFlow can be used to cause a denial of service by altering a `SavedModel` such that `IsSimplifiableReshape` would trigger `CHECK` failures. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-10T01:34Z | 2022-02-04T23:15Z | Reachable Assertion | The product contains an assert() or similar statement that can be triggered by an attacker, which leads to an application exit or other behavior that is more severe than necessary. |
While assertion is good for catching logic errors and reducing the chances of reaching more serious vulnerability conditions, it can still lead to a denial of service.
For example, if a server handles multiple simultaneous connections, and an assert() occurs in one single connection that causes all other connections to be dropped, this is a reachable assertion that leads to a denial of service.
| https://cwe.mitre.org/data/definitions/617.html | 0 | Mihai Maruseac | 2021-11-11 09:24:31-08:00 | Eliminate `CHECK`-fails from `IsSimplifiableReshape` via `MakeShape(<invalid shape>)`
PiperOrigin-RevId: 409166738
Change-Id: I7f0a3590b8acae3f3e3e2fe636e1f5ef285693cf | 240655511cd3e701155f944a972db71b6c0b1bb6 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::grappler::ConstantFolding::IsSimplifiableReshape | tensorflow::grappler::ConstantFolding::IsSimplifiableReshape( const NodeDef & node , const GraphProperties & properties) const | ['node', 'properties'] | Status ConstantFolding::IsSimplifiableReshape(
const NodeDef& node, const GraphProperties& properties) const {
if (!IsReshape(node)) {
return errors::Internal("Node ", node.name(), " is not a Reshape node");
}
if (2 > node.input_size()) {
return errors::Internal("Node ", node.name(),
" must have at most 2 inputs but has ",
node.input_size());
}
const NodeDef* new_shape = node_map_->GetNode(node.input(1));
if (!IsReallyConstant(*new_shape)) {
return errors::Internal("Node ", node.name(), " has shape ",
new_shape->DebugString(),
" which is not a constant");
}
TensorVector outputs;
auto outputs_cleanup = gtl::MakeCleanup([&outputs] {
for (const auto& output : outputs) {
delete output.tensor;
}
});
Status s = EvaluateNode(*new_shape, TensorVector(), &outputs);
if (!s.ok()) {
return errors::Internal("Could not evaluate node ", node.name());
}
if (outputs.size() != 1) {
return errors::Internal("Node ", node.name(),
" must have exactly 1 output but has ",
outputs.size());
}
const std::vector<OpInfo::TensorProperties>& props =
properties.GetInputProperties(node.name());
if (props.empty()) {
return errors::Internal("Node ", node.name(), " has no properties");
}
const OpInfo::TensorProperties& prop = props[0];
if (prop.dtype() == DT_INVALID) {
return errors::Internal("Node ", node.name(), " has property ",
prop.DebugString(), " with invalid dtype");
}
const PartialTensorShape shape(prop.shape());
if (!shape.IsFullyDefined()) {
return errors::Internal("Node ", node.name(), " has property ",
prop.DebugString(), " with shape ",
shape.DebugString(), " which is not fully defined");
}
PartialTensorShape new_dims;
if (outputs[0]->dtype() == DT_INT32) {
std::vector<int32> shp;
for (int i = 0; i < outputs[0]->NumElements(); ++i) {
int32_t dim = outputs[0]->flat<int32>()(i);
shp.push_back(dim);
}
TF_CHECK_OK(TensorShapeUtils::MakeShape(shp, &new_dims));
} else {
std::vector<int64_t> shp;
for (int i = 0; i < outputs[0]->NumElements(); ++i) {
int64_t dim = outputs[0]->flat<int64_t>()(i);
shp.push_back(dim);
}
TF_CHECK_OK(TensorShapeUtils::MakeShape(shp, &new_dims));
}
if (!shape.IsCompatibleWith(new_dims)) {
return errors::Internal("Expected shape ", shape.DebugString(),
"to be compatible with ", new_dims.DebugString());
}
return Status::OK();
} | 589 | True | 1 |
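The hazard this commit targets is building a shape from unvalidated dimensions and wrapping it in `TF_CHECK_OK`, so an invalid shape in the `SavedModel` still aborts the process. A standalone sketch of validating dimensions (sign and element-count overflow) up front and reporting an error instead; names are hypothetical, not the TensorFlow shape API.

#include <cstdint>
#include <iostream>
#include <limits>
#include <string>
#include <vector>

// Validates attacker-controlled dimensions before any shape object is built.
bool MakeShapeChecked(const std::vector<int64_t>& dims, int64_t* num_elements,
                      std::string* error) {
  int64_t count = 1;
  for (int64_t d : dims) {
    if (d < 0) {
      *error = "negative dimension " + std::to_string(d);
      return false;
    }
    if (d != 0 && count > std::numeric_limits<int64_t>::max() / d) {
      *error = "shape has too many elements";
      return false;
    }
    count *= d;
  }
  *num_elements = count;
  return true;
}

int main() {
  int64_t n = 0;
  std::string error;
  if (!MakeShapeChecked({2, -7, 3}, &n, &error)) {
    std::cerr << "rejected shape: " << error << '\n';  // graceful rejection
  }
  if (MakeShapeChecked({2, 3, 4}, &n, &error)) {
    std::cout << n << '\n';  // 24
  }
  return 0;
}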
CVE-2022-23579 | False | False | False | False | AV:N/AC:L/Au:N/C:N/I:N/A:P | NETWORK | LOW | NONE | NONE | NONE | PARTIAL | 5.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/grappler/optimizers/dependency_optimizer.cc#L59-L98', 'name': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/grappler/optimizers/dependency_optimizer.cc#L59-L98', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/92dba16749fae36c246bec3f9ba474d9ddeb7662', 'name': 'https://github.com/tensorflow/tensorflow/commit/92dba16749fae36c246bec3f9ba474d9ddeb7662', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-5f2r-qp73-37mr', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-5f2r-qp73-37mr', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-617'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The Grappler optimizer in TensorFlow can be used to cause a denial of service by altering a `SavedModel` such that `SafeToRemoveIdentity` would trigger `CHECK` failures. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-10T01:28Z | 2022-02-04T23:15Z | Reachable Assertion | The product contains an assert() or similar statement that can be triggered by an attacker, which leads to an application exit or other behavior that is more severe than necessary. |
While assertion is good for catching logic errors and reducing the chances of reaching more serious vulnerability conditions, it can still lead to a denial of service.
For example, if a server handles multiple simultaneous connections, and an assert() occurs in one single connection that causes all other connections to be dropped, this is a reachable assertion that leads to a denial of service.
| https://cwe.mitre.org/data/definitions/617.html | 0 | Mihai Maruseac | 2021-11-11 10:43:29-08:00 | Prevent a null-pointer dereference / `CHECK`-fail in grappler.
PiperOrigin-RevId: 409187354
Change-Id: I369c249cca32e6c56ec193f0ebbf2f2768fc7d43 | 92dba16749fae36c246bec3f9ba474d9ddeb7662 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::grappler::DependencyOptimizer::SafeToRemoveIdentity | tensorflow::grappler::DependencyOptimizer::SafeToRemoveIdentity( const NodeDef & node) const | ['node'] | bool DependencyOptimizer::SafeToRemoveIdentity(const NodeDef& node) const {
if (!IsIdentity(node) && !IsIdentityN(node)) {
return true;
}
if (nodes_to_preserve_.find(node.name()) != nodes_to_preserve_.end()) {
return false;
}
if (!fetch_nodes_known_) {
// The output values of this node may be needed.
return false;
}
if (node.input_size() < 1) {
// Node lacks input, is invalid
return false;
}
const NodeDef* input = node_map_->GetNode(NodeName(node.input(0)));
CHECK(input != nullptr) << "node = " << node.name()
<< " input = " << node.input(0);
// Don't remove Identity nodes corresponding to Variable reads or following
// Recv.
if (IsVariable(*input) || IsRecv(*input)) {
return false;
}
for (const auto& consumer : node_map_->GetOutputs(node.name())) {
if (node.input_size() > 1 && (IsRetval(*consumer) || IsMerge(*consumer))) {
return false;
}
if (IsSwitch(*input)) {
for (const string& consumer_input : consumer->input()) {
if (consumer_input == AsControlDependency(node.name())) {
return false;
}
}
}
}
return true;
} | 242 | True | 1 |
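The `CHECK(input != nullptr)` above aborts the whole process when a crafted graph references a fan-in node that does not exist in the node map. A standalone sketch of the defensive alternative, which just answers "not safe to remove" for such nodes; the types and names are illustrative, not the grappler classes.

#include <iostream>
#include <string>
#include <unordered_map>

struct Node {
  std::string name;
};

bool SafeToRemoveIdentitySketch(
    const std::unordered_map<std::string, Node>& node_map,
    const std::string& fanin_name) {
  auto it = node_map.find(fanin_name);
  if (it == node_map.end()) {
    std::cerr << "missing fan-in node '" << fanin_name
              << "'; keeping the Identity node\n";
    return false;  // conservative answer instead of a process abort
  }
  // ... the remaining Variable/Recv/Switch checks would go here ...
  return true;
}

int main() {
  std::unordered_map<std::string, Node> node_map = {{"x", {"x"}}};
  std::cout << SafeToRemoveIdentitySketch(node_map, "does_not_exist") << '\n';
  std::cout << SafeToRemoveIdentitySketch(node_map, "x") << '\n';
  return 0;
}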
CVE-2022-23582 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-4j82-5ccr-4r8v', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-4j82-5ccr-4r8v', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/c2426bba00a01de6913738df8fa78e0215fcce02', 'name': 'https://github.com/tensorflow/tensorflow/commit/c2426bba00a01de6913738df8fa78e0215fcce02', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/framework/attr_value_util.cc#L46-L50', 'name': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/framework/attr_value_util.cc#L46-L50', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-617'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. A malicious user can cause a denial of service by altering a `SavedModel` such that `TensorByteSize` would trigger `CHECK` failures. `TensorShape` constructor throws a `CHECK`-fail if shape is partial or has a number of elements that would overflow the size of an `int`. The `PartialTensorShape` constructor instead does not cause a `CHECK`-abort if the shape is partial, which is exactly what this function needs to be able to return `-1`. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-10T13:19Z | 2022-02-04T23:15Z | Reachable Assertion | The product contains an assert() or similar statement that can be triggered by an attacker, which leads to an application exit or other behavior that is more severe than necessary. |
While assertion is good for catching logic errors and reducing the chances of reaching more serious vulnerability conditions, it can still lead to a denial of service.
For example, if a server handles multiple simultaneous connections, and an assert() occurs in one single connection that causes all other connections to be dropped, this is a reachable assertion that leads to a denial of service.
| https://cwe.mitre.org/data/definitions/617.html | 0 | Mihai Maruseac | 2021-11-11 11:50:53-08:00 | Use `PartialTensorShape` instead of `TensorShape`.
`TensorShape` constructor throws a CHECK-fail if shape is partial/overflows which the other doesn't. We are only determining the number of elements in the shape and partial shape should be used as it returns negative number when needed.
PiperOrigin-RevId: 409205384
Change-Id: Ia56542ff9ec758f2c9ffc7e4dcc9fa7eecd86e7b | c2426bba00a01de6913738df8fa78e0215fcce02 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::TensorByteSize | tensorflow::TensorByteSize( const TensorProto & t) | ['t'] | int64_t TensorByteSize(const TensorProto& t) {
// num_elements returns -1 if shape is not fully defined.
int64_t num_elems = TensorShape(t.tensor_shape()).num_elements();
return num_elems < 0 ? -1 : num_elems * DataTypeSize(t.dtype());
} | 44 | True | 1 |
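The fix swaps `TensorShape` for `PartialTensorShape` because the latter represents "not fully defined" as a -1 element count instead of a `CHECK` failure. A standalone sketch of that partial-shape counting convention and of the byte-size computation built on it; these are plain C++ stand-ins, not the TensorFlow shape classes.

#include <cstdint>
#include <iostream>
#include <limits>
#include <vector>

// Element count with partial-shape semantics: an unknown (-1) dimension or an
// overflowing product yields -1 rather than a fatal assertion.
int64_t PartialNumElements(const std::vector<int64_t>& dims) {
  int64_t count = 1;
  for (int64_t d : dims) {
    if (d < 0) return -1;  // shape not fully defined
    if (d != 0 && count > std::numeric_limits<int64_t>::max() / d) return -1;
    count *= d;
  }
  return count;
}

int64_t TensorByteSizeSketch(const std::vector<int64_t>& dims,
                             int64_t dtype_size) {
  const int64_t n = PartialNumElements(dims);
  if (n < 0 || dtype_size <= 0) return -1;
  if (n != 0 && dtype_size > std::numeric_limits<int64_t>::max() / n) return -1;
  return n * dtype_size;
}

int main() {
  std::cout << TensorByteSizeSketch({2, 3}, 4) << '\n';   // 24
  std::cout << TensorByteSizeSketch({-1, 3}, 4) << '\n';  // -1 (partial shape)
  return 0;
}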
CVE-2022-23584 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e746adbfcfee15e9cfdb391ff746c765b99bdf9b', 'name': 'https://github.com/tensorflow/tensorflow/commit/e746adbfcfee15e9cfdb391ff746c765b99bdf9b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-24x4-6qmh-88qg', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-24x4-6qmh-88qg', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/kernels/image/decode_image_op.cc#L339-L346', 'name': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/kernels/image/decode_image_op.cc#L339-L346', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-416'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. A malicious user can cause a use after free behavior when decoding PNG images. After `png::CommonFreeDecode(&decode)` gets called, the values of `decode.width` and `decode.height` are in an unspecified state. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-10T17:23Z | 2022-02-04T23:15Z | Use After Free | Referencing memory after it has been freed can cause a program to crash, use unexpected values, or execute code. |
The use of previously-freed memory can have any number of adverse consequences, ranging from the corruption of valid data to the execution of arbitrary code, depending on the instantiation and timing of the flaw. The simplest way data corruption may occur involves the system's reuse of the freed memory. Use-after-free errors have two common and sometimes overlapping causes:
- Error conditions and other exceptional circumstances.
- Confusion over which part of the program is responsible for freeing the memory.
In this scenario, the memory in question is allocated to another pointer validly at some point after it has been freed. The original pointer to the freed memory is used again and points to somewhere within the new allocation. As the data is changed, it corrupts the validly used memory; this induces undefined behavior in the process.
If the newly allocated data chances to hold a class, in C++ for example, various function pointers may be scattered within the heap data. If one of these function pointers is overwritten with an address to valid shellcode, execution of arbitrary code can be achieved.
| https://cwe.mitre.org/data/definitions/416.html | 0 | Mihai Maruseac | 2021-11-11 19:12:19-08:00 | Prevent use after free in `DecodePng` kernel.
We are cleaning up the memory in `decode` and then we are using an `OP_REQUIRES` to check an invariant on the `decode` data.
PiperOrigin-RevId: 409299145
Change-Id: I4eb93aaca52483eb202e89b78df07fbb2f6cb254 | e746adbfcfee15e9cfdb391ff746c765b99bdf9b | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::DecodeImageV2Op::DecodePngV2 | tensorflow::DecodeImageV2Op::DecodePngV2( OpKernelContext * context , StringPiece input) | ['context', 'input'] | void DecodePngV2(OpKernelContext* context, StringPiece input) {
int channel_bits = (data_type_ == DataType::DT_UINT8) ? 8 : 16;
png::DecodeContext decode;
OP_REQUIRES(
context, png::CommonInitDecode(input, channels_, channel_bits, &decode),
errors::InvalidArgument("Invalid PNG. Failed to initialize decoder."));
// Verify that width and height are not too large:
// - verify width and height don't overflow int.
// - width can later be multiplied by channels_ and sizeof(uint16), so
// verify single dimension is not too large.
// - verify when width and height are multiplied together, there are a few
// bits to spare as well.
const int width = static_cast<int>(decode.width);
const int height = static_cast<int>(decode.height);
const int64_t total_size =
static_cast<int64_t>(width) * static_cast<int64_t>(height);
if (width != static_cast<int64_t>(decode.width) || width <= 0 ||
width >= (1LL << 27) || height != static_cast<int64_t>(decode.height) ||
height <= 0 || height >= (1LL << 27) || total_size >= (1LL << 29)) {
png::CommonFreeDecode(&decode);
OP_REQUIRES(context, false,
errors::InvalidArgument("PNG size too large for int: ",
decode.width, " by ", decode.height));
}
Tensor* output = nullptr;
Status status;
// By the existing API, we support decoding PNG with `DecodeGif` op.
// We need to make sure to return 4-D shapes when using `DecodeGif`.
if (op_type_ == "DecodeGif") {
status = context->allocate_output(
0, TensorShape({1, height, width, decode.channels}), &output);
} else {
status = context->allocate_output(
0, TensorShape({height, width, decode.channels}), &output);
}
if (op_type_ == "DecodeBmp") {
// TODO(b/171060723): Only DecodeBmp as op_type_ is not acceptable here
// because currently `decode_(jpeg|png|gif)` ops can decode any one of
// jpeg, png or gif but not bmp. Similarly, `decode_bmp` cannot decode
// anything but bmp formats. This behavior needs to be revisited. For more
// details, please refer to the bug.
OP_REQUIRES(context, false,
errors::InvalidArgument(
"Trying to decode PNG format using DecodeBmp op. Use "
"`decode_png` or `decode_image` instead."));
} else if (op_type_ == "DecodeAndCropJpeg") {
OP_REQUIRES(context, false,
errors::InvalidArgument(
"DecodeAndCropJpeg operation can run on JPEG only, but "
"detected PNG."));
}
if (!status.ok()) png::CommonFreeDecode(&decode);
OP_REQUIRES_OK(context, status);
if (data_type_ == DataType::DT_UINT8) {
OP_REQUIRES(
context,
png::CommonFinishDecode(
reinterpret_cast<png_bytep>(output->flat<uint8>().data()),
decode.channels * width * sizeof(uint8), &decode),
errors::InvalidArgument("Invalid PNG data, size ", input.size()));
} else if (data_type_ == DataType::DT_UINT16) {
OP_REQUIRES(
context,
png::CommonFinishDecode(
reinterpret_cast<png_bytep>(output->flat<uint16>().data()),
decode.channels * width * sizeof(uint16), &decode),
errors::InvalidArgument("Invalid PNG data, size ", input.size()));
} else if (data_type_ == DataType::DT_FLOAT) {
// `png::CommonFinishDecode` does not support `float`. First allocate
// uint16 buffer for the image and decode in uint16 (lossless). Wrap the
// buffer in `unique_ptr` so that we don't forget to delete the buffer.
std::unique_ptr<uint16[]> buffer(
new uint16[height * width * decode.channels]);
OP_REQUIRES(
context,
png::CommonFinishDecode(reinterpret_cast<png_bytep>(buffer.get()),
decode.channels * width * sizeof(uint16),
&decode),
errors::InvalidArgument("Invalid PNG data, size ", input.size()));
// Convert uint16 image data to desired data type.
// Use eigen threadpooling to speed up the copy operation.
const auto& device = context->eigen_device<Eigen::ThreadPoolDevice>();
TTypes<uint16, 3>::UnalignedConstTensor buf(buffer.get(), height, width,
decode.channels);
float scale = 1. / std::numeric_limits<uint16>::max();
// Fill output tensor with desired dtype.
output->tensor<float, 3>().device(device) = buf.cast<float>() * scale;
}
} | 644 | True | 1 |
CVE-2022-23586 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/3d89911481ba6ebe8c88c1c0b595412121e6c645', 'name': 'https://github.com/tensorflow/tensorflow/commit/3d89911481ba6ebe8c88c1c0b595412121e6c645', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/framework/function.cc', 'name': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/framework/function.cc', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/dcc21c7bc972b10b6fb95c2fb0f4ab5a59680ec2', 'name': 'https://github.com/tensorflow/tensorflow/commit/dcc21c7bc972b10b6fb95c2fb0f4ab5a59680ec2', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-43jf-985q-588j', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-43jf-985q-588j', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-617'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. A malicious user can cause a denial of service by altering a `SavedModel` such that assertions in `function.cc` would be falsified and crash the Python interpreter. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-10T17:33Z | 2022-02-04T23:15Z | Reachable Assertion | The product contains an assert() or similar statement that can be triggered by an attacker, which leads to an application exit or other behavior that is more severe than necessary. |
While assertion is good for catching logic errors and reducing the chances of reaching more serious vulnerability conditions, it can still lead to a denial of service.
For example, if a server handles multiple simultaneous connections, and an assert() occurs in one single connection that causes all other connections to be dropped, this is a reachable assertion that leads to a denial of service.
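A short C++ sketch of this pattern, using hypothetical names: the first helper asserts on a condition an attacker can falsify (the CWE-617 case), while the second turns the same condition into an ordinary error return. That is the direction the fix for this record takes by replacing a `CHECK`-fail with an error status (see the second entry for this CVE).

#include <cassert>
#include <cstdio>
#include <string>
#include <vector>

// Reachable assertion: an empty, attacker-controlled list aborts the process.
int FirstElementUnsafe(const std::vector<int>& values) {
  assert(!values.empty());  // CWE-617: a malformed request kills the server
  return values[0];
}

// Hardened variant: validate and report instead of asserting.
bool FirstElementSafe(const std::vector<int>& values, int* out,
                      std::string* error) {
  if (values.empty()) {
    *error = "expected a list of at least one element";
    return false;  // the caller decides how to handle the bad request
  }
  *out = values[0];
  return true;
}

int main() {
  int value = 0;
  std::string error;
  if (!FirstElementSafe({}, &value, &error)) {
    std::printf("rejected: %s\n", error.c_str());
  }
  std::printf("first = %d\n", FirstElementUnsafe({1, 2}));  // in-contract use
  return 0;
}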
| https://cwe.mitre.org/data/definitions/617.html | 0 | Mihai Maruseac | 2021-11-12 08:12:05-08:00 | Eliminate `CHECK`-fail from `function.cc`.
PiperOrigin-RevId: 409414744
Change-Id: Ic854e12ab2edb88b165d32e2d632c4ee654d71ad | 3d89911481ba6ebe8c88c1c0b595412121e6c645 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::FunctionInstantiationHelper::BuildInputArgIndex | tensorflow::FunctionInstantiationHelper::BuildInputArgIndex( const OpDef :: ArgDef & arg_def , AttrSlice attr_values , const FunctionDef :: ArgAttrs * arg_attrs , bool ints_on_device , int64_t resource_arg_unique_id) | ['arg_def', 'attr_values', 'arg_attrs', 'ints_on_device', 'resource_arg_unique_id'] | Status BuildInputArgIndex(const OpDef::ArgDef& arg_def, AttrSlice attr_values,
const FunctionDef::ArgAttrs* arg_attrs,
bool ints_on_device,
int64_t resource_arg_unique_id) {
bool is_type_list;
DataTypeVector dtypes;
TF_RETURN_IF_ERROR(
ArgNumType(attr_values, arg_def, &is_type_list, &dtypes));
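    // CVE-2022-23586: in this pre-fix snapshot the size check below is a
    // `CHECK`-fail that aborts the process; the patched version (second entry
    // for this CVE) returns errors::Internal(...) instead.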
CHECK_GE(dtypes.size(), size_t{1});
int arg_index = result_.nodes.size();
TF_RETURN_IF_ERROR(
AddItem(arg_def.name(), {true, arg_index, 0, is_type_list, dtypes}));
// Creates dtypes.size() nodes in the graph.
for (size_t i = 0; i < dtypes.size(); ++i) {
TF_RETURN_IF_ERROR(AddItem(strings::StrCat(arg_def.name(), ":", i),
{true, arg_index, 0, false, {dtypes[i]}}));
DCHECK_EQ(arg_index, result_.nodes.size());
string name = arg_def.name();
if (dtypes.size() > 1) {
strings::StrAppend(&name, "_", i);
}
NodeDef* gnode = AddNode(name);
if (ints_on_device && dtypes[i] == DataType::DT_INT32) {
gnode->set_op(FunctionLibraryDefinition::kDeviceArgOp);
} else {
gnode->set_op(FunctionLibraryDefinition::kArgOp);
}
DataType dtype = arg_def.is_ref() ? MakeRefType(dtypes[i]) : dtypes[i];
AddAttr("T", dtype, gnode);
AddAttr("index", arg_index, gnode);
if (resource_arg_unique_id >= 0) {
AddAttr("_resource_arg_unique_id", resource_arg_unique_id, gnode);
}
if (arg_attrs) {
for (const auto& arg_attr : arg_attrs->attr()) {
AddAttr(arg_attr.first, arg_attr.second, gnode->mutable_attr());
}
}
result_.arg_types.push_back(dtypes[i]);
++arg_index;
}
return Status::OK();
} | 364 | True | 1 |
CVE-2022-23586 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/3d89911481ba6ebe8c88c1c0b595412121e6c645', 'name': 'https://github.com/tensorflow/tensorflow/commit/3d89911481ba6ebe8c88c1c0b595412121e6c645', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/framework/function.cc', 'name': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/framework/function.cc', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/dcc21c7bc972b10b6fb95c2fb0f4ab5a59680ec2', 'name': 'https://github.com/tensorflow/tensorflow/commit/dcc21c7bc972b10b6fb95c2fb0f4ab5a59680ec2', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-43jf-985q-588j', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-43jf-985q-588j', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-617'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. A malicious user can cause a denial of service by altering a `SavedModel` such that assertions in `function.cc` would be falsified and crash the Python interpreter. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-10T17:33Z | 2022-02-04T23:15Z | Reachable Assertion | The product contains an assert() or similar statement that can be triggered by an attacker, which leads to an application exit or other behavior that is more severe than necessary. |
While assertion is good for catching logic errors and reducing the chances of reaching more serious vulnerability conditions, it can still lead to a denial of service.
For example, if a server handles multiple simultaneous connections, and an assert() occurs in one single connection that causes all other connections to be dropped, this is a reachable assertion that leads to a denial of service.
| https://cwe.mitre.org/data/definitions/617.html | 0 | Mihai Maruseac | 2021-11-12 08:19:38-08:00 | Eliminate debug `CHECK`-fail from `function.cc`
PiperOrigin-RevId: 409416119
Change-Id: I8376ee464d434e9b970ff0ad49edfdaa2a273cfe | dcc21c7bc972b10b6fb95c2fb0f4ab5a59680ec2 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::FunctionInstantiationHelper::BuildInputArgIndex | tensorflow::FunctionInstantiationHelper::BuildInputArgIndex( const OpDef :: ArgDef & arg_def , AttrSlice attr_values , const FunctionDef :: ArgAttrs * arg_attrs , bool ints_on_device , int64_t resource_arg_unique_id) | ['arg_def', 'attr_values', 'arg_attrs', 'ints_on_device', 'resource_arg_unique_id'] | Status BuildInputArgIndex(const OpDef::ArgDef& arg_def, AttrSlice attr_values,
const FunctionDef::ArgAttrs* arg_attrs,
bool ints_on_device,
int64_t resource_arg_unique_id) {
bool is_type_list;
DataTypeVector dtypes;
TF_RETURN_IF_ERROR(
ArgNumType(attr_values, arg_def, &is_type_list, &dtypes));
if (dtypes.size() < size_t{1}) {
return errors::Internal("Expected a list of at least one dtype");
}
int arg_index = result_.nodes.size();
TF_RETURN_IF_ERROR(
AddItem(arg_def.name(), {true, arg_index, 0, is_type_list, dtypes}));
// Creates dtypes.size() nodes in the graph.
for (size_t i = 0; i < dtypes.size(); ++i) {
TF_RETURN_IF_ERROR(AddItem(strings::StrCat(arg_def.name(), ":", i),
{true, arg_index, 0, false, {dtypes[i]}}));
DCHECK_EQ(arg_index, result_.nodes.size());
string name = arg_def.name();
if (dtypes.size() > 1) {
strings::StrAppend(&name, "_", i);
}
NodeDef* gnode = AddNode(name);
if (ints_on_device && dtypes[i] == DataType::DT_INT32) {
gnode->set_op(FunctionLibraryDefinition::kDeviceArgOp);
} else {
gnode->set_op(FunctionLibraryDefinition::kArgOp);
}
DataType dtype = arg_def.is_ref() ? MakeRefType(dtypes[i]) : dtypes[i];
AddAttr("T", dtype, gnode);
AddAttr("index", arg_index, gnode);
if (resource_arg_unique_id >= 0) {
AddAttr("_resource_arg_unique_id", resource_arg_unique_id, gnode);
}
if (arg_attrs) {
for (const auto& arg_attr : arg_attrs->attr()) {
AddAttr(arg_attr.first, arg_attr.second, gnode->mutable_attr());
}
}
result_.arg_types.push_back(dtypes[i]);
++arg_index;
}
return Status::OK();
} | 373 | True | 1 |
CVE-2022-23592 | False | False | False | False | AV:N/AC:L/Au:S/C:P/I:N/A:P | NETWORK | LOW | SINGLE | PARTIAL | NONE | PARTIAL | 5.5 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | HIGH | NONE | HIGH | 8.1 | HIGH | 2.8 | 5.2 | False | [{'url': 'https://github.com/tensorflow/tensorflow/blob/274df9b02330b790aa8de1cee164b70f72b9b244/tensorflow/core/graph/graph.cc#L223-L229', 'name': 'https://github.com/tensorflow/tensorflow/blob/274df9b02330b790aa8de1cee164b70f72b9b244/tensorflow/core/graph/graph.cc#L223-L229', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/c99d98cd189839dcf51aee94e7437b54b31f8abd', 'name': 'https://github.com/tensorflow/tensorflow/commit/c99d98cd189839dcf51aee94e7437b54b31f8abd', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-vq36-27g6-p492', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-vq36-27g6-p492', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.8.0', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "Tensorflow is an Open Source Machine Learning Framework. TensorFlow's type inference can cause a heap out of bounds read as the bounds checking is done in a `DCHECK` (which is a no-op during production). An attacker can control the `input_idx` variable such that `ix` would be larger than the number of values in `node_t.args`. The fix will be included in TensorFlow 2.8.0. This is the only affected version."}] | 2022-02-10T02:18Z | 2022-02-04T23:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
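A minimal C++ sketch of this pattern, with hypothetical names: the unchecked read mirrors indexing `node_t.args(ix)` with an attacker-controlled `ix`, and the checked variant validates the index before the access, which is the hardening direction the commit below describes ("Handle invalid inputs instead of crashing").

#include <cstddef>
#include <cstdio>
#include <vector>

// Unchecked indexing: with an untrusted `idx`, reading past the end of `args`
// is the heap out-of-bounds read pattern (CWE-125).
int ReadUnchecked(const std::vector<int>& args, int idx) {
  return args[idx];  // no bounds check
}

// Hardened variant: validate the index before the access.
bool ReadChecked(const std::vector<int>& args, int idx, int* out) {
  if (idx < 0 || static_cast<std::size_t>(idx) >= args.size()) return false;
  *out = args[idx];
  return true;
}

int main() {
  const std::vector<int> args = {1, 2, 3};
  int value = 0;
  if (!ReadChecked(args, 7, &value)) std::printf("index rejected\n");
  std::printf("args[1] = %d\n", ReadUnchecked(args, 1));  // in-bounds use is fine
  return 0;
}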
| https://cwe.mitre.org/data/definitions/125.html | 0 | Dan Moldovan | 2021-11-12 17:42:30-08:00 | Handle invalid inputs instead of crashing.
PiperOrigin-RevId: 409549744
Change-Id: I7f5935b34b53f7e426a5462fcc027bdbf5dcda24 | c99d98cd189839dcf51aee94e7437b54b31f8abd | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::Node::RunForwardTypeInference | tensorflow::Node::RunForwardTypeInference() | [] | void Node::RunForwardTypeInference() {
VLOG(4) << "Forward type inference: " << props_->node_def.DebugString();
if (props_->fwd_type_fn == nullptr) {
return;
}
std::vector<Node*> input_nodes(props_->input_types.size(), nullptr);
std::vector<int> input_idx(props_->input_types.size(), 0);
for (const auto& edge : in_edges_) {
if (edge->IsControlEdge()) {
continue;
}
DCHECK(edge->dst_input() < input_nodes.size()) << DebugString();
int i = edge->dst_input();
input_nodes.at(i) = edge->src();
input_idx.at(i) = edge->src_output();
}
// Note: technically, we could use a very generic type when some of the inputs
// are unknown. But there is an expectation that a node will have complete
// inputs soon, so updating intermediate types is largely unnecessary.
for (const auto* node : input_nodes) {
if (node == nullptr) {
// Incomplete inputs, bail.
ClearTypeInfo();
return;
}
}
static FullTypeDef* no_type = new FullTypeDef();
std::vector<std::reference_wrapper<const FullTypeDef>> input_types;
for (int i = 0; i < input_nodes.size(); i++) {
const auto* node = input_nodes[i];
if (node->def().has_experimental_type()) {
const auto& node_t = node->def().experimental_type();
if (node_t.type_id() != TFT_UNSET) {
int ix = input_idx[i];
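        // CVE-2022-23592: the DCHECK below compiles to a no-op in production
        // builds, so in this pre-fix snapshot an out-of-range `ix` from the
        // graph reaches `node_t.args(ix)` (heap out-of-bounds read).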
DCHECK(ix < node_t.args_size())
<< "input " << i << " should have an output " << ix
<< " but instead only has " << node_t.args_size()
<< " outputs: " << node_t.DebugString();
input_types.emplace_back(node_t.args(ix));
} else {
input_types.emplace_back(*no_type);
}
} else {
// Incomplete inputs, bail.
ClearTypeInfo();
return;
}
}
const auto infer_type = props_->fwd_type_fn(input_types);
const FullTypeDef infer_typedef = infer_type.ValueOrDie();
if (infer_typedef.type_id() != TFT_UNSET) {
MaybeCopyOnWrite();
*(props_->node_def.mutable_experimental_type()) = infer_typedef;
}
} | 406 | True | 1 |
CVE-2022-23587 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/0aaaae6eca5a7175a193696383f582f53adab23f', 'name': 'https://github.com/tensorflow/tensorflow/commit/0aaaae6eca5a7175a193696383f582f53adab23f', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/grappler/costs/op_level_cost_estimator.cc#L2621-L2689', 'name': 'https://github.com/tensorflow/tensorflow/blob/a1320ec1eac186da1d03f033109191f715b2b130/tensorflow/core/grappler/costs/op_level_cost_estimator.cc#L2621-L2689', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-8jj7-5vxc-pg2q', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-8jj7-5vxc-pg2q', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-190'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. Under certain scenarios, Grappler component of TensorFlow is vulnerable to an integer overflow during cost estimation for crop and resize. Since the cropping parameters are user controlled, a malicious person can trigger undefined behavior. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-10T17:38Z | 2022-02-04T23:15Z | Integer Overflow or Wraparound | The software performs a calculation that can produce an integer overflow or wraparound, when the logic assumes that the resulting value will always be larger than the original value. This can introduce other weaknesses when the calculation is used for resource management or execution control. | An integer overflow or wraparound occurs when an integer value is incremented to a value that is too large to store in the associated representation. When this occurs, the value may wrap to become a very small or negative number. While this may be intended behavior in circumstances that rely on wrapping, it can have security consequences if the wrap is unexpected. This is especially the case if the integer overflow can be triggered using user-supplied inputs. This becomes security-critical when the result is used to control looping, make a security decision, or determine the offset or size in behaviors such as memory allocation, copying, concatenation, etc.
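A minimal C++ sketch of guarded size arithmetic, assuming hypothetical names; it rejects user-controlled extents whose product would wrap an `int64_t` instead of letting the overflow feed later computations.

#include <cstdint>
#include <cstdio>
#include <limits>

// Multiplies two user-controlled extents, refusing to let the product wrap.
bool CheckedArea(std::int64_t height, std::int64_t width, std::int64_t* out) {
  if (height < 0 || width < 0) return false;
  if (width != 0 &&
      height > std::numeric_limits<std::int64_t>::max() / width) {
    return false;  // height * width would overflow int64_t (CWE-190)
  }
  *out = height * width;
  return true;
}

int main() {
  std::int64_t area = 0;
  if (!CheckedArea(std::int64_t{1} << 40, std::int64_t{1} << 40, &area)) {
    std::printf("rejected: crop dimensions would overflow\n");
  }
  return 0;
}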
| https://cwe.mitre.org/data/definitions/190.html | 0 | Mihai Maruseac | 2021-11-13 08:19:05-08:00 | Prevent overflow in grappler cost estimation of crop&resize op.
The crop parameters are user controlled, so we should make sure a user can not trigger an overflow maliciously.
PiperOrigin-RevId: 409670234
Change-Id: I7994734a98b037c5642e051240329d16f959aae4 | 0aaaae6eca5a7175a193696383f582f53adab23f | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::grappler::OpLevelCostEstimator::PredictCropAndResize | tensorflow::grappler::OpLevelCostEstimator::PredictCropAndResize( const OpContext & op_context , NodeCosts * node_costs) const | ['op_context', 'node_costs'] | Status OpLevelCostEstimator::PredictCropAndResize(const OpContext& op_context,
NodeCosts* node_costs) const {
bool found_unknown_shapes = false;
const auto method = op_context.op_info.attr().find("method");
bool use_bilinear_interp;
if (method == op_context.op_info.attr().end() ||
method->second.s() == "bilinear") {
use_bilinear_interp = true;
} else if (method->second.s() == "nearest") {
use_bilinear_interp = false;
} else {
LOG(WARNING) << "method attr in CropAndResize invalid; expected bilinear "
"or nearest.";
return PredictCostOfAnUnknownOp(op_context, node_costs);
}
const int64_t num_boxes = op_context.op_info.inputs(1).shape().dim(0).size();
const auto crop_shape = MaybeGetMinimumShape(
op_context.op_info.outputs(0).shape(), 4, &found_unknown_shapes);
const int64_t crop_height = crop_shape.dim(1).size();
const int64_t crop_width = crop_shape.dim(2).size();
const int64_t output_elements = CalculateTensorElementCount(
op_context.op_info.outputs(0), &found_unknown_shapes);
#define EIGEN_COST(X) Eigen::internal::functor_traits<Eigen::internal::X>::Cost
const auto sub_cost = EIGEN_COST(scalar_difference_op<float>);
const auto add_cost = EIGEN_COST(scalar_sum_op<float>);
const auto mul_cost = EIGEN_COST(scalar_product_op<float>);
auto div_cost = EIGEN_COST(scalar_div_cost<float>);
const auto floor_cost = EIGEN_COST(scalar_floor_op<float>);
const auto ceil_cost = EIGEN_COST(scalar_ceil_op<float>);
auto round_cost = EIGEN_COST(scalar_round_op<float>);
const auto cast_to_float_cost = Eigen::internal::functor_traits<
Eigen::internal::scalar_cast_op<int64_t, float>>::Cost;
#undef EIGEN_COST
// Computing ops following
// tensorflow/core/kernels/image/crop_and_resize_op.cc at 08/25/2020. Op
// calculation differs from rough estimate in implementation, as it separates
// out cost per box from cost per pixel and cost per element.
// Ops for variables height_scale and width_scale.
int64_t ops = (sub_cost * 6 + mul_cost * 2 + div_cost * 2) * num_boxes;
// Ops for variable in_y.
ops += (mul_cost * 2 + sub_cost + add_cost) * crop_height * num_boxes;
// Ops for variable in_x (same computation across both branches).
ops += (mul_cost * 2 + sub_cost + add_cost) * crop_height * crop_width *
num_boxes;
// Specify op_cost based on the method.
if (use_bilinear_interp) {
// Ops for variables top_y_index, bottom_y_index, y_lerp.
ops += (floor_cost + ceil_cost + sub_cost) * crop_height * num_boxes;
// Ops for variables left_x, right_x, x_lerp;
ops += (floor_cost + ceil_cost + sub_cost) * crop_height * crop_width *
num_boxes;
// Ops for innermost loop across depth.
ops +=
(cast_to_float_cost * 4 + add_cost * 3 + sub_cost * 3 + mul_cost * 3) *
output_elements;
} else /* method == "nearest" */ {
// Ops for variables closest_x_index and closest_y_index.
ops += round_cost * 2 * crop_height * crop_width * num_boxes;
// Ops for innermost loop across depth.
ops += cast_to_float_cost * output_elements;
}
return PredictDefaultNodeCosts(ops, op_context, &found_unknown_shapes,
node_costs);
} | 463 | True | 1 |
CVE-2022-21732 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e3749a6d5d1e8d11806d4a2e9cc3123d1a90b75e', 'name': 'https://github.com/tensorflow/tensorflow/commit/e3749a6d5d1e8d11806d4a2e9cc3123d1a90b75e', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-c582-c96p-r5cq', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-c582-c96p-r5cq', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/kernels/data/experimental/threadpool_dataset_op.cc#L79-L135', 'name': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/kernels/data/experimental/threadpool_dataset_op.cc#L79-L135', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-770'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The implementation of `ThreadPoolHandle` can be used to trigger a denial of service attack by allocating too much memory. This is because the `num_threads` argument is only checked to not be negative, but there is no upper bound on its value. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T03:08Z | 2022-02-03T12:15Z | Allocation of Resources Without Limits or Throttling | The software allocates a reusable resource or group of resources on behalf of an actor without imposing any restrictions on the size or number of resources that can be allocated, in violation of the intended security policy for that actor. | Code frequently has to work with limited resources, so programmers must be careful to ensure that resources are not consumed too quickly, or too easily. Without use of quotas, resource limits, or other protection mechanisms, it can be easy for an attacker to consume many resources by rapidly making many requests, or causing larger resources to be used than is needed. When too many resources are allocated, or if a single resource is too large, then it can prevent the code from working correctly, possibly leading to a denial of service.
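A minimal C++ sketch of the missing guard, with hypothetical names and an assumed cap of 512 threads chosen only for illustration: both a lower and an upper bound are applied to the user-supplied `num_threads` before anything is allocated, in line with the commit below ("Set limit on number of threads").

#include <cstdint>
#include <cstdio>

// Assumed cap, chosen only for this sketch.
constexpr std::int64_t kMaxThreads = 512;

// Enforces both a lower and an upper bound on the user-supplied value,
// unlike a check that only rejects negatives (CWE-770).
bool ValidateNumThreads(std::int64_t num_threads) {
  return num_threads >= 0 && num_threads <= kMaxThreads;
}

int main() {
  const std::int64_t requested = std::int64_t{1} << 40;  // attacker-sized request
  if (!ValidateNumThreads(requested)) {
    std::printf("rejected: num_threads must be in [0, %lld]\n",
                static_cast<long long>(kMaxThreads));
    return 0;
  }
  // ... only now would the thread pool actually be created ...
  return 0;
}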
| https://cwe.mitre.org/data/definitions/770.html | 0 | Andrew Audibert | 2021-11-18 16:10:34-08:00 | [tf.data] Set limit on number of threads used in threadpool_dataset.
PiperOrigin-RevId: 410922677
Change-Id: Ib25814a99043ab10805b5d2d7088ae0e0b7b04fd | e3749a6d5d1e8d11806d4a2e9cc3123d1a90b75e | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::data::experimental::PrivateThreadPoolDatasetOp::MakeDataset | tensorflow::data::experimental::PrivateThreadPoolDatasetOp::MakeDataset( OpKernelContext * ctx , DatasetBase * input , DatasetBase ** output) | ['ctx', 'input', 'output'] | void PrivateThreadPoolDatasetOp::MakeDataset(OpKernelContext* ctx,
DatasetBase* input,
DatasetBase** output) {
int64_t num_threads = 0;
OP_REQUIRES_OK(
ctx, ParseScalarArgument<int64_t>(ctx, "num_threads", &num_threads));
OP_REQUIRES(ctx, num_threads >= 0,
errors::InvalidArgument("`num_threads` must be >= 0"));
*output = new Dataset(ctx, input, num_threads);
} | 70 | True | 1 |
CVE-2022-21732 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e3749a6d5d1e8d11806d4a2e9cc3123d1a90b75e', 'name': 'https://github.com/tensorflow/tensorflow/commit/e3749a6d5d1e8d11806d4a2e9cc3123d1a90b75e', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-c582-c96p-r5cq', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-c582-c96p-r5cq', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/kernels/data/experimental/threadpool_dataset_op.cc#L79-L135', 'name': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/kernels/data/experimental/threadpool_dataset_op.cc#L79-L135', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-770'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The implementation of `ThreadPoolHandle` can be used to trigger a denial of service attack by allocating too much memory. This is because the `num_threads` argument is only checked to not be negative, but there is no upper bound on its value. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T03:08Z | 2022-02-03T12:15Z | Allocation of Resources Without Limits or Throttling | The software allocates a reusable resource or group of resources on behalf of an actor without imposing any restrictions on the size or number of resources that can be allocated, in violation of the intended security policy for that actor. | Code frequently has to work with limited resources, so programmers must be careful to ensure that resources are not consumed too quickly, or too easily. Without use of quotas, resource limits, or other protection mechanisms, it can be easy for an attacker to consume many resources by rapidly making many requests, or causing larger resources to be used than is needed. When too many resources are allocated, or if a single resource is too large, then it can prevent the code from working correctly, possibly leading to a denial of service.
| https://cwe.mitre.org/data/definitions/770.html | 0 | Andrew Audibert | 2021-11-18 16:10:34-08:00 | [tf.data] Set limit on number of threads used in threadpool_dataset.
PiperOrigin-RevId: 410922677
Change-Id: Ib25814a99043ab10805b5d2d7088ae0e0b7b04fd | e3749a6d5d1e8d11806d4a2e9cc3123d1a90b75e | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::data::experimental::PrivateThreadPoolDatasetOp::MakeDatasetFromOptions | tensorflow::data::experimental::PrivateThreadPoolDatasetOp::MakeDatasetFromOptions( OpKernelContext * ctx , DatasetBase * input , int32_t num_threads , DatasetBase ** output) | ['ctx', 'input', 'num_threads', 'output'] | void PrivateThreadPoolDatasetOp::MakeDatasetFromOptions(OpKernelContext* ctx,
DatasetBase* input,
int32_t num_threads,
DatasetBase** output) {
OP_REQUIRES(ctx, num_threads >= 0,
errors::InvalidArgument("`num_threads` must be >= 0"));
*output = new Dataset(ctx,
DatasetContext(DatasetContext::Params(
{PrivateThreadPoolDatasetOp::kDatasetType,
PrivateThreadPoolDatasetOp::kDatasetOp})),
input, num_threads);
} | 68 | True | 1 |
CVE-2022-21732 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e3749a6d5d1e8d11806d4a2e9cc3123d1a90b75e', 'name': 'https://github.com/tensorflow/tensorflow/commit/e3749a6d5d1e8d11806d4a2e9cc3123d1a90b75e', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-c582-c96p-r5cq', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-c582-c96p-r5cq', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/kernels/data/experimental/threadpool_dataset_op.cc#L79-L135', 'name': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/kernels/data/experimental/threadpool_dataset_op.cc#L79-L135', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-770'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The implementation of `ThreadPoolHandle` can be used to trigger a denial of service attack by allocating too much memory. This is because the `num_threads` argument is only checked to not be negative, but there is no upper bound on its value. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T03:08Z | 2022-02-03T12:15Z | Allocation of Resources Without Limits or Throttling | The software allocates a reusable resource or group of resources on behalf of an actor without imposing any restrictions on the size or number of resources that can be allocated, in violation of the intended security policy for that actor. | Code frequently has to work with limited resources, so programmers must be careful to ensure that resources are not consumed too quickly, or too easily. Without use of quotas, resource limits, or other protection mechanisms, it can be easy for an attacker to consume many resources by rapidly making many requests, or causing larger resources to be used than is needed. When too many resources are allocated, or if a single resource is too large, then it can prevent the code from working correctly, possibly leading to a denial of service.
| https://cwe.mitre.org/data/definitions/770.html | 0 | Andrew Audibert | 2021-11-18 16:10:34-08:00 | [tf.data] Set limit on number of threads used in threadpool_dataset.
PiperOrigin-RevId: 410922677
Change-Id: Ib25814a99043ab10805b5d2d7088ae0e0b7b04fd | e3749a6d5d1e8d11806d4a2e9cc3123d1a90b75e | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::data::experimental::ThreadPoolHandleOp::ThreadPoolHandleOp | tensorflow::data::experimental::ThreadPoolHandleOp::ThreadPoolHandleOp( OpKernelConstruction * ctx) | ['ctx'] | explicit ThreadPoolHandleOp(OpKernelConstruction* ctx) : OpKernel(ctx) {
OP_REQUIRES_OK(ctx, ctx->GetAttr("display_name", &display_name_));
OP_REQUIRES_OK(ctx, ctx->GetAttr("num_threads", &num_threads_));
OP_REQUIRES_OK(ctx, ctx->GetAttr("max_intra_op_parallelism",
&max_intra_op_parallelism_));
OP_REQUIRES(
ctx, num_threads_ > 0,
errors::InvalidArgument("`num_threads` must be greater than zero."));
} | 74 | True | 1 |
CVE-2022-21725 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/blob/ffa202a17ab7a4a10182b746d230ea66f021fe16/tensorflow/core/grappler/costs/op_level_cost_estimator.cc#L189-L198', 'name': 'https://github.com/tensorflow/tensorflow/blob/ffa202a17ab7a4a10182b746d230ea66f021fe16/tensorflow/core/grappler/costs/op_level_cost_estimator.cc#L189-L198', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/3218043d6d3a019756607643cf65574fbfef5d7a', 'name': 'https://github.com/tensorflow/tensorflow/commit/3218043d6d3a019756607643cf65574fbfef5d7a', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-v3f7-j968-4h5f', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-v3f7-j968-4h5f', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-369'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The estimator for the cost of some convolution operations can be made to execute a division by 0. The function fails to check that the stride argument is strictly positive. Hence, the fix is to add a check for the stride argument to ensure it is valid. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T04:56Z | 2022-02-03T13:15Z | Divide By Zero | The product divides a value by zero. | This weakness typically occurs when an unexpected value is provided to the product, or if an error occurs that is not properly detected. It frequently occurs in calculations involving physical dimensions such as size, length, width, and height.
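A minimal C++ sketch with hypothetical names: the divisor (a user-controlled stride) is checked to be strictly positive before the division, which is the check the advisory says was missing from the cost estimator.

#include <cstdint>
#include <cstdio>

// Uses a user-controlled stride as a divisor only after checking it is
// strictly positive (CWE-369).
bool OutputExtent(std::int64_t input_size, std::int64_t stride,
                  std::int64_t* out) {
  if (input_size < 0 || stride <= 0) return false;  // reject zero/negative stride
  *out = (input_size + stride - 1) / stride;        // ceil(input_size / stride)
  return true;
}

int main() {
  std::int64_t extent = 0;
  if (!OutputExtent(224, 0, &extent)) {
    std::printf("rejected: stride must be strictly positive\n");
  }
  return 0;
}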
| https://cwe.mitre.org/data/definitions/369.html | 0 | Isha Arkatkar | 2021-11-23 14:27:24-08:00 | Internal change
PiperOrigin-RevId: 411896058
Change-Id: Ia031058247e3cf382957a6662d3f9e1cbb481ca2 | 3218043d6d3a019756607643cf65574fbfef5d7a | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::grappler::OpLevelCostEstimator::PredictAvgPool | tensorflow::grappler::OpLevelCostEstimator::PredictAvgPool( const OpContext & op_context , NodeCosts * node_costs) const | ['op_context', 'node_costs'] | Status OpLevelCostEstimator::PredictAvgPool(const OpContext& op_context,
NodeCosts* node_costs) const {
bool found_unknown_shapes = false;
const auto& op_info = op_context.op_info;
// x: op_info.inputs(0)
ConvolutionDimensions dims = OpDimensionsFromInputs(
op_info.inputs(0).shape(), op_info, &found_unknown_shapes);
// kx * ky - 1 additions and 1 multiplication per output.
int64_t ops = dims.batch * dims.ox * dims.oy * dims.oz * dims.kx * dims.ky;
node_costs->num_compute_ops = ops;
int64_t input_size;
if (dims.ky >= dims.sy) {
input_size = CalculateTensorSize(op_info.inputs(0), &found_unknown_shapes);
} else { // dims.ky < dims.sy
// vertical stride is larger than vertical kernel; assuming row-major
// format, skip unnecessary rows (or read every kx rows per sy rows, as the
// others are not used for output).
const auto data_size = DataTypeSize(BaseType(op_info.inputs(0).dtype()));
input_size = data_size * dims.batch * dims.ix * dims.ky * dims.oy * dims.iz;
}
node_costs->num_input_bytes_accessed = {input_size};
const int64_t output_size =
CalculateOutputSize(op_info, &found_unknown_shapes);
node_costs->num_output_bytes_accessed = {output_size};
node_costs->max_memory = output_size;
if (found_unknown_shapes) {
node_costs->inaccurate = true;
node_costs->num_nodes_with_unknown_shapes = 1;
}
return Status::OK();
} | 222 | True | 1 |
CVE-2022-21725 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/blob/ffa202a17ab7a4a10182b746d230ea66f021fe16/tensorflow/core/grappler/costs/op_level_cost_estimator.cc#L189-L198', 'name': 'https://github.com/tensorflow/tensorflow/blob/ffa202a17ab7a4a10182b746d230ea66f021fe16/tensorflow/core/grappler/costs/op_level_cost_estimator.cc#L189-L198', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/3218043d6d3a019756607643cf65574fbfef5d7a', 'name': 'https://github.com/tensorflow/tensorflow/commit/3218043d6d3a019756607643cf65574fbfef5d7a', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-v3f7-j968-4h5f', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-v3f7-j968-4h5f', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-369'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The estimator for the cost of some convolution operations can be made to execute a division by 0. The function fails to check that the stride argument is strictly positive. Hence, the fix is to add a check for the stride argument to ensure it is valid. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T04:56Z | 2022-02-03T13:15Z | Divide By Zero | The product divides a value by zero. | This weakness typically occurs when an unexpected value is provided to the product, or if an error occurs that is not properly detected. It frequently occurs in calculations involving physical dimensions such as size, length, width, and height.
| https://cwe.mitre.org/data/definitions/369.html | 0 | Isha Arkatkar | 2021-11-23 14:27:24-08:00 | Internal change
PiperOrigin-RevId: 411896058
Change-Id: Ia031058247e3cf382957a6662d3f9e1cbb481ca2 | 3218043d6d3a019756607643cf65574fbfef5d7a | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::grappler::OpLevelCostEstimator::PredictAvgPoolGrad | tensorflow::grappler::OpLevelCostEstimator::PredictAvgPoolGrad( const OpContext & op_context , NodeCosts * node_costs) const | ['op_context', 'node_costs'] | Status OpLevelCostEstimator::PredictAvgPoolGrad(const OpContext& op_context,
NodeCosts* node_costs) const {
bool found_unknown_shapes = false;
const auto& op_info = op_context.op_info;
// x's shape: op_info.inputs(0)
// y_grad: op_info.inputs(1)
// Extract x_shape from op_info.inputs(0).value() or op_info.outputs(0).
bool shape_found = false;
TensorShapeProto x_shape;
if (op_info.inputs_size() >= 1 && op_info.inputs(0).has_value()) {
const TensorProto& value = op_info.inputs(0).value();
shape_found = GetTensorShapeProtoFromTensorProto(value, &x_shape);
}
if (!shape_found && op_info.outputs_size() > 0) {
x_shape = op_info.outputs(0).shape();
shape_found = true;
}
if (!shape_found) {
// Set the minimum shape that's feasible.
x_shape.Clear();
for (int i = 0; i < 4; ++i) {
x_shape.add_dim()->set_size(1);
}
found_unknown_shapes = true;
}
ConvolutionDimensions dims =
OpDimensionsFromInputs(x_shape, op_info, &found_unknown_shapes);
int64_t ops = 0;
if (dims.kx <= dims.sx && dims.ky <= dims.sy) {
// Non-overlapping window.
ops = dims.batch * dims.iz * (dims.ix * dims.iy + dims.ox * dims.oy);
} else {
// Overlapping window.
ops = dims.batch * dims.iz *
(dims.ix * dims.iy + dims.ox * dims.oy * (dims.kx * dims.ky + 1));
}
auto s = PredictDefaultNodeCosts(ops, op_context, &found_unknown_shapes,
node_costs);
node_costs->max_memory = node_costs->num_total_output_bytes();
return s;
} | 300 | True | 1 |
CVE-2022-21725 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/blob/ffa202a17ab7a4a10182b746d230ea66f021fe16/tensorflow/core/grappler/costs/op_level_cost_estimator.cc#L189-L198', 'name': 'https://github.com/tensorflow/tensorflow/blob/ffa202a17ab7a4a10182b746d230ea66f021fe16/tensorflow/core/grappler/costs/op_level_cost_estimator.cc#L189-L198', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/3218043d6d3a019756607643cf65574fbfef5d7a', 'name': 'https://github.com/tensorflow/tensorflow/commit/3218043d6d3a019756607643cf65574fbfef5d7a', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-v3f7-j968-4h5f', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-v3f7-j968-4h5f', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-369'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The estimator for the cost of some convolution operations can be made to execute a division by 0. The function fails to check that the stride argument is strictly positive. Hence, the fix is to add a check for the stride argument to ensure it is valid. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T04:56Z | 2022-02-03T13:15Z | Divide By Zero | The product divides a value by zero. | This weakness typically occurs when an unexpected value is provided to the product, or if an error occurs that is not properly detected. It frequently occurs in calculations involving physical dimensions such as size, length, width, and height.
| https://cwe.mitre.org/data/definitions/369.html | 0 | Isha Arkatkar | 2021-11-23 14:27:24-08:00 | Internal change
PiperOrigin-RevId: 411896058
Change-Id: Ia031058247e3cf382957a6662d3f9e1cbb481ca2 | 3218043d6d3a019756607643cf65574fbfef5d7a | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::grappler::OpLevelCostEstimator::PredictFusedBatchNorm | tensorflow::grappler::OpLevelCostEstimator::PredictFusedBatchNorm( const OpContext & op_context , NodeCosts * node_costs) const | ['op_context', 'node_costs'] | Status OpLevelCostEstimator::PredictFusedBatchNorm(
const OpContext& op_context, NodeCosts* node_costs) const {
bool found_unknown_shapes = false;
const auto& op_info = op_context.op_info;
// x: op_info.inputs(0)
// scale: op_info.inputs(1)
// offset: op_info.inputs(2)
// mean: op_info.inputs(3) --> only for inference
// variance: op_info.inputs(4) --> only for inference
ConvolutionDimensions dims = OpDimensionsFromInputs(
op_info.inputs(0).shape(), op_info, &found_unknown_shapes);
const bool is_training = IsTraining(op_info);
int64_t ops = 0;
const auto rsqrt_cost = Eigen::internal::functor_traits<
Eigen::internal::scalar_rsqrt_op<float>>::Cost;
if (is_training) {
ops = dims.iz * (dims.batch * dims.ix * dims.iy * 4 + 6 + rsqrt_cost);
} else {
ops = dims.batch * dims.ix * dims.iy * dims.iz * 2;
}
node_costs->num_compute_ops = ops;
const int64_t size_nhwc =
CalculateTensorSize(op_info.inputs(0), &found_unknown_shapes);
const int64_t size_c =
CalculateTensorSize(op_info.inputs(1), &found_unknown_shapes);
if (is_training) {
node_costs->num_input_bytes_accessed = {size_nhwc, size_c, size_c};
node_costs->num_output_bytes_accessed = {size_nhwc, size_c, size_c, size_c,
size_c};
// FusedBatchNorm in training mode internally re-reads the input tensor:
    // one for mean/variance, and the 2nd internal read for the actual scaling.
// Assume small intermediate data such as mean / variance (size_c) can be
// cached on-chip.
node_costs->internal_read_bytes = size_nhwc;
} else {
node_costs->num_input_bytes_accessed = {size_nhwc, size_c, size_c, size_c,
size_c};
node_costs->num_output_bytes_accessed = {size_nhwc};
}
node_costs->max_memory = node_costs->num_total_output_bytes();
if (found_unknown_shapes) {
node_costs->inaccurate = true;
node_costs->num_nodes_with_unknown_shapes = 1;
}
return Status::OK();
} | 285 | True | 1 |
CVE-2022-21725 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/blob/ffa202a17ab7a4a10182b746d230ea66f021fe16/tensorflow/core/grappler/costs/op_level_cost_estimator.cc#L189-L198', 'name': 'https://github.com/tensorflow/tensorflow/blob/ffa202a17ab7a4a10182b746d230ea66f021fe16/tensorflow/core/grappler/costs/op_level_cost_estimator.cc#L189-L198', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/3218043d6d3a019756607643cf65574fbfef5d7a', 'name': 'https://github.com/tensorflow/tensorflow/commit/3218043d6d3a019756607643cf65574fbfef5d7a', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-v3f7-j968-4h5f', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-v3f7-j968-4h5f', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-369'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The estimator for the cost of some convolution operations can be made to execute a division by 0. The function fails to check that the stride argument is strictly positive. Hence, the fix is to add a check for the stride argument to ensure it is valid. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T04:56Z | 2022-02-03T13:15Z | Divide By Zero | The product divides a value by zero. | This weakness typically occurs when an unexpected value is provided to the product, or if an error occurs that is not properly detected. It frequently occurs in calculations involving physical dimensions such as size, length, width, and height.
| https://cwe.mitre.org/data/definitions/369.html | 0 | Isha Arkatkar | 2021-11-23 14:27:24-08:00 | Internal change
PiperOrigin-RevId: 411896058
Change-Id: Ia031058247e3cf382957a6662d3f9e1cbb481ca2 | 3218043d6d3a019756607643cf65574fbfef5d7a | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::grappler::OpLevelCostEstimator::PredictFusedBatchNormGrad | tensorflow::grappler::OpLevelCostEstimator::PredictFusedBatchNormGrad( const OpContext & op_context , NodeCosts * node_costs) const | ['op_context', 'node_costs'] | Status OpLevelCostEstimator::PredictFusedBatchNormGrad(
const OpContext& op_context, NodeCosts* node_costs) const {
bool found_unknown_shapes = false;
const auto& op_info = op_context.op_info;
// y_backprop: op_info.inputs(0)
// x: op_info.inputs(1)
// scale: op_info.inputs(2)
// mean: op_info.inputs(3)
// variance or inverse of variance: op_info.inputs(4)
ConvolutionDimensions dims = OpDimensionsFromInputs(
op_info.inputs(1).shape(), op_info, &found_unknown_shapes);
int64_t ops = 0;
const auto rsqrt_cost = Eigen::internal::functor_traits<
Eigen::internal::scalar_rsqrt_op<float>>::Cost;
ops = dims.iz * (dims.batch * dims.ix * dims.iy * 11 + 5 + rsqrt_cost);
node_costs->num_compute_ops = ops;
const int64_t size_nhwc =
CalculateTensorSize(op_info.inputs(1), &found_unknown_shapes);
const int64_t size_c =
CalculateTensorSize(op_info.inputs(2), &found_unknown_shapes);
// TODO(dyoon): fix missing memory cost for variance input (size_c) and
// yet another read of y_backprop (size_nhwc) internally.
node_costs->num_input_bytes_accessed = {size_nhwc, size_nhwc, size_c, size_c};
node_costs->num_output_bytes_accessed = {size_nhwc, size_c, size_c};
// FusedBatchNormGrad has to read y_backprop internally.
node_costs->internal_read_bytes = size_nhwc;
node_costs->max_memory = node_costs->num_total_output_bytes();
if (found_unknown_shapes) {
node_costs->inaccurate = true;
node_costs->num_nodes_with_unknown_shapes = 1;
}
return Status::OK();
} | 212 | True | 1 |
CVE-2022-21725 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/blob/ffa202a17ab7a4a10182b746d230ea66f021fe16/tensorflow/core/grappler/costs/op_level_cost_estimator.cc#L189-L198', 'name': 'https://github.com/tensorflow/tensorflow/blob/ffa202a17ab7a4a10182b746d230ea66f021fe16/tensorflow/core/grappler/costs/op_level_cost_estimator.cc#L189-L198', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/3218043d6d3a019756607643cf65574fbfef5d7a', 'name': 'https://github.com/tensorflow/tensorflow/commit/3218043d6d3a019756607643cf65574fbfef5d7a', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-v3f7-j968-4h5f', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-v3f7-j968-4h5f', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-369'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The estimator for the cost of some convolution operations can be made to execute a division by 0. The function fails to check that the stride argument is strictly positive. Hence, the fix is to add a check for the stride argument to ensure it is valid. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T04:56Z | 2022-02-03T13:15Z | Divide By Zero | The product divides a value by zero. | This weakness typically occurs when an unexpected value is provided to the product, or if an error occurs that is not properly detected. It frequently occurs in calculations involving physical dimensions such as size, length, width, and height.
| https://cwe.mitre.org/data/definitions/369.html | 0 | Isha Arkatkar | 2021-11-23 14:27:24-08:00 | Internal change
PiperOrigin-RevId: 411896058
Change-Id: Ia031058247e3cf382957a6662d3f9e1cbb481ca2 | 3218043d6d3a019756607643cf65574fbfef5d7a | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::grappler::OpLevelCostEstimator::PredictMaxPool | tensorflow::grappler::OpLevelCostEstimator::PredictMaxPool( const OpContext & op_context , NodeCosts * node_costs) const | ['op_context', 'node_costs'] | Status OpLevelCostEstimator::PredictMaxPool(const OpContext& op_context,
NodeCosts* node_costs) const {
bool found_unknown_shapes = false;
const auto& op_info = op_context.op_info;
// x: op_info.inputs(0)
ConvolutionDimensions dims = OpDimensionsFromInputs(
op_info.inputs(0).shape(), op_info, &found_unknown_shapes);
// kx * ky - 1 comparisons per output (kx * ky > 1)
// or 1 copy per output (kx * ky = 1).
int per_output_ops = dims.kx * dims.ky == 1 ? 1 : dims.kx * dims.ky - 1;
int64_t ops = dims.batch * dims.ox * dims.oy * dims.oz * per_output_ops;
node_costs->num_compute_ops = ops;
int64_t input_size = 0;
if (dims.ky >= dims.sy) {
input_size = CalculateTensorSize(op_info.inputs(0), &found_unknown_shapes);
} else { // dims.ky < dims.sy
// Vertical stride is larger than vertical kernel; assuming row-major
// format, skip unnecessary rows (or read every kx rows per sy rows, as the
// others are not used for output).
const auto data_size = DataTypeSize(BaseType(op_info.inputs(0).dtype()));
input_size = data_size * dims.batch * dims.ix * dims.ky * dims.oy * dims.iz;
}
node_costs->num_input_bytes_accessed = {input_size};
const int64_t output_size =
CalculateOutputSize(op_info, &found_unknown_shapes);
node_costs->num_output_bytes_accessed = {output_size};
node_costs->max_memory = output_size;
if (found_unknown_shapes) {
node_costs->inaccurate = true;
node_costs->num_nodes_with_unknown_shapes = 1;
}
return Status::OK();
} | 243 | True | 1 |
CVE-2022-21725 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/blob/ffa202a17ab7a4a10182b746d230ea66f021fe16/tensorflow/core/grappler/costs/op_level_cost_estimator.cc#L189-L198', 'name': 'https://github.com/tensorflow/tensorflow/blob/ffa202a17ab7a4a10182b746d230ea66f021fe16/tensorflow/core/grappler/costs/op_level_cost_estimator.cc#L189-L198', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/3218043d6d3a019756607643cf65574fbfef5d7a', 'name': 'https://github.com/tensorflow/tensorflow/commit/3218043d6d3a019756607643cf65574fbfef5d7a', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-v3f7-j968-4h5f', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-v3f7-j968-4h5f', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-369'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The estimator for the cost of some convolution operations can be made to execute a division by 0. The function fails to check that the stride argument is strictly positive. Hence, the fix is to add a check for the stride argument to ensure it is valid. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T04:56Z | 2022-02-03T13:15Z | Divide By Zero | The product divides a value by zero. | This weakness typically occurs when an unexpected value is provided to the product, or if an error occurs that is not properly detected. It frequently occurs in calculations involving physical dimensions such as size, length, width, and height.
| https://cwe.mitre.org/data/definitions/369.html | 0 | Isha Arkatkar | 2021-11-23 14:27:24-08:00 | Internal change
PiperOrigin-RevId: 411896058
Change-Id: Ia031058247e3cf382957a6662d3f9e1cbb481ca2 | 3218043d6d3a019756607643cf65574fbfef5d7a | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::grappler::OpLevelCostEstimator::PredictMaxPoolGrad | tensorflow::grappler::OpLevelCostEstimator::PredictMaxPoolGrad( const OpContext & op_context , NodeCosts * node_costs) const | ['op_context', 'node_costs'] | Status OpLevelCostEstimator::PredictMaxPoolGrad(const OpContext& op_context,
NodeCosts* node_costs) const {
bool found_unknown_shapes = false;
const auto& op_info = op_context.op_info;
// x: op_info.inputs(0)
// y: op_info.inputs(1)
// y_grad: op_info.inputs(2)
if (op_info.inputs_size() < 3) {
return errors::InvalidArgument("MaxPoolGrad op has invalid inputs: ",
op_info.ShortDebugString());
}
ConvolutionDimensions dims = OpDimensionsFromInputs(
op_info.inputs(0).shape(), op_info, &found_unknown_shapes);
int64_t ops = 0;
if (dims.kx == 1 && dims.ky == 1) {
// 1x1 window. No need to know which input was max.
ops = dims.batch * dims.ix * dims.iy * dims.iz;
} else if (dims.kx <= dims.sx && dims.ky <= dims.sy) {
// Non-overlapping window: re-run maxpool, then assign zero or y_grad.
ops = dims.batch * dims.iz *
(dims.ox * dims.oy * (dims.kx * dims.ky - 1) + dims.ix * dims.iy);
} else {
// Overlapping window: initialize with zeros, re-run maxpool, then
// accumulate y_grad to proper x_grad locations.
ops = dims.batch * dims.iz *
(dims.ox * dims.oy * (dims.kx * dims.ky - 1) + dims.ix * dims.iy * 2);
}
node_costs->num_compute_ops = ops;
// Just read x and y_grad; no need to read y as we assume MaxPoolGrad re-runs
// MaxPool internally.
const int64_t input0_size =
CalculateTensorSize(op_info.inputs(0), &found_unknown_shapes);
const int64_t input2_size =
CalculateTensorSize(op_info.inputs(2), &found_unknown_shapes);
node_costs->num_input_bytes_accessed = {input0_size, 0, input2_size};
// Write x_grad; size equal to x.
const int64_t output_size =
CalculateTensorSize(op_info.inputs(0), &found_unknown_shapes);
node_costs->num_output_bytes_accessed = {output_size};
node_costs->max_memory = output_size;
if (found_unknown_shapes) {
node_costs->inaccurate = true;
node_costs->num_nodes_with_unknown_shapes = 1;
}
return Status::OK();
} | 331 | True | 1 |
CVE-2022-21725 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/blob/ffa202a17ab7a4a10182b746d230ea66f021fe16/tensorflow/core/grappler/costs/op_level_cost_estimator.cc#L189-L198', 'name': 'https://github.com/tensorflow/tensorflow/blob/ffa202a17ab7a4a10182b746d230ea66f021fe16/tensorflow/core/grappler/costs/op_level_cost_estimator.cc#L189-L198', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/3218043d6d3a019756607643cf65574fbfef5d7a', 'name': 'https://github.com/tensorflow/tensorflow/commit/3218043d6d3a019756607643cf65574fbfef5d7a', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-v3f7-j968-4h5f', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-v3f7-j968-4h5f', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-369'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The estimator for the cost of some convolution operations can be made to execute a division by 0. The function fails to check that the stride argument is strictly positive. Hence, the fix is to add a check for the stride argument to ensure it is valid. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T04:56Z | 2022-02-03T13:15Z | Divide By Zero | The product divides a value by zero. | This weakness typically occurs when an unexpected value is provided to the product, or if an error occurs that is not properly detected. It frequently occurs in calculations involving physical dimensions such as size, length, width, and height.
| https://cwe.mitre.org/data/definitions/369.html | 0 | Isha Arkatkar | 2021-11-23 14:27:24-08:00 | Internal change
PiperOrigin-RevId: 411896058
Change-Id: Ia031058247e3cf382957a6662d3f9e1cbb481ca2 | 3218043d6d3a019756607643cf65574fbfef5d7a | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::grappler::OpLevelCostEstimatorTest::ValidateOpDimensionsFromInputs | tensorflow::grappler::OpLevelCostEstimatorTest::ValidateOpDimensionsFromInputs( const int n , const int h , const int w , const int c , const int kx , const int ky , const int sx , const int sy , const string & data_format , const string & padding) | ['n', 'h', 'w', 'c', 'kx', 'ky', 'sx', 'sy', 'data_format', 'padding'] | void ValidateOpDimensionsFromInputs(const int n, const int h, const int w,
const int c, const int kx, const int ky,
const int sx, const int sy,
const string& data_format,
const string& padding) {
OpContext op_context;
int ho;
int wo;
if (data_format == "NHWC") {
op_context = DescribePoolingOp("MaxPool", {n, h, w, c}, {1, kx, ky, 1},
{1, sx, sy, 1}, "NHWC", padding);
ho = op_context.op_info.outputs(0).shape().dim(1).size();
wo = op_context.op_info.outputs(0).shape().dim(2).size();
} else {
op_context = DescribePoolingOp("MaxPool", {n, c, h, w}, {1, 1, kx, ky},
{1, 1, sx, sy}, "NCHW", padding);
ho = op_context.op_info.outputs(0).shape().dim(2).size();
wo = op_context.op_info.outputs(0).shape().dim(3).size();
}
bool found_unknown_shapes;
auto dims = OpLevelCostEstimator::OpDimensionsFromInputs(
op_context.op_info.inputs(0).shape(), op_context.op_info,
&found_unknown_shapes);
Padding padding_enum;
if (padding == "VALID") {
padding_enum = Padding::VALID;
} else {
padding_enum = Padding::SAME;
}
EXPECT_EQ(n, dims.batch);
EXPECT_EQ(h, dims.ix);
EXPECT_EQ(w, dims.iy);
EXPECT_EQ(c, dims.iz);
EXPECT_EQ(kx, dims.kx);
EXPECT_EQ(ky, dims.ky);
EXPECT_EQ(sx, dims.sx);
EXPECT_EQ(sy, dims.sy);
EXPECT_EQ(ho, dims.ox);
EXPECT_EQ(wo, dims.oy);
EXPECT_EQ(c, dims.oz);
EXPECT_EQ(padding_enum, dims.padding);
} | 409 | True | 1 |
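
The CVE-2022-21725 records above describe a convolution/pooling cost estimator that divides by a stride taken straight from the op's attributes, and state that the fix is a strictly-positive check on that stride. As a standalone illustration (the struct and function names below are invented for this sketch and are not TensorFlow APIs), an output-size computation of the same shape with the missing check added might look like this:

#include <cstdint>
#include <stdexcept>

// Illustrative sketch only: a pooling/convolution output-size computation
// that divides by the stride. Names are invented; this is not the
// TensorFlow op_level_cost_estimator code.
struct PoolDims {
  int64_t input = 0;   // spatial input size
  int64_t kernel = 0;  // kernel size
  int64_t stride = 0;  // stride read from the (untrusted) op attributes
};

int64_t OutputSizeValidPadding(const PoolDims& d) {
  // Without this guard, a zero stride from a crafted graph reaches the
  // division below and crashes the process.
  if (d.stride <= 0) {
    throw std::invalid_argument("stride must be strictly positive");
  }
  if (d.kernel <= 0 || d.input < d.kernel) {
    throw std::invalid_argument("invalid kernel/input sizes");
  }
  // VALID padding: floor((input - kernel) / stride) + 1.
  return (d.input - d.kernel) / d.stride + 1;
}

int main() {
  PoolDims ok{32, 3, 2};
  return OutputSizeValidPadding(ok) == 15 ? 0 : 1;  // (32 - 3) / 2 + 1 == 15
}
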
CVE-2022-21731 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-m4hf-j54p-p353', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-m4hf-j54p-p353', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/08d7b00c0a5a20926363849f611729f53f3ec022', 'name': 'https://github.com/tensorflow/tensorflow/commit/08d7b00c0a5a20926363849f611729f53f3ec022', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/framework/common_shape_fns.cc#L1961-L2059', 'name': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/framework/common_shape_fns.cc#L1961-L2059', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/framework/shape_inference.cc#L345-L358', 'name': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/framework/shape_inference.cc#L345-L358', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-843'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The implementation of shape inference for `ConcatV2` can be used to trigger a denial of service attack via a segfault caused by a type confusion. The `axis` argument is translated into `concat_dim` in the `ConcatShapeHelper` helper function. Then, a value for `min_rank` is computed based on `concat_dim`. This is then used to validate that the `values` tensor has at least the required rank. However, `WithRankAtLeast` receives the lower bound as a 64-bits value and then compares it against the maximum 32-bits integer value that could be represented. Due to the fact that `min_rank` is a 32-bits value and the value of `axis`, the `rank` argument is a negative value, so the error check is bypassed. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T03:06Z | 2022-02-03T12:15Z | Access of Resource Using Incompatible Type ('Type Confusion') | The program allocates or initializes a resource such as a pointer, object, or variable using one type, but it later accesses that resource using a type that is incompatible with the original type. |
When the program accesses the resource using an incompatible type, this could trigger logical errors because the resource does not have expected properties. In languages without memory safety, such as C and C++, type confusion can lead to out-of-bounds memory access.
While this weakness is frequently associated with unions when parsing data with many different embedded object types in C, it can be present in any application that can interpret the same variable or memory location in multiple ways.
This weakness is not unique to C and C++. For example, errors in PHP applications can be triggered by providing array parameters when scalars are expected, or vice versa. Languages such as Perl, which perform automatic conversion of a variable of one type when it is accessed as if it were another type, can also contain these issues.
| https://cwe.mitre.org/data/definitions/843.html | 0 | Isha Arkatkar | 2021-11-24 13:09:02-08:00 | Fix Segfault in Concat V2 shape function.
PiperOrigin-RevId: 412120654
Change-Id: I3ff915faea694f9ad8b00024e9af2de9909011be | 08d7b00c0a5a20926363849f611729f53f3ec022 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::shape_inference::ConcatShapeHelper | tensorflow::shape_inference::ConcatShapeHelper( InferenceContext * c , int start_value_index , int end_value_index , int dim_index) | ['c', 'start_value_index', 'end_value_index', 'dim_index'] | Status ConcatShapeHelper(InferenceContext* c, int start_value_index,
int end_value_index, int dim_index) {
ShapeHandle unused;
TF_RETURN_IF_ERROR(c->WithRank(c->input(dim_index), 0, &unused));
const Tensor* concat_dim_t = c->input_tensor(dim_index);
if (concat_dim_t == nullptr) {
// Return an unknown shape with same rank as inputs, or an unknown rank
// if no input's rank is known.
// Find rank.
int32_t rank = InferenceContext::kUnknownRank;
for (int i = start_value_index; i < end_value_index; ++i) {
if (rank == InferenceContext::kUnknownRank) rank = c->Rank(c->input(i));
if (rank != InferenceContext::kUnknownRank) {
break;
}
}
if (rank == InferenceContext::kUnknownRank) {
c->set_output(0, c->UnknownShape());
return Status::OK();
} else if (rank == 0) {
return errors::InvalidArgument(
"Can't concatenate scalars (use tf.stack instead)");
} else {
for (int i = start_value_index; i < end_value_index; ++i) {
// Check that all the inputs are of the correct rank.
TF_RETURN_IF_ERROR(c->WithRank(c->input(i), rank, &unused));
}
}
// Build result of <rank> different unknown dims.
std::vector<DimensionHandle> dims;
dims.reserve(rank);
for (int i = 0; i < rank; ++i) dims.push_back(c->UnknownDim());
c->set_output(0, c->MakeShape(dims));
return Status::OK();
}
// Merge all the non-concat dims, and sum the concat dim to make an output
// shape.
int64_t concat_dim;
if (concat_dim_t->dtype() == DT_INT32) {
concat_dim = static_cast<int64_t>(concat_dim_t->flat<int32>()(0));
} else {
concat_dim = concat_dim_t->flat<int64_t>()(0);
}
// Minimum required number of dimensions.
const int min_rank = concat_dim < 0 ? -concat_dim : concat_dim + 1;
ShapeHandle output_before;
ShapeHandle output_after;
ShapeHandle input = c->input(end_value_index - 1);
TF_RETURN_IF_ERROR(c->WithRankAtLeast(input, min_rank, &input));
TF_RETURN_IF_ERROR(c->Subshape(input, 0, concat_dim, &output_before));
DimensionHandle output_middle = c->Dim(input, concat_dim);
if (concat_dim == -1) {
output_after = c->Scalar(); // no dimensions.
} else {
TF_RETURN_IF_ERROR(c->Subshape(input, concat_dim + 1, &output_after));
}
for (int i = end_value_index - 2; i >= start_value_index; --i) {
ShapeHandle before;
ShapeHandle after;
input = c->input(i);
TF_RETURN_IF_ERROR(c->WithRankAtLeast(input, min_rank, &input));
TF_RETURN_IF_ERROR(c->Subshape(input, 0, concat_dim, &before));
DimensionHandle middle = c->Dim(input, concat_dim);
if (concat_dim == -1) {
after = c->Scalar();
} else {
TF_RETURN_IF_ERROR(c->Subshape(input, concat_dim + 1, &after));
}
TF_RETURN_IF_ERROR(c->Merge(before, output_before, &output_before));
TF_RETURN_IF_ERROR(c->Add(output_middle, middle, &output_middle));
TF_RETURN_IF_ERROR(c->Merge(after, output_after, &output_after));
}
ShapeHandle s;
TF_RETURN_IF_ERROR(
c->Concatenate(output_before, c->Vector(output_middle), &s));
TF_RETURN_IF_ERROR(c->Concatenate(s, output_after, &s));
c->set_output(0, s);
return Status::OK();
} | 643 | True | 1 |
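
The CVE-2022-21731 record above attributes the crash to a 64-bit lower bound (`min_rank`, derived from an attacker-controlled `concat_dim`) being held in a 32-bit variable, so the rank check can be bypassed with a negative value. The standalone sketch below uses invented helper names (it is not the TensorFlow shape-inference API) to show the narrowing problem and the kind of axis range check that avoids it:

#include <cstdint>
#include <iostream>

// Illustrative sketch only; RankAtLeast is an invented stand-in for a
// shape-inference rank check, not a TensorFlow function.
bool RankAtLeast(int actual_rank, int64_t min_rank) {
  // A negative lower bound is satisfied by every rank, so the check no
  // longer protects anything.
  return actual_rank >= min_rank;
}

int main() {
  const int64_t axis = -(int64_t{1} << 31);  // attacker-controlled concat axis
  // Mirrors `int min_rank = axis < 0 ? -axis : axis + 1;`: -axis is 2^31,
  // and narrowing it into a 32-bit int wraps to a negative value on
  // two's-complement targets.
  const int bad_min_rank = static_cast<int>(axis < 0 ? -axis : axis + 1);
  std::cout << "bad_min_rank = " << bad_min_rank << "\n";
  std::cout << "rank-1 tensor passes the check: "
            << RankAtLeast(1, bad_min_rank) << "\n";

  // Safer: bound-check the axis itself before deriving anything from it.
  const int64_t kMaxRank = 254;  // illustrative limit, not a TensorFlow constant
  std::cout << "axis accepted: " << (axis >= -kMaxRank && axis < kMaxRank) << "\n";
  return 0;
}
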
CVE-2022-21729 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-34f9-hjfq-rr8j', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-34f9-hjfq-rr8j', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/58b34c6c8250983948b5a781b426f6aa01fd47af', 'name': 'https://github.com/tensorflow/tensorflow/commit/58b34c6c8250983948b5a781b426f6aa01fd47af', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/kernels/unravel_index_op.cc#L36-L135', 'name': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/kernels/unravel_index_op.cc#L36-L135', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-190'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The implementation of `UnravelIndex` is vulnerable to a division by zero caused by an integer overflow bug. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T04:57Z | 2022-02-03T13:15Z | Integer Overflow or Wraparound | The software performs a calculation that can produce an integer overflow or wraparound, when the logic assumes that the resulting value will always be larger than the original value. This can introduce other weaknesses when the calculation is used for resource management or execution control. | An integer overflow or wraparound occurs when an integer value is incremented to a value that is too large to store in the associated representation. When this occurs, the value may wrap to become a very small or negative number. While this may be intended behavior in circumstances that rely on wrapping, it can have security consequences if the wrap is unexpected. This is especially the case if the integer overflow can be triggered using user-supplied inputs. This becomes security-critical when the result is used to control looping, make a security decision, or determine the offset or size in behaviors such as memory allocation, copying, concatenation, etc.
| https://cwe.mitre.org/data/definitions/190.html | 0 | Isha Arkatkar | 2021-11-30 14:50:38-08:00 | Fix integer overflow leading to divide by zero error in Unravel index kernel when dimensions product exceeds max int value.
PiperOrigin-RevId: 413250052
Change-Id: I9450b6e8acecd2e881a64b882e2b7c70e8e9289a | 58b34c6c8250983948b5a781b426f6aa01fd47af | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::UnravelIndexOp::UnravelIndexOp | tensorflow::UnravelIndexOp::UnravelIndexOp( OpKernelConstruction * ctx) | ['ctx'] | explicit UnravelIndexOp(OpKernelConstruction* ctx) : OpKernel(ctx) {} | 13 | True | 1 |
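
CVE-2022-21729 above is an integer overflow in the product of the dimensions passed to `UnravelIndex`, which then surfaces as a division by zero. The sketch below is a hypothetical standalone reimplementation of the unravel computation (plain std::vector instead of tensors; not the TensorFlow kernel): the element count is computed with an explicit overflow check before it is ever used as a divisor.

#include <cstdint>
#include <iostream>
#include <limits>
#include <optional>
#include <vector>

// Illustrative sketch only: overflow-checked product of the dimensions,
// computed before any division happens.
std::optional<int64_t> CheckedNumElements(const std::vector<int64_t>& dims) {
  int64_t n = 1;
  for (int64_t d : dims) {
    if (d < 0) return std::nullopt;  // negative dimensions are invalid
    if (d != 0 && n > std::numeric_limits<int64_t>::max() / d) {
      return std::nullopt;  // the product would overflow int64
    }
    n *= d;
  }
  return n;
}

// Unravel a flat index into row-major coordinates.
std::optional<std::vector<int64_t>> UnravelIndex(int64_t flat,
                                                 const std::vector<int64_t>& dims) {
  auto total = CheckedNumElements(dims);
  if (!total || *total == 0 || flat < 0 || flat >= *total) return std::nullopt;
  std::vector<int64_t> coords(dims.size());
  int64_t stride = *total;
  for (size_t i = 0; i < dims.size(); ++i) {
    stride /= dims[i];  // never zero: *total > 0 implies every dims[i] > 0
    coords[i] = flat / stride;
    flat %= stride;
  }
  return coords;
}

int main() {
  auto c = UnravelIndex(17, {2, 3, 4});
  if (!c) return 1;
  for (int64_t v : *c) std::cout << v << " ";  // prints: 1 1 1
  std::cout << "\n";
  return 0;
}
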
CVE-2022-21740 | False | False | False | False | AV:N/AC:L/Au:S/C:P/I:P/A:P | NETWORK | LOW | SINGLE | PARTIAL | PARTIAL | PARTIAL | 6.5 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 8.8 | HIGH | 2.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/2b7100d6cdff36aa21010a82269bc05a6d1cc74a', 'name': 'https://github.com/tensorflow/tensorflow/commit/2b7100d6cdff36aa21010a82269bc05a6d1cc74a', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/kernels/count_ops.cc#L168-L273', 'name': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/kernels/count_ops.cc#L168-L273', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/adbbabdb0d3abb3cdeac69e38a96de1d678b24b3', 'name': 'https://github.com/tensorflow/tensorflow/commit/adbbabdb0d3abb3cdeac69e38a96de1d678b24b3', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-44qp-9wwf-734r', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-44qp-9wwf-734r', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The implementation of `SparseCountSparseOutput` is vulnerable to a heap overflow. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T05:36Z | 2022-02-03T15:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2021-12-07 19:36:18-08:00 | Cleanup and remove duplicate validation in `SparseCount`.
We have validation that is duplicated, checking different conditions in different formats, and failing to capture all cases. This should fix all the previous bugs.
PiperOrigin-RevId: 414886981
Change-Id: Ibf0bba0beb057b76d505324bb9487565daf95f01 | 2b7100d6cdff36aa21010a82269bc05a6d1cc74a | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::SparseCount::Compute | tensorflow::SparseCount::Compute( OpKernelContext * context) | ['context'] | void Compute(OpKernelContext* context) override {
const Tensor& indices = context->input(0);
const Tensor& values = context->input(1);
const Tensor& shape = context->input(2);
const Tensor& weights = context->input(3);
bool use_weights = weights.NumElements() > 0;
OP_REQUIRES(context, TensorShapeUtils::IsMatrix(indices.shape()),
errors::InvalidArgument(
"Input indices must be a 2-dimensional tensor. Got: ",
indices.shape().DebugString()));
if (use_weights) {
OP_REQUIRES(
context, weights.shape() == values.shape(),
errors::InvalidArgument(
"Weights and values must have the same shape. Weight shape: ",
weights.shape().DebugString(),
"; values shape: ", values.shape().DebugString()));
}
OP_REQUIRES(context, shape.NumElements() != 0,
errors::InvalidArgument(
"The shape argument requires at least one element."));
bool is_1d = shape.NumElements() == 1;
auto shape_vector = shape.flat<int64_t>();
int num_batches = is_1d ? 1 : shape_vector(0);
int num_values = values.NumElements();
for (int b = 0; b < shape_vector.size(); b++) {
OP_REQUIRES(context, shape_vector(b) >= 0,
errors::InvalidArgument(
"Elements in dense_shape must be >= 0. Instead got:",
shape.DebugString()));
}
OP_REQUIRES(context, num_values == indices.shape().dim_size(0),
errors::InvalidArgument(
"Number of values must match first dimension of indices.",
"Got ", num_values,
" values, indices shape: ", indices.shape().DebugString()));
const auto indices_values = indices.matrix<int64_t>();
const auto values_values = values.flat<T>();
const auto weight_values = weights.flat<W>();
auto per_batch_counts = BatchedMap<W>(num_batches);
T max_value = 0;
OP_REQUIRES(context, num_values <= indices.shape().dim_size(0),
errors::InvalidArgument(
"The first dimension of indices must be equal to or "
"greather than number of values. ( ",
indices.shape().dim_size(0), " vs. ", num_values, " )"));
OP_REQUIRES(context, indices.shape().dim_size(1) > 0,
errors::InvalidArgument("The second dimension of indices must "
"be greater than 0. Received: ",
indices.shape().dim_size(1)));
for (int idx = 0; idx < num_values; ++idx) {
int batch = is_1d ? 0 : indices_values(idx, 0);
if (batch >= num_batches) {
OP_REQUIRES(context, batch < num_batches,
errors::InvalidArgument(
"Indices value along the first dimension must be ",
"lower than the first index of the shape.", "Got ",
batch, " as batch and ", num_batches,
" as the first dimension of the shape."));
}
const auto& value = values_values(idx);
if (value >= 0 && (maxlength_ <= 0 || value < maxlength_)) {
if (binary_output_) {
per_batch_counts[batch][value] = 1;
} else if (use_weights) {
per_batch_counts[batch][value] += weight_values(idx);
} else {
per_batch_counts[batch][value]++;
}
if (value > max_value) {
max_value = value;
}
}
}
int num_output_values = GetOutputSize(max_value, maxlength_, minlength_);
OP_REQUIRES_OK(context, OutputSparse<W>(per_batch_counts, num_output_values,
is_1d, context));
} | 623 | True | 1 |
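
The commit message above describes consolidating duplicated, partially overlapping validation in `SparseCount`. As a rough illustration of that idea (plain std::vector stands in for tensors, and none of these names exist in TensorFlow), the sketch below runs one up-front consistency check over the three components of a COO sparse tensor before any counting work:

#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// Illustrative sketch only, not the TensorFlow kernel.
struct SparseParts {
  std::vector<std::vector<int64_t>> indices;  // N rows of `rank` coordinates
  std::vector<double> values;                 // N values
  std::vector<int64_t> dense_shape;           // `rank` entries
};

bool ValidateSparseParts(const SparseParts& sp, std::string* err) {
  const size_t rank = sp.dense_shape.size();
  if (rank == 0) { *err = "dense_shape needs at least one element"; return false; }
  for (int64_t d : sp.dense_shape) {
    if (d < 0) { *err = "dense_shape entries must be non-negative"; return false; }
  }
  if (sp.indices.size() != sp.values.size()) {
    *err = "number of index rows must match number of values"; return false;
  }
  for (const auto& row : sp.indices) {
    if (row.size() != rank) { *err = "each index row needs `rank` entries"; return false; }
  }
  return true;
}

int main() {
  SparseParts sp;
  sp.indices = {{0, 1}, {1, 2}};
  sp.values = {1.0};  // mismatched on purpose: two index rows, one value
  sp.dense_shape = {2, 3};
  std::string err;
  if (!ValidateSparseParts(sp, &err)) std::cout << "rejected: " << err << "\n";
  return 0;
}
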
CVE-2022-21740 | False | False | False | False | AV:N/AC:L/Au:S/C:P/I:P/A:P | NETWORK | LOW | SINGLE | PARTIAL | PARTIAL | PARTIAL | 6.5 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 8.8 | HIGH | 2.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/2b7100d6cdff36aa21010a82269bc05a6d1cc74a', 'name': 'https://github.com/tensorflow/tensorflow/commit/2b7100d6cdff36aa21010a82269bc05a6d1cc74a', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/kernels/count_ops.cc#L168-L273', 'name': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/kernels/count_ops.cc#L168-L273', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/adbbabdb0d3abb3cdeac69e38a96de1d678b24b3', 'name': 'https://github.com/tensorflow/tensorflow/commit/adbbabdb0d3abb3cdeac69e38a96de1d678b24b3', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-44qp-9wwf-734r', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-44qp-9wwf-734r', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The implementation of `SparseCountSparseOutput` is vulnerable to a heap overflow. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T05:36Z | 2022-02-03T15:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mihai Maruseac | 2021-12-07 19:44:33-08:00 | Further validate sparse tensor for `SparseCount`: indices must be valid within dense shape.
PiperOrigin-RevId: 414888122
Change-Id: I4552bd74c135ecd4bcb5448acc0a3ce9402d8286 | adbbabdb0d3abb3cdeac69e38a96de1d678b24b3 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::SparseCount::Compute | tensorflow::SparseCount::Compute( OpKernelContext * context) | ['context'] | void Compute(OpKernelContext* context) override {
const Tensor& indices = context->input(0);
const Tensor& values = context->input(1);
const Tensor& shape = context->input(2);
const Tensor& weights = context->input(3);
bool use_weights = weights.NumElements() > 0;
OP_REQUIRES(context, TensorShapeUtils::IsMatrix(indices.shape()),
errors::InvalidArgument(
"Input indices must be a 2-dimensional tensor. Got: ",
indices.shape().DebugString()));
OP_REQUIRES(context, TensorShapeUtils::IsVector(values.shape()),
errors::InvalidArgument("Input values must be a vector. Got: ",
values.shape().DebugString()));
OP_REQUIRES(context, TensorShapeUtils::IsVector(shape.shape()),
errors::InvalidArgument("Input shape must be a vector. Got: ",
shape.shape().DebugString()));
OP_REQUIRES(context,
values.shape().dim_size(0) == indices.shape().dim_size(0),
errors::InvalidArgument(
"Number of values must match first dimension of indices.",
"Got ", values.shape().dim_size(0),
" values, indices shape: ", indices.shape().DebugString()));
OP_REQUIRES(
context, shape.shape().dim_size(0) == indices.shape().dim_size(1),
errors::InvalidArgument(
"Number of dimensions must match second dimension of indices.",
"Got ", shape.shape().dim_size(0),
" dimensions, indices shape: ", indices.shape().DebugString()));
OP_REQUIRES(context, shape.NumElements() > 0,
errors::InvalidArgument(
"The shape argument requires at least one element."));
if (use_weights) {
OP_REQUIRES(
context, weights.shape() == values.shape(),
errors::InvalidArgument(
"Weights and values must have the same shape. Weight shape: ",
weights.shape().DebugString(),
"; values shape: ", values.shape().DebugString()));
}
bool is_1d = shape.NumElements() == 1;
auto shape_vector = shape.flat<int64_t>();
int num_batches = is_1d ? 1 : shape_vector(0);
int num_values = values.NumElements();
const auto indices_values = indices.matrix<int64_t>();
const auto values_values = values.flat<T>();
const auto weight_values = weights.flat<W>();
auto per_batch_counts = BatchedMap<W>(num_batches);
T max_value = 0;
for (int idx = 0; idx < num_values; ++idx) {
int batch = is_1d ? 0 : indices_values(idx, 0);
if (batch >= num_batches) {
OP_REQUIRES(context, batch < num_batches,
errors::InvalidArgument(
"Indices value along the first dimension must be ",
"lower than the first index of the shape.", "Got ",
batch, " as batch and ", num_batches,
" as the first dimension of the shape."));
}
const auto& value = values_values(idx);
if (value >= 0 && (maxlength_ <= 0 || value < maxlength_)) {
if (binary_output_) {
per_batch_counts[batch][value] = 1;
} else if (use_weights) {
per_batch_counts[batch][value] += weight_values(idx);
} else {
per_batch_counts[batch][value]++;
}
if (value > max_value) {
max_value = value;
}
}
}
int num_output_values = GetOutputSize(max_value, maxlength_, minlength_);
OP_REQUIRES_OK(context, OutputSparse<W>(per_batch_counts, num_output_values,
is_1d, context));
} | 641 | True | 1 |
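
The second CVE-2022-21740 commit above adds the requirement that sparse indices lie inside the dense shape. A minimal standalone version of that bounds check (invented names and plain containers, not the TensorFlow kernel) could look like this:

#include <cstdint>
#include <iostream>
#include <vector>

// Illustrative sketch only: every coordinate of every sparse index must be
// inside the dense shape before it is used to address an output buffer.
bool IndicesWithinShape(const std::vector<std::vector<int64_t>>& indices,
                        const std::vector<int64_t>& dense_shape) {
  for (const auto& row : indices) {
    if (row.size() != dense_shape.size()) return false;
    for (size_t d = 0; d < row.size(); ++d) {
      if (row[d] < 0 || row[d] >= dense_shape[d]) return false;  // out of bounds
    }
  }
  return true;
}

int main() {
  // Index (1, 5) is outside a dense shape of (2, 3); the check rejects it
  // instead of letting it index past the end of a counting buffer.
  std::vector<std::vector<int64_t>> indices = {{0, 1}, {1, 5}};
  std::vector<int64_t> dense_shape = {2, 3};
  std::cout << (IndicesWithinShape(indices, dense_shape)
                    ? "all indices in bounds\n"
                    : "out-of-bounds index rejected\n");
  return 0;
}
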
CVE-2022-23568 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/kernels/sparse_tensors_map_ops.cc', 'name': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/kernels/sparse_tensors_map_ops.cc', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-6445-fm66-fvq2', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-6445-fm66-fvq2', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/b51b82fe65ebace4475e3c54eb089c18a4403f1c', 'name': 'https://github.com/tensorflow/tensorflow/commit/b51b82fe65ebace4475e3c54eb089c18a4403f1c', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/a68f68061e263a88321c104a6c911fe5598050a8', 'name': 'https://github.com/tensorflow/tensorflow/commit/a68f68061e263a88321c104a6c911fe5598050a8', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-190'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The implementation of `AddManySparseToTensorsMap` is vulnerable to an integer overflow which results in a `CHECK`-fail when building new `TensorShape` objects (so, an assert failure based denial of service). We are missing some validation on the shapes of the input tensors as well as directly constructing a large `TensorShape` with user-provided dimensions. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T04:55Z | 2022-02-03T12:15Z | Integer Overflow or Wraparound | The software performs a calculation that can produce an integer overflow or wraparound, when the logic assumes that the resulting value will always be larger than the original value. This can introduce other weaknesses when the calculation is used for resource management or execution control. | An integer overflow or wraparound occurs when an integer value is incremented to a value that is too large to store in the associated representation. When this occurs, the value may wrap to become a very small or negative number. While this may be intended behavior in circumstances that rely on wrapping, it can have security consequences if the wrap is unexpected. This is especially the case if the integer overflow can be triggered using user-supplied inputs. 
This becomes security-critical when the result is used to control looping, make a security decision, or determine the offset or size in behaviors such as memory allocation, copying, concatenation, etc.
| https://cwe.mitre.org/data/definitions/190.html | 0 | Mihai Maruseac | 2021-12-09 14:32:48-08:00 | Add missing validation to `AddManySparseToTensorsMap`.
Sparse tensors have a set of requirements for the 3 components and not all of them were checked.
PiperOrigin-RevId: 415358027
Change-Id: I96cbb672999cd1da772c22fabbd15507e32e12dc | b51b82fe65ebace4475e3c54eb089c18a4403f1c | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::AddManySparseToTensorsMapOp::Compute | tensorflow::AddManySparseToTensorsMapOp::Compute( OpKernelContext * context) | ['context'] | void Compute(OpKernelContext* context) override {
const Tensor* input_indices;
const Tensor* input_values;
const Tensor* input_shape;
SparseTensorsMap* map;
OP_REQUIRES_OK(context, context->input("sparse_indices", &input_indices));
OP_REQUIRES_OK(context, context->input("sparse_values", &input_values));
OP_REQUIRES_OK(context, context->input("sparse_shape", &input_shape));
OP_REQUIRES_OK(context, GetMap(context, true /* is_writing */, &map));
OP_REQUIRES(context, TensorShapeUtils::IsMatrix(input_indices->shape()),
errors::InvalidArgument(
"Input indices should be a matrix but received shape ",
input_indices->shape().DebugString()));
OP_REQUIRES(context, TensorShapeUtils::IsVector(input_values->shape()),
errors::InvalidArgument(
"Input values should be a vector but received shape ",
input_values->shape().DebugString()));
OP_REQUIRES(context, TensorShapeUtils::IsVector(input_shape->shape()),
errors::InvalidArgument(
"Input shape should be a vector but received shape ",
input_shape->shape().DebugString()));
int rank = input_shape->NumElements();
OP_REQUIRES(
context, rank > 1,
errors::InvalidArgument(
"Rank of input SparseTensor should be > 1, but saw rank: ", rank));
auto input_shape_vec = input_shape->vec<int64_t>();
int new_num_elements = 1;
bool overflow_ocurred = false;
for (int i = 0; i < input_shape_vec.size(); i++) {
new_num_elements =
MultiplyWithoutOverflow(new_num_elements, input_shape_vec(i));
if (new_num_elements < 0) {
overflow_ocurred = true;
break;
}
}
OP_REQUIRES(
context, !overflow_ocurred,
errors::Internal("Encountered overflow from large input shape."));
TensorShape tensor_input_shape(input_shape_vec);
gtl::InlinedVector<int64_t, 8> std_order(rank);
std::iota(std_order.begin(), std_order.end(), 0);
SparseTensor input_st;
OP_REQUIRES_OK(context, SparseTensor::Create(*input_indices, *input_values,
tensor_input_shape, std_order,
&input_st));
const int64_t N = input_shape_vec(0);
Tensor sparse_handles(DT_INT64, TensorShape({N}));
auto sparse_handles_t = sparse_handles.vec<int64_t>();
OP_REQUIRES_OK(context, input_st.IndicesValid());
// We can generate the output shape proto string now, for all
// minibatch entries.
TensorShape output_shape;
OP_REQUIRES_OK(context, TensorShapeUtils::MakeShape(
input_shape_vec.data() + 1,
input_shape->NumElements() - 1, &output_shape));
// Get groups by minibatch dimension
std::unordered_set<int64_t> visited;
sparse::GroupIterable minibatch = input_st.group({0});
for (const auto& subset : minibatch) {
const int64_t b = subset.group()[0];
visited.insert(b);
OP_REQUIRES(
context, b > -1 && b < N,
errors::InvalidArgument(
"Received unexpected column 0 value in input SparseTensor: ", b,
" < 0 or >= N (= ", N, ")"));
const auto indices = subset.indices();
const auto values = subset.values<T>();
const int64_t num_entries = values.size();
Tensor output_indices = Tensor(DT_INT64, {num_entries, rank - 1});
Tensor output_values = Tensor(DataTypeToEnum<T>::value, {num_entries});
auto output_indices_t = output_indices.matrix<int64_t>();
auto output_values_t = output_values.vec<T>();
for (int i = 0; i < num_entries; ++i) {
for (int d = 1; d < rank; ++d) {
output_indices_t(i, d - 1) = indices(i, d);
}
output_values_t(i) = values(i);
}
SparseTensor st_i;
OP_REQUIRES_OK(context,
SparseTensor::Create(output_indices, output_values,
output_shape, &st_i));
int64_t handle;
OP_REQUIRES_OK(context, map->AddSparseTensor(context, st_i, &handle));
sparse_handles_t(b) = handle;
}
// Fill in any gaps; we must provide an empty ST for batch entries
// the grouper didn't find.
if (visited.size() < N) {
Tensor empty_indices(DT_INT64, {0, rank - 1});
Tensor empty_values(DataTypeToEnum<T>::value, {0});
SparseTensor empty_st;
OP_REQUIRES_OK(context, SparseTensor::Create(empty_indices, empty_values,
output_shape, &empty_st));
for (int64_t b = 0; b < N; ++b) {
// We skipped this batch entry.
if (visited.find(b) == visited.end()) {
int64_t handle;
OP_REQUIRES_OK(context,
map->AddSparseTensor(context, empty_st, &handle));
sparse_handles_t(b) = handle;
}
}
}
context->set_output(0, sparse_handles);
} | 849 | True | 1 |
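
For CVE-2022-23568, the follow-up commit listed further down replaces the faulty overflow check with a builder for `TensorShape`. The sketch below shows the general pattern with an invented `SafeShape` type (it is not `tensorflow::TensorShape`): construction fails cleanly when the element count would overflow, instead of validating the product separately and then building the shape anyway.

#include <cstdint>
#include <limits>
#include <optional>
#include <vector>

// Illustrative sketch only: a status-style shape builder with a built-in
// overflow check. SafeShape is invented for this example.
class SafeShape {
 public:
  static std::optional<SafeShape> Build(const std::vector<int64_t>& dims) {
    int64_t elems = 1;
    for (int64_t d : dims) {
      if (d < 0) return std::nullopt;
      if (d != 0 && elems > std::numeric_limits<int64_t>::max() / d) {
        return std::nullopt;  // element count would overflow int64
      }
      elems *= d;
    }
    return SafeShape(dims, elems);
  }
  int64_t num_elements() const { return num_elements_; }

 private:
  SafeShape(std::vector<int64_t> dims, int64_t elems)
      : dims_(std::move(dims)), num_elements_(elems) {}
  std::vector<int64_t> dims_;
  int64_t num_elements_;
};

int main() {
  // {2^40, 2^40} has far more than 2^63 elements; the builder reports
  // failure instead of letting a CHECK fire later.
  auto bad = SafeShape::Build({int64_t{1} << 40, int64_t{1} << 40});
  auto ok = SafeShape::Build({2, 3, 4});
  return (!bad && ok && ok->num_elements() == 24) ? 0 : 1;
}
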
CVE-2022-21736 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pfjj-m3jj-9jc9', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pfjj-m3jj-9jc9', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/965b97e4a9650495cda5a8c210ef6684b4b9eceb', 'name': 'https://github.com/tensorflow/tensorflow/commit/965b97e4a9650495cda5a8c210ef6684b4b9eceb', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/kernels/data/sparse_tensor_slice_dataset_op.cc#L227-L292', 'name': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/kernels/data/sparse_tensor_slice_dataset_op.cc#L227-L292', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-476'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The implementation of `SparseTensorSliceDataset` has an undefined behavior: under certain condition it can be made to dereference a `nullptr` value. The 3 input arguments to `SparseTensorSliceDataset` represent a sparse tensor. However, there are some preconditions that these arguments must satisfy but these are not validated in the implementation. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T03:19Z | 2022-02-03T12:15Z | NULL Pointer Dereference | A NULL pointer dereference occurs when the application dereferences a pointer that it expects to be valid, but is NULL, typically causing a crash or exit. | NULL pointer dereference issues can occur through a number of flaws, including race conditions, and simple programming omissions.
| https://cwe.mitre.org/data/definitions/476.html | 0 | Mihai Maruseac | 2021-12-09 15:49:11-08:00 | Properly validate sparse tensor in `SparseTensorSliceDataset`
Existing validation was incomplete.
PiperOrigin-RevId: 415375048
Change-Id: I14cd18f29ede73286f3ffac35171bd15828997e9 | 965b97e4a9650495cda5a8c210ef6684b4b9eceb | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::data::SparseTensorSliceDatasetOp::MakeDataset | tensorflow::data::SparseTensorSliceDatasetOp::MakeDataset( OpKernelContext * ctx , DatasetBase ** output) | ['ctx', 'output'] | void MakeDataset(OpKernelContext* ctx, DatasetBase** output) override {
// Create a new SparseTensorSliceDatasetOp::Dataset, insert it in
// the step container, and return it as the output.
const Tensor* indices;
OP_REQUIRES_OK(ctx, ctx->input("indices", &indices));
const Tensor* values;
OP_REQUIRES_OK(ctx, ctx->input("values", &values));
const Tensor* dense_shape;
OP_REQUIRES_OK(ctx, ctx->input("dense_shape", &dense_shape));
OP_REQUIRES(ctx, TensorShapeUtils::IsMatrix(indices->shape()),
errors::InvalidArgument(
"Input indices should be a matrix but received shape ",
indices->shape().DebugString()));
const auto num_indices = indices->NumElements();
const auto num_values = values->NumElements();
if (num_indices == 0 || num_values == 0) {
OP_REQUIRES(ctx, num_indices == num_values,
errors::InvalidArgument(
"If indices or values are empty, the other one must also "
"be. Got indices of shape ",
indices->shape().DebugString(), " and values of shape ",
values->shape().DebugString()));
}
OP_REQUIRES(ctx, TensorShapeUtils::IsVector(values->shape()),
errors::InvalidArgument(
"Input values should be a vector but received shape ",
indices->shape().DebugString()));
OP_REQUIRES(ctx, TensorShapeUtils::IsVector(dense_shape->shape()),
errors::InvalidArgument(
"Input shape should be a vector but received shape ",
dense_shape->shape().DebugString()));
// We currently ensure that `sparse_tensor` is ordered in the
// batch dimension.
// TODO(mrry): Investigate ways to avoid this unconditional check
// if we can be sure that the sparse tensor was produced in an
// appropriate order (e.g. by `tf.parse_example()` or a Dataset
// that batches elements into rows of a SparseTensor).
int64_t previous_batch_index = -1;
for (int64_t i = 0; i < indices->dim_size(0); ++i) {
int64_t next_batch_index = indices->matrix<int64_t>()(i, 0);
OP_REQUIRES(
ctx, next_batch_index >= previous_batch_index,
errors::Unimplemented("The SparseTensor must be ordered in the batch "
"dimension; handling arbitrarily ordered input "
"is not currently supported."));
previous_batch_index = next_batch_index;
}
gtl::InlinedVector<int64_t, 8> std_order(dense_shape->NumElements(), 0);
sparse::SparseTensor tensor;
OP_REQUIRES_OK(
ctx, sparse::SparseTensor::Create(
*indices, *values, TensorShape(dense_shape->vec<int64_t>()),
std_order, &tensor));
*output = new Dataset<T>(ctx, std::move(tensor));
} | 387 | True | 1 |
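
CVE-2022-21736 above is a null-pointer dereference that becomes reachable because the three inputs describing the sparse tensor were not fully validated. The standalone sketch below (hypothetical names, not the dataset kernel) shows the defensive pattern the advisory implies: a factory validates its inputs and returns null on failure, and the caller checks for null before dereferencing anything.

#include <cstdint>
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Illustrative sketch only; Slice and MakeSlice are invented for this
// example and are not TensorFlow types.
struct Slice {
  std::vector<int64_t> dims;
};

std::unique_ptr<Slice> MakeSlice(const std::vector<int64_t>& dense_shape,
                                 std::string* err) {
  if (dense_shape.empty()) {
    *err = "dense_shape must not be empty";
    return nullptr;  // the caller is responsible for handling this
  }
  auto s = std::make_unique<Slice>();
  // Drop the batch dimension; the remaining dims describe one slice.
  s->dims.assign(dense_shape.begin() + 1, dense_shape.end());
  return s;
}

int main() {
  std::string err;
  auto slice = MakeSlice({}, &err);  // invalid input: empty dense shape
  if (slice == nullptr) {            // this check is what prevents the crash
    std::cerr << "rejected: " << err << "\n";
    return 0;
  }
  std::cout << "slice rank: " << slice->dims.size() << "\n";
  return 0;
}
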
CVE-2022-23568 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/kernels/sparse_tensors_map_ops.cc', 'name': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/kernels/sparse_tensors_map_ops.cc', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-6445-fm66-fvq2', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-6445-fm66-fvq2', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/b51b82fe65ebace4475e3c54eb089c18a4403f1c', 'name': 'https://github.com/tensorflow/tensorflow/commit/b51b82fe65ebace4475e3c54eb089c18a4403f1c', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/a68f68061e263a88321c104a6c911fe5598050a8', 'name': 'https://github.com/tensorflow/tensorflow/commit/a68f68061e263a88321c104a6c911fe5598050a8', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-190'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The implementation of `AddManySparseToTensorsMap` is vulnerable to an integer overflow which results in a `CHECK`-fail when building new `TensorShape` objects (so, an assert failure based denial of service). We are missing some validation on the shapes of the input tensors as well as directly constructing a large `TensorShape` with user-provided dimensions. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T04:55Z | 2022-02-03T12:15Z | Integer Overflow or Wraparound | The software performs a calculation that can produce an integer overflow or wraparound, when the logic assumes that the resulting value will always be larger than the original value. This can introduce other weaknesses when the calculation is used for resource management or execution control. | An integer overflow or wraparound occurs when an integer value is incremented to a value that is too large to store in the associated representation. When this occurs, the value may wrap to become a very small or negative number. While this may be intended behavior in circumstances that rely on wrapping, it can have security consequences if the wrap is unexpected. This is especially the case if the integer overflow can be triggered using user-supplied inputs. 
This becomes security-critical when the result is used to control looping, make a security decision, or determine the offset or size in behaviors such as memory allocation, copying, concatenation, etc.
| https://cwe.mitre.org/data/definitions/190.html | 0 | Mihai Maruseac | 2021-12-09 16:17:26-08:00 | Replace faulty overflow check with a builder for `TensorShape`.
Prevents an integer overflow that was not caught before.
PiperOrigin-RevId: 415381595
Change-Id: I76585ddedc912bd9f4a390aeafa8e2ced1a28863 | a68f68061e263a88321c104a6c911fe5598050a8 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::AddManySparseToTensorsMapOp::Compute | tensorflow::AddManySparseToTensorsMapOp::Compute( OpKernelContext * context) | ['context'] | void Compute(OpKernelContext* context) override {
const Tensor* input_indices;
const Tensor* input_values;
const Tensor* input_shape;
SparseTensorsMap* map;
OP_REQUIRES_OK(context, context->input("sparse_indices", &input_indices));
OP_REQUIRES_OK(context, context->input("sparse_values", &input_values));
OP_REQUIRES_OK(context, context->input("sparse_shape", &input_shape));
OP_REQUIRES_OK(context, GetMap(context, true /* is_writing */, &map));
OP_REQUIRES(context, TensorShapeUtils::IsMatrix(input_indices->shape()),
errors::InvalidArgument(
"Input indices should be a matrix but received shape ",
input_indices->shape().DebugString()));
OP_REQUIRES(context, TensorShapeUtils::IsVector(input_values->shape()),
errors::InvalidArgument(
"Input values should be a vector but received shape ",
input_values->shape().DebugString()));
OP_REQUIRES(context, TensorShapeUtils::IsVector(input_shape->shape()),
errors::InvalidArgument(
"Input shape should be a vector but received shape ",
input_shape->shape().DebugString()));
OP_REQUIRES(
context,
input_values->shape().dim_size(0) == input_indices->shape().dim_size(0),
errors::InvalidArgument(
"Number of values must match first dimension of indices. ", "Got ",
input_values->shape().dim_size(0),
" values, indices shape: ", input_indices->shape().DebugString()));
OP_REQUIRES(
context,
input_shape->shape().dim_size(0) == input_indices->shape().dim_size(1),
errors::InvalidArgument(
"Number of dimensions must match second dimension of indices. ",
"Got ", input_shape->shape().dim_size(0),
" dimensions, indices shape: ",
input_indices->shape().DebugString()));
int rank = input_shape->NumElements();
OP_REQUIRES(
context, rank > 1,
errors::InvalidArgument(
"Rank of input SparseTensor should be > 1, but saw rank: ", rank));
auto input_shape_vec = input_shape->vec<int64_t>();
int new_num_elements = 1;
bool overflow_ocurred = false;
for (int i = 0; i < input_shape_vec.size(); i++) {
new_num_elements =
MultiplyWithoutOverflow(new_num_elements, input_shape_vec(i));
if (new_num_elements < 0) {
overflow_ocurred = true;
break;
}
}
OP_REQUIRES(
context, !overflow_ocurred,
errors::Internal("Encountered overflow from large input shape."));
TensorShape tensor_input_shape(input_shape_vec);
gtl::InlinedVector<int64_t, 8> std_order(rank);
std::iota(std_order.begin(), std_order.end(), 0);
SparseTensor input_st;
OP_REQUIRES_OK(context, SparseTensor::Create(*input_indices, *input_values,
tensor_input_shape, std_order,
&input_st));
const int64_t N = input_shape_vec(0);
Tensor sparse_handles(DT_INT64, TensorShape({N}));
auto sparse_handles_t = sparse_handles.vec<int64_t>();
OP_REQUIRES_OK(context, input_st.IndicesValid());
// We can generate the output shape proto string now, for all
// minibatch entries.
TensorShape output_shape;
OP_REQUIRES_OK(context, TensorShapeUtils::MakeShape(
input_shape_vec.data() + 1,
input_shape->NumElements() - 1, &output_shape));
// Get groups by minibatch dimension
std::unordered_set<int64_t> visited;
sparse::GroupIterable minibatch = input_st.group({0});
for (const auto& subset : minibatch) {
const int64_t b = subset.group()[0];
visited.insert(b);
OP_REQUIRES(
context, b > -1 && b < N,
errors::InvalidArgument(
"Received unexpected column 0 value in input SparseTensor: ", b,
" < 0 or >= N (= ", N, ")"));
const auto indices = subset.indices();
const auto values = subset.values<T>();
const int64_t num_entries = values.size();
Tensor output_indices = Tensor(DT_INT64, {num_entries, rank - 1});
Tensor output_values = Tensor(DataTypeToEnum<T>::value, {num_entries});
auto output_indices_t = output_indices.matrix<int64_t>();
auto output_values_t = output_values.vec<T>();
for (int i = 0; i < num_entries; ++i) {
for (int d = 1; d < rank; ++d) {
output_indices_t(i, d - 1) = indices(i, d);
}
output_values_t(i) = values(i);
}
SparseTensor st_i;
OP_REQUIRES_OK(context,
SparseTensor::Create(output_indices, output_values,
output_shape, &st_i));
int64_t handle;
OP_REQUIRES_OK(context, map->AddSparseTensor(context, st_i, &handle));
sparse_handles_t(b) = handle;
}
// Fill in any gaps; we must provide an empty ST for batch entries
// the grouper didn't find.
if (visited.size() < N) {
Tensor empty_indices(DT_INT64, {0, rank - 1});
Tensor empty_values(DataTypeToEnum<T>::value, {0});
SparseTensor empty_st;
OP_REQUIRES_OK(context, SparseTensor::Create(empty_indices, empty_values,
output_shape, &empty_st));
for (int64_t b = 0; b < N; ++b) {
// We skipped this batch entry.
if (visited.find(b) == visited.end()) {
int64_t handle;
OP_REQUIRES_OK(context,
map->AddSparseTensor(context, empty_st, &handle));
sparse_handles_t(b) = handle;
}
}
}
context->set_output(0, sparse_handles);
} | 967 | True | 1 |
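The record above hinges on the product of user-controlled shape dimensions wrapping around before a TensorShape is built; the actual fix swaps the hand-rolled check for a TensorShape builder, but a minimal standalone C++ sketch of an overflow-safe dimension product (the helper name checked_num_elements and the use of std::optional are illustrative only, not TensorFlow API) looks like this:

#include <cstdint>
#include <optional>
#include <vector>

// Returns the element count for dims, or std::nullopt if any dimension is
// negative or the running product would overflow int64_t.
std::optional<int64_t> checked_num_elements(const std::vector<int64_t>& dims) {
  int64_t product = 1;
  for (int64_t d : dims) {
    if (d < 0) return std::nullopt;                              // negative size
    if (d != 0 && product > INT64_MAX / d) return std::nullopt;  // would wrap
    product *= d;
  }
  return product;
}

Rejecting the shape up front, before any TensorShape-style object is constructed, avoids both the wraparound and the later CHECK failure.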
CVE-2022-23567 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-rrx2-r989-2c43', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-rrx2-r989-2c43', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/security/advisory/tfsa-2021-198.md', 'name': 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/security/advisory/tfsa-2021-198.md', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/e952a89b7026b98fe8cbe626514a93ed68b7c510', 'name': 'https://github.com/tensorflow/tensorflow/commit/e952a89b7026b98fe8cbe626514a93ed68b7c510', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1b54cadd19391b60b6fcccd8d076426f7221d5e8', 'name': 'https://github.com/tensorflow/tensorflow/commit/1b54cadd19391b60b6fcccd8d076426f7221d5e8', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/kernels/sparse_dense_binary_op_shared.cc', 'name': 'https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/kernels/sparse_dense_binary_op_shared.cc', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-190'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. The implementations of `Sparse*Cwise*` ops are vulnerable to integer overflows. These can be used to trigger large allocations (so, OOM based denial of service) or `CHECK`-fails when building new `TensorShape` objects (so, assert failures based denial of service). We are missing some validation on the shapes of the input tensors as well as directly constructing a large `TensorShape` with user-provided dimensions. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T03:23Z | 2022-02-03T12:15Z | Integer Overflow or Wraparound | The software performs a calculation that can produce an integer overflow or wraparound, when the logic assumes that the resulting value will always be larger than the original value. This can introduce other weaknesses when the calculation is used for resource management or execution control. | An integer overflow or wraparound occurs when an integer value is incremented to a value that is too large to store in the associated representation. When this occurs, the value may wrap to become a very small or negative number. 
While this may be intended behavior in circumstances that rely on wrapping, it can have security consequences if the wrap is unexpected. This is especially the case if the integer overflow can be triggered using user-supplied inputs. This becomes security-critical when the result is used to control looping, make a security decision, or determine the offset or size in behaviors such as memory allocation, copying, concatenation, etc.
| https://cwe.mitre.org/data/definitions/190.html | 0 | Mihai Maruseac | 2021-12-10 09:46:48-08:00 | Prevent overflow in sparse dense cwise ops.
PiperOrigin-RevId: 415543171
Change-Id: I22dab7c41be2121ab5efe5403ca0e2f9b7cb24b8 | e952a89b7026b98fe8cbe626514a93ed68b7c510 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::SparseDenseBinaryOpShared::Compute | tensorflow::SparseDenseBinaryOpShared::Compute( OpKernelContext * ctx) | ['ctx'] | void Compute(OpKernelContext *ctx) override {
const Tensor *indices_t, *values_t, *shape_t, *dense_t;
OP_REQUIRES_OK(ctx, ctx->input("sp_indices", &indices_t));
OP_REQUIRES_OK(ctx, ctx->input("sp_values", &values_t));
OP_REQUIRES_OK(ctx, ctx->input("sp_shape", &shape_t));
OP_REQUIRES_OK(ctx, ctx->input("dense", &dense_t));
// Validations.
OP_REQUIRES(ctx, TensorShapeUtils::IsMatrix(indices_t->shape()),
errors::InvalidArgument(
"Input sp_indices should be a matrix but received shape: ",
indices_t->shape().DebugString()));
OP_REQUIRES(ctx,
TensorShapeUtils::IsVector(values_t->shape()) &&
TensorShapeUtils::IsVector(shape_t->shape()),
errors::InvalidArgument(
"Inputs sp_values and sp_shape should be vectors "
"but received shapes: ",
values_t->shape().DebugString(), " and ",
shape_t->shape().DebugString()));
OP_REQUIRES(
ctx, TensorShapeUtils::IsVector(shape_t->shape()),
errors::InvalidArgument("Input sp_shape must be a vector. Got: ",
shape_t->shape().DebugString()));
OP_REQUIRES(
ctx, values_t->dim_size(0) == indices_t->dim_size(0),
errors::InvalidArgument(
"The first dimension of values and indices should match. (",
values_t->dim_size(0), " vs. ", indices_t->dim_size(0), ")"));
OP_REQUIRES(
ctx, shape_t->shape().dim_size(0) == indices_t->shape().dim_size(1),
errors::InvalidArgument(
"Number of dimensions must match second dimension of indices. ",
"Got ", shape_t->shape().dim_size(0),
" dimensions, indices shape: ", indices_t->shape().DebugString()));
OP_REQUIRES(ctx, shape_t->NumElements() > 0,
errors::InvalidArgument(
"The shape argument requires at least one element."));
const auto indices_mat = indices_t->matrix<int64_t>();
const auto shape_vec = shape_t->vec<int64_t>();
const auto lhs_dims = BCast::FromShape(TensorShape(shape_vec));
const auto rhs_dims = BCast::FromShape(dense_t->shape());
BCast b(lhs_dims, rhs_dims, false); // false for keeping the same num dims.
    // True iff (size(lhs) >= size(rhs)) and all dims in lhs are greater than or
    // equal to dims in rhs (from right to left).
auto VecGreaterEq = [](ArraySlice<int64_t> lhs, ArraySlice<int64_t> rhs) {
if (lhs.size() < rhs.size()) return false;
for (size_t i = 0; i < rhs.size(); ++i) {
if (lhs[lhs.size() - 1 - i] < rhs[rhs.size() - 1 - i]) return false;
}
return true;
};
OP_REQUIRES(ctx, VecGreaterEq(lhs_dims, rhs_dims) && b.IsValid(),
errors::InvalidArgument(
"SparseDenseBinaryOpShared broadcasts dense to sparse "
"only; got incompatible shapes: [",
absl::StrJoin(lhs_dims, ","), "] vs. [",
absl::StrJoin(rhs_dims, ","), "]"));
Tensor *output_values = nullptr;
Tensor dense_gathered;
const int64_t nnz = indices_t->dim_size(0);
OP_REQUIRES_OK(ctx,
ctx->allocate_output(0, TensorShape({nnz}), &output_values));
OP_REQUIRES_OK(
ctx, ctx->allocate_temp(DataTypeToEnum<T>::value, TensorShape({nnz}),
&dense_gathered));
bool op_is_div = false;
if (absl::StrContains(ctx->op_kernel().type_string_view(), "Div")) {
op_is_div = true;
}
// Pulls relevant entries from the dense side, with reshape and broadcasting
// *of the dense side* taken into account. Use a TensorRef to avoid blowing
// up memory.
//
// We can directly use the sparse indices to look up dense side, because
// "b.y_reshape()" and "b.y_bcast()" are guaranteed to have rank "ndims".
auto dense_gathered_flat = dense_gathered.flat<T>();
const int ndims = lhs_dims.size();
switch (ndims) {
#define CASE(NDIM) \
case NDIM: { \
TensorRef<Eigen::Tensor<const T, NDIM, Eigen::RowMajor>> rhs_ref = \
dense_t->shaped<T, NDIM>(b.y_reshape()) \
.broadcast(BCast::ToIndexArray<NDIM>(b.y_bcast())); \
Eigen::array<Eigen::DenseIndex, NDIM> idx; \
bool indices_valid = true; \
for (int i = 0; i < nnz; ++i) { \
for (int d = 0; d < NDIM; ++d) { \
idx[d] = internal::SubtleMustCopy(indices_mat(i, d)); \
if (!FastBoundsCheck(idx[d], rhs_ref.dimension(d))) { \
indices_valid = false; \
} \
} \
OP_REQUIRES( \
ctx, indices_valid, \
errors::InvalidArgument("Provided indices are out-of-bounds w.r.t. " \
"dense side with broadcasted shape")); \
dense_gathered_flat(i) = rhs_ref.coeff(idx); \
if (op_is_div) { \
OP_REQUIRES(ctx, dense_gathered_flat(i) != 0, \
errors::InvalidArgument( \
"SparseDenseCwiseDiv cannot divide by zero," \
"but input dense tensor contains zero ")); \
} \
} \
break; \
}
CASE(1);
CASE(2);
CASE(3);
CASE(4);
CASE(5);
default:
OP_REQUIRES(
ctx, false,
errors::InvalidArgument("Only tensors with ranks between 1 and 5 "
"are currently supported. Tensor rank: ",
ndims));
#undef CASE
}
output_values->flat<T>().device(ctx->eigen_device<Device>()) =
values_t->flat<T>().binaryExpr(dense_gathered_flat,
typename Functor::func());
} | 747 | True | 1 |
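Separately from the shape overflow, the CASE macro above relies on FastBoundsCheck to reject out-of-range sparse indices before reading from the dense side; a minimal standalone reimplementation of that idiom (a sketch, not TensorFlow's actual header) is a single unsigned comparison:

#include <cstdint>

// Casting to unsigned folds the two tests "index < 0" and "index >= limit"
// into one comparison, assuming limit itself is non-negative.
inline bool fast_bounds_check(int64_t index, int64_t limit) {
  return static_cast<uint64_t>(index) < static_cast<uint64_t>(limit);
}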
CVE-2022-23558 | False | False | False | False | AV:N/AC:L/Au:S/C:P/I:P/A:P | NETWORK | LOW | SINGLE | PARTIAL | PARTIAL | PARTIAL | 6.5 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 8.8 | HIGH | 2.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/a1e1511dde36b3f8aa27a6ec630838e7ea40e091', 'name': 'https://github.com/tensorflow/tensorflow/commit/a1e1511dde36b3f8aa27a6ec630838e7ea40e091', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/ca6f96b62ad84207fbec580404eaa7dd7403a550/tensorflow/lite/c/common.c#L24-L33', 'name': 'https://github.com/tensorflow/tensorflow/blob/ca6f96b62ad84207fbec580404eaa7dd7403a550/tensorflow/lite/c/common.c#L24-L33', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/ca6f96b62ad84207fbec580404eaa7dd7403a550/tensorflow/lite/c/common.c#L53-L60', 'name': 'https://github.com/tensorflow/tensorflow/blob/ca6f96b62ad84207fbec580404eaa7dd7403a550/tensorflow/lite/c/common.c#L53-L60', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-9gwq-6cwj-47h3', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-9gwq-6cwj-47h3', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-190'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. An attacker can craft a TFLite model that would cause an integer overflow in `TfLiteIntArrayCreate`. The `TfLiteIntArrayGetSizeInBytes` returns an `int` instead of a `size_t. An attacker can control model inputs such that `computed_size` overflows the size of `int` datatype. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T18:50Z | 2022-02-04T23:15Z | Integer Overflow or Wraparound | The software performs a calculation that can produce an integer overflow or wraparound, when the logic assumes that the resulting value will always be larger than the original value. This can introduce other weaknesses when the calculation is used for resource management or execution control. | An integer overflow or wraparound occurs when an integer value is incremented to a value that is too large to store in the associated representation. When this occurs, the value may wrap to become a very small or negative number. While this may be intended behavior in circumstances that rely on wrapping, it can have security consequences if the wrap is unexpected. This is especially the case if the integer overflow can be triggered using user-supplied inputs. 
This becomes security-critical when the result is used to control looping, make a security decision, or determine the offset or size in behaviors such as memory allocation, copying, concatenation, etc.
| https://cwe.mitre.org/data/definitions/190.html | 0 | Karim Nosir | 2021-12-14 18:09:55-08:00 | [lite] Update TfLiteIntArrayCreate to return size_t
PiperOrigin-RevId: 416439896
Change-Id: I847f69b68d1ddaff4b1e925a09b8b69c1756653b | a1e1511dde36b3f8aa27a6ec630838e7ea40e091 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | TfLiteIntArrayCreate | TfLiteIntArrayCreate( int size) | ['size'] | TfLiteIntArray* TfLiteIntArrayCreate(int size) {
int alloc_size = TfLiteIntArrayGetSizeInBytes(size);
if (alloc_size <= 0) return NULL;
TfLiteIntArray* ret = (TfLiteIntArray*)malloc(alloc_size);
if (!ret) return ret;
ret->size = size;
return ret;
} | 54 | True | 1 |
CVE-2022-23558 | False | False | False | False | AV:N/AC:L/Au:S/C:P/I:P/A:P | NETWORK | LOW | SINGLE | PARTIAL | PARTIAL | PARTIAL | 6.5 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 8.8 | HIGH | 2.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/a1e1511dde36b3f8aa27a6ec630838e7ea40e091', 'name': 'https://github.com/tensorflow/tensorflow/commit/a1e1511dde36b3f8aa27a6ec630838e7ea40e091', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/ca6f96b62ad84207fbec580404eaa7dd7403a550/tensorflow/lite/c/common.c#L24-L33', 'name': 'https://github.com/tensorflow/tensorflow/blob/ca6f96b62ad84207fbec580404eaa7dd7403a550/tensorflow/lite/c/common.c#L24-L33', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/ca6f96b62ad84207fbec580404eaa7dd7403a550/tensorflow/lite/c/common.c#L53-L60', 'name': 'https://github.com/tensorflow/tensorflow/blob/ca6f96b62ad84207fbec580404eaa7dd7403a550/tensorflow/lite/c/common.c#L53-L60', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-9gwq-6cwj-47h3', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-9gwq-6cwj-47h3', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-190'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. An attacker can craft a TFLite model that would cause an integer overflow in `TfLiteIntArrayCreate`. The `TfLiteIntArrayGetSizeInBytes` returns an `int` instead of a `size_t. An attacker can control model inputs such that `computed_size` overflows the size of `int` datatype. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T18:50Z | 2022-02-04T23:15Z | Integer Overflow or Wraparound | The software performs a calculation that can produce an integer overflow or wraparound, when the logic assumes that the resulting value will always be larger than the original value. This can introduce other weaknesses when the calculation is used for resource management or execution control. | An integer overflow or wraparound occurs when an integer value is incremented to a value that is too large to store in the associated representation. When this occurs, the value may wrap to become a very small or negative number. While this may be intended behavior in circumstances that rely on wrapping, it can have security consequences if the wrap is unexpected. This is especially the case if the integer overflow can be triggered using user-supplied inputs. 
This becomes security-critical when the result is used to control looping, make a security decision, or determine the offset or size in behaviors such as memory allocation, copying, concatenation, etc.
| https://cwe.mitre.org/data/definitions/190.html | 0 | Karim Nosir | 2021-12-14 18:09:55-08:00 | [lite] Update TfLiteIntArrayCreate to return size_t
PiperOrigin-RevId: 416439896
Change-Id: I847f69b68d1ddaff4b1e925a09b8b69c1756653b | a1e1511dde36b3f8aa27a6ec630838e7ea40e091 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | TfLiteIntArrayGetSizeInBytes | TfLiteIntArrayGetSizeInBytes( int size) | ['size'] | int TfLiteIntArrayGetSizeInBytes(int size) {
static TfLiteIntArray dummy;
int computed_size = sizeof(dummy) + sizeof(dummy.data[0]) * size;
#if defined(_MSC_VER)
// Context for why this is needed is in http://b/189926408#comment21
computed_size -= sizeof(dummy.data[0]);
#endif
return computed_size;
} | 46 | True | 1 |
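Both records for this CVE come down to computing an allocation size in int. A hedged sketch of the safer pattern, using size_t arithmetic plus an explicit wrap check (the function name alloc_int_array and the plain int header are illustrative stand-ins, not the actual TFLite patch):

#include <cstddef>
#include <cstdint>
#include <cstdlib>

// Compute header + size * sizeof(int) in size_t and refuse to allocate if the
// request is negative or the multiplication/addition would wrap.
void* alloc_int_array(int size) {
  if (size < 0) return nullptr;
  const size_t count = static_cast<size_t>(size);
  const size_t header = sizeof(int);  // stand-in for the TfLiteIntArray header
  if (count > (SIZE_MAX - header) / sizeof(int)) return nullptr;  // would wrap
  return std::malloc(header + count * sizeof(int));
}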
CVE-2022-23559 | False | False | False | False | AV:N/AC:L/Au:S/C:P/I:P/A:P | NETWORK | LOW | SINGLE | PARTIAL | PARTIAL | PARTIAL | 6.5 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 8.8 | HIGH | 2.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/blob/ca6f96b62ad84207fbec580404eaa7dd7403a550/tensorflow/lite/kernels/embedding_lookup_sparse.cc#L179-L189', 'name': 'https://github.com/tensorflow/tensorflow/blob/ca6f96b62ad84207fbec580404eaa7dd7403a550/tensorflow/lite/kernels/embedding_lookup_sparse.cc#L179-L189', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-98p5-x8x4-c9m5', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-98p5-x8x4-c9m5', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/a4e401da71458d253b05e41f28637b65baf64be4', 'name': 'https://github.com/tensorflow/tensorflow/commit/a4e401da71458d253b05e41f28637b65baf64be4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1de49725a5fc4e48f1a3b902ec3599ee99283043', 'name': 'https://github.com/tensorflow/tensorflow/commit/1de49725a5fc4e48f1a3b902ec3599ee99283043', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/f19be71717c497723ba0cea0379e84f061a75e01', 'name': 'https://github.com/tensorflow/tensorflow/commit/f19be71717c497723ba0cea0379e84f061a75e01', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-190'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. An attacker can craft a TFLite model that would cause an integer overflow in embedding lookup operations. Both `embedding_size` and `lookup_size` are products of values provided by the user. Hence, a malicious user could trigger overflows in the multiplication. In certain scenarios, this can then result in heap OOB read/write. Users are advised to upgrade to a patched version.'}] | 2022-02-09T18:53Z | 2022-02-04T23:15Z | Integer Overflow or Wraparound | The software performs a calculation that can produce an integer overflow or wraparound, when the logic assumes that the resulting value will always be larger than the original value. This can introduce other weaknesses when the calculation is used for resource management or execution control. | An integer overflow or wraparound occurs when an integer value is incremented to a value that is too large to store in the associated representation. When this occurs, the value may wrap to become a very small or negative number. While this may be intended behavior in circumstances that rely on wrapping, it can have security consequences if the wrap is unexpected. This is especially the case if the integer overflow can be triggered using user-supplied inputs. 
This becomes security-critical when the result is used to control looping, make a security decision, or determine the offset or size in behaviors such as memory allocation, copying, concatenation, etc.
| https://cwe.mitre.org/data/definitions/190.html | 0 | Karim Nosir | 2021-12-16 14:32:07-08:00 | [lite] Move MultiplyAndCheckOverflow to util to be able to share it.
PiperOrigin-RevId: 416897229
Change-Id: I5feb44881bdcbb6ed911da4f17c55bb978754059 | f19be71717c497723ba0cea0379e84f061a75e01 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::MultiplyAndCheckOverflow | tflite::MultiplyAndCheckOverflow( size_t a , size_t b , size_t * product) | ['a', 'b', 'product'] | TfLiteStatus MultiplyAndCheckOverflow(size_t a, size_t b, size_t* product) {
// Multiplying a * b where a and b are size_t cannot result in overflow in a
// size_t accumulator if both numbers have no non-zero bits in their upper
// half.
constexpr size_t size_t_bits = 8 * sizeof(size_t);
constexpr size_t overflow_upper_half_bit_position = size_t_bits / 2;
*product = a * b;
  // If neither integer has non-zero bits past 32 bits, the product can't overflow.
  // Otherwise check using slow division.
if (TFLITE_EXPECT_FALSE((a | b) >> overflow_upper_half_bit_position != 0)) {
if (a != 0 && *product / a != b) return kTfLiteError;
}
return kTfLiteOk;
} | 77 | True | 1 |
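A small usage sketch of the division-check idiom shown above, written standalone (checked_mul and the example sizes are illustrative; the real helper returns a TfLiteStatus):

#include <cstddef>
#include <cstdint>
#include <cstdio>

// Multiply two size_t values; returns false if the product wrapped around.
bool checked_mul(size_t a, size_t b, size_t* product) {
  *product = a * b;
  return a == 0 || *product / a == b;  // division recovers b only if no wrap
}

int main() {
  size_t required_bytes = 0;
  // e.g. lookup_size * embedding_size as in the embedding kernels below
  if (!checked_mul(SIZE_MAX / 2, 4, &required_bytes)) {
    std::puts("overflow: refuse to allocate");
    return 1;
  }
  std::printf("need %zu bytes\n", required_bytes);
  return 0;
}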
CVE-2022-23560 | False | False | False | False | AV:N/AC:L/Au:S/C:P/I:P/A:P | NETWORK | LOW | SINGLE | PARTIAL | PARTIAL | PARTIAL | 6.5 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 8.8 | HIGH | 2.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/6364463d6f5b6254cac3d6aedf999b6a96225038', 'name': 'https://github.com/tensorflow/tensorflow/commit/6364463d6f5b6254cac3d6aedf999b6a96225038', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/ca6f96b62ad84207fbec580404eaa7dd7403a550/tensorflow/lite/kernels/internal/utils/sparsity_format_converter.cc#L252-L293', 'name': 'https://github.com/tensorflow/tensorflow/blob/ca6f96b62ad84207fbec580404eaa7dd7403a550/tensorflow/lite/kernels/internal/utils/sparsity_format_converter.cc#L252-L293', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-4hvf-hxvg-f67v', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-4hvf-hxvg-f67v', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. An attacker can craft a TFLite model that would allow limited reads and writes outside of arrays in TFLite. This exploits missing validation in the conversion from sparse tensors to dense tensors. The fix is included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range. Users are advised to upgrade as soon as possible.'}] | 2022-02-09T18:55Z | 2022-02-04T23:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Karim Nosir | 2021-12-16 15:37:14-08:00 | [lite] Add some safety checks to avoid out of bound access for sparsity format
PiperOrigin-RevId: 416910386
Change-Id: Ic0b4dc048dc4b5a6309c572b8c4c9f776e4db60a | 6364463d6f5b6254cac3d6aedf999b6a96225038 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::internal::sparsity::FormatConverter<T>::InitSparseToDenseConverter | tflite::internal::sparsity::FormatConverter<T>::InitSparseToDenseConverter( std :: vector<int> shape , std :: vector<int> traversal_order , std :: vector<TfLiteDimensionType> format , std :: vector<int> dense_size , std :: vector<std::vector<int>> segments , std :: vector<std::vector<int>> indices , std :: vector<int> block_map) | ['shape', 'traversal_order', 'format', 'dense_size', 'segments', 'indices', 'block_map'] | void FormatConverter<T>::InitSparseToDenseConverter(
std::vector<int> shape, std::vector<int> traversal_order,
std::vector<TfLiteDimensionType> format, std::vector<int> dense_size,
std::vector<std::vector<int>> segments,
std::vector<std::vector<int>> indices, std::vector<int> block_map) {
dense_shape_ = std::move(shape);
traversal_order_ = std::move(traversal_order);
block_map_ = std::move(block_map);
format_ = std::move(format);
dense_size_ = 1;
for (int i = 0; i < dense_shape_.size(); i++) {
dense_size_ *= dense_shape_[i];
}
dim_metadata_.resize(2 * format_.size());
for (int i = 0; i < format_.size(); i++) {
if (format_[i] == kTfLiteDimDense) {
dim_metadata_[2 * i] = {dense_size[i]};
} else {
dim_metadata_[2 * i] = std::move(segments[i]);
dim_metadata_[2 * i + 1] = std::move(indices[i]);
}
}
int original_rank = dense_shape_.size();
int block_dim = 0;
blocked_shape_.resize(original_rank);
block_size_.resize(block_map_.size());
for (int i = 0; i < original_rank; i++) {
if (block_dim < block_map_.size() && block_map_[block_dim] == i) {
int orig_dim = traversal_order_[original_rank + block_dim];
block_size_[block_dim] = dense_size[orig_dim];
blocked_shape_[i] = dense_shape_[i] / dense_size[orig_dim];
block_dim++;
} else {
blocked_shape_[i] = dense_shape_[i];
}
}
} | 358 | True | 1 |
CVE-2022-23560 | False | False | False | False | AV:N/AC:L/Au:S/C:P/I:P/A:P | NETWORK | LOW | SINGLE | PARTIAL | PARTIAL | PARTIAL | 6.5 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 8.8 | HIGH | 2.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/6364463d6f5b6254cac3d6aedf999b6a96225038', 'name': 'https://github.com/tensorflow/tensorflow/commit/6364463d6f5b6254cac3d6aedf999b6a96225038', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/ca6f96b62ad84207fbec580404eaa7dd7403a550/tensorflow/lite/kernels/internal/utils/sparsity_format_converter.cc#L252-L293', 'name': 'https://github.com/tensorflow/tensorflow/blob/ca6f96b62ad84207fbec580404eaa7dd7403a550/tensorflow/lite/kernels/internal/utils/sparsity_format_converter.cc#L252-L293', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-4hvf-hxvg-f67v', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-4hvf-hxvg-f67v', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. An attacker can craft a TFLite model that would allow limited reads and writes outside of arrays in TFLite. This exploits missing validation in the conversion from sparse tensors to dense tensors. The fix is included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range. Users are advised to upgrade as soon as possible.'}] | 2022-02-09T18:55Z | 2022-02-04T23:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Karim Nosir | 2021-12-16 15:37:14-08:00 | [lite] Add some safety checks to avoid out of bound access for sparsity format
PiperOrigin-RevId: 416910386
Change-Id: Ic0b4dc048dc4b5a6309c572b8c4c9f776e4db60a | 6364463d6f5b6254cac3d6aedf999b6a96225038 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::internal::sparsity::FormatConverter<T>::InitSparseToDenseConverter | tflite::internal::sparsity::FormatConverter<T>::InitSparseToDenseConverter( std :: vector<int> shape , std :: vector<int> traversal_order , std :: vector<TfLiteDimensionType> format , std :: vector<int> dense_size , std :: vector<std::vector<int>> segments , std :: vector<std::vector<int>> indices , std :: vector<int> block_map) | ['shape', 'traversal_order', 'format', 'dense_size', 'segments', 'indices', 'block_map'] | void FormatConverter<T>::InitSparseToDenseConverter(
std::vector<int> shape, std::vector<int> traversal_order,
std::vector<TfLiteDimensionType> format, std::vector<int> dense_size,
std::vector<std::vector<int>> segments,
std::vector<std::vector<int>> indices, std::vector<int> block_map) {
dense_shape_ = std::move(shape);
traversal_order_ = std::move(traversal_order);
block_map_ = std::move(block_map);
format_ = std::move(format);
dense_size_ = 1;
for (int i = 0; i < dense_shape_.size(); i++) {
dense_size_ *= dense_shape_[i];
}
dim_metadata_.resize(2 * format_.size());
for (int i = 0; i < format_.size(); i++) {
if (format_[i] == kTfLiteDimDense) {
dim_metadata_[2 * i] = {dense_size[i]};
} else {
dim_metadata_[2 * i] = std::move(segments[i]);
dim_metadata_[2 * i + 1] = std::move(indices[i]);
}
}
int original_rank = dense_shape_.size();
int block_dim = 0;
blocked_shape_.resize(original_rank);
block_size_.resize(block_map_.size());
for (int i = 0; i < original_rank; i++) {
if (block_dim < block_map_.size() && block_map_[block_dim] == i) {
int orig_dim = traversal_order_[original_rank + block_dim];
block_size_[block_dim] = dense_size[orig_dim];
blocked_shape_[i] = dense_shape_[i] / dense_size[orig_dim];
block_dim++;
} else {
blocked_shape_[i] = dense_shape_[i];
}
}
} | 358 | True | 1 |
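The converter above indexes dense_size, block_size_ and traversal_order_ with values taken straight from the model. An illustrative pre-check (an assumed helper, not the actual TFLite patch) would confirm that every such index names an existing dimension before the constructor runs:

#include <cstddef>
#include <vector>

// Returns true only if traversal_order covers the blocked rank and every
// entry of traversal_order/block_map stays inside the dimensions it indexes.
bool sparsity_metadata_in_range(const std::vector<int>& traversal_order,
                                const std::vector<int>& block_map,
                                std::size_t original_rank) {
  const std::size_t blocked_rank = original_rank + block_map.size();
  if (traversal_order.size() < blocked_rank) return false;
  for (int d : traversal_order)
    if (d < 0 || static_cast<std::size_t>(d) >= blocked_rank) return false;
  for (int d : block_map)
    if (d < 0 || static_cast<std::size_t>(d) >= original_rank) return false;
  return true;
}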
CVE-2022-23560 | False | False | False | False | AV:N/AC:L/Au:S/C:P/I:P/A:P | NETWORK | LOW | SINGLE | PARTIAL | PARTIAL | PARTIAL | 6.5 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 8.8 | HIGH | 2.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/6364463d6f5b6254cac3d6aedf999b6a96225038', 'name': 'https://github.com/tensorflow/tensorflow/commit/6364463d6f5b6254cac3d6aedf999b6a96225038', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/ca6f96b62ad84207fbec580404eaa7dd7403a550/tensorflow/lite/kernels/internal/utils/sparsity_format_converter.cc#L252-L293', 'name': 'https://github.com/tensorflow/tensorflow/blob/ca6f96b62ad84207fbec580404eaa7dd7403a550/tensorflow/lite/kernels/internal/utils/sparsity_format_converter.cc#L252-L293', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-4hvf-hxvg-f67v', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-4hvf-hxvg-f67v', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. An attacker can craft a TFLite model that would allow limited reads and writes outside of arrays in TFLite. This exploits missing validation in the conversion from sparse tensors to dense tensors. The fix is included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range. Users are advised to upgrade as soon as possible.'}] | 2022-02-09T18:55Z | 2022-02-04T23:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Karim Nosir | 2021-12-16 15:37:14-08:00 | [lite] Add some safety checks to avoid out of bound access for sparsity format
PiperOrigin-RevId: 416910386
Change-Id: Ic0b4dc048dc4b5a6309c572b8c4c9f776e4db60a | 6364463d6f5b6254cac3d6aedf999b6a96225038 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::internal::sparsity::FormatConverter<T>::Populate | tflite::internal::sparsity::FormatConverter<T>::Populate( const T * src_data , std :: vector<int> indices , int level , int prev_idx , int * src_data_ptr , T * dest_data) | ['src_data', 'indices', 'level', 'prev_idx', 'src_data_ptr', 'dest_data'] | void FormatConverter<T>::Populate(const T* src_data, std::vector<int> indices,
int level, int prev_idx, int* src_data_ptr,
T* dest_data) {
if (level == indices.size()) {
int orig_rank = dense_shape_.size();
std::vector<int> orig_idx;
orig_idx.resize(orig_rank);
int i = 0;
for (; i < orig_idx.size(); i++) {
int orig_dim = traversal_order_[i];
orig_idx[orig_dim] = indices[i];
}
for (; i < indices.size(); i++) {
const int block_idx = traversal_order_[i] - orig_rank;
const int orig_dim = block_map_[block_idx];
orig_idx[orig_dim] =
orig_idx[orig_dim] * block_size_[block_idx] + indices[i];
}
dest_data[GetFlattenedIndex(orig_idx, dense_shape_)] =
src_data[*src_data_ptr];
*src_data_ptr = *src_data_ptr + 1;
return;
}
const int metadata_idx = 2 * level;
const int shape_of_level = dim_metadata_[metadata_idx][0];
if (format_[level] == kTfLiteDimDense) {
for (int i = 0; i < shape_of_level; i++) {
indices[level] = i;
Populate(src_data, indices, level + 1, prev_idx * shape_of_level + i,
src_data_ptr, dest_data);
}
} else {
const auto& array_segments = dim_metadata_[metadata_idx];
const auto& array_indices = dim_metadata_[metadata_idx + 1];
for (int i = array_segments[prev_idx]; i < array_segments[prev_idx + 1];
i++) {
indices[level] = array_indices[i];
Populate(src_data, indices, level + 1, i, src_data_ptr, dest_data);
}
}
} | 344 | True | 1 |
CVE-2022-23560 | False | False | False | False | AV:N/AC:L/Au:S/C:P/I:P/A:P | NETWORK | LOW | SINGLE | PARTIAL | PARTIAL | PARTIAL | 6.5 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 8.8 | HIGH | 2.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/6364463d6f5b6254cac3d6aedf999b6a96225038', 'name': 'https://github.com/tensorflow/tensorflow/commit/6364463d6f5b6254cac3d6aedf999b6a96225038', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/ca6f96b62ad84207fbec580404eaa7dd7403a550/tensorflow/lite/kernels/internal/utils/sparsity_format_converter.cc#L252-L293', 'name': 'https://github.com/tensorflow/tensorflow/blob/ca6f96b62ad84207fbec580404eaa7dd7403a550/tensorflow/lite/kernels/internal/utils/sparsity_format_converter.cc#L252-L293', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-4hvf-hxvg-f67v', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-4hvf-hxvg-f67v', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}, {'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. An attacker can craft a TFLite model that would allow limited reads and writes outside of arrays in TFLite. This exploits missing validation in the conversion from sparse tensors to dense tensors. The fix is included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range. Users are advised to upgrade as soon as possible.'}] | 2022-02-09T18:55Z | 2022-02-04T23:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Karim Nosir | 2021-12-16 15:37:14-08:00 | [lite] Add some safety checks to avoid out of bound access for sparsity format
PiperOrigin-RevId: 416910386
Change-Id: Ic0b4dc048dc4b5a6309c572b8c4c9f776e4db60a | 6364463d6f5b6254cac3d6aedf999b6a96225038 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::internal::sparsity::FormatConverter<T>::Populate | tflite::internal::sparsity::FormatConverter<T>::Populate( const T * src_data , std :: vector<int> indices , int level , int prev_idx , int * src_data_ptr , T * dest_data) | ['src_data', 'indices', 'level', 'prev_idx', 'src_data_ptr', 'dest_data'] | void FormatConverter<T>::Populate(const T* src_data, std::vector<int> indices,
int level, int prev_idx, int* src_data_ptr,
T* dest_data) {
if (level == indices.size()) {
int orig_rank = dense_shape_.size();
std::vector<int> orig_idx;
orig_idx.resize(orig_rank);
int i = 0;
for (; i < orig_idx.size(); i++) {
int orig_dim = traversal_order_[i];
orig_idx[orig_dim] = indices[i];
}
for (; i < indices.size(); i++) {
const int block_idx = traversal_order_[i] - orig_rank;
const int orig_dim = block_map_[block_idx];
orig_idx[orig_dim] =
orig_idx[orig_dim] * block_size_[block_idx] + indices[i];
}
dest_data[GetFlattenedIndex(orig_idx, dense_shape_)] =
src_data[*src_data_ptr];
*src_data_ptr = *src_data_ptr + 1;
return;
}
const int metadata_idx = 2 * level;
const int shape_of_level = dim_metadata_[metadata_idx][0];
if (format_[level] == kTfLiteDimDense) {
for (int i = 0; i < shape_of_level; i++) {
indices[level] = i;
Populate(src_data, indices, level + 1, prev_idx * shape_of_level + i,
src_data_ptr, dest_data);
}
} else {
const auto& array_segments = dim_metadata_[metadata_idx];
const auto& array_indices = dim_metadata_[metadata_idx + 1];
for (int i = array_segments[prev_idx]; i < array_segments[prev_idx + 1];
i++) {
indices[level] = array_indices[i];
Populate(src_data, indices, level + 1, i, src_data_ptr, dest_data);
}
}
} | 344 | True | 1 |
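In the Populate walk above, array_segments[prev_idx] and array_segments[prev_idx + 1] come from the model and directly bound the loop over array_indices. A hedged sketch of the kind of range check the fix adds (helper name assumed):

#include <cstddef>
#include <vector>

// A segment pair (start, end) is only safe to iterate if prev_idx + 1 is a
// valid segment index, the pair is monotone, and end stays inside
// array_indices.
bool segment_in_range(const std::vector<int>& array_segments,
                      const std::vector<int>& array_indices, int prev_idx) {
  if (prev_idx < 0 ||
      static_cast<std::size_t>(prev_idx) + 1 >= array_segments.size())
    return false;
  const int start = array_segments[prev_idx];
  const int end = array_segments[prev_idx + 1];
  return 0 <= start && start <= end &&
         static_cast<std::size_t>(end) <= array_indices.size();
}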
CVE-2022-23559 | False | False | False | False | AV:N/AC:L/Au:S/C:P/I:P/A:P | NETWORK | LOW | SINGLE | PARTIAL | PARTIAL | PARTIAL | 6.5 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 8.8 | HIGH | 2.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/blob/ca6f96b62ad84207fbec580404eaa7dd7403a550/tensorflow/lite/kernels/embedding_lookup_sparse.cc#L179-L189', 'name': 'https://github.com/tensorflow/tensorflow/blob/ca6f96b62ad84207fbec580404eaa7dd7403a550/tensorflow/lite/kernels/embedding_lookup_sparse.cc#L179-L189', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-98p5-x8x4-c9m5', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-98p5-x8x4-c9m5', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/a4e401da71458d253b05e41f28637b65baf64be4', 'name': 'https://github.com/tensorflow/tensorflow/commit/a4e401da71458d253b05e41f28637b65baf64be4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/1de49725a5fc4e48f1a3b902ec3599ee99283043', 'name': 'https://github.com/tensorflow/tensorflow/commit/1de49725a5fc4e48f1a3b902ec3599ee99283043', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/f19be71717c497723ba0cea0379e84f061a75e01', 'name': 'https://github.com/tensorflow/tensorflow/commit/f19be71717c497723ba0cea0379e84f061a75e01', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-190'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. An attacker can craft a TFLite model that would cause an integer overflow in embedding lookup operations. Both `embedding_size` and `lookup_size` are products of values provided by the user. Hence, a malicious user could trigger overflows in the multiplication. In certain scenarios, this can then result in heap OOB read/write. Users are advised to upgrade to a patched version.'}] | 2022-02-09T18:53Z | 2022-02-04T23:15Z | Integer Overflow or Wraparound | The software performs a calculation that can produce an integer overflow or wraparound, when the logic assumes that the resulting value will always be larger than the original value. This can introduce other weaknesses when the calculation is used for resource management or execution control. | An integer overflow or wraparound occurs when an integer value is incremented to a value that is too large to store in the associated representation. When this occurs, the value may wrap to become a very small or negative number. While this may be intended behavior in circumstances that rely on wrapping, it can have security consequences if the wrap is unexpected. This is especially the case if the integer overflow can be triggered using user-supplied inputs. 
This becomes security-critical when the result is used to control looping, make a security decision, or determine the offset or size in behaviors such as memory allocation, copying, concatenation, etc.
| https://cwe.mitre.org/data/definitions/190.html | 0 | Karim Nosir | 2021-12-21 08:48:11-08:00 | [lite] Check for overflow when creating required bytes.
PiperOrigin-RevId: 417629001
Change-Id: Ia7feb3ea8e988f4fd4b3c98c1a1fed4557d99fd7 | 1de49725a5fc4e48f1a3b902ec3599ee99283043 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::builtin::Eval | tflite::ops::builtin::Eval( TfLiteContext * context , TfLiteNode * node) | ['context', 'node'] | TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {
auto* params =
reinterpret_cast<TfLiteEmbeddingLookupSparseParams*>(node->builtin_data);
TfLiteTensor* output;
TF_LITE_ENSURE_OK(context, GetOutputSafe(context, node, 0, &output));
const TfLiteTensor* ids;
TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, 0, &ids));
const TfLiteTensor* indices;
TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, 1, &indices));
const TfLiteTensor* dense_shape;
TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, 2, &dense_shape));
const TfLiteTensor* weights;
TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, 3, &weights));
const TfLiteTensor* value;
TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, 4, &value));
const int lookup_rank = SizeOfDimension(indices, 1);
const int embedding_rank = NumDimensions(value);
const int num_lookups = SizeOfDimension(ids, 0);
const int num_rows = SizeOfDimension(value, 0);
// The last dimension gets replaced by the embedding.
const int output_rank = (lookup_rank - 1) + (embedding_rank - 1);
// Make sure that the actual dense shape of the sparse tensor represented by
  // (lookup, indices, dense_shape) is consistent.
TF_LITE_ENSURE_EQ(context, SizeOfDimension(dense_shape, 0), lookup_rank);
// Resize output tensor.
TfLiteIntArray* output_shape = TfLiteIntArrayCreate(output_rank);
TF_LITE_ENSURE(context, output_shape != nullptr);
int k = 0;
int embedding_size = 1;
int lookup_size = 1;
for (int i = 0; i < lookup_rank - 1; i++, k++) {
const int dim = dense_shape->data.i32[i];
lookup_size *= dim;
output_shape->data[k] = dim;
}
for (int i = 1; i < embedding_rank; i++, k++) {
const int dim = SizeOfDimension(value, i);
embedding_size *= dim;
output_shape->data[k] = dim;
}
TF_LITE_ENSURE_STATUS(context->ResizeTensor(context, output, output_shape));
const int output_size = lookup_size * embedding_size;
TfLiteTensorRealloc(output_size * sizeof(float), output);
float* output_ptr = GetTensorData<float>(output);
const float* weights_ptr = GetTensorData<float>(weights);
const float* value_ptr = GetTensorData<float>(value);
std::fill_n(output_ptr, output_size, 0.0f);
// Keep track of the current bucket for aggregation/combination.
int current_output_offset = 0;
float current_total_weight = 0.0;
float current_squares_weight = 0.0;
int num_elements = 0;
for (int i = 0; i < num_lookups; i++) {
int idx = ids->data.i32[i];
if (idx >= num_rows || idx < 0) {
context->ReportError(context,
"Embedding Lookup Sparse: index out of bounds. "
"Got %d, and bounds are [0, %d]",
idx, num_rows - 1);
return kTfLiteError;
}
// Check where we need to aggregate.
const int example_indices_offset = i * lookup_rank;
int output_bucket = 0;
int stride = 1;
for (int k = (lookup_rank - 1) - 1; k >= 0; k--) {
output_bucket += indices->data.i32[example_indices_offset + k] * stride;
stride *= dense_shape->data.i32[k];
}
const int output_offset = output_bucket * embedding_size;
// If we are in a new aggregation bucket and the combiner is not the sum,
// go back and finalize the result of the previous bucket.
if (output_offset != current_output_offset) {
FinalizeAggregation(params->combiner, num_elements, current_total_weight,
current_squares_weight, embedding_size,
&output_ptr[current_output_offset]);
// Track next bucket.
num_elements = 0;
current_total_weight = 0.0;
current_squares_weight = 0.0;
current_output_offset = output_offset;
}
// Add element to aggregation.
++num_elements;
const int example_embedding_offset = idx * embedding_size;
const float w = weights_ptr[i];
current_squares_weight += w * w;
current_total_weight += w;
for (int k = 0; k < embedding_size; k++) {
output_ptr[current_output_offset + k] +=
value_ptr[example_embedding_offset + k] * w;
}
}
// Finalize last bucket.
FinalizeAggregation(params->combiner, num_elements, current_total_weight,
current_squares_weight, embedding_size,
&GetTensorData<float>(output)[current_output_offset]);
return kTfLiteOk;
} | 739 | True | 1 |
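The Eval routine above accumulates model-controlled dimensions into lookup_size, embedding_size, and output_size and feeds the products straight into TfLiteTensorRealloc and the pointer arithmetic that follows, with no overflow guard. Below is a minimal sketch of the defensive size arithmetic such a kernel could use; SafeSizeProduct is a hypothetical helper written only for illustration, and the only TFLite facility assumed is TF_LITE_ENSURE, which already appears in the function.

#include <cstdint>
#include <limits>

// Hypothetical helper (not part of TFLite): multiply two non-negative sizes
// and report failure instead of silently wrapping around on overflow.
inline bool SafeSizeProduct(int64_t a, int64_t b, int64_t* out) {
  if (a < 0 || b < 0) return false;
  if (a != 0 && b > std::numeric_limits<int64_t>::max() / a) return false;
  *out = a * b;
  return true;
}

Inside Eval, each accumulation of lookup_size and embedding_size, and the final output_size product, would go through this helper, with TF_LITE_ENSURE(context, SafeSizeProduct(...)) turning an overflow into a kernel error rather than an undersized reallocation.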
CVE-2022-23561 | False | False | False | False | AV:N/AC:L/Au:S/C:P/I:P/A:P | NETWORK | LOW | SINGLE | PARTIAL | PARTIAL | PARTIAL | 6.5 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 8.8 | HIGH | 2.8 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-9c78-vcq7-7vxq', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-9c78-vcq7-7vxq', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/6c0b2b70eeee588591680f5b7d5d38175fd7cdf6', 'name': 'https://github.com/tensorflow/tensorflow/commit/6c0b2b70eeee588591680f5b7d5d38175fd7cdf6', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. An attacker can craft a TFLite model that would cause a write outside of bounds of an array in TFLite. In fact, the attacker can override the linked list used by the memory allocator. This can be leveraged for an arbitrary write primitive under certain conditions. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-09T18:13Z | 2022-02-04T23:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Karim Nosir | 2021-12-21 08:50:37-08:00 | [lite] add validation check for sparse fully connected
PiperOrigin-RevId: 417629354
Change-Id: If96171c4bd4f5fdb01d6368d6deab19d1c9beca7 | 6c0b2b70eeee588591680f5b7d5d38175fd7cdf6 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::builtin::fully_connected::EvalFloat | tflite::ops::builtin::fully_connected::EvalFloat( TfLiteContext * context , TfLiteNode * node , TfLiteFullyConnectedParams * params , OpData * data , const TfLiteTensor * input , const TfLiteTensor * filter , const TfLiteTensor * bias , TfLiteTensor * output) | ['context', 'node', 'params', 'data', 'input', 'filter', 'bias', 'output'] | TfLiteStatus EvalFloat(TfLiteContext* context, TfLiteNode* node,
TfLiteFullyConnectedParams* params, OpData* data,
const TfLiteTensor* input, const TfLiteTensor* filter,
const TfLiteTensor* bias, TfLiteTensor* output) {
float output_activation_min, output_activation_max;
CalculateActivationRange(params->activation, &output_activation_min,
&output_activation_max);
if (kernel_type == kReference) {
FullyConnectedParams op_params;
op_params.float_activation_min = output_activation_min;
op_params.float_activation_max = output_activation_max;
if (filter->sparsity != nullptr) {
const auto& sparsity = *filter->sparsity;
reference_ops::FullyConnectedSparseWeight(
sparsity, op_params, GetTensorShape(input),
GetTensorData<float>(input), GetTensorShape(filter),
GetTensorData<float>(filter), GetTensorShape(bias),
GetTensorData<float>(bias), GetTensorShape(output),
GetTensorData<float>(output));
} else {
reference_ops::FullyConnected(
op_params, GetTensorShape(input), GetTensorData<float>(input),
GetTensorShape(filter), GetTensorData<float>(filter),
GetTensorShape(bias), GetTensorData<float>(bias),
GetTensorShape(output), GetTensorData<float>(output));
}
} else if (kernel_type == kLegacyPie) {
return EvalPie(context, node, params, data, input, filter, bias, output);
} else {
FullyConnectedParams op_params;
op_params.float_activation_min = output_activation_min;
op_params.float_activation_max = output_activation_max;
if (filter->sparsity != nullptr) {
const auto& sparsity = *filter->sparsity;
if (!SupportedSparsityFormat(sparsity)) {
TF_LITE_KERNEL_LOG(context,
"Unsupported sparse fully-connected weight format.");
return kTfLiteError;
}
if (sparsity.dim_metadata_size == kDimMetadataSizeRandomSparse) {
// Random sparse.
optimized_ops::FullyConnectedSparseWeight(
sparsity, op_params, GetTensorShape(input),
GetTensorData<float>(input), GetTensorShape(filter),
GetTensorData<float>(filter), GetTensorShape(bias),
GetTensorData<float>(bias), GetTensorShape(output),
GetTensorData<float>(output));
} else if (sparsity.dim_metadata_size == kDimMetadataSizeBlockSparse &&
sparsity.dim_metadata[2].dense_size == 4) {
// Block sparse with block size of 1x4.
optimized_ops::FullyConnectedSparseWeight1x4(
sparsity, op_params, GetTensorShape(input),
GetTensorData<float>(input), GetTensorShape(filter),
GetTensorData<float>(filter), GetTensorShape(bias),
GetTensorData<float>(bias), GetTensorShape(output),
GetTensorData<float>(output),
CpuBackendContext::GetFromContext(context));
} else {
TF_LITE_KERNEL_LOG(context,
"Unsupported sparse fully-connected weight format.");
return kTfLiteError;
}
} else {
op_params.lhs_cacheable = IsConstantTensor(filter);
op_params.rhs_cacheable = IsConstantTensor(input);
optimized_ops::FullyConnected(
op_params, GetTensorShape(input), GetTensorData<float>(input),
GetTensorShape(filter), GetTensorData<float>(filter),
GetTensorShape(bias), GetTensorData<float>(bias),
GetTensorShape(output), GetTensorData<float>(output),
CpuBackendContext::GetFromContext(context));
}
}
return kTfLiteOk;
} | 574 | True | 1 |
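The block-sparse branch above dispatches on dim_metadata_size and dense_size but never inspects the compressed indices stored in the model, which is what lets a crafted model steer FullyConnectedSparseWeight1x4 outside the weight and output buffers. Below is a hedged sketch of a pre-dispatch bounds walk, not the shipped patch; it assumes the sparsity metadata exposes its compressed indices as a TfLiteIntArray (array_indices in the TFLite sparsity structs) and that the caller supplies the dense extent of the matching traversal dimension.

// Sketch: every compressed index recorded in the model must stay inside the
// dense extent of its traversal dimension, otherwise the sparse kernel can
// address memory past the weight/output buffers.
TfLiteStatus CheckSparseIndicesInBounds(TfLiteContext* context,
                                        const TfLiteIntArray* array_indices,
                                        int dense_extent) {
  TF_LITE_ENSURE(context, array_indices != nullptr);
  for (int i = 0; i < array_indices->size; ++i) {
    TF_LITE_ENSURE(context, array_indices->data[i] >= 0);
    TF_LITE_ENSURE(context, array_indices->data[i] < dense_extent);
  }
  return kTfLiteOk;
}

EvalFloat would call this once per sparse dimension in filter->sparsity->dim_metadata before reaching either sparse kernel, so the first out-of-range index aborts evaluation via TF_LITE_ENSURE instead of corrupting memory.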
CVE-2022-23595 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:N/A:P | NETWORK | LOW | SINGLE | NONE | NONE | PARTIAL | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/blob/274df9b02330b790aa8de1cee164b70f72b9b244/tensorflow/compiler/jit/xla_platform_info.cc#L43-L104', 'name': 'https://github.com/tensorflow/tensorflow/blob/274df9b02330b790aa8de1cee164b70f72b9b244/tensorflow/compiler/jit/xla_platform_info.cc#L43-L104', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/e21af685e1828f7ca65038307df5cc06de4479e8', 'name': 'https://github.com/tensorflow/tensorflow/commit/e21af685e1828f7ca65038307df5cc06de4479e8', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-fpcp-9h7m-ffpx', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-fpcp-9h7m-ffpx', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-476'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndIncluding': '2.5.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.6.0', 'versionEndIncluding': '2.6.2', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Tensorflow is an Open Source Machine Learning Framework. When building an XLA compilation cache, if default settings are used, TensorFlow triggers a null pointer dereference. In the default scenario, all devices are allowed, so `flr->config_proto` is `nullptr`. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.'}] | 2022-02-10T02:10Z | 2022-02-04T23:15Z | NULL Pointer Dereference | A NULL pointer dereference occurs when the application dereferences a pointer that it expects to be valid, but is NULL, typically causing a crash or exit. | NULL pointer dereference issues can occur through a number of flaws, including race conditions, and simple programming omissions.
| https://cwe.mitre.org/data/definitions/476.html | 0 | Smit Hinsu | 2022-01-07 16:20:27-08:00 | Fix Null-pointer dereference in BuildXlaCompilationCache
If ConfigProto is not used, then use the default settings which is to allow all devices.
PiperOrigin-RevId: 420391800
Change-Id: I88161ad7042990aef678e77b597a2fb2c8f815be | e21af685e1828f7ca65038307df5cc06de4479e8 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::BuildXlaCompilationCache | tensorflow::BuildXlaCompilationCache( DeviceBase * device , FunctionLibraryRuntime * flr , const XlaPlatformInfo & platform_info , XlaCompilationCache ** cache) | ['device', 'flr', 'platform_info', 'cache'] | Status BuildXlaCompilationCache(DeviceBase* device, FunctionLibraryRuntime* flr,
const XlaPlatformInfo& platform_info,
XlaCompilationCache** cache) {
if (platform_info.xla_device_metadata()) {
*cache = new XlaCompilationCache(
platform_info.xla_device_metadata()->client(),
platform_info.xla_device_metadata()->jit_device_type());
return Status::OK();
}
auto platform =
se::MultiPlatformManager::PlatformWithId(platform_info.platform_id());
if (!platform.ok()) {
return platform.status();
}
StatusOr<xla::Compiler*> compiler_for_platform =
xla::Compiler::GetForPlatform(platform.ValueOrDie());
if (!compiler_for_platform.ok()) {
// In some rare cases (usually in unit tests with very small clusters) we
// may end up transforming an XLA cluster with at least one GPU operation
// (which would normally force the cluster to be compiled using XLA:GPU)
// into an XLA cluster with no GPU operations (i.e. containing only CPU
// operations). Such a cluster can fail compilation (in way that
// MarkForCompilation could not have detected) if the CPU JIT is not linked
// in.
//
// So bail out of _XlaCompile in this case, and let the executor handle the
// situation for us.
const Status& status = compiler_for_platform.status();
if (status.code() == error::NOT_FOUND) {
return errors::Unimplemented("Could not find compiler for platform ",
platform.ValueOrDie()->Name(), ": ",
status.ToString());
}
}
xla::LocalClientOptions client_options;
client_options.set_platform(platform.ValueOrDie());
client_options.set_intra_op_parallelism_threads(
device->tensorflow_cpu_worker_threads()->num_threads);
string allowed_gpus =
flr->config_proto()->gpu_options().visible_device_list();
TF_ASSIGN_OR_RETURN(absl::optional<std::set<int>> gpu_ids,
ParseVisibleDeviceList(allowed_gpus));
client_options.set_allowed_devices(gpu_ids);
auto client = xla::ClientLibrary::GetOrCreateLocalClient(client_options);
if (!client.ok()) {
return client.status();
}
const XlaOpRegistry::DeviceRegistration* registration;
if (!XlaOpRegistry::GetCompilationDevice(platform_info.device_type().type(),
®istration)) {
return errors::InvalidArgument("No JIT device registered for ",
platform_info.device_type().type());
}
*cache = new XlaCompilationCache(
client.ValueOrDie(), DeviceType(registration->compilation_device_name));
return Status::OK();
} | 362 | True | 1 |
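BuildXlaCompilationCache above dereferences flr->config_proto() unconditionally, and in the default execution path no ConfigProto is attached, so the call segfaults. Below is a sketch of the guard the commit message describes; it only rearranges calls already visible in the function and assumes ParseVisibleDeviceList treats an empty string as "no restriction", i.e. all devices allowed.

// Sketch of the fallback described in the commit message: when the function
// library runtime carries no ConfigProto, use an empty visible-device list,
// which ParseVisibleDeviceList is assumed to interpret as "allow all devices".
string allowed_gpus;
if (flr->config_proto() != nullptr) {
  allowed_gpus = flr->config_proto()->gpu_options().visible_device_list();
}
TF_ASSIGN_OR_RETURN(absl::optional<std::set<int>> gpu_ids,
                    ParseVisibleDeviceList(allowed_gpus));
client_options.set_allowed_devices(gpu_ids);

This replaces the unguarded visible_device_list lookup in the body above; the rest of the function would be unchanged.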
CVE-2022-29212 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/issues/43661', 'name': 'https://github.com/tensorflow/tensorflow/issues/43661', 'refsource': 'MISC', 'tags': ['Exploit', 'Issue Tracking', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-8wwm-6264-x792', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-8wwm-6264-x792', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/a989426ee1346693cc015792f11d715f6944f2b8', 'name': 'https://github.com/tensorflow/tensorflow/commit/a989426ee1346693cc015792f11d715f6944f2b8', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/lite/kernels/internal/quantization_util.cc#L114-L123', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/lite/kernels/internal/quantization_util.cc#L114-L123', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-20'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:-:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. 
Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, certain TFLite models that were created using TFLite model converter would crash when loaded in the TFLite interpreter. The culprit is that during quantization the scale of values could be greater than 1 but code was always assuming sub-unit scaling. Thus, since code was calling `QuantizeMultiplierSmallerThanOneExp`, the `TFLITE_CHECK_LT` assertion would trigger and abort the process. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.'}] | 2022-06-03T15:16Z | 2022-05-21T00:15Z | Improper Input Validation | The product receives input or data, but it does
not validate or incorrectly validates that the input has the
properties that are required to process the data safely and
correctly. |
Input validation is a frequently-used technique
for checking potentially dangerous inputs in order to
ensure that the inputs are safe for processing within the
code, or when communicating with other components. When
software does not validate input properly, an attacker is
able to craft the input in a form that is not expected by
the rest of the application. This will lead to parts of the
system receiving unintended input, which may result in
altered control flow, arbitrary control of a resource, or
arbitrary code execution.
Input validation is not the only technique for
processing input, however. Other techniques attempt to
transform potentially-dangerous input into something safe, such
as filtering (CWE-790) - which attempts to remove dangerous
inputs - or encoding/escaping (CWE-116), which attempts to
ensure that the input is not misinterpreted when it is included
in output to another component. Other techniques exist as well
(see CWE-138 for more examples.)
Input validation can be applied to:
raw data - strings, numbers, parameters, file contents, etc.
metadata - information about the raw data, such as headers or size
Data can be simple or structured. Structured data
can be composed of many nested layers, composed of
combinations of metadata and raw data, with other simple or
structured data.
Many properties of raw data or metadata may need
to be validated upon entry into the code, such
as:
specified quantities such as size, length, frequency, price, rate, number of operations, time, etc.
implied or derived quantities, such as the actual size of a file instead of a specified size
indexes, offsets, or positions into more complex data structures
symbolic keys or other elements into hash tables, associative arrays, etc.
well-formedness, i.e. syntactic correctness - compliance with expected syntax
lexical token correctness - compliance with rules for what is treated as a token
specified or derived type - the actual type of the input (or what the input appears to be)
consistency - between individual data elements, between raw data and metadata, between references, etc.
conformance to domain-specific rules, e.g. business logic
equivalence - ensuring that equivalent inputs are treated the same
authenticity, ownership, or other attestations about the input, e.g. a cryptographic signature to prove the source of the data
Implied or derived properties of data must often
be calculated or inferred by the code itself. Errors in
deriving properties may be considered a contributing factor
to improper input validation.
Note that "input validation" has very different
meanings to different people, or within different
classification schemes. Caution must be used when
referencing this CWE entry or mapping to it. For example,
some weaknesses might involve inadvertently giving control
to an attacker over an input when they should not be able
to provide an input at all, but sometimes this is referred
to as input validation.
Finally, it is important to emphasize that the
distinctions between input validation and output escaping
are often blurred, and developers must be careful to
understand the difference, including how input validation
is not always sufficient to prevent vulnerabilities,
especially when less stringent data types must be
supported, such as free-form text. Consider a SQL injection
scenario in which a person's last name is inserted into a
query. The name "O'Reilly" would likely pass the validation
step since it is a common last name in the English
language. However, this valid name cannot be directly
inserted into the database because it contains the "'"
apostrophe character, which would need to be escaped or
otherwise transformed. In this case, removing the
apostrophe might reduce the risk of SQL injection, but it
would produce incorrect behavior because the wrong name
would be recorded.
| https://cwe.mitre.org/data/definitions/20.html | 0 | Songyi Han | 2022-03-07 15:09:14-08:00 | Improve to cover scale value greater than one
PiperOrigin-RevId: 433050921 | a989426ee1346693cc015792f11d715f6944f2b8 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tflite::ops::builtin::comparisons::ComparisonQuantized | tflite::ops::builtin::comparisons::ComparisonQuantized( const TfLiteTensor * input1 , const TfLiteTensor * input2 , TfLiteTensor * output , bool requires_broadcast) | ['input1', 'input2', 'output', 'requires_broadcast'] | void ComparisonQuantized(const TfLiteTensor* input1, const TfLiteTensor* input2,
TfLiteTensor* output, bool requires_broadcast) {
if (input1->type == kTfLiteUInt8 || input1->type == kTfLiteInt8) {
auto input1_offset = -input1->params.zero_point;
auto input2_offset = -input2->params.zero_point;
const int left_shift = 8;
int32 input1_multiplier;
int input1_shift;
QuantizeMultiplierSmallerThanOneExp(input1->params.scale,
&input1_multiplier, &input1_shift);
int32 input2_multiplier;
int input2_shift;
QuantizeMultiplierSmallerThanOneExp(input2->params.scale,
&input2_multiplier, &input2_shift);
ComparisonParams op_params;
op_params.left_shift = left_shift;
op_params.input1_offset = input1_offset;
op_params.input1_multiplier = input1_multiplier;
op_params.input1_shift = input1_shift;
op_params.input2_offset = input2_offset;
op_params.input2_multiplier = input2_multiplier;
op_params.input2_shift = input2_shift;
if (requires_broadcast) {
reference_ops::BroadcastComparison4DSlowWithScaling<input_dtype, opname>(
op_params, GetTensorShape(input1), GetTensorData<input_dtype>(input1),
GetTensorShape(input2), GetTensorData<input_dtype>(input2),
GetTensorShape(output), GetTensorData<bool>(output));
} else {
reference_ops::ComparisonWithScaling<input_dtype, opname>(
op_params, GetTensorShape(input1), GetTensorData<input_dtype>(input1),
GetTensorShape(input2), GetTensorData<input_dtype>(input2),
GetTensorShape(output), GetTensorData<bool>(output));
}
}
} | 261 | True | 1 |
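ComparisonQuantized above always routes the input scales through QuantizeMultiplierSmallerThanOneExp, whose internal TFLITE_CHECK_LT aborts the process as soon as a model carries a scale of 1.0 or more. A hedged sketch of the alternative is below: QuantizeMultiplier is the general-purpose routine from the same quantization_util header and accepts multipliers on either side of 1.0, though whether the downstream comparison kernels handle the resulting positive shifts should be confirmed against the release being patched.

// Sketch: derive the fixed-point multipliers with the general helper so a
// scale >= 1.0 no longer trips the sub-unit assertion.
int32 input1_multiplier;
int input1_shift;
QuantizeMultiplier(input1->params.scale, &input1_multiplier, &input1_shift);
int32 input2_multiplier;
int input2_shift;
QuantizeMultiplier(input2->params.scale, &input2_multiplier, &input2_shift);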
CVE-2022-29211 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/e57fd691c7b0fd00ea3bfe43444f30c1969748b5', 'name': 'https://github.com/tensorflow/tensorflow/commit/e57fd691c7b0fd00ea3bfe43444f30c1969748b5', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-xrp2-fhq4-4q3w', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-xrp2-fhq4-4q3w', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/issues/45770', 'name': 'https://github.com/tensorflow/tensorflow/issues/45770', 'refsource': 'MISC', 'tags': ['Exploit', 'Issue Tracking', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/histogram_op.cc', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/histogram_op.cc', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/histogram_op.cc#L35-L74', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/histogram_op.cc#L35-L74', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-20'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:-:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.histogram_fixed_width` is vulnerable to a crash when the values array contain `Not a Number` (`NaN`) elements. The implementation assumes that all floating point operations are defined and then converts a floating point result to an integer index. If `values` contains `NaN` then the result of the division is still `NaN` and the cast to `int32` would result in a crash. This only occurs on the CPU implementation. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.'}] | 2022-06-03T15:02Z | 2022-05-21T00:15Z | Improper Input Validation | The product receives input or data, but it does
not validate or incorrectly validates that the input has the
properties that are required to process the data safely and
correctly. |
Input validation is a frequently-used technique
for checking potentially dangerous inputs in order to
ensure that the inputs are safe for processing within the
code, or when communicating with other components. When
software does not validate input properly, an attacker is
able to craft the input in a form that is not expected by
the rest of the application. This will lead to parts of the
system receiving unintended input, which may result in
altered control flow, arbitrary control of a resource, or
arbitrary code execution.
Input validation is not the only technique for
processing input, however. Other techniques attempt to
transform potentially-dangerous input into something safe, such
as filtering (CWE-790) - which attempts to remove dangerous
inputs - or encoding/escaping (CWE-116), which attempts to
ensure that the input is not misinterpreted when it is included
in output to another component. Other techniques exist as well
(see CWE-138 for more examples.)
Input validation can be applied to:
raw data - strings, numbers, parameters, file contents, etc.
metadata - information about the raw data, such as headers or size
Data can be simple or structured. Structured data
can be composed of many nested layers, composed of
combinations of metadata and raw data, with other simple or
structured data.
Many properties of raw data or metadata may need
to be validated upon entry into the code, such
as:
specified quantities such as size, length, frequency, price, rate, number of operations, time, etc.
implied or derived quantities, such as the actual size of a file instead of a specified size
indexes, offsets, or positions into more complex data structures
symbolic keys or other elements into hash tables, associative arrays, etc.
well-formedness, i.e. syntactic correctness - compliance with expected syntax
lexical token correctness - compliance with rules for what is treated as a token
specified or derived type - the actual type of the input (or what the input appears to be)
consistency - between individual data elements, between raw data and metadata, between references, etc.
conformance to domain-specific rules, e.g. business logic
equivalence - ensuring that equivalent inputs are treated the same
authenticity, ownership, or other attestations about the input, e.g. a cryptographic signature to prove the source of the data
Implied or derived properties of data must often
be calculated or inferred by the code itself. Errors in
deriving properties may be considered a contributing factor
to improper input validation.
Note that "input validation" has very different
meanings to different people, or within different
classification schemes. Caution must be used when
referencing this CWE entry or mapping to it. For example,
some weaknesses might involve inadvertently giving control
to an attacker over an input when they should not be able
to provide an input at all, but sometimes this is referred
to as input validation.
Finally, it is important to emphasize that the
distinctions between input validation and output escaping
are often blurred, and developers must be careful to
understand the difference, including how input validation
is not always sufficient to prevent vulnerabilities,
especially when less stringent data types must be
supported, such as free-form text. Consider a SQL injection
scenario in which a person's last name is inserted into a
query. The name "O'Reilly" would likely pass the validation
step since it is a common last name in the English
language. However, this valid name cannot be directly
inserted into the database because it contains the "'"
apostrophe character, which would need to be escaped or
otherwise transformed. In this case, removing the
apostrophe might reduce the risk of SQL injection, but it
would produce incorrect behavior because the wrong name
would be recorded.
| https://cwe.mitre.org/data/definitions/20.html | 0 | Mihai Maruseac | 2022-04-20 11:35:47-07:00 | Prevent crash when histogram is called with NaN values.
Fixes #45770
PiperOrigin-RevId: 443149951 | e57fd691c7b0fd00ea3bfe43444f30c1969748b5 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::HistogramFixedWidthOp::Compute | tensorflow::HistogramFixedWidthOp::Compute( OpKernelContext * ctx) | ['ctx'] | void Compute(OpKernelContext* ctx) override {
const Tensor& values_tensor = ctx->input(0);
const Tensor& value_range_tensor = ctx->input(1);
const Tensor& nbins_tensor = ctx->input(2);
OP_REQUIRES(ctx, TensorShapeUtils::IsVector(value_range_tensor.shape()),
errors::InvalidArgument("value_range should be a vector."));
OP_REQUIRES(ctx, (value_range_tensor.shape().num_elements() == 2),
errors::InvalidArgument(
"value_range should be a vector of 2 elements."));
OP_REQUIRES(ctx, TensorShapeUtils::IsScalar(nbins_tensor.shape()),
errors::InvalidArgument("nbins should be a scalar."));
const auto values = values_tensor.flat<T>();
const auto value_range = value_range_tensor.flat<T>();
const auto nbins = nbins_tensor.scalar<int32>()();
OP_REQUIRES(
ctx, (value_range(0) < value_range(1)),
errors::InvalidArgument("value_range should satisfy value_range[0] < "
"value_range[1], but got '[",
value_range(0), ", ", value_range(1), "]'"));
OP_REQUIRES(
ctx, (nbins > 0),
errors::InvalidArgument("nbins should be a positive number, but got '",
nbins, "'"));
Tensor* out_tensor;
OP_REQUIRES_OK(ctx,
ctx->allocate_output(0, TensorShape({nbins}), &out_tensor));
auto out = out_tensor->flat<Tout>();
OP_REQUIRES_OK(
ctx, functor::HistogramFixedWidthFunctor<Device, T, Tout>::Compute(
ctx, values, value_range, nbins, out));
} | 286 | True | 1 |
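Compute above validates the range and bin count but never looks at the values themselves, so a NaN element flows into the functor, the bin computation stays NaN, and the cast to int32 crashes the CPU kernel. Below is a minimal sketch of an up-front input check; it is not necessarily how the shipped fix works, and it relies only on the self-comparison property of NaN plus the OP_REQUIRES / errors::InvalidArgument pattern already used in the function.

// Sketch: reject NaN inputs before the bin indices are computed. NaN is the
// only value for which x == x is false, so the test is also harmless for
// integer T.
for (int64_t i = 0; i < values.size(); ++i) {
  OP_REQUIRES(ctx, values(i) == values(i),
              errors::InvalidArgument(
                  "histogram_fixed_width requires finite values; found NaN "
                  "at flat index ", i));
}

The loop would sit right after the flat views are taken, so the functor only ever sees well-defined values.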
CVE-2022-29204 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/20cb18724b0bf6c09071a3f53434c4eec53cc147', 'name': 'https://github.com/tensorflow/tensorflow/commit/20cb18724b0bf6c09071a3f53434c4eec53cc147', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/unsorted_segment_join_op.cc#L83-L14', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/unsorted_segment_join_op.cc#L83-L14', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/security/advisory/tfsa-2021-198.md', 'name': 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/security/advisory/tfsa-2021-198.md', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-hx9q-2mx4-m4pg', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-hx9q-2mx4-m4pg', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/84563f265f28b3c36a15335c8b005d405260e943', 'name': 'https://github.com/tensorflow/tensorflow/commit/84563f265f28b3c36a15335c8b005d405260e943', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-20'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 
'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:-:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.UnsortedSegmentJoin` does not fully validate the input arguments. This results in a `CHECK`-failure which can be used to trigger a denial of service attack. The code assumes `num_segments` is a positive scalar but there is no validation. Since this value is used to allocate the output tensor, a negative value would result in a `CHECK`-failure (assertion failure), as per TFSA-2021-198. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.'}] | 2022-06-02T19:27Z | 2022-05-20T23:15Z | Improper Input Validation | The product receives input or data, but it does
not validate or incorrectly validates that the input has the
properties that are required to process the data safely and
correctly. |
Input validation is a frequently-used technique
for checking potentially dangerous inputs in order to
ensure that the inputs are safe for processing within the
code, or when communicating with other components. When
software does not validate input properly, an attacker is
able to craft the input in a form that is not expected by
the rest of the application. This will lead to parts of the
system receiving unintended input, which may result in
altered control flow, arbitrary control of a resource, or
arbitrary code execution.
Input validation is not the only technique for
processing input, however. Other techniques attempt to
transform potentially-dangerous input into something safe, such
as filtering (CWE-790) - which attempts to remove dangerous
inputs - or encoding/escaping (CWE-116), which attempts to
ensure that the input is not misinterpreted when it is included
in output to another component. Other techniques exist as well
(see CWE-138 for more examples.)
Input validation can be applied to:
raw data - strings, numbers, parameters, file contents, etc.
metadata - information about the raw data, such as headers or size
Data can be simple or structured. Structured data
can be composed of many nested layers, composed of
combinations of metadata and raw data, with other simple or
structured data.
Many properties of raw data or metadata may need
to be validated upon entry into the code, such
as:
specified quantities such as size, length, frequency, price, rate, number of operations, time, etc.
implied or derived quantities, such as the actual size of a file instead of a specified size
indexes, offsets, or positions into more complex data structures
symbolic keys or other elements into hash tables, associative arrays, etc.
well-formedness, i.e. syntactic correctness - compliance with expected syntax
lexical token correctness - compliance with rules for what is treated as a token
specified or derived type - the actual type of the input (or what the input appears to be)
consistency - between individual data elements, between raw data and metadata, between references, etc.
conformance to domain-specific rules, e.g. business logic
equivalence - ensuring that equivalent inputs are treated the same
authenticity, ownership, or other attestations about the input, e.g. a cryptographic signature to prove the source of the data
Implied or derived properties of data must often
be calculated or inferred by the code itself. Errors in
deriving properties may be considered a contributing factor
to improper input validation.
Note that "input validation" has very different
meanings to different people, or within different
classification schemes. Caution must be used when
referencing this CWE entry or mapping to it. For example,
some weaknesses might involve inadvertently giving control
to an attacker over an input when they should not be able
to provide an input at all, but sometimes this is referred
to as input validation.
Finally, it is important to emphasize that the
distinctions between input validation and output escaping
are often blurred, and developers must be careful to
understand the difference, including how input validation
is not always sufficient to prevent vulnerabilities,
especially when less stringent data types must be
supported, such as free-form text. Consider a SQL injection
scenario in which a person's last name is inserted into a
query. The name "O'Reilly" would likely pass the validation
step since it is a common last name in the English
language. However, this valid name cannot be directly
inserted into the database because it contains the "'"
apostrophe character, which would need to be escaped or
otherwise transformed. In this case, removing the
apostrophe might reduce the risk of SQL injection, but it
would produce incorrect behavior because the wrong name
would be recorded.
| https://cwe.mitre.org/data/definitions/20.html | 0 | Mihai Maruseac | 2022-04-20 12:05:26-07:00 | Allow 0 for number of segments in `unsorted_segment_join_op.cc`
Related to the fix for #55305
PiperOrigin-RevId: 443157549 | 20cb18724b0bf6c09071a3f53434c4eec53cc147 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::UnsortedSegmentJoinOp::Compute | tensorflow::UnsortedSegmentJoinOp::Compute( OpKernelContext * context) | ['context'] | void Compute(OpKernelContext* context) override {
const Tensor& input = context->input(0);
const TensorShape& input_shape = input.shape();
const int32_t input_dims = input_shape.dims();
const Tensor& segment_id = context->input(1);
const TensorShape& segment_id_shape = segment_id.shape();
const int32_t segment_dims = segment_id_shape.dims();
const Tensor& num_segments_tensor = context->input(2);
OP_REQUIRES(context, num_segments_tensor.NumElements() != 0,
errors::InvalidArgument("Number of segments cannot be empty."));
auto num_segments = num_segments_tensor.scalar<NUM_SEGMENTS_TYPE>()();
OP_REQUIRES(context, num_segments > 0,
errors::InvalidArgument("Number of segments must be positive"));
OP_REQUIRES(context, segment_dims != 0,
errors::InvalidArgument("Segment_id cannot have rank 0"));
OP_REQUIRES(
context, segment_dims <= input_dims,
errors::OutOfRange("Invalid segment_id rank ", segment_dims,
" for input with ", input_dims, " dimension(s)"));
for (auto i = 0; i < segment_dims; i++) {
OP_REQUIRES(
context, segment_id_shape.dim_size(i) == input_shape.dim_size(i),
errors::InvalidArgument(
"Segment dimension is ", segment_id_shape.dim_size(i),
" while input dimension is ", input_dims, " in rank ", i));
}
// Making output tensor.
Tensor* output_tensor = nullptr;
TensorShape output_shape =
GetOutputShape(input_shape, segment_id_shape, num_segments);
OP_REQUIRES_OK(context, context->allocate_output("output", output_shape,
&output_tensor));
  // Preparing flat tensors.
auto output_flat = output_tensor->flat<tstring>();
auto flat_segment_id = segment_id.flat<INDICES_TYPE>();
auto flat_input = input.flat<tstring>();
for (int i = 0; i < flat_segment_id.size(); i++) {
OP_REQUIRES(
context,
((flat_segment_id(i) < num_segments) && (flat_segment_id(i) >= 0)),
errors::InvalidArgument(
"segment_ids are not allowed to exceed num_segments or"
" to have negative values."));
}
int64_t big_stride;
int64_t small_stride;
std::tie(big_stride, small_stride) =
GetStrides<INDICES_TYPE>(input_shape, segment_id_shape);
auto relative_offset_set =
GetFlattenedRelativeOffsets<INDICES_TYPE>(small_stride, big_stride);
for (auto start_offset = 0; start_offset < big_stride; start_offset++) {
for (auto i = 0; i < relative_offset_set.size(); i++) {
auto output_index = start_offset + flat_segment_id(i) * big_stride;
auto offset = start_offset + relative_offset_set[i];
if (output_flat(output_index).length() != 0)
output_flat(output_index).append(separator_.c_str());
output_flat(output_index).append(flat_input(offset));
}
}
} | 494 | True | 1 |
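Compute above reads num_segments through .scalar<>() without first checking that the tensor is a scalar, and it rejects zero segments outright, which is the validation gap this record describes. Below is a sketch along the lines of the commit message; it reuses TensorShapeUtils::IsScalar and the OP_REQUIRES pattern from the surrounding code and treats zero segments as a valid, empty-output case.

// Sketch: require a true scalar and accept num_segments == 0 so that an
// empty segment count yields an empty output instead of a CHECK failure.
OP_REQUIRES(context, TensorShapeUtils::IsScalar(num_segments_tensor.shape()),
            errors::InvalidArgument("num_segments must be a scalar, got shape ",
                                    num_segments_tensor.shape().DebugString()));
auto num_segments = num_segments_tensor.scalar<NUM_SEGMENTS_TYPE>()();
OP_REQUIRES(context, num_segments >= 0,
            errors::InvalidArgument("num_segments must be non-negative, got ",
                                    num_segments));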
CVE-2022-29205 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-54ch-gjq5-4976', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-54ch-gjq5-4976', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/237822b59fc504dda2c564787f5d3ad9c4aa62d9', 'name': 'https://github.com/tensorflow/tensorflow/commit/237822b59fc504dda2c564787f5d3ad9c4aa62d9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/python/eager/pywrap_tfe_src.cc#L480-L482', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/python/eager/pywrap_tfe_src.cc#L480-L482', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/python/eager/pywrap_tfe_src.cc#L296-L320', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/python/eager/pywrap_tfe_src.cc#L296-L320', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-476'}, {'lang': 'en', 'value': 'CWE-908'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:-:*:*:*:*:*:*', 'cpe_name': []}]}] | 
[{'lang': 'en', 'value': "TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, there is a potential for segfault / denial of service in TensorFlow by calling `tf.compat.v1.*` ops which don't yet have support for quantized types, which was added after migration to TensorFlow 2.x. In these scenarios, since the kernel is missing, a `nullptr` value is passed to `ParseDimensionValue` for the `py_value` argument. Then, this is dereferenced, resulting in segfault. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue."}] | 2022-06-02T19:31Z | 2022-05-20T23:15Z | NULL Pointer Dereference | A NULL pointer dereference occurs when the application dereferences a pointer that it expects to be valid, but is NULL, typically causing a crash or exit. | NULL pointer dereference issues can occur through a number of flaws, including race conditions, and simple programming omissions.
| https://cwe.mitre.org/data/definitions/476.html | 0 | Antonio Sanchez | 2022-04-27 20:50:13-07:00 | Fix tf.compat.v1.placeholder_with_default vulnerability with quantized types.
When iterating through the tensor to extract shape values, an underlying missing kernel
(`StridedSlice` for quantized types) causes an error, which then results in a `nullptr`
being passed to `ParseDimensionValue()`, causing a segfault.
The `nullptr` check allows the missing kernel error to propagate.
Adding the missing kernel registrations allows the shape values
to be extracted successfully.
PiperOrigin-RevId: 445045957 | 237822b59fc504dda2c564787f5d3ad9c4aa62d9 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | SetOpAttrScalar | SetOpAttrScalar( TFE_Context * ctx , TFE_Op * op , const char * key , PyObject * py_value , TF_AttrType type , tensorflow :: gtl :: FlatMap<string,int64_t> * attr_list_sizes , TF_Status * status) | ['ctx', 'op', 'key', 'py_value', 'type', 'attr_list_sizes', 'status'] | bool SetOpAttrScalar(TFE_Context* ctx, TFE_Op* op, const char* key,
PyObject* py_value, TF_AttrType type,
tensorflow::gtl::FlatMap<string, int64_t>* attr_list_sizes,
TF_Status* status) {
if (type == TF_ATTR_STRING) {
tensorflow::StringPiece value;
if (!ParseStringValue(key, py_value, status, &value)) return false;
TFE_OpSetAttrString(op, key, value.data(), value.size());
} else if (type == TF_ATTR_INT) {
int64_t value;
if (!ParseInt64Value(key, py_value, status, &value)) return false;
TFE_OpSetAttrInt(op, key, value);
// attr_list_sizes is set for all int attributes (since at this point we are
// not aware if that attribute might be used to calculate the size of an
// output list or not).
if (attr_list_sizes != nullptr) (*attr_list_sizes)[key] = value;
} else if (type == TF_ATTR_FLOAT) {
float value;
if (!ParseFloatValue(key, py_value, status, &value)) return false;
TFE_OpSetAttrFloat(op, key, value);
} else if (type == TF_ATTR_BOOL) {
unsigned char value;
if (!ParseBoolValue(key, py_value, status, &value)) return false;
TFE_OpSetAttrBool(op, key, value);
} else if (type == TF_ATTR_TYPE) {
int value;
if (!ParseTypeValue(key, py_value, status, &value)) return false;
TFE_OpSetAttrType(op, key, static_cast<TF_DataType>(value));
} else if (type == TF_ATTR_SHAPE) {
if (py_value == Py_None) {
TFE_OpSetAttrShape(op, key, nullptr, -1, status);
} else {
if (!PySequence_Check(py_value)) {
TF_SetStatus(status, TF_INVALID_ARGUMENT,
tensorflow::strings::StrCat(
"Expecting None or sequence value for attr", key,
", got ", py_value->ob_type->tp_name)
.c_str());
return false;
}
const auto num_dims = TensorShapeNumDims(py_value);
if (num_dims == -1) {
TFE_OpSetAttrShape(op, key, nullptr, -1, status);
return true;
}
std::unique_ptr<int64_t[]> dims(new int64_t[num_dims]);
for (int i = 0; i < num_dims; ++i) {
tensorflow::Safe_PyObjectPtr inner_py_value(
PySequence_ITEM(py_value, i));
if (inner_py_value.get() == Py_None) {
dims[i] = -1;
} else if (!ParseDimensionValue(key, inner_py_value.get(), status,
&dims[i])) {
return false;
}
}
TFE_OpSetAttrShape(op, key, dims.get(), num_dims, status);
}
if (!status->status.ok()) return false;
} else if (type == TF_ATTR_FUNC) {
// Allow:
// (1) String function name, OR
// (2) A Python object with a .name attribute
// (A crude test for being a
// tensorflow.python.framework.function._DefinedFunction)
// (which is what the various "defun" or "Defun" decorators do).
// And in the future also allow an object that can encapsulate
// the function name and its attribute values.
tensorflow::StringPiece func_name;
if (!ParseStringValue(key, py_value, status, &func_name)) {
PyObject* name_attr = PyObject_GetAttrString(py_value, "name");
if (name_attr == nullptr ||
!ParseStringValue(key, name_attr, status, &func_name)) {
TF_SetStatus(
status, TF_INVALID_ARGUMENT,
tensorflow::strings::StrCat(
"unable to set function value attribute from a ",
py_value->ob_type->tp_name,
" object. If you think this is an error, please file an issue "
"at https://github.com/tensorflow/tensorflow/issues/new")
.c_str());
return false;
}
}
TF_SetStatus(status, TF_OK, "");
TFE_OpSetAttrFunctionName(op, key, func_name.data(), func_name.size());
} else {
TF_SetStatus(
status, TF_UNIMPLEMENTED,
tensorflow::strings::StrCat("Attr ", key, " has unhandled type ", type)
.c_str());
return false;
}
return true;
} | 665 | True | 1 |
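In the TF_ATTR_SHAPE branch above, PySequence_ITEM can return nullptr when fetching an element fails (for example when a missing quantized StridedSlice kernel makes the shape extraction error out), and that nullptr is then handed to ParseDimensionValue, which is the segfault this record describes. Below is a sketch of the guard the commit message describes, written as the per-element branch; the only assumption beyond the code above is that a failed item lookup leaves a Python error set, so returning false lets it propagate.

// Sketch: bail out when the element lookup itself failed, so the original
// (e.g. missing-kernel) error propagates instead of dereferencing nullptr.
tensorflow::Safe_PyObjectPtr inner_py_value(PySequence_ITEM(py_value, i));
if (inner_py_value.get() == nullptr) {
  return false;  // Python error already set by the failed lookup.
} else if (inner_py_value.get() == Py_None) {
  dims[i] = -1;
} else if (!ParseDimensionValue(key, inner_py_value.get(), status, &dims[i])) {
  return false;
}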
CVE-2022-29205 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-54ch-gjq5-4976', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-54ch-gjq5-4976', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/237822b59fc504dda2c564787f5d3ad9c4aa62d9', 'name': 'https://github.com/tensorflow/tensorflow/commit/237822b59fc504dda2c564787f5d3ad9c4aa62d9', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/python/eager/pywrap_tfe_src.cc#L480-L482', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/python/eager/pywrap_tfe_src.cc#L480-L482', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/python/eager/pywrap_tfe_src.cc#L296-L320', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/python/eager/pywrap_tfe_src.cc#L296-L320', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-476'}, {'lang': 'en', 'value': 'CWE-908'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:-:*:*:*:*:*:*', 'cpe_name': []}]}] | 
[{'lang': 'en', 'value': "TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, there is a potential for segfault / denial of service in TensorFlow by calling `tf.compat.v1.*` ops which don't yet have support for quantized types, which was added after migration to TensorFlow 2.x. In these scenarios, since the kernel is missing, a `nullptr` value is passed to `ParseDimensionValue` for the `py_value` argument. Then, this is dereferenced, resulting in segfault. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue."}] | 2022-06-02T19:31Z | 2022-05-20T23:15Z | Use of Uninitialized Resource | The software uses or accesses a resource that has not been initialized. | When a resource has not been properly initialized, the software may behave unexpectedly. This may lead to a crash or invalid memory access, but the consequences vary depending on the type of resource and how it is used within the software.
| https://cwe.mitre.org/data/definitions/908.html | 0 | Antonio Sanchez | 2022-04-27 20:50:13-07:00 | Fix tf.compat.v1.placeholder_with_default vulnerability with quantized types.
When iterating through the tensor to extract shape values, an underlying missing kernel
(`StridedSlice` for quantized types) causes an error, which then results in a `nullptr`
being passed to `ParseDimensionValue()`, causing a segfault.
The `nullptr` check allows the missing kernel error to propagate.
Adding the missing kernel registrations allows the shape values
to be extracted successfully.
PiperOrigin-RevId: 445045957 | 237822b59fc504dda2c564787f5d3ad9c4aa62d9 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | SetOpAttrScalar | SetOpAttrScalar( TFE_Context * ctx , TFE_Op * op , const char * key , PyObject * py_value , TF_AttrType type , tensorflow :: gtl :: FlatMap<string,int64_t> * attr_list_sizes , TF_Status * status) | ['ctx', 'op', 'key', 'py_value', 'type', 'attr_list_sizes', 'status'] | bool SetOpAttrScalar(TFE_Context* ctx, TFE_Op* op, const char* key,
PyObject* py_value, TF_AttrType type,
tensorflow::gtl::FlatMap<string, int64_t>* attr_list_sizes,
TF_Status* status) {
if (type == TF_ATTR_STRING) {
tensorflow::StringPiece value;
if (!ParseStringValue(key, py_value, status, &value)) return false;
TFE_OpSetAttrString(op, key, value.data(), value.size());
} else if (type == TF_ATTR_INT) {
int64_t value;
if (!ParseInt64Value(key, py_value, status, &value)) return false;
TFE_OpSetAttrInt(op, key, value);
// attr_list_sizes is set for all int attributes (since at this point we are
// not aware if that attribute might be used to calculate the size of an
// output list or not).
if (attr_list_sizes != nullptr) (*attr_list_sizes)[key] = value;
} else if (type == TF_ATTR_FLOAT) {
float value;
if (!ParseFloatValue(key, py_value, status, &value)) return false;
TFE_OpSetAttrFloat(op, key, value);
} else if (type == TF_ATTR_BOOL) {
unsigned char value;
if (!ParseBoolValue(key, py_value, status, &value)) return false;
TFE_OpSetAttrBool(op, key, value);
} else if (type == TF_ATTR_TYPE) {
int value;
if (!ParseTypeValue(key, py_value, status, &value)) return false;
TFE_OpSetAttrType(op, key, static_cast<TF_DataType>(value));
} else if (type == TF_ATTR_SHAPE) {
if (py_value == Py_None) {
TFE_OpSetAttrShape(op, key, nullptr, -1, status);
} else {
if (!PySequence_Check(py_value)) {
TF_SetStatus(status, TF_INVALID_ARGUMENT,
tensorflow::strings::StrCat(
"Expecting None or sequence value for attr", key,
", got ", py_value->ob_type->tp_name)
.c_str());
return false;
}
const auto num_dims = TensorShapeNumDims(py_value);
if (num_dims == -1) {
TFE_OpSetAttrShape(op, key, nullptr, -1, status);
return true;
}
std::unique_ptr<int64_t[]> dims(new int64_t[num_dims]);
for (int i = 0; i < num_dims; ++i) {
tensorflow::Safe_PyObjectPtr inner_py_value(
PySequence_ITEM(py_value, i));
if (inner_py_value.get() == Py_None) {
dims[i] = -1;
} else if (!ParseDimensionValue(key, inner_py_value.get(), status,
&dims[i])) {
return false;
}
}
TFE_OpSetAttrShape(op, key, dims.get(), num_dims, status);
}
if (!status->status.ok()) return false;
} else if (type == TF_ATTR_FUNC) {
// Allow:
// (1) String function name, OR
// (2) A Python object with a .name attribute
// (A crude test for being a
// tensorflow.python.framework.function._DefinedFunction)
// (which is what the various "defun" or "Defun" decorators do).
// And in the future also allow an object that can encapsulate
// the function name and its attribute values.
tensorflow::StringPiece func_name;
if (!ParseStringValue(key, py_value, status, &func_name)) {
PyObject* name_attr = PyObject_GetAttrString(py_value, "name");
if (name_attr == nullptr ||
!ParseStringValue(key, name_attr, status, &func_name)) {
TF_SetStatus(
status, TF_INVALID_ARGUMENT,
tensorflow::strings::StrCat(
"unable to set function value attribute from a ",
py_value->ob_type->tp_name,
" object. If you think this is an error, please file an issue "
"at https://github.com/tensorflow/tensorflow/issues/new")
.c_str());
return false;
}
}
TF_SetStatus(status, TF_OK, "");
TFE_OpSetAttrFunctionName(op, key, func_name.data(), func_name.size());
} else {
TF_SetStatus(
status, TF_UNIMPLEMENTED,
tensorflow::strings::StrCat("Attr ", key, " has unhandled type ", type)
.c_str());
return false;
}
return true;
} | 665 | True | 1 |
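Note: the record above describes a `nullptr` reaching `ParseDimensionValue()` when shape extraction hits a missing quantized `StridedSlice` kernel. The sketch below illustrates the kind of guard the commit message implies, reusing names from the `SetOpAttrScalar` function shown above (`py_value`, `dims`, `num_dims`, `key`, `status`); it is an illustrative reconstruction, not the verbatim patch from commit 237822b59fc504dda2c564787f5d3ad9c4aa62d9.

```cpp
// Illustrative sketch (not the verbatim upstream patch): guard against a
// null element before calling ParseDimensionValue(), so the already-set
// missing-kernel error propagates instead of causing a segfault.
for (int i = 0; i < num_dims; ++i) {
  tensorflow::Safe_PyObjectPtr inner_py_value(PySequence_ITEM(py_value, i));
  if (inner_py_value.get() == nullptr) {
    // PySequence_ITEM returns nullptr with a Python error already set,
    // e.g. when iterating a quantized tensor whose StridedSlice kernel
    // is not registered; fail cleanly rather than dereferencing it.
    return false;
  }
  if (inner_py_value.get() == Py_None) {
    dims[i] = -1;
  } else if (!ParseDimensionValue(key, inner_py_value.get(), status,
                                  &dims[i])) {
    return false;
  }
}
```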
CVE-2022-29192 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantize_and_dequantize_op.cc#L148-L226', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantize_and_dequantize_op.cc#L148-L226', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/098e7762d909bac47ce1dbabe6dfd06294cb9d58', 'name': 'https://github.com/tensorflow/tensorflow/commit/098e7762d909bac47ce1dbabe6dfd06294cb9d58', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-h2wq-prv9-2f56', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-h2wq-prv9-2f56', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-20'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.8.0', 'versionEndExcluding': '2.8.1', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc2:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.QuantizeAndDequantizeV4Grad` does not fully validate the input arguments. This results in a `CHECK`-failure which can be used to trigger a denial of service attack. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.'}] | 2022-06-02T12:58Z | 2022-05-20T21:15Z | Improper Input Validation | The product receives input or data, but it does
not validate or incorrectly validates that the input has the
properties that are required to process the data safely and
correctly. |
Input validation is a frequently-used technique
for checking potentially dangerous inputs in order to
ensure that the inputs are safe for processing within the
code, or when communicating with other components. When
software does not validate input properly, an attacker is
able to craft the input in a form that is not expected by
the rest of the application. This will lead to parts of the
system receiving unintended input, which may result in
altered control flow, arbitrary control of a resource, or
arbitrary code execution.
Input validation is not the only technique for
processing input, however. Other techniques attempt to
transform potentially-dangerous input into something safe, such
as filtering (CWE-790) - which attempts to remove dangerous
inputs - or encoding/escaping (CWE-116), which attempts to
ensure that the input is not misinterpreted when it is included
in output to another component. Other techniques exist as well
(see CWE-138 for more examples.)
Input validation can be applied to:
raw data - strings, numbers, parameters, file contents, etc.
metadata - information about the raw data, such as headers or size
Data can be simple or structured. Structured data
can be composed of many nested layers, composed of
combinations of metadata and raw data, with other simple or
structured data.
Many properties of raw data or metadata may need
to be validated upon entry into the code, such
as:
specified quantities such as size, length, frequency, price, rate, number of operations, time, etc.
implied or derived quantities, such as the actual size of a file instead of a specified size
indexes, offsets, or positions into more complex data structures
symbolic keys or other elements into hash tables, associative arrays, etc.
well-formedness, i.e. syntactic correctness - compliance with expected syntax
lexical token correctness - compliance with rules for what is treated as a token
specified or derived type - the actual type of the input (or what the input appears to be)
consistency - between individual data elements, between raw data and metadata, between references, etc.
conformance to domain-specific rules, e.g. business logic
equivalence - ensuring that equivalent inputs are treated the same
authenticity, ownership, or other attestations about the input, e.g. a cryptographic signature to prove the source of the data
Implied or derived properties of data must often
be calculated or inferred by the code itself. Errors in
deriving properties may be considered a contributing factor
to improper input validation.
Note that "input validation" has very different
meanings to different people, or within different
classification schemes. Caution must be used when
referencing this CWE entry or mapping to it. For example,
some weaknesses might involve inadvertently giving control
to an attacker over an input when they should not be able
to provide an input at all, but sometimes this is referred
to as input validation.
Finally, it is important to emphasize that the
distinctions between input validation and output escaping
are often blurred, and developers must be careful to
understand the difference, including how input validation
is not always sufficient to prevent vulnerabilities,
especially when less stringent data types must be
supported, such as free-form text. Consider a SQL injection
scenario in which a person's last name is inserted into a
query. The name "O'Reilly" would likely pass the validation
step since it is a common last name in the English
language. However, this valid name cannot be directly
inserted into the database because it contains the "'"
apostrophe character, which would need to be escaped or
otherwise transformed. In this case, removing the
apostrophe might reduce the risk of SQL injection, but it
would produce incorrect behavior because the wrong name
would be recorded.
| https://cwe.mitre.org/data/definitions/20.html | 0 | Alan Liu | 2022-04-28 11:06:02-07:00 | Fix tf.raw_ops.QuantizeAndDequantizeV4Grad vulnerability with invalid input_min or input_max.
Check that argument is actually a scalar before treating it as such.
PiperOrigin-RevId: 445198280 | 098e7762d909bac47ce1dbabe6dfd06294cb9d58 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::QuantizeAndDequantizeV4GradientOp::Compute | tensorflow::QuantizeAndDequantizeV4GradientOp::Compute( OpKernelContext * ctx) | ['ctx'] | void Compute(OpKernelContext* ctx) override {
const Tensor& gradient = ctx->input(0);
const Tensor& input = ctx->input(1);
Tensor* input_backprop = nullptr;
OP_REQUIRES_OK(ctx,
ctx->allocate_output(0, input.shape(), &input_backprop));
OP_REQUIRES(
ctx, axis_ >= -1,
errors::InvalidArgument("Axis must be at least -1. Found ", axis_));
OP_REQUIRES(ctx, (axis_ == -1 || axis_ < input.shape().dims()),
errors::InvalidArgument(
"Axis should be -1 or 0 or a positive value less than ",
input.shape().dims(), "but given axis value was ", axis_));
OP_REQUIRES(
ctx, input.IsSameSize(gradient),
errors::InvalidArgument("gradient and input must be the same size"));
const int depth = (axis_ == -1) ? 1 : input.dim_size(axis_);
const Tensor& input_min_tensor = ctx->input(2);
OP_REQUIRES(ctx,
input_min_tensor.dims() == 0 || input_min_tensor.dims() == 1,
errors::InvalidArgument(
"Input min tensor must have dimension 1. Recieved ",
input_min_tensor.dims(), "."));
const Tensor& input_max_tensor = ctx->input(3);
OP_REQUIRES(ctx,
input_max_tensor.dims() == 0 || input_max_tensor.dims() == 1,
errors::InvalidArgument(
"Input max tensor must have dimension 1. Recieved ",
input_max_tensor.dims(), "."));
if (axis_ != -1) {
OP_REQUIRES(
ctx, input_min_tensor.dim_size(0) == depth,
errors::InvalidArgument("min has incorrect size, expected ", depth,
" was ", input_min_tensor.dim_size(0)));
OP_REQUIRES(
ctx, input_max_tensor.dim_size(0) == depth,
errors::InvalidArgument("max has incorrect size, expected ", depth,
" was ", input_max_tensor.dim_size(0)));
}
TensorShape min_max_shape(input_min_tensor.shape());
Tensor* input_min_backprop;
OP_REQUIRES_OK(ctx,
ctx->allocate_output(1, min_max_shape, &input_min_backprop));
Tensor* input_max_backprop;
OP_REQUIRES_OK(ctx,
ctx->allocate_output(2, min_max_shape, &input_max_backprop));
if (axis_ == -1) {
functor::QuantizeAndDequantizeOneScaleGradientFunctor<Device, T> f;
f(ctx->eigen_device<Device>(), gradient.template flat<T>(),
input.template flat<T>(), input_min_tensor.scalar<T>(),
input_max_tensor.scalar<T>(), input_backprop->template flat<T>(),
input_min_backprop->template scalar<T>(),
input_max_backprop->template scalar<T>());
} else {
functor::QuantizeAndDequantizePerChannelGradientFunctor<Device, T> f;
f(ctx->eigen_device<Device>(),
gradient.template flat_inner_outer_dims<T, 3>(axis_ - 1),
input.template flat_inner_outer_dims<T, 3>(axis_ - 1),
&input_min_tensor, &input_max_tensor,
input_backprop->template flat_inner_outer_dims<T, 3>(axis_ - 1),
input_min_backprop->template flat<T>(),
input_max_backprop->template flat<T>());
}
} | 579 | True | 1 |
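Note: the fix described above ("Check that argument is actually a scalar before treating it as such") addresses the fact that, in the `axis_ == -1` branch of the `Compute` method shown above, `input_min_tensor.scalar<T>()` and `input_max_tensor.scalar<T>()` are called even though the earlier checks also accept rank-1 tensors. The sketch below shows one way such a validation can be expressed with the names used in that function; it is an illustration under those assumptions, not the exact code from commit 098e7762d909bac47ce1dbabe6dfd06294cb9d58.

```cpp
// Illustrative sketch (not the verbatim upstream patch): when axis_ == -1
// the gradient functor reads input_min/input_max as scalars, so reject
// non-scalar tensors up front instead of CHECK-failing inside
// Tensor::scalar<T>().
if (axis_ == -1) {
  OP_REQUIRES(ctx, TensorShapeUtils::IsScalar(input_min_tensor.shape()),
              errors::InvalidArgument(
                  "input_min must be a scalar if axis is unspecified, got ",
                  input_min_tensor.shape().DebugString()));
  OP_REQUIRES(ctx, TensorShapeUtils::IsScalar(input_max_tensor.shape()),
              errors::InvalidArgument(
                  "input_max must be a scalar if axis is unspecified, got ",
                  input_max_tensor.shape().DebugString()));
}
```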
CVE-2022-29208 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:P/A:P | LOCAL | LOW | NONE | NONE | PARTIAL | PARTIAL | 3.6 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:H/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | HIGH | HIGH | 7.1 | HIGH | 1.8 | 5.2 | False | [{'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/30721cf564cb029d34535446d6a5a6357bebc8e7', 'name': 'https://github.com/tensorflow/tensorflow/commit/30721cf564cb029d34535446d6a5a6357bebc8e7', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-2r2f-g8mw-9gvr', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-2r2f-g8mw-9gvr', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:-:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.EditDistance` has incomplete validation. Users can pass negative values to cause a segmentation fault based denial of service. In multiple places throughout the code, one may compute an index for a write operation. However, the existing validation only checks against the upper bound of the array. Hence, it is possible to write before the array by massaging the input to generate negative values for `loc`. 
Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.'}] | 2022-06-03T02:02Z | 2022-05-20T23:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Alan Liu | 2022-04-29 09:47:37-07:00 | Fix tf.raw_ops.EditDistance vulnerability with negative indices.
Check that indices are non-negative. Fix several identical code sites.
Clean up grammar in error message.
PiperOrigin-RevId: 445442017 | 30721cf564cb029d34535446d6a5a6357bebc8e7 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::EditDistanceOp::Compute | tensorflow::EditDistanceOp::Compute( OpKernelContext * ctx) | ['ctx'] | void Compute(OpKernelContext* ctx) override {
const Tensor* hypothesis_indices;
const Tensor* hypothesis_values;
const Tensor* hypothesis_shape;
const Tensor* truth_indices;
const Tensor* truth_values;
const Tensor* truth_shape;
OP_REQUIRES_OK(ctx, ctx->input("hypothesis_indices", &hypothesis_indices));
OP_REQUIRES_OK(ctx, ctx->input("hypothesis_values", &hypothesis_values));
OP_REQUIRES_OK(ctx, ctx->input("hypothesis_shape", &hypothesis_shape));
OP_REQUIRES_OK(ctx, ctx->input("truth_indices", &truth_indices));
OP_REQUIRES_OK(ctx, ctx->input("truth_values", &truth_values));
OP_REQUIRES_OK(ctx, ctx->input("truth_shape", &truth_shape));
OP_REQUIRES_OK(
ctx, ValidateShapes(ctx, *hypothesis_indices, *hypothesis_values,
*hypothesis_shape, *truth_indices, *truth_values,
*truth_shape));
TensorShape hypothesis_st_shape;
OP_REQUIRES_OK(ctx,
TensorShapeUtils::MakeShape(
hypothesis_shape->vec<int64_t>().data(),
hypothesis_shape->NumElements(), &hypothesis_st_shape));
TensorShape truth_st_shape;
OP_REQUIRES_OK(ctx, TensorShapeUtils::MakeShape(
truth_shape->vec<int64_t>().data(),
truth_shape->NumElements(), &truth_st_shape));
// Assume indices are sorted in row-major order.
std::vector<int64_t> sorted_order(truth_st_shape.dims());
std::iota(sorted_order.begin(), sorted_order.end(), 0);
sparse::SparseTensor hypothesis;
OP_REQUIRES_OK(ctx, sparse::SparseTensor::Create(
*hypothesis_indices, *hypothesis_values,
hypothesis_st_shape, sorted_order, &hypothesis));
sparse::SparseTensor truth;
OP_REQUIRES_OK(ctx, sparse::SparseTensor::Create(
*truth_indices, *truth_values, truth_st_shape,
sorted_order, &truth));
// Group dims 0, 1, ..., RANK - 1. The very last dim is assumed
// to store the variable length sequences.
std::vector<int64_t> group_dims(truth_st_shape.dims() - 1);
std::iota(group_dims.begin(), group_dims.end(), 0);
TensorShape output_shape;
for (int d = 0; d < static_cast<int>(group_dims.size()); ++d) {
output_shape.AddDim(std::max(hypothesis_st_shape.dim_size(d),
truth_st_shape.dim_size(d)));
}
const auto output_elements = output_shape.num_elements();
OP_REQUIRES(
ctx, output_elements > 0,
errors::InvalidArgument("Got output shape ", output_shape.DebugString(),
" which has 0 elements"));
Tensor* output = nullptr;
OP_REQUIRES_OK(ctx, ctx->allocate_output("output", output_shape, &output));
auto output_t = output->flat<float>();
output_t.setZero();
std::vector<int64_t> output_strides(output_shape.dims());
output_strides[output_shape.dims() - 1] = 1;
for (int d = output_shape.dims() - 2; d >= 0; --d) {
output_strides[d] = output_strides[d + 1] * output_shape.dim_size(d + 1);
}
auto hypothesis_grouper = hypothesis.group(group_dims);
auto truth_grouper = truth.group(group_dims);
auto hypothesis_iter = hypothesis_grouper.begin();
auto truth_iter = truth_grouper.begin();
auto cmp = std::equal_to<T>();
while (hypothesis_iter != hypothesis_grouper.end() &&
truth_iter != truth_grouper.end()) {
sparse::Group truth_i = *truth_iter;
sparse::Group hypothesis_j = *hypothesis_iter;
std::vector<int64_t> g_truth = truth_i.group();
std::vector<int64_t> g_hypothesis = hypothesis_j.group();
auto truth_seq = truth_i.values<T>();
auto hypothesis_seq = hypothesis_j.values<T>();
if (g_truth == g_hypothesis) {
auto loc = std::inner_product(g_truth.begin(), g_truth.end(),
output_strides.begin(), int64_t{0});
OP_REQUIRES(
ctx, loc < output_elements,
errors::Internal("Got an inner product ", loc,
" which would require in writing to outside of "
"the buffer for the output tensor (max elements ",
output_elements, ")"));
output_t(loc) =
gtl::LevenshteinDistance<T>(truth_seq, hypothesis_seq, cmp);
if (normalize_) output_t(loc) /= truth_seq.size();
++hypothesis_iter;
++truth_iter;
} else if (g_truth > g_hypothesis) { // zero-length truth
auto loc = std::inner_product(g_hypothesis.begin(), g_hypothesis.end(),
output_strides.begin(), int64_t{0});
OP_REQUIRES(
ctx, loc < output_elements,
errors::Internal("Got an inner product ", loc,
" which would require in writing to outside of "
"the buffer for the output tensor (max elements ",
output_elements, ")"));
output_t(loc) = hypothesis_seq.size();
if (normalize_ && output_t(loc) != 0.0f) {
output_t(loc) = std::numeric_limits<float>::infinity();
}
++hypothesis_iter;
} else { // zero-length hypothesis
auto loc = std::inner_product(g_truth.begin(), g_truth.end(),
output_strides.begin(), int64_t{0});
OP_REQUIRES(
ctx, loc < output_elements,
errors::Internal("Got an inner product ", loc,
" which would require in writing to outside of "
"the buffer for the output tensor (max elements ",
output_elements, ")"));
output_t(loc) = (normalize_) ? 1.0 : truth_seq.size();
++truth_iter;
}
}
while (hypothesis_iter != hypothesis_grouper.end()) { // zero-length truths
sparse::Group hypothesis_j = *hypothesis_iter;
std::vector<int64_t> g_hypothesis = hypothesis_j.group();
auto hypothesis_seq = hypothesis_j.values<T>();
auto loc = std::inner_product(g_hypothesis.begin(), g_hypothesis.end(),
output_strides.begin(), int64_t{0});
OP_REQUIRES(
ctx, loc < output_elements,
errors::Internal("Got an inner product ", loc,
" which would require in writing to outside of the "
"buffer for the output tensor (max elements ",
output_elements, ")"));
output_t(loc) = hypothesis_seq.size();
if (normalize_ && output_t(loc) != 0.0f) {
output_t(loc) = std::numeric_limits<float>::infinity();
}
++hypothesis_iter;
}
while (truth_iter != truth_grouper.end()) { // missing hypotheses
sparse::Group truth_i = *truth_iter;
std::vector<int64_t> g_truth = truth_i.group();
auto truth_seq = truth_i.values<T>();
auto loc = std::inner_product(g_truth.begin(), g_truth.end(),
output_strides.begin(), int64_t{0});
OP_REQUIRES(
ctx, loc < output_elements,
errors::Internal("Got an inner product ", loc,
" which would require in writing to outside of the "
"buffer for the output tensor (max elements ",
output_elements, ")"));
output_t(loc) = (normalize_) ? 1.0 : truth_seq.size();
++truth_iter;
}
} | 1261 | True | 1 |
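Note: as the record above explains, each write site in the `Compute` method only checks `loc < output_elements`, so negative sparse indices can drive `loc` below zero and write before the output buffer. The sketch below shows a lower bound applied at one of the "several identical code sites" mentioned in the commit message, using names from the function above; it is illustrative, not the verbatim patch from commit 30721cf564cb029d34535446d6a5a6357bebc8e7.

```cpp
// Illustrative sketch (not the verbatim upstream patch): bound loc on both
// sides before writing. The same guard would be repeated at each site that
// computes loc via std::inner_product in the EditDistance kernel above.
auto loc = std::inner_product(g_truth.begin(), g_truth.end(),
                              output_strides.begin(), int64_t{0});
OP_REQUIRES(ctx, loc >= 0 && loc < output_elements,
            errors::Internal("Got an inner product ", loc,
                             " which is outside the bounds of the output "
                             "tensor buffer (max elements ", output_elements,
                             ")"));
output_t(loc) =
    gtl::LevenshteinDistance<T>(truth_seq, hypothesis_seq, cmp);
```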
CVE-2022-29201 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'name': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-20'}, {'lang': 'en', 'value': 'CWE-476'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:-:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.QuantizedConv2D` does not fully validate the input arguments. In this case, references get bound to `nullptr` for each argument that is empty. 
Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.'}] | 2022-06-02T18:16Z | 2022-05-20T23:15Z | Improper Input Validation | The product receives input or data, but it does
not validate or incorrectly validates that the input has the
properties that are required to process the data safely and
correctly. |
Input validation is a frequently-used technique
for checking potentially dangerous inputs in order to
ensure that the inputs are safe for processing within the
code, or when communicating with other components. When
software does not validate input properly, an attacker is
able to craft the input in a form that is not expected by
the rest of the application. This will lead to parts of the
system receiving unintended input, which may result in
altered control flow, arbitrary control of a resource, or
arbitrary code execution.
Input validation is not the only technique for
processing input, however. Other techniques attempt to
transform potentially-dangerous input into something safe, such
as filtering (CWE-790) - which attempts to remove dangerous
inputs - or encoding/escaping (CWE-116), which attempts to
ensure that the input is not misinterpreted when it is included
in output to another component. Other techniques exist as well
(see CWE-138 for more examples.)
Input validation can be applied to:
raw data - strings, numbers, parameters, file contents, etc.
metadata - information about the raw data, such as headers or size
Data can be simple or structured. Structured data
can be composed of many nested layers, composed of
combinations of metadata and raw data, with other simple or
structured data.
Many properties of raw data or metadata may need
to be validated upon entry into the code, such
as:
specified quantities such as size, length, frequency, price, rate, number of operations, time, etc.
implied or derived quantities, such as the actual size of a file instead of a specified size
indexes, offsets, or positions into more complex data structures
symbolic keys or other elements into hash tables, associative arrays, etc.
well-formedness, i.e. syntactic correctness - compliance with expected syntax
lexical token correctness - compliance with rules for what is treated as a token
specified or derived type - the actual type of the input (or what the input appears to be)
consistency - between individual data elements, between raw data and metadata, between references, etc.
conformance to domain-specific rules, e.g. business logic
equivalence - ensuring that equivalent inputs are treated the same
authenticity, ownership, or other attestations about the input, e.g. a cryptographic signature to prove the source of the data
Implied or derived properties of data must often
be calculated or inferred by the code itself. Errors in
deriving properties may be considered a contributing factor
to improper input validation.
Note that "input validation" has very different
meanings to different people, or within different
classification schemes. Caution must be used when
referencing this CWE entry or mapping to it. For example,
some weaknesses might involve inadvertently giving control
to an attacker over an input when they should not be able
to provide an input at all, but sometimes this is referred
to as input validation.
Finally, it is important to emphasize that the
distinctions between input validation and output escaping
are often blurred, and developers must be careful to
understand the difference, including how input validation
is not always sufficient to prevent vulnerabilities,
especially when less stringent data types must be
supported, such as free-form text. Consider a SQL injection
scenario in which a person's last name is inserted into a
query. The name "O'Reilly" would likely pass the validation
step since it is a common last name in the English
language. However, this valid name cannot be directly
inserted into the database because it contains the "'"
apostrophe character, which would need to be escaped or
otherwise transformed. In this case, removing the
apostrophe might reduce the risk of SQL injection, but it
would produce incorrect behavior because the wrong name
would be recorded.
| https://cwe.mitre.org/data/definitions/20.html | 0 | Antonio Sanchez | 2022-04-29 15:22:06-07:00 | Fix undefined behavior in QuantizedConv2D
Added more input validation and tests. Prior to this, we could get
`nullptr` exceptions when attempting to access 0th elements of 0-sized
inputs, leading to security vulnerability bugs.
Also needed to modify `quantized_conv_ops_test.cc` for consistency.
Previously the CPU kernel did technically support passing tensors
of rank larger than 0 for min/max values. However, the XLA kernels do not.
PiperOrigin-RevId: 445518507 | 0f0b080ecde4d3dfec158d6f60da34d5e31693c4 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::QuantizedConv2DOp::Compute | tensorflow::QuantizedConv2DOp::Compute( OpKernelContext * context) | ['context'] | void Compute(OpKernelContext* context) override {
// Input tensor is of the following dimensions:
// [ batch, in_rows, in_cols, in_depth ]
const Tensor& input = context->input(0);
// Input filter is of the following dimensions:
// [ filter_rows, filter_cols, in_depth, out_depth]
const Tensor& filter = context->input(1);
// For 2D convolution, there should be 4 dimensions.
OP_REQUIRES(context, input.dims() == 4,
errors::InvalidArgument("input must be 4-dimensional",
input.shape().DebugString()));
OP_REQUIRES(context, filter.dims() == 4,
errors::InvalidArgument("filter must be 4-dimensional: ",
filter.shape().DebugString()));
const float min_input = context->input(2).flat<float>()(0);
const float max_input = context->input(3).flat<float>()(0);
const float min_filter = context->input(4).flat<float>()(0);
const float max_filter = context->input(5).flat<float>()(0);
const int32_t offset_input =
FloatToQuantizedUnclamped<T1>(0.0f, min_input, max_input);
const int32_t offset_filter =
FloatToQuantizedUnclamped<T2>(0.0f, min_filter, max_filter);
const int32_t offset_output = 0;
const int32_t mult_output = 1;
const int32_t shift_output = 0;
// The last dimension for input is in_depth. It must be the same as the
// filter's in_depth.
const int64_t in_depth = input.dim_size(3);
OP_REQUIRES(context, in_depth == filter.dim_size(2),
errors::InvalidArgument(
"input and filter must have the same depth: ", in_depth,
" vs ", filter.dim_size(2)));
// The last dimension for filter is out_depth.
const int64_t out_depth = filter.dim_size(3);
// The second dimension for input is rows/height.
// The first dimension for filter is rows/height.
const int64_t input_rows = input.dim_size(1);
const int64_t filter_rows = filter.dim_size(0);
// The third dimension for input is columns/width.
// The second dimension for filter is columns/width.
const int64_t input_cols = input.dim_size(2);
const int64_t filter_cols = filter.dim_size(1);
// The first dimension for input is batch.
const int64_t batch = input.dim_size(0);
// For now we take the stride from the second dimension only (we
// assume row = col stride, and do not support striding on the
// batch or depth dimension).
const int stride = strides_[1];
int64_t out_rows = 0, out_cols = 0, pad_rows = 0, pad_cols = 0;
OP_REQUIRES_OK(context,
GetWindowedOutputSize(input_rows, filter_rows, stride,
padding_, &out_rows, &pad_rows));
OP_REQUIRES_OK(context,
GetWindowedOutputSize(input_cols, filter_cols, stride,
padding_, &out_cols, &pad_cols));
CHECK_GT(batch, 0);
CHECK_GT(out_rows, 0);
CHECK_GT(out_cols, 0);
CHECK_GT(out_depth, 0);
TensorShape out_shape({batch, out_rows, out_cols, out_depth});
// Output tensor is of the following dimensions:
// [ in_batch, out_rows, out_cols, out_depth ]
Tensor* output = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(0, out_shape, &output));
// This will call different implementations (e.g. reference or optimized)
// depending on the template parameter.
ConvFunctor<T1, T2, T3> conv_functor;
conv_functor(context, input.flat<T1>().data(), batch, input_rows,
input_cols, in_depth, offset_input, filter.flat<T2>().data(),
filter_rows, filter_cols, out_depth, offset_filter, stride,
padding_, output->flat<T3>().data(), out_rows, out_cols,
shift_output, offset_output, mult_output);
float min_output_value;
float max_output_value;
QuantizationRangeForMultiplication<T1, T2, T3>(
min_input, max_input, min_filter, max_filter, &min_output_value,
&max_output_value);
Tensor* output_min = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(1, {}, &output_min));
output_min->flat<float>()(0) = min_output_value;
Tensor* output_max = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(2, {}, &output_max));
output_max->flat<float>()(0) = max_output_value;
} | 667 | True | 1 |
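Note: the `Compute` method above reads `context->input(2)` through `context->input(5)` via `flat<float>()(0)` before verifying that those tensors contain any elements, so empty min/max inputs bind references to `nullptr`. The sketch below shows the kind of scalar validation the commit message points at (the fix restricts min/max values to rank-0 tensors); the local names `min_input_t` etc. are introduced here for illustration, and this is not the exact code from commit 0f0b080ecde4d3dfec158d6f60da34d5e31693c4.

```cpp
// Illustrative sketch (not the verbatim upstream patch): require scalar
// min/max tensors before reading element 0, so empty or higher-rank inputs
// produce InvalidArgument instead of undefined behavior.
const Tensor& min_input_t = context->input(2);
const Tensor& max_input_t = context->input(3);
const Tensor& min_filter_t = context->input(4);
const Tensor& max_filter_t = context->input(5);
for (const Tensor* t :
     {&min_input_t, &max_input_t, &min_filter_t, &max_filter_t}) {
  OP_REQUIRES(context, TensorShapeUtils::IsScalar(t->shape()),
              errors::InvalidArgument(
                  "min/max inputs must be scalars, got shape ",
                  t->shape().DebugString()));
}
const float min_input = min_input_t.scalar<float>()();
const float max_input = max_input_t.scalar<float>()();
const float min_filter = min_filter_t.scalar<float>()();
const float max_filter = max_filter_t.scalar<float>()();
```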
CVE-2022-29201 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'name': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-20'}, {'lang': 'en', 'value': 'CWE-476'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:-:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.QuantizedConv2D` does not fully validate the input arguments. In this case, references get bound to `nullptr` for each argument that is empty. 
Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.'}] | 2022-06-02T18:16Z | 2022-05-20T23:15Z | NULL Pointer Dereference | A NULL pointer dereference occurs when the application dereferences a pointer that it expects to be valid, but is NULL, typically causing a crash or exit. | NULL pointer dereference issues can occur through a number of flaws, including race conditions, and simple programming omissions.
| https://cwe.mitre.org/data/definitions/476.html | 0 | Antonio Sanchez | 2022-04-29 15:22:06-07:00 | Fix undefined behavior in QuantizedConv2D
Added more input validation and tests. Prior to this, we could get
`nullptr` exceptions when attempting to access 0th elements of 0-sized
inputs, leading to security vulnerability bugs.
Also needed to modify `quantized_conv_ops_test.cc` for consistency.
Previously the CPU kernel did technically support passing tensors
of rank larger than 0 for min/max values. However, the XLA kernels do not.
PiperOrigin-RevId: 445518507 | 0f0b080ecde4d3dfec158d6f60da34d5e31693c4 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::QuantizedConv2DOp::Compute | tensorflow::QuantizedConv2DOp::Compute( OpKernelContext * context) | ['context'] | void Compute(OpKernelContext* context) override {
// Input tensor is of the following dimensions:
// [ batch, in_rows, in_cols, in_depth ]
const Tensor& input = context->input(0);
// Input filter is of the following dimensions:
// [ filter_rows, filter_cols, in_depth, out_depth]
const Tensor& filter = context->input(1);
// For 2D convolution, there should be 4 dimensions.
OP_REQUIRES(context, input.dims() == 4,
errors::InvalidArgument("input must be 4-dimensional",
input.shape().DebugString()));
OP_REQUIRES(context, filter.dims() == 4,
errors::InvalidArgument("filter must be 4-dimensional: ",
filter.shape().DebugString()));
const float min_input = context->input(2).flat<float>()(0);
const float max_input = context->input(3).flat<float>()(0);
const float min_filter = context->input(4).flat<float>()(0);
const float max_filter = context->input(5).flat<float>()(0);
const int32_t offset_input =
FloatToQuantizedUnclamped<T1>(0.0f, min_input, max_input);
const int32_t offset_filter =
FloatToQuantizedUnclamped<T2>(0.0f, min_filter, max_filter);
const int32_t offset_output = 0;
const int32_t mult_output = 1;
const int32_t shift_output = 0;
// The last dimension for input is in_depth. It must be the same as the
// filter's in_depth.
const int64_t in_depth = input.dim_size(3);
OP_REQUIRES(context, in_depth == filter.dim_size(2),
errors::InvalidArgument(
"input and filter must have the same depth: ", in_depth,
" vs ", filter.dim_size(2)));
// The last dimension for filter is out_depth.
const int64_t out_depth = filter.dim_size(3);
// The second dimension for input is rows/height.
// The first dimension for filter is rows/height.
const int64_t input_rows = input.dim_size(1);
const int64_t filter_rows = filter.dim_size(0);
// The third dimension for input is columns/width.
// The second dimension for filter is columns/width.
const int64_t input_cols = input.dim_size(2);
const int64_t filter_cols = filter.dim_size(1);
// The first dimension for input is batch.
const int64_t batch = input.dim_size(0);
// For now we take the stride from the second dimension only (we
// assume row = col stride, and do not support striding on the
// batch or depth dimension).
const int stride = strides_[1];
int64_t out_rows = 0, out_cols = 0, pad_rows = 0, pad_cols = 0;
OP_REQUIRES_OK(context,
GetWindowedOutputSize(input_rows, filter_rows, stride,
padding_, &out_rows, &pad_rows));
OP_REQUIRES_OK(context,
GetWindowedOutputSize(input_cols, filter_cols, stride,
padding_, &out_cols, &pad_cols));
CHECK_GT(batch, 0);
CHECK_GT(out_rows, 0);
CHECK_GT(out_cols, 0);
CHECK_GT(out_depth, 0);
TensorShape out_shape({batch, out_rows, out_cols, out_depth});
// Output tensor is of the following dimensions:
// [ in_batch, out_rows, out_cols, out_depth ]
Tensor* output = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(0, out_shape, &output));
// This will call different implementations (e.g. reference or optimized)
// depending on the template parameter.
ConvFunctor<T1, T2, T3> conv_functor;
conv_functor(context, input.flat<T1>().data(), batch, input_rows,
input_cols, in_depth, offset_input, filter.flat<T2>().data(),
filter_rows, filter_cols, out_depth, offset_filter, stride,
padding_, output->flat<T3>().data(), out_rows, out_cols,
shift_output, offset_output, mult_output);
float min_output_value;
float max_output_value;
QuantizationRangeForMultiplication<T1, T2, T3>(
min_input, max_input, min_filter, max_filter, &min_output_value,
&max_output_value);
Tensor* output_min = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(1, {}, &output_min));
output_min->flat<float>()(0) = min_output_value;
Tensor* output_max = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(2, {}, &output_max));
output_max->flat<float>()(0) = max_output_value;
} | 667 | True | 1 |
CVE-2022-29201 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'name': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-20'}, {'lang': 'en', 'value': 'CWE-476'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:-:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.QuantizedConv2D` does not fully validate the input arguments. In this case, references get bound to `nullptr` for each argument that is empty. 
Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.'}] | 2022-06-02T18:16Z | 2022-05-20T23:15Z | Improper Input Validation | The product receives input or data, but it does
not validate or incorrectly validates that the input has the
properties that are required to process the data safely and
correctly. |
Input validation is a frequently-used technique
for checking potentially dangerous inputs in order to
ensure that the inputs are safe for processing within the
code, or when communicating with other components. When
software does not validate input properly, an attacker is
able to craft the input in a form that is not expected by
the rest of the application. This will lead to parts of the
system receiving unintended input, which may result in
altered control flow, arbitrary control of a resource, or
arbitrary code execution.
Input validation is not the only technique for
processing input, however. Other techniques attempt to
transform potentially-dangerous input into something safe, such
as filtering (CWE-790) - which attempts to remove dangerous
inputs - or encoding/escaping (CWE-116), which attempts to
ensure that the input is not misinterpreted when it is included
in output to another component. Other techniques exist as well
(see CWE-138 for more examples.)
Input validation can be applied to:
raw data - strings, numbers, parameters, file contents, etc.
metadata - information about the raw data, such as headers or size
Data can be simple or structured. Structured data
can be composed of many nested layers, composed of
combinations of metadata and raw data, with other simple or
structured data.
Many properties of raw data or metadata may need
to be validated upon entry into the code, such
as:
specified quantities such as size, length, frequency, price, rate, number of operations, time, etc.
implied or derived quantities, such as the actual size of a file instead of a specified size
indexes, offsets, or positions into more complex data structures
symbolic keys or other elements into hash tables, associative arrays, etc.
well-formedness, i.e. syntactic correctness - compliance with expected syntax
lexical token correctness - compliance with rules for what is treated as a token
specified or derived type - the actual type of the input (or what the input appears to be)
consistency - between individual data elements, between raw data and metadata, between references, etc.
conformance to domain-specific rules, e.g. business logic
equivalence - ensuring that equivalent inputs are treated the same
authenticity, ownership, or other attestations about the input, e.g. a cryptographic signature to prove the source of the data
Implied or derived properties of data must often
be calculated or inferred by the code itself. Errors in
deriving properties may be considered a contributing factor
to improper input validation.
Note that "input validation" has very different
meanings to different people, or within different
classification schemes. Caution must be used when
referencing this CWE entry or mapping to it. For example,
some weaknesses might involve inadvertently giving control
to an attacker over an input when they should not be able
to provide an input at all, but sometimes this is referred
to as input validation.
Finally, it is important to emphasize that the
distinctions between input validation and output escaping
are often blurred, and developers must be careful to
understand the difference, including how input validation
is not always sufficient to prevent vulnerabilities,
especially when less stringent data types must be
supported, such as free-form text. Consider a SQL injection
scenario in which a person's last name is inserted into a
query. The name "O'Reilly" would likely pass the validation
step since it is a common last name in the English
language. However, this valid name cannot be directly
inserted into the database because it contains the "'"
apostrophe character, which would need to be escaped or
otherwise transformed. In this case, removing the
apostrophe might reduce the risk of SQL injection, but it
would produce incorrect behavior because the wrong name
would be recorded.
| https://cwe.mitre.org/data/definitions/20.html | 0 | Antonio Sanchez | 2022-04-29 15:22:06-07:00 | Fix undefined behavior in QuantizedConv2D
Added more input validation and tests. Prior to this, we could get
`nullptr` exceptions when attempting to access 0th elements of 0-sized
inputs, leading to security vulnerability bugs.
Also needed to modify `quantized_conv_ops_test.cc` for consistency.
Previously the CPU kernel did technically support passing tensors
of rank larger than 0 for min/max values. However, the XLA kernels do not.
PiperOrigin-RevId: 445518507 | 0f0b080ecde4d3dfec158d6f60da34d5e31693c4 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::TEST_F | tensorflow::TEST_F( QuantizedConv2DTest , OddPadding) | ['QuantizedConv2DTest', 'OddPadding'] | TEST_F(QuantizedConv2DTest, OddPadding) {
const int stride = 2;
TF_ASSERT_OK(NodeDefBuilder("quantized_conv_op", "QuantizedConv2D")
.Input(FakeInput(DT_QUINT8))
.Input(FakeInput(DT_QUINT8))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Attr("out_type", DataTypeToEnum<qint32>::v())
.Attr("strides", {1, stride, stride, 1})
.Attr("padding", "SAME")
.Finalize(node_def()));
TF_ASSERT_OK(InitOp());
const int depth = 1;
const int image_width = 4;
const int image_height = 4;
const int image_batch_count = 1;
AddInputFromArray<quint8>(
TensorShape({image_batch_count, image_height, image_width, depth}),
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16});
const int filter_size = 3;
const int filter_count = 1;
AddInputFromArray<quint8>(
TensorShape({filter_size, filter_size, depth, filter_count}),
{1, 2, 3, 4, 5, 6, 7, 8, 9});
AddInputFromArray<float>(TensorShape({1}), {0});
AddInputFromArray<float>(TensorShape({1}), {255.0f});
AddInputFromArray<float>(TensorShape({1}), {0});
AddInputFromArray<float>(TensorShape({1}), {255.0f});
TF_ASSERT_OK(RunOpKernel());
const int expected_width = image_width / stride;
const int expected_height = (image_height * filter_count) / stride;
Tensor expected(DT_QINT32, TensorShape({image_batch_count, expected_height,
expected_width, filter_count}));
  // With 'SAME' padding and stride 2, the 4x4 image yields a 2x2 output; e.g.
  // the top-left value is (1*1)+(2*2)+(3*3)+(4*5)+(5*6)+(6*7)+(7*9)+(8*10)+(9*11) = 348.
  test::FillValues<qint32>(&expected, {348, 252, 274, 175});
test::ExpectTensorEqual<qint32>(expected, *GetOutput(0));
} | 405 | True | 1 |
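The commit message in the row above says the fix "added more input validation" so that 0-sized min/max inputs are rejected instead of having element 0 read out of an empty buffer. The snippet below is a minimal sketch of that kind of guard, written against TensorFlow's OP_REQUIRES macro and the input ordering used by the tests in this file (image, filter, min_input, max_input, min_filter, max_filter). It is an illustration of the described check, not the actual patch applied to quantized_conv_ops.cc.

// Sketch only: the kind of check the commit message describes, not the
// literal patch. Assumes the min/max range tensors are inputs 2..5.
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/platform/errors.h"

void ValidateQuantizationRangeInputs(tensorflow::OpKernelContext* context) {
  for (int i = 2; i <= 5; ++i) {
    const tensorflow::Tensor& t = context->input(i);
    // The unpatched kernel read t.flat<float>()(0) unconditionally; on a
    // 0-element tensor that binds a reference to nullptr (undefined behavior).
    OP_REQUIRES(context, t.NumElements() == 1,
                tensorflow::errors::InvalidArgument(
                    "min/max input ", i, " must contain exactly one element"));
  }
}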
CVE-2022-29201 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'name': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-20'}, {'lang': 'en', 'value': 'CWE-476'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:-:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.QuantizedConv2D` does not fully validate the input arguments. In this case, references get bound to `nullptr` for each argument that is empty. 
Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.'}] | 2022-06-02T18:16Z | 2022-05-20T23:15Z | NULL Pointer Dereference | A NULL pointer dereference occurs when the application dereferences a pointer that it expects to be valid, but is NULL, typically causing a crash or exit. | NULL pointer dereference issues can occur through a number of flaws, including race conditions, and simple programming omissions.
| https://cwe.mitre.org/data/definitions/476.html | 0 | Antonio Sanchez | 2022-04-29 15:22:06-07:00 | Fix undefined behavior in QuantizedConv2D
Added more input validation and tests. Prior to this, we could get
`nullptr` exceptions when attempting to access 0th elements of 0-sized
inputs, leading to security vulnerability bugs.
Also needed to modify `quantized_conv_ops_test.cc` for consistency.
Previously the CPU kernel did technically support passing tensors
of rank larger than 0 for min/max values. However, the XLA kernels do not.
PiperOrigin-RevId: 445518507 | 0f0b080ecde4d3dfec158d6f60da34d5e31693c4 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::TEST_F | tensorflow::TEST_F( QuantizedConv2DTest , OddPadding) | ['QuantizedConv2DTest', 'OddPadding'] | TEST_F(QuantizedConv2DTest, OddPadding) {
const int stride = 2;
TF_ASSERT_OK(NodeDefBuilder("quantized_conv_op", "QuantizedConv2D")
.Input(FakeInput(DT_QUINT8))
.Input(FakeInput(DT_QUINT8))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Attr("out_type", DataTypeToEnum<qint32>::v())
.Attr("strides", {1, stride, stride, 1})
.Attr("padding", "SAME")
.Finalize(node_def()));
TF_ASSERT_OK(InitOp());
const int depth = 1;
const int image_width = 4;
const int image_height = 4;
const int image_batch_count = 1;
AddInputFromArray<quint8>(
TensorShape({image_batch_count, image_height, image_width, depth}),
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16});
const int filter_size = 3;
const int filter_count = 1;
AddInputFromArray<quint8>(
TensorShape({filter_size, filter_size, depth, filter_count}),
{1, 2, 3, 4, 5, 6, 7, 8, 9});
AddInputFromArray<float>(TensorShape({1}), {0});
AddInputFromArray<float>(TensorShape({1}), {255.0f});
AddInputFromArray<float>(TensorShape({1}), {0});
AddInputFromArray<float>(TensorShape({1}), {255.0f});
TF_ASSERT_OK(RunOpKernel());
const int expected_width = image_width / stride;
const int expected_height = (image_height * filter_count) / stride;
Tensor expected(DT_QINT32, TensorShape({image_batch_count, expected_height,
expected_width, filter_count}));
test::FillValues<qint32>(&expected, {348, 252, 274, 175});
test::ExpectTensorEqual<qint32>(expected, *GetOutput(0));
} | 405 | True | 1 |
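The CVE-2022-29201 description above notes that empty input arguments caused references to be bound to nullptr inside the kernel. A regression-style check in the same OpsTestBase style as the tests in this file would feed a 0-element min_input and expect the op to fail cleanly. The following is a hypothetical sketch for illustration only, not a test taken from the repository, and the assumption that the patched kernel returns a non-OK status for this input follows from the commit message rather than from quoted code.

// Hypothetical sketch in the style of quantized_conv_ops_test.cc.
TEST_F(QuantizedConv2DTest, RejectsEmptyMinMax) {
  TF_ASSERT_OK(NodeDefBuilder("quantized_conv_op", "QuantizedConv2D")
                   .Input(FakeInput(DT_QUINT8))
                   .Input(FakeInput(DT_QUINT8))
                   .Input(FakeInput(DT_FLOAT))
                   .Input(FakeInput(DT_FLOAT))
                   .Input(FakeInput(DT_FLOAT))
                   .Input(FakeInput(DT_FLOAT))
                   .Attr("out_type", DataTypeToEnum<qint32>::v())
                   .Attr("strides", {1, 1, 1, 1})
                   .Attr("padding", "SAME")
                   .Finalize(node_def()));
  TF_ASSERT_OK(InitOp());
  AddInputFromArray<quint8>(TensorShape({1, 2, 2, 1}), {1, 2, 3, 4});
  AddInputFromArray<quint8>(TensorShape({1, 1, 1, 1}), {1});
  // Empty min_input: the unpatched kernel read element 0 of this tensor,
  // binding a reference to nullptr; the patched kernel should reject it.
  AddInputFromArray<float>(TensorShape({0}), {});
  AddInputFromArray<float>(TensorShape({1}), {255.0f});
  AddInputFromArray<float>(TensorShape({1}), {0});
  AddInputFromArray<float>(TensorShape({1}), {255.0f});
  EXPECT_FALSE(RunOpKernel().ok());
}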
CVE-2022-29201 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'name': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-20'}, {'lang': 'en', 'value': 'CWE-476'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:-:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.QuantizedConv2D` does not fully validate the input arguments. In this case, references get bound to `nullptr` for each argument that is empty. 
Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.'}] | 2022-06-02T18:16Z | 2022-05-20T23:15Z | Improper Input Validation | The product receives input or data, but it does
not validate or incorrectly validates that the input has the
properties that are required to process the data safely and
correctly. |
Input validation is a frequently-used technique
for checking potentially dangerous inputs in order to
ensure that the inputs are safe for processing within the
code, or when communicating with other components. When
software does not validate input properly, an attacker is
able to craft the input in a form that is not expected by
the rest of the application. This will lead to parts of the
system receiving unintended input, which may result in
altered control flow, arbitrary control of a resource, or
arbitrary code execution.
Input validation is not the only technique for
processing input, however. Other techniques attempt to
transform potentially-dangerous input into something safe, such
as filtering (CWE-790) - which attempts to remove dangerous
inputs - or encoding/escaping (CWE-116), which attempts to
ensure that the input is not misinterpreted when it is included
in output to another component. Other techniques exist as well
(see CWE-138 for more examples.)
Input validation can be applied to:
raw data - strings, numbers, parameters, file contents, etc.
metadata - information about the raw data, such as headers or size
Data can be simple or structured. Structured data
can be composed of many nested layers, composed of
combinations of metadata and raw data, with other simple or
structured data.
Many properties of raw data or metadata may need
to be validated upon entry into the code, such
as:
specified quantities such as size, length, frequency, price, rate, number of operations, time, etc.
implied or derived quantities, such as the actual size of a file instead of a specified size
indexes, offsets, or positions into more complex data structures
symbolic keys or other elements into hash tables, associative arrays, etc.
well-formedness, i.e. syntactic correctness - compliance with expected syntax
lexical token correctness - compliance with rules for what is treated as a token
specified or derived type - the actual type of the input (or what the input appears to be)
consistency - between individual data elements, between raw data and metadata, between references, etc.
conformance to domain-specific rules, e.g. business logic
equivalence - ensuring that equivalent inputs are treated the same
authenticity, ownership, or other attestations about the input, e.g. a cryptographic signature to prove the source of the data
Implied or derived properties of data must often
be calculated or inferred by the code itself. Errors in
deriving properties may be considered a contributing factor
to improper input validation.
Note that "input validation" has very different
meanings to different people, or within different
classification schemes. Caution must be used when
referencing this CWE entry or mapping to it. For example,
some weaknesses might involve inadvertently giving control
to an attacker over an input when they should not be able
to provide an input at all, but sometimes this is referred
to as input validation.
Finally, it is important to emphasize that the
distinctions between input validation and output escaping
are often blurred, and developers must be careful to
understand the difference, including how input validation
is not always sufficient to prevent vulnerabilities,
especially when less stringent data types must be
supported, such as free-form text. Consider a SQL injection
scenario in which a person's last name is inserted into a
query. The name "O'Reilly" would likely pass the validation
step since it is a common last name in the English
language. However, this valid name cannot be directly
inserted into the database because it contains the "'"
apostrophe character, which would need to be escaped or
otherwise transformed. In this case, removing the
apostrophe might reduce the risk of SQL injection, but it
would produce incorrect behavior because the wrong name
would be recorded.
| https://cwe.mitre.org/data/definitions/20.html | 0 | Antonio Sanchez | 2022-04-29 15:22:06-07:00 | Fix undefined behavior in QuantizedConv2D
Added more input validation and tests. Prior to this, we could get
`nullptr` exceptions when attempting to access 0th elements of 0-sized
inputs, leading to security vulnerability bugs.
Also needed to modify `quantized_conv_ops_test.cc` for consistency.
Previously the CPU kernel did technically support passing tensors
of rank larger than 0 for min/max values. However, the XLA kernels do not.
PiperOrigin-RevId: 445518507 | 0f0b080ecde4d3dfec158d6f60da34d5e31693c4 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::TEST_F | tensorflow::TEST_F( QuantizedConv2DTest , OddPaddingBatch) | ['QuantizedConv2DTest', 'OddPaddingBatch'] | TEST_F(QuantizedConv2DTest, OddPaddingBatch) {
const int stride = 2;
TF_ASSERT_OK(NodeDefBuilder("quantized_conv_op", "QuantizedConv2D")
.Input(FakeInput(DT_QUINT8))
.Input(FakeInput(DT_QUINT8))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Attr("out_type", DataTypeToEnum<qint32>::v())
.Attr("strides", {1, stride, stride, 1})
.Attr("padding", "SAME")
.Finalize(node_def()));
TF_ASSERT_OK(InitOp());
const int depth = 1;
const int image_width = 4;
const int image_height = 4;
const int image_batch_count = 3;
AddInputFromArray<quint8>(
TensorShape({image_batch_count, image_height, image_width, depth}),
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16});
const int filter_size = 3;
const int filter_count = 1;
AddInputFromArray<quint8>(
TensorShape({filter_size, filter_size, depth, filter_count}),
{1, 2, 3, 4, 5, 6, 7, 8, 9});
AddInputFromArray<float>(TensorShape({1}), {0});
AddInputFromArray<float>(TensorShape({1}), {255.0f});
AddInputFromArray<float>(TensorShape({1}), {0});
AddInputFromArray<float>(TensorShape({1}), {255.0f});
TF_ASSERT_OK(RunOpKernel());
const int expected_width = image_width / stride;
const int expected_height = (image_height * filter_count) / stride;
Tensor expected(DT_QINT32, TensorShape({image_batch_count, expected_height,
expected_width, filter_count}));
test::FillValues<qint32>(&expected, {348, 252, 274, 175, //
348, 252, 274, 175, //
348, 252, 274, 175});
test::ExpectTensorEqual<qint32>(expected, *GetOutput(0));
} | 485 | True | 1 |
CVE-2022-29201 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'name': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-20'}, {'lang': 'en', 'value': 'CWE-476'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:-:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.QuantizedConv2D` does not fully validate the input arguments. In this case, references get bound to `nullptr` for each argument that is empty. 
Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.'}] | 2022-06-02T18:16Z | 2022-05-20T23:15Z | NULL Pointer Dereference | A NULL pointer dereference occurs when the application dereferences a pointer that it expects to be valid, but is NULL, typically causing a crash or exit. | NULL pointer dereference issues can occur through a number of flaws, including race conditions, and simple programming omissions.
| https://cwe.mitre.org/data/definitions/476.html | 0 | Antonio Sanchez | 2022-04-29 15:22:06-07:00 | Fix undefined behavior in QuantizedConv2D
Added more input validation and tests. Prior to this, we could get
`nullptr` exceptions when attempting to access 0th elements of 0-sized
inputs, leading to security vulnerability bugs.
Also needed to modify `quantized_conv_ops_test.cc` for consistency.
Previously the CPU kernel did technically support passing tensors
of rank larger than 0 for min/max values. However, the XLA kernels do not.
PiperOrigin-RevId: 445518507 | 0f0b080ecde4d3dfec158d6f60da34d5e31693c4 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::TEST_F | tensorflow::TEST_F( QuantizedConv2DTest , OddPaddingBatch) | ['QuantizedConv2DTest', 'OddPaddingBatch'] | TEST_F(QuantizedConv2DTest, OddPaddingBatch) {
const int stride = 2;
TF_ASSERT_OK(NodeDefBuilder("quantized_conv_op", "QuantizedConv2D")
.Input(FakeInput(DT_QUINT8))
.Input(FakeInput(DT_QUINT8))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Attr("out_type", DataTypeToEnum<qint32>::v())
.Attr("strides", {1, stride, stride, 1})
.Attr("padding", "SAME")
.Finalize(node_def()));
TF_ASSERT_OK(InitOp());
const int depth = 1;
const int image_width = 4;
const int image_height = 4;
const int image_batch_count = 3;
AddInputFromArray<quint8>(
TensorShape({image_batch_count, image_height, image_width, depth}),
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16});
const int filter_size = 3;
const int filter_count = 1;
AddInputFromArray<quint8>(
TensorShape({filter_size, filter_size, depth, filter_count}),
{1, 2, 3, 4, 5, 6, 7, 8, 9});
AddInputFromArray<float>(TensorShape({1}), {0});
AddInputFromArray<float>(TensorShape({1}), {255.0f});
AddInputFromArray<float>(TensorShape({1}), {0});
AddInputFromArray<float>(TensorShape({1}), {255.0f});
TF_ASSERT_OK(RunOpKernel());
const int expected_width = image_width / stride;
const int expected_height = (image_height * filter_count) / stride;
Tensor expected(DT_QINT32, TensorShape({image_batch_count, expected_height,
expected_width, filter_count}));
test::FillValues<qint32>(&expected, {348, 252, 274, 175, //
348, 252, 274, 175, //
348, 252, 274, 175});
test::ExpectTensorEqual<qint32>(expected, *GetOutput(0));
} | 485 | True | 1 |
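The CWE-476 text quoted in the row above is abstract. A self-contained C++ illustration of the pattern it describes, a pointer assumed valid versus one that is checked, is sketched below; it is generic and unrelated to the TensorFlow kernels in this dataset.

#include <cstdio>

// Returns the length of s, or 0 if s is null. Dereferencing s without the
// null check is the CWE-476 pattern: the pointer is assumed valid, and a
// null argument crashes (or worse) instead of being handled.
size_t SafeLength(const char* s) {
  if (s == nullptr) {  // the guard an unchecked version omits
    return 0;
  }
  size_t n = 0;
  while (s[n] != '\0') ++n;
  return n;
}

int main() {
  std::printf("%zu\n", SafeLength("hello"));   // prints 5
  std::printf("%zu\n", SafeLength(nullptr));   // prints 0 instead of crashing
  return 0;
}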
CVE-2022-29201 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'name': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-20'}, {'lang': 'en', 'value': 'CWE-476'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:-:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.QuantizedConv2D` does not fully validate the input arguments. In this case, references get bound to `nullptr` for each argument that is empty. 
Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.'}] | 2022-06-02T18:16Z | 2022-05-20T23:15Z | Improper Input Validation | The product receives input or data, but it does
not validate or incorrectly validates that the input has the
properties that are required to process the data safely and
correctly. |
Input validation is a frequently-used technique
for checking potentially dangerous inputs in order to
ensure that the inputs are safe for processing within the
code, or when communicating with other components. When
software does not validate input properly, an attacker is
able to craft the input in a form that is not expected by
the rest of the application. This will lead to parts of the
system receiving unintended input, which may result in
altered control flow, arbitrary control of a resource, or
arbitrary code execution.
Input validation is not the only technique for
processing input, however. Other techniques attempt to
transform potentially-dangerous input into something safe, such
as filtering (CWE-790) - which attempts to remove dangerous
inputs - or encoding/escaping (CWE-116), which attempts to
ensure that the input is not misinterpreted when it is included
in output to another component. Other techniques exist as well
(see CWE-138 for more examples.)
Input validation can be applied to:
raw data - strings, numbers, parameters, file contents, etc.
metadata - information about the raw data, such as headers or size
Data can be simple or structured. Structured data
can be composed of many nested layers, composed of
combinations of metadata and raw data, with other simple or
structured data.
Many properties of raw data or metadata may need
to be validated upon entry into the code, such
as:
specified quantities such as size, length, frequency, price, rate, number of operations, time, etc.
implied or derived quantities, such as the actual size of a file instead of a specified size
indexes, offsets, or positions into more complex data structures
symbolic keys or other elements into hash tables, associative arrays, etc.
well-formedness, i.e. syntactic correctness - compliance with expected syntax
lexical token correctness - compliance with rules for what is treated as a token
specified or derived type - the actual type of the input (or what the input appears to be)
consistency - between individual data elements, between raw data and metadata, between references, etc.
conformance to domain-specific rules, e.g. business logic
equivalence - ensuring that equivalent inputs are treated the same
authenticity, ownership, or other attestations about the input, e.g. a cryptographic signature to prove the source of the data
Implied or derived properties of data must often
be calculated or inferred by the code itself. Errors in
deriving properties may be considered a contributing factor
to improper input validation.
Note that "input validation" has very different
meanings to different people, or within different
classification schemes. Caution must be used when
referencing this CWE entry or mapping to it. For example,
some weaknesses might involve inadvertently giving control
to an attacker over an input when they should not be able
to provide an input at all, but sometimes this is referred
to as input validation.
Finally, it is important to emphasize that the
distinctions between input validation and output escaping
are often blurred, and developers must be careful to
understand the difference, including how input validation
is not always sufficient to prevent vulnerabilities,
especially when less stringent data types must be
supported, such as free-form text. Consider a SQL injection
scenario in which a person's last name is inserted into a
query. The name "O'Reilly" would likely pass the validation
step since it is a common last name in the English
language. However, this valid name cannot be directly
inserted into the database because it contains the "'"
apostrophe character, which would need to be escaped or
otherwise transformed. In this case, removing the
apostrophe might reduce the risk of SQL injection, but it
would produce incorrect behavior because the wrong name
would be recorded.
| https://cwe.mitre.org/data/definitions/20.html | 0 | Antonio Sanchez | 2022-04-29 15:22:06-07:00 | Fix undefined behavior in QuantizedConv2D
Added more input validation and tests. Prior to this, we could get
`nullptr` exceptions when attempting to access 0th elements of 0-sized
inputs, leading to security vulnerability bugs.
Also needed to modify `quantized_conv_ops_test.cc` for consistency.
Previously the CPU kernel did technically support passing tensors
of rank larger than 0 for min/max values. However, the XLA kernels do not.
PiperOrigin-RevId: 445518507 | 0f0b080ecde4d3dfec158d6f60da34d5e31693c4 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::TEST_F | tensorflow::TEST_F( QuantizedConv2DTest , Small) | ['QuantizedConv2DTest', 'Small'] | TEST_F(QuantizedConv2DTest, Small) {
const int stride = 1;
TF_ASSERT_OK(NodeDefBuilder("quantized_conv_op", "QuantizedConv2D")
.Input(FakeInput(DT_QUINT8))
.Input(FakeInput(DT_QUINT8))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Attr("out_type", DataTypeToEnum<qint32>::v())
.Attr("strides", {1, stride, stride, 1})
.Attr("padding", "SAME")
.Finalize(node_def()));
TF_ASSERT_OK(InitOp());
const int depth = 1;
const int image_width = 4;
const int image_height = 3;
const int image_batch_count = 1;
// The image data should always be able to represent zero, to allow a fast
// implementation of border padding, so we set the min value to 0.
const float image_min = 0.0f;
const float image_max = 12.0f;
// The image matrix is:
// | 1 | 2 | 3 | 4 |
// | 5 | 6 | 7 | 8 |
// | 9 | 10 | 11 | 12 |
Tensor image_float(DT_FLOAT,
{image_batch_count, image_height, image_width, depth});
test::FillValues<float>(&image_float,
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12});
Tensor image_quantized =
FloatTensorToQuantized<quint8>(image_float, image_min, image_max);
// The filter matrix is:
// | 1 | 4 | 7 |
// | 2 | 5 | 8 |
// | 3 | 6 | 9 |
const int filter_size = 3;
const int filter_count = 1;
const float filter_min = 1.0f;
const float filter_max = 9.0f;
Tensor filter_float(DT_FLOAT,
{filter_size, filter_size, depth, filter_count});
test::FillValues<float>(&filter_float, {1, 4, 7, 2, 5, 8, 3, 6, 9});
Tensor filter_quantized =
FloatTensorToQuantized<quint8>(filter_float, filter_min, filter_max);
AddInputFromArray<quint8>(image_quantized.shape(),
image_quantized.flat<quint8>());
AddInputFromArray<quint8>(filter_quantized.shape(),
filter_quantized.flat<quint8>());
AddInputFromArray<float>(TensorShape({1}), {image_min});
AddInputFromArray<float>(TensorShape({1}), {image_max});
AddInputFromArray<float>(TensorShape({1}), {filter_min});
AddInputFromArray<float>(TensorShape({1}), {filter_max});
TF_ASSERT_OK(RunOpKernel());
// We're sliding the 3x3 filter across the 3x4 image, with accesses outside
// the input set to zero because we're using the 'SAME' padding mode.
// The calculations behind the expected output are:
// (1*0)+(4*0)+(7*0)+(2*0)+(5*1)+(8*2)+(3*0)+(6*5)+(9*6)=105
// (1*0)+(4*0)+(7*0)+(2*1)+(5*2)+(8*3)+(3*5)+(6*6)+(9*7)=150
// (1*0)+(4*0)+(7*0)+(2*2)+(5*3)+(8*4)+(3*6)+(6*7)+(9*8)=183
// (1*0)+(4*0)+(7*0)+(2*3)+(5*4)+(8*0)+(3*7)+(6*8)+(9*0)=95
// (1*0)+(4*1)+(7*2)+(2*0)+(5*5)+(8*6)+(3*0)+(6*9)+(9*10)=235
// (1*1)+(4*2)+(7*3)+(2*5)+(5*6)+(8*7)+(3*9)+(6*10)+(9*11)=312
// (1*2)+(4*3)+(7*4)+(2*6)+(5*7)+(8*8)+(3*10)+(6*11)+(9*12)=357
// (1*3)+(4*4)+(7*0)+(2*7)+(5*8)+(8*0)+(3*11)+(6*12)+(9*0)=178
// (1*0)+(4*5)+(7*6)+(2*0)+(5*9)+(8*10)+(3*0)+(6*0)+(9*0)=187
// (1*5)+(4*6)+(7*7)+(2*9)+(5*10)+(8*11)+(3*0)+(6*0)+(9*0)=234
// (1*6)+(4*7)+(7*8)+(2*10)+(5*11)+(8*12)+(3*0)+(6*0)+(9*0)=261
// (1*7)+(4*8)+(7*0)+(2*11)+(5*12)+(8*0)+(3*0)+(6*0)+(9*0)=121
// This means we should end up with this matrix:
// | 105 | 150 | 183 | 95 |
// | 235 | 312 | 357 | 178 |
// | 187 | 234 | 261 | 121 |
const int expected_width = image_width;
const int expected_height = image_height * filter_count;
Tensor expected_float(
DT_FLOAT, TensorShape({image_batch_count, expected_height, expected_width,
filter_count}));
test::FillValues<float>(&expected_float, {105, 150, 183, 95, 235, 312, 357,
178, 187, 234, 261, 121});
const Tensor& output_quantized = *GetOutput(0);
const float output_min = GetOutput(1)->flat<float>()(0);
const float output_max = GetOutput(2)->flat<float>()(0);
Tensor output_float =
QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max);
test::ExpectTensorNear<float>(expected_float, output_float, 1.0);
} | 587 | True | 1 |
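The Small test above quantizes its float inputs with FloatTensorToQuantized and converts the qint32 result back with QuantizedTensorToFloat. The sketch below shows the standard min/max affine mapping such helpers are built around; the exact rounding and clamping behavior of TensorFlow's quantization utilities is an assumption here, and the 1.0 tolerance passed to ExpectTensorNear in the test absorbs the rounding error this mapping introduces.

#include <algorithm>
#include <cmath>
#include <cstdint>

// Sketch of the affine min/max quantization scheme; exact rounding details
// in TensorFlow's quantization_utils may differ.
uint8_t FloatToQuint8(float value, float range_min, float range_max) {
  const float scale = 255.0f / (range_max - range_min);
  const float q = std::round((value - range_min) * scale);
  return static_cast<uint8_t>(std::min(255.0f, std::max(0.0f, q)));
}

float Quint8ToFloat(uint8_t q, float range_min, float range_max) {
  const float scale = (range_max - range_min) / 255.0f;
  return range_min + q * scale;
}
// With the [0, 12] image range used by the Small test, the inputs 1..12 map
// to evenly spaced quint8 codes, and converting the qint32 output back to
// float recovers the expected convolution values within the test tolerance.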
CVE-2022-29201 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'name': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-20'}, {'lang': 'en', 'value': 'CWE-476'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:-:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.QuantizedConv2D` does not fully validate the input arguments. In this case, references get bound to `nullptr` for each argument that is empty. 
Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.'}] | 2022-06-02T18:16Z | 2022-05-20T23:15Z | NULL Pointer Dereference | A NULL pointer dereference occurs when the application dereferences a pointer that it expects to be valid, but is NULL, typically causing a crash or exit. | NULL pointer dereference issues can occur through a number of flaws, including race conditions, and simple programming omissions.
| https://cwe.mitre.org/data/definitions/476.html | 0 | Antonio Sanchez | 2022-04-29 15:22:06-07:00 | Fix undefined behavior in QuantizedConv2D
Added more input validation and tests. Prior to this, we could get
`nullptr` exceptions when attempting to access 0th elements of 0-sized
inputs, leading to security vulnerability bugs.
Also needed to modify `quantized_conv_ops_test.cc` for consistency.
Previously the CPU kernel did technically support passing tensors
of rank larger than 0 for min/max values. However, the XLA kernels do not.
PiperOrigin-RevId: 445518507 | 0f0b080ecde4d3dfec158d6f60da34d5e31693c4 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::TEST_F | tensorflow::TEST_F( QuantizedConv2DTest , Small) | ['QuantizedConv2DTest', 'Small'] | TEST_F(QuantizedConv2DTest, Small) {
const int stride = 1;
TF_ASSERT_OK(NodeDefBuilder("quantized_conv_op", "QuantizedConv2D")
.Input(FakeInput(DT_QUINT8))
.Input(FakeInput(DT_QUINT8))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Attr("out_type", DataTypeToEnum<qint32>::v())
.Attr("strides", {1, stride, stride, 1})
.Attr("padding", "SAME")
.Finalize(node_def()));
TF_ASSERT_OK(InitOp());
const int depth = 1;
const int image_width = 4;
const int image_height = 3;
const int image_batch_count = 1;
// The image data should always be able to represent zero, to allow a fast
// implementation of border padding, so we set the min value to 0.
const float image_min = 0.0f;
const float image_max = 12.0f;
// The image matrix is:
// | 1 | 2 | 3 | 4 |
// | 5 | 6 | 7 | 8 |
// | 9 | 10 | 11 | 12 |
Tensor image_float(DT_FLOAT,
{image_batch_count, image_height, image_width, depth});
test::FillValues<float>(&image_float,
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12});
Tensor image_quantized =
FloatTensorToQuantized<quint8>(image_float, image_min, image_max);
// The filter matrix is:
// | 1 | 4 | 7 |
// | 2 | 5 | 8 |
// | 3 | 6 | 9 |
const int filter_size = 3;
const int filter_count = 1;
const float filter_min = 1.0f;
const float filter_max = 9.0f;
Tensor filter_float(DT_FLOAT,
{filter_size, filter_size, depth, filter_count});
test::FillValues<float>(&filter_float, {1, 4, 7, 2, 5, 8, 3, 6, 9});
Tensor filter_quantized =
FloatTensorToQuantized<quint8>(filter_float, filter_min, filter_max);
AddInputFromArray<quint8>(image_quantized.shape(),
image_quantized.flat<quint8>());
AddInputFromArray<quint8>(filter_quantized.shape(),
filter_quantized.flat<quint8>());
AddInputFromArray<float>(TensorShape({1}), {image_min});
AddInputFromArray<float>(TensorShape({1}), {image_max});
AddInputFromArray<float>(TensorShape({1}), {filter_min});
AddInputFromArray<float>(TensorShape({1}), {filter_max});
TF_ASSERT_OK(RunOpKernel());
// We're sliding the 3x3 filter across the 3x4 image, with accesses outside
// the input set to zero because we're using the 'SAME' padding mode.
// The calculations behind the expected output are:
// (1*0)+(4*0)+(7*0)+(2*0)+(5*1)+(8*2)+(3*0)+(6*5)+(9*6)=105
// (1*0)+(4*0)+(7*0)+(2*1)+(5*2)+(8*3)+(3*5)+(6*6)+(9*7)=150
// (1*0)+(4*0)+(7*0)+(2*2)+(5*3)+(8*4)+(3*6)+(6*7)+(9*8)=183
// (1*0)+(4*0)+(7*0)+(2*3)+(5*4)+(8*0)+(3*7)+(6*8)+(9*0)=95
// (1*0)+(4*1)+(7*2)+(2*0)+(5*5)+(8*6)+(3*0)+(6*9)+(9*10)=235
// (1*1)+(4*2)+(7*3)+(2*5)+(5*6)+(8*7)+(3*9)+(6*10)+(9*11)=312
// (1*2)+(4*3)+(7*4)+(2*6)+(5*7)+(8*8)+(3*10)+(6*11)+(9*12)=357
// (1*3)+(4*4)+(7*0)+(2*7)+(5*8)+(8*0)+(3*11)+(6*12)+(9*0)=178
// (1*0)+(4*5)+(7*6)+(2*0)+(5*9)+(8*10)+(3*0)+(6*0)+(9*0)=187
// (1*5)+(4*6)+(7*7)+(2*9)+(5*10)+(8*11)+(3*0)+(6*0)+(9*0)=234
// (1*6)+(4*7)+(7*8)+(2*10)+(5*11)+(8*12)+(3*0)+(6*0)+(9*0)=261
// (1*7)+(4*8)+(7*0)+(2*11)+(5*12)+(8*0)+(3*0)+(6*0)+(9*0)=121
// This means we should end up with this matrix:
// | 105 | 150 | 183 | 95 |
// | 235 | 312 | 357 | 178 |
// | 187 | 234 | 261 | 121 |
const int expected_width = image_width;
const int expected_height = image_height * filter_count;
Tensor expected_float(
DT_FLOAT, TensorShape({image_batch_count, expected_height, expected_width,
filter_count}));
test::FillValues<float>(&expected_float, {105, 150, 183, 95, 235, 312, 357,
178, 187, 234, 261, 121});
const Tensor& output_quantized = *GetOutput(0);
const float output_min = GetOutput(1)->flat<float>()(0);
const float output_max = GetOutput(2)->flat<float>()(0);
Tensor output_float =
QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max);
test::ExpectTensorNear<float>(expected_float, output_float, 1.0);
} | 587 | True | 1 |
CVE-2022-29201 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'name': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-20'}, {'lang': 'en', 'value': 'CWE-476'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:-:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.QuantizedConv2D` does not fully validate the input arguments. In this case, references get bound to `nullptr` for each argument that is empty. 
Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.'}] | 2022-06-02T18:16Z | 2022-05-20T23:15Z | Improper Input Validation | The product receives input or data, but it does
not validate or incorrectly validates that the input has the
properties that are required to process the data safely and
correctly. |
Input validation is a frequently-used technique
for checking potentially dangerous inputs in order to
ensure that the inputs are safe for processing within the
code, or when communicating with other components. When
software does not validate input properly, an attacker is
able to craft the input in a form that is not expected by
the rest of the application. This will lead to parts of the
system receiving unintended input, which may result in
altered control flow, arbitrary control of a resource, or
arbitrary code execution.
Input validation is not the only technique for
processing input, however. Other techniques attempt to
transform potentially-dangerous input into something safe, such
as filtering (CWE-790) - which attempts to remove dangerous
inputs - or encoding/escaping (CWE-116), which attempts to
ensure that the input is not misinterpreted when it is included
in output to another component. Other techniques exist as well
(see CWE-138 for more examples.)
Input validation can be applied to:
raw data - strings, numbers, parameters, file contents, etc.
metadata - information about the raw data, such as headers or size
Data can be simple or structured. Structured data
can be composed of many nested layers, composed of
combinations of metadata and raw data, with other simple or
structured data.
Many properties of raw data or metadata may need
to be validated upon entry into the code, such
as:
specified quantities such as size, length, frequency, price, rate, number of operations, time, etc.
implied or derived quantities, such as the actual size of a file instead of a specified size
indexes, offsets, or positions into more complex data structures
symbolic keys or other elements into hash tables, associative arrays, etc.
well-formedness, i.e. syntactic correctness - compliance with expected syntax
lexical token correctness - compliance with rules for what is treated as a token
specified or derived type - the actual type of the input (or what the input appears to be)
consistency - between individual data elements, between raw data and metadata, between references, etc.
conformance to domain-specific rules, e.g. business logic
equivalence - ensuring that equivalent inputs are treated the same
authenticity, ownership, or other attestations about the input, e.g. a cryptographic signature to prove the source of the data
Implied or derived properties of data must often
be calculated or inferred by the code itself. Errors in
deriving properties may be considered a contributing factor
to improper input validation.
Note that "input validation" has very different
meanings to different people, or within different
classification schemes. Caution must be used when
referencing this CWE entry or mapping to it. For example,
some weaknesses might involve inadvertently giving control
to an attacker over an input when they should not be able
to provide an input at all, but sometimes this is referred
to as input validation.
Finally, it is important to emphasize that the
distinctions between input validation and output escaping
are often blurred, and developers must be careful to
understand the difference, including how input validation
is not always sufficient to prevent vulnerabilities,
especially when less stringent data types must be
supported, such as free-form text. Consider a SQL injection
scenario in which a person's last name is inserted into a
query. The name "O'Reilly" would likely pass the validation
step since it is a common last name in the English
language. However, this valid name cannot be directly
inserted into the database because it contains the "'"
apostrophe character, which would need to be escaped or
otherwise transformed. In this case, removing the
apostrophe might reduce the risk of SQL injection, but it
would produce incorrect behavior because the wrong name
would be recorded.
| https://cwe.mitre.org/data/definitions/20.html | 0 | Antonio Sanchez | 2022-04-29 15:22:06-07:00 | Fix undefined behavior in QuantizedConv2D
Added more input validation and tests. Prior to this, we could get
`nullptr` exceptions when attempting to access 0th elements of 0-sized
inputs, leading to security vulnerability bugs.
Also needed to modify `quantized_conv_ops_test.cc` for consistency.
Previously the CPU kernel did technically support passing tensors
of rank larger than 0 for min/max values. However, the XLA kernels do not.
PiperOrigin-RevId: 445518507 | 0f0b080ecde4d3dfec158d6f60da34d5e31693c4 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::TEST_F | tensorflow::TEST_F( QuantizedConv2DTest , Small32Bit) | ['QuantizedConv2DTest', 'Small32Bit'] | TEST_F(QuantizedConv2DTest, Small32Bit) {
const int stride = 1;
TF_ASSERT_OK(NodeDefBuilder("quantized_conv_op", "QuantizedConv2D")
.Input(FakeInput(DT_QUINT8))
.Input(FakeInput(DT_QUINT8))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Attr("out_type", DataTypeToEnum<qint32>::v())
.Attr("strides", {1, stride, stride, 1})
.Attr("padding", "SAME")
.Finalize(node_def()));
TF_ASSERT_OK(InitOp());
const int depth = 1;
const int image_width = 4;
const int image_height = 3;
const int image_batch_count = 1;
AddInputFromArray<quint8>(
TensorShape({image_batch_count, image_height, image_width, depth}),
{10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120});
const int filter_size = 3;
const int filter_count = 1;
AddInputFromArray<quint8>(
TensorShape({filter_size, filter_size, depth, filter_count}),
{10, 40, 70, 20, 50, 80, 30, 60, 90});
AddInputFromArray<float>(TensorShape({1}), {0});
AddInputFromArray<float>(TensorShape({1}), {255.0f});
AddInputFromArray<float>(TensorShape({1}), {0});
AddInputFromArray<float>(TensorShape({1}), {255.0f});
TF_ASSERT_OK(RunOpKernel());
const int expected_width = image_width;
const int expected_height = image_height * filter_count;
Tensor expected(DT_QINT32, TensorShape({image_batch_count, expected_height,
expected_width, filter_count}));
test::FillValues<qint32>(
&expected, {10500, 15000, 18300, 9500, 23500, 31200, 35700, 17800, 18700,
23400, 26100, 12100});
test::ExpectTensorEqual<qint32>(expected, *GetOutput(0));
} | 407 | True | 1 |
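The CWE-20 text quoted in the row above closes with the "O'Reilly" example: a valid last name that naive escaping or apostrophe-stripping either mangles or leaves dangerous. Parameter binding avoids that dilemma because the value never becomes part of the SQL text. The sketch below uses the SQLite C API purely for illustration; the choice of library, table, and column names is an assumption, not anything taken from this dataset.

#include <sqlite3.h>
#include <string>

// Stores a last name via a bound parameter, so "O'Reilly" is recorded
// verbatim with no escaping and no injection risk. Sketch only; error
// handling is abbreviated.
bool InsertLastName(sqlite3* db, const std::string& last_name) {
  sqlite3_stmt* stmt = nullptr;
  if (sqlite3_prepare_v2(db, "INSERT INTO people(last_name) VALUES (?1);", -1,
                         &stmt, nullptr) != SQLITE_OK) {
    return false;
  }
  sqlite3_bind_text(stmt, 1, last_name.c_str(), -1, SQLITE_TRANSIENT);
  const bool ok = (sqlite3_step(stmt) == SQLITE_DONE);
  sqlite3_finalize(stmt);
  return ok;
}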
CVE-2022-29201 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'name': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-20'}, {'lang': 'en', 'value': 'CWE-476'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:-:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.QuantizedConv2D` does not fully validate the input arguments. In this case, references get bound to `nullptr` for each argument that is empty. 
Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.'}] | 2022-06-02T18:16Z | 2022-05-20T23:15Z | NULL Pointer Dereference | A NULL pointer dereference occurs when the application dereferences a pointer that it expects to be valid, but is NULL, typically causing a crash or exit. | NULL pointer dereference issues can occur through a number of flaws, including race conditions, and simple programming omissions.
| https://cwe.mitre.org/data/definitions/476.html | 0 | Antonio Sanchez | 2022-04-29 15:22:06-07:00 | Fix undefined behavior in QuantizedConv2D
Added more input validation and tests. Prior to this, we could get
`nullptr` exceptions when attempting to access 0th elements of 0-sized
inputs, leading to security vulnerability bugs.
Also needed to modify `quantized_conv_ops_test.cc` for consistency.
Previously the CPU kernel did technically support passing tensors
of rank larger than 0 for min/max values. However, the XLA kernels do not.
PiperOrigin-RevId: 445518507 | 0f0b080ecde4d3dfec158d6f60da34d5e31693c4 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::TEST_F | tensorflow::TEST_F( QuantizedConv2DTest , Small32Bit) | ['QuantizedConv2DTest', 'Small32Bit'] | TEST_F(QuantizedConv2DTest, Small32Bit) {
const int stride = 1;
TF_ASSERT_OK(NodeDefBuilder("quantized_conv_op", "QuantizedConv2D")
.Input(FakeInput(DT_QUINT8))
.Input(FakeInput(DT_QUINT8))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Attr("out_type", DataTypeToEnum<qint32>::v())
.Attr("strides", {1, stride, stride, 1})
.Attr("padding", "SAME")
.Finalize(node_def()));
TF_ASSERT_OK(InitOp());
const int depth = 1;
const int image_width = 4;
const int image_height = 3;
const int image_batch_count = 1;
AddInputFromArray<quint8>(
TensorShape({image_batch_count, image_height, image_width, depth}),
{10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120});
const int filter_size = 3;
const int filter_count = 1;
AddInputFromArray<quint8>(
TensorShape({filter_size, filter_size, depth, filter_count}),
{10, 40, 70, 20, 50, 80, 30, 60, 90});
AddInputFromArray<float>(TensorShape({1}), {0});
AddInputFromArray<float>(TensorShape({1}), {255.0f});
AddInputFromArray<float>(TensorShape({1}), {0});
AddInputFromArray<float>(TensorShape({1}), {255.0f});
TF_ASSERT_OK(RunOpKernel());
const int expected_width = image_width;
const int expected_height = image_height * filter_count;
Tensor expected(DT_QINT32, TensorShape({image_batch_count, expected_height,
expected_width, filter_count}));
test::FillValues<qint32>(
&expected, {10500, 15000, 18300, 9500, 23500, 31200, 35700, 17800, 18700,
23400, 26100, 12100});
test::ExpectTensorEqual<qint32>(expected, *GetOutput(0));
} | 407 | True | 1 |
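The advisory text and commit message in this record describe the root cause: when the min/max range inputs to QuantizedConv2D are empty, binding a reference to their first element yields nullptr. As an illustration of the class of guard such a fix adds (a hedged sketch only, not the actual patch), the helper below rejects range inputs that do not hold exactly one element before the kernel ever reads element 0. It reuses standard TensorFlow kernel facilities (OP_REQUIRES, errors::InvalidArgument) that also appear elsewhere in this record; the helper name and the choice to validate a run of consecutive inputs are assumptions made for the example.

#include "tensorflow/core/framework/op_kernel.h"     // OpKernelContext, OP_REQUIRES
#include "tensorflow/core/framework/tensor_shape.h"  // TensorShape utilities
#include "tensorflow/core/platform/errors.h"         // errors::InvalidArgument

namespace tensorflow {

// Sketch: reject empty or mis-shaped quantization range inputs up front, so a
// later `input.flat<float>()(0)` can never bind a reference to nullptr.
void ValidateQuantizationRanges(OpKernelContext* context, int first_index,
                                int num_range_inputs) {
  for (int i = first_index; i < first_index + num_range_inputs; ++i) {
    const Tensor& t = context->input(i);
    OP_REQUIRES(context, t.NumElements() == 1,
                errors::InvalidArgument("Quantization range input ", i,
                                        " must contain exactly one element, got ",
                                        t.NumElements(), " elements"));
  }
}

}  // namespace tensorflow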
CVE-2022-29201 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'name': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-20'}, {'lang': 'en', 'value': 'CWE-476'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:-:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.QuantizedConv2D` does not fully validate the input arguments. In this case, references get bound to `nullptr` for each argument that is empty. 
Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.'}] | 2022-06-02T18:16Z | 2022-05-20T23:15Z | Improper Input Validation | The product receives input or data, but it does
not validate or incorrectly validates that the input has the
properties that are required to process the data safely and
correctly. |
Input validation is a frequently-used technique
for checking potentially dangerous inputs in order to
ensure that the inputs are safe for processing within the
code, or when communicating with other components. When
software does not validate input properly, an attacker is
able to craft the input in a form that is not expected by
the rest of the application. This will lead to parts of the
system receiving unintended input, which may result in
altered control flow, arbitrary control of a resource, or
arbitrary code execution.
Input validation is not the only technique for
processing input, however. Other techniques attempt to
transform potentially-dangerous input into something safe, such
as filtering (CWE-790) - which attempts to remove dangerous
inputs - or encoding/escaping (CWE-116), which attempts to
ensure that the input is not misinterpreted when it is included
in output to another component. Other techniques exist as well
(see CWE-138 for more examples.)
Input validation can be applied to:
raw data - strings, numbers, parameters, file contents, etc.
metadata - information about the raw data, such as headers or size
Data can be simple or structured. Structured data
can be composed of many nested layers, composed of
combinations of metadata and raw data, with other simple or
structured data.
Many properties of raw data or metadata may need
to be validated upon entry into the code, such
as:
specified quantities such as size, length, frequency, price, rate, number of operations, time, etc.
implied or derived quantities, such as the actual size of a file instead of a specified size
indexes, offsets, or positions into more complex data structures
symbolic keys or other elements into hash tables, associative arrays, etc.
well-formedness, i.e. syntactic correctness - compliance with expected syntax
lexical token correctness - compliance with rules for what is treated as a token
specified or derived type - the actual type of the input (or what the input appears to be)
consistency - between individual data elements, between raw data and metadata, between references, etc.
conformance to domain-specific rules, e.g. business logic
equivalence - ensuring that equivalent inputs are treated the same
authenticity, ownership, or other attestations about the input, e.g. a cryptographic signature to prove the source of the data
Implied or derived properties of data must often
be calculated or inferred by the code itself. Errors in
deriving properties may be considered a contributing factor
to improper input validation.
Note that "input validation" has very different
meanings to different people, or within different
classification schemes. Caution must be used when
referencing this CWE entry or mapping to it. For example,
some weaknesses might involve inadvertently giving control
to an attacker over an input when they should not be able
to provide an input at all, but sometimes this is referred
to as input validation.
Finally, it is important to emphasize that the
distinctions between input validation and output escaping
are often blurred, and developers must be careful to
understand the difference, including how input validation
is not always sufficient to prevent vulnerabilities,
especially when less stringent data types must be
supported, such as free-form text. Consider a SQL injection
scenario in which a person's last name is inserted into a
query. The name "O'Reilly" would likely pass the validation
step since it is a common last name in the English
language. However, this valid name cannot be directly
inserted into the database because it contains the "'"
apostrophe character, which would need to be escaped or
otherwise transformed. In this case, removing the
apostrophe might reduce the risk of SQL injection, but it
would produce incorrect behavior because the wrong name
would be recorded.
| https://cwe.mitre.org/data/definitions/20.html | 0 | Antonio Sanchez | 2022-04-29 15:22:06-07:00 | Fix undefined behavior in QuantizedConv2D
Added more input validation and tests. Prior to this, we could get
`nullptr` exceptions when attempting to access 0th elements of 0-sized
inputs, leading to security vulnerability bugs.
Also needed to modify `quantized_conv_ops_test.cc` for consistency.
Previously the CPU kernel did technically support passing tensors
of rank larger than 0 for min/max values. However, the XLA kernels do not.
PiperOrigin-RevId: 445518507 | 0f0b080ecde4d3dfec158d6f60da34d5e31693c4 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::TEST_F | tensorflow::TEST_F( QuantizedConv2DTest , SmallWithNoZero) | ['QuantizedConv2DTest', 'SmallWithNoZero'] | TEST_F(QuantizedConv2DTest, SmallWithNoZero) {
const int stride = 1;
TF_ASSERT_OK(NodeDefBuilder("quantized_conv_op", "QuantizedConv2D")
.Input(FakeInput(DT_QUINT8))
.Input(FakeInput(DT_QUINT8))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Attr("out_type", DataTypeToEnum<qint32>::v())
.Attr("strides", {1, stride, stride, 1})
.Attr("padding", "SAME")
.Finalize(node_def()));
TF_ASSERT_OK(InitOp());
const int depth = 1;
const int image_width = 4;
const int image_height = 3;
const int image_batch_count = 1;
// Here we're testing a slow implementation path, where zero is not
// representable in the image data and so simple border padding is not
// possible, so we have a min value greater than 0.
const float image_min = 1.0f;
const float image_max = 12.0f;
Tensor image_float(DT_FLOAT,
{image_batch_count, image_height, image_width, depth});
test::FillValues<float>(&image_float,
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12});
Tensor image_quantized =
FloatTensorToQuantized<quint8>(image_float, image_min, image_max);
const int filter_size = 3;
const int filter_count = 1;
const float filter_min = 1.0f;
const float filter_max = 9.0f;
Tensor filter_float(DT_FLOAT,
{filter_size, filter_size, depth, filter_count});
test::FillValues<float>(&filter_float, {1, 4, 7, 2, 5, 8, 3, 6, 9});
Tensor filter_quantized =
FloatTensorToQuantized<quint8>(filter_float, filter_min, filter_max);
AddInputFromArray<quint8>(image_quantized.shape(),
image_quantized.flat<quint8>());
AddInputFromArray<quint8>(filter_quantized.shape(),
filter_quantized.flat<quint8>());
AddInputFromArray<float>(TensorShape({1}), {image_min});
AddInputFromArray<float>(TensorShape({1}), {image_max});
AddInputFromArray<float>(TensorShape({1}), {filter_min});
AddInputFromArray<float>(TensorShape({1}), {filter_max});
TF_ASSERT_OK(RunOpKernel());
const int expected_width = image_width;
const int expected_height = image_height * filter_count;
Tensor expected_float(
DT_FLOAT, TensorShape({image_batch_count, expected_height, expected_width,
filter_count}));
test::FillValues<float>(&expected_float, {105, 150, 183, 95, 235, 312, 357,
178, 187, 234, 261, 121});
const Tensor& output_quantized = *GetOutput(0);
const float output_min = GetOutput(1)->flat<float>()(0);
const float output_max = GetOutput(2)->flat<float>()(0);
Tensor output_float =
QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max);
test::ExpectTensorNear<float>(expected_float, output_float, 1.0);
} | 587 | True | 1 |
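The CWE-20 discussion reproduced in this record talks about validating both the syntax of untrusted input and the quantities derived from it (sizes, lengths, counts) before they are used. A small, self-contained C++ sketch of that idea follows; it is generic, its function name and bound are invented for illustration, and it is not drawn from the TensorFlow patch.

#include <cstdint>
#include <optional>
#include <string>
#include <vector>

// Generic sketch: check both the syntax of an untrusted length field and the
// derived quantity it implies before using it to size an allocation.
std::optional<std::vector<char>> AllocateFromLengthField(
    const std::string& field, std::size_t max_len = 1 << 20) {
  if (field.empty() || field.size() > 10) return std::nullopt;  // syntactic bound
  for (char c : field) {
    if (c < '0' || c > '9') return std::nullopt;                // digits only
  }
  const std::uint64_t n = std::stoull(field);                   // derived quantity
  if (n > max_len) return std::nullopt;                         // semantic bound
  return std::vector<char>(static_cast<std::size_t>(n));
}

As the same discussion stresses, validation of this kind complements rather than replaces escaping: a legitimate value such as the name O'Reilly passes validation yet still needs parameterization or escaping before it is placed in a SQL query.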
CVE-2022-29201 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'name': 'https://github.com/tensorflow/tensorflow/commit/0f0b080ecde4d3dfec158d6f60da34d5e31693c4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-pqhm-4wvf-2jg8', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/kernels/quantized_conv_ops.cc', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-20'}, {'lang': 'en', 'value': 'CWE-476'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:-:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.QuantizedConv2D` does not fully validate the input arguments. In this case, references get bound to `nullptr` for each argument that is empty. 
Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.'}] | 2022-06-02T18:16Z | 2022-05-20T23:15Z | NULL Pointer Dereference | A NULL pointer dereference occurs when the application dereferences a pointer that it expects to be valid, but is NULL, typically causing a crash or exit. | NULL pointer dereference issues can occur through a number of flaws, including race conditions, and simple programming omissions.
| https://cwe.mitre.org/data/definitions/476.html | 0 | Antonio Sanchez | 2022-04-29 15:22:06-07:00 | Fix undefined behavior in QuantizedConv2D
Added more input validation and tests. Prior to this, we could get
`nullptr` exceptions when attempting to access 0th elements of 0-sized
inputs, leading to security vulnerability bugs.
Also needed to modify `quantized_conv_ops_test.cc` for consistency.
Previously the CPU kernel did technically support passing tensors
of rank larger than 0 for min/max values. However, the XLA kernels do not.
PiperOrigin-RevId: 445518507 | 0f0b080ecde4d3dfec158d6f60da34d5e31693c4 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::TEST_F | tensorflow::TEST_F( QuantizedConv2DTest , SmallWithNoZero) | ['QuantizedConv2DTest', 'SmallWithNoZero'] | TEST_F(QuantizedConv2DTest, SmallWithNoZero) {
const int stride = 1;
TF_ASSERT_OK(NodeDefBuilder("quantized_conv_op", "QuantizedConv2D")
.Input(FakeInput(DT_QUINT8))
.Input(FakeInput(DT_QUINT8))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Input(FakeInput(DT_FLOAT))
.Attr("out_type", DataTypeToEnum<qint32>::v())
.Attr("strides", {1, stride, stride, 1})
.Attr("padding", "SAME")
.Finalize(node_def()));
TF_ASSERT_OK(InitOp());
const int depth = 1;
const int image_width = 4;
const int image_height = 3;
const int image_batch_count = 1;
// Here we're testing a slow implementation path, where zero is not
// representable in the image data and so simple border padding is not
// possible, so we have a min value greater than 0.
const float image_min = 1.0f;
const float image_max = 12.0f;
Tensor image_float(DT_FLOAT,
{image_batch_count, image_height, image_width, depth});
test::FillValues<float>(&image_float,
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12});
Tensor image_quantized =
FloatTensorToQuantized<quint8>(image_float, image_min, image_max);
const int filter_size = 3;
const int filter_count = 1;
const float filter_min = 1.0f;
const float filter_max = 9.0f;
Tensor filter_float(DT_FLOAT,
{filter_size, filter_size, depth, filter_count});
test::FillValues<float>(&filter_float, {1, 4, 7, 2, 5, 8, 3, 6, 9});
Tensor filter_quantized =
FloatTensorToQuantized<quint8>(filter_float, filter_min, filter_max);
AddInputFromArray<quint8>(image_quantized.shape(),
image_quantized.flat<quint8>());
AddInputFromArray<quint8>(filter_quantized.shape(),
filter_quantized.flat<quint8>());
AddInputFromArray<float>(TensorShape({1}), {image_min});
AddInputFromArray<float>(TensorShape({1}), {image_max});
AddInputFromArray<float>(TensorShape({1}), {filter_min});
AddInputFromArray<float>(TensorShape({1}), {filter_max});
TF_ASSERT_OK(RunOpKernel());
const int expected_width = image_width;
const int expected_height = image_height * filter_count;
Tensor expected_float(
DT_FLOAT, TensorShape({image_batch_count, expected_height, expected_width,
filter_count}));
test::FillValues<float>(&expected_float, {105, 150, 183, 95, 235, 312, 357,
178, 187, 234, 261, 121});
const Tensor& output_quantized = *GetOutput(0);
const float output_min = GetOutput(1)->flat<float>()(0);
const float output_max = GetOutput(2)->flat<float>()(0);
Tensor output_float =
QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max);
test::ExpectTensorNear<float>(expected_float, output_float, 1.0);
} | 587 | True | 1 |
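Because this record classifies the same issue as a NULL pointer dereference, a generic illustration of that weakness may help: the sketch below (plain C++, hypothetical names, unrelated to the TensorFlow code above) shows the guard whose absence defines CWE-476, namely testing a possibly-null lookup result before dereferencing it.

#include <cstdio>
#include <string>
#include <unordered_map>

// A lookup that can fail returns nullptr; the caller must check the result.
const std::string* FindName(const std::unordered_map<int, std::string>& table,
                            int key) {
  auto it = table.find(key);
  return it == table.end() ? nullptr : &it->second;
}

int main() {
  const std::unordered_map<int, std::string> table{{1, "one"}};
  const std::string* name = FindName(table, 2);
  if (name == nullptr) {  // the check that CWE-476 bugs omit
    std::puts("key not found");
    return 1;
  }
  std::puts(name->c_str());
  return 0;
}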
CVE-2022-29203 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-jjm6-4vf7-cjh4', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-jjm6-4vf7-cjh4', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/security/advisory/tfsa-2021-198.md', 'name': 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/security/advisory/tfsa-2021-198.md', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/acd56b8bcb72b163c834ae4f18469047b001fadf', 'name': 'https://github.com/tensorflow/tensorflow/commit/acd56b8bcb72b163c834ae4f18469047b001fadf', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-190'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:-:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.SpaceToBatchND` (in all backends such as XLA and handwritten kernels) is vulnerable to an integer overflow: The result of this integer overflow is used to allocate the output tensor, hence we get a denial of service via a `CHECK`-failure (assertion failure), as in TFSA-2021-198. 
Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.'}] | 2022-06-02T19:16Z | 2022-05-20T23:15Z | Integer Overflow or Wraparound | The software performs a calculation that can produce an integer overflow or wraparound, when the logic assumes that the resulting value will always be larger than the original value. This can introduce other weaknesses when the calculation is used for resource management or execution control. | An integer overflow or wraparound occurs when an integer value is incremented to a value that is too large to store in the associated representation. When this occurs, the value may wrap to become a very small or negative number. While this may be intended behavior in circumstances that rely on wrapping, it can have security consequences if the wrap is unexpected. This is especially the case if the integer overflow can be triggered using user-supplied inputs. This becomes security-critical when the result is used to control looping, make a security decision, or determine the offset or size in behaviors such as memory allocation, copying, concatenation, etc.
| https://cwe.mitre.org/data/definitions/190.html | 0 | Sagun Bajra | 2022-04-29 16:08:37-07:00 | Fix security vulnerability with SpaceToBatchNDOp.
PiperOrigin-RevId: 445527615 | acd56b8bcb72b163c834ae4f18469047b001fadf | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::SpaceToBatch | tensorflow::SpaceToBatch( XlaOpKernelContext * ctx , const xla :: XlaOp & input , DataType input_dtype , const TensorShape & input_tensor_shape , absl :: Span<const int64_t> block_shape , const xla :: Literal & paddings) | ['ctx', 'input', 'input_dtype', 'input_tensor_shape', 'block_shape', 'paddings'] | void SpaceToBatch(XlaOpKernelContext* ctx, const xla::XlaOp& input,
DataType input_dtype, const TensorShape& input_tensor_shape,
absl::Span<const int64_t> block_shape,
const xla::Literal& paddings) {
const int input_rank = input_tensor_shape.dims();
const absl::InlinedVector<int64_t, 4> input_shape =
input_tensor_shape.dim_sizes();
const int block_rank = block_shape.size();
OP_REQUIRES(
ctx, input_rank >= 1 + block_rank,
errors::InvalidArgument("input rank should be >= ", 1 + block_rank,
" instead of ", input_rank));
absl::Span<const int64_t> remainder_shape(input_shape);
remainder_shape.remove_prefix(1 + block_rank);
OP_REQUIRES(
ctx,
paddings.shape().rank() == 2 &&
block_rank == xla::ShapeUtil::GetDimension(paddings.shape(), 0) &&
2 == xla::ShapeUtil::GetDimension(paddings.shape(), 1),
errors::InvalidArgument("paddings should have shape [", block_rank,
", 2] instead of ",
xla::ShapeUtil::HumanString(paddings.shape())));
xla::XlaBuilder* b = ctx->builder();
// 1. Zero-pad the start and end of dimensions `[1, ..., M]` of the
// input according to `paddings` to produce `padded` of shape `padded_shape`.
xla::PaddingConfig padding_config;
std::vector<int64_t> padded_shape(input_shape.begin(), input_shape.end());
int64_t block_num_elems = 1LL;
padding_config.add_dimensions(); // Don't pad the batch dimension.
for (int i = 0; i < block_rank; ++i) {
auto* dim = padding_config.add_dimensions();
int64_t pad_start = paddings.Get<int64_t>({i, 0});
int64_t pad_end = paddings.Get<int64_t>({i, 1});
OP_REQUIRES(ctx, pad_start >= 0 && pad_end >= 0,
errors::InvalidArgument("Paddings must be non-negative"));
dim->set_edge_padding_low(pad_start);
dim->set_edge_padding_high(pad_end);
padded_shape[1 + i] += pad_start + pad_end;
block_num_elems *= block_shape[i];
}
// Don't pad the remainder dimensions.
for (int i = 0; i < remainder_shape.size(); ++i) {
padding_config.add_dimensions();
}
OP_REQUIRES(ctx, block_num_elems > 0,
errors::InvalidArgument(
"The product of the block dimensions must be positive"));
xla::XlaOp padded =
xla::Pad(input, XlaHelpers::Zero(b, input_dtype), padding_config);
// 2. Reshape `padded` to `reshaped_padded` of shape:
//
// [batch] +
// [padded_shape[1] / block_shape[0],
// block_shape[0],
// ...,
// padded_shape[M] / block_shape[M-1],
// block_shape[M-1]] +
// remaining_shape
const int64_t batch_size = input_shape[0];
std::vector<int64_t> reshaped_padded_shape(input_rank + block_rank);
reshaped_padded_shape[0] = batch_size;
for (int i = 0; i < block_rank; ++i) {
OP_REQUIRES(ctx, padded_shape[1 + i] % block_shape[i] == 0,
errors::InvalidArgument("padded_shape[", 1 + i,
"]=", padded_shape[1 + i],
" is not divisible by block_shape[", i,
"]=", block_shape[i]));
reshaped_padded_shape[1 + i * 2] = padded_shape[1 + i] / block_shape[i];
reshaped_padded_shape[1 + i * 2 + 1] = block_shape[i];
}
std::copy(remainder_shape.begin(), remainder_shape.end(),
reshaped_padded_shape.begin() + 1 + 2 * block_rank);
xla::XlaOp reshaped_padded = xla::Reshape(padded, reshaped_padded_shape);
// 3. Permute dimensions of `reshaped_padded` to produce
// `permuted_reshaped_padded` of shape:
//
// block_shape +
// [batch] +
// [padded_shape[1] / block_shape[0],
// ...,
// padded_shape[M] / block_shape[M-1]] +
// remaining_shape
std::vector<int64_t> permutation(reshaped_padded_shape.size());
for (int i = 0; i < block_rank; ++i) {
permutation[i] = 1 + 2 * i + 1;
permutation[block_rank + 1 + i] = 1 + 2 * i;
}
permutation[block_rank] = 0;
std::iota(permutation.begin() + 1 + block_rank * 2, permutation.end(),
1 + block_rank * 2);
xla::XlaOp permuted_reshaped_padded =
xla::Transpose(reshaped_padded, permutation);
// 4. Reshape `permuted_reshaped_padded` to flatten `block_shape` into the
// batch dimension, producing an output tensor of shape:
//
// [batch * prod(block_shape)] +
// [padded_shape[1] / block_shape[0],
// ...,
// padded_shape[M] / block_shape[M-1]] +
// remaining_shape
// Determine the length of the prefix of block dims that can be combined
// into the batch dimension due to having no padding and block_shape=1.
std::vector<int64_t> output_shape(input_rank);
output_shape[0] = batch_size * block_num_elems;
for (int i = 0; i < block_rank; ++i) {
output_shape[1 + i] = padded_shape[1 + i] / block_shape[i];
}
std::copy(remainder_shape.begin(), remainder_shape.end(),
output_shape.begin() + 1 + block_rank);
xla::XlaOp output = xla::Reshape(permuted_reshaped_padded, output_shape);
ctx->SetOutput(0, output);
} | 814 | True | 1 |
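In the XLA lowering above, `block_num_elems` is accumulated with `block_num_elems *= block_shape[i]` and only its sign is checked afterwards, which is the unchecked multiplication behind the integer overflow this advisory describes. Below is a hedged sketch of one way to guard such an accumulation; it is illustrative only (not the committed fix) and assumes a compiler that provides `__builtin_mul_overflow`, as GCC and Clang do.

#include <cstdint>
#include <optional>
#include <vector>

// Sketch: multiply block dimensions together, reporting failure instead of
// wrapping when the running product no longer fits in int64_t.
std::optional<int64_t> CheckedBlockProduct(const std::vector<int64_t>& block_shape) {
  int64_t product = 1;
  for (int64_t d : block_shape) {
    if (d <= 0) return std::nullopt;                     // block dims must be positive
    if (__builtin_mul_overflow(product, d, &product)) {  // true when int64_t overflows
      return std::nullopt;
    }
  }
  return product;
}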
CVE-2022-29203 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-jjm6-4vf7-cjh4', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-jjm6-4vf7-cjh4', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/security/advisory/tfsa-2021-198.md', 'name': 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/security/advisory/tfsa-2021-198.md', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/acd56b8bcb72b163c834ae4f18469047b001fadf', 'name': 'https://github.com/tensorflow/tensorflow/commit/acd56b8bcb72b163c834ae4f18469047b001fadf', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-190'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:-:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.SpaceToBatchND` (in all backends such as XLA and handwritten kernels) is vulnerable to an integer overflow: The result of this integer overflow is used to allocate the output tensor, hence we get a denial of service via a `CHECK`-failure (assertion failure), as in TFSA-2021-198. 
Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.'}] | 2022-06-02T19:16Z | 2022-05-20T23:15Z | Integer Overflow or Wraparound | The software performs a calculation that can produce an integer overflow or wraparound, when the logic assumes that the resulting value will always be larger than the original value. This can introduce other weaknesses when the calculation is used for resource management or execution control. | An integer overflow or wraparound occurs when an integer value is incremented to a value that is too large to store in the associated representation. When this occurs, the value may wrap to become a very small or negative number. While this may be intended behavior in circumstances that rely on wrapping, it can have security consequences if the wrap is unexpected. This is especially the case if the integer overflow can be triggered using user-supplied inputs. This becomes security-critical when the result is used to control looping, make a security decision, or determine the offset or size in behaviors such as memory allocation, copying, concatenation, etc.
| https://cwe.mitre.org/data/definitions/190.html | 0 | Sagun Bajra | 2022-04-29 16:08:37-07:00 | Fix security vulnerability with SpaceToBatchNDOp.
PiperOrigin-RevId: 445527615 | acd56b8bcb72b163c834ae4f18469047b001fadf | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::shape_inference::InferenceContext::Multiply | tensorflow::shape_inference::InferenceContext::Multiply( DimensionHandle first , DimensionOrConstant second , DimensionHandle * out) | ['first', 'second', 'out'] | Status InferenceContext::Multiply(DimensionHandle first,
DimensionOrConstant second,
DimensionHandle* out) {
const int64_t first_value = Value(first);
const int64_t second_value = Value(second);
// Special cases.
if (first_value == 0) {
*out = first;
} else if (second_value == 0) {
*out = MakeDim(second);
} else if (first_value == 1) {
*out = MakeDim(second);
} else if (second_value == 1) {
*out = first;
} else if (first_value == kUnknownDim || second_value == kUnknownDim) {
*out = UnknownDim();
} else {
// Invariant: Both values are known and greater than 1.
const int64_t product = first_value * second_value;
if (product < 0) {
return errors::InvalidArgument(
"Negative dimension size caused by overflow when multiplying ",
first_value, " and ", second_value);
}
*out = MakeDim(product);
}
return Status::OK();
} | 163 | True | 1 |
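The `Multiply` shape-inference helper above only rejects a product that turned out negative, which both relies on signed overflow (undefined behavior in C++) and misses wraps that land on a positive value. A common alternative, sketched below for the invariant stated in that function (both operands known and greater than 1), is to test with a division before multiplying; the helper name is invented for the example and this is not the committed fix.

#include <cstdint>
#include <limits>
#include <optional>

// Sketch: pre-check the product so the multiplication itself can never overflow.
// Assumes a > 1 and b > 1, matching the invariant in the function above.
std::optional<int64_t> SafeMultiplyPositive(int64_t a, int64_t b) {
  if (a > std::numeric_limits<int64_t>::max() / b) {
    return std::nullopt;  // a * b would exceed int64_t
  }
  return a * b;
}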
CVE-2022-29203 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-jjm6-4vf7-cjh4', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-jjm6-4vf7-cjh4', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/security/advisory/tfsa-2021-198.md', 'name': 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/security/advisory/tfsa-2021-198.md', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.6.4', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.7.2', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/commit/acd56b8bcb72b163c834ae4f18469047b001fadf', 'name': 'https://github.com/tensorflow/tensorflow/commit/acd56b8bcb72b163c834ae4f18469047b001fadf', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-190'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.7.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.7.0', 'versionEndExcluding': '2.7.2', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2.6.4', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.9.0:rc0:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:-:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.SpaceToBatchND` (in all backends such as XLA and handwritten kernels) is vulnerable to an integer overflow: The result of this integer overflow is used to allocate the output tensor, hence we get a denial of service via a `CHECK`-failure (assertion failure), as in TFSA-2021-198. 
Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.'}] | 2022-06-02T19:16Z | 2022-05-20T23:15Z | Integer Overflow or Wraparound | The software performs a calculation that can produce an integer overflow or wraparound, when the logic assumes that the resulting value will always be larger than the original value. This can introduce other weaknesses when the calculation is used for resource management or execution control. | An integer overflow or wraparound occurs when an integer value is incremented to a value that is too large to store in the associated representation. When this occurs, the value may wrap to become a very small or negative number. While this may be intended behavior in circumstances that rely on wrapping, it can have security consequences if the wrap is unexpected. This is especially the case if the integer overflow can be triggered using user-supplied inputs. This becomes security-critical when the result is used to control looping, make a security decision, or determine the offset or size in behaviors such as memory allocation, copying, concatenation, etc.
| https://cwe.mitre.org/data/definitions/190.html | 0 | Sagun Bajra | 2022-04-29 16:08:37-07:00 | Fix security vulnerability with SpaceToBatchNDOp.
PiperOrigin-RevId: 445527615 | acd56b8bcb72b163c834ae4f18469047b001fadf | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::SpaceToBatchOpCompute | tensorflow::SpaceToBatchOpCompute( OpKernelContext * context , const Tensor & orig_input_tensor , const Tensor & orig_block_shape , const Tensor & orig_paddings) | ['context', 'orig_input_tensor', 'orig_block_shape', 'orig_paddings'] | Status SpaceToBatchOpCompute(OpKernelContext* context,
const Tensor& orig_input_tensor,
const Tensor& orig_block_shape,
const Tensor& orig_paddings) {
const int input_dims = orig_input_tensor.dims();
if (!TensorShapeUtils::IsVector(orig_block_shape.shape())) {
return errors::InvalidArgument("block_shape rank should be 1 instead of ",
orig_block_shape.dims());
}
const int block_dims = orig_block_shape.dim_size(0);
if (orig_input_tensor.dims() < 1 + block_dims) {
return errors::InvalidArgument("input rank should be >= ", 1 + block_dims,
" instead of ", orig_input_tensor.dims());
}
if (!(TensorShapeUtils::IsMatrix(orig_paddings.shape()) &&
block_dims == orig_paddings.dim_size(0) &&
2 == orig_paddings.dim_size(1))) {
return errors::InvalidArgument("paddings should have shape [", block_dims,
", 2] instead of ",
orig_paddings.shape().DebugString());
}
// To avoid out-of-bounds access in the case that the block_shape and/or
// paddings tensors are concurrently modified, we must copy the values.
gtl::InlinedVector<int64_t, 4> block_shape;
gtl::InlinedVector<int64_t, 8> paddings;
internal::spacetobatch::SubtleMustCopyFlat(orig_block_shape, &block_shape);
internal::spacetobatch::SubtleMustCopyFlat(orig_paddings, &paddings);
// Determine the length of the prefix of block dims that can be combined
// into the batch dimension due to having no padding and block_shape=1.
int removed_prefix_block_dims = 0;
for (; removed_prefix_block_dims < block_dims; ++removed_prefix_block_dims) {
const int dim = removed_prefix_block_dims;
if (paddings[2 * dim] != 0 || paddings[2 * dim + 1] != 0 ||
block_shape[dim] != 1) {
break;
}
}
// Determine the length of the suffix of block dims that can be combined
// into the depth dimension due to having no padding and block_shape=1.
int removed_suffix_block_dims = 0;
for (; removed_suffix_block_dims < block_dims - removed_prefix_block_dims;
++removed_suffix_block_dims) {
const int dim = block_dims - 1 - removed_suffix_block_dims;
if (paddings[dim * 2] != 0 || paddings[dim * 2 + 1] != 0 ||
block_shape[dim] != 1) {
break;
}
}
// Compute the product of the block_shape values.
int64_t block_shape_product = 1;
for (int block_dim = 0; block_dim < block_dims; ++block_dim) {
block_shape_product *= block_shape[block_dim];
}
if (block_shape_product <= 0) {
return errors::InvalidArgument(
"Product of block sizes must be positive, got ", block_shape_product);
}
const int internal_block_dims =
block_dims - removed_prefix_block_dims - removed_suffix_block_dims;
if (internal_block_dims > kMaxSpaceToBatchBlockDims) {
return errors::InvalidArgument(
"Maximum number of non-combined block dimensions is ",
internal_block_dims, " but must not exceed ",
kMaxSpaceToBatchBlockDims);
}
if (internal_block_dims == 0) {
context->set_output(0, orig_input_tensor);
return Status::OK();
}
// For the purpose of computing the result, the input will be treated as
// having this shape, of rank 2 + internal_block_dims.
TensorShape internal_input_shape;
// For the purpose of computing the result, the output will be treated as
// having this shape, of rank 2 + internal_block_dims.
TensorShape internal_output_shape;
// The actual output shape exposed to callers.
TensorShape external_output_shape;
external_output_shape.AddDim(orig_input_tensor.dim_size(0) *
block_shape_product);
int64_t input_batch_size = orig_input_tensor.dim_size(0);
for (int block_dim = 0; block_dim < removed_prefix_block_dims; ++block_dim) {
const int64_t size = orig_input_tensor.dim_size(block_dim + 1);
input_batch_size *= size;
external_output_shape.AddDim(size);
}
internal_input_shape.AddDim(input_batch_size);
internal_output_shape.AddDim(input_batch_size * block_shape_product);
for (int block_dim = removed_prefix_block_dims;
block_dim < block_dims - removed_suffix_block_dims; ++block_dim) {
const int64_t pad_start = paddings[2 * block_dim],
pad_end = paddings[2 * block_dim + 1];
if (pad_start < 0 || pad_end < 0) {
return errors::InvalidArgument("Paddings must be non-negative");
}
const int64_t input_size = orig_input_tensor.dim_size(block_dim + 1);
const int64_t block_shape_value = block_shape[block_dim];
const int64_t padded_size = input_size + pad_start + pad_end;
if (padded_size % block_shape_value != 0) {
return errors::InvalidArgument("padded_shape[", block_dim,
"]=", padded_size,
" is not divisible by block_shape[",
block_dim, "]=", block_shape_value);
}
internal_input_shape.AddDim(input_size);
const int64_t output_size = padded_size / block_shape_value;
internal_output_shape.AddDim(output_size);
external_output_shape.AddDim(output_size);
}
int64_t depth = 1;
for (int dim = block_dims - removed_suffix_block_dims + 1; dim < input_dims;
++dim) {
const int64_t size = orig_input_tensor.dim_size(dim);
external_output_shape.AddDim(size);
depth *= size;
}
internal_input_shape.AddDim(depth);
internal_output_shape.AddDim(depth);
// Allocate output tensor.
Tensor* output_tensor = nullptr;
TF_RETURN_IF_ERROR(
context->allocate_output(0, external_output_shape, &output_tensor));
const int64_t* internal_paddings = &paddings[2 * removed_prefix_block_dims];
const int64_t* internal_block_shape = &block_shape[removed_prefix_block_dims];
switch (internal_block_dims) {
#define TF_SPACETOBATCH_BLOCK_DIMS_CASE(NUM_BLOCK_DIMS) \
case NUM_BLOCK_DIMS: { \
TF_RETURN_IF_ERROR( \
functor::SpaceToBatchFunctor<Device, T, NUM_BLOCK_DIMS, false>()( \
context->eigen_device<Device>(), \
orig_input_tensor.shaped<T, NUM_BLOCK_DIMS + 2>( \
internal_input_shape.dim_sizes()), \
internal_block_shape, internal_paddings, \
output_tensor->shaped<T, NUM_BLOCK_DIMS + 2>( \
internal_output_shape.dim_sizes()))); \
} break; \
/**/
TF_SPACETOBATCH_FOR_EACH_NUM_BLOCK_DIMS(TF_SPACETOBATCH_BLOCK_DIMS_CASE)
#undef TF_SPACETOBATCH_BLOCK_DIMS_CASE
}
return Status::OK();
} | 798 | True | 1 |
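In the CPU kernel above, `external_output_shape.AddDim(orig_input_tensor.dim_size(0) * block_shape_product)` multiplies two attacker-influenced quantities with no guard before the output allocation, which is where the `CHECK`-failure denial of service originates. A hedged sketch of a guarded computation follows; it assumes a checked-multiply helper in the spirit of TensorFlow's `MultiplyWithoutOverflow` (declared in `tensorflow/core/util/overflow.h`, returning a negative value on overflow for non-negative inputs), and it is illustrative rather than the actual patch.

#include "tensorflow/core/platform/errors.h"
#include "tensorflow/core/util/overflow.h"  // MultiplyWithoutOverflow (assumed helper)

namespace tensorflow {

// Sketch: compute the output batch dimension, failing with InvalidArgument
// instead of letting the product wrap and trip a CHECK during allocation.
Status ComputeOutputBatch(int64_t input_batch, int64_t block_shape_product,
                          int64_t* out_batch) {
  const int64_t batch = MultiplyWithoutOverflow(input_batch, block_shape_product);
  if (batch < 0) {
    return errors::InvalidArgument("Output batch size overflows int64_t: ",
                                   input_batch, " * ", block_shape_product);
  }
  *out_batch = batch;
  return Status::OK();
}

}  // namespace tensorflow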
CVE-2022-29210 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/1b85a28d395dc91f4d22b5f9e1e9a22e92ccecd6', 'name': 'https://github.com/tensorflow/tensorflow/commit/1b85a28d395dc91f4d22b5f9e1e9a22e92ccecd6', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-hc2f-7r5r-r2hg', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-hc2f-7r5r-r2hg', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/framework/tensor_key.h#L53-L64', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/framework/tensor_key.h#L53-L64', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}, {'lang': 'en', 'value': 'CWE-122'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. In version 2.8.0, the `TensorKey` hash function used total estimated `AllocatedBytes()`, which (a) is an estimate per tensor, and (b) is a very poor hash function for constants (e.g. `int32_t`). It also tried to access individual tensor bytes through `tensor.data()` of size `AllocatedBytes()`. This led to ASAN failures because the `AllocatedBytes()` is an estimate of total bytes allocated by a tensor, including any pointed-to constructs (e.g. strings), and does not refer to contiguous bytes in the `.data()` buffer. The discoverers could not use this byte vector anyway because types such as `tstring` include pointers, whereas they needed to hash the string values themselves. This issue is patched in Tensorflow versions 2.9.0 and 2.8.1.'}] | 2022-06-03T02:33Z | 2022-05-21T00:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Antonio Sanchez | 2022-05-03 12:47:21-07:00 | Fix TensorKey hash function.
The original hash function only used total estimated `AllocatedBytes()`, which (a) is an estimate per tensor, and (b) is a very poor hash function for constants (e.g. `int32_t`).
It also tried to access individual tensor bytes through `tensor.data()` of size `AllocatedBytes()`. This led to ASAN failures because the `AllocatedBytes()` is an estimate of total bytes allocated by a tensor, including any pointed-to constructs (e.g. strings), and does not refer to contiguous bytes in the `.data()` buffer. We couldn't use this byte vector anyways, since types like `tstring` include pointers, whereas we need to hash the string values themselves.
Modified the hash function to more closely mirror the `==` operator. This correctly handles `tstring` and any numeric types that do have contiguous storage. Other types are currently left as unimplemented.
PiperOrigin-RevId: 446265413 | 1b85a28d395dc91f4d22b5f9e1e9a22e92ccecd6 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::TensorKey::AbslHashValue | tensorflow::TensorKey::AbslHashValue( H h , const TensorKey & k) | ['h', 'k'] | friend H AbslHashValue(H h, const TensorKey& k) {
const uint8* d = static_cast<uint8*>(k.data());
size_t s = k.AllocatedBytes();
std::vector<uint8> vec;
vec.reserve(s);
for (int i = 0; i < s; i++) {
vec.push_back(d[i]);
}
return H::combine(std::move(h), s);
} | 95 | True | 1 |
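The commit message for this record explains that the fix makes the hash mirror the `==` operator, hashing actual element values (including `tstring` contents) instead of an `AllocatedBytes()` estimate over a buffer that is not contiguous for string tensors. The sketch below illustrates that idea as a free function; it is not the committed patch, and it assumes the publicly documented `Tensor` accessors (`dtype()`, `flat<tstring>()`, `tensor_data()`, `NumElements()`) and Abseil's `H::combine_contiguous` hashing hook.

#include "absl/strings/string_view.h"
#include "tensorflow/core/framework/tensor.h"

// Sketch: hash what operator== compares, i.e. dtype, element count, and the
// element values themselves (string characters for DT_STRING, raw element
// bytes for numeric dtypes with contiguous storage).
template <typename H>
H HashTensorValues(H h, const tensorflow::Tensor& t) {
  if (t.dtype() == tensorflow::DT_STRING) {
    const auto strs = t.flat<tensorflow::tstring>();
    for (int64_t i = 0; i < strs.size(); ++i) {
      h = H::combine(std::move(h),
                     absl::string_view(strs(i).data(), strs(i).size()));
    }
  } else {
    const auto bytes = t.tensor_data();  // contiguous value buffer
    h = H::combine_contiguous(std::move(h), bytes.data(), bytes.size());
  }
  return H::combine(std::move(h), t.dtype(), t.NumElements());
}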
CVE-2022-29210 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/1b85a28d395dc91f4d22b5f9e1e9a22e92ccecd6', 'name': 'https://github.com/tensorflow/tensorflow/commit/1b85a28d395dc91f4d22b5f9e1e9a22e92ccecd6', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-hc2f-7r5r-r2hg', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-hc2f-7r5r-r2hg', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/framework/tensor_key.h#L53-L64', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/framework/tensor_key.h#L53-L64', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}, {'lang': 'en', 'value': 'CWE-122'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. In version 2.8.0, the `TensorKey` hash function used total estimated `AllocatedBytes()`, which (a) is an estimate per tensor, and (b) is a very poor hash function for constants (e.g. `int32_t`). It also tried to access individual tensor bytes through `tensor.data()` of size `AllocatedBytes()`. This led to ASAN failures because the `AllocatedBytes()` is an estimate of total bytes allocated by a tensor, including any pointed-to constructs (e.g. strings), and does not refer to contiguous bytes in the `.data()` buffer. The discoverers could not use this byte vector anyway because types such as `tstring` include pointers, whereas they needed to hash the string values themselves. This issue is patched in Tensorflow versions 2.9.0 and 2.8.1.'}] | 2022-06-03T02:33Z | 2022-05-21T00:15Z | Heap-based Buffer Overflow | A heap overflow condition is a buffer overflow, where the buffer that can be overwritten is allocated in the heap portion of memory, generally meaning that the buffer was allocated using a routine such as malloc(). | https://cwe.mitre.org/data/definitions/122.html | 0 | Antonio Sanchez | 2022-05-03 12:47:21-07:00 | Fix TensorKey hash function.
The original hash function only used total estimated `AllocatedBytes()`, which (a) is an estimate per tensor, and (b) is a very poor hash function for constants (e.g. `int32_t`).
It also tried to access individual tensor bytes through `tensor.data()` of size `AllocatedBytes()`. This led to ASAN failures because the `AllocatedBytes()` is an estimate of total bytes allocated by a tensor, including any pointed-to constructs (e.g. strings), and does not refer to contiguous bytes in the `.data()` buffer. We couldn't use this byte vector anyways, since types like `tstring` include pointers, whereas we need to hash the string values themselves.
Modified the hash function to more closely mirror the `==` operator. This correctly handles `tstring` and any numeric types that do have contiguous storage. Other types are currently left as unimplemented.
PiperOrigin-RevId: 446265413 | 1b85a28d395dc91f4d22b5f9e1e9a22e92ccecd6 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::TensorKey::AbslHashValue | tensorflow::TensorKey::AbslHashValue( H h , const TensorKey & k) | ['h', 'k'] | friend H AbslHashValue(H h, const TensorKey& k) {
const uint8* d = static_cast<uint8*>(k.data());
size_t s = k.AllocatedBytes();
std::vector<uint8> vec;
vec.reserve(s);
for (int i = 0; i < s; i++) {
vec.push_back(d[i]);
}
return H::combine(std::move(h), s);
} | 95 | True | 1 |
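The commit message above notes that tstring elements contain pointers, so hashing the raw bytes behind data() mixes heap addresses into the hash. A standalone illustration with std::string (standard library only, not TensorFlow code): two equal strings have different object bytes, so a raw-byte hash separates values that operator== treats as equal, while a value-based hash does not.

#include <cstddef>
#include <functional>
#include <iostream>
#include <string>

// FNV-1a over raw object bytes, standing in for "hash whatever data() points at".
static std::size_t HashRawBytes(const void* p, std::size_t n) {
  std::size_t h = 14695981039346656037ull;
  const unsigned char* b = static_cast<const unsigned char*>(p);
  for (std::size_t i = 0; i < n; ++i) {
    h ^= b[i];
    h *= 1099511628211ull;
  }
  return h;
}

int main() {
  std::string a = "a value long enough to live on the heap, outside any SSO buffer";
  std::string b = a;  // equal value, separate heap allocation
  std::cout << (a == b) << "\n";  // 1: the values compare equal
  // Usually 0: the object bytes of a and b hold different heap pointers.
  std::cout << (HashRawBytes(&a, sizeof a) == HashRawBytes(&b, sizeof b)) << "\n";
  // 1: a value-based hash agrees with operator==.
  std::cout << (std::hash<std::string>{}(a) == std::hash<std::string>{}(b)) << "\n";
}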
|
CVE-2022-29210 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/1b85a28d395dc91f4d22b5f9e1e9a22e92ccecd6', 'name': 'https://github.com/tensorflow/tensorflow/commit/1b85a28d395dc91f4d22b5f9e1e9a22e92ccecd6', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-hc2f-7r5r-r2hg', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-hc2f-7r5r-r2hg', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/framework/tensor_key.h#L53-L64', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/framework/tensor_key.h#L53-L64', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}, {'lang': 'en', 'value': 'CWE-122'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. In version 2.8.0, the `TensorKey` hash function used total estimated `AllocatedBytes()`, which (a) is an estimate per tensor, and (b) is a very poor hash function for constants (e.g. `int32_t`). It also tried to access individual tensor bytes through `tensor.data()` of size `AllocatedBytes()`. This led to ASAN failures because the `AllocatedBytes()` is an estimate of total bytes allocated by a tensor, including any pointed-to constructs (e.g. strings), and does not refer to contiguous bytes in the `.data()` buffer. The discoverers could not use this byte vector anyway because types such as `tstring` include pointers, whereas they needed to hash the string values themselves. This issue is patched in Tensorflow versions 2.9.0 and 2.8.1.'}] | 2022-06-03T02:33Z | 2022-05-21T00:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Antonio Sanchez | 2022-05-03 12:47:21-07:00 | Fix TensorKey hash function.
The original hash function only used total estimated `AllocatedBytes()`, which (a) is an estimate per tensor, and (b) is a very poor hash function for constants (e.g. `int32_t`).
It also tried to access individual tensor bytes through `tensor.data()` of size `AllocatedBytes()`. This led to ASAN failures because the `AllocatedBytes()` is an estimate of total bytes allocated by a tensor, including any pointed-to constructs (e.g. strings), and does not refer to contiguous bytes in the `.data()` buffer. We couldn't use this byte vector anyways, since types like `tstring` include pointers, whereas we need to hash the string values themselves.
Modified the hash function to more closely mirror the `==` operator. This correctly handles `tstring` and any numeric types that do have contiguous storage. Other types are currently left as unimplemented.
PiperOrigin-RevId: 446265413 | 1b85a28d395dc91f4d22b5f9e1e9a22e92ccecd6 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::TensorKey::operator == | tensorflow::TensorKey::operator ==( const TensorKey & t1 , const TensorKey & t2) | ['t1', 't2'] | friend bool operator==(const TensorKey& t1, const TensorKey& t2) {
if (t1.dtype() != t2.dtype() || t1.shape() != t2.shape()) {
return false;
}
if (DataTypeCanUseMemcpy(t1.dtype())) {
return t1.tensor_data() == t2.tensor_data();
}
if (t1.dtype() == DT_STRING) {
const auto s1 = t1.unaligned_flat<tstring>();
const auto s2 = t2.unaligned_flat<tstring>();
for (int64_t i = 0, n = t1.NumElements(); i < n; ++i) {
if (TF_PREDICT_FALSE(s1(i) != s2(i))) {
return false;
}
}
return true;
}
return false;
} | 160 | True | 1 |
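The comparison above checks dtype, shape, then either the contiguous tensor_data() bytes or the individual tstring values. A hedged, self-contained toy model (not TensorFlow's TensorKey and not its actual patch) of a hash shaped the same way: combine exactly what the equality reads, hashing string contents rather than their storage.

#include <cstdint>
#include <string>
#include <utility>
#include <vector>
#include "absl/hash/hash.h"

struct ToyKey {
  int dtype = 0;                       // stands in for DataType
  std::vector<int64_t> shape;
  std::vector<uint8_t> numeric_bytes;  // populated when the dtype is memcpy-able
  std::vector<std::string> strings;    // populated when the dtype is string-like

  friend bool operator==(const ToyKey& a, const ToyKey& b) {
    return a.dtype == b.dtype && a.shape == b.shape &&
           a.numeric_bytes == b.numeric_bytes && a.strings == b.strings;
  }

  template <typename H>
  friend H AbslHashValue(H h, const ToyKey& k) {
    // Mirror operator==: dtype, shape, and the element values themselves.
    return H::combine(std::move(h), k.dtype, k.shape, k.numeric_bytes,
                      k.strings);
  }
};

// absl::Hash<ToyKey>{} now agrees with == for numeric and string payloads alike.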
CVE-2022-29210 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/1b85a28d395dc91f4d22b5f9e1e9a22e92ccecd6', 'name': 'https://github.com/tensorflow/tensorflow/commit/1b85a28d395dc91f4d22b5f9e1e9a22e92ccecd6', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-hc2f-7r5r-r2hg', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-hc2f-7r5r-r2hg', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.8.1', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'name': 'https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/framework/tensor_key.h#L53-L64', 'name': 'https://github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/core/framework/tensor_key.h#L53-L64', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}, {'lang': 'en', 'value': 'CWE-122'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:2.8.0:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'TensorFlow is an open source platform for machine learning. In version 2.8.0, the `TensorKey` hash function used total estimated `AllocatedBytes()`, which (a) is an estimate per tensor, and (b) is a very poor hash function for constants (e.g. `int32_t`). It also tried to access individual tensor bytes through `tensor.data()` of size `AllocatedBytes()`. This led to ASAN failures because the `AllocatedBytes()` is an estimate of total bytes allocated by a tensor, including any pointed-to constructs (e.g. strings), and does not refer to contiguous bytes in the `.data()` buffer. The discoverers could not use this byte vector anyway because types such as `tstring` include pointers, whereas they needed to hash the string values themselves. This issue is patched in Tensorflow versions 2.9.0 and 2.8.1.'}] | 2022-06-03T02:33Z | 2022-05-21T00:15Z | Heap-based Buffer Overflow | A heap overflow condition is a buffer overflow, where the buffer that can be overwritten is allocated in the heap portion of memory, generally meaning that the buffer was allocated using a routine such as malloc(). | https://cwe.mitre.org/data/definitions/122.html | 0 | Antonio Sanchez | 2022-05-03 12:47:21-07:00 | Fix TensorKey hash function.
The original hash function only used total estimated `AllocatedBytes()`, which (a) is an estimate per tensor, and (b) is a very poor hash function for constants (e.g. `int32_t`).
It also tried to access individual tensor bytes through `tensor.data()` of size `AllocatedBytes()`. This led to ASAN failures because the `AllocatedBytes()` is an estimate of total bytes allocated by a tensor, including any pointed-to constructs (e.g. strings), and does not refer to contiguous bytes in the `.data()` buffer. We couldn't use this byte vector anyways, since types like `tstring` include pointers, whereas we need to hash the string values themselves.
Modified the hash function to more closely mirror the `==` operator. This correctly handles `tstring` and any numeric types that do have contiguous storage. Other types are currently left as unimplemented.
PiperOrigin-RevId: 446265413 | 1b85a28d395dc91f4d22b5f9e1e9a22e92ccecd6 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::TensorKey::operator == | tensorflow::TensorKey::operator ==( const TensorKey & t1 , const TensorKey & t2) | ['t1', 't2'] | friend bool operator==(const TensorKey& t1, const TensorKey& t2) {
if (t1.dtype() != t2.dtype() || t1.shape() != t2.shape()) {
return false;
}
if (DataTypeCanUseMemcpy(t1.dtype())) {
return t1.tensor_data() == t2.tensor_data();
}
if (t1.dtype() == DT_STRING) {
const auto s1 = t1.unaligned_flat<tstring>();
const auto s2 = t2.unaligned_flat<tstring>();
for (int64_t i = 0, n = t1.NumElements(); i < n; ++i) {
if (TF_PREDICT_FALSE(s1(i) != s2(i))) {
return false;
}
}
return true;
}
return false;
} | 160 | True | 1 |
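As a follow-up to the sketch above, a standalone illustration (standard library only) of what an inconsistent hash does to a hash container: when the hash is derived from storage addresses instead of the compared value, two keys that compare equal can land in different buckets and both be stored.

#include <cstddef>
#include <functional>
#include <iostream>
#include <string>
#include <unordered_set>

struct Key {
  std::string value;
  bool operator==(const Key& other) const { return value == other.value; }
};

struct BufferAddressHash {
  // Deliberately broken: hashes the address of the string's character buffer,
  // mimicking a hash computed from raw storage rather than from contents.
  std::size_t operator()(const Key& k) const {
    return std::hash<const void*>{}(static_cast<const void*>(k.value.data()));
  }
};

int main() {
  const char* text = "a value long enough to live on the heap, outside any SSO buffer";
  std::unordered_set<Key, BufferAddressHash> set;
  set.insert(Key{text});
  set.insert(Key{text});            // equal value, different buffer, different hash
  std::cout << set.size() << "\n";  // typically 2: the duplicate is never detected
}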
|
CVE-2019-16778 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/db4f9717c41bccc3ce10099ab61996b246099892', 'name': 'https://github.com/tensorflow/tensorflow/commit/db4f9717c41bccc3ce10099ab61996b246099892', 'refsource': 'MISC', 'tags': ['Patch']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/security/advisory/tfsa-2019-002.md', 'name': 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/security/advisory/tfsa-2019-002.md', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-844w-j86r-4x2j', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-844w-j86r-4x2j', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-681'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '1.0.0', 'versionEndExcluding': '1.15.0', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'In TensorFlow before 1.15, a heap buffer overflow in UnsortedSegmentSum can be produced when the Index template argument is int32. In this case data_size and num_segments fields are truncated from int64 to int32 and can produce negative numbers, resulting in accessing out of bounds heap memory. This is unlikely to be exploitable and was detected and fixed internally in TensorFlow 1.15 and 2.0.'}] | 2021-10-29T15:03Z | 2019-12-16T21:15Z | Incorrect Conversion between Numeric Types | When converting from one data type to another, such as long to integer, data can be omitted or translated in a way that produces unexpected values. If the resulting values are used in a sensitive context, then dangerous behaviors may occur. | https://cwe.mitre.org/data/definitions/681.html | 0 | RJ Skerry-Ryan | 2019-07-03 15:45:01-07:00 | Fix heap buffer overflow in UnsortedSegmentSum.
When Index=int32, data_size and num_segments were truncated from int64 to int32. This truncation can produce negative numbers, which causes UnsortedSegmentFunctor to access out of bounds memory.
Also:
- Switches some indexing calculations to int64 to avoid signed integer overflow when either the input or output tensors have more than 2**31 - 1 elements.
- Fixes a range check error in the GPU kernel. The segment ID was checked against an upper bound measured in elements, not segments.
PiperOrigin-RevId: 256451663 | db4f9717c41bccc3ce10099ab61996b246099892 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::UnsortedSegmentReductionOp::Compute | tensorflow::UnsortedSegmentReductionOp::Compute( OpKernelContext * context) | ['context'] | void Compute(OpKernelContext* context) override {
const Tensor& data = context->input(0);
const Tensor& segment_ids = context->input(1);
const Tensor& num_segments = context->input(2);
if (!UnsortedSegmentReductionDoValidation(this, context, data, segment_ids,
num_segments)) {
return;
}
const auto segment_flat = segment_ids.flat<Index>();
const Index output_rows = internal::SubtleMustCopy(static_cast<Index>(
num_segments.dtype() == DT_INT32 ? num_segments.scalar<int32>()()
: num_segments.scalar<int64>()()));
OP_REQUIRES(context, output_rows >= 0,
errors::InvalidArgument("Input num_segments == ", output_rows,
" must not be negative."));
TensorShape output_shape;
output_shape.AddDim(output_rows);
for (int i = segment_ids.dims(); i < data.dims(); i++) {
output_shape.AddDim(data.dim_size(i));
}
Tensor* output = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(0, output_shape, &output));
auto output_flat = output->flat_outer_dims<T>();
auto data_ptr = data.template flat<T>().data();
reduction_functor_(context, output_rows, segment_ids.shape(), segment_flat,
data.NumElements(), data_ptr, output_flat);
} | 266 | True | 1 |
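The record above attributes the overflow to data_size and num_segments being truncated from int64 to int32 when Index is int32. A standalone sketch (standard library only) of that narrowing: an element count above 2^31 - 1 wraps to a negative int32 on two's-complement targets and then slips past checks written for non-negative sizes, whereas an explicit range check rejects it.

#include <cstdint>
#include <iostream>
#include <limits>

int main() {
  const int64_t data_size = 3'000'000'000;  // more elements than int32 can represent
  const int32_t truncated = static_cast<int32_t>(data_size);
  std::cout << truncated << "\n";  // -1294967296 on typical two's-complement targets
  // Defensive conversion: refuse counts that do not fit the narrower index type.
  if (data_size > std::numeric_limits<int32_t>::max()) {
    std::cout << "count does not fit in int32; keep 64-bit indexing instead\n";
  }
}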
|
CVE-2019-16778 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/db4f9717c41bccc3ce10099ab61996b246099892', 'name': 'https://github.com/tensorflow/tensorflow/commit/db4f9717c41bccc3ce10099ab61996b246099892', 'refsource': 'MISC', 'tags': ['Patch']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/security/advisory/tfsa-2019-002.md', 'name': 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/security/advisory/tfsa-2019-002.md', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-844w-j86r-4x2j', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-844w-j86r-4x2j', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-681'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '1.0.0', 'versionEndExcluding': '1.15.0', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'In TensorFlow before 1.15, a heap buffer overflow in UnsortedSegmentSum can be produced when the Index template argument is int32. In this case data_size and num_segments fields are truncated from int64 to int32 and can produce negative numbers, resulting in accessing out of bounds heap memory. This is unlikely to be exploitable and was detected and fixed internally in TensorFlow 1.15 and 2.0.'}] | 2021-10-29T15:03Z | 2019-12-16T21:15Z | Incorrect Conversion between Numeric Types | When converting from one data type to another, such as long to integer, data can be omitted or translated in a way that produces unexpected values. If the resulting values are used in a sensitive context, then dangerous behaviors may occur. | https://cwe.mitre.org/data/definitions/681.html | 0 | RJ Skerry-Ryan | 2019-07-03 15:45:01-07:00 | Fix heap buffer overflow in UnsortedSegmentSum.
When Index=int32, data_size and num_segments were truncated from int64 to int32. This truncation can produce negative numbers, which causes UnsortedSegmentFunctor to access out of bounds memory.
Also:
- Switches some indexing calculations to int64 to avoid signed integer overflow when either the input or output tensors have more than 2**31 - 1 elements.
- Fixes a range check error in the GPU kernel. The segment ID was checked against an upper bound measured in elements, not segments.
PiperOrigin-RevId: 256451663 | db4f9717c41bccc3ce10099ab61996b246099892 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::functor::UnsortedSegmentFunctor<CPUDevice,T,Index,InitialValueF,ReductionF>::operator ( ) | tensorflow::functor::UnsortedSegmentFunctor<CPUDevice,T,Index,InitialValueF,ReductionF>::operator ( )( OpKernelContext * ctx , const Index num_segments , const TensorShape & segment_ids_shape , typename TTypes<Index> :: ConstFlat segment_ids , const Index data_size , const T * data , typename TTypes<T,2> :: Tensor output) | ['ctx', 'num_segments', 'segment_ids_shape', 'segment_ids', 'data_size', 'data', 'output'] | void operator()(OpKernelContext* ctx, const Index num_segments,
const TensorShape& segment_ids_shape,
typename TTypes<Index>::ConstFlat segment_ids,
const Index data_size, const T* data,
typename TTypes<T, 2>::Tensor output) {
output.setConstant(InitialValueF()());
if (data_size == 0) {
return;
}
const int64 N = segment_ids.dimension(0);
ReductionF reduction;
auto data_flat = typename TTypes<T, 2>::ConstTensor(data, N, data_size / N);
for (int64 i = 0; i < N; ++i) {
Index j = internal::SubtleMustCopy(segment_ids(i));
if (j < 0) {
continue;
}
OP_REQUIRES(ctx, FastBoundsCheck(j, num_segments),
errors::InvalidArgument(
"segment_ids", SliceDebugString(segment_ids_shape, i),
" = ", j, " is out of range [0, ", num_segments, ")"));
reduction(data_flat.template chip<0>(i), output.template chip<0>(j));
}
} | 205 | True | 1 |
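The commit message also notes switching indexing calculations to int64 to avoid signed overflow once a tensor holds more than 2^31 - 1 elements. A standalone sketch of that switch: the flat offset row * inner_dim overflows 32-bit signed arithmetic (undefined behaviour), while promoting one operand to int64_t keeps the result exact.

#include <cstdint>
#include <iostream>

int main() {
  const int32_t row = 70000;
  const int32_t inner_dim = 40000;  // 70000 * 40000 = 2.8e9, above 2^31 - 1
  // const int32_t bad = row * inner_dim;  // signed overflow: undefined behaviour
  const int64_t offset = static_cast<int64_t>(row) * inner_dim;
  std::cout << offset << "\n";  // 2800000000
}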
|
CVE-2019-16778 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/tensorflow/tensorflow/commit/db4f9717c41bccc3ce10099ab61996b246099892', 'name': 'https://github.com/tensorflow/tensorflow/commit/db4f9717c41bccc3ce10099ab61996b246099892', 'refsource': 'MISC', 'tags': ['Patch']}, {'url': 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/security/advisory/tfsa-2019-002.md', 'name': 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/security/advisory/tfsa-2019-002.md', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-844w-j86r-4x2j', 'name': 'https://github.com/tensorflow/tensorflow/security/advisories/GHSA-844w-j86r-4x2j', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-681'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:tensorflow:*:*:*:*:*:*:*:*', 'versionStartIncluding': '1.0.0', 'versionEndExcluding': '1.15.0', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'In TensorFlow before 1.15, a heap buffer overflow in UnsortedSegmentSum can be produced when the Index template argument is int32. In this case data_size and num_segments fields are truncated from int64 to int32 and can produce negative numbers, resulting in accessing out of bounds heap memory. This is unlikely to be exploitable and was detected and fixed internally in TensorFlow 1.15 and 2.0.'}] | 2021-10-29T15:03Z | 2019-12-16T21:15Z | Incorrect Conversion between Numeric Types | When converting from one data type to another, such as long to integer, data can be omitted or translated in a way that produces unexpected values. If the resulting values are used in a sensitive context, then dangerous behaviors may occur. | https://cwe.mitre.org/data/definitions/681.html | 0 | RJ Skerry-Ryan | 2019-07-03 15:45:01-07:00 | Fix heap buffer overflow in UnsortedSegmentSum.
When Index=int32, data_size and num_segments were truncated from int64 to int32. This truncation can produce negative numbers, which causes UnsortedSegmentFunctor to access out of bounds memory.
Also:
- Switches some indexing calculations to int64 to avoid signed integer overflow when either the input or output tensors have more than 2**31 - 1 elements.
- Fixes a range check error in the GPU kernel. The segment ID was checked against an upper bound measured in elements, not segments.
PiperOrigin-RevId: 256451663 | db4f9717c41bccc3ce10099ab61996b246099892 | False | tensorflow/tensorflow | An Open Source Machine Learning Framework for Everyone | 2015-11-07 01:19:20 | 2022-08-27 17:32:40 | https://tensorflow.org | tensorflow | 167391.0 | 87115.0 | tensorflow::UnsortedSegmentCustomKernel | tensorflow::UnsortedSegmentCustomKernel( const Index input_outer_dim_size , const Index inner_dim_size , const Index output_outer_dim_size , const Index * segment_ids , const T * input , T * output) | ['input_outer_dim_size', 'inner_dim_size', 'output_outer_dim_size', 'segment_ids', 'input', 'output'] | __global__ void UnsortedSegmentCustomKernel(const Index input_outer_dim_size,
const Index inner_dim_size,
const Index output_outer_dim_size,
const Index* segment_ids,
const T* input, T* output) {
const Index input_total_size = input_outer_dim_size * inner_dim_size;
const Index output_total_size = output_outer_dim_size * inner_dim_size;
for (int input_index : GpuGridRangeX(input_total_size)) {
const Index input_segment_index = input_index / inner_dim_size;
const Index segment_offset = input_index % inner_dim_size;
const Index output_segment_index = segment_ids[input_segment_index];
if (output_segment_index < 0 || output_segment_index >= output_total_size) {
continue;
}
const Index output_index =
output_segment_index * inner_dim_size + segment_offset;
KernelReductionFunctor()(output + output_index, ldg(input + input_index));
}
} | 123 | True | 1 |
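The fix description above says the GPU kernel checked the segment id against a bound measured in elements rather than segments, as the kernel body recorded here does with output_total_size. A standalone CPU analogue (not the CUDA kernel itself) showing why the element-count bound is too loose and what the segment-count bound rejects:

#include <cstdint>
#include <iostream>
#include <vector>

int main() {
  const int64_t num_segments = 4;
  const int64_t inner_dim = 8;
  std::vector<float> output(num_segments * inner_dim, 0.0f);  // 32 elements

  const int64_t segment_id = 10;  // invalid: only segments 0..3 exist
  const int64_t offset = 3;

  // Too-loose bound (element count): 10 < 32 passes, yet the write would land at
  // index 10 * 8 + 3 = 83, past the end of the 32-element buffer.
  if (segment_id >= 0 && segment_id < num_segments * inner_dim) {
    std::cout << "element-count bound admits out-of-range index "
              << segment_id * inner_dim + offset << "\n";
  }

  // Correct bound (segment count): 10 >= 4, so the out-of-bounds write is skipped.
  if (segment_id >= 0 && segment_id < num_segments) {
    output[segment_id * inner_dim + offset] += 1.0f;
  } else {
    std::cout << "segment-count bound rejects segment_id " << segment_id << "\n";
  }
}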