func (string, 0–484k chars) | target (int64, 0 or 1) | cwe (list, 0–4 items) | project (string, 799 classes) | commit_id (string, 40 chars) | hash (float64, ≈1.2e24–3.4e29) | size (int64, 1–24k) | message (string, 0–13.3k chars) |
---|---|---|---|---|---|---|---|
void CoreNetwork::setPingInterval(int interval)
{
_pingTimer.setInterval(interval * 1000);
}
| 0 | ["CWE-399"] | quassel | b5e38970ffd55e2dd9f706ce75af9a8d7730b1b8 | 39,239,288,845,329,550,000,000,000,000,000,000,000 | 4 |
Improve the message-splitting algorithm for PRIVMSG and CTCP
This introduces a new message splitting algorithm based on
QTextBoundaryFinder. It works by first starting with the entire
message to be sent, encoding it, and checking to see if it is over
the maximum message length. If it is, it uses QTBF to find the
word boundary most immediately preceding the maximum length. If no
suitable boundary can be found, it falls back to searching for
grapheme boundaries. It repeats this process until the entire
message has been sent.
Unlike what it replaces, the new splitting code is not recursive
and cannot cause stack overflows. Additionally, if it is unable
to split a string, it will give up gracefully and not crash the
core or cause a thread to run away.
This patch fixes two bugs. The first is garbage characters caused
by accidentally splitting the string in the middle of a multibyte
character. Since the new code splits at a character level instead
of a byte level, this will no longer be an issue. The second is
the core crash caused by sending an overlength CTCP query ("/me")
containing only multibyte characters. This bug was caused by the
old CTCP splitter using the byte index from lastParamOverrun() as
a character index for a QString.
|
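The splitting strategy described above (iterate rather than recurse, and never cut inside a multibyte character) can be sketched in plain C. This is a minimal illustration, not Quassel code: `utf8_safe_cut` and `send_line` are hypothetical helpers, and the real patch works on Qt strings through QTextBoundaryFinder rather than raw UTF-8 bytes.

```c
#include <stddef.h>

/* Hypothetical helper: trim a UTF-8 string to at most max_bytes without
   cutting through a multi-byte sequence. Continuation bytes match
   10xxxxxx, so back up until a lead byte is reached. */
static size_t utf8_safe_cut(const char *s, size_t len, size_t max_bytes)
{
    if (len <= max_bytes)
        return len;                     /* whole string fits */
    size_t cut = max_bytes;
    while (cut > 0 && ((unsigned char)s[cut] & 0xC0) == 0x80)
        cut--;                          /* step off continuation bytes */
    return cut;
}

/* Iterative send loop in the spirit of the commit: no recursion, and a
   graceful give-up path so an unsplittable string cannot loop forever. */
static int send_split(const char *msg, size_t len, size_t max_len,
                      void (*send_line)(const char *, size_t))
{
    while (len > 0) {
        size_t n = utf8_safe_cut(msg, len, max_len);
        if (n == 0)
            return -1;                  /* give up instead of crashing */
        send_line(msg, n);
        msg += n;
        len -= n;
    }
    return 0;
}
```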
GF_Err dinf_on_child_box(GF_Box *s, GF_Box *a)
{
GF_DataInformationBox *ptr = (GF_DataInformationBox *)s;
switch(a->type) {
case GF_ISOM_BOX_TYPE_DREF:
if (ptr->dref) ERROR_ON_DUPLICATED_BOX(a, ptr)
ptr->dref = (GF_DataReferenceBox *)a;
return GF_OK;
}
return GF_OK;
}
| 0 | ["CWE-787"] | gpac | 388ecce75d05e11fc8496aa4857b91245007d26e | 230,249,799,059,745,230,000,000,000,000,000,000,000 | 11 |
fixed #1587
|
static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t pte)
{
if (sizeof(pteval_t) > sizeof(long))
/* 5 arg words */
pv_ops.mmu.set_pte_at(mm, addr, ptep, pte);
else
PVOP_VCALL4(mmu.set_pte_at, mm, addr, ptep, pte.pte);
}
| 0 | ["CWE-276"] | linux | cadfad870154e14f745ec845708bc17d166065f2 | 201,267,862,141,760,960,000,000,000,000,000,000,000 | 9 |
x86/ioperm: Fix io bitmap invalidation on Xen PV
tss_invalidate_io_bitmap() wasn't wired up properly through the pvop
machinery, so the TSS and Xen's io bitmap would get out of sync
whenever disabling a valid io bitmap.
Add a new pvop for tss_invalidate_io_bitmap() to fix it.
This is XSA-329.
Fixes: 22fe5b0439dd ("x86/ioperm: Move TSS bitmap update to exit to user work")
Signed-off-by: Andy Lutomirski <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Juergen Gross <[email protected]>
Reviewed-by: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: https://lkml.kernel.org/r/d53075590e1f91c19f8af705059d3ff99424c020.1595030016.git.luto@kernel.org
|
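For context on the pvop machinery the message refers to: operations a paravirtualized guest must intercept are dispatched through a table of function pointers that native and Xen code fill differently, so a callback that is never wired into the table (the bug here) leaves the hypervisor's copy of the state stale. The sketch below uses invented names, not the kernel's actual pv_ops layout.

```c
/* Invented names; illustrates only the dispatch pattern. */
struct pv_mmu_ops_sketch {
    void (*invalidate_io_bitmap)(void);
};

static void native_invalidate_io_bitmap(void) { /* poke the real TSS */ }
static void xen_invalidate_io_bitmap(void)    { /* hypercall instead  */ }

static struct pv_mmu_ops_sketch pv_ops_sketch = {
    .invalidate_io_bitmap = native_invalidate_io_bitmap,
};

/* A Xen build would swap in its backend at boot. */
static void use_xen_backend(void)
{
    pv_ops_sketch.invalidate_io_bitmap = xen_invalidate_io_bitmap;
}

/* Callers never know which backend runs; an operation that bypasses the
   table cannot be intercepted, which is exactly what the fix repairs. */
static void tss_invalidate_io_bitmap_sketch(void)
{
    pv_ops_sketch.invalidate_io_bitmap();
}
```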
static const char *cmd_action(cmd_parms *cmd, void *_dcfg, const char *p1)
{
return add_rule(cmd, (directory_config *)_dcfg, RULE_TYPE_ACTION, SECACTION_TARGETS, SECACTION_ARGS, p1);
}
| 0 | ["CWE-20", "CWE-611"] | ModSecurity | d4d80b38aa85eccb26e3c61b04d16e8ca5de76fe | 833,153,061,519,812,300,000,000,000,000,000,000 | 4 |
Added SecXmlExternalEntity
|
TEST_F(HeaderToMetadataTest, NoCookieValue) {
const std::string config = R"EOF(
request_rules:
- cookie: foo
on_header_missing:
metadata_namespace: envoy.lb
key: foo
value: some_value
type: STRING
)EOF";
initializeFilter(config);
Http::TestRequestHeaderMapImpl headers{{"cookie", ""}};
std::map<std::string, std::string> expected = {{"foo", "some_value"}};
EXPECT_CALL(decoder_callbacks_, streamInfo()).WillRepeatedly(ReturnRef(req_info_));
EXPECT_CALL(req_info_, setDynamicMetadata("envoy.lb", MapEq(expected)));
EXPECT_EQ(Http::FilterHeadersStatus::Continue, filter_->decodeHeaders(headers, false));
}
| 0 | [] | envoy | 2c60632d41555ec8b3d9ef5246242be637a2db0f | 68,145,216,804,772,310,000,000,000,000,000,000,000 | 18 |
http: header map security fixes for duplicate headers (#197)
Previously header matching did not match on all headers for
non-inline headers. This patch changes the default behavior to
always logically match on all headers. Multiple individual
headers will be logically concatenated with ',' similar to what
is done with inline headers. This makes the behavior effectively
consistent. This behavior can be temporarily reverted by setting
the runtime value "envoy.reloadable_features.header_match_on_all_headers"
to "false".
Targeted fixes have been additionally performed on the following
extensions which make them consider all duplicate headers by default as
a comma concatenated list:
1) Any extension using CEL matching on headers.
2) The header to metadata filter.
3) The JWT filter.
4) The Lua filter.
Like primary header matching used in routing, RBAC, etc. this behavior
can be disabled by setting the runtime value
"envoy.reloadable_features.header_match_on_all_headers" to false.
Finally, the setCopy() header map API previously only set the first
header in the case of duplicate non-inline headers. setCopy() now
behaves similarly to the other set*() APIs and replaces all found
headers with a single value. This may have had security implications
in the extauth filter which uses this API. This behavior can be disabled
by setting the runtime value
"envoy.reloadable_features.http_set_copy_replace_all_headers" to false.
Fixes https://github.com/envoyproxy/envoy-setec/issues/188
Signed-off-by: Matt Klein <[email protected]>
|
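The comma-concatenation model described above is easy to picture with a toy example. Envoy itself is C++; this standalone C sketch only illustrates the logical view in which duplicate header values are joined with ',' before any match is evaluated.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Join n header values with ',' into one freshly allocated string. */
static char *join_header_values(const char **values, size_t n)
{
    size_t total = 1;                     /* room for the final NUL */
    for (size_t i = 0; i < n; i++)
        total += strlen(values[i]) + 1;   /* value plus ',' separator */
    char *out = malloc(total);
    if (!out)
        return NULL;
    out[0] = '\0';
    for (size_t i = 0; i < n; i++) {
        if (i) strcat(out, ",");
        strcat(out, values[i]);
    }
    return out;
}

int main(void)
{
    const char *dups[] = { "a", "b", "c" };   /* three duplicate headers */
    char *joined = join_header_values(dups, 3);
    printf("%s\n", joined ? joined : "(oom)"); /* prints: a,b,c */
    free(joined);
    return 0;
}
```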
u64 blkg_prfill_rwstat(struct seq_file *sf, struct blkg_policy_data *pd,
int off)
{
struct blkg_rwstat rwstat = blkg_rwstat_read((void *)pd + off);
return __blkg_prfill_rwstat(sf, pd, &rwstat);
}
| 0 | ["CWE-415"] | linux | 9b54d816e00425c3a517514e0d677bb3cec49258 | 230,060,014,458,799,740,000,000,000,000,000,000,000 | 7 |
blkcg: fix double free of new_blkg in blkcg_init_queue
If blkg_create fails, new_blkg passed as an argument will
be freed by blkg_create, so there is no need to free it again.
Signed-off-by: Hou Tao <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
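The rule this fix enforces is a common ownership convention: when a callee frees its argument on failure, the caller must not free it again. A generic sketch, with `blkg_create_sketch` as a hypothetical stand-in for blkg_create:

```c
#include <stdlib.h>

struct blkg_sketch { int dummy; };

/* Consumes new_blkg on failure, as the message says blkg_create does. */
static int blkg_create_sketch(struct blkg_sketch *new_blkg, int fail)
{
    if (fail) {
        free(new_blkg);          /* callee owns the object on error */
        return -1;
    }
    return 0;
}

static int init_queue_sketch(int fail)
{
    struct blkg_sketch *new_blkg = malloc(sizeof(*new_blkg));
    if (!new_blkg)
        return -1;
    if (blkg_create_sketch(new_blkg, fail) < 0)
        return -1;               /* freeing here again = double free */
    return 0;
}
```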
void DoRealForwardFFT(OpKernelContext* ctx, uint64* fft_shape,
const Tensor& in, Tensor* out) {
// Create the axes (which are always trailing).
const auto axes = Eigen::ArrayXi::LinSpaced(FFTRank, 1, FFTRank);
auto device = ctx->eigen_device<CPUDevice>();
auto input = Tensor(in).flat_inner_dims<RealT, FFTRank + 1>();
const auto input_dims = input.dimensions();
// Slice input to fft_shape on its inner-most dimensions.
Eigen::DSizes<Eigen::DenseIndex, FFTRank + 1> input_slice_sizes;
input_slice_sizes[0] = input_dims[0];
TensorShape temp_shape{input_dims[0]};
for (int i = 1; i <= FFTRank; ++i) {
input_slice_sizes[i] = fft_shape[i - 1];
temp_shape.AddDim(fft_shape[i - 1]);
}
OP_REQUIRES(ctx, temp_shape.num_elements() > 0,
errors::InvalidArgument("Obtained a FFT shape of 0 elements: ",
temp_shape.DebugString()));
auto output = out->flat_inner_dims<ComplexT, FFTRank + 1>();
const Eigen::DSizes<Eigen::DenseIndex, FFTRank + 1> zero_start_indices;
// Compute the full FFT using a temporary tensor.
Tensor temp;
OP_REQUIRES_OK(ctx, ctx->allocate_temp(DataTypeToEnum<ComplexT>::v(),
temp_shape, &temp));
auto full_fft = temp.flat_inner_dims<ComplexT, FFTRank + 1>();
full_fft.device(device) =
input.slice(zero_start_indices, input_slice_sizes)
.template fft<Eigen::BothParts, Eigen::FFT_FORWARD>(axes);
// Slice away the negative frequency components.
output.device(device) =
full_fft.slice(zero_start_indices, output.dimensions());
}
| 0 | ["CWE-617", "CWE-703"] | tensorflow | 31bd5026304677faa8a0b77602c6154171b9aec1 | 309,164,948,673,975,830,000,000,000,000,000,000,000 | 36 |
Prevent check fail in FFT
PiperOrigin-RevId: 372031044
Change-Id: I50994e3e8a5d1342d01bde80256f6bf2730ca299
|
create_un_nat_conn(struct conntrack *ct, struct conn *conn_for_un_nat_copy,
long long now, bool alg_un_nat)
{
struct conn *nc = xmemdup(conn_for_un_nat_copy, sizeof *nc);
memcpy(&nc->key, &conn_for_un_nat_copy->rev_key, sizeof nc->key);
memcpy(&nc->rev_key, &conn_for_un_nat_copy->key, sizeof nc->rev_key);
uint32_t un_nat_hash = conn_key_hash(&nc->key, ct->hash_basis);
unsigned un_nat_conn_bucket = hash_to_bucket(un_nat_hash);
ct_lock_lock(&ct->buckets[un_nat_conn_bucket].lock);
struct conn *rev_conn = conn_lookup(ct, &nc->key, now);
if (alg_un_nat) {
if (!rev_conn) {
hmap_insert(&ct->buckets[un_nat_conn_bucket].connections,
&nc->node, un_nat_hash);
} else {
char *log_msg = xasprintf("Unusual condition for un_nat conn "
"create for alg: rev_conn %p", rev_conn);
ct_print_conn_info(nc, log_msg, VLL_INFO, true, false);
free(log_msg);
free(nc);
}
} else {
ct_rwlock_rdlock(&ct->resources_lock);
struct nat_conn_key_node *nat_conn_key_node =
nat_conn_keys_lookup(&ct->nat_conn_keys, &nc->key, ct->hash_basis);
if (nat_conn_key_node && !conn_key_cmp(&nat_conn_key_node->value,
&nc->rev_key) && !rev_conn) {
hmap_insert(&ct->buckets[un_nat_conn_bucket].connections,
&nc->node, un_nat_hash);
} else {
char *log_msg = xasprintf("Unusual condition for un_nat conn "
"create: nat_conn_key_node/rev_conn "
"%p/%p", nat_conn_key_node, rev_conn);
ct_print_conn_info(nc, log_msg, VLL_INFO, true, false);
free(log_msg);
free(nc);
}
ct_rwlock_unlock(&ct->resources_lock);
}
ct_lock_unlock(&ct->buckets[un_nat_conn_bucket].lock);
}
| 0 | ["CWE-400"] | ovs | abd7a457652e6734902720fe6a5dddb3fc0d1e3b | 226,700,324,937,409,430,000,000,000,000,000,000,000 | 43 |
flow: Support extra padding length.
Although not required, padding can be optionally added until
the packet length is MTU bytes. A packet with extra padding
currently fails sanity checks.
Vulnerability: CVE-2020-35498
Fixes: fa8d9001a624 ("miniflow_extract: Properly handle small IP packets.")
Reported-by: Joakim Hindersson <[email protected]>
Acked-by: Ilya Maximets <[email protected]>
Signed-off-by: Flavio Leitner <[email protected]>
Signed-off-by: Ilya Maximets <[email protected]>
|
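The sanity-check change behind this fix amounts to tolerating trailing pad bytes: a frame is valid when it carries at least the bytes the IP header declares, not exactly that many. A minimal sketch with illustrative names:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Too strict, rejects padded frames: bytes_present == ip_tot_len.
   Padding-tolerant form, per the commit message: */
static bool l3_size_ok(size_t bytes_present, uint16_t ip_tot_len)
{
    return bytes_present >= ip_tot_len;
}
```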
static void nma_icon_theme_changed (GtkIconTheme *icon_theme, NMApplet *applet)
{
nma_icons_free (applet);
nma_icons_load (applet);
}
| 0 | ["CWE-200"] | network-manager-applet | 8627880e07c8345f69ed639325280c7f62a8f894 | 204,311,954,339,026,460,000,000,000,000,000,000,000 | 5 |
editor: prevent any registration of objects on the system bus
D-Bus access-control is name-based; so requests for a specific name
are allowed/denied based on the rules in /etc/dbus-1/system.d. But
apparently apps still get a non-named service on the bus, and if we
register *any* object even though we don't have a named service,
dbus and dbus-glib will happily proxy signals. Since the connection
editor shouldn't ever expose anything having to do with connections
on any bus, make sure that's the case.
|
void WebPImage::doWriteMetadata(BasicIo& outIo)
{
if (!io_->isopen()) throw Error(kerInputDataReadFailed);
if (!outIo.isopen()) throw Error(kerImageWriteFailed);
#ifdef DEBUG
std::cout << "Writing metadata" << std::endl;
#endif
byte data [WEBP_TAG_SIZE*3];
DataBuf chunkId(WEBP_TAG_SIZE+1);
chunkId.pData_ [WEBP_TAG_SIZE] = '\0';
io_->read(data, WEBP_TAG_SIZE * 3);
uint64_t filesize = Exiv2::getULong(data + WEBP_TAG_SIZE, littleEndian);
/* Set up header */
if (outIo.write(data, WEBP_TAG_SIZE * 3) != WEBP_TAG_SIZE * 3)
throw Error(kerImageWriteFailed);
/* Parse Chunks */
bool has_size = false;
bool has_xmp = false;
bool has_exif = false;
bool has_vp8x = false;
bool has_alpha = false;
bool has_icc = iccProfileDefined();
int width = 0;
int height = 0;
byte size_buff[WEBP_TAG_SIZE];
Blob blob;
if (exifData_.count() > 0) {
ExifParser::encode(blob, littleEndian, exifData_);
if (blob.size() > 0) {
has_exif = true;
}
}
if (xmpData_.count() > 0 && !writeXmpFromPacket()) {
XmpParser::encode(xmpPacket_, xmpData_,
XmpParser::useCompactFormat |
XmpParser::omitAllFormatting);
}
has_xmp = xmpPacket_.size() > 0;
std::string xmp(xmpPacket_);
/* Verify for a VP8X Chunk First before writing in
case we have any exif or xmp data, also check
for any chunks with alpha frame/layer set */
while ( !io_->eof() && (uint64_t) io_->tell() < filesize) {
io_->read(chunkId.pData_, WEBP_TAG_SIZE);
io_->read(size_buff, WEBP_TAG_SIZE);
long size = Exiv2::getULong(size_buff, littleEndian);
DataBuf payload(size);
io_->read(payload.pData_, payload.size_);
byte c;
if ( payload.size_ % 2 ) io_->read(&c,1);
/* Chunk with information about features
used in the file. */
if (equalsWebPTag(chunkId, WEBP_CHUNK_HEADER_VP8X) && !has_vp8x) {
has_vp8x = true;
}
if (equalsWebPTag(chunkId, WEBP_CHUNK_HEADER_VP8X) && !has_size) {
has_size = true;
byte size_buf[WEBP_TAG_SIZE];
// Fetch width - stored in 24bits
memcpy(&size_buf, &payload.pData_[4], 3);
size_buf[3] = 0;
width = Exiv2::getULong(size_buf, littleEndian) + 1;
// Fetch height - stored in 24bits
memcpy(&size_buf, &payload.pData_[7], 3);
size_buf[3] = 0;
height = Exiv2::getULong(size_buf, littleEndian) + 1;
}
/* Chunk with animation control data. */
#ifdef __CHECK_FOR_ALPHA__ // Maybe in the future
if (equalsWebPTag(chunkId, WEBP_CHUNK_HEADER_ANIM) && !has_alpha) {
has_alpha = true;
}
#endif
/* Chunk with lossy image data. */
#ifdef __CHECK_FOR_ALPHA__ // Maybe in the future
if (equalsWebPTag(chunkId, WEBP_CHUNK_HEADER_VP8) && !has_alpha) {
has_alpha = true;
}
#endif
if (equalsWebPTag(chunkId, WEBP_CHUNK_HEADER_VP8) && !has_size) {
has_size = true;
byte size_buf[2];
/* Refer to this https://tools.ietf.org/html/rfc6386
for height and width reference for VP8 chunks */
// Fetch width - stored in 16bits
memcpy(&size_buf, &payload.pData_[6], 2);
width = Exiv2::getUShort(size_buf, littleEndian) & 0x3fff;
// Fetch height - stored in 16bits
memcpy(&size_buf, &payload.pData_[8], 2);
height = Exiv2::getUShort(size_buf, littleEndian) & 0x3fff;
}
/* Chunk with lossless image data. */
if (equalsWebPTag(chunkId, WEBP_CHUNK_HEADER_VP8L) && !has_alpha) {
if ((payload.pData_[4] & WEBP_VP8X_ALPHA_BIT) == WEBP_VP8X_ALPHA_BIT) {
has_alpha = true;
}
}
if (equalsWebPTag(chunkId, WEBP_CHUNK_HEADER_VP8L) && !has_size) {
has_size = true;
byte size_buf_w[2];
byte size_buf_h[3];
/* For VP8L chunks width & height are stored in 28 bits
of a 32 bit field requires bitshifting to get actual
sizes. Width and height are split even into 14 bits
each. Refer to this https://goo.gl/bpgMJf */
// Fetch width - 14 bits wide
memcpy(&size_buf_w, &payload.pData_[1], 2);
size_buf_w[1] &= 0x3F;
width = Exiv2::getUShort(size_buf_w, littleEndian) + 1;
// Fetch height - 14 bits wide
memcpy(&size_buf_h, &payload.pData_[2], 3);
size_buf_h[0] =
((size_buf_h[0] >> 6) & 0x3) |
((size_buf_h[1] & 0x3F) << 0x2);
size_buf_h[1] =
((size_buf_h[1] >> 6) & 0x3) |
((size_buf_h[2] & 0xF) << 0x2);
height = Exiv2::getUShort(size_buf_h, littleEndian) + 1;
}
/* Chunk with animation frame. */
if (equalsWebPTag(chunkId, WEBP_CHUNK_HEADER_ANMF) && !has_alpha) {
if ((payload.pData_[5] & 0x2) == 0x2) {
has_alpha = true;
}
}
if (equalsWebPTag(chunkId, WEBP_CHUNK_HEADER_ANMF) && !has_size) {
has_size = true;
byte size_buf[WEBP_TAG_SIZE];
// Fetch width - stored in 24bits
memcpy(&size_buf, &payload.pData_[6], 3);
size_buf[3] = 0;
width = Exiv2::getULong(size_buf, littleEndian) + 1;
// Fetch height - stored in 24bits
memcpy(&size_buf, &payload.pData_[9], 3);
size_buf[3] = 0;
height = Exiv2::getULong(size_buf, littleEndian) + 1;
}
/* Chunk with alpha data. */
if (equalsWebPTag(chunkId, "ALPH") && !has_alpha) {
has_alpha = true;
}
}
/* Inject a VP8X chunk if one isn't available. */
if (!has_vp8x) {
inject_VP8X(outIo, has_xmp, has_exif, has_alpha,
has_icc, width, height);
}
io_->seek(12, BasicIo::beg);
while ( !io_->eof() && (uint64_t) io_->tell() < filesize) {
io_->read(chunkId.pData_, 4);
io_->read(size_buff, 4);
long size = Exiv2::getULong(size_buff, littleEndian);
DataBuf payload(size);
io_->read(payload.pData_, size);
if ( io_->tell() % 2 ) io_->seek(+1,BasicIo::cur); // skip pad
if (equalsWebPTag(chunkId, WEBP_CHUNK_HEADER_VP8X)) {
if (has_icc){
payload.pData_[0] |= WEBP_VP8X_ICC_BIT;
} else {
payload.pData_[0] &= ~WEBP_VP8X_ICC_BIT;
}
if (has_xmp){
payload.pData_[0] |= WEBP_VP8X_XMP_BIT;
} else {
payload.pData_[0] &= ~WEBP_VP8X_XMP_BIT;
}
if (has_exif) {
payload.pData_[0] |= WEBP_VP8X_EXIF_BIT;
} else {
payload.pData_[0] &= ~WEBP_VP8X_EXIF_BIT;
}
if (outIo.write(chunkId.pData_, WEBP_TAG_SIZE) != WEBP_TAG_SIZE)
throw Error(kerImageWriteFailed);
if (outIo.write(size_buff, WEBP_TAG_SIZE) != WEBP_TAG_SIZE)
throw Error(kerImageWriteFailed);
if (outIo.write(payload.pData_, payload.size_) != payload.size_)
throw Error(kerImageWriteFailed);
if (outIo.tell() % 2) {
if (outIo.write(&WEBP_PAD_ODD, 1) != 1) throw Error(kerImageWriteFailed);
}
if (has_icc) {
if (outIo.write((const byte*)WEBP_CHUNK_HEADER_ICCP, WEBP_TAG_SIZE) != WEBP_TAG_SIZE) throw Error(kerImageWriteFailed);
ul2Data(data, (uint32_t) iccProfile_.size_, littleEndian);
if (outIo.write(data, WEBP_TAG_SIZE) != WEBP_TAG_SIZE) throw Error(kerImageWriteFailed);
if (outIo.write(iccProfile_.pData_, iccProfile_.size_) != iccProfile_.size_) {
throw Error(kerImageWriteFailed);
}
has_icc = false;
}
} else if (equalsWebPTag(chunkId, WEBP_CHUNK_HEADER_ICCP)) {
// Skip it altogether; it was handled prior to here :)
} else if (equalsWebPTag(chunkId, WEBP_CHUNK_HEADER_EXIF)) {
// Skip and add new data afterwards
} else if (equalsWebPTag(chunkId, WEBP_CHUNK_HEADER_XMP)) {
// Skip and add new data afterwards
} else {
if (outIo.write(chunkId.pData_, WEBP_TAG_SIZE) != WEBP_TAG_SIZE)
throw Error(kerImageWriteFailed);
if (outIo.write(size_buff, WEBP_TAG_SIZE) != WEBP_TAG_SIZE)
throw Error(kerImageWriteFailed);
if (outIo.write(payload.pData_, payload.size_) != payload.size_)
throw Error(kerImageWriteFailed);
}
// Encoder required to pad odd sized data with a null byte
if (outIo.tell() % 2) {
if (outIo.write(&WEBP_PAD_ODD, 1) != 1) throw Error(kerImageWriteFailed);
}
}
if (has_exif) {
if (outIo.write((const byte*)WEBP_CHUNK_HEADER_EXIF, WEBP_TAG_SIZE) != WEBP_TAG_SIZE) throw Error(kerImageWriteFailed);
us2Data(data, (uint16_t) blob.size()+8, bigEndian);
ul2Data(data, (uint32_t) blob.size(), littleEndian);
if (outIo.write(data, WEBP_TAG_SIZE) != WEBP_TAG_SIZE) throw Error(kerImageWriteFailed);
if (outIo.write((const byte*)&blob[0], static_cast<long>(blob.size())) != (long)blob.size())
{
throw Error(kerImageWriteFailed);
}
if (outIo.tell() % 2) {
if (outIo.write(&WEBP_PAD_ODD, 1) != 1) throw Error(kerImageWriteFailed);
}
}
if (has_xmp) {
if (outIo.write((const byte*)WEBP_CHUNK_HEADER_XMP, WEBP_TAG_SIZE) != WEBP_TAG_SIZE) throw Error(kerImageWriteFailed);
ul2Data(data, (uint32_t) xmpPacket().size(), littleEndian);
if (outIo.write(data, WEBP_TAG_SIZE) != WEBP_TAG_SIZE) throw Error(kerImageWriteFailed);
if (outIo.write((const byte*)xmp.data(), static_cast<long>(xmp.size())) != (long)xmp.size()) {
throw Error(kerImageWriteFailed);
}
if (outIo.tell() % 2) {
if (outIo.write(&WEBP_PAD_ODD, 1) != 1) throw Error(kerImageWriteFailed);
}
}
// Fix File Size Payload Data
outIo.seek(0, BasicIo::beg);
filesize = outIo.size() - 8;
outIo.seek(4, BasicIo::beg);
ul2Data(data, (uint32_t) filesize, littleEndian);
if (outIo.write(data, WEBP_TAG_SIZE) != WEBP_TAG_SIZE) throw Error(kerImageWriteFailed);
} // WebPImage::writeMetadata
| 0 | ["CWE-190"] | exiv2 | c73d1e27198a389ce7caf52ac30f8e2120acdafd | 184,435,689,863,349,250,000,000,000,000,000,000,000 | 279 |
Avoid negative integer overflow when `filesize < io_->tell()`.
This fixes #791.
|
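The overflow named in the message is the unsigned-subtraction trap: when `filesize < io_->tell()`, an expression like `filesize - tell` wraps to a huge value and the read loops run far past the end. The cure is to compare before subtracting, sketched here outside Exiv2's classes:

```c
#include <stdint.h>

static uint64_t remaining_bytes(uint64_t filesize, uint64_t tell)
{
    if (tell >= filesize)
        return 0;               /* at or past the declared end */
    return filesize - tell;     /* cannot wrap */
}
```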
xmlCtxtReadDoc(xmlParserCtxtPtr ctxt, const xmlChar * cur,
const char *URL, const char *encoding, int options)
{
xmlParserInputPtr stream;
if (cur == NULL)
return (NULL);
if (ctxt == NULL)
return (NULL);
xmlInitParser();
xmlCtxtReset(ctxt);
stream = xmlNewStringInputStream(ctxt, cur);
if (stream == NULL) {
return (NULL);
}
inputPush(ctxt, stream);
return (xmlDoRead(ctxt, URL, encoding, options, 1));
}
| 0 | [] | libxml2 | 9cd1c3cfbd32655d60572c0a413e017260c854df | 201,332,048,893,984,500,000,000,000,000,000,000,000 | 20 |
Do not fetch external parameter entities
Unless explicitely asked for when validating or replacing entities
with their value. Problem pointed out by Daniel Berrange <[email protected]>
|
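Read as parser policy, the message says: fetch external parameter entities only when the caller opted into entity substitution or DTD validation. XML_PARSE_NOENT and XML_PARSE_DTDVALID are real libxml2 option flags, but mapping the policy onto exactly these two flags is this sketch's assumption, as is the helper name:

```c
#include <libxml/parser.h>

static int should_load_external_pe(int options)
{
    /* Only an explicit opt-in justifies fetching external content. */
    return (options & (XML_PARSE_NOENT | XML_PARSE_DTDVALID)) != 0;
}
```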
sasl_session_abort(struct sasl_session *const restrict p)
{
(void) sasl_sts(p->uid, 'D', "F");
(void) sasl_session_destroy(p);
}
| 0 | ["CWE-287", "CWE-288"] | atheme | 4e664c75d0b280a052eb8b5e81aa41944e593c52 | 63,679,971,948,115,280,000,000,000,000,000,000,000 | 5 |
saslserv/main: Track EID we're pending login to
The existing model does not remember that we've sent a SVSLOGIN for a
given SASL session, and simply assumes that if a client is introduced
with a SASL session open, that session must have succeeded. The security
of this approach requires ircd to implicitly abort SASL sessions on
client registration.
This also means that if a client successfully authenticates and then
does something else its pending login is forgotten about, even though a
SVSLOGIN has been sent for it, and the ircd is going to think it's
logged in.
This change removes the dependency on ircd's state machine by keeping
explicit track of the pending login, i.e. the one we've most recently
sent a SVSLOGIN for. The next commit will ensure that a client abort
(even an implicit one) doesn't blow that information away.
|
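A sketch of the bookkeeping the commit adds: the session remembers the EID of the most recent SVSLOGIN instead of inferring success from the ircd's registration behaviour, and an abort clears it. The struct and helpers are invented for illustration, not atheme's real types:

```c
#include <string.h>

struct sasl_session_sketch {
    char uid[32];
    char pending_eid[32];   /* EID we're pending login to; "" if none */
};

static void note_svslogin(struct sasl_session_sketch *p, const char *eid)
{
    strncpy(p->pending_eid, eid, sizeof p->pending_eid - 1);
    p->pending_eid[sizeof p->pending_eid - 1] = '\0';
}

static void forget_pending(struct sasl_session_sketch *p)
{
    p->pending_eid[0] = '\0';   /* an abort must also drop the login */
}
```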
void ipmi_smi_watchdog_pretimeout(struct ipmi_smi *intf)
{
if (intf->in_shutdown)
return;
atomic_set(&intf->watchdog_pretimeouts_to_deliver, 1);
tasklet_schedule(&intf->recv_tasklet);
}
| 0 | ["CWE-416", "CWE-284"] | linux | 77f8269606bf95fcb232ee86f6da80886f1dfae8 | 20,370,976,612,373,712,000,000,000,000,000,000,000 | 8 |
ipmi: fix use-after-free of user->release_barrier.rda
When we do the following test, we get an oops in the ipmi_msghandler driver:
while((1))
do
service ipmievd restart & service ipmievd restart
done
---------------------------------------------------------------
[ 294.230186] Unable to handle kernel paging request at virtual address 0000803fea6ea008
[ 294.230188] Mem abort info:
[ 294.230190] ESR = 0x96000004
[ 294.230191] Exception class = DABT (current EL), IL = 32 bits
[ 294.230193] SET = 0, FnV = 0
[ 294.230194] EA = 0, S1PTW = 0
[ 294.230195] Data abort info:
[ 294.230196] ISV = 0, ISS = 0x00000004
[ 294.230197] CM = 0, WnR = 0
[ 294.230199] user pgtable: 4k pages, 48-bit VAs, pgdp = 00000000a1c1b75a
[ 294.230201] [0000803fea6ea008] pgd=0000000000000000
[ 294.230204] Internal error: Oops: 96000004 [#1] SMP
[ 294.235211] Modules linked in: nls_utf8 isofs rpcrdma ib_iser ib_srpt target_core_mod ib_srp scsi_transport_srp ib_ipoib rdma_ucm ib_umad rdma_cm ib_cm iw_cm dm_mirror dm_region_hash dm_log dm_mod aes_ce_blk crypto_simd cryptd aes_ce_cipher ghash_ce sha2_ce ses sha256_arm64 sha1_ce hibmc_drm hisi_sas_v2_hw enclosure sg hisi_sas_main sbsa_gwdt ip_tables mlx5_ib ib_uverbs marvell ib_core mlx5_core ixgbe ipmi_si mdio hns_dsaf ipmi_devintf ipmi_msghandler hns_enet_drv hns_mdio
[ 294.277745] CPU: 3 PID: 0 Comm: swapper/3 Kdump: loaded Not tainted 5.0.0-rc2+ #113
[ 294.285511] Hardware name: Huawei TaiShan 2280 /BC11SPCD, BIOS 1.37 11/21/2017
[ 294.292835] pstate: 80000005 (Nzcv daif -PAN -UAO)
[ 294.297695] pc : __srcu_read_lock+0x38/0x58
[ 294.301940] lr : acquire_ipmi_user+0x2c/0x70 [ipmi_msghandler]
[ 294.307853] sp : ffff00001001bc80
[ 294.311208] x29: ffff00001001bc80 x28: ffff0000117e5000
[ 294.316594] x27: 0000000000000000 x26: dead000000000100
[ 294.321980] x25: dead000000000200 x24: ffff803f6bd06800
[ 294.327366] x23: 0000000000000000 x22: 0000000000000000
[ 294.332752] x21: ffff00001001bd04 x20: ffff80df33d19018
[ 294.338137] x19: ffff80df33d19018 x18: 0000000000000000
[ 294.343523] x17: 0000000000000000 x16: 0000000000000000
[ 294.348908] x15: 0000000000000000 x14: 0000000000000002
[ 294.354293] x13: 0000000000000000 x12: 0000000000000000
[ 294.359679] x11: 0000000000000000 x10: 0000000000100000
[ 294.365065] x9 : 0000000000000000 x8 : 0000000000000004
[ 294.370451] x7 : 0000000000000000 x6 : ffff80df34558678
[ 294.375836] x5 : 000000000000000c x4 : 0000000000000000
[ 294.381221] x3 : 0000000000000001 x2 : 0000803fea6ea000
[ 294.386607] x1 : 0000803fea6ea008 x0 : 0000000000000001
[ 294.391994] Process swapper/3 (pid: 0, stack limit = 0x0000000083087293)
[ 294.398791] Call trace:
[ 294.401266] __srcu_read_lock+0x38/0x58
[ 294.405154] acquire_ipmi_user+0x2c/0x70 [ipmi_msghandler]
[ 294.410716] deliver_response+0x80/0xf8 [ipmi_msghandler]
[ 294.416189] deliver_local_response+0x28/0x68 [ipmi_msghandler]
[ 294.422193] handle_one_recv_msg+0x158/0xcf8 [ipmi_msghandler]
[ 294.432050] handle_new_recv_msgs+0xc0/0x210 [ipmi_msghandler]
[ 294.441984] smi_recv_tasklet+0x8c/0x158 [ipmi_msghandler]
[ 294.451618] tasklet_action_common.isra.5+0x88/0x138
[ 294.460661] tasklet_action+0x2c/0x38
[ 294.468191] __do_softirq+0x120/0x2f8
[ 294.475561] irq_exit+0x134/0x140
[ 294.482445] __handle_domain_irq+0x6c/0xc0
[ 294.489954] gic_handle_irq+0xb8/0x178
[ 294.497037] el1_irq+0xb0/0x140
[ 294.503381] arch_cpu_idle+0x34/0x1a8
[ 294.510096] do_idle+0x1d4/0x290
[ 294.516322] cpu_startup_entry+0x28/0x30
[ 294.523230] secondary_start_kernel+0x184/0x1d0
[ 294.530657] Code: d538d082 d2800023 8b010c81 8b020021 (c85f7c25)
[ 294.539746] ---[ end trace 8a7a880dee570b29 ]---
[ 294.547341] Kernel panic - not syncing: Fatal exception in interrupt
[ 294.556837] SMP: stopping secondary CPUs
[ 294.563996] Kernel Offset: disabled
[ 294.570515] CPU features: 0x002,21006008
[ 294.577638] Memory Limit: none
[ 294.587178] Starting crashdump kernel...
[ 294.594314] Bye!
Because the user->release_barrier.rda is freed in ipmi_destroy_user(), but
the refcount is not zero, when acquire_ipmi_user() uses user->release_barrier.rda
in __srcu_read_lock(), it causes oops.
Fix this by calling cleanup_srcu_struct() when the refcount is zero.
Fixes: e86ee2d44b44 ("ipmi: Rework locking and shutdown for hot remove")
Cc: [email protected] # 4.18
Signed-off-by: Yang Yingliang <[email protected]>
Signed-off-by: Corey Minyard <[email protected]>
|
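Stripped of the SRCU details, the fix follows the standard last-reference rule: per-user teardown (cleanup_srcu_struct on release_barrier in the real driver) may only run once the refcount hits zero, because readers can still be inside the structure until then. A generic C11 sketch:

```c
#include <stdatomic.h>
#include <stdlib.h>

struct ipmi_user_sketch {
    atomic_int refcount;
    /* srcu_struct release_barrier; ... in the real driver */
};

static void ipmi_put_user_sketch(struct ipmi_user_sketch *u)
{
    if (atomic_fetch_sub(&u->refcount, 1) == 1) {
        /* Last reference gone: only now is it safe to run
           cleanup_srcu_struct() and free; doing it earlier while the
           count is nonzero is the reported use-after-free. */
        free(u);
    }
}
```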
static void nlm_put_lockowner(struct nlm_lockowner *lockowner)
{
if (!atomic_dec_and_lock(&lockowner->count, &lockowner->host->h_lock))
return;
list_del(&lockowner->list);
spin_unlock(&lockowner->host->h_lock);
nlmclnt_release_host(lockowner->host);
kfree(lockowner);
}
| 0 | ["CWE-400", "CWE-399", "CWE-703"] | linux | 0b760113a3a155269a3fba93a409c640031dd68f | 283,266,119,980,611,160,000,000,000,000,000,000,000 | 9 |
NLM: Don't hang forever on NLM unlock requests
If the NLM daemon is killed on the NFS server, we can currently end up
hanging forever on an 'unlock' request, instead of aborting. Basically,
if the rpcbind request fails, or the server keeps returning garbage, we
really want to quit instead of retrying.
Tested-by: Vasily Averin <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]>
Cc: [email protected]
|
void dccp_destroy_sock(struct sock *sk)
{
struct dccp_sock *dp = dccp_sk(sk);
__skb_queue_purge(&sk->sk_write_queue);
if (sk->sk_send_head != NULL) {
kfree_skb(sk->sk_send_head);
sk->sk_send_head = NULL;
}
/* Clean up a referenced DCCP bind bucket. */
if (inet_csk(sk)->icsk_bind_hash != NULL)
inet_put_port(sk);
kfree(dp->dccps_service_list);
dp->dccps_service_list = NULL;
if (dp->dccps_hc_rx_ackvec != NULL) {
dccp_ackvec_free(dp->dccps_hc_rx_ackvec);
dp->dccps_hc_rx_ackvec = NULL;
}
ccid_hc_rx_delete(dp->dccps_hc_rx_ccid, sk);
dp->dccps_hc_rx_ccid = NULL;
/* clean up feature negotiation state */
dccp_feat_list_purge(&dp->dccps_featneg);
}
| 0 | ["CWE-416"] | linux | 69c64866ce072dea1d1e59a0d61e0f66c0dffb76 | 264,867,360,036,858,840,000,000,000,000,000,000,000 | 27 |
dccp: CVE-2017-8824: use-after-free in DCCP code
Whenever the sock object is in DCCP_CLOSED state,
dccp_disconnect() must free dccps_hc_tx_ccid and
dccps_hc_rx_ccid and set to NULL.
Signed-off-by: Mohamed Ghannam <[email protected]>
Reviewed-by: Eric Dumazet <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
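The invariant stated above (must free and set to NULL) is the usual free-and-null discipline, sketched with simplified stand-ins for the dccp_sock fields:

```c
#include <stdlib.h>

struct dccp_sock_sketch {
    void *hc_tx_ccid;
    void *hc_rx_ccid;
};

static void disconnect_sketch(struct dccp_sock_sketch *dp)
{
    free(dp->hc_tx_ccid);
    dp->hc_tx_ccid = NULL;   /* a later destroy sees NULL, not a stale ptr */
    free(dp->hc_rx_ccid);
    dp->hc_rx_ccid = NULL;
}
```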
xmlBufferAddHead(xmlBufferPtr buf, const xmlChar *str, int len) {
unsigned int needSize;
if (buf == NULL)
return(-1);
if (buf->alloc == XML_BUFFER_ALLOC_IMMUTABLE) return -1;
if (str == NULL) {
#ifdef DEBUG_BUFFER
xmlGenericError(xmlGenericErrorContext,
"xmlBufferAddHead: str == NULL\n");
#endif
return -1;
}
if (len < -1) {
#ifdef DEBUG_BUFFER
xmlGenericError(xmlGenericErrorContext,
"xmlBufferAddHead: len < 0\n");
#endif
return -1;
}
if (len == 0) return 0;
if (len < 0)
len = xmlStrlen(str);
if (len <= 0) return -1;
if ((buf->alloc == XML_BUFFER_ALLOC_IO) && (buf->contentIO != NULL)) {
size_t start_buf = buf->content - buf->contentIO;
if (start_buf > (unsigned int) len) {
/*
* We can add it in the space previously shrunk
*/
buf->content -= len;
memmove(&buf->content[0], str, len);
buf->use += len;
buf->size += len;
return(0);
}
}
needSize = buf->use + len + 2;
if (needSize > buf->size){
if (!xmlBufferResize(buf, needSize)){
xmlTreeErrMemory("growing buffer");
return XML_ERR_NO_MEMORY;
}
}
memmove(&buf->content[len], &buf->content[0], buf->use);
memmove(&buf->content[0], str, len);
buf->use += len;
buf->content[buf->use] = 0;
return 0;
}
| 0 | ["CWE-190"] | libxml2 | 6c283d83eccd940bcde15634ac8c7f100e3caefd | 115,145,151,552,209,540,000,000,000,000,000,000,000 | 55 |
[CVE-2022-29824] Fix integer overflows in xmlBuf and xmlBuffer
In several places, the code handling string buffers didn't check for
integer overflow or used wrong types for buffer sizes. This could
result in out-of-bounds writes or other memory errors when working on
large, multi-gigabyte buffers.
Thanks to Felix Wilhelm for the report.
|
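The vulnerable pattern is visible in the function above: `needSize = buf->use + len + 2` in unsigned arithmetic can wrap on multi-gigabyte buffers, so the resize becomes a no-op and the following memmove writes out of bounds. A hedged sketch of the kind of pre-check such fixes add:

```c
#include <stddef.h>
#include <stdint.h>

static int grow_size_ok(size_t use, size_t len, size_t *need_out)
{
    if (use > SIZE_MAX - 2 || len > SIZE_MAX - 2 - use)
        return 0;             /* would wrap: refuse instead */
    *need_out = use + len + 2;
    return 1;
}
```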
R_API R_OWN char *r_anal_function_autoname_var(RAnalFunction *fcn, char kind, const char *pfx, int ptr) {
void **it;
const ut32 uptr = R_ABS (ptr);
char *varname = r_str_newf ("%s_%xh", pfx, uptr);
r_pvector_foreach (&fcn->vars, it) {
RAnalVar *var = *it;
if (!strcmp (varname, var->name)) {
if (var->kind != kind) {
const char *k = kind == R_ANAL_VAR_KIND_SPV ? "sp" : "bp";
free (varname);
varname = r_str_newf ("%s_%s_%xh", pfx, k, uptr);
return varname;
}
int i = 2;
do {
free (varname);
varname = r_str_newf ("%s_%xh_%u", pfx, uptr, i++);
} while (r_anal_function_get_var_byname (fcn, varname));
return varname;
}
}
return varname;
}
| 0 | ["CWE-416"] | radare2 | a7ce29647fcb38386d7439696375e16e093d6acb | 75,865,278,660,347,590,000,000,000,000,000,000,000 | 23 |
Fix UAF in aaaa on arm/thumb switching ##crash
* Reported by @peacock-doris via huntr.dev
* Reproducer tests_65185
* This is a logic fix, but not a fully safe one, as changes in the code
can result in a UAF again; to properly protect r2 from crashing we
need to break the ABI and add refcounting to RRegItem, which can't
happen in 5.6.x because of abi-compat rules
|
ssize_t xdr_stream_decode_string(struct xdr_stream *xdr, char *str, size_t size)
{
ssize_t ret;
void *p;
ret = xdr_stream_decode_opaque_inline(xdr, &p, size);
if (ret > 0) {
memcpy(str, p, ret);
str[ret] = '\0';
return strlen(str);
}
*str = '\0';
return ret;
}
| 0 | ["CWE-119", "CWE-787"] | linux | 6d1c0f3d28f98ea2736128ed3e46821496dc3a8c | 99,743,373,570,186,540,000,000,000,000,000,000,000 | 14 |
sunrpc: Avoid a KASAN slab-out-of-bounds bug in xdr_set_page_base()
This seems to happen fairly easily during READ_PLUS testing on NFS v4.2.
I found that we could end up accessing xdr->buf->pages[pgnr] with a pgnr
greater than the number of pages in the array. So let's just return
early if we're setting base to a point at the end of the page data and
let xdr_set_tail_base() handle setting up the buffer pointers instead.
Signed-off-by: Anna Schumaker <[email protected]>
Fixes: 8d86e373b0ef ("SUNRPC: Clean up helpers xdr_set_iov() and xdr_set_page_base()")
Signed-off-by: Trond Myklebust <[email protected]>
|
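As described, the fix is an early return: when the requested base already sits at the end of the page data, skip the page-array indexing entirely instead of computing an out-of-range pgnr. In outline, with simplified names:

```c
#include <stddef.h>

static int set_page_base_sketch(size_t base, size_t page_data_len)
{
    if (base >= page_data_len)
        return -1;            /* let the tail-buffer path handle it */
    /* safe: base maps to a valid pages[pgnr] from here on */
    return 0;
}
```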
void do_get_replace(struct st_command *command)
{
uint i;
char *from= command->first_argument;
char *buff, *start;
char word_end_chars[256], *pos;
POINTER_ARRAY to_array, from_array;
DBUG_ENTER("get_replace");
free_replace();
bzero((char*) &to_array,sizeof(to_array));
bzero((char*) &from_array,sizeof(from_array));
if (!*from)
die("Missing argument in %s", command->query);
start= buff= (char*)my_malloc(strlen(from)+1,MYF(MY_WME | MY_FAE));
while (*from)
{
char *to= buff;
to= get_string(&buff, &from, command);
if (!*from)
die("Wrong number of arguments to replace_result in '%s'",
command->query);
#ifdef __WIN__
fix_win_paths(to, from - to);
#endif
insert_pointer_name(&from_array,to);
to= get_string(&buff, &from, command);
insert_pointer_name(&to_array,to);
}
for (i= 1,pos= word_end_chars ; i < 256 ; i++)
if (my_isspace(charset_info,i))
*pos++= i;
*pos=0; /* End pointer */
if (!(glob_replace= init_replace((char**) from_array.typelib.type_names,
(char**) to_array.typelib.type_names,
(uint) from_array.typelib.count,
word_end_chars)))
die("Can't initialize replace from '%s'", command->query);
free_pointer_array(&from_array);
free_pointer_array(&to_array);
my_free(start);
command->last_argument= command->end;
DBUG_VOID_RETURN;
}
| 0 | ["CWE-295"] | mysql-server | b3e9211e48a3fb586e88b0270a175d2348935424 | 272,518,426,999,717,600,000,000,000,000,000,000,000 | 45 |
WL#9072: Backport WL#8785 to 5.5
|
unsigned find_get_pages(struct address_space *mapping, pgoff_t start,
unsigned int nr_pages, struct page **pages)
{
unsigned int i;
unsigned int ret;
read_lock_irq(&mapping->tree_lock);
ret = radix_tree_gang_lookup(&mapping->page_tree,
(void **)pages, start, nr_pages);
for (i = 0; i < ret; i++)
page_cache_get(pages[i]);
read_unlock_irq(&mapping->tree_lock);
return ret;
}
| 0 | ["CWE-20"] | linux | 124d3b7041f9a0ca7c43a6293e1cae4576c32fd5 | 246,910,004,225,052,250,000,000,000,000,000,000,000 | 14 |
fix writev regression: pan hanging unkillable and un-straceable
Frederik Himpe reported an unkillable and un-straceable pan process.
Zero length iovecs can go into an infinite loop in writev, because the
iovec iterator does not always advance over them.
The sequence required to trigger this is not trivial. I think it
requires that a zero-length iovec be followed by a non-zero-length iovec
which causes a pagefault in the atomic usercopy. This causes the writev
code to drop back into single-segment copy mode, which then tries to
copy the 0 bytes of the zero-length iovec; a zero length copy looks like
a failure though, so it loops.
Put a test into iov_iter_advance to catch zero-length iovecs. We could
just put the test in the fallback path, but I feel it is more robust to
skip over zero-length iovecs throughout the code (iovec iterator may be
used in filesystems too, so it should be robust).
Signed-off-by: Nick Piggin <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
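The hang dissected above comes from an iterator that fails to make progress over zero-length segments. A simplified advance routine showing the fix's idea: always step past entries the offset has consumed, including 0-byte ones. The caller must keep `bytes` within the remaining total, since there is no end-of-array check in this sketch.

```c
#include <stddef.h>

struct iovec_sketch { void *iov_base; size_t iov_len; };

static void iov_advance_sketch(const struct iovec_sketch **iov,
                               size_t *offset, size_t bytes)
{
    *offset += bytes;
    while ((*iov)->iov_len <= *offset) {  /* also skips iov_len == 0 */
        *offset -= (*iov)->iov_len;
        (*iov)++;                         /* guaranteed progress */
    }
}
```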
static int ipv6_inherit_eui64(u8 *eui, struct inet6_dev *idev)
{
int err = -1;
struct inet6_ifaddr *ifp;
read_lock_bh(&idev->lock);
list_for_each_entry_reverse(ifp, &idev->addr_list, if_list) {
if (ifp->scope > IFA_LINK)
break;
if (ifp->scope == IFA_LINK && !(ifp->flags&IFA_F_TENTATIVE)) {
memcpy(eui, ifp->addr.s6_addr+8, 8);
err = 0;
break;
}
}
read_unlock_bh(&idev->lock);
return err;
}
| 0 | ["CWE-20"] | linux | 77751427a1ff25b27d47a4c36b12c3c8667855ac | 227,252,326,091,425,780,000,000,000,000,000,000,000 | 18 |
ipv6: addrconf: validate new MTU before applying it
Currently we don't check if the new MTU is valid or not, and this allows
one to configure a value smaller than the minimum allowed by RFCs, or even
bigger than the interface's own MTU, which is a problem as it may lead to packet
drops.
If you have a daemon like NetworkManager running, this may be exploited
by remote attackers by forging RA packets with an invalid MTU, possibly
leading to a DoS. (NetworkManager currently only validates for values
too small, but not for too big ones.)
The fix is just to make sure the new value is valid. That is, between
IPV6_MIN_MTU and interface's MTU.
Note that a similar check is already performed at
ndisc_router_discovery(), for when the kernel itself parses the RA.
Signed-off-by: Marcelo Ricardo Leitner <[email protected]>
Signed-off-by: Sabrina Dubroca <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
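The accepted range spelled out in the message, between IPV6_MIN_MTU and the interface's MTU, is a two-sided bound; IPV6_MIN_MTU is 1280 bytes per the IPv6 RFCs. A direct sketch:

```c
#include <stdbool.h>
#include <stdint.h>

#define IPV6_MIN_MTU 1280u   /* RFC 8200 minimum link MTU */

static bool ra_mtu_valid(uint32_t ra_mtu, uint32_t dev_mtu)
{
    return ra_mtu >= IPV6_MIN_MTU && ra_mtu <= dev_mtu;
}
```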
authentic_manage_sdo(struct sc_card *card, struct sc_authentic_sdo *sdo, unsigned long cmd)
{
struct sc_context *ctx = card->ctx;
struct sc_apdu apdu;
unsigned char *data = NULL;
size_t data_len = 0, save_max_send = card->max_send_size;
int rv;
LOG_FUNC_CALLED(ctx);
sc_log(ctx, "SDO(cmd:%lX,mech:%X,id:%X)", cmd, sdo->docp.mech, sdo->docp.id);
rv = authentic_manage_sdo_encode(card, sdo, cmd, &data, &data_len);
LOG_TEST_RET(ctx, rv, "Cannot encode SDO data");
sc_log(ctx, "encoded SDO length %"SC_FORMAT_LEN_SIZE_T"u", data_len);
sc_format_apdu(card, &apdu, SC_APDU_CASE_3_SHORT, 0xDB, 0x3F, 0xFF);
apdu.data = data;
apdu.datalen = data_len;
apdu.lc = data_len;
apdu.flags |= SC_APDU_FLAGS_CHAINING;
if (card->max_send_size > 255)
card->max_send_size = 255;
rv = sc_transmit_apdu(card, &apdu);
card->max_send_size = save_max_send;
LOG_TEST_RET(ctx, rv, "APDU transmit failed");
rv = sc_check_sw(card, apdu.sw1, apdu.sw2);
LOG_TEST_RET(ctx, rv, "authentic_sdo_create() SDO put data error");
free(data);
LOG_FUNC_RETURN(ctx, rv);
}
| 0 | ["CWE-125"] | OpenSC | 8fe377e93b4b56060e5bbfb6f3142ceaeca744fa | 231,912,146,194,919,720,000,000,000,000,000,000,000 | 34 |
fixed out of bounds reads
Thanks to Eric Sesterhenn from X41 D-SEC GmbH
for reporting and suggesting security fixes.
|
static int _process_persist_conn(void *arg,
persist_msg_t *persist_msg,
Buf *out_buffer, uint32_t *uid)
{
slurm_msg_t msg;
slurm_persist_conn_t *persist_conn = arg;
if (*uid == NO_VAL)
*uid = g_slurm_auth_get_uid(persist_conn->auth_cred,
slurmctld_config.auth_info);
*out_buffer = NULL;
slurm_msg_t_init(&msg);
msg.auth_cred = persist_conn->auth_cred;
msg.conn = persist_conn;
msg.conn_fd = persist_conn->fd;
msg.msg_type = persist_msg->msg_type;
msg.data = persist_msg->data;
slurmctld_req(&msg, NULL);
return SLURM_SUCCESS;
}
| 0 | ["CWE-20"] | slurm | 033dc0d1d28b8d2ba1a5187f564a01c15187eb4e | 307,197,938,331,026,100,000,000,000,000,000,000,000 | 26 |
Fix insecure handling of job requested gid.
Only trust MUNGE signed values, unless the RPC was signed by
SlurmUser or root.
CVE-2018-10995.
|
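The trust rule in the message, honoring caller-supplied identity fields only when the RPC was signed by SlurmUser or root, reduces to a small check. All names here are illustrative, not Slurm's API:

```c
#include <stdbool.h>
#include <sys/types.h>

static bool may_override_gid(uid_t auth_uid, uid_t slurm_user_uid)
{
    /* Only SlurmUser or root may assert an arbitrary gid; everyone
       else is pinned to their MUNGE-verified credentials. */
    return auth_uid == slurm_user_uid || auth_uid == 0;
}
```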
void git_treebuilder_free(git_treebuilder *bld)
{
if (bld == NULL)
return;
git_treebuilder_clear(bld);
git_vector_free(&bld->entries);
git__free(bld);
}
| 0 | ["CWE-20"] | libgit2 | 928429c5c96a701bcbcafacb2421a82602b36915 | 33,386,906,441,704,390,000,000,000,000,000,000,000 | 9 |
tree: Check for `.git` with case insensitivity
|
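The one-line message covers a classic case-folding trap: on case-insensitive filesystems a tree entry named `.GIT` aliases the real `.git` directory, so the check must compare case-insensitively. A minimal sketch using POSIX strcasecmp; libgit2's actual helper differs:

```c
#include <strings.h>   /* strcasecmp */

static int name_is_dot_git(const char *name)
{
    return strcasecmp(name, ".git") == 0;
}
```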
bm_delta2_search (char const **tpp, char const *ep, char const *sp, int len,
char const *trans, char gc1, char gc2,
unsigned char const *d1, kwset_t kwset)
{
char const *tp = *tpp;
int d = len, skip = 0;
while (true)
{
int i = 2;
if (tr (trans, tp[-2]) == gc2)
{
while (++i <= d)
if (tr (trans, tp[-i]) != tr (trans, sp[-i]))
break;
if (i > d)
{
for (i = d + skip + 1; i <= len; ++i)
if (tr (trans, tp[-i]) != tr (trans, sp[-i]))
break;
if (i > len)
{
*tpp = tp - len;
return true;
}
}
}
tp += d = kwset->shift[i - 2];
if (tp > ep)
break;
if (tr (trans, tp[-1]) != gc1)
{
if (d1)
tp += d1[U(tp[-1])];
break;
}
skip = i - 1;
}
*tpp = tp;
return false;
}
| 0 | ["CWE-119"] | grep | 83a95bd8c8561875b948cadd417c653dbe7ef2e2 | 203,298,987,497,166,270,000,000,000,000,000,000,000 | 43 |
grep -F: fix a heap buffer (read) overrun
grep's read buffer is often filled to its full size, except when
reading the final buffer of a file. In that case, the number of
bytes read may be far less than the size of the buffer. However, for
certain unusual pattern/text combinations, grep -F would mistakenly
examine bytes in that uninitialized region of memory when searching
for a match. With carefully chosen inputs, one can cause grep -F to
read beyond the end of that buffer altogether. This problem arose via
commit v2.18-90-g73893ff with the introduction of a more efficient
heuristic using what is now the memchr_kwset function. The use of
that function in bmexec_trans could leave TP much larger than EP,
and the subsequent call to bm_delta2_search would mistakenly access
beyond end of the main input read buffer.
* src/kwset.c (bmexec_trans): When TP reaches or exceeds EP,
do not call bm_delta2_search.
* tests/kwset-abuse: New file.
* tests/Makefile.am (TESTS): Add it.
* THANKS.in: Update.
* NEWS (Bug fixes): Mention it.
Prior to this patch, this command would trigger a UMR:
printf %0360db 0 | valgrind src/grep -F $(printf %019dXb 0)
Use of uninitialised value of size 8
at 0x4142BE: bmexec_trans (kwset.c:657)
by 0x4143CA: bmexec (kwset.c:678)
by 0x414973: kwsexec (kwset.c:848)
by 0x414DC4: Fexecute (kwsearch.c:128)
by 0x404E2E: grepbuf (grep.c:1238)
by 0x4054BF: grep (grep.c:1417)
by 0x405CEB: grepdesc (grep.c:1645)
by 0x405EC1: grep_command_line_arg (grep.c:1692)
by 0x4077D4: main (grep.c:2570)
See the accompanying test for how to trigger the heap buffer overrun.
Thanks to Nima Aghdaii for testing and finding numerous
ways to break early iterations of this patch.
|
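The line "When TP reaches or exceeds EP, do not call bm_delta2_search" is the whole fix: bm_delta2_search (shown above) reads backwards from tp, so entering it with tp at or past the end of the initialized buffer walks uninitialized or unmapped memory. Sketched as a guard around a stubbed matcher:

```c
#include <stdbool.h>

/* Stub standing in for bm_delta2_search's backward reads. */
static bool matcher_stub(char const **tpp, char const *ep)
{
    (void)tpp; (void)ep;
    return false;
}

static bool search_step(char const **tpp, char const *ep)
{
    if (*tpp >= ep)
        return false;         /* never enter the matcher past EOF */
    return matcher_stub(tpp, ep);
}
```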
const struct cpumask *sched_trace_rd_span(struct root_domain *rd)
{
#ifdef CONFIG_SMP
return rd ? rd->span : NULL;
#else
return NULL;
#endif
}
| 0 | ["CWE-400", "CWE-703"] | linux | de53fd7aedb100f03e5d2231cfce0e4993282425 | 252,889,890,226,232,300,000,000,000,000,000,000,000 | 8 |
sched/fair: Fix low cpu usage with high throttling by removing expiration of cpu-local slices
It has been observed, that highly-threaded, non-cpu-bound applications
running under cpu.cfs_quota_us constraints can hit a high percentage of
periods throttled while simultaneously not consuming the allocated
amount of quota. This use case is typical of user-interactive non-cpu
bound applications, such as those running in kubernetes or mesos when
run on multiple cpu cores.
This has been root caused to cpu-local run queue being allocated per cpu
bandwidth slices, and then not fully using that slice within the period.
At which point the slice and quota expires. This expiration of unused
slice results in applications not being able to utilize the quota for
which they are allocated.
The non-expiration of per-cpu slices was recently fixed by
'commit 512ac999d275 ("sched/fair: Fix bandwidth timer clock drift
condition")'. Prior to that it appears that this had been broken since
at least 'commit 51f2176d74ac ("sched/fair: Fix unlocked reads of some
cfs_b->quota/period")' which was introduced in v3.16-rc1 in 2014. That
added the following conditional which resulted in slices never being
expired.
if (cfs_rq->runtime_expires != cfs_b->runtime_expires) {
/* extend local deadline, drift is bounded above by 2 ticks */
cfs_rq->runtime_expires += TICK_NSEC;
Because this was broken for nearly 5 years, and has recently been fixed
and is now being noticed by many users running kubernetes
(https://github.com/kubernetes/kubernetes/issues/67577) it is my opinion
that the mechanisms around expiring runtime should be removed
altogether.
This allows quota already allocated to per-cpu run-queues to live longer
than the period boundary. This allows threads on runqueues that do not
use much CPU to continue to use their remaining slice over a longer
period of time than cpu.cfs_period_us. However, this helps prevent the
above condition of hitting throttling while also not fully utilizing
your cpu quota.
This theoretically allows a machine to use slightly more than its
allotted quota in some periods. This overflow would be bounded by the
remaining quota left on each per-cpu runqueueu. This is typically no
more than min_cfs_rq_runtime=1ms per cpu. For CPU bound tasks this will
change nothing, as they should theoretically fully utilize all of their
quota in each period. For user-interactive tasks as described above this
provides a much better user/application experience as their cpu
utilization will more closely match the amount they requested when they
hit throttling. This means that cpu limits no longer strictly apply per
period for non-cpu bound applications, but that they are still accurate
over longer timeframes.
This greatly improves performance of high-thread-count, non-cpu bound
applications with low cfs_quota_us allocation on high-core-count
machines. In the case of an artificial testcase (10ms/100ms of quota on
80 CPU machine), this commit resulted in almost 30x performance
improvement, while still maintaining correct cpu quota restrictions.
That testcase is available at https://github.com/indeedeng/fibtest.
Fixes: 512ac999d275 ("sched/fair: Fix bandwidth timer clock drift condition")
Signed-off-by: Dave Chiluk <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Phil Auld <[email protected]>
Reviewed-by: Ben Segall <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: John Hammond <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Kyle Anderson <[email protected]>
Cc: Gabriel Munos <[email protected]>
Cc: Peter Oskolkov <[email protected]>
Cc: Cong Wang <[email protected]>
Cc: Brendan Gregg <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
|
SV *
Perl__get_regclass_nonbitmap_data(pTHX_ const regexp *prog,
const regnode* node,
bool doinit,
SV** listsvp,
SV** only_utf8_locale_ptr,
SV** output_invlist)
{
/* For internal core use only.
* Returns the inversion list for the input 'node' in the regex 'prog'.
* If <doinit> is 'true', will attempt to create the inversion list if not
* already done.
* If <listsvp> is non-null, will return the printable contents of the
* property definition. This can be used to get debugging information
* even before the inversion list exists, by calling this function with
* 'doinit' set to false, in which case the components that will be used
* to eventually create the inversion list are returned (in a printable
* form).
* If <only_utf8_locale_ptr> is not NULL, it is where this routine is to
* store an inversion list of code points that should match only if the
* execution-time locale is a UTF-8 one.
* If <output_invlist> is not NULL, it is where this routine is to store an
* inversion list of the code points that would be instead returned in
* <listsvp> if this were NULL. Thus, what gets output in <listsvp>
* when this parameter is used, is just the non-code point data that
* will go into creating the inversion list. This currently should be just
* user-defined properties whose definitions were not known at compile
* time. Using this parameter allows for easier manipulation of the
* inversion list's data by the caller. It is illegal to call this
* function with this parameter set, but not <listsvp>
*
* Tied intimately to how S_set_ANYOF_arg sets up the data structure. Note
* that, in spite of this function's name, the inversion list it returns
* may include the bitmap data as well */
SV *si = NULL; /* Input initialization string */
SV* invlist = NULL;
RXi_GET_DECL(prog, progi);
const struct reg_data * const data = prog ? progi->data : NULL;
PERL_ARGS_ASSERT__GET_REGCLASS_NONBITMAP_DATA;
assert(! output_invlist || listsvp);
if (data && data->count) {
const U32 n = ARG(node);
if (data->what[n] == 's') {
SV * const rv = MUTABLE_SV(data->data[n]);
AV * const av = MUTABLE_AV(SvRV(rv));
SV **const ary = AvARRAY(av);
invlist = ary[INVLIST_INDEX];
if (av_tindex_skip_len_mg(av) >= ONLY_LOCALE_MATCHES_INDEX) {
*only_utf8_locale_ptr = ary[ONLY_LOCALE_MATCHES_INDEX];
}
if (av_tindex_skip_len_mg(av) >= DEFERRED_USER_DEFINED_INDEX) {
si = ary[DEFERRED_USER_DEFINED_INDEX];
}
if (doinit && (si || invlist)) {
if (si) {
bool user_defined;
SV * msg = newSVpvs_flags("", SVs_TEMP);
SV * prop_definition = handle_user_defined_property(
"", 0, FALSE, /* There is no \p{}, \P{} */
SvPVX_const(si)[1] - '0', /* /i or not has been
stored here for just
this occasion */
TRUE, /* run time */
FALSE, /* This call must find the defn */
si, /* The property definition */
&user_defined,
msg,
0 /* base level call */
);
if (SvCUR(msg)) {
assert(prop_definition == NULL);
Perl_croak(aTHX_ "%" UTF8f,
UTF8fARG(SvUTF8(msg), SvCUR(msg), SvPVX(msg)));
}
if (invlist) {
_invlist_union(invlist, prop_definition, &invlist);
SvREFCNT_dec_NN(prop_definition);
}
else {
invlist = prop_definition;
}
STATIC_ASSERT_STMT(ONLY_LOCALE_MATCHES_INDEX == 1 + INVLIST_INDEX);
STATIC_ASSERT_STMT(DEFERRED_USER_DEFINED_INDEX == 1 + ONLY_LOCALE_MATCHES_INDEX);
av_store(av, INVLIST_INDEX, invlist);
av_fill(av, (ary[ONLY_LOCALE_MATCHES_INDEX])
? ONLY_LOCALE_MATCHES_INDEX:
INVLIST_INDEX);
si = NULL;
}
}
}
}
/* If requested, return a printable version of what this ANYOF node matches
* */
if (listsvp) {
SV* matches_string = NULL;
/* This function can be called at compile-time, before everything gets
* resolved, in which case we return the currently best available
* information, which is the string that will eventually be used to do
* that resolving, 'si' */
if (si) {
/* Here, we only have 'si' (and possibly some passed-in data in
* 'invlist', which is handled below). If the caller only wants
* 'si', use that. */
if (! output_invlist) {
matches_string = newSVsv(si);
}
else {
/* But if the caller wants an inversion list of the node, we
* need to parse 'si' and place as much as possible in the
* desired output inversion list, making 'matches_string' only
* contain the currently unresolvable things */
const char *si_string = SvPVX(si);
STRLEN remaining = SvCUR(si);
UV prev_cp = 0;
U8 count = 0;
/* Ignore everything before the first new-line */
while (*si_string != '\n' && remaining > 0) {
si_string++;
remaining--;
}
assert(remaining > 0);
si_string++;
remaining--;
while (remaining > 0) {
/* The data consists of just strings defining user-defined
* property names, but in prior incarnations, and perhaps
* somehow from pluggable regex engines, it could still
* hold hex code point definitions. Each component of a
* range would be separated by a tab, and each range by a
* new-line. If these are found, instead add them to the
* inversion list */
I32 grok_flags = PERL_SCAN_SILENT_ILLDIGIT
|PERL_SCAN_SILENT_NON_PORTABLE;
STRLEN len = remaining;
UV cp = grok_hex(si_string, &len, &grok_flags, NULL);
/* If the hex decode routine found something, it should go
* up to the next \n */
if ( *(si_string + len) == '\n') {
if (count) { /* 2nd code point on line */
*output_invlist = _add_range_to_invlist(*output_invlist, prev_cp, cp);
}
else {
*output_invlist = add_cp_to_invlist(*output_invlist, cp);
}
count = 0;
goto prepare_for_next_iteration;
}
/* If the hex decode was instead for the lower range limit,
* save it, and go parse the upper range limit */
if (*(si_string + len) == '\t') {
assert(count == 0);
prev_cp = cp;
count = 1;
prepare_for_next_iteration:
si_string += len + 1;
remaining -= len + 1;
continue;
}
/* Here, didn't find a legal hex number. Just add it from
* here to the next \n */
remaining -= len;
while (*(si_string + len) != '\n' && remaining > 0) {
remaining--;
len++;
}
if (*(si_string + len) == '\n') {
len++;
remaining--;
}
if (matches_string) {
sv_catpvn(matches_string, si_string, len - 1);
}
else {
matches_string = newSVpvn(si_string, len - 1);
}
si_string += len;
sv_catpvs(matches_string, " ");
} /* end of loop through the text */
assert(matches_string);
if (SvCUR(matches_string)) { /* Get rid of trailing blank */
SvCUR_set(matches_string, SvCUR(matches_string) - 1);
}
} /* end of has an 'si' */
}
/* Add the stuff that's already known */
if (invlist) {
/* Again, if the caller doesn't want the output inversion list, put
* everything in 'matches-string' */
if (! output_invlist) {
if ( ! matches_string) {
matches_string = newSVpvs("\n");
}
sv_catsv(matches_string, invlist_contents(invlist,
TRUE /* traditional style */
));
}
else if (! *output_invlist) {
*output_invlist = invlist_clone(invlist, NULL);
}
else {
_invlist_union(*output_invlist, invlist, output_invlist);
}
}
*listsvp = matches_string;
}
return invlist;
}
| 0 | ["CWE-190", "CWE-787"] | perl5 | 897d1f7fd515b828e4b198d8b8bef76c6faf03ed | 135,326,011,323,460,500,000,000,000,000,000,000,000 | 239 |
regcomp.c: Prevent integer overflow from nested regex quantifiers.
(CVE-2020-10543) On 32bit systems the size calculations for nested regular
expression quantifiers could overflow causing heap memory corruption.
Fixes: Perl/perl5-security#125
(cherry picked from commit bfd31397db5dc1a5c5d3e0a1f753a4f89a736e71)
|
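On 32-bit builds the size computed for nested quantifiers is a product of repeat counts, and "could overflow" means that multiplication wraps before the allocation. A generic guarded-multiply sketch, not perl's regcomp code:

```c
#include <stdint.h>

static int repeat_size_ok(uint32_t inner, uint32_t outer, uint32_t *out)
{
    if (outer != 0 && inner > UINT32_MAX / outer)
        return 0;             /* product would wrap: reject the pattern */
    *out = inner * outer;
    return 1;
}
```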
my_bool end_of_query(int c)
{
return match_delimiter(c, delimiter, delimiter_length);
}
| 0 | ["CWE-284", "CWE-295"] | mysql-server | 3bd5589e1a5a93f9c224badf983cd65c45215390 | 165,144,431,486,334,960,000,000,000,000,000,000,000 | 4 |
WL#6791 : Redefine client --ssl option to imply enforced encryption
# Changed the meaning of the --ssl=1 option of all client binaries
to mean force ssl, not try ssl and fail over to unencrypted
# Added a new MYSQL_OPT_SSL_ENFORCE mysql_options()
option to specify that an ssl connection is required.
# Added a new macro SSL_SET_OPTIONS() to the client
SSL handling headers that sets all the relevant SSL options at
once.
# Revamped all of the current native clients to use the new macro
# Removed some Windows line endings.
# Added proper handling of the new option into the ssl helper
headers.
# If SSL is mandatory assume that the media is secure enough
for the sha256 plugin to do unencrypted password exchange even
before establishing a connection.
# Set the default ssl cipher to DHE-RSA-AES256-SHA if none is
specified.
# updated test cases that require a non-default cipher to spawn
a mysql command line tool binary since mysqltest has no support
for specifying ciphers.
# updated the replication slave connection code to always enforce
SSL if any of the SSL config options is present.
# test cases added and updated.
# added a mysql_get_option() API to return mysql_options()
values. Used the new API inside the sha256 plugin.
# Fixed compilation warnings because of unused variables.
# Fixed test failures (mysql_ssl and bug13115401)
# Fixed whitespace issues.
# Fully implemented the mysql_get_option() function.
# Added a test case for mysql_get_option()
# fixed some trailing whitespace issues
# fixed some uint/int warnings in mysql_client_test.c
# removed shared memory option from non-windows get_options
tests
# moved MYSQL_OPT_LOCAL_INFILE to the uint options
|
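From the client side, the worklog's new knob would be exercised through mysql_options(). The option name and the my_bool type are taken from the message and the C API of that era; treat this as an assumed usage sketch, not code verified against the release:

```c
#include <mysql.h>

static int connect_with_enforced_ssl(MYSQL *mysql)
{
    my_bool enforce = 1;
    /* MYSQL_OPT_SSL_ENFORCE: fail instead of falling back to cleartext */
    return mysql_options(mysql, MYSQL_OPT_SSL_ENFORCE, &enforce);
}
```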
int Field_long::store(double nr)
{
ASSERT_COLUMN_MARKED_FOR_WRITE_OR_COMPUTED;
int error= 0;
int32 res;
nr=rint(nr);
if (unsigned_flag)
{
if (nr < 0)
{
res=0;
error= 1;
}
else if (nr > (double) UINT_MAX32)
{
res= UINT_MAX32;
set_warning(ER_WARN_DATA_OUT_OF_RANGE, 1);
error= 1;
}
else
res=(int32) (ulong) nr;
}
else
{
if (nr < (double) INT_MIN32)
{
res=(int32) INT_MIN32;
error= 1;
}
else if (nr > (double) INT_MAX32)
{
res=(int32) INT_MAX32;
error= 1;
}
else
res=(int32) (longlong) nr;
}
if (error)
set_warning(ER_WARN_DATA_OUT_OF_RANGE, 1);
int4store(ptr,res);
return error;
}
| 0 | ["CWE-120"] | server | eca207c46293bc72dd8d0d5622153fab4d3fccf1 | 20,216,638,095,720,540,000,000,000,000,000,000,000 | 43 |
MDEV-25317 Assertion `scale <= precision' failed in decimal_bin_size And Assertion `scale >= 0 && precision > 0 && scale <= precision' failed in decimal_bin_size_inline/decimal_bin_size.
Precision should be kept below DECIMAL_MAX_SCALE for computations.
It can be bigger in Item_decimal. I'd fix this too but it changes the
existing behaviour, so it is problematic to fix.
|
static struct cluster_list *cluster_intern(struct cluster_list *cluster)
{
struct cluster_list *find;
find = hash_get(cluster_hash, cluster, cluster_hash_alloc);
find->refcnt++;
return find;
}
| 0 | ["CWE-20", "CWE-436"] | frr | 943d595a018e69b550db08cccba1d0778a86705a | 231,019,157,008,195,300,000,000,000,000,000,000,000 | 9 |
bgpd: don't use BGP_ATTR_VNC(255) unless ENABLE_BGP_VNC_ATTR is defined
Signed-off-by: Lou Berger <[email protected]>
|
req_ack(
sockaddr_u *srcadr,
endpt *inter,
struct req_pkt *inpkt,
int errcode
)
{
/*
* fill in the fields
*/
rpkt.rm_vn_mode = RM_VN_MODE(RESP_BIT, 0, reqver);
rpkt.auth_seq = AUTH_SEQ(0, 0);
rpkt.implementation = inpkt->implementation;
rpkt.request = inpkt->request;
rpkt.err_nitems = ERR_NITEMS(errcode, 0);
rpkt.mbz_itemsize = MBZ_ITEMSIZE(0);
/*
* send packet and bump counters
*/
sendpkt(srcadr, inter, -1, (struct pkt *)&rpkt, RESP_HEADER_SIZE);
errorcounter[errcode]++;
}
| 0 | ["CWE-190"] | ntp | c04c3d3d940dfe1a53132925c4f51aef017d2e0f | 52,302,477,835,166,190,000,000,000,000,000,000,000 | 23 |
[TALOS-CAN-0052] crash by loop counter underrun.
|
int blocking_notifier_chain_register(struct blocking_notifier_head *nh,
struct notifier_block *n)
{
int ret;
/*
* This code gets used during boot-up, when task switching is
* not yet working and interrupts must remain disabled. At
* such times we must not call down_write().
*/
if (unlikely(system_state == SYSTEM_BOOTING))
return notifier_chain_register(&nh->head, n);
down_write(&nh->rwsem);
ret = notifier_chain_register(&nh->head, n);
up_write(&nh->rwsem);
return ret;
}
| 0 |
[
"CWE-20"
] |
linux-2.6
|
9926e4c74300c4b31dee007298c6475d33369df0
| 172,638,901,418,899,630,000,000,000,000,000,000,000 | 18 |
CPU time limit patch / setrlimit(RLIMIT_CPU, 0) cheat fix
As discovered here today, the change in Kernel 2.6.17 intended to inhibit
users from setting RLIMIT_CPU to 0 (as that is equivalent to unlimited) by
"cheating" and setting it to 1 in such a case, does not make a difference,
as the check is done in the wrong place (too late), and only applies to the
profiling code.
On all systems I checked running kernels above 2.6.17, no matter what the
hard and soft CPU time limits were before, a user could escape them by
issuing in the shell (sh/bash/zsh) "ulimit -t 0", and then the user's
process was not ever killed.
Attached is a trivial patch to fix that. Simply moving the check to a
slightly earlier location (specifically, before the line that actually
assigns the limit - *old_rlim = new_rlim), does the trick.
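A sketch of that reordering, assuming the usual sys_setrlimit() shape
(names abbreviated, surrounding validation elided):

    /* perform the RLIMIT_CPU == 0 cheat *before* the assignment,
     * so the stored limit can never be the bogus zero value */
    if (resource == RLIMIT_CPU && new_rlim.rlim_cur == 0)
            new_rlim.rlim_cur = 1;   /* zero means "unset" internally */

    task_lock(current->group_leader);
    *old_rlim = new_rlim;
    task_unlock(current->group_leader);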
Do note that at least the zsh (but not ash, dash, or bash) shell has the
problem of "caching" the limits set by the ulimit command, so when running
zsh the fix will not immediately be evident - after entering "ulimit -t 0",
"ulimit -a" will show "-t: cpu time (seconds) 0", even though the actual
limit as returned by getrlimit(...) will be 1. It can be verified by
opening a subshell (which will not have the values of the parent shell in
cache) and checking in it, or just by running a CPU intensive command like
"echo '65536^1048576' | bc" and verifying that it dumps core after one
second.
Regardless of whether that is a misfeature in the shell, perhaps it would
be better to return -EINVAL from setrlimit in such a case instead of
cheating and setting to 1, as that does not really reflect the actual state
of the process anymore. I do not however know what the ground for that
decision was in the original 2.6.17 change, and whether there would be any
"backward" compatibility issues, so I preferred not to touch that right
now.
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
static struct results_store *new_store(struct private_data *priv)
{
struct results_store *store;
int i;
int best = 0;
time_t oldest = TIME_T_MAX;
for (i = 0; i < priv->n_stores; i++) {
if (priv->store[i] == NULL) {
best = i;
break;
} else if (priv->store[i]->timestamp < oldest){
best = i;
oldest = priv->store[i]->timestamp;
}
}
store = talloc_zero(priv, struct results_store);
if (store == NULL) {
return NULL;
}
if (priv->store[best] != NULL) {
TALLOC_FREE(priv->store[best]);
}
priv->store[best] = store;
store->timestamp = time(NULL);
return store;
}
| 0 |
[
"CWE-416"
] |
samba
|
32c333def9ad5a1c67abee320cf5f3c4f2cb1e5c
| 138,195,712,842,854,400,000,000,000,000,000,000,000 | 27 |
CVE-2020-10760 dsdb: Ensure a proper talloc tree for saved controls
Otherwise a paged search on the GC port will fail as the ->data was
not kept around for the second page of searches.
An example command to produce this is
bin/ldbsearch --paged -H ldap://$SERVER:3268 -U$USERNAME%$PASSWORD
This shows up later in the partition module as:
ERROR: AddressSanitizer: heap-use-after-free on address 0x60b00151ef20 at pc 0x7fec3f801aac bp 0x7ffe8472c270 sp 0x7ffe8472c260
READ of size 4 at 0x60b00151ef20 thread T0 (ldap(0))
#0 0x7fec3f801aab in talloc_chunk_from_ptr ../../lib/talloc/talloc.c:526
#1 0x7fec3f801aab in __talloc_get_name ../../lib/talloc/talloc.c:1559
#2 0x7fec3f801aab in talloc_check_name ../../lib/talloc/talloc.c:1582
#3 0x7fec1b86b2e1 in partition_search ../../source4/dsdb/samdb/ldb_modules/partition.c:780
or
smb_panic_default: PANIC (pid 13287): Bad talloc magic value - unknown value
(from source4/dsdb/samdb/ldb_modules/partition.c:780)
BUG: https://bugzilla.samba.org/show_bug.cgi?id=14402
Signed-off-by: Andrew Bartlett <[email protected]>
|
Value ExpressionRound::evaluate(const Document& root, Variables* variables) const {
return evaluateRoundOrTrunc(
root, _children, getOpName(), Decimal128::kRoundTiesToEven, &std::round, variables);
}
| 0 |
[] |
mongo
|
1772b9a0393b55e6a280a35e8f0a1f75c014f301
| 57,503,612,460,496,220,000,000,000,000,000,000,000 | 4 |
SERVER-49404 Enforce additional checks in $arrayToObject
|
static int
acl_fetch_meth(struct proxy *px, struct session *l4, void *l7, unsigned int opt,
const struct arg *args, struct sample *smp)
{
int meth;
struct http_txn *txn = l7;
CHECK_HTTP_MESSAGE_FIRST_PERM();
meth = txn->meth;
smp->type = SMP_T_UINT;
smp->data.uint = meth;
if (meth == HTTP_METH_OTHER) {
if (txn->rsp.msg_state != HTTP_MSG_RPBEFORE)
/* ensure the indexes are not affected */
return 0;
smp->type = SMP_T_CSTR;
smp->data.str.len = txn->req.sl.rq.m_l;
smp->data.str.str = txn->req.chn->buf->p;
}
smp->flags = SMP_F_VOL_1ST;
return 1;
}
| 0 |
[] |
haproxy
|
aae75e3279c6c9bd136413a72dafdcd4986bb89a
| 178,275,310,572,889,900,000,000,000,000,000,000,000 | 22 |
BUG/CRITICAL: using HTTP information in tcp-request content may crash the process
During normal HTTP request processing, request buffers are realigned if
there are less than global.maxrewrite bytes available after them, in
order to leave enough room for rewriting headers after the request. This
is done in http_wait_for_request().
However, if some HTTP inspection happens during a "tcp-request content"
rule, this realignment is not performed. In theory this is not a problem
because empty buffers are always aligned and TCP inspection happens at
the beginning of a connection. But with HTTP keep-alive, it also happens
at the beginning of each subsequent request. So if a second request was
pipelined by the client before the first one had a chance to be forwarded,
the second request will not be realigned. Then, http_wait_for_request()
will not perform such a realignment either because the request was
already parsed and marked as such. The consequence of this, is that the
rewrite of a sufficient number of such pipelined, unaligned requests may
leave less room past the request been processed than the configured
reserve, which can lead to a buffer overflow if request processing appends
some data past the end of the buffer.
A number of conditions are required for the bug to be triggered :
- HTTP keep-alive must be enabled ;
- HTTP inspection in TCP rules must be used ;
- some request appending rules are needed (reqadd, x-forwarded-for)
- since empty buffers are always realigned, the client must pipeline
enough requests so that the buffer always contains something till
the point where there is no more room for rewriting.
While such a configuration is quite unlikely to be met (which is
confirmed by the bug's lifetime), a few people do use these features
together for very specific usages. And more importantly, writing such
a configuration and the request to attack it is trivial.
A quick workaround consists in forcing keep-alive off by adding
"option httpclose" or "option forceclose" in the frontend. Alternatively,
disabling HTTP-based TCP inspection rules is enough if the application
supports it.
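For illustration, the workaround looks roughly like this in a frontend
(configuration sketch; section and backend names are placeholders):

    frontend www
        bind :80
        option httpclose        # or: option forceclose
        default_backend servers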
At first glance, this bug does not look like it could lead to remote code
execution, as the overflowing part is controlled by the configuration and
not by the user. But some deeper analysis should be performed to confirm
this. And anyway, corrupting the process' memory and crashing it is quite
trivial.
Special thanks go to Yves Lafon from the W3C who reported this bug and
deployed significant efforts to collect the relevant data needed to
understand it in less than one week.
CVE-2013-1912 was assigned to this issue.
Note that 1.4 is also affected so the fix must be backported.
|
const char *
CacheManager::ActionProtection(const Mgr::ActionProfile::Pointer &profile)
{
assert(profile != NULL);
const char *pwd = PasswdGet(Config.passwd_list, profile->name);
if (!pwd)
return profile->isPwReq ? "hidden" : "public";
if (!strcmp(pwd, "disable"))
return "disabled";
if (strcmp(pwd, "none") == 0)
return "public";
return "protected";
}
| 0 |
[
"CWE-401"
] |
squid
|
26e65059bc06ebce508737b5cd0866478691566a
| 97,835,556,158,097,450,000,000,000,000,000,000,000 | 16 |
Bug 5106: Broken cache manager URL parsing (#788)
Use already parsed request-target URL in cache manager and
update CacheManager to Tokenizer-based URL parsing
Removing use of sscanf() and regex string processing which have
proven to be problematic on many levels. Most particularly with
regards to tolerance of normally harmless garbage syntax in URLs
received.
Support for generic URI schemes is added possibly resolving some
issues reported with ftp:// URL and manager access via ftp_port
sockets.
Truly generic support for /squid-internal-mgr/ path prefix is
added, fixing some user confusion about its use on cache_object:
scheme URLs.
TODO: support for single-name parameters and URL #fragments
are left to future updates. As is refactoring the QueryParams
data storage to avoid SBuf data copying.
|
static Bool
SendMousePosition(XtermWidget xw, XEvent *event)
{
XButtonEvent *my_event = (XButtonEvent *) event;
Bool result = False;
switch (okSendMousePos(xw)) {
case MOUSE_OFF:
/* If send_mouse_pos mode isn't on, we shouldn't be here */
break;
case BTN_EVENT_MOUSE:
case ANY_EVENT_MOUSE:
if (!OverrideEvent(event)) {
/* xterm extension for motion reporting. June 1998 */
/* EditorButton() will distinguish between the modes */
switch (event->type) {
case MotionNotify:
my_event->button = 0;
/* FALLTHRU */
case ButtonPress:
/* FALLTHRU */
case ButtonRelease:
EditorButton(xw, my_event);
result = True;
break;
}
}
break;
case X10_MOUSE: /* X10 compatibility sequences */
if (IsBtnEvent(event)) {
if (!OverrideButton(my_event)) {
if (my_event->type == ButtonPress)
EditorButton(xw, my_event);
result = True;
}
}
break;
case VT200_HIGHLIGHT_MOUSE: /* DEC vt200 hilite tracking */
if (IsBtnEvent(event)) {
if (!OverrideButton(my_event)) {
if (my_event->type == ButtonPress &&
my_event->button == Button1) {
TrackDown(xw, my_event);
} else {
EditorButton(xw, my_event);
}
result = True;
}
}
break;
case VT200_MOUSE: /* DEC vt200 compatible */
if (IsBtnEvent(event)) {
if (!OverrideButton(my_event)) {
EditorButton(xw, my_event);
result = True;
}
}
break;
case DEC_LOCATOR:
#if OPT_DEC_LOCATOR
if (IsBtnEvent(event) || event->type == MotionNotify) {
result = SendLocatorPosition(xw, my_event);
}
#endif /* OPT_DEC_LOCATOR */
break;
}
return result;
}
| 0 |
[
"CWE-399"
] |
xterm-snapshots
|
82ba55b8f994ab30ff561a347b82ea340ba7075c
| 275,911,435,035,056,160,000,000,000,000,000,000,000 | 72 |
snapshot of project "xterm", label xterm-365d
|
int
can_bs(
int what) /* BS_INDENT, BS_EOL or BS_START */
{
switch (*p_bs)
{
case '2': return TRUE;
case '1': return (what != BS_START);
case '0': return FALSE;
}
return vim_strchr(p_bs, what) != NULL;
}
| 0 |
[
"CWE-20"
] |
vim
|
d0b5138ba4bccff8a744c99836041ef6322ed39a
| 268,173,206,866,864,500,000,000,000,000,000,000,000 | 11 |
patch 8.0.0056
Problem: When setting 'filetype' there is no check for a valid name.
Solution: Only allow valid characters in 'filetype', 'syntax' and 'keymap'.
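In essence the fix is a character whitelist; a sketch (helper name and
allowed set as described above, details assumed):

    /* accept only alphanumerics plus '.', '-' and '_' */
        static int
    valid_filetype(char_u *val)
    {
        char_u *s;

        for (s = val; *s != NUL; ++s)
            if (!ASCII_ISALNUM(*s) && vim_strchr((char_u *)".-_", *s) == NULL)
                return FALSE;
        return TRUE;
    }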
|
static int
includeFile(FileInfo *nested, CharsString *includedFile,
CharacterClass **characterClasses,
TranslationTableCharacterAttributes *characterClassAttribute,
short opcodeLengths[], TranslationTableOffset *newRuleOffset,
TranslationTableRule **newRule, RuleName **ruleNames,
TranslationTableHeader **table) {
int k;
char includeThis[MAXSTRING];
char **tableFiles;
int rv;
for (k = 0; k < includedFile->length; k++)
includeThis[k] = (char)includedFile->chars[k];
includeThis[k] = 0;
tableFiles = _lou_resolveTable(includeThis, nested->fileName);
if (tableFiles == NULL) {
errorCount++;
return 0;
}
if (tableFiles[1] != NULL) {
errorCount++;
free_tablefiles(tableFiles);
_lou_logMessage(LOG_ERROR,
"Table list not supported in include statement: 'include %s'",
includeThis);
return 0;
}
rv = compileFile(*tableFiles, characterClasses, characterClassAttribute,
opcodeLengths, newRuleOffset, newRule, ruleNames, table);
free_tablefiles(tableFiles);
return rv;
}
| 1 |
[
"CWE-787"
] |
liblouis
|
fb2bfce4ed49ac4656a8f7e5b5526e4838da1dde
| 83,674,893,421,353,010,000,000,000,000,000,000,000 | 31 |
Fix yet another buffer overflow in the braille table parser
Reported by Henri Salo
Fixes #592
|
static int
parse_string_opt(char *s) {
struct string_opt_map *m;
int lth;
for (m = &string_opt_map[0]; m->tag; m++) {
lth = strlen(m->tag);
if (!strncmp(s, m->tag, lth)) {
*(m->valptr) = xstrdup(s + lth);
return 1;
}
}
return 0;
}
| 0 |
[
"CWE-200"
] |
util-linux
|
0377ef91270d06592a0d4dd009c29e7b1ff9c9b8
| 120,670,638,267,576,120,000,000,000,000,000,000,000 | 13 |
mount: (deprecated) drop --guess-fstype
The option is undocumented and unnecessary.
Signed-off-by: Karel Zak <[email protected]>
|
static void show_entry(struct diff_options *opt, const char *prefix, struct tree_desc *desc,
const char *base, int baselen)
{
unsigned mode;
const char *path;
const unsigned char *sha1 = tree_entry_extract(desc, &path, &mode);
if (DIFF_OPT_TST(opt, RECURSIVE) && S_ISDIR(mode)) {
enum object_type type;
int pathlen = tree_entry_len(path, sha1);
char *newbase = malloc_base(base, baselen, path, pathlen);
struct tree_desc inner;
void *tree;
unsigned long size;
tree = read_sha1_file(sha1, &type, &size);
if (!tree || type != OBJ_TREE)
die("corrupt tree sha %s", sha1_to_hex(sha1));
init_tree_desc(&inner, tree, size);
show_tree(opt, prefix, &inner, newbase, baselen + 1 + pathlen);
free(tree);
free(newbase);
} else {
opt->add_remove(opt, prefix[0], mode, sha1, base, path);
}
}
| 1 |
[
"CWE-119"
] |
git
|
fd55a19eb1d49ae54008d932a65f79cd6fda45c9
| 138,571,458,001,826,580,000,000,000,000,000,000,000 | 28 |
Fix buffer overflow in git diff
If PATH_MAX on your system is smaller than the length of a stored path, it may cause
buffer overflow and stack corruption in diff_addremove() and diff_change()
functions when running git-diff
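The remedy is to size the scratch buffer from the actual component
lengths instead of a fixed PATH_MAX array; a sketch of such a helper,
mirroring the malloc_base() call visible in the function above:

    static char *malloc_base(const char *base, int baselen,
                             const char *path, int pathlen)
    {
        /* heap buffer sized exactly for base + path + '/' + NUL */
        char *newbase = xmalloc(baselen + pathlen + 2);
        memcpy(newbase, base, baselen);
        memcpy(newbase + baselen, path, pathlen);
        memcpy(newbase + baselen + pathlen, "/", 2);
        return newbase;
    }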
Signed-off-by: Dmitry Potapov <[email protected]>
Signed-off-by: Junio C Hamano <[email protected]>
|
int message_snippet_generate(struct istream *input,
unsigned int max_snippet_chars,
string_t *snippet)
{
struct message_parser_ctx *parser;
struct message_part *parts;
struct message_decoder_context *decoder;
struct message_block raw_block, block;
struct snippet_context ctx;
pool_t pool;
int ret;
i_zero(&ctx);
pool = pool_alloconly_create("message snippet", 2048);
ctx.snippet.snippet = str_new(pool, max_snippet_chars);
ctx.snippet.chars_left = max_snippet_chars;
ctx.quoted_snippet.snippet = str_new(pool, max_snippet_chars);
ctx.quoted_snippet.chars_left = max_snippet_chars;
parser = message_parser_init(pool_datastack_create(), input, 0, 0);
decoder = message_decoder_init(NULL, 0);
while ((ret = message_parser_parse_next_block(parser, &raw_block)) > 0) {
if (!message_decoder_decode_next_block(decoder, &raw_block, &block))
continue;
if (block.size == 0) {
const char *ct;
if (block.hdr != NULL)
continue;
/* end of headers - verify that we can use this
Content-Type. we get here only once, because we
always handle only one non-multipart MIME part. */
ct = message_decoder_current_content_type(decoder);
if (ct == NULL)
/* text/plain */ ;
else if (mail_html2text_content_type_match(ct)) {
ctx.html2text = mail_html2text_init(0);
ctx.plain_output = buffer_create_dynamic(pool, 1024);
} else if (strncasecmp(ct, "text/", 5) != 0)
break;
continue;
}
if (!snippet_generate(&ctx, block.data, block.size))
break;
}
i_assert(ret != 0);
message_decoder_deinit(&decoder);
message_parser_deinit(&parser, &parts);
mail_html2text_deinit(&ctx.html2text);
if (ctx.snippet.snippet->used != 0)
snippet_copy(str_c(ctx.snippet.snippet), snippet);
else if (ctx.quoted_snippet.snippet->used != 0) {
str_append_c(snippet, '>');
snippet_copy(str_c(ctx.quoted_snippet.snippet), snippet);
}
pool_unref(&pool);
return input->stream_errno == 0 ? 0 : -1;
}
| 0 |
[
"CWE-20"
] |
core
|
3a55f35c208b5fd3d52c0a6272bd5b8717a2ae54
| 247,214,368,354,940,200,000,000,000,000,000,000,000 | 58 |
lib-mail: message_snippet_generate() - Fix potential crash when input ends with '>'
This happens only when the mail was large enough and full enough with
whitespace that message-parser returned multiple blocks before the snippet
was finished.
Broken by 74063ed8219d055489d5233b0c02a59886d2078c
|
static bool nft_setelem_valid_key_end(const struct nft_set *set,
struct nlattr **nla, u32 flags)
{
if ((set->flags & (NFT_SET_CONCAT | NFT_SET_INTERVAL)) ==
(NFT_SET_CONCAT | NFT_SET_INTERVAL)) {
if (flags & NFT_SET_ELEM_INTERVAL_END)
return false;
if (!nla[NFTA_SET_ELEM_KEY_END] &&
!(flags & NFT_SET_ELEM_CATCHALL))
return false;
} else {
if (nla[NFTA_SET_ELEM_KEY_END])
return false;
}
return true;
}
| 0 |
[
"CWE-400",
"CWE-703"
] |
linux
|
e02f0d3970404bfea385b6edb86f2d936db0ea2b
| 286,414,010,545,743,950,000,000,000,000,000,000,000 | 17 |
netfilter: nf_tables: disallow binding to already bound chain
Update nft_data_init() to report EINVAL if chain is already bound.
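A sketch of the rejection this describes (helper name assumed from
related nf_tables code):

    /* while validating a verdict's destination chain:
     * refuse to bind to a chain that is already bound */
    if (nft_chain_is_bound(chain))
            return -EINVAL;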
Fixes: d0e2c7de92c7 ("netfilter: nf_tables: add NFT_CHAIN_BINDING")
Reported-by: Gwangun Jung <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
|
BSONObj operand() {
return BSON("" << Date_t::fromMillisSinceEpoch(12345));
}
| 0 |
[
"CWE-835"
] |
mongo
|
0a076417d1d7fba3632b73349a1fd29a83e68816
| 207,825,218,081,104,020,000,000,000,000,000,000,000 | 3 |
SERVER-38070 fix infinite loop in agg expression
|
long FS_filelength(fileHandle_t f)
{
FILE *h;
h = FS_FileForHandle(f);
if(h == NULL)
return -1;
else
return FS_fplength(h);
}
| 0 |
[
"CWE-269"
] |
ioq3
|
376267d534476a875d8b9228149c4ee18b74a4fd
| 135,408,260,378,013,350,000,000,000,000,000,000,000 | 11 |
Don't load .pk3s as .dlls, and don't load user config files from .pk3s.
|
void nl80211_send_assoc_timeout(struct cfg80211_registered_device *rdev,
struct net_device *netdev, const u8 *addr,
gfp_t gfp)
{
nl80211_send_mlme_timeout(rdev, netdev, NL80211_CMD_ASSOCIATE,
addr, gfp);
}
| 0 |
[
"CWE-362",
"CWE-119"
] |
linux
|
208c72f4fe44fe09577e7975ba0e7fa0278f3d03
| 98,334,702,879,137,700,000,000,000,000,000,000,000 | 7 |
nl80211: fix check for valid SSID size in scan operations
In both trigger_scan and sched_scan operations, we were checking for
the SSID length before assigning the value correctly. Since the
memory was just kzalloc'ed, the check was always failing and SSID with
over 32 characters were allowed to go through.
This was causing a buffer overflow when copying the actual SSID to the
proper place.
This bug has been there since 2.6.29-rc4.
Cc: [email protected]
Signed-off-by: Luciano Coelho <[email protected]>
Signed-off-by: John W. Linville <[email protected]>
|
void
ConnStateData::handleSslBumpHandshakeError(const Security::IoResult &handshakeResult)
{
auto errCategory = ERR_NONE;
switch (handshakeResult.category) {
case Security::IoResult::ioSuccess: {
static const auto d = MakeNamedErrorDetail("TLS_ACCEPT_UNEXPECTED_SUCCESS");
updateError(errCategory = ERR_GATEWAY_FAILURE, d);
break;
}
case Security::IoResult::ioWantRead: {
static const auto d = MakeNamedErrorDetail("TLS_ACCEPT_UNEXPECTED_READ");
updateError(errCategory = ERR_GATEWAY_FAILURE, d);
break;
}
case Security::IoResult::ioWantWrite: {
static const auto d = MakeNamedErrorDetail("TLS_ACCEPT_UNEXPECTED_WRITE");
updateError(errCategory = ERR_GATEWAY_FAILURE, d);
break;
}
case Security::IoResult::ioError:
debugs(83, (handshakeResult.important ? DBG_IMPORTANT : 2), "ERROR: " << handshakeResult.errorDescription <<
" while SslBump-accepting a TLS connection on " << clientConnection << ": " << handshakeResult.errorDetail);
updateError(errCategory = ERR_SECURE_ACCEPT_FAIL, handshakeResult.errorDetail);
break;
}
if (!tunnelOnError(HttpRequestMethod(), errCategory))
clientConnection->close();
}
| 0 |
[
"CWE-116"
] |
squid
|
7024fb734a59409889e53df2257b3fc817809fb4
| 206,080,116,008,058,960,000,000,000,000,000,000,000 | 34 |
Handle more Range requests (#790)
Also removed some effectively unused code.
|
static int req_warn(lua_State *L)
{
return req_log_at(L, APLOG_WARNING);
}
| 0 |
[
"CWE-20"
] |
httpd
|
78eb3b9235515652ed141353d98c239237030410
| 333,737,261,343,693,060,000,000,000,000,000,000,000 | 4 |
*) SECURITY: CVE-2015-0228 (cve.mitre.org)
mod_lua: A maliciously crafted websockets PING after a script
calls r:wsupgrade() can cause a child process crash.
[Edward Lu <Chaosed0 gmail.com>]
Discovered by Guido Vranken <guidovranken gmail.com>
Submitted by: Edward Lu
Committed by: covener
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1657261 13f79535-47bb-0310-9956-ffa450edef68
|
static int stbi__tga_info(stbi__context *s, int *x, int *y, int *comp)
{
int tga_w, tga_h, tga_comp, tga_image_type, tga_bits_per_pixel, tga_colormap_bpp;
int sz, tga_colormap_type;
stbi__get8(s); // discard Offset
tga_colormap_type = stbi__get8(s); // colormap type
if( tga_colormap_type > 1 ) {
stbi__rewind(s);
return 0; // only RGB or indexed allowed
}
tga_image_type = stbi__get8(s); // image type
if ( tga_colormap_type == 1 ) { // colormapped (paletted) image
if (tga_image_type != 1 && tga_image_type != 9) {
stbi__rewind(s);
return 0;
}
stbi__skip(s,4); // skip index of first colormap entry and number of entries
sz = stbi__get8(s); // check bits per palette color entry
if ( (sz != 8) && (sz != 15) && (sz != 16) && (sz != 24) && (sz != 32) ) {
stbi__rewind(s);
return 0;
}
stbi__skip(s,4); // skip image x and y origin
tga_colormap_bpp = sz;
} else { // "normal" image w/o colormap - only RGB or grey allowed, +/- RLE
if ( (tga_image_type != 2) && (tga_image_type != 3) && (tga_image_type != 10) && (tga_image_type != 11) ) {
stbi__rewind(s);
return 0; // only RGB or grey allowed, +/- RLE
}
stbi__skip(s,9); // skip colormap specification and image x/y origin
tga_colormap_bpp = 0;
}
tga_w = stbi__get16le(s);
if( tga_w < 1 ) {
stbi__rewind(s);
return 0; // test width
}
tga_h = stbi__get16le(s);
if( tga_h < 1 ) {
stbi__rewind(s);
return 0; // test height
}
tga_bits_per_pixel = stbi__get8(s); // bits per pixel
stbi__get8(s); // ignore alpha bits
if (tga_colormap_bpp != 0) {
if((tga_bits_per_pixel != 8) && (tga_bits_per_pixel != 16)) {
// when using a colormap, tga_bits_per_pixel is the size of the indexes
// I don't think anything but 8 or 16bit indexes makes sense
stbi__rewind(s);
return 0;
}
tga_comp = stbi__tga_get_comp(tga_colormap_bpp, 0, NULL);
} else {
tga_comp = stbi__tga_get_comp(tga_bits_per_pixel, (tga_image_type == 3) || (tga_image_type == 11), NULL);
}
if(!tga_comp) {
stbi__rewind(s);
return 0;
}
if (x) *x = tga_w;
if (y) *y = tga_h;
if (comp) *comp = tga_comp;
return 1; // seems to have passed everything
}
| 0 |
[
"CWE-787"
] |
stb
|
5ba0baaa269b3fd681828e0e3b3ac0f1472eaf40
| 127,784,536,050,068,180,000,000,000,000,000,000,000 | 64 |
stb_image: Reject fractional JPEG component subsampling ratios
The component resamplers are not written to support this and I've
never seen it happen in a real (non-crafted) JPEG file so I'm
fine rejecting this as outright corrupt.
Fixes issue #1178.
|
bool Type_std_attributes::agg_item_collations(DTCollation &c, const char *fname,
Item **av, uint count,
uint flags, int item_sep)
{
uint i;
Item **arg;
bool unknown_cs= 0;
c.set(av[0]->collation);
for (i= 1, arg= &av[item_sep]; i < count; i++, arg+= item_sep)
{
if (c.aggregate((*arg)->collation, flags))
{
if (c.derivation == DERIVATION_NONE &&
c.collation == &my_charset_bin)
{
unknown_cs= 1;
continue;
}
my_coll_agg_error(av, count, fname, item_sep);
return TRUE;
}
}
if (unknown_cs &&
c.derivation != DERIVATION_EXPLICIT)
{
my_coll_agg_error(av, count, fname, item_sep);
return TRUE;
}
if ((flags & MY_COLL_DISALLOW_NONE) &&
c.derivation == DERIVATION_NONE)
{
my_coll_agg_error(av, count, fname, item_sep);
return TRUE;
}
/* If all arguments where numbers, reset to @@collation_connection */
if (flags & MY_COLL_ALLOW_NUMERIC_CONV &&
c.derivation == DERIVATION_NUMERIC)
c.set(Item::default_charset(), DERIVATION_COERCIBLE, MY_REPERTOIRE_NUMERIC);
return FALSE;
}
| 0 |
[
"CWE-416"
] |
server
|
c02ebf3510850ba78a106be9974c94c3b97d8585
| 134,714,923,572,538,780,000,000,000,000,000,000,000 | 45 |
MDEV-24176 Preparations
1. moved fix_vcol_exprs() call to open_table()
mysql_alter_table() doesn't do lock_tables(), so it cannot benefit from
fix_vcol_exprs() being called there. Tests affected: main.default_session
2. Vanilla cleanups and comments.
|
GF_Err nump_box_read(GF_Box *s, GF_BitStream *bs)
{
GF_NUMPBox *ptr = (GF_NUMPBox *)s;
ISOM_DECREASE_SIZE(ptr, 8);
ptr->nbPackets = gf_bs_read_u64(bs);
return GF_OK;
}
| 0 |
[
"CWE-787"
] |
gpac
|
388ecce75d05e11fc8496aa4857b91245007d26e
| 294,757,654,268,122,540,000,000,000,000,000,000,000 | 7 |
fixed #1587
|
static int
utf16be_mbc_case_fold(OnigCaseFoldType flag,
const UChar** pp, const UChar* end, UChar* fold)
{
const UChar* p = *pp;
if (ONIGENC_IS_ASCII_CODE(*(p+1)) && *p == 0) {
p++;
#ifdef USE_UNICODE_CASE_FOLD_TURKISH_AZERI
if ((flag & ONIGENC_CASE_FOLD_TURKISH_AZERI) != 0) {
if (*p == 0x49) {
*fold++ = 0x01;
*fold = 0x31;
(*pp) += 2;
return 2;
}
}
#endif
*fold++ = 0;
*fold = ONIGENC_ASCII_CODE_TO_LOWER_CASE(*p);
*pp += 2;
return 2;
}
else
return onigenc_unicode_mbc_case_fold(ONIG_ENCODING_UTF16_BE, flag,
pp, end, fold);
}
| 0 |
[
"CWE-125"
] |
php-src
|
9d6c59eeea88a3e9d7039cb4fed5126ef704593a
| 14,619,335,932,294,853,000,000,000,000,000,000,000 | 27 |
Fix bug #77418 - Heap overflow in utf32be_mbc_to_code
|
int processing_finish(png_structp png_ptr, png_infop info_ptr) {
unsigned char footer[12] = {0, 0, 0, 0, 73, 69, 78, 68, 174, 66, 96, 130};
if (!png_ptr || !info_ptr) return 1;
if (setjmp(png_jmpbuf(png_ptr))) {
png_destroy_read_struct(&png_ptr, &info_ptr, 0);
return 1;
}
png_process_data(png_ptr, info_ptr, footer, 12);
png_destroy_read_struct(&png_ptr, &info_ptr, 0);
return 0;
}
| 0 |
[
"CWE-369"
] |
libjxl
|
7dfa400ded53919d986c5d3d23446a09e0cf481b
| 299,930,837,348,692,720,000,000,000,000,000,000,000 | 15 |
Fix handling of APNG with 0 delay_den (#313)
|
int ip6_err_gen_icmpv6_unreach(struct sk_buff *skb, int nhs, int type,
unsigned int data_len)
{
struct in6_addr temp_saddr;
struct rt6_info *rt;
struct sk_buff *skb2;
u32 info = 0;
if (!pskb_may_pull(skb, nhs + sizeof(struct ipv6hdr) + 8))
return 1;
/* RFC 4884 (partial) support for ICMP extensions */
if (data_len < 128 || (data_len & 7) || skb->len < data_len)
data_len = 0;
skb2 = data_len ? skb_copy(skb, GFP_ATOMIC) : skb_clone(skb, GFP_ATOMIC);
if (!skb2)
return 1;
skb_dst_drop(skb2);
skb_pull(skb2, nhs);
skb_reset_network_header(skb2);
rt = rt6_lookup(dev_net(skb->dev), &ipv6_hdr(skb2)->saddr, NULL, 0, 0);
if (rt && rt->dst.dev)
skb2->dev = rt->dst.dev;
ipv6_addr_set_v4mapped(ip_hdr(skb)->saddr, &temp_saddr);
if (data_len) {
/* RFC 4884 (partial) support :
* insert 0 padding at the end, before the extensions
*/
__skb_push(skb2, nhs);
skb_reset_network_header(skb2);
memmove(skb2->data, skb2->data + nhs, data_len - nhs);
memset(skb2->data + data_len - nhs, 0, nhs);
/* RFC 4884 4.5 : Length is measured in 64-bit words,
* and stored in reserved[0]
*/
info = (data_len/8) << 24;
}
if (type == ICMP_TIME_EXCEEDED)
icmp6_send(skb2, ICMPV6_TIME_EXCEED, ICMPV6_EXC_HOPLIMIT,
info, &temp_saddr);
else
icmp6_send(skb2, ICMPV6_DEST_UNREACH, ICMPV6_ADDR_UNREACH,
info, &temp_saddr);
if (rt)
ip6_rt_put(rt);
kfree_skb(skb2);
return 0;
}
| 0 |
[
"CWE-20",
"CWE-200"
] |
linux
|
79dc7e3f1cd323be4c81aa1a94faa1b3ed987fb2
| 279,264,975,514,688,000,000,000,000,000,000,000,000 | 57 |
net: handle no dst on skb in icmp6_send
Andrey reported the following while fuzzing the kernel with syzkaller:
kasan: CONFIG_KASAN_INLINE enabled
kasan: GPF could be caused by NULL-ptr deref or user memory access
general protection fault: 0000 [#1] SMP KASAN
Modules linked in:
CPU: 0 PID: 3859 Comm: a.out Not tainted 4.9.0-rc6+ #429
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
task: ffff8800666d4200 task.stack: ffff880067348000
RIP: 0010:[<ffffffff833617ec>] [<ffffffff833617ec>]
icmp6_send+0x5fc/0x1e30 net/ipv6/icmp.c:451
RSP: 0018:ffff88006734f2c0 EFLAGS: 00010206
RAX: ffff8800666d4200 RBX: 0000000000000000 RCX: 0000000000000000
RDX: 0000000000000000 RSI: dffffc0000000000 RDI: 0000000000000018
RBP: ffff88006734f630 R08: ffff880064138418 R09: 0000000000000003
R10: dffffc0000000000 R11: 0000000000000005 R12: 0000000000000000
R13: ffffffff84e7e200 R14: ffff880064138484 R15: ffff8800641383c0
FS: 00007fb3887a07c0(0000) GS:ffff88006cc00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020000000 CR3: 000000006b040000 CR4: 00000000000006f0
Stack:
ffff8800666d4200 ffff8800666d49f8 ffff8800666d4200 ffffffff84c02460
ffff8800666d4a1a 1ffff1000ccdaa2f ffff88006734f498 0000000000000046
ffff88006734f440 ffffffff832f4269 ffff880064ba7456 0000000000000000
Call Trace:
[<ffffffff83364ddc>] icmpv6_param_prob+0x2c/0x40 net/ipv6/icmp.c:557
[< inline >] ip6_tlvopt_unknown net/ipv6/exthdrs.c:88
[<ffffffff83394405>] ip6_parse_tlv+0x555/0x670 net/ipv6/exthdrs.c:157
[<ffffffff8339a759>] ipv6_parse_hopopts+0x199/0x460 net/ipv6/exthdrs.c:663
[<ffffffff832ee773>] ipv6_rcv+0xfa3/0x1dc0 net/ipv6/ip6_input.c:191
...
icmp6_send / icmpv6_send is invoked for both rx and tx paths. In both
cases the dst->dev should be preferred for determining the L3 domain
if the dst has been set on the skb. Fallback to the skb->dev if it has
not. This covers the case reported here where icmp6_send is invoked on
Rx before the route lookup.
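The device selection being described amounts to the following sketch
(using the usual skb/dst accessors):

    struct dst_entry *dst = skb_dst(skb);
    struct net *net;

    if (dst && dst->dev)
            net = dev_net(dst->dev);   /* Tx: dst already attached */
    else if (skb->dev)
            net = dev_net(skb->dev);   /* Rx before route lookup */
    else
            goto out;                  /* no usable L3 domain */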
Fixes: 5d41ce29e ("net: icmp6_send should use dst dev to determine L3 domain")
Reported-by: Andrey Konovalov <[email protected]>
Signed-off-by: David Ahern <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
static int bsg_open(struct inode *inode, struct file *file)
{
struct bsg_device *bd;
lock_kernel();
bd = bsg_get_device(inode, file);
unlock_kernel();
if (IS_ERR(bd))
return PTR_ERR(bd);
file->private_data = bd;
return 0;
}
| 0 |
[
"CWE-399"
] |
linux-2.6
|
f2f1fa78a155524b849edf359e42a3001ea652c0
| 167,179,195,585,021,760,000,000,000,000,000,000,000 | 14 |
Enforce a minimum SG_IO timeout
There's no point in having too short SG_IO timeouts, since if the
command does end up timing out, we'll end up through the reset sequence
that is several seconds long in order to abort the command that timed
out.
As a result, shorter timeouts than a few seconds simply do not make
sense, as the recovery would be longer than the timeout itself.
Add a BLK_MIN_SG_TIMEOUT to match the existing BLK_DEFAULT_SG_TIMEOUT.
Suggested-by: Alan Cox <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Acked-by: Jens Axboe <[email protected]>
Cc: Jeff Garzik <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
int make_pages_present(unsigned long addr, unsigned long end)
{
int ret, len, write;
struct vm_area_struct * vma;
vma = find_vma(current->mm, addr);
if (!vma)
return -1;
write = (vma->vm_flags & VM_WRITE) != 0;
BUG_ON(addr >= end);
BUG_ON(end > vma->vm_end);
len = DIV_ROUND_UP(end, PAGE_SIZE) - addr/PAGE_SIZE;
ret = get_user_pages(current, current->mm, addr,
len, write, 0, NULL, NULL);
if (ret < 0)
return ret;
return ret == len ? 0 : -1;
}
| 0 |
[
"CWE-20"
] |
linux-2.6
|
89f5b7da2a6bad2e84670422ab8192382a5aeb9f
| 146,169,745,831,008,060,000,000,000,000,000,000,000 | 18 |
Reinstate ZERO_PAGE optimization in 'get_user_pages()' and fix XIP
KAMEZAWA Hiroyuki and Oleg Nesterov point out that since the commit
557ed1fa2620dc119adb86b34c614e152a629a80 ("remove ZERO_PAGE") removed
the ZERO_PAGE from the VM mappings, any users of get_user_pages() will
generally now populate the VM with real empty pages needlessly.
We used to get the ZERO_PAGE when we did the "handle_mm_fault()", but
since fault handling no longer uses ZERO_PAGE for new anonymous pages,
we now need to handle that special case in follow_page() instead.
In particular, the removal of ZERO_PAGE effectively removed the core
file writing optimization where we would skip writing pages that had not
been populated at all, and increased memory pressure a lot by allocating
all those useless newly zeroed pages.
This reinstates the optimization by making the unmapped PTE case the
same as for a non-existent page table, which already did this correctly.
While at it, this also fixes the XIP case for follow_page(), where the
caller could not differentiate between the case of a page that simply
could not be used (because it had no "struct page" associated with it)
and a page that just wasn't mapped.
We do that by simply returning an error pointer for pages that could not
be turned into a "struct page *". The error is arbitrarily picked to be
EFAULT, since that was what get_user_pages() already used for the
equivalent IO-mapped page case.
[ Also removed an impossible test for pte_offset_map_lock() failing:
that's not how that function works ]
Acked-by: Oleg Nesterov <[email protected]>
Acked-by: Nick Piggin <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Roland McGrath <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
static icalproperty *
find_attendee_if_sentby (icalcomponent *ical_comp,
const gchar *address)
{
icalproperty *prop;
if (address == NULL)
return NULL;
for (prop = icalcomponent_get_first_property (ical_comp, ICAL_ATTENDEE_PROPERTY);
prop != NULL;
prop = icalcomponent_get_next_property (ical_comp, ICAL_ATTENDEE_PROPERTY)) {
icalparameter *param;
const gchar *attendee_sentby;
gchar *text;
param = icalproperty_get_first_parameter (prop, ICAL_SENTBY_PARAMETER);
if (!param)
continue;
attendee_sentby = icalparameter_get_sentby (param);
if (!attendee_sentby)
continue;
text = g_strdup (itip_strip_mailto (attendee_sentby));
text = g_strstrip (text);
if (text && !g_ascii_strcasecmp (address, text)) {
g_free (text);
break;
}
g_free (text);
}
return prop;
}
| 0 |
[
"CWE-295"
] |
evolution-ews
|
915226eca9454b8b3e5adb6f2fff9698451778de
| 166,280,397,661,165,500,000,000,000,000,000,000,000 | 35 |
I#27 - SSL Certificates are not validated
This depends on https://gitlab.gnome.org/GNOME/evolution-data-server/commit/6672b8236139bd6ef41ecb915f4c72e2a052dba5 too.
Closes https://gitlab.gnome.org/GNOME/evolution-ews/issues/27
|
char_u *
skip_vimgrep_pat(char_u *p, char_u **s, int *flags)
{
int c;
if (vim_isIDc(*p))
{
/* ":vimgrep pattern fname" */
if (s != NULL)
*s = p;
p = skiptowhite(p);
if (s != NULL && *p != NUL)
*p++ = NUL;
}
else
{
/* ":vimgrep /pattern/[g][j] fname" */
if (s != NULL)
*s = p + 1;
c = *p;
p = skip_regexp(p + 1, c, TRUE, NULL);
if (*p != c)
return NULL;
/* Truncate the pattern. */
if (s != NULL)
*p = NUL;
++p;
/* Find the flags */
while (*p == 'g' || *p == 'j')
{
if (flags != NULL)
{
if (*p == 'g')
*flags |= VGR_GLOBAL;
else
*flags |= VGR_NOJUMP;
}
++p;
}
}
return p;
}
| 0 |
[
"CWE-78"
] |
vim
|
8c62a08faf89663e5633dc5036cd8695c80f1075
| 131,879,620,521,665,560,000,000,000,000,000,000,000 | 43 |
patch 8.1.0881: can execute shell commands in rvim through interfaces
Problem: Can execute shell commands in rvim through interfaces.
Solution: Disable using interfaces in restricted mode. Allow for writing
file with writefile(), histadd() and a few others.
|
static void
place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
{
u64 vruntime;
if (first_fair(cfs_rq)) {
vruntime = min_vruntime(cfs_rq->min_vruntime,
__pick_next_entity(cfs_rq)->vruntime);
} else
vruntime = cfs_rq->min_vruntime;
/*
* The 'current' period is already promised to the current tasks,
* however the extra weight of the new task will slow them down a
* little, place the new task so that it fits in the slot that
* stays open at the end.
*/
if (initial && sched_feat(START_DEBIT))
vruntime += sched_vslice_add(cfs_rq, se);
if (!initial) {
/* sleeps upto a single latency don't count. */
if (sched_feat(NEW_FAIR_SLEEPERS)) {
if (sched_feat(NORMALIZED_SLEEPER))
vruntime -= calc_delta_weight(sysctl_sched_latency, se);
else
vruntime -= sysctl_sched_latency;
}
/* ensure we never gain time by being placed backwards. */
vruntime = max_vruntime(se->vruntime, vruntime);
}
se->vruntime = vruntime;
}
| 0 |
[] |
linux-2.6
|
8f1bc385cfbab474db6c27b5af1e439614f3025c
| 275,895,987,838,523,030,000,000,000,000,000,000,000 | 34 |
sched: fair: weight calculations
In order to level the hierarchy, we need to calculate load based on the
root view. That is, each task's load is in the same unit.
      A
     / \
    B   1
   / \
  2   3
To compute 1's load we do:
  weight(1) / rq_weight(A)
To compute 2's load we do:
  (weight(2) / rq_weight(B)) * (weight(B) / rq_weight(A))
This yields load fractions in comparable units.
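As a toy check of the formula (all weights 1 in the diagram above, so
rq_weight(A) = rq_weight(B) = 2), a sketch:

    #include <stdio.h>

    /* load of a task two levels deep: (w_task/rq_w(B)) * (w_B/rq_w(A)) */
    static double leaf_load(double w_task, double rq_b, double w_b, double rq_a)
    {
            return (w_task / rq_b) * (w_b / rq_a);
    }

    int main(void)
    {
            double l1 = 1.0 / 2.0;             /* task 1, directly under A */
            double l2 = leaf_load(1, 2, 1, 2); /* task 2, under B */
            double l3 = leaf_load(1, 2, 1, 2); /* task 3, under B */
            /* prints 0.5 0.25 0.25 sum=1 - all in comparable units */
            printf("%g %g %g sum=%g\n", l1, l2, l3, l1 + l2 + l3);
            return 0;
    }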
The consequence is that it changes virtual time. We used to have:
  vtime_{i} = time_{i} / weight_{i}
vtime = \Sum vtime_{i} = time / rq_weight.
But with the new way of load calculation we get that vtime equals time.
Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
static void bnx2x_set_reset_done(struct bnx2x *bp)
{
u32 val;
u32 bit = BP_PATH(bp) ?
BNX2X_PATH1_RST_IN_PROG_BIT : BNX2X_PATH0_RST_IN_PROG_BIT;
bnx2x_acquire_hw_lock(bp, HW_LOCK_RESOURCE_RECOVERY_REG);
val = REG_RD(bp, BNX2X_RECOVERY_GLOB_REG);
/* Clear the bit */
val &= ~bit;
REG_WR(bp, BNX2X_RECOVERY_GLOB_REG, val);
bnx2x_release_hw_lock(bp, HW_LOCK_RESOURCE_RECOVERY_REG);
}
| 0 |
[
"CWE-20"
] |
linux
|
8914a595110a6eca69a5e275b323f5d09e18f4f9
| 172,037,660,723,988,570,000,000,000,000,000,000,000 | 14 |
bnx2x: disable GSO where gso_size is too big for hardware
If a bnx2x card is passed a GSO packet with a gso_size larger than
~9700 bytes, it will cause a firmware error that will bring the card
down:
bnx2x: [bnx2x_attn_int_deasserted3:4323(enP24p1s0f0)]MC assert!
bnx2x: [bnx2x_mc_assert:720(enP24p1s0f0)]XSTORM_ASSERT_LIST_INDEX 0x2
bnx2x: [bnx2x_mc_assert:736(enP24p1s0f0)]XSTORM_ASSERT_INDEX 0x0 = 0x00000000 0x25e43e47 0x00463e01 0x00010052
bnx2x: [bnx2x_mc_assert:750(enP24p1s0f0)]Chip Revision: everest3, FW Version: 7_13_1
... (dump of values continues) ...
Detect when the mac length of a GSO packet is greater than the maximum
packet size (9700 bytes) and disable GSO.
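A sketch of that detection in the shape of an ndo_features_check-style
hook (the 9700-byte limit is from the message; the
skb_gso_validate_mac_len() helper is an assumption):

    static netdev_features_t bnx2x_features_check(struct sk_buff *skb,
                                                  struct net_device *dev,
                                                  netdev_features_t features)
    {
            /* fall back to software segmentation when the resulting
             * mac length would exceed what the firmware tolerates */
            if (unlikely(skb_is_gso(skb) &&
                         !skb_gso_validate_mac_len(skb, 9700)))
                    features &= ~NETIF_F_GSO_MASK;

            return features;
    }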
Signed-off-by: Daniel Axtens <[email protected]>
Reviewed-by: Eric Dumazet <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
static bool check_rename_table(THD *thd, TABLE_LIST *first_table,
TABLE_LIST *all_tables)
{
DBUG_ASSERT(first_table == all_tables && first_table != 0);
TABLE_LIST *table;
for (table= first_table; table; table= table->next_local->next_local)
{
if (check_access(thd, ALTER_ACL | DROP_ACL, table->db.str,
&table->grant.privilege,
&table->grant.m_internal,
0, 0) ||
check_access(thd, INSERT_ACL | CREATE_ACL, table->next_local->db.str,
&table->next_local->grant.privilege,
&table->next_local->grant.m_internal,
0, 0))
return 1;
/* check if these are referring to temporary tables */
table->table= find_temporary_table_for_rename(thd, first_table, table);
table->next_local->table= table->table;
TABLE_LIST old_list, new_list;
/*
we do not need initialize old_list and new_list because we will
copy table[0] and table->next[0] there
*/
old_list= table[0];
new_list= table->next_local[0];
if (check_grant(thd, ALTER_ACL | DROP_ACL, &old_list, FALSE, 1, FALSE) ||
(!test_all_bits(table->next_local->grant.privilege,
INSERT_ACL | CREATE_ACL) &&
check_grant(thd, INSERT_ACL | CREATE_ACL, &new_list, FALSE, 1,
FALSE)))
return 1;
}
return 0;
}
| 0 |
[
"CWE-703"
] |
server
|
39feab3cd31b5414aa9b428eaba915c251ac34a2
| 46,963,943,642,839,050,000,000,000,000,000,000,000 | 39 |
MDEV-26412 Server crash in Item_field::fix_outer_field for INSERT SELECT
If an INSERT/REPLACE SELECT statement contained an ON expression in the top
level select and this expression used a subquery with a column reference
that could not be resolved then an attempt to resolve this reference as
an outer reference caused a crash of the server. This happened because the
outer context field in the Name_resolution_context structure was not set
to NULL for such references. Rather it pointed to the first element in
the select_stack.
Note that starting from 10.4 we cannot use the SELECT_LEX::outer_select()
method when parsing a SELECT construct.
Approved by Oleksandr Byelkin <[email protected]>
|
void ext4_error_file(struct file *file, const char *function,
unsigned int line, ext4_fsblk_t block,
const char *fmt, ...)
{
va_list args;
struct va_format vaf;
struct ext4_super_block *es;
struct inode *inode = file->f_dentry->d_inode;
char pathname[80], *path;
es = EXT4_SB(inode->i_sb)->s_es;
es->s_last_error_ino = cpu_to_le32(inode->i_ino);
save_error_info(inode->i_sb, function, line);
path = d_path(&(file->f_path), pathname, sizeof(pathname));
if (IS_ERR(path))
path = "(unknown)";
printk(KERN_CRIT
"EXT4-fs error (device %s): %s:%d: inode #%lu: ",
inode->i_sb->s_id, function, line, inode->i_ino);
if (block)
printk(KERN_CONT "block %llu: ", block);
va_start(args, fmt);
vaf.fmt = fmt;
vaf.va = &args;
printk(KERN_CONT "comm %s: path %s: %pV\n", current->comm, path, &vaf);
va_end(args);
ext4_handle_error(inode->i_sb);
}
| 0 |
[
"CWE-703",
"CWE-189"
] |
linux
|
d50f2ab6f050311dbf7b8f5501b25f0bf64a439b
| 170,482,954,352,489,350,000,000,000,000,000,000,000 | 29 |
ext4: fix undefined behavior in ext4_fill_flex_info()
Commit 503358ae01b70ce6909d19dd01287093f6b6271c ("ext4: avoid divide by
zero when trying to mount a corrupted file system") fixes CVE-2009-4307
by performing a sanity check on s_log_groups_per_flex, since it can be
set to a bogus value by an attacker.
sbi->s_log_groups_per_flex = sbi->s_es->s_log_groups_per_flex;
groups_per_flex = 1 << sbi->s_log_groups_per_flex;
if (groups_per_flex < 2) { ... }
This patch fixes two potential issues in the previous commit.
1) The sanity check might only work on architectures like PowerPC.
On x86, 5 bits are used for the shifting amount. That means, given a
large s_log_groups_per_flex value like 36, groups_per_flex = 1 << 36
is essentially 1 << 4 = 16, rather than 0. This will bypass the check,
leaving s_log_groups_per_flex and groups_per_flex inconsistent.
2) The sanity check relies on undefined behavior, i.e., oversized shift.
A standard-confirming C compiler could rewrite the check in unexpected
ways. Consider the following equivalent form, assuming groups_per_flex
is unsigned for simplicity.
groups_per_flex = 1 << sbi->s_log_groups_per_flex;
if (groups_per_flex == 0 || groups_per_flex == 1) {
We compile the code snippet using Clang 3.0 and GCC 4.6. Clang will
completely optimize away the check groups_per_flex == 0, leaving the
patched code as vulnerable as the original. GCC keeps the check, but
there is no guarantee that future versions will do the same.
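A sketch of a check that sidesteps both issues - validate the shift
amount itself, before ever shifting by it:

    /* in ext4_fill_flex_info(): reject bogus on-disk values up front;
     * shifting a 32-bit 1 by the type width or more is undefined,
     * and x86 silently masks the shift amount to 5 bits */
    sbi->s_log_groups_per_flex = sbi->s_es->s_log_groups_per_flex;
    if (sbi->s_log_groups_per_flex < 1 || sbi->s_log_groups_per_flex > 31) {
            sbi->s_log_groups_per_flex = 0;
            return 1;
    }
    groups_per_flex = 1 << sbi->s_log_groups_per_flex;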
Signed-off-by: Xi Wang <[email protected]>
Signed-off-by: "Theodore Ts'o" <[email protected]>
Cc: [email protected]
|
static jas_iccattrval_t *jas_iccattrval_create0()
{
jas_iccattrval_t *attrval;
if (!(attrval = jas_malloc(sizeof(jas_iccattrval_t))))
return 0;
memset(attrval, 0, sizeof(jas_iccattrval_t));
attrval->refcnt = 0;
attrval->ops = 0;
attrval->type = 0;
return attrval;
}
| 0 |
[
"CWE-189"
] |
jasper
|
3c55b399c36ef46befcb21e4ebc4799367f89684
| 58,132,860,740,248,820,000,000,000,000,000,000,000 | 11 |
At many places in the code, jas_malloc or jas_recalloc was being
invoked with the size argument being computed in a manner that would not
allow integer overflow to be detected. Now, these places in the code
have been modified to use special-purpose memory allocation functions
(e.g., jas_alloc2, jas_alloc3, jas_realloc2) that check for overflow.
This should fix many security problems.
|
static inline MessageType anyConvert(const ProtobufWkt::Any& message) {
return MessageUtil::anyConvert<MessageType>(message);
}
| 0 |
[] |
envoy
|
2c60632d41555ec8b3d9ef5246242be637a2db0f
| 70,405,519,370,580,805,000,000,000,000,000,000,000 | 3 |
http: header map security fixes for duplicate headers (#197)
Previously header matching did not match on all headers for
non-inline headers. This patch changes the default behavior to
always logically match on all headers. Multiple individual
headers will be logically concatenated with ',' similar to what
is done with inline headers. This makes the behavior effectively
consistent. This behavior can be temporary reverted by setting
the runtime value "envoy.reloadable_features.header_match_on_all_headers"
to "false".
Targeted fixes have been additionally performed on the following
extensions which make them consider all duplicate headers by default as
a comma concatenated list:
1) Any extension using CEL matching on headers.
2) The header to metadata filter.
3) The JWT filter.
4) The Lua filter.
Like primary header matching used in routing, RBAC, etc. this behavior
can be disabled by setting the runtime value
"envoy.reloadable_features.header_match_on_all_headers" to false.
Finally, the setCopy() header map API previously only set the first
header in the case of duplicate non-inline headers. setCopy() now
behaves similiarly to the other set*() APIs and replaces all found
headers with a single value. This may have had security implications
in the extauth filter which uses this API. This behavior can be disabled
by setting the runtime value
"envoy.reloadable_features.http_set_copy_replace_all_headers" to false.
Fixes https://github.com/envoyproxy/envoy-setec/issues/188
Signed-off-by: Matt Klein <[email protected]>
|
char *
remove_pattern (param, pattern, op)
char *param, *pattern;
int op;
{
char *xret;
if (param == NULL)
return (param);
if (*param == '\0' || pattern == NULL || *pattern == '\0') /* minor optimization */
return (savestring (param));
#if defined (HANDLE_MULTIBYTE)
if (MB_CUR_MAX > 1)
{
wchar_t *ret, *oret;
size_t n;
wchar_t *wparam, *wpattern;
mbstate_t ps;
n = xdupmbstowcs (&wpattern, NULL, pattern);
if (n == (size_t)-1)
{
xret = remove_upattern (param, pattern, op);
return ((xret == param) ? savestring (param) : xret);
}
n = xdupmbstowcs (&wparam, NULL, param);
if (n == (size_t)-1)
{
free (wpattern);
xret = remove_upattern (param, pattern, op);
return ((xret == param) ? savestring (param) : xret);
}
oret = ret = remove_wpattern (wparam, n, wpattern, op);
/* Don't bother to convert wparam back to multibyte string if nothing
matched; just return copy of original string */
if (ret == wparam)
{
free (wparam);
free (wpattern);
return (savestring (param));
}
free (wparam);
free (wpattern);
n = strlen (param);
xret = (char *)xmalloc (n + 1);
memset (&ps, '\0', sizeof (mbstate_t));
n = wcsrtombs (xret, (const wchar_t **)&ret, n, &ps);
xret[n] = '\0'; /* just to make sure */
free (oret);
return xret;
}
else
#endif
{
xret = remove_upattern (param, pattern, op);
return ((xret == param) ? savestring (param) : xret);
}
}
| 0 |
[] |
bash
|
955543877583837c85470f7fb8a97b7aa8d45e6c
| 313,016,670,000,600,360,000,000,000,000,000,000,000 | 61 |
bash-4.4-rc2 release
|
inline void PadV2(const T* input_data, const Dims<4>& input_dims,
const std::vector<int>& left_paddings,
const std::vector<int>& right_paddings, T* output_data,
const Dims<4>& output_dims, const T pad_value) {
TFLITE_DCHECK_EQ(left_paddings.size(), 4);
TFLITE_DCHECK_EQ(right_paddings.size(), 4);
tflite::PadParams op_params;
op_params.left_padding_count = 4;
op_params.right_padding_count = 4;
for (int i = 0; i < 4; ++i) {
op_params.left_padding[i] = left_paddings[3 - i];
op_params.right_padding[i] = right_paddings[3 - i];
}
const T pad_value_copy = pad_value;
Pad(op_params, DimsToShape(input_dims), input_data, &pad_value_copy,
DimsToShape(output_dims), output_data);
}
| 0 |
[
"CWE-703",
"CWE-835"
] |
tensorflow
|
dfa22b348b70bb89d6d6ec0ff53973bacb4f4695
| 216,037,004,914,596,450,000,000,000,000,000,000,000 | 18 |
Prevent a division by 0 in average ops.
PiperOrigin-RevId: 385184660
Change-Id: I7affd4554f9b336fca29ac68f633232c094d0bd3
|
virtual TYPELIB *get_typelib() const { return NULL; }
| 0 |
[
"CWE-416",
"CWE-703"
] |
server
|
08c7ab404f69d9c4ca6ca7a9cf7eec74c804f917
| 246,376,192,112,254,400,000,000,000,000,000,000,000 | 1 |
MDEV-24176 Server crashes after insert in the table with virtual
column generated using date_format() and if()
vcol_info->expr is allocated on expr_arena at parsing stage. Since
expr item is allocated on expr_arena all its containee items must be
allocated on expr_arena too. Otherwise fix_session_expr() will
encounter prematurely freed item.
When table is reopened from cache vcol_info contains stale
expression. We refresh expression via TABLE::vcol_fix_exprs() but
first we must prepare a proper context (Vcol_expr_context) which meets
some requirements:
1. As noted above expr update must be done on expr_arena as there may
be new items created. It was a bug in fix_session_expr_for_read() and
was just not reproduced because of no second refix. Now refix is done
for more cases so it does reproduce. Tests affected: vcol.binlog
2. Also name resolution context must be narrowed to the single table.
Tested by: vcol.update main.default vcol.vcol_syntax gcol.gcol_bugfixes
3. sql_mode must be clean and not fail expr update.
sql_mode such as MODE_NO_BACKSLASH_ESCAPES, MODE_NO_ZERO_IN_DATE, etc
must not affect vcol expression update. If the table was created
successfully any further evaluation must not fail. Tests affected:
main.func_like
Reviewed by: Sergei Golubchik <[email protected]>
|
static void __xfrm_state_bump_genids(struct xfrm_state *xnew)
{
struct net *net = xs_net(xnew);
unsigned short family = xnew->props.family;
u32 reqid = xnew->props.reqid;
struct xfrm_state *x;
unsigned int h;
u32 mark = xnew->mark.v & xnew->mark.m;
u32 if_id = xnew->if_id;
h = xfrm_dst_hash(net, &xnew->id.daddr, &xnew->props.saddr, reqid, family);
hlist_for_each_entry(x, net->xfrm.state_bydst+h, bydst) {
if (x->props.family == family &&
x->props.reqid == reqid &&
x->if_id == if_id &&
(mark & x->mark.m) == x->mark.v &&
xfrm_addr_equal(&x->id.daddr, &xnew->id.daddr, family) &&
xfrm_addr_equal(&x->props.saddr, &xnew->props.saddr, family))
x->genid++;
}
}
| 0 |
[
"CWE-416"
] |
linux
|
dbb2483b2a46fbaf833cfb5deb5ed9cace9c7399
| 182,290,623,533,868,170,000,000,000,000,000,000,000 | 21 |
xfrm: clean up xfrm protocol checks
In commit 6a53b7593233 ("xfrm: check id proto in validate_tmpl()")
I introduced a check for xfrm protocol, but according to Herbert
IPSEC_PROTO_ANY should only be used as a wildcard for lookup, so
it should be removed from validate_tmpl().
And, IPSEC_PROTO_ANY is expected to only match 3 IPSec-specific
protocols, this is why xfrm_state_flush() could still miss
IPPROTO_ROUTING, which leads that those entries are left in
net->xfrm.state_all before exit net. Fix this by replacing
IPSEC_PROTO_ANY with zero.
This patch also extracts the check from validate_tmpl() to
xfrm_id_proto_valid() and uses it in parse_ipsecrequest().
With this, no other protocols should be added into xfrm.
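A sketch of the extracted helper as described (matching only the
IPsec-specific protocols, the IPv6-only ones conditionally):

    static inline int xfrm_id_proto_valid(u8 proto)
    {
            switch (proto) {
            case IPPROTO_AH:
            case IPPROTO_ESP:
            case IPPROTO_COMP:
    #if IS_ENABLED(CONFIG_IPV6)
            case IPPROTO_ROUTING:
            case IPPROTO_DSTOPTS:
    #endif
                    return 1;
            default:
                    return 0;
            }
    }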
Fixes: 6a53b7593233 ("xfrm: check id proto in validate_tmpl()")
Reported-by: [email protected]
Cc: Steffen Klassert <[email protected]>
Cc: Herbert Xu <[email protected]>
Signed-off-by: Cong Wang <[email protected]>
Acked-by: Herbert Xu <[email protected]>
Signed-off-by: Steffen Klassert <[email protected]>
|
static int ssl_check_ca_name(STACK_OF(X509_NAME) *names, X509 *x)
{
X509_NAME *nm;
int i;
nm = X509_get_issuer_name(x);
for (i = 0; i < sk_X509_NAME_num(names); i++) {
if (!X509_NAME_cmp(nm, sk_X509_NAME_value(names, i)))
return 1;
}
return 0;
}
| 0 |
[] |
openssl
|
76343947ada960b6269090638f5391068daee88d
| 291,158,032,886,860,880,000,000,000,000,000,000,000 | 11 |
Fix for CVE-2015-0291
If a client renegotiates using an invalid signature algorithms extension
it will crash a server with a NULL pointer dereference.
Thanks to David Ramos of Stanford University for reporting this bug.
CVE-2015-0291
Reviewed-by: Tim Hudson <[email protected]>
Conflicts:
ssl/t1_lib.c
|
Item_hex_hybrid(THD *thd): Item_hex_constant(thd) {}
| 0 |
[
"CWE-617"
] |
server
|
2e7891080667c59ac80f788eef4d59d447595772
| 134,749,982,190,116,730,000,000,000,000,000,000,000 | 1 |
MDEV-25635 Assertion failure when pushing from HAVING into WHERE of view
This bug could manifest itself after pushing a where condition over a
mergeable derived table / view / CTE DT into a grouping view / derived
table / CTE V whose item list contained set functions with constant
arguments such as MIN(2), SUM(1) etc. In such cases the field references
used in the condition pushed into the view V that correspond set functions
are wrapped into Item_direct_view_ref wrappers. Due to a wrong implementation
of the virtual method const_item() for the class Item_direct_view_ref the
wrapped set functions with constant arguments could be erroneously taken
for constant items. This could lead to a wrong result set returned by the
main select query in 10.2. In 10.4 where a possibility of pushing condition
from HAVING into WHERE had been added this could cause a crash.
Approved by Sergey Petrunya <[email protected]>
|
static struct RBreak*
break_new(mrb_state *mrb, uint32_t tag, const struct RProc *p, mrb_value val)
{
struct RBreak *brk;
brk = MRB_OBJ_ALLOC(mrb, MRB_TT_BREAK, NULL);
mrb_break_proc_set(brk, p);
mrb_break_value_set(brk, val);
mrb_break_tag_set(brk, tag);
return brk;
}
| 0 |
[
"CWE-122",
"CWE-787"
] |
mruby
|
47068ae07a5fa3aa9a1879cdfe98a9ce0f339299
| 172,084,353,982,767,030,000,000,000,000,000,000,000 | 11 |
vm.c: packed arguments length may be zero for `send` method.
|
static int io_register_personality(struct io_ring_ctx *ctx)
{
const struct cred *creds = get_current_cred();
int id;
id = idr_alloc_cyclic(&ctx->personality_idr, (void *) creds, 1,
USHRT_MAX, GFP_KERNEL);
if (id < 0)
put_cred(creds);
return id;
}
| 0 |
[] |
linux
|
ff002b30181d30cdfbca316dadd099c3ca0d739c
| 152,337,915,474,341,900,000,000,000,000,000,000,000 | 11 |
io_uring: grab ->fs as part of async preparation
This passes it in to io-wq, so it assumes the right fs_struct when
executing async work that may need to do lookups.
Cc: [email protected] # 5.3+
Signed-off-by: Jens Axboe <[email protected]>
|
static int is_vma_resv_set(struct vm_area_struct *vma, unsigned long flag)
{
VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
return (get_vma_private_data(vma) & flag) != 0;
}
| 0 |
[
"CWE-703"
] |
linux
|
5af10dfd0afc559bb4b0f7e3e8227a1578333995
| 117,519,371,421,592,740,000,000,000,000,000,000,000 | 6 |
userfaultfd: hugetlbfs: remove superfluous page unlock in VM_SHARED case
huge_add_to_page_cache->add_to_page_cache implicitly unlocks the page
before returning in case of errors.
The error returned was -EEXIST by running UFFDIO_COPY on a non-hole
offset of a VM_SHARED hugetlbfs mapping. It was an userland bug that
triggered it and the kernel must cope with it returning -EEXIST from
ioctl(UFFDIO_COPY) as expected.
page dumped because: VM_BUG_ON_PAGE(!PageLocked(page))
kernel BUG at mm/filemap.c:964!
invalid opcode: 0000 [#1] SMP
CPU: 1 PID: 22582 Comm: qemu-system-x86 Not tainted 4.11.11-300.fc26.x86_64 #1
RIP: unlock_page+0x4a/0x50
Call Trace:
hugetlb_mcopy_atomic_pte+0xc0/0x320
mcopy_atomic+0x96f/0xbe0
userfaultfd_ioctl+0x218/0xe90
do_vfs_ioctl+0xa5/0x600
SyS_ioctl+0x79/0x90
entry_SYSCALL_64_fastpath+0x1a/0xa9
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Andrea Arcangeli <[email protected]>
Tested-by: Maxime Coquelin <[email protected]>
Reviewed-by: Mike Kravetz <[email protected]>
Cc: "Dr. David Alan Gilbert" <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Alexey Perevalov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
static NTSTATUS open_mode_check(connection_struct *conn,
struct share_mode_lock *lck,
uint32_t access_mask,
uint32_t share_access)
{
uint32_t i;
if(lck->data->num_share_modes == 0) {
return NT_STATUS_OK;
}
if (is_stat_open(access_mask)) {
/* Stat open that doesn't trigger oplock breaks or share mode
* checks... ! JRA. */
return NT_STATUS_OK;
}
/*
* Check if the share modes will give us access.
*/
#if defined(DEVELOPER)
for(i = 0; i < lck->data->num_share_modes; i++) {
validate_my_share_entries(conn->sconn, i,
&lck->data->share_modes[i]);
}
#endif
/* Now we check the share modes, after any oplock breaks. */
for(i = 0; i < lck->data->num_share_modes; i++) {
if (!is_valid_share_mode_entry(&lck->data->share_modes[i])) {
continue;
}
/* someone else has a share lock on it, check to see if we can
* too */
if (share_conflict(&lck->data->share_modes[i],
access_mask, share_access)) {
if (share_mode_stale_pid(lck->data, i)) {
continue;
}
return NT_STATUS_SHARING_VIOLATION;
}
}
return NT_STATUS_OK;
}
| 0 |
[
"CWE-835"
] |
samba
|
10c3e3923022485c720f322ca4f0aca5d7501310
| 156,139,481,947,558,440,000,000,000,000,000,000,000 | 50 |
s3: smbd: Don't loop infinitely on bad-symlink resolution.
In the FILE_OPEN_IF case we have O_CREAT, but not
O_EXCL. Previously we went into a loop trying first
~(O_CREAT|O_EXCL), and if that returned ENOENT
try (O_CREAT|O_EXCL). We kept looping indefinately
until we got an error, or the file was created or
opened.
The big problem here is dangling symlinks. Opening
without O_NOFOLLOW means both bad symlink
and missing path return -1, ENOENT from open(). As POSIX
is pathname based it's not possible to tell
the difference between these two cases in a
non-racy way, so change to try only two attempts before
giving up.
We don't have this problem for the O_NOFOLLOW
case as we just return NT_STATUS_OBJECT_PATH_NOT_FOUND
mapped from the ELOOP POSIX error and immediately
returned.
Unroll the loop logic to two tries instead.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=12572
Pair-programmed-with: Ralph Boehme <[email protected]>
Signed-off-by: Jeremy Allison <[email protected]>
Signed-off-by: Ralph Boehme <[email protected]>
Reviewed-by: Ralph Boehme <[email protected]>
|
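Note: a minimal sketch of the unrolled two-try open described above, using plain POSIX open(2) rather than Samba's internal wrappers; open_if_sketch is a hypothetical name.
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <errno.h>
/* Try a plain open first; only on ENOENT retry once with
 * O_CREAT|O_EXCL. At most two attempts, so a dangling symlink (where
 * both variants keep failing) cannot cause an infinite retry loop. */
static int open_if_sketch(const char *path, int base_flags, mode_t mode)
{
    int fd = open(path, base_flags & ~(O_CREAT | O_EXCL), mode);
    if (fd >= 0 || errno != ENOENT)
        return fd;                  /* opened, or a hard error */
    return open(path, base_flags | O_CREAT | O_EXCL, mode);
}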
static int rpmsg_ept_cb(struct rpmsg_device *rpdev, void *buf, int len,
void *priv, u32 addr)
{
struct rpmsg_eptdev *eptdev = priv;
struct sk_buff *skb;
skb = alloc_skb(len, GFP_ATOMIC);
if (!skb)
return -ENOMEM;
skb_put_data(skb, buf, len);
spin_lock(&eptdev->queue_lock);
skb_queue_tail(&eptdev->queue, skb);
spin_unlock(&eptdev->queue_lock);
/* wake up any blocking processes, waiting for new data */
wake_up_interruptible(&eptdev->readq);
return 0;
}
| 0 |
[
"CWE-400",
"CWE-401"
] |
linux
|
bbe692e349e2a1edf3fe0a29a0e05899c9c94d51
| 291,579,155,435,979,850,000,000,000,000,000,000,000 | 21 |
rpmsg: char: release allocated memory
In rpmsg_eptdev_write_iter, if copy_from_iter_full fails the allocated
buffer needs to be released.
Signed-off-by: Navid Emamdoost <[email protected]>
Signed-off-by: Bjorn Andersson <[email protected]>
|
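Note: a hedged illustration of the leak class fixed above; copy_ok() is a hypothetical stand-in for copy_from_iter_full(), which can fail on a bad user buffer.
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>
static bool copy_ok(char *dst, const char *src, size_t len)
{
    if (src == NULL)
        return false;          /* simulate a faulting user buffer */
    memcpy(dst, src, len);
    return true;
}
/* Every error path taken after the allocation must release the
 * buffer before returning, otherwise each failed write leaks. */
static int write_sketch(const char *src, size_t len)
{
    char *kbuf = malloc(len);
    if (kbuf == NULL)
        return -1;
    if (!copy_ok(kbuf, src, len)) {
        free(kbuf);            /* the fix: release on copy failure */
        return -1;
    }
    /* ... hand the data off to the endpoint ... */
    free(kbuf);
    return 0;
}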
IRBuilder &LReference::getBuilder() {
return irgen_->Builder;
}
| 0 |
[
"CWE-125",
"CWE-787"
] |
hermes
|
091835377369c8fd5917d9b87acffa721ad2a168
| 289,999,767,519,857,950,000,000,000,000,000,000,000 | 3 |
Correctly restore whether or not a function is an inner generator
Summary:
If a generator was large enough to be lazily compiled, we would lose
that information when reconstituting the function's context. This meant
the function was generated as a regular function instead of a generator.
#utd-hermes-ignore-android
Reviewed By: tmikov
Differential Revision: D23580247
fbshipit-source-id: af5628bf322cbdc7c7cdfbb5f8d0756328518ea1
|
rpc_restart_call_prepare(struct rpc_task *task)
{
if (RPC_ASSASSINATED(task))
return 0;
task->tk_action = rpc_prepare_task;
return 1;
}
| 0 |
[
"CWE-400",
"CWE-399",
"CWE-703"
] |
linux
|
0b760113a3a155269a3fba93a409c640031dd68f
| 93,896,886,790,676,040,000,000,000,000,000,000,000 | 7 |
NLM: Don't hang forever on NLM unlock requests
If the NLM daemon is killed on the NFS server, we can currently end up
hanging forever on an 'unlock' request, instead of aborting. Basically,
if the rpcbind request fails, or the server keeps returning garbage, we
really want to quit instead of retrying.
Tested-by: Vasily Averin <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]>
Cc: [email protected]
|
mspack_create_chm_decompressor(struct mspack_system *sys)
{
struct mschm_decompressor_p *self = NULL;
if (!sys) sys = mspack_default_system;
if (!mspack_valid_system(sys)) return NULL;
if ((self = (struct mschm_decompressor_p *) sys->alloc(sys, sizeof(struct mschm_decompressor_p)))) {
self->base.open = &chmd_open;
self->base.close = &chmd_close;
self->base.extract = &chmd_extract;
self->base.last_error = &chmd_error;
self->base.fast_open = &chmd_fast_open;
self->base.fast_find = &chmd_fast_find;
self->system = sys;
self->error = MSPACK_ERR_OK;
self->d = NULL;
}
return (struct mschm_decompressor *) self;
}
| 0 |
[
"CWE-20",
"CWE-193",
"CWE-682"
] |
libmspack
|
72e70a921f0f07fee748aec2274b30784e1d312a
| 294,488,132,533,805,920,000,000,000,000,000,000,000 | 20 |
Fix off-by-one bounds check on CHM PMGI/PMGL chunk numbers and
reject empty filenames. Thanks to Hanno Böck for reporting
|
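Note: a hedged sketch of the off-by-one bounds check named in the message above; chunk_valid() and its parameters are illustrative, not libmspack's API.
/* A chunk number indexes an array of num_chunks entries, so it is
 * valid only while strictly below the count. Rejecting with
 * "chunk >= num_chunks" (rather than ">") avoids accepting
 * chunk == num_chunks and reading one entry past the end. */
static int chunk_valid(unsigned int chunk, unsigned int num_chunks)
{
    return chunk < num_chunks;
}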
read_rep_section(FILE *fd, garray_T *gap, short *first)
{
int cnt;
fromto_T *ftp;
int i;
cnt = get2c(fd); /* <repcount> */
if (cnt < 0)
return SP_TRUNCERROR;
if (ga_grow(gap, cnt) == FAIL)
return SP_OTHERERROR;
/* <rep> : <repfromlen> <repfrom> <reptolen> <repto> */
for (; gap->ga_len < cnt; ++gap->ga_len)
{
ftp = &((fromto_T *)gap->ga_data)[gap->ga_len];
ftp->ft_from = read_cnt_string(fd, 1, &i);
if (i < 0)
return i;
if (i == 0)
return SP_FORMERROR;
ftp->ft_to = read_cnt_string(fd, 1, &i);
if (i <= 0)
{
vim_free(ftp->ft_from);
if (i < 0)
return i;
return SP_FORMERROR;
}
}
/* Fill the first-index table. */
for (i = 0; i < 256; ++i)
first[i] = -1;
for (i = 0; i < gap->ga_len; ++i)
{
ftp = &((fromto_T *)gap->ga_data)[i];
if (first[*ftp->ft_from] == -1)
first[*ftp->ft_from] = i;
}
return 0;
}
| 0 |
[
"CWE-190"
] |
vim
|
399c297aa93afe2c0a39e2a1b3f972aebba44c9d
| 245,913,801,845,413,100,000,000,000,000,000,000,000 | 43 |
patch 8.0.0322: possible overflow with corrupted spell file
Problem: Possible overflow with spell file where the tree length is
corrupted.
Solution: Check for an invalid length (suggested by shqking)
|
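Note: a hedged sketch of the general hardening the patch applies: a count read from an untrusted file must be range-checked before it drives allocation arithmetic (CWE-190). alloc_items() is a hypothetical helper, not Vim code.
#include <stdint.h>
#include <stdlib.h>
/* Reject negative or wrap-inducing counts before multiplying, so
 * cnt * item_size cannot overflow size_t. */
static void *alloc_items(long cnt, size_t item_size)
{
    if (cnt < 0 || item_size == 0 || (size_t)cnt > SIZE_MAX / item_size)
        return NULL;   /* treat as a truncated or corrupted file */
    return malloc((size_t)cnt * item_size);
}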
static NTSTATUS dcesrv_lsa_LSARUNREGISTERAUDITEVENT(struct dcesrv_call_state *dce_call, TALLOC_CTX *mem_ctx,
struct lsa_LSARUNREGISTERAUDITEVENT *r)
{
DCESRV_FAULT(DCERPC_FAULT_OP_RNG_ERROR);
}
| 0 |
[
"CWE-200"
] |
samba
|
0a3aa5f908e351201dc9c4d4807b09ed9eedff77
| 269,957,736,621,198,640,000,000,000,000,000,000,000 | 5 |
CVE-2022-32746 ldb: Make use of functions for appending to an ldb_message
This aims to minimise usage of the error-prone pattern of searching for
a just-added message element in order to make modifications to it (and
potentially finding the wrong element).
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15009
Signed-off-by: Joseph Sutton <[email protected]>
|
void sinterstoreCommand(client *c) {
sinterGenericCommand(c,c->argv+2,c->argc-2,c->argv[1]);
}
| 0 |
[
"CWE-190"
] |
redis
|
a30d367a71b7017581cf1ca104242a3c644dec0f
| 168,197,472,857,041,800,000,000,000,000,000,000,000 | 3 |
Fix Integer overflow issue with intsets (CVE-2021-32687)
The vulnerability involves changing the default set-max-intset-entries
configuration parameter to a very large value and constructing specially
crafted commands to manipulate sets
|
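Note: a hedged sketch of the overflow guard implied by the fix above; intset_resize_sketch() and its 16-byte header stand-in are illustrative, not Redis's actual intsetResize().
#include <stdint.h>
#include <stdlib.h>
/* Compute the payload size in 64 bits so an attacker-chosen length
 * cannot wrap, then refuse anything the allocator cannot represent. */
static void *intset_resize_sketch(void *ptr, uint32_t len, uint32_t elem_size)
{
    uint64_t payload = (uint64_t)len * elem_size;    /* no wrap in 64 bits */
    if (payload > SIZE_MAX - 16)                     /* 16: header stand-in */
        return NULL;
    return realloc(ptr, (size_t)payload + 16);
}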
static int aesni_cbc_hmac_sha1_cipher(EVP_CIPHER_CTX *ctx, unsigned char *out,
const unsigned char *in, size_t len)
{
EVP_AES_HMAC_SHA1 *key = data(ctx);
unsigned int l;
size_t plen = key->payload_length, iv = 0, /* explicit IV in TLS 1.1 and
* later */
sha_off = 0;
# if defined(STITCHED_CALL)
size_t aes_off = 0, blocks;
sha_off = SHA_CBLOCK - key->md.num;
# endif
key->payload_length = NO_PAYLOAD_LENGTH;
if (len % AES_BLOCK_SIZE)
return 0;
if (ctx->encrypt) {
if (plen == NO_PAYLOAD_LENGTH)
plen = len;
else if (len !=
((plen + SHA_DIGEST_LENGTH +
AES_BLOCK_SIZE) & -AES_BLOCK_SIZE))
return 0;
else if (key->aux.tls_ver >= TLS1_1_VERSION)
iv = AES_BLOCK_SIZE;
# if defined(STITCHED_CALL)
if (plen > (sha_off + iv)
&& (blocks = (plen - (sha_off + iv)) / SHA_CBLOCK)) {
SHA1_Update(&key->md, in + iv, sha_off);
aesni_cbc_sha1_enc(in, out, blocks, &key->ks,
ctx->iv, &key->md, in + iv + sha_off);
blocks *= SHA_CBLOCK;
aes_off += blocks;
sha_off += blocks;
key->md.Nh += blocks >> 29;
key->md.Nl += blocks <<= 3;
if (key->md.Nl < (unsigned int)blocks)
key->md.Nh++;
} else {
sha_off = 0;
}
# endif
sha_off += iv;
SHA1_Update(&key->md, in + sha_off, plen - sha_off);
if (plen != len) { /* "TLS" mode of operation */
if (in != out)
memcpy(out + aes_off, in + aes_off, plen - aes_off);
/* calculate HMAC and append it to payload */
SHA1_Final(out + plen, &key->md);
key->md = key->tail;
SHA1_Update(&key->md, out + plen, SHA_DIGEST_LENGTH);
SHA1_Final(out + plen, &key->md);
/* pad the payload|hmac */
plen += SHA_DIGEST_LENGTH;
for (l = len - plen - 1; plen < len; plen++)
out[plen] = l;
/* encrypt HMAC|padding at once */
aesni_cbc_encrypt(out + aes_off, out + aes_off, len - aes_off,
&key->ks, ctx->iv, 1);
} else {
aesni_cbc_encrypt(in + aes_off, out + aes_off, len - aes_off,
&key->ks, ctx->iv, 1);
}
} else {
union {
unsigned int u[SHA_DIGEST_LENGTH / sizeof(unsigned int)];
unsigned char c[32 + SHA_DIGEST_LENGTH];
} mac, *pmac;
/* arrange cache line alignment */
pmac = (void *)(((size_t)mac.c + 31) & ((size_t)0 - 32));
/* decrypt HMAC|padding at once */
aesni_cbc_encrypt(in, out, len, &key->ks, ctx->iv, 0);
if (plen) { /* "TLS" mode of operation */
size_t inp_len, mask, j, i;
unsigned int res, maxpad, pad, bitlen;
int ret = 1;
union {
unsigned int u[SHA_LBLOCK];
unsigned char c[SHA_CBLOCK];
} *data = (void *)key->md.data;
if ((key->aux.tls_aad[plen - 4] << 8 | key->aux.tls_aad[plen - 3])
>= TLS1_1_VERSION)
iv = AES_BLOCK_SIZE;
if (len < (iv + SHA_DIGEST_LENGTH + 1))
return 0;
/* omit explicit iv */
out += iv;
len -= iv;
/* figure out payload length */
pad = out[len - 1];
maxpad = len - (SHA_DIGEST_LENGTH + 1);
maxpad |= (255 - maxpad) >> (sizeof(maxpad) * 8 - 8);
maxpad &= 255;
ret &= constant_time_ge(maxpad, pad);
inp_len = len - (SHA_DIGEST_LENGTH + pad + 1);
mask = (0 - ((inp_len - len) >> (sizeof(inp_len) * 8 - 1)));
inp_len &= mask;
ret &= (int)mask;
key->aux.tls_aad[plen - 2] = inp_len >> 8;
key->aux.tls_aad[plen - 1] = inp_len;
/* calculate HMAC */
key->md = key->head;
SHA1_Update(&key->md, key->aux.tls_aad, plen);
# if 1
len -= SHA_DIGEST_LENGTH; /* amend mac */
if (len >= (256 + SHA_CBLOCK)) {
j = (len - (256 + SHA_CBLOCK)) & (0 - SHA_CBLOCK);
j += SHA_CBLOCK - key->md.num;
SHA1_Update(&key->md, out, j);
out += j;
len -= j;
inp_len -= j;
}
/* but pretend as if we hashed padded payload */
bitlen = key->md.Nl + (inp_len << 3); /* at most 18 bits */
# ifdef BSWAP
bitlen = BSWAP(bitlen);
# else
mac.c[0] = 0;
mac.c[1] = (unsigned char)(bitlen >> 16);
mac.c[2] = (unsigned char)(bitlen >> 8);
mac.c[3] = (unsigned char)bitlen;
bitlen = mac.u[0];
# endif
pmac->u[0] = 0;
pmac->u[1] = 0;
pmac->u[2] = 0;
pmac->u[3] = 0;
pmac->u[4] = 0;
for (res = key->md.num, j = 0; j < len; j++) {
size_t c = out[j];
mask = (j - inp_len) >> (sizeof(j) * 8 - 8);
c &= mask;
c |= 0x80 & ~mask & ~((inp_len - j) >> (sizeof(j) * 8 - 8));
data->c[res++] = (unsigned char)c;
if (res != SHA_CBLOCK)
continue;
/* j is not incremented yet */
mask = 0 - ((inp_len + 7 - j) >> (sizeof(j) * 8 - 1));
data->u[SHA_LBLOCK - 1] |= bitlen & mask;
sha1_block_data_order(&key->md, data, 1);
mask &= 0 - ((j - inp_len - 72) >> (sizeof(j) * 8 - 1));
pmac->u[0] |= key->md.h0 & mask;
pmac->u[1] |= key->md.h1 & mask;
pmac->u[2] |= key->md.h2 & mask;
pmac->u[3] |= key->md.h3 & mask;
pmac->u[4] |= key->md.h4 & mask;
res = 0;
}
for (i = res; i < SHA_CBLOCK; i++, j++)
data->c[i] = 0;
if (res > SHA_CBLOCK - 8) {
mask = 0 - ((inp_len + 8 - j) >> (sizeof(j) * 8 - 1));
data->u[SHA_LBLOCK - 1] |= bitlen & mask;
sha1_block_data_order(&key->md, data, 1);
mask &= 0 - ((j - inp_len - 73) >> (sizeof(j) * 8 - 1));
pmac->u[0] |= key->md.h0 & mask;
pmac->u[1] |= key->md.h1 & mask;
pmac->u[2] |= key->md.h2 & mask;
pmac->u[3] |= key->md.h3 & mask;
pmac->u[4] |= key->md.h4 & mask;
memset(data, 0, SHA_CBLOCK);
j += 64;
}
data->u[SHA_LBLOCK - 1] = bitlen;
sha1_block_data_order(&key->md, data, 1);
mask = 0 - ((j - inp_len - 73) >> (sizeof(j) * 8 - 1));
pmac->u[0] |= key->md.h0 & mask;
pmac->u[1] |= key->md.h1 & mask;
pmac->u[2] |= key->md.h2 & mask;
pmac->u[3] |= key->md.h3 & mask;
pmac->u[4] |= key->md.h4 & mask;
# ifdef BSWAP
pmac->u[0] = BSWAP(pmac->u[0]);
pmac->u[1] = BSWAP(pmac->u[1]);
pmac->u[2] = BSWAP(pmac->u[2]);
pmac->u[3] = BSWAP(pmac->u[3]);
pmac->u[4] = BSWAP(pmac->u[4]);
# else
for (i = 0; i < 5; i++) {
res = pmac->u[i];
pmac->c[4 * i + 0] = (unsigned char)(res >> 24);
pmac->c[4 * i + 1] = (unsigned char)(res >> 16);
pmac->c[4 * i + 2] = (unsigned char)(res >> 8);
pmac->c[4 * i + 3] = (unsigned char)res;
}
# endif
len += SHA_DIGEST_LENGTH;
# else
SHA1_Update(&key->md, out, inp_len);
res = key->md.num;
SHA1_Final(pmac->c, &key->md);
{
unsigned int inp_blocks, pad_blocks;
/* but pretend as if we hashed padded payload */
inp_blocks =
1 + ((SHA_CBLOCK - 9 - res) >> (sizeof(res) * 8 - 1));
res += (unsigned int)(len - inp_len);
pad_blocks = res / SHA_CBLOCK;
res %= SHA_CBLOCK;
pad_blocks +=
1 + ((SHA_CBLOCK - 9 - res) >> (sizeof(res) * 8 - 1));
for (; inp_blocks < pad_blocks; inp_blocks++)
sha1_block_data_order(&key->md, data, 1);
}
# endif
key->md = key->tail;
SHA1_Update(&key->md, pmac->c, SHA_DIGEST_LENGTH);
SHA1_Final(pmac->c, &key->md);
/* verify HMAC */
out += inp_len;
len -= inp_len;
# if 1
{
unsigned char *p = out + len - 1 - maxpad - SHA_DIGEST_LENGTH;
size_t off = out - p;
unsigned int c, cmask;
maxpad += SHA_DIGEST_LENGTH;
for (res = 0, i = 0, j = 0; j < maxpad; j++) {
c = p[j];
cmask =
((int)(j - off - SHA_DIGEST_LENGTH)) >> (sizeof(int) *
8 - 1);
res |= (c ^ pad) & ~cmask; /* ... and padding */
cmask &= ((int)(off - 1 - j)) >> (sizeof(int) * 8 - 1);
res |= (c ^ pmac->c[i]) & cmask;
i += 1 & cmask;
}
maxpad -= SHA_DIGEST_LENGTH;
res = 0 - ((0 - res) >> (sizeof(res) * 8 - 1));
ret &= (int)~res;
}
# else
for (res = 0, i = 0; i < SHA_DIGEST_LENGTH; i++)
res |= out[i] ^ pmac->c[i];
res = 0 - ((0 - res) >> (sizeof(res) * 8 - 1));
ret &= (int)~res;
/* verify padding */
pad = (pad & ~res) | (maxpad & res);
out = out + len - 1 - pad;
for (res = 0, i = 0; i < pad; i++)
res |= out[i] ^ pad;
res = (0 - res) >> (sizeof(res) * 8 - 1);
ret &= (int)~res;
# endif
return ret;
} else {
SHA1_Update(&key->md, out, len);
}
}
return 1;
}
| 0 |
[
"CWE-310"
] |
openssl
|
4159f311671cf3bac03815e5de44681eb758304a
| 220,060,526,159,493,280,000,000,000,000,000,000,000 | 289 |
Check that we have enough padding characters.
Reviewed-by: Emilia Käsper <[email protected]>
CVE-2016-2107
MR: #2572
|
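Note: a hedged restatement of the length check visible in the decrypt path above ("len < iv + SHA_DIGEST_LENGTH + 1"): a CBC record must be long enough for the explicit IV, the MAC, and at least one padding byte before any padding or MAC offsets are derived from it.
#include <stddef.h>
/* Returns nonzero when the record can hold IV + MAC + one pad byte;
 * anything shorter must be rejected up front. */
static int record_len_ok(size_t len, size_t iv_len, size_t mac_len)
{
    return len >= iv_len + mac_len + 1;
}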
//! Load image from a DLM file \newinstance.
static CImg<T> get_load_dlm(std::FILE *const file) {
return CImg<T>().load_dlm(file);
}
| 0 |
[
"CWE-125"
] |
CImg
|
10af1e8c1ad2a58a0a3342a856bae63e8f257abb
| 125,127,478,354,924,840,000,000,000,000,000,000,000 | 3 |
Fix other issues in 'CImg<T>::load_bmp()'.
|
uint32_t CompactProtocolWriter::serializedSizeMapEnd() const {
return 0;
}
| 0 |
[
"CWE-703",
"CWE-770"
] |
fbthrift
|
c9a903e5902834e95bbd4ab0e9fa53ba0189f351
| 178,995,156,940,505,100,000,000,000,000,000,000,000 | 3 |
Better handling of truncated data when reading strings
Summary:
Currently we read the string size and blindly pre-allocate it. This allows a malicious attacker to send a message of just a few bytes and cause the server to allocate a huge amount of memory (>1GB).
This diff changes the logic to check if we have enough data in the buffer before allocating the string.
This is a second part of a fix for CVE-2019-3553.
Reviewed By: vitaut
Differential Revision: D14393393
fbshipit-source-id: e2046d2f5b087d3abc9a9d2c6c107cf088673057
|
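Note: a hedged sketch of the mitigation described above, reduced to its core predicate; the function name is illustrative and not fbthrift's API.
#include <stddef.h>
/* A wire-supplied string length is trusted for allocation only after
 * confirming at least that many bytes remain in the input buffer. */
static int string_size_ok(size_t claimed, size_t bytes_remaining)
{
    return claimed <= bytes_remaining;
}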
WandExport void DrawSetTextDecoration(DrawingWand *wand,
const DecorationType decoration)
{
assert(wand != (DrawingWand *) NULL);
assert(wand->signature == MagickWandSignature);
if (wand->debug != MagickFalse)
(void) LogMagickEvent(WandEvent,GetMagickModule(),"%s",wand->name);
if ((wand->filter_off != MagickFalse) ||
(CurrentContext->decorate != decoration))
{
CurrentContext->decorate=decoration;
(void) MVGPrintf(wand,"decorate '%s'\n",CommandOptionToMnemonic(
MagickDecorateOptions,(ssize_t) decoration));
}
}
| 0 |
[
"CWE-476"
] |
ImageMagick
|
6ad5fc3c9b652eec27fc0b1a0817159f8547d5d9
| 329,696,473,094,628,140,000,000,000,000,000,000,000 | 15 |
https://github.com/ImageMagick/ImageMagick/issues/716
|
void Compute(OpKernelContext* ctx) override {
StagingMap<Ordered>* map = nullptr;
OP_REQUIRES_OK(ctx, GetStagingMap(ctx, def(), &map));
core::ScopedUnref scope(map);
// Allocate size output tensor
Tensor* size = nullptr;
OP_REQUIRES_OK(ctx, ctx->allocate_output(0, TensorShape({}), &size));
// Set it to the actual size
size->scalar<int32>().setConstant(map->size());
}
| 0 |
[
"CWE-20",
"CWE-476"
] |
tensorflow
|
d7de67733925de196ec8863a33445b73f9562d1d
| 136,690,776,941,916,170,000,000,000,000,000,000,000 | 12 |
Prevent a CHECK-fail due to empty tensor input in `map_stage_op.cc`
PiperOrigin-RevId: 387737906
Change-Id: Idc52df0c71c7ed6e2dd633b651a581932f277c8a
|
static gboolean gdk_pixbuf__bmp_image_stop_load(gpointer data, GError **error)
{
gboolean retval = TRUE;
struct bmp_progressive_state *context =
(struct bmp_progressive_state *) data;
/* FIXME this thing needs to report errors if
* we have unused image data
*/
g_return_val_if_fail(context != NULL, TRUE);
g_free(context->Colormap);
if (context->pixbuf)
g_object_unref(context->pixbuf);
if (context->read_state == READ_STATE_HEADERS) {
if (error && *error == NULL) {
g_set_error_literal (error,
GDK_PIXBUF_ERROR,
GDK_PIXBUF_ERROR_CORRUPT_IMAGE,
_("Premature end-of-file encountered"));
}
retval = FALSE;
}
g_free(context->buff);
g_free(context);
return retval;
}
| 0 |
[] |
gdk-pixbuf
|
779429ce34e439c01d257444fe9d6739e72a2024
| 56,953,584,008,212,280,000,000,000,000,000,000,000 | 33 |
bmp: Detect integer overflow of the line width
Instead of risking crashes or OOM, return an error if
we detect integer overflow.
The commit also includes a test image that triggers
this overflow when used with pixbuf-read.
https://bugzilla.gnome.org/show_bug.cgi?id=768738
|
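Note: a hedged sketch of a BMP line-width overflow check in the spirit of the fix above; rowstride_ok() is illustrative, assumes the usual 4-byte row padding, and is not gdk-pixbuf's exact code.
#include <stdint.h>
/* Row stride is width * bits-per-pixel rounded up to 4 bytes; done in
 * 32-bit arithmetic it can wrap for huge widths, so validate in 64 bits
 * and reject anything that will not fit an int. */
static int rowstride_ok(uint32_t width, uint32_t bpp)
{
    uint64_t bits   = (uint64_t)width * bpp;
    uint64_t stride = ((bits + 31) / 32) * 4;
    return stride <= INT32_MAX;
}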
ebt_register_table(struct net *net, const struct ebt_table *input_table)
{
struct ebt_table_info *newinfo;
struct ebt_table *t, *table;
struct ebt_replace_kernel *repl;
int ret, i, countersize;
void *p;
if (input_table == NULL || (repl = input_table->table) == NULL ||
repl->entries == NULL || repl->entries_size == 0 ||
repl->counters != NULL || input_table->private != NULL) {
BUGPRINT("Bad table data for ebt_register_table!!!\n");
return ERR_PTR(-EINVAL);
}
/* Don't add one table to multiple lists. */
table = kmemdup(input_table, sizeof(struct ebt_table), GFP_KERNEL);
if (!table) {
ret = -ENOMEM;
goto out;
}
countersize = COUNTER_OFFSET(repl->nentries) * nr_cpu_ids;
newinfo = vmalloc(sizeof(*newinfo) + countersize);
ret = -ENOMEM;
if (!newinfo)
goto free_table;
p = vmalloc(repl->entries_size);
if (!p)
goto free_newinfo;
memcpy(p, repl->entries, repl->entries_size);
newinfo->entries = p;
newinfo->entries_size = repl->entries_size;
newinfo->nentries = repl->nentries;
if (countersize)
memset(newinfo->counters, 0, countersize);
/* fill in newinfo and parse the entries */
newinfo->chainstack = NULL;
for (i = 0; i < NF_BR_NUMHOOKS; i++) {
if ((repl->valid_hooks & (1 << i)) == 0)
newinfo->hook_entry[i] = NULL;
else
newinfo->hook_entry[i] = p +
((char *)repl->hook_entry[i] - repl->entries);
}
ret = translate_table(net, repl->name, newinfo);
if (ret != 0) {
BUGPRINT("Translate_table failed\n");
goto free_chainstack;
}
if (table->check && table->check(newinfo, table->valid_hooks)) {
BUGPRINT("The table doesn't like its own initial data, lol\n");
return ERR_PTR(-EINVAL);
}
table->private = newinfo;
rwlock_init(&table->lock);
ret = mutex_lock_interruptible(&ebt_mutex);
if (ret != 0)
goto free_chainstack;
list_for_each_entry(t, &net->xt.tables[NFPROTO_BRIDGE], list) {
if (strcmp(t->name, table->name) == 0) {
ret = -EEXIST;
BUGPRINT("Table name already exists\n");
goto free_unlock;
}
}
/* Hold a reference count if the chains aren't empty */
if (newinfo->nentries && !try_module_get(table->me)) {
ret = -ENOENT;
goto free_unlock;
}
list_add(&table->list, &net->xt.tables[NFPROTO_BRIDGE]);
mutex_unlock(&ebt_mutex);
return table;
free_unlock:
mutex_unlock(&ebt_mutex);
free_chainstack:
if (newinfo->chainstack) {
for_each_possible_cpu(i)
vfree(newinfo->chainstack[i]);
vfree(newinfo->chainstack);
}
vfree(newinfo->entries);
free_newinfo:
vfree(newinfo);
free_table:
kfree(table);
out:
return ERR_PTR(ret);
}
| 0 |
[
"CWE-20"
] |
linux
|
d846f71195d57b0bbb143382647c2c6638b04c5a
| 196,919,058,296,979,280,000,000,000,000,000,000,000 | 99 |
bridge: netfilter: fix information leak
Struct tmp is copied from userspace. It is not checked whether the "name"
field is NULL terminated. This may lead to buffer overflow and passing
contents of kernel stack as a module name to try_then_request_module() and,
consequently, to modprobe commandline. It would be seen by all userspace
processes.
Signed-off-by: Vasiliy Kulikov <[email protected]>
Signed-off-by: Patrick McHardy <[email protected]>
|
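Note: the information-leak fix described above amounts to forcing NUL termination on a name copied from userspace; a minimal sketch of that pattern follows, with a hypothetical NAME_LEN.
#define NAME_LEN 32
/* A fixed-size buffer filled from userspace may arrive without a
 * terminator; writing one before any string use prevents reads (and
 * leaks) past the end of the field. */
static void sanitize_name(char name[NAME_LEN])
{
    name[NAME_LEN - 1] = '\0';
}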
action_text(
nic_rule_action action
)
{
const char *t;
switch (action) {
default:
t = "ERROR"; /* quiet uninit warning */
DPRINTF(1, ("fatal: unknown nic_rule_action %d\n",
action));
ENSURE(0);
break;
case ACTION_LISTEN:
t = "listen";
break;
case ACTION_IGNORE:
t = "ignore";
break;
case ACTION_DROP:
t = "drop";
break;
}
return t;
}
| 0 |
[
"CWE-287"
] |
ntp
|
71a962710bfe066f76da9679cf4cfdeffe34e95e
| 336,630,369,161,610,240,000,000,000,000,000,000,000 | 30 |
[Sec 2936] Skeleton Key: Any trusted key system can serve time. HStenn.
|
static void nl_fib_lookup(struct net *net, struct fib_result_nl *frn)
{
struct fib_result res;
struct flowi4 fl4 = {
.flowi4_mark = frn->fl_mark,
.daddr = frn->fl_addr,
.flowi4_tos = frn->fl_tos,
.flowi4_scope = frn->fl_scope,
};
struct fib_table *tb;
rcu_read_lock();
tb = fib_get_table(net, frn->tb_id_in);
frn->err = -ENOENT;
if (tb) {
local_bh_disable();
frn->tb_id = tb->tb_id;
frn->err = fib_table_lookup(tb, &fl4, &res, FIB_LOOKUP_NOREF);
if (!frn->err) {
frn->prefixlen = res.prefixlen;
frn->nh_sel = res.nh_sel;
frn->type = res.type;
frn->scope = res.scope;
}
local_bh_enable();
}
rcu_read_unlock();
}
| 0 |
[
"CWE-399"
] |
net-next
|
fbd40ea0180a2d328c5adc61414dc8bab9335ce2
| 172,966,665,798,671,100,000,000,000,000,000,000,000 | 34 |
ipv4: Don't do expensive useless work during inetdev destroy.
When an inetdev is destroyed, every address assigned to the interface
is removed. And in this scenario we do two pointless things which can
be very expensive if the number of assigned interfaces is large:
1) Address promotion. We are deleting all addresses, so there is no
point in doing this.
2) A full nf conntrack table purge for every address. We only need to
do this once, as is already caught by the existing
masq_dev_notifier so masq_inet_event() can skip this.
Reported-by: Solar Designer <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Tested-by: Cyrill Gorcunov <[email protected]>
|
TEST_F(StreamInfoImplTest, BytesTest) {
StreamInfoImpl stream_info(Http::Protocol::Http2, test_time_.timeSystem(), nullptr);
const uint64_t bytes_sent = 7;
const uint64_t bytes_received = 12;
stream_info.addBytesSent(bytes_sent);
stream_info.addBytesReceived(bytes_received);
EXPECT_EQ(bytes_sent, stream_info.bytesSent());
EXPECT_EQ(bytes_received, stream_info.bytesReceived());
}
| 0 |
[
"CWE-416"
] |
envoy
|
fe7c69c248f4fe5a9080c7ccb35275b5218bb5ab
| 271,730,790,973,210,670,000,000,000,000,000,000,000 | 12 |
internal redirect: fix a lifetime bug (#785)
Signed-off-by: Alyssa Wilk <[email protected]>
Signed-off-by: Matt Klein <[email protected]>
Signed-off-by: Pradeep Rao <[email protected]>
|
static int stimer_set_config(struct kvm_vcpu_hv_stimer *stimer, u64 config,
bool host)
{
union hv_stimer_config new_config = {.as_uint64 = config},
old_config = {.as_uint64 = stimer->config.as_uint64};
struct kvm_vcpu *vcpu = hv_stimer_to_vcpu(stimer);
struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
struct kvm_vcpu_hv_synic *synic = to_hv_synic(vcpu);
if (!synic->active && (!host || config))
return 1;
if (unlikely(!host && hv_vcpu->enforce_cpuid && new_config.direct_mode &&
!(hv_vcpu->cpuid_cache.features_edx &
HV_STIMER_DIRECT_MODE_AVAILABLE)))
return 1;
trace_kvm_hv_stimer_set_config(hv_stimer_to_vcpu(stimer)->vcpu_id,
stimer->index, config, host);
stimer_cleanup(stimer);
if (old_config.enable &&
!new_config.direct_mode && new_config.sintx == 0)
new_config.enable = 0;
stimer->config.as_uint64 = new_config.as_uint64;
if (stimer->config.enable)
stimer_mark_pending(stimer, false);
return 0;
}
| 0 |
[
"CWE-476"
] |
linux
|
b1e34d325397a33d97d845e312d7cf2a8b646b44
| 259,346,317,376,889,200,000,000,000,000,000,000,000 | 31 |
KVM: x86: Forbid VMM to set SYNIC/STIMER MSRs when SynIC wasn't activated
Setting non-zero values to SYNIC/STIMER MSRs activates certain features,
this should not happen when KVM_CAP_HYPERV_SYNIC{,2} was not activated.
Note, it would've been better to forbid writing anything to SYNIC/STIMER
MSRs, including zeroes; however, at least QEMU tries clearing
HV_X64_MSR_STIMER0_CONFIG without SynIC. The HV_X64_MSR_EOM MSR is somewhat
'special', as writing zero there triggers an action; this also should not
happen when SynIC wasn't activated.
Signed-off-by: Vitaly Kuznetsov <[email protected]>
Message-Id: <[email protected]>
Cc: [email protected]
Signed-off-by: Paolo Bonzini <[email protected]>
|
get_page_render_details(fz_context *ctx, fz_page *page, render_details *render)
{
float page_width, page_height;
int rot;
float s_x, s_y;
render->page = page;
render->list = NULL;
render->num_workers = num_workers;
render->bounds = fz_bound_page(ctx, page);
page_width = (render->bounds.x1 - render->bounds.x0)/72;
page_height = (render->bounds.y1 - render->bounds.y0)/72;
s_x = x_resolution / 72;
s_y = y_resolution / 72;
if (rotation == -1)
{
/* Automatic rotation. If we fit, use 0. If we don't, and 90 would be 'better' use that. */
if (page_width <= width && page_height <= height)
{
/* Page fits, so use no rotation. */
rot = 0;
}
else if (fit)
{
/* Use whichever gives the biggest scale */
float sx_0 = width / page_width;
float sy_0 = height / page_height;
float sx_90 = height / page_width;
float sy_90 = width / page_height;
float s_0, s_90;
s_0 = fz_min(sx_0, sy_0);
s_90 = fz_min(sx_90, sy_90);
if (s_0 >= s_90)
{
rot = 0;
if (s_0 < 1)
{
s_x *= s_0;
s_y *= s_0;
}
}
else
{
rot = 90;
if (s_90 < 1)
{
s_x *= s_90;
s_y *= s_90;
}
}
}
else
{
/* Use whichever crops the least area */
float lost0 = 0;
float lost90 = 0;
if (page_width > width)
lost0 += (page_width - width) * (page_height > height ? height : page_height);
if (page_height > height)
lost0 += (page_height - height) * page_width;
if (page_width > height)
lost90 += (page_width - height) * (page_height > width ? width : page_height);
if (page_height > width)
lost90 += (page_height - width) * page_width;
rot = (lost0 <= lost90 ? 0 : 90);
}
}
else
{
rot = rotation;
}
render->ctm = fz_pre_scale(fz_rotate(rot), s_x, s_y);
render->tbounds = fz_transform_rect(render->bounds, render->ctm);
render->ibounds = fz_round_rect(render->tbounds);
}
| 0 |
[
"CWE-369",
"CWE-22"
] |
mupdf
|
22c47acbd52949421f8c7cb46ea1556827d0fcbf
| 61,402,250,459,759,630,000,000,000,000,000,000,000 | 81 |
Bug 704834: Fix division by zero for zero width pages in muraster.
|
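Note: a hedged sketch of a divide-by-zero guard for the fit-to-page scales computed above; safe_scale() is illustrative, not muraster's actual fix.
/* The fit logic divides the target size by the page size; a degenerate
 * (zero-width or zero-height) page would divide by zero, so fall back
 * to a neutral scale instead. */
static float safe_scale(float target, float page_dim)
{
    if (page_dim <= 0.0f)
        return 1.0f;
    return target / page_dim;
}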
static void child_msg_onlinestatus(int msg_type, struct process_id src,
void *buf, size_t len, void *private_data)
{
TALLOC_CTX *mem_ctx;
const char *message;
struct process_id *sender;
DEBUG(5,("winbind_msg_onlinestatus received.\n"));
if (!buf) {
return;
}
sender = (struct process_id *)buf;
mem_ctx = talloc_init("winbind_msg_onlinestatus");
if (mem_ctx == NULL) {
return;
}
message = collect_onlinestatus(mem_ctx);
if (message == NULL) {
talloc_destroy(mem_ctx);
return;
}
message_send_pid(*sender, MSG_WINBIND_ONLINESTATUS,
message, strlen(message) + 1, True);
talloc_destroy(mem_ctx);
}
| 0 |
[] |
samba
|
c93d42969451949566327e7fdbf29bfcee2c8319
| 248,465,118,807,215,460,000,000,000,000,000,000,000 | 31 |
Back-port of Volkers fix.
Fix a race condition in winbind leading to a crash
When SIGCHLD handling is delayed for some reason, sending a request to a child
can fail early because the child has died already. In this case
async_main_request_sent() directly called the continuation function without
properly removing the malfunctioning child process and the requests in the
queue. The next request would then crash in the DLIST_ADD_END() in
async_request() because the request pending for the child had been
talloc_free()'ed and yet still was referenced in the list.
This one is *old*...
Volker
Jeremy.
|
static int x509_get_version( unsigned char **p,
const unsigned char *end,
int *ver )
{
int ret;
size_t len;
if( ( ret = mbedtls_asn1_get_tag( p, end, &len,
MBEDTLS_ASN1_CONTEXT_SPECIFIC | MBEDTLS_ASN1_CONSTRUCTED | 0 ) ) != 0 )
{
if( ret == MBEDTLS_ERR_ASN1_UNEXPECTED_TAG )
{
*ver = 0;
return( 0 );
}
return( ret );
}
end = *p + len;
if( ( ret = mbedtls_asn1_get_int( p, end, ver ) ) != 0 )
return( MBEDTLS_ERR_X509_INVALID_VERSION + ret );
if( *p != end )
return( MBEDTLS_ERR_X509_INVALID_VERSION +
MBEDTLS_ERR_ASN1_LENGTH_MISMATCH );
return( 0 );
}
| 0 |
[
"CWE-287",
"CWE-284"
] |
mbedtls
|
d15795acd5074e0b44e71f7ede8bdfe1b48591fc
| 62,100,417,877,547,620,000,000,000,000,000,000,000 | 30 |
Improve behaviour on fatal errors
If we didn't walk the whole chain, then there may be any kind of errors in the
part of the chain we didn't check, so setting all flags looks like the safe
thing to do.
|
free_buf_options(
buf_T *buf,
int free_p_ff)
{
if (free_p_ff)
{
clear_string_option(&buf->b_p_fenc);
clear_string_option(&buf->b_p_ff);
clear_string_option(&buf->b_p_bh);
clear_string_option(&buf->b_p_bt);
}
#ifdef FEAT_FIND_ID
clear_string_option(&buf->b_p_def);
clear_string_option(&buf->b_p_inc);
# ifdef FEAT_EVAL
clear_string_option(&buf->b_p_inex);
# endif
#endif
#if defined(FEAT_CINDENT) && defined(FEAT_EVAL)
clear_string_option(&buf->b_p_inde);
clear_string_option(&buf->b_p_indk);
#endif
#if defined(FEAT_BEVAL) && defined(FEAT_EVAL)
clear_string_option(&buf->b_p_bexpr);
#endif
#if defined(FEAT_CRYPT)
clear_string_option(&buf->b_p_cm);
#endif
clear_string_option(&buf->b_p_fp);
#if defined(FEAT_EVAL)
clear_string_option(&buf->b_p_fex);
#endif
#ifdef FEAT_CRYPT
# ifdef FEAT_SODIUM
if ((buf->b_p_key != NULL) && (*buf->b_p_key != NUL) &&
(crypt_get_method_nr(buf) == CRYPT_M_SOD))
crypt_sodium_munlock(buf->b_p_key, STRLEN(buf->b_p_key));
# endif
clear_string_option(&buf->b_p_key);
#endif
clear_string_option(&buf->b_p_kp);
clear_string_option(&buf->b_p_mps);
clear_string_option(&buf->b_p_fo);
clear_string_option(&buf->b_p_flp);
clear_string_option(&buf->b_p_isk);
#ifdef FEAT_VARTABS
clear_string_option(&buf->b_p_vsts);
vim_free(buf->b_p_vsts_nopaste);
buf->b_p_vsts_nopaste = NULL;
vim_free(buf->b_p_vsts_array);
buf->b_p_vsts_array = NULL;
clear_string_option(&buf->b_p_vts);
VIM_CLEAR(buf->b_p_vts_array);
#endif
#ifdef FEAT_KEYMAP
clear_string_option(&buf->b_p_keymap);
keymap_clear(&buf->b_kmap_ga);
ga_clear(&buf->b_kmap_ga);
#endif
clear_string_option(&buf->b_p_com);
#ifdef FEAT_FOLDING
clear_string_option(&buf->b_p_cms);
#endif
clear_string_option(&buf->b_p_nf);
#ifdef FEAT_SYN_HL
clear_string_option(&buf->b_p_syn);
clear_string_option(&buf->b_s.b_syn_isk);
#endif
#ifdef FEAT_SPELL
clear_string_option(&buf->b_s.b_p_spc);
clear_string_option(&buf->b_s.b_p_spf);
vim_regfree(buf->b_s.b_cap_prog);
buf->b_s.b_cap_prog = NULL;
clear_string_option(&buf->b_s.b_p_spl);
clear_string_option(&buf->b_s.b_p_spo);
#endif
#ifdef FEAT_SEARCHPATH
clear_string_option(&buf->b_p_sua);
#endif
clear_string_option(&buf->b_p_ft);
#ifdef FEAT_CINDENT
clear_string_option(&buf->b_p_cink);
clear_string_option(&buf->b_p_cino);
#endif
#if defined(FEAT_CINDENT) || defined(FEAT_SMARTINDENT)
clear_string_option(&buf->b_p_cinw);
#endif
clear_string_option(&buf->b_p_cpt);
#ifdef FEAT_COMPL_FUNC
clear_string_option(&buf->b_p_cfu);
free_callback(&buf->b_cfu_cb);
clear_string_option(&buf->b_p_ofu);
free_callback(&buf->b_ofu_cb);
clear_string_option(&buf->b_p_tsrfu);
free_callback(&buf->b_tsrfu_cb);
#endif
#ifdef FEAT_QUICKFIX
clear_string_option(&buf->b_p_gp);
clear_string_option(&buf->b_p_mp);
clear_string_option(&buf->b_p_efm);
#endif
clear_string_option(&buf->b_p_ep);
clear_string_option(&buf->b_p_path);
clear_string_option(&buf->b_p_tags);
clear_string_option(&buf->b_p_tc);
#ifdef FEAT_EVAL
clear_string_option(&buf->b_p_tfu);
free_callback(&buf->b_tfu_cb);
#endif
clear_string_option(&buf->b_p_dict);
clear_string_option(&buf->b_p_tsr);
#ifdef FEAT_TEXTOBJ
clear_string_option(&buf->b_p_qe);
#endif
buf->b_p_ar = -1;
buf->b_p_ul = NO_LOCAL_UNDOLEVEL;
#ifdef FEAT_LISP
clear_string_option(&buf->b_p_lw);
#endif
clear_string_option(&buf->b_p_bkc);
clear_string_option(&buf->b_p_menc);
}
| 1 |
[
"CWE-416"
] |
vim
|
9b4a80a66544f2782040b641498754bcb5b8d461
| 108,917,920,630,831,060,000,000,000,000,000,000,000 | 122 |
patch 8.2.4281: using freed memory with :lopen and :bwipe
Problem: Using freed memory with :lopen and :bwipe.
Solution: Do not use a wiped out buffer.
|
compile_gimmick_node(GimmickNode* node, regex_t* reg)
{
int r;
switch (node->type) {
case GIMMICK_FAIL:
r = add_op(reg, OP_FAIL);
break;
case GIMMICK_SAVE:
r = add_op(reg, OP_PUSH_SAVE_VAL);
if (r != 0) return r;
COP(reg)->push_save_val.type = node->detail_type;
COP(reg)->push_save_val.id = node->id;
break;
case GIMMICK_UPDATE_VAR:
r = add_op(reg, OP_UPDATE_VAR);
if (r != 0) return r;
COP(reg)->update_var.type = node->detail_type;
COP(reg)->update_var.id = node->id;
break;
#ifdef USE_CALLOUT
case GIMMICK_CALLOUT:
switch (node->detail_type) {
case ONIG_CALLOUT_OF_CONTENTS:
case ONIG_CALLOUT_OF_NAME:
{
if (node->detail_type == ONIG_CALLOUT_OF_NAME) {
r = add_op(reg, OP_CALLOUT_NAME);
if (r != 0) return r;
COP(reg)->callout_name.id = node->id;
COP(reg)->callout_name.num = node->num;
}
else {
r = add_op(reg, OP_CALLOUT_CONTENTS);
if (r != 0) return r;
COP(reg)->callout_contents.num = node->num;
}
}
break;
default:
r = ONIGERR_TYPE_BUG;
break;
}
#endif
}
return r;
}
| 0 |
[
"CWE-476",
"CWE-125"
] |
oniguruma
|
c509265c5f6ae7264f7b8a8aae1cfa5fc59d108c
| 214,666,603,635,932,440,000,000,000,000,000,000,000 | 52 |
Fix CVE-2019-13225: problem in converting if-then-else pattern to bytecode.
|
read_one_cert (ReadFromGConfInfo *info,
const char *setting_name,
const char *key)
{
char *value = NULL;
if (!nm_gconf_get_string_helper (info->client, info->dir, key, setting_name, &value))
return;
g_object_set_data_full (G_OBJECT (info->connection),
key, value,
(GDestroyNotify) g_free);
}
| 1 |
[
"CWE-310"
] |
network-manager-applet
|
4020594dfbf566f1852f0acb36ad631a9e73a82b
| 57,641,357,943,977,500,000,000,000,000,000,000,000 | 13 |
core: fix CA cert mishandling after cert file deletion (deb #560067) (rh #546793)
If a connection was created with a CA certificate, but the user later
moved or deleted that CA certificate, the applet would simply provide the
connection to NetworkManager without any CA certificate. This could cause
NM to connect to the original network (or a network spoofing the original
network) without verifying the identity of the network as the user
expects.
In the future we can/should do better here by (1) alerting the user that
some connection is now no longer complete by flagging it in the connection
editor or notifying the user somehow, and (2) by using a freaking' cert
store already (not that Linux has one yet).
|
static inline void update_blocked_averages(int cpu)
{
struct rq *rq = cpu_rq(cpu);
struct cfs_rq *cfs_rq = &rq->cfs;
const struct sched_class *curr_class;
struct rq_flags rf;
rq_lock_irqsave(rq, &rf);
update_rq_clock(rq);
update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq);
curr_class = rq->curr->sched_class;
update_rt_rq_load_avg(rq_clock_task(rq), rq, curr_class == &rt_sched_class);
update_dl_rq_load_avg(rq_clock_task(rq), rq, curr_class == &dl_sched_class);
update_irq_load_avg(rq, 0);
#ifdef CONFIG_NO_HZ_COMMON
rq->last_blocked_load_update_tick = jiffies;
if (!cfs_rq_has_blocked(cfs_rq) && !others_have_blocked(rq))
rq->has_blocked_load = 0;
#endif
rq_unlock_irqrestore(rq, &rf);
}
| 1 |
[
"CWE-400",
"CWE-703",
"CWE-835"
] |
linux
|
c40f7d74c741a907cfaeb73a7697081881c497d0
| 96,039,933,554,210,430,000,000,000,000,000,000,000 | 22 |
sched/fair: Fix infinite loop in update_blocked_averages() by reverting a9e7f6544b9c
Zhipeng Xie, Xie XiuQi and Sargun Dhillon reported lockups in the
scheduler under high loads, starting at around the v4.18 time frame,
and Zhipeng Xie tracked it down to bugs in the rq->leaf_cfs_rq_list
manipulation.
Do a (manual) revert of:
a9e7f6544b9c ("sched/fair: Fix O(nr_cgroups) in load balance path")
It turns out that the list_del_leaf_cfs_rq() introduced by this commit
is a surprising property that was not considered in followup commits
such as:
9c2791f936ef ("sched/fair: Fix hierarchical order in rq->leaf_cfs_rq_list")
As Vincent Guittot explains:
"I think that there is a bigger problem with commit a9e7f6544b9c and
cfs_rq throttling:
Let take the example of the following topology TG2 --> TG1 --> root:
1) The 1st time a task is enqueued, we will add TG2 cfs_rq then TG1
cfs_rq to leaf_cfs_rq_list and we are sure to do the whole branch in
one path because it has never been used and can't be throttled so
tmp_alone_branch will point to leaf_cfs_rq_list at the end.
2) Then TG1 is throttled
3) and we add TG3 as a new child of TG1.
4) The 1st enqueue of a task on TG3 will add TG3 cfs_rq just before TG1
cfs_rq and tmp_alone_branch will stay on rq->leaf_cfs_rq_list.
With commit a9e7f6544b9c, we can del a cfs_rq from rq->leaf_cfs_rq_list.
So if the load of TG1 cfs_rq becomes NULL before step 2) above, TG1
cfs_rq is removed from the list.
Then at step 4), TG3 cfs_rq is added at the beginning of rq->leaf_cfs_rq_list
but tmp_alone_branch still points to TG3 cfs_rq because its throttled
parent can't be enqueued when the lock is released.
tmp_alone_branch doesn't point to rq->leaf_cfs_rq_list whereas it should.
So if TG3 cfs_rq is removed or destroyed before tmp_alone_branch
points on another TG cfs_rq, the next TG cfs_rq that will be added,
will be linked outside rq->leaf_cfs_rq_list - which is bad.
In addition, we can break the ordering of the cfs_rq in
rq->leaf_cfs_rq_list but this ordering is used to update and
propagate the update from leaf down to root."
Instead of trying to work through all these cases and trying to reproduce
the very high loads that produced the lockup to begin with, simplify
the code temporarily by reverting a9e7f6544b9c - which change was clearly
not thought through completely.
This (hopefully) gives us a kernel that doesn't lock up so people
can continue to enjoy their holidays without worrying about regressions. ;-)
[ mingo: Wrote changelog, fixed weird spelling in code comment while at it. ]
Analyzed-by: Xie XiuQi <[email protected]>
Analyzed-by: Vincent Guittot <[email protected]>
Reported-by: Zhipeng Xie <[email protected]>
Reported-by: Sargun Dhillon <[email protected]>
Reported-by: Xie XiuQi <[email protected]>
Tested-by: Zhipeng Xie <[email protected]>
Tested-by: Sargun Dhillon <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Acked-by: Vincent Guittot <[email protected]>
Cc: <[email protected]> # v4.13+
Cc: Bin Li <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Fixes: a9e7f6544b9c ("sched/fair: Fix O(nr_cgroups) in load balance path")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
quicklist *quicklistNew(int fill, int compress) {
quicklist *quicklist = quicklistCreate();
quicklistSetOptions(quicklist, fill, compress);
return quicklist;
}
| 0 |
[
"CWE-190"
] |
redis
|
f6a40570fa63d5afdd596c78083d754081d80ae3
| 132,879,210,070,221,500,000,000,000,000,000,000,000 | 5 |
Fix ziplist and listpack overflows and truncations (CVE-2021-32627, CVE-2021-32628)
- fix possible heap corruption in ziplist and listpack resulting from trying to
allocate more than the maximum size of 4GB.
- prevent ziplist (hash and zset) from reaching size of above 1GB, will be
converted to HT encoding, that's not a useful size.
- prevent listpack (stream) from reaching size of above 1GB.
- XADD will start a new listpack if the new record may cause the previous
listpack to grow over 1GB.
- XADD will respond with an error if a single stream record is over 1GB
- List type (ziplist in quicklist) was truncating strings that were over 4GB,
now it'll respond with an error.
|
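Note: a hedged sketch of the size caps described above; grow_ok() and SAFETY_LIMIT are illustrative, loosely mirroring the ~1GB policy cap, not Redis's exact code.
#include <stdint.h>
#define SAFETY_LIMIT (1ull << 30)   /* ~1GB policy cap */
/* Check a prospective growth for wraparound and against the policy
 * cap; past the cap the structure should be converted or the command
 * rejected rather than corrupting the heap. */
static int grow_ok(uint64_t cur_bytes, uint64_t add_bytes)
{
    uint64_t new_bytes = cur_bytes + add_bytes;
    if (new_bytes < cur_bytes)      /* 64-bit wrap: definitely too big */
        return 0;
    return new_bytes <= SAFETY_LIMIT;
}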