Dataset schema:

- func: string, lengths 0 to 484k
- target: int64, values 0 to 1
- cwe: list, lengths 0 to 4
- project: string, 799 classes
- commit_id: string, length 40
- hash: float64, 1,215,700,430,453,689,100,000,000 to 340,281,914,521,452,260,000,000,000,000
- size: int64, 1 to 24k
- message: string, lengths 0 to 13.3k

Each record below is pipe-separated in this column order:

func | target | cwe | project | commit_id | hash | size | message
---|---|---|---|---|---|---|---|
explicit ArithmeticOptimizerContext(SetVector<NodeDef*>* nodes_to_simplify)
: nodes_to_simplify(nodes_to_simplify) {}
| 0 |
[
"CWE-476"
] |
tensorflow
|
e6340f0665d53716ef3197ada88936c2a5f7a2d3
| 181,144,736,259,669,800,000,000,000,000,000,000,000 | 2 |
Handle a special Grappler case resulting in a crash.
It might happen that a malformed input could be used to trick Grappler into trying to optimize a node with no inputs. This, in turn, would produce a null pointer dereference and a segfault.
PiperOrigin-RevId: 369242852
Change-Id: I2e5cbe7aec243d34a6d60220ac8ac9b16f136f6b
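A minimal sketch of the kind of guard such a fix implies, written as plain C rather than Grappler's actual C++ (the struct and field names here are assumptions):
/* Hedged sketch, not TensorFlow's actual code: validate before use. */
struct node { int num_inputs; struct node **inputs; };
static int optimize_node(struct node *n)
{
    if (n == NULL || n->num_inputs == 0)
        return -1;                      /* reject the malformed node instead of crashing */
    struct node *first = n->inputs[0];  /* safe: an input is known to exist */
    (void)first;
    return 0;
}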
|
static int __ftrace_function_set_filter(int filter, char *buf, int len,
struct function_filter_data *data)
{
int i, re_cnt, ret = -EINVAL;
int *reset;
char **re;
reset = filter ? &data->first_filter : &data->first_notrace;
/*
* The 'ip' field could have multiple filters set, separated
* either by space or comma. We first cut the filter and apply
* all pieces separately.
*/
re = ftrace_function_filter_re(buf, len, &re_cnt);
if (!re)
return -EINVAL;
for (i = 0; i < re_cnt; i++) {
ret = ftrace_function_set_regexp(data->ops, filter, *reset,
re[i], strlen(re[i]));
if (ret)
break;
if (*reset)
*reset = 0;
}
argv_free(re);
return ret;
}
| 0 |
[
"CWE-787"
] |
linux
|
70303420b5721c38998cf987e6b7d30cc62d4ff1
| 109,189,296,427,139,110,000,000,000,000,000,000,000 | 31 |
tracing: Check for no filter when processing event filters
The syzkaller fuzzer detected an out-of-bounds issue with the events filter code,
specifically here:
prog[N].pred = NULL; /* #13 */
prog[N].target = 1; /* TRUE */
prog[N+1].pred = NULL;
prog[N+1].target = 0; /* FALSE */
-> prog[N-1].target = N;
prog[N-1].when_to_branch = false;
As that's the first reference to an "N-1" index, it appears that the code got
here with N = 0, which means the filter parser found no filter to parse
(which shouldn't ever happen, but apparently it did).
Add a new error to the parsing code that will check to make sure that N is
not zero before going into this part of the code. If N = 0, then -EINVAL is
returned, and an error message is added to the filter.
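A minimal sketch of the added guard; parse_error() and FILT_ERR_NO_FILTER follow the commit's description, but the exact call site is simplified:
/* Bail out before the prog[N-1] back-patching can run with N == 0. */
if (N == 0) {
        parse_error(pe, FILT_ERR_NO_FILTER, 0);  /* "no filter found" */
        ret = -EINVAL;
        goto out_free;
}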
Cc: [email protected]
Fixes: 80765597bc587 ("tracing: Rewrite filter logic to be simpler and faster")
Reported-by: air icy <[email protected]>
bugzilla url: https://bugzilla.kernel.org/show_bug.cgi?id=200019
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
|
set_phy_ctrl(E1000State *s, int index, uint16_t val)
{
if ((val & MII_CR_AUTO_NEG_EN) && (val & MII_CR_RESTART_AUTO_NEG)) {
s->nic->nc.link_down = true;
e1000_link_down(s);
s->phy_reg[PHY_STATUS] &= ~MII_SR_AUTONEG_COMPLETE;
DBGOUT(PHY, "Start link auto negotiation\n");
qemu_mod_timer(s->autoneg_timer, qemu_get_clock_ms(vm_clock) + 500);
}
}
| 0 |
[
"CWE-120"
] |
qemu
|
b0d9ffcd0251161c7c92f94804dcf599dfa3edeb
| 307,804,479,494,585,300,000,000,000,000,000,000,000 | 10 |
e1000: Discard packets that are too long if !SBP and !LPE
The e1000_receive function for the e1000 needs to discard packets longer than
1522 bytes if the SBP and LPE flags are disabled. The linux driver assumes
this behavior and allocates memory based on this assumption.
Signed-off-by: Michael Contreras <[email protected]>
Signed-off-by: Anthony Liguori <[email protected]>
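A minimal sketch of the check the commit describes; the constant and flag names mirror the e1000 style but are assumptions here:
/* Drop frames longer than 1522 bytes unless the NIC was told to store
 * bad packets (SBP) or accept long packets (LPE). */
#define MAXIMUM_ETHERNET_VLAN_SIZE 1522
if (size > MAXIMUM_ETHERNET_VLAN_SIZE
    && !(s->mac_reg[RCTL] & E1000_RCTL_SBP)
    && !(s->mac_reg[RCTL] & E1000_RCTL_LPE))
    return size;   /* consume and discard the oversized frame */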
|
OperatorShellMake(const char *operatorName,
Oid operatorNamespace,
Oid leftTypeId,
Oid rightTypeId)
{
Relation pg_operator_desc;
Oid operatorObjectId;
int i;
HeapTuple tup;
Datum values[Natts_pg_operator];
bool nulls[Natts_pg_operator];
NameData oname;
TupleDesc tupDesc;
/*
* validate operator name
*/
if (!validOperatorName(operatorName))
ereport(ERROR,
(errcode(ERRCODE_INVALID_NAME),
errmsg("\"%s\" is not a valid operator name",
operatorName)));
/*
* initialize our *nulls and *values arrays
*/
for (i = 0; i < Natts_pg_operator; ++i)
{
nulls[i] = false;
values[i] = (Datum) NULL; /* redundant, but safe */
}
/*
* initialize values[] with the operator name and input data types. Note
* that oprcode is set to InvalidOid, indicating it's a shell.
*/
namestrcpy(&oname, operatorName);
values[Anum_pg_operator_oprname - 1] = NameGetDatum(&oname);
values[Anum_pg_operator_oprnamespace - 1] = ObjectIdGetDatum(operatorNamespace);
values[Anum_pg_operator_oprowner - 1] = ObjectIdGetDatum(GetUserId());
values[Anum_pg_operator_oprkind - 1] = CharGetDatum(leftTypeId ? (rightTypeId ? 'b' : 'r') : 'l');
values[Anum_pg_operator_oprcanmerge - 1] = BoolGetDatum(false);
values[Anum_pg_operator_oprcanhash - 1] = BoolGetDatum(false);
values[Anum_pg_operator_oprleft - 1] = ObjectIdGetDatum(leftTypeId);
values[Anum_pg_operator_oprright - 1] = ObjectIdGetDatum(rightTypeId);
values[Anum_pg_operator_oprresult - 1] = ObjectIdGetDatum(InvalidOid);
values[Anum_pg_operator_oprcom - 1] = ObjectIdGetDatum(InvalidOid);
values[Anum_pg_operator_oprnegate - 1] = ObjectIdGetDatum(InvalidOid);
values[Anum_pg_operator_oprcode - 1] = ObjectIdGetDatum(InvalidOid);
values[Anum_pg_operator_oprrest - 1] = ObjectIdGetDatum(InvalidOid);
values[Anum_pg_operator_oprjoin - 1] = ObjectIdGetDatum(InvalidOid);
/*
* open pg_operator
*/
pg_operator_desc = heap_open(OperatorRelationId, RowExclusiveLock);
tupDesc = pg_operator_desc->rd_att;
/*
* create a new operator tuple
*/
tup = heap_form_tuple(tupDesc, values, nulls);
/*
* insert our "shell" operator tuple
*/
operatorObjectId = CatalogTupleInsert(pg_operator_desc, tup);
/* Add dependencies for the entry */
makeOperatorDependencies(tup, false);
heap_freetuple(tup);
/* Post creation hook for new shell operator */
InvokeObjectPostCreateHook(OperatorRelationId, operatorObjectId, 0);
/*
* Make sure the tuple is visible for subsequent lookups/updates.
*/
CommandCounterIncrement();
/*
* close the operator relation and return the oid.
*/
heap_close(pg_operator_desc, RowExclusiveLock);
return operatorObjectId;
}
| 0 |
[
"CWE-94"
] |
postgres
|
f52d2fbd8c62f667191b61228acf9d8aa53607b9
| 217,935,380,151,042,100,000,000,000,000,000,000,000 | 88 |
In extensions, don't replace objects not belonging to the extension.
Previously, if an extension script did CREATE OR REPLACE and there was
an existing object not belonging to the extension, it would overwrite
the object and adopt it into the extension. This is problematic, first
because the overwrite is probably unintentional, and second because we
didn't change the object's ownership. Thus a hostile user could create
an object in advance of an expected CREATE EXTENSION command, and would
then have ownership rights on an extension object, which could be
modified for trojan-horse-type attacks.
Hence, forbid CREATE OR REPLACE of an existing object unless it already
belongs to the extension. (Note that we've always forbidden replacing
an object that belongs to some other extension; only the behavior for
previously-free-standing objects changes here.)
For the same reason, also fail CREATE IF NOT EXISTS when there is
an existing object that doesn't belong to the extension.
Our thanks to Sven Klemm for reporting this problem.
Security: CVE-2022-2625
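A minimal C sketch of the new rule; object_belongs_to_extension() is a hypothetical helper, not PostgreSQL's actual API:
/* When running an extension script, refuse to adopt a pre-existing
 * object that the extension does not already own. */
if (creating_extension && OidIsValid(existing_oid) &&
    !object_belongs_to_extension(existing_oid, CurrentExtensionObject)) /* hypothetical */
    ereport(ERROR,
            (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
             errmsg("existing object is not a member of the extension")));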
|
kg_unseal_v1(context, minor_status, ctx, ptr, bodysize, message_buffer,
conf_state, qop_state, toktype)
krb5_context context;
OM_uint32 *minor_status;
krb5_gss_ctx_id_rec *ctx;
unsigned char *ptr;
int bodysize;
gss_buffer_t message_buffer;
int *conf_state;
gss_qop_t *qop_state;
int toktype;
{
krb5_error_code code;
int conflen = 0;
int signalg;
int sealalg;
int bad_pad = 0;
gss_buffer_desc token;
krb5_checksum cksum;
krb5_checksum md5cksum;
krb5_data plaind;
char *data_ptr;
unsigned char *plain;
unsigned int cksum_len = 0;
size_t plainlen;
int direction;
krb5_ui_4 seqnum;
OM_uint32 retval;
size_t sumlen;
size_t padlen;
krb5_keyusage sign_usage = KG_USAGE_SIGN;
if (toktype == KG_TOK_SEAL_MSG) {
message_buffer->length = 0;
message_buffer->value = NULL;
}
/* Sanity checks */
if (ctx->seq == NULL) {
/* ctx was established using a newer enctype, and cannot process RFC
* 1964 tokens. */
*minor_status = 0;
return GSS_S_DEFECTIVE_TOKEN;
}
if ((bodysize < 22) || (ptr[4] != 0xff) || (ptr[5] != 0xff)) {
*minor_status = 0;
return GSS_S_DEFECTIVE_TOKEN;
}
signalg = ptr[0] + (ptr[1]<<8);
sealalg = ptr[2] + (ptr[3]<<8);
if ((toktype != KG_TOK_SEAL_MSG) &&
(sealalg != 0xffff)) {
*minor_status = 0;
return GSS_S_DEFECTIVE_TOKEN;
}
/* in the current spec, there is only one valid seal algorithm per
key type, so a simple comparison is ok */
if ((toktype == KG_TOK_SEAL_MSG) &&
!((sealalg == 0xffff) ||
(sealalg == ctx->sealalg))) {
*minor_status = 0;
return GSS_S_DEFECTIVE_TOKEN;
}
/* there are several mappings of seal algorithms to sign algorithms,
but few enough that we can try them all. */
if ((ctx->sealalg == SEAL_ALG_NONE && signalg > 1) ||
(ctx->sealalg == SEAL_ALG_1 && signalg != SGN_ALG_3) ||
(ctx->sealalg == SEAL_ALG_DES3KD &&
signalg != SGN_ALG_HMAC_SHA1_DES3_KD)||
(ctx->sealalg == SEAL_ALG_MICROSOFT_RC4 &&
signalg != SGN_ALG_HMAC_MD5)) {
*minor_status = 0;
return GSS_S_DEFECTIVE_TOKEN;
}
switch (signalg) {
case SGN_ALG_DES_MAC_MD5:
case SGN_ALG_MD2_5:
case SGN_ALG_HMAC_MD5:
cksum_len = 8;
if (toktype != KG_TOK_SEAL_MSG)
sign_usage = 15;
break;
case SGN_ALG_3:
cksum_len = 16;
break;
case SGN_ALG_HMAC_SHA1_DES3_KD:
cksum_len = 20;
break;
default:
*minor_status = 0;
return GSS_S_DEFECTIVE_TOKEN;
}
if ((size_t)bodysize < 14 + cksum_len) {
*minor_status = 0;
return GSS_S_DEFECTIVE_TOKEN;
}
/* get the token parameters */
if ((code = kg_get_seq_num(context, ctx->seq, ptr+14, ptr+6, &direction,
&seqnum))) {
*minor_status = code;
return(GSS_S_BAD_SIG);
}
/* decode the message, if SEAL */
if (toktype == KG_TOK_SEAL_MSG) {
size_t tmsglen = bodysize-(14+cksum_len);
if (sealalg != 0xffff) {
if ((plain = (unsigned char *) xmalloc(tmsglen)) == NULL) {
*minor_status = ENOMEM;
return(GSS_S_FAILURE);
}
if (ctx->sealalg == SEAL_ALG_MICROSOFT_RC4) {
unsigned char bigend_seqnum[4];
krb5_keyblock *enc_key;
int i;
store_32_be(seqnum, bigend_seqnum);
code = krb5_k_key_keyblock(context, ctx->enc, &enc_key);
if (code)
{
xfree(plain);
*minor_status = code;
return(GSS_S_FAILURE);
}
assert (enc_key->length == 16);
for (i = 0; i <= 15; i++)
((char *) enc_key->contents)[i] ^=0xf0;
code = kg_arcfour_docrypt (enc_key, 0,
&bigend_seqnum[0], 4,
ptr+14+cksum_len, tmsglen,
plain);
krb5_free_keyblock (context, enc_key);
} else {
code = kg_decrypt(context, ctx->enc, KG_USAGE_SEAL, NULL,
ptr+14+cksum_len, plain, tmsglen);
}
if (code) {
xfree(plain);
*minor_status = code;
return(GSS_S_FAILURE);
}
} else {
plain = ptr+14+cksum_len;
}
plainlen = tmsglen;
conflen = kg_confounder_size(context, ctx->enc->keyblock.enctype);
if (tmsglen < conflen) {
if (sealalg != 0xffff)
xfree(plain);
*minor_status = 0;
return(GSS_S_DEFECTIVE_TOKEN);
}
padlen = plain[tmsglen - 1];
if (tmsglen - conflen < padlen) {
/* Don't error out yet, to avoid padding oracle attacks. We will
* treat this as a checksum failure later on. */
padlen = 0;
bad_pad = 1;
}
token.length = tmsglen - conflen - padlen;
if (token.length) {
if ((token.value = (void *) gssalloc_malloc(token.length)) == NULL) {
if (sealalg != 0xffff)
xfree(plain);
*minor_status = ENOMEM;
return(GSS_S_FAILURE);
}
memcpy(token.value, plain+conflen, token.length);
} else {
token.value = NULL;
}
} else if (toktype == KG_TOK_SIGN_MSG) {
token = *message_buffer;
plain = token.value;
plainlen = token.length;
} else {
token.length = 0;
token.value = NULL;
plain = token.value;
plainlen = token.length;
}
/* compute the checksum of the message */
/* initialize the cksum */
switch (signalg) {
case SGN_ALG_DES_MAC_MD5:
case SGN_ALG_MD2_5:
case SGN_ALG_DES_MAC:
case SGN_ALG_3:
md5cksum.checksum_type = CKSUMTYPE_RSA_MD5;
break;
case SGN_ALG_HMAC_MD5:
md5cksum.checksum_type = CKSUMTYPE_HMAC_MD5_ARCFOUR;
break;
case SGN_ALG_HMAC_SHA1_DES3_KD:
md5cksum.checksum_type = CKSUMTYPE_HMAC_SHA1_DES3;
break;
default:
abort ();
}
code = krb5_c_checksum_length(context, md5cksum.checksum_type, &sumlen);
if (code)
return(code);
md5cksum.length = sumlen;
switch (signalg) {
case SGN_ALG_DES_MAC_MD5:
case SGN_ALG_3:
/* compute the checksum of the message */
/* 8 = bytes of token body to be checksummed according to spec */
if (! (data_ptr = xmalloc(8 + plainlen))) {
if (sealalg != 0xffff)
xfree(plain);
if (toktype == KG_TOK_SEAL_MSG)
gssalloc_free(token.value);
*minor_status = ENOMEM;
return(GSS_S_FAILURE);
}
(void) memcpy(data_ptr, ptr-2, 8);
(void) memcpy(data_ptr+8, plain, plainlen);
plaind.length = 8 + plainlen;
plaind.data = data_ptr;
code = krb5_k_make_checksum(context, md5cksum.checksum_type,
ctx->seq, sign_usage,
&plaind, &md5cksum);
xfree(data_ptr);
if (code) {
if (toktype == KG_TOK_SEAL_MSG)
gssalloc_free(token.value);
*minor_status = code;
return(GSS_S_FAILURE);
}
code = kg_encrypt_inplace(context, ctx->seq, KG_USAGE_SEAL,
(g_OID_equal(ctx->mech_used,
gss_mech_krb5_old) ?
ctx->seq->keyblock.contents : NULL),
md5cksum.contents, 16);
if (code) {
krb5_free_checksum_contents(context, &md5cksum);
if (toktype == KG_TOK_SEAL_MSG)
gssalloc_free(token.value);
*minor_status = code;
return GSS_S_FAILURE;
}
if (signalg == 0)
cksum.length = 8;
else
cksum.length = 16;
cksum.contents = md5cksum.contents + 16 - cksum.length;
code = k5_bcmp(cksum.contents, ptr + 14, cksum.length);
break;
case SGN_ALG_MD2_5:
if (!ctx->seed_init &&
(code = kg_make_seed(context, ctx->subkey, ctx->seed))) {
krb5_free_checksum_contents(context, &md5cksum);
if (sealalg != 0xffff)
xfree(plain);
if (toktype == KG_TOK_SEAL_MSG)
gssalloc_free(token.value);
*minor_status = code;
return GSS_S_FAILURE;
}
if (! (data_ptr = xmalloc(sizeof(ctx->seed) + 8 + plainlen))) {
krb5_free_checksum_contents(context, &md5cksum);
if (sealalg == 0)
xfree(plain);
if (toktype == KG_TOK_SEAL_MSG)
gssalloc_free(token.value);
*minor_status = ENOMEM;
return(GSS_S_FAILURE);
}
(void) memcpy(data_ptr, ptr-2, 8);
(void) memcpy(data_ptr+8, ctx->seed, sizeof(ctx->seed));
(void) memcpy(data_ptr+8+sizeof(ctx->seed), plain, plainlen);
plaind.length = 8 + sizeof(ctx->seed) + plainlen;
plaind.data = data_ptr;
krb5_free_checksum_contents(context, &md5cksum);
code = krb5_k_make_checksum(context, md5cksum.checksum_type,
ctx->seq, sign_usage,
&plaind, &md5cksum);
xfree(data_ptr);
if (code) {
if (sealalg == 0)
xfree(plain);
if (toktype == KG_TOK_SEAL_MSG)
gssalloc_free(token.value);
*minor_status = code;
return(GSS_S_FAILURE);
}
code = k5_bcmp(md5cksum.contents, ptr + 14, 8);
/* Falls through to defective-token?? */
default:
*minor_status = 0;
return(GSS_S_DEFECTIVE_TOKEN);
case SGN_ALG_HMAC_SHA1_DES3_KD:
case SGN_ALG_HMAC_MD5:
/* compute the checksum of the message */
/* 8 = bytes of token body to be checksummed according to spec */
if (! (data_ptr = xmalloc(8 + plainlen))) {
if (sealalg != 0xffff)
xfree(plain);
if (toktype == KG_TOK_SEAL_MSG)
gssalloc_free(token.value);
*minor_status = ENOMEM;
return(GSS_S_FAILURE);
}
(void) memcpy(data_ptr, ptr-2, 8);
(void) memcpy(data_ptr+8, plain, plainlen);
plaind.length = 8 + plainlen;
plaind.data = data_ptr;
code = krb5_k_make_checksum(context, md5cksum.checksum_type,
ctx->seq, sign_usage,
&plaind, &md5cksum);
xfree(data_ptr);
if (code) {
if (toktype == KG_TOK_SEAL_MSG)
gssalloc_free(token.value);
*minor_status = code;
return(GSS_S_FAILURE);
}
code = k5_bcmp(md5cksum.contents, ptr + 14, cksum_len);
break;
}
krb5_free_checksum_contents(context, &md5cksum);
if (sealalg != 0xffff)
xfree(plain);
/* compare the computed checksum against the transmitted checksum */
if (code || bad_pad) {
if (toktype == KG_TOK_SEAL_MSG)
gssalloc_free(token.value);
*minor_status = 0;
return(GSS_S_BAD_SIG);
}
/* it got through unscathed. Make sure the context is unexpired */
if (toktype == KG_TOK_SEAL_MSG)
*message_buffer = token;
if (conf_state)
*conf_state = (sealalg != 0xffff);
if (qop_state)
*qop_state = GSS_C_QOP_DEFAULT;
/* do sequencing checks */
if ((ctx->initiate && direction != 0xff) ||
(!ctx->initiate && direction != 0)) {
if (toktype == KG_TOK_SEAL_MSG) {
gssalloc_free(token.value);
message_buffer->value = NULL;
message_buffer->length = 0;
}
*minor_status = (OM_uint32)G_BAD_DIRECTION;
return(GSS_S_BAD_SIG);
}
retval = g_seqstate_check(ctx->seqstate, (uint64_t)seqnum);
/* success or ordering violation */
*minor_status = 0;
return(retval);
}
| 0 |
[
"CWE-125"
] |
krb5
|
fb99962cbd063ac04c9a9d2cc7c75eab73f3533d
| 262,684,353,738,615,930,000,000,000,000,000,000,000 | 409 |
Handle invalid RFC 1964 tokens [CVE-2014-4341...]
Detect the following cases which would otherwise cause invalid memory
accesses and/or integer underflow:
* An RFC 1964 token being processed by an RFC 4121-only context
[CVE-2014-4342]
* A header with fewer than 22 bytes after the token ID or an
incomplete checksum [CVE-2014-4341 CVE-2014-4342]
* A ciphertext shorter than the confounder [CVE-2014-4341]
* A declared padding length longer than the plaintext [CVE-2014-4341]
If we detect a bad pad byte, continue on to compute the checksum to
avoid creating a padding oracle, but treat the checksum as invalid
even if it compares equal.
CVE-2014-4341:
In MIT krb5, an unauthenticated remote attacker with the ability to
inject packets into a legitimately established GSSAPI application
session can cause a program crash due to invalid memory references
when attempting to read beyond the end of a buffer.
CVSSv2 Vector: AV:N/AC:M/Au:N/C:N/I:N/A:P/E:POC/RL:OF/RC:C
CVE-2014-4342:
In MIT krb5 releases krb5-1.7 and later, an unauthenticated remote
attacker with the ability to inject packets into a legitimately
established GSSAPI application session can cause a program crash due
to invalid memory references when reading beyond the end of a buffer
or by causing a null pointer dereference.
CVSSv2 Vector: AV:N/AC:M/Au:N/C:N/I:N/A:P/E:POC/RL:OF/RC:C
[[email protected]: CVE summaries, CVSS]
ticket: 7949 (new)
subject: Handle invalid RFC 1964 tokens [CVE-2014-4341 CVE-2014-4342]
target_version: 1.12.2
tags: pullup
|
SdMmcSoftwareReset (
IN SD_MMC_HC_PRIVATE_DATA *Private,
IN UINT8 Slot,
IN UINT16 ErrIntStatus
)
{
UINT8 SwReset;
EFI_STATUS Status;
SwReset = 0;
if ((ErrIntStatus & 0x0F) != 0) {
SwReset |= BIT1;
}
if ((ErrIntStatus & 0x70) != 0) {
SwReset |= BIT2;
}
Status = SdMmcHcRwMmio (
Private->PciIo,
Slot,
SD_MMC_HC_SW_RST,
FALSE,
sizeof (SwReset),
&SwReset
);
if (EFI_ERROR (Status)) {
return Status;
}
Status = SdMmcHcWaitMmioSet (
Private->PciIo,
Slot,
SD_MMC_HC_SW_RST,
sizeof (SwReset),
0xFF,
0,
SD_MMC_HC_GENERIC_TIMEOUT
);
if (EFI_ERROR (Status)) {
return Status;
}
return EFI_SUCCESS;
}
| 0 |
[] |
edk2
|
e36d5ac7d10a6ff5becb0f52fdfd69a1752b0d14
| 301,259,077,544,060,540,000,000,000,000,000,000,000 | 44 |
MdeModulePkg/SdMmcPciHcDxe: Fix double PciIo Unmap in TRB creation (CVE-2019-14587)
REF:https://bugzilla.tianocore.org/show_bug.cgi?id=1989
The commit will avoid unmapping the same resource in error handling logic
for function BuildAdmaDescTable() and SdMmcCreateTrb().
For the error handling in BuildAdmaDescTable():
The error is directly related with the corresponding Map() operation
(mapped address beyond 4G, which is not supported in ADMA), so the Unmap()
operation is done in the error handling logic, and then setting
'Trb->AdmaMap' to NULL to avoid double Unmap.
For the error handling in SdMmcCreateTrb():
The error is not directly related with the corresponding Map() operation,
so the commit updates the code to leave the Unmap operation to
SdMmcFreeTrb(), avoiding a double Unmap.
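A minimal sketch of the pattern described above for the BuildAdmaDescTable() error path:
//
// Unmap once in the error handler, then clear the handle so the later
// cleanup in SdMmcFreeTrb() cannot Unmap the same mapping again.
//
PciIo->Unmap (PciIo, Trb->AdmaMap);
Trb->AdmaMap = NULL;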
Cc: Jian J Wang <[email protected]>
Cc: Ray Ni <[email protected]>
Signed-off-by: Hao A Wu <[email protected]>
Reviewed-by: Jian J Wang <[email protected]>
|
PamBackend::~PamBackend() {
delete m_data;
delete m_pam;
}
| 0 |
[
"CWE-613",
"CWE-287",
"CWE-284"
] |
sddm
|
147cec383892d143b5e02daa70f1e7def50f5d98
| 275,658,418,030,067,700,000,000,000,000,000,000,000 | 4 |
Fix authentication when reusing an existing session
- Check the success value before unlocking the session
- Don't attempt to use the nonexistant "sddm-check" PAM service
|
static int cap_socket_accept(struct socket *sock, struct socket *newsock)
{
return 0;
}
| 0 |
[] |
linux-2.6
|
ee18d64c1f632043a02e6f5ba5e045bb26a5465f
| 318,398,755,651,835,400,000,000,000,000,000,000,000 | 4 |
KEYS: Add a keyctl to install a process's session keyring on its parent [try #6]
Add a keyctl to install a process's session keyring onto its parent. This
replaces the parent's session keyring. Because the COW credential code does
not permit one process to change another process's credentials directly, the
change is deferred until userspace next starts executing again. Normally this
will be after a wait*() syscall.
To support this, three new security hooks have been provided:
cred_alloc_blank() to allocate unset security creds, cred_transfer() to fill in
the blank security creds and key_session_to_parent() - which asks the LSM if
the process may replace its parent's session keyring.
The replacement may only happen if the process has the same ownership details
as its parent, and the process has LINK permission on the session keyring, and
the session keyring is owned by the process, and the LSM permits it.
Note that this requires alteration to each architecture's notify_resume path.
This has been done for all arches barring blackfin, m68k* and xtensa, all of
which need assembly alteration to support TIF_NOTIFY_RESUME. This allows the
replacement to be performed at the point the parent process resumes userspace
execution.
This allows the userspace AFS pioctl emulation to fully emulate newpag() and
the VIOCSETTOK and VIOCSETTOK2 pioctls, all of which require the ability to
alter the parent process's PAG membership. However, since kAFS doesn't use
PAGs per se, but rather dumps the keys into the session keyring, the session
keyring of the parent must be replaced if, for example, VIOCSETTOK is passed
the newpag flag.
This can be tested with the following program:
#include <stdio.h>
#include <stdlib.h>
#include <keyutils.h>
#define KEYCTL_SESSION_TO_PARENT 18
#define OSERROR(X, S) do { if ((long)(X) == -1) { perror(S); exit(1); } } while(0)
int main(int argc, char **argv)
{
key_serial_t keyring, key;
long ret;
keyring = keyctl_join_session_keyring(argv[1]);
OSERROR(keyring, "keyctl_join_session_keyring");
key = add_key("user", "a", "b", 1, keyring);
OSERROR(key, "add_key");
ret = keyctl(KEYCTL_SESSION_TO_PARENT);
OSERROR(ret, "KEYCTL_SESSION_TO_PARENT");
return 0;
}
Compiled and linked with -lkeyutils, you should see something like:
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: _ses
355907932 --alswrv 4043 -1 \_ keyring: _uid.4043
[dhowells@andromeda ~]$ /tmp/newpag
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: _ses
1055658746 --alswrv 4043 4043 \_ user: a
[dhowells@andromeda ~]$ /tmp/newpag hello
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: hello
340417692 --alswrv 4043 4043 \_ user: a
Where the test program creates a new session keyring, sticks a user key named
'a' into it and then installs it on its parent.
Signed-off-by: David Howells <[email protected]>
Signed-off-by: James Morris <[email protected]>
|
static void cgw_remove_all_jobs(void)
{
struct cgw_job *gwj = NULL;
struct hlist_node *nx;
ASSERT_RTNL();
hlist_for_each_entry_safe(gwj, nx, &cgw_list, list) {
hlist_del(&gwj->list);
cgw_unregister_filter(gwj);
kmem_cache_free(cgw_cache, gwj);
}
}
| 0 |
[
"CWE-264"
] |
net
|
90f62cf30a78721641e08737bda787552428061e
| 174,537,300,225,301,800,000,000,000,000,000,000,000 | 13 |
net: Use netlink_ns_capable to verify the permissions of netlink messages
By passing a netlink socket to a more privileged executable, it is
possible to fool that executable into writing to the socket data that
happens to be a valid netlink message, causing the privileged
executable to do something it did not intend to do.
To keep this from happening replace bare capable and ns_capable calls
with netlink_capable, netlink_net_capable and netlink_ns_capable calls.
These act the same as the previous calls, except that they verify that the
opener of the socket had the desired permissions as well.
Reported-by: Andy Lutomirski <[email protected]>
Signed-off-by: "Eric W. Biederman" <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
xfs_inode_item_format_data_fork(
struct xfs_inode_log_item *iip,
struct xfs_inode_log_format *ilf,
struct xfs_log_vec *lv,
struct xfs_log_iovec **vecp)
{
struct xfs_inode *ip = iip->ili_inode;
size_t data_bytes;
switch (ip->i_d.di_format) {
case XFS_DINODE_FMT_EXTENTS:
iip->ili_fields &=
~(XFS_ILOG_DDATA | XFS_ILOG_DBROOT |
XFS_ILOG_DEV | XFS_ILOG_UUID);
if ((iip->ili_fields & XFS_ILOG_DEXT) &&
ip->i_d.di_nextents > 0 &&
ip->i_df.if_bytes > 0) {
struct xfs_bmbt_rec *p;
ASSERT(ip->i_df.if_u1.if_extents != NULL);
ASSERT(ip->i_df.if_bytes / sizeof(xfs_bmbt_rec_t) > 0);
p = xlog_prepare_iovec(lv, vecp, XLOG_REG_TYPE_IEXT);
data_bytes = xfs_iextents_copy(ip, p, XFS_DATA_FORK);
xlog_finish_iovec(lv, *vecp, data_bytes);
ASSERT(data_bytes <= ip->i_df.if_bytes);
ilf->ilf_dsize = data_bytes;
ilf->ilf_size++;
} else {
iip->ili_fields &= ~XFS_ILOG_DEXT;
}
break;
case XFS_DINODE_FMT_BTREE:
iip->ili_fields &=
~(XFS_ILOG_DDATA | XFS_ILOG_DEXT |
XFS_ILOG_DEV | XFS_ILOG_UUID);
if ((iip->ili_fields & XFS_ILOG_DBROOT) &&
ip->i_df.if_broot_bytes > 0) {
ASSERT(ip->i_df.if_broot != NULL);
xlog_copy_iovec(lv, vecp, XLOG_REG_TYPE_IBROOT,
ip->i_df.if_broot,
ip->i_df.if_broot_bytes);
ilf->ilf_dsize = ip->i_df.if_broot_bytes;
ilf->ilf_size++;
} else {
ASSERT(!(iip->ili_fields &
XFS_ILOG_DBROOT));
iip->ili_fields &= ~XFS_ILOG_DBROOT;
}
break;
case XFS_DINODE_FMT_LOCAL:
iip->ili_fields &=
~(XFS_ILOG_DEXT | XFS_ILOG_DBROOT |
XFS_ILOG_DEV | XFS_ILOG_UUID);
if ((iip->ili_fields & XFS_ILOG_DDATA) &&
ip->i_df.if_bytes > 0) {
/*
* Round i_bytes up to a word boundary.
* The underlying memory is guaranteed to
* be there by xfs_idata_realloc().
*/
data_bytes = roundup(ip->i_df.if_bytes, 4);
ASSERT(ip->i_df.if_real_bytes == 0 ||
ip->i_df.if_real_bytes == data_bytes);
ASSERT(ip->i_df.if_u1.if_data != NULL);
ASSERT(ip->i_d.di_size > 0);
xlog_copy_iovec(lv, vecp, XLOG_REG_TYPE_ILOCAL,
ip->i_df.if_u1.if_data, data_bytes);
ilf->ilf_dsize = (unsigned)data_bytes;
ilf->ilf_size++;
} else {
iip->ili_fields &= ~XFS_ILOG_DDATA;
}
break;
case XFS_DINODE_FMT_DEV:
iip->ili_fields &=
~(XFS_ILOG_DDATA | XFS_ILOG_DBROOT |
XFS_ILOG_DEXT | XFS_ILOG_UUID);
if (iip->ili_fields & XFS_ILOG_DEV)
ilf->ilf_u.ilfu_rdev = ip->i_df.if_u2.if_rdev;
break;
case XFS_DINODE_FMT_UUID:
iip->ili_fields &=
~(XFS_ILOG_DDATA | XFS_ILOG_DBROOT |
XFS_ILOG_DEXT | XFS_ILOG_DEV);
if (iip->ili_fields & XFS_ILOG_UUID)
ilf->ilf_u.ilfu_uuid = ip->i_df.if_u2.if_uuid;
break;
default:
ASSERT(0);
break;
}
}
| 0 |
[
"CWE-19"
] |
linux
|
fc0561cefc04e7803c0f6501ca4f310a502f65b8
| 138,398,629,932,559,050,000,000,000,000,000,000,000 | 97 |
xfs: optimise away log forces on timestamp updates for fdatasync
xfs: timestamp updates cause excessive fdatasync log traffic
Sage Weil reported that a ceph test workload was writing to the
log on every fdatasync during an overwrite workload. Event tracing
showed that the only metadata modification being made was the
timestamp updates during the write(2) syscall, but fdatasync(2)
is supposed to ignore them. The key observation was that the
transactions in the log all looked like this:
INODE: #regs: 4 ino: 0x8b flags: 0x45 dsize: 32
And contained a flags field of 0x45 or 0x85, and had data and
attribute forks following the inode core. This means that the
timestamp updates were triggering dirty relogging of previously
logged parts of the inode that hadn't yet been flushed back to
disk.
There are two parts to this problem. The first is that XFS relogs
dirty regions in subsequent transactions, so it carries around the
fields that have been dirtied since the last time the inode was
written back to disk, not since the last time the inode was forced
into the log.
The second part is that on v5 filesystems, the inode change count
update during inode dirtying also sets the XFS_ILOG_CORE flag, so
on v5 filesystems this makes a timestamp update dirty the entire
inode.
As a result when fdatasync is run, it looks at the dirty fields in
the inode, and sees more than just the timestamp flag, even though
the only metadata change since the last fdatasync was just the
timestamps. Hence we force the log on every subsequent fdatasync
even though it is not needed.
To fix this, add a new field to the inode log item that tracks
changes since the last time fsync/fdatasync forced the log to flush
the changes to the journal. This flag is updated when we dirty the
inode, but we do it before updating the change count so it does not
carry the "core dirty" flag from timestamp updates. The fields are
zeroed when the inode is marked clean (due to writeback/freeing) or
when an fsync/datasync forces the log. Hence if we only dirty the
timestamps on the inode between fsync/fdatasync calls, the fdatasync
will not trigger another log force.
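A minimal sketch of the bookkeeping described above; the field name follows the commit's description, with the details simplified:
/* When dirtying the inode, record the flags in the fsync-tracking mask
 * first, so the change-count update's XFS_ILOG_CORE does not leak in. */
iip->ili_fsync_fields |= flags;  /* zeroed when fsync/fdatasync forces the log */
iip->ili_fields |= flags;        /* zeroed when the inode is written back */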
Over 100 runs of the test program:
Ext4 baseline:
runtime: 1.63s +/- 0.24s
avg lat: 1.59ms +/- 0.24ms
iops: ~2000
XFS, vanilla kernel:
runtime: 2.45s +/- 0.18s
avg lat: 2.39ms +/- 0.18ms
log forces: ~400/s
iops: ~1000
XFS, patched kernel:
runtime: 1.49s +/- 0.26s
avg lat: 1.46ms +/- 0.25ms
log forces: ~30/s
iops: ~1500
Reported-by: Sage Weil <[email protected]>
Signed-off-by: Dave Chinner <[email protected]>
Reviewed-by: Brian Foster <[email protected]>
Signed-off-by: Dave Chinner <[email protected]>
|
static int dvb_usbv2_download_firmware(struct dvb_usb_device *d,
const char *name)
{
int ret;
const struct firmware *fw;
dev_dbg(&d->udev->dev, "%s:\n", __func__);
if (!d->props->download_firmware) {
ret = -EINVAL;
goto err;
}
ret = request_firmware(&fw, name, &d->udev->dev);
if (ret < 0) {
dev_err(&d->udev->dev,
"%s: Did not find the firmware file '%s'. Please see linux/Documentation/dvb/ for more details on firmware-problems. Status %d\n",
KBUILD_MODNAME, name, ret);
goto err;
}
dev_info(&d->udev->dev, "%s: downloading firmware from file '%s'\n",
KBUILD_MODNAME, name);
ret = d->props->download_firmware(d, fw);
release_firmware(fw);
if (ret < 0)
goto err;
return ret;
err:
dev_dbg(&d->udev->dev, "%s: failed=%d\n", __func__, ret);
return ret;
}
| 0 |
[
"CWE-119",
"CWE-787"
] |
linux
|
005145378c9ad7575a01b6ce1ba118fb427f583a
| 218,363,515,849,131,480,000,000,000,000,000,000,000 | 33 |
[media] dvb-usb-v2: avoid use-after-free
I ran into a stack frame size warning because of the on-stack copy of
the USB device structure:
drivers/media/usb/dvb-usb-v2/dvb_usb_core.c: In function 'dvb_usbv2_disconnect':
drivers/media/usb/dvb-usb-v2/dvb_usb_core.c:1029:1: error: the frame size of 1104 bytes is larger than 1024 bytes [-Werror=frame-larger-than=]
Copying a device structure like this is wrong for a number of other reasons
too aside from the possible stack overflow. One of them is that the
dev_info() call will print the name of the device later, but AFAICT
we have only copied a pointer to the name earlier and the actual name
has been freed by the time it gets printed.
This removes the on-stack copy of the device and instead copies the
device name using kstrdup(). I'm ignoring the possible failure here
as both printk() and kfree() are able to deal with NULL pointers.
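A minimal sketch of the approach, simplified from the actual patch:
struct device *dev = &d->udev->dev;                       /* outlives 'd' */
const char *name = kstrdup(dev_name(dev), GFP_KERNEL);    /* may be NULL */
dvb_usbv2_exit(d);                                        /* frees 'd' */
dev_info(dev, "'%s' successfully deinitialized\n", name); /* %s handles NULL */
kfree(name);                                              /* kfree(NULL) is a no-op */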
Signed-off-by: Arnd Bergmann <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
|
const CImg<T>& save_off(const CImgList<tf>& primitives, const CImgList<tc>& colors,
std::FILE *const file) const {
return _save_off(primitives,colors,file,0);
}
| 0 |
[
"CWE-770"
] |
cimg
|
619cb58dd90b4e03ac68286c70ed98acbefd1c90
| 175,534,862,953,687,400,000,000,000,000,000,000,000 | 4 |
CImg<>::load_bmp() and CImg<>::load_pandore(): Check that the dimensions encoded in the file do not exceed the file size.
|
fix_glyph_scaling(ASS_Renderer *priv, GlyphInfo *glyph)
{
double ft_size;
if (priv->settings.hinting == ASS_HINTING_NONE) {
// arbitrary, not too small to prevent grid fitting rounding effects
// XXX: this is a rather crude hack
ft_size = 256.0;
} else {
// If hinting is enabled, we want to pass the real font size
// to freetype. Normalize scale_y to 1.0.
ft_size = glyph->scale_y * glyph->font_size;
}
glyph->scale_x = glyph->scale_x * glyph->font_size / ft_size;
glyph->scale_y = glyph->scale_y * glyph->font_size / ft_size;
glyph->font_size = ft_size;
}
| 0 |
[
"CWE-125"
] |
libass
|
f4f48950788b91c6a30029cc28a240b834713ea7
| 139,927,730,050,104,520,000,000,000,000,000,000,000 | 16 |
Fix line wrapping mode 0/3 bugs
This fixes two separate bugs:
a) Don't move a linebreak into the first symbol. This results in an empty
line at the front, which does not help to equalize line lengths at all.
Instead, merge the line with the second one.
b) When moving a linebreak into a symbol that already is a break, the
number of lines must be decremented. Otherwise, uninitialized memory
is possibly used for later layout operations.
Found by fuzzer test case
id:000085,sig:11,src:003377+003350,op:splice,rep:8.
This might also affect and hopefully fix libass#229.
v2: change semantics according to review
|
iobuf_tell (iobuf_t a)
{
return a->ntotal + a->nbytes;
}
| 0 |
[
"CWE-20"
] |
gnupg
|
2183683bd633818dd031b090b5530951de76f392
| 136,394,847,183,470,620,000,000,000,000,000,000,000 | 4 |
Use inline functions to convert buffer data to scalars.
* common/host2net.h (buf16_to_ulong, buf16_to_uint): New.
(buf16_to_ushort, buf16_to_u16): New.
(buf32_to_size_t, buf32_to_ulong, buf32_to_uint, buf32_to_u32): New.
--
Commit 91b826a38880fd8a989318585eb502582636ddd8 was not enough to
avoid all sign extension on shift problems. Hanno Böck found a case
with an invalid read due to this problem. To fix that once and for
all almost all uses of "<< 24" and "<< 8" are changed by this patch to
use an inline function from host2net.h.
Signed-off-by: Werner Koch <[email protected]>
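A minimal sketch of one such helper, in the spirit of host2net.h (the shipped implementations may differ in detail):
static inline unsigned long
buf32_to_ulong (const void *buffer)
{
  const unsigned char *p = buffer;

  /* Widen each byte before shifting so no value is ever shifted into
   * a sign bit, avoiding the sign-extension bug described above. */
  return (((unsigned long)p[0] << 24) | ((unsigned long)p[1] << 16)
          | ((unsigned long)p[2] << 8) | (unsigned long)p[3]);
}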
|
static void MSCFree(void* ptr)
{
free(ptr);
}
| 0 |
[
"CWE-190",
"CWE-189",
"CWE-703"
] |
ImageMagick
|
0f6fc2d5bf8f500820c3dbcf0d23ee14f2d9f734
| 223,537,725,215,557,940,000,000,000,000,000,000,000 | 4 | |
static MagickBooleanType WriteRGBImage(const ImageInfo *image_info,Image *image)
{
MagickBooleanType
status;
MagickOffsetType
scene;
QuantumInfo
*quantum_info;
QuantumType
quantum_type;
size_t
imageListLength,
length;
ssize_t
count,
y;
unsigned char
*pixels;
/*
Allocate memory for pixels.
*/
assert(image_info != (const ImageInfo *) NULL);
assert(image_info->signature == MagickCoreSignature);
assert(image != (Image *) NULL);
assert(image->signature == MagickCoreSignature);
if (image->debug != MagickFalse)
(void) LogMagickEvent(TraceEvent,GetMagickModule(),"%s",image->filename);
if (image_info->interlace != PartitionInterlace)
{
/*
Open output image file.
*/
status=OpenBlob(image_info,image,WriteBinaryBlobMode,&image->exception);
if (status == MagickFalse)
return(status);
}
quantum_type=RGBQuantum;
if (LocaleCompare(image_info->magick,"RGBA") == 0)
quantum_type=RGBAQuantum;
if (LocaleCompare(image_info->magick,"RGBO") == 0)
quantum_type=RGBOQuantum;
scene=0;
imageListLength=GetImageListLength(image);
do
{
/*
Convert MIFF to RGB raster pixels.
*/
(void) TransformImageColorspace(image,sRGBColorspace);
if ((LocaleCompare(image_info->magick,"RGBA") == 0) &&
(image->matte == MagickFalse))
(void) SetImageAlphaChannel(image,ResetAlphaChannel);
quantum_info=AcquireQuantumInfo(image_info,image);
if (quantum_info == (QuantumInfo *) NULL)
ThrowWriterException(ResourceLimitError,"MemoryAllocationFailed");
pixels=GetQuantumPixels(quantum_info);
switch (image_info->interlace)
{
case NoInterlace:
default:
{
/*
No interlacing: RGBRGBRGBRGBRGBRGB...
*/
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*magick_restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,quantum_type,pixels,&image->exception);
count=WriteBlob(image,length,pixels);
if (count != (ssize_t) length)
break;
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,(MagickOffsetType) y,
image->rows);
if (status == MagickFalse)
break;
}
}
break;
}
case LineInterlace:
{
/*
Line interlacing: RRR...GGG...BBB...RRR...GGG...BBB...
*/
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*magick_restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,RedQuantum,pixels,&image->exception);
count=WriteBlob(image,length,pixels);
if (count != (ssize_t) length)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,GreenQuantum,pixels,&image->exception);
count=WriteBlob(image,length,pixels);
if (count != (ssize_t) length)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,BlueQuantum,pixels,&image->exception);
count=WriteBlob(image,length,pixels);
if (count != (ssize_t) length)
break;
if (quantum_type == RGBAQuantum)
{
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,AlphaQuantum,pixels,&image->exception);
count=WriteBlob(image,length,pixels);
if (count != (ssize_t) length)
break;
}
if (quantum_type == RGBOQuantum)
{
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,OpacityQuantum,pixels,&image->exception);
count=WriteBlob(image,length,pixels);
if (count != (ssize_t) length)
break;
}
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,(MagickOffsetType) y,
image->rows);
if (status == MagickFalse)
break;
}
}
break;
}
case PlaneInterlace:
{
/*
Plane interlacing: RRRRRR...GGGGGG...BBBBBB...
*/
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*magick_restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,RedQuantum,pixels,&image->exception);
count=WriteBlob(image,length,pixels);
if (count != (ssize_t) length)
break;
}
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,1,6);
if (status == MagickFalse)
break;
}
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*magick_restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,GreenQuantum,pixels,&image->exception);
count=WriteBlob(image,length,pixels);
if (count != (ssize_t) length)
break;
}
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,2,6);
if (status == MagickFalse)
break;
}
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*magick_restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,BlueQuantum,pixels,&image->exception);
count=WriteBlob(image,length,pixels);
if (count != (ssize_t) length)
break;
}
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,3,6);
if (status == MagickFalse)
break;
}
if (quantum_type == RGBAQuantum)
{
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*magick_restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,AlphaQuantum,pixels,&image->exception);
count=WriteBlob(image,length,pixels);
if (count != (ssize_t) length)
break;
}
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,5,6);
if (status == MagickFalse)
break;
}
}
if (image_info->interlace == PartitionInterlace)
(void) CopyMagickString(image->filename,image_info->filename,
MaxTextExtent);
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,6,6);
if (status == MagickFalse)
break;
}
break;
}
case PartitionInterlace:
{
/*
Partition interlacing: RRRRRR..., GGGGGG..., BBBBBB...
*/
AppendImageFormat("R",image->filename);
status=OpenBlob(image_info,image,scene == 0 ? WriteBinaryBlobMode :
AppendBinaryBlobMode,&image->exception);
if (status == MagickFalse)
return(status);
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*magick_restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,RedQuantum,pixels,&image->exception);
count=WriteBlob(image,length,pixels);
if (count != (ssize_t) length)
break;
}
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,1,6);
if (status == MagickFalse)
break;
}
(void) CloseBlob(image);
AppendImageFormat("G",image->filename);
status=OpenBlob(image_info,image,scene == 0 ? WriteBinaryBlobMode :
AppendBinaryBlobMode,&image->exception);
if (status == MagickFalse)
return(status);
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*magick_restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,GreenQuantum,pixels,&image->exception);
count=WriteBlob(image,length,pixels);
if (count != (ssize_t) length)
break;
}
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,2,6);
if (status == MagickFalse)
break;
}
(void) CloseBlob(image);
AppendImageFormat("B",image->filename);
status=OpenBlob(image_info,image,scene == 0 ? WriteBinaryBlobMode :
AppendBinaryBlobMode,&image->exception);
if (status == MagickFalse)
return(status);
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*magick_restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,BlueQuantum,pixels,&image->exception);
count=WriteBlob(image,length,pixels);
if (count != (ssize_t) length)
break;
}
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,3,6);
if (status == MagickFalse)
break;
}
if (quantum_type == RGBAQuantum)
{
(void) CloseBlob(image);
AppendImageFormat("A",image->filename);
status=OpenBlob(image_info,image,scene == 0 ? WriteBinaryBlobMode :
AppendBinaryBlobMode,&image->exception);
if (status == MagickFalse)
return(status);
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*magick_restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,
&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,AlphaQuantum,pixels,&image->exception);
count=WriteBlob(image,length,pixels);
if (count != (ssize_t) length)
break;
}
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,5,6);
if (status == MagickFalse)
break;
}
}
(void) CloseBlob(image);
(void) CopyMagickString(image->filename,image_info->filename,
MaxTextExtent);
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,6,6);
if (status == MagickFalse)
break;
}
break;
}
}
quantum_info=DestroyQuantumInfo(quantum_info);
if (GetNextImageInList(image) == (Image *) NULL)
break;
image=SyncNextImageInList(image);
status=SetImageProgress(image,SaveImagesTag,scene++,imageListLength);
if (status == MagickFalse)
break;
} while (image_info->adjoin != MagickFalse);
(void) CloseBlob(image);
return(MagickTrue);
}
| 0 |
[
"CWE-401"
] |
ImageMagick6
|
210474b2fac6a661bfa7ed563213920e93e76395
| 313,428,304,870,144,600,000,000,000,000,000,000,000 | 382 |
Fix ultra rare but potential memory-leak
|
static void vt_event_wait(struct vt_event_wait *vw)
{
__vt_event_queue(vw);
__vt_event_wait(vw);
__vt_event_dequeue(vw);
}
| 0 |
[
"CWE-662"
] |
linux
|
90bfdeef83f1d6c696039b6a917190dcbbad3220
| 269,927,584,358,172,740,000,000,000,000,000,000,000 | 6 |
tty: make FONTX ioctl use the tty pointer they were actually passed
Some of the font tty ioctl's always used the current foreground VC for
their operations. Don't do that then.
This fixes a data race on fg_console.
Side note: both Michael Ellerman and Jiri Slaby point out that all these
ioctls are deprecated, and should probably have been removed long ago,
and everything seems to be using the KDFONTOP ioctl instead.
In fact, Michael points out that it looks like busybox's loadfont
program seems to have switched over to using KDFONTOP exactly _because_
of this bug (ahem.. 12 years ago ;-).
Reported-by: Minh Yuan <[email protected]>
Acked-by: Michael Ellerman <[email protected]>
Acked-by: Jiri Slaby <[email protected]>
Cc: Greg KH <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
void WebContents::BeforeUnloadFired(content::WebContents* tab,
bool proceed,
bool* proceed_to_fire_unload) {
if (type_ == Type::BROWSER_WINDOW || type_ == Type::OFF_SCREEN)
*proceed_to_fire_unload = proceed;
else
*proceed_to_fire_unload = true;
// Note that Chromium does not emit this for navigations.
Emit("before-unload-fired", proceed);
}
| 0 |
[
"CWE-284",
"CWE-693"
] |
electron
|
18613925610ba319da7f497b6deed85ad712c59b
| 268,298,648,768,769,900,000,000,000,000,000,000,000 | 10 |
refactor: wire will-navigate up to a navigation throttle instead of OpenURL (#25108)
* refactor: wire will-navigate up to a navigation throttle instead of OpenURL (#25065)
* refactor: wire will-navigate up to a navigation throttle instead of OpenURL
* spec: add test for x-site _top navigation
* chore: old code be old
|
qf_pop_dir(struct dir_stack_T **stackptr)
{
struct dir_stack_T *ds_ptr;
// TODO: Should we check if dirbuf is the directory on top of the stack?
// What to do if it isn't?
// pop top element and free it
if (*stackptr != NULL)
{
ds_ptr = *stackptr;
*stackptr = (*stackptr)->next;
vim_free(ds_ptr->dirname);
vim_free(ds_ptr);
}
// return NEW top element as current dir or NULL if stack is empty
return *stackptr ? (*stackptr)->dirname : NULL;
}
| 0 |
[
"CWE-416"
] |
vim
|
4f1b083be43f351bc107541e7b0c9655a5d2c0bb
| 23,807,015,907,869,250,000,000,000,000,000,000,000 | 19 |
patch 9.0.0322: crash when no errors and 'quickfixtextfunc' is set
Problem: Crash when no errors and 'quickfixtextfunc' is set.
Solution: Do not handle errors if there aren't any.
|
_cairo_image_bounded_opaque_spans (void *abstract_renderer,
int y, int height,
const cairo_half_open_span_t *spans,
unsigned num_spans)
{
cairo_image_span_renderer_t *r = abstract_renderer;
if (num_spans == 0)
return CAIRO_STATUS_SUCCESS;
do {
if (spans[0].coverage)
pixman_image_compositor_blt (r->compositor,
spans[0].x, y,
spans[1].x - spans[0].x, height,
spans[0].coverage);
spans++;
} while (--num_spans > 1);
return CAIRO_STATUS_SUCCESS;
}
| 0 |
[
"CWE-787"
] |
cairo
|
c986a7310bb06582b7d8a566d5f007ba4e5e75bf
| 297,165,334,754,253,280,000,000,000,000,000,000,000 | 21 |
image: Enable inplace compositing with opacities for general routines
On a SNB i5-2500:
Speedups
========
firefox-chalkboard 34284.16 -> 19637.40: 1.74x speedup
swfdec-giant-steps 778.35 -> 665.37: 1.17x speedup
ocitysmap 485.64 -> 431.94: 1.12x speedup
Slowdowns
=========
firefox-fishbowl 46878.98 -> 54407.14: 1.16x slowdown
That slow down is due to overhead of the increased number of calls to
pixman_image_composite32() (pixman_transform_point for analyzing the
source extents in particular) outweighing any advantage gained by
performing the rasterisation in a single pass and eliding gaps. The
solution that has been floated in the past is for an interface into
pixman to only perform the analysis once and then to return a kernel to
use for all spans.
Signed-off-by: Chris Wilson <[email protected]>
|
static void php_zip_free_entry(zend_resource *rsrc)
{
zip_read_rsrc *zr_rsrc = (zip_read_rsrc *) rsrc->ptr;
if (zr_rsrc) {
if (zr_rsrc->zf) {
zip_fclose(zr_rsrc->zf);
zr_rsrc->zf = NULL;
}
efree(zr_rsrc);
rsrc->ptr = NULL;
}
}
| 0 |
[
"CWE-190"
] |
php-src
|
3b8d4de300854b3517c7acb239b84f7726c1353c
| 249,958,479,886,737,440,000,000,000,000,000,000,000 | 13 |
Fix bug #71923 - integer overflow in ZipArchive::getFrom*
|
s_aos_close(stream * s)
{
gs_free_object(s->memory, s->cbuf, "s_aos_close(buffer)");
s->cbuf = 0;
/* Increment the IDs to prevent further access. */
s->read_id = s->write_id = (s->read_id | s->write_id) + 1;
return 0;
}
| 0 |
[] |
ghostpdl
|
04b37bbce174eed24edec7ad5b920eb93db4d47d
| 104,020,584,158,416,260,000,000,000,000,000,000,000 | 8 |
Bug 697799: have .rsdparams check its parameters
The Ghostscript internal operator .rsdparams wasn't checking the number or
type of the operands it was being passed. Do so.
|
virDomainHostdevMatchSubsysPCI(virDomainHostdevDefPtr first,
virDomainHostdevDefPtr second)
{
virDomainHostdevSubsysPCIPtr first_pcisrc = &first->source.subsys.u.pci;
virDomainHostdevSubsysPCIPtr second_pcisrc = &second->source.subsys.u.pci;
if (first_pcisrc->addr.domain == second_pcisrc->addr.domain &&
first_pcisrc->addr.bus == second_pcisrc->addr.bus &&
first_pcisrc->addr.slot == second_pcisrc->addr.slot &&
first_pcisrc->addr.function == second_pcisrc->addr.function)
return 1;
return 0;
}
| 0 |
[
"CWE-212"
] |
libvirt
|
a5b064bf4b17a9884d7d361733737fb614ad8979
| 264,363,439,013,006,100,000,000,000,000,000,000,000 | 13 |
conf: Don't format http cookies unless VIR_DOMAIN_DEF_FORMAT_SECURE is used
Starting with 3b076391befc3fe72deb0c244ac6c2b4c100b410
(v6.1.0-122-g3b076391be) we support http cookies. Since they may contain
somewhat sensitive information we should not format them into the XML
unless VIR_DOMAIN_DEF_FORMAT_SECURE is asserted.
Reported-by: Han Han <[email protected]>
Signed-off-by: Peter Krempa <[email protected]>
Reviewed-by: Erik Skultety <[email protected]>
|
static void save_screen(struct vc_data *vc)
{
WARN_CONSOLE_UNLOCKED();
if (vc->vc_sw->con_save_screen)
vc->vc_sw->con_save_screen(vc);
}
| 0 |
[
"CWE-125"
] |
linux
|
3c4e0dff2095c579b142d5a0693257f1c58b4804
| 91,629,715,567,701,600,000,000,000,000,000,000,000 | 7 |
vt: Disable KD_FONT_OP_COPY
It's buggy:
On Fri, Nov 06, 2020 at 10:30:08PM +0800, Minh Yuan wrote:
> We recently discovered a slab-out-of-bounds read in fbcon in the latest
> kernel ( v5.10-rc2 for now ). The root cause of this vulnerability is that
> "fbcon_do_set_font" did not handle "vc->vc_font.data" and
> "vc->vc_font.height" correctly, and the patch
> <https://lkml.org/lkml/2020/9/27/223> for VT_RESIZEX can't handle this
> issue.
>
> Specifically, we use KD_FONT_OP_SET to set a small font.data for tty6, and
> use KD_FONT_OP_SET again to set a large font.height for tty1. After that,
> we use KD_FONT_OP_COPY to assign tty6's vc_font.data to tty1's vc_font.data
> in "fbcon_do_set_font", while tty1 retains the original larger
> height. Obviously, this will cause an out-of-bounds read, because we can
> access a smaller vc_font.data with a larger vc_font.height.
Further there was only one user ever.
- Android's loadfont, busybox and console-tools only ever use OP_GET
and OP_SET
- fbset documentation only mentions the kernel cmdline font: option,
not anything else.
- systemd used OP_COPY before release 232 published in Nov 2016
Now unfortunately the crucial report seems to have gone down with
gmane, and the commit message doesn't say much. But the pull request
hints at OP_COPY being broken
https://github.com/systemd/systemd/pull/3651
So in other words, this never worked, and the only project which
foolishly every tried to use it, realized that rather quickly too.
Instead of trying to fix security issues here on dead code by adding
missing checks, fix the entire thing by removing the functionality.
Note that systemd code using the OP_COPY function ignored the return
value, so it doesn't matter what we're doing here really - just in
case a lone server somewhere happens to be extremely unlucky and
running an affected old version of systemd. The relevant code from
font_copy_to_all_vcs() in systemd was:
/* copy font from active VT, where the font was uploaded to */
cfo.op = KD_FONT_OP_COPY;
cfo.height = vcs.v_active-1; /* tty1 == index 0 */
(void) ioctl(vcfd, KDFONTOP, &cfo);
Note this just disables the ioctl, garbage collecting the now unused
callbacks is left for -next.
v2: Tetsuo found the old mail, which allowed me to find it on another
archive. Add the link too.
Acked-by: Peilin Ye <[email protected]>
Reported-by: Minh Yuan <[email protected]>
References: https://lists.freedesktop.org/archives/systemd-devel/2016-June/036935.html
References: https://github.com/systemd/systemd/pull/3651
Cc: Greg KH <[email protected]>
Cc: Peilin Ye <[email protected]>
Cc: Tetsuo Handa <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
Image::AutoPtr newBigTiffInstance(BasicIo::AutoPtr io, bool)
{
return Image::AutoPtr(new BigTiffImage(io));
}
| 0 |
[
"CWE-617"
] |
exiv2
|
1647908e00a4df7246d76678e59587e62c690dcd
| 265,065,689,844,316,600,000,000,000,000,000,000,000 | 4 |
fix for crash in bigtiff (issue #208)
|
void __init __weak arch_task_cache_init(void) { }
| 0 |
[
"CWE-416",
"CWE-703"
] |
linux
|
2b7e8665b4ff51c034c55df3cff76518d1a9ee3a
| 315,339,787,981,553,930,000,000,000,000,000,000,000 | 1 |
fork: fix incorrect fput of ->exe_file causing use-after-free
Commit 7c051267931a ("mm, fork: make dup_mmap wait for mmap_sem for
write killable") made it possible to kill a forking task while it is
waiting to acquire its ->mmap_sem for write, in dup_mmap().
However, it was overlooked that this introduced an new error path before
a reference is taken on the mm_struct's ->exe_file. Since the
->exe_file of the new mm_struct was already set to the old ->exe_file by
the memcpy() in dup_mm(), it was possible for the mmput() in the error
path of dup_mm() to drop a reference to ->exe_file which was never
taken.
This caused the struct file to later be freed prematurely.
Fix it by updating mm_init() to NULL out the ->exe_file, in the same
place it clears other things like the list of mmaps.
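A minimal sketch of the fix described above; whether the store needs RCU annotation is a detail this sketch glosses over:
/* In mm_init(), alongside the other zeroed fields: make sure an error
 * path's mmput() never sees dup_mm()'s memcpy'd ->exe_file, since no
 * reference has been taken on it yet. */
mm->exe_file = NULL;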
This bug was found by syzkaller. It can be reproduced using the
following C program:
#define _GNU_SOURCE
#include <pthread.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/wait.h>
#include <unistd.h>
static void *mmap_thread(void *_arg)
{
for (;;) {
mmap(NULL, 0x1000000, PROT_READ,
MAP_POPULATE|MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
}
}
static void *fork_thread(void *_arg)
{
usleep(rand() % 10000);
fork();
}
int main(void)
{
fork();
fork();
fork();
for (;;) {
if (fork() == 0) {
pthread_t t;
pthread_create(&t, NULL, mmap_thread, NULL);
pthread_create(&t, NULL, fork_thread, NULL);
usleep(rand() % 10000);
syscall(__NR_exit_group, 0);
}
wait(NULL);
}
}
No special kernel config options are needed. It usually causes a NULL
pointer dereference in __remove_shared_vm_struct() during exit, or in
dup_mmap() (which is usually inlined into copy_process()) during fork.
Both are due to a vm_area_struct's ->vm_file being used after it's
already been freed.
Google Bug Id: 64772007
Link: http://lkml.kernel.org/r/[email protected]
Fixes: 7c051267931a ("mm, fork: make dup_mmap wait for mmap_sem for write killable")
Signed-off-by: Eric Biggers <[email protected]>
Tested-by: Mark Rutland <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Konstantin Khlebnikov <[email protected]>
Cc: Oleg Nesterov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: <[email protected]> [v4.7+]
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
void cmp_item_decimal::store_value(Item *item)
{
my_decimal *val= item->val_decimal(&value);
/* val may be zero if item is null */
if (val && val != &value)
my_decimal2decimal(val, &value);
m_null_value= item->null_value;
}
| 0 |
[
"CWE-617"
] |
server
|
807945f2eb5fa22e6f233cc17b85a2e141efe2c8
| 248,216,381,770,508,500,000,000,000,000,000,000,000 | 8 |
MDEV-26402: A SEGV in Item_field::used_tables/update_depend_map_for_order...
When doing condition pushdown from HAVING into WHERE,
Item_equal::create_pushable_equalities() calls
item->set_extraction_flag(IMMUTABLE_FL) for constant items.
Then, Item::cleanup_excluding_immutables_processor() checks for this flag
to see if it should call item->cleanup() or leave the item as-is.
The failure happens when a constant item has a non-constant one inside it,
like:
(tbl.col=0 AND impossible_cond)
item->walk(cleanup_excluding_immutables_processor) works in a bottom-up
way so it
1. will call Item_func_eq(tbl.col=0)->cleanup()
2. will not call Item_cond_and->cleanup (as the AND is constant)
This creates an item tree where a fixed Item has an un-fixed Item inside
it which eventually causes an assertion failure.
Fixed by introducing this rule: instead of just calling
item->set_extraction_flag(IMMUTABLE_FL);
we call Item::walk() to set the flag for all sub-items of the item.
|
void ompl::geometric::VFRRT::setup()
{
RRT::setup();
vfdim_ = si_->getStateSpace()->getValueLocations().size();
}
| 0 |
[
"CWE-703"
] |
ompl
|
abb4fadcb4e4fe4c9cf41e5e7706143a66948eb7
| 258,584,341,316,277,340,000,000,000,000,000,000,000 | 5 |
fix memory leak in VFRRT. closes #839
|
static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
{
bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu);
unsigned long mmu_seq;
int r;
fault->gfn = fault->addr >> PAGE_SHIFT;
fault->slot = kvm_vcpu_gfn_to_memslot(vcpu, fault->gfn);
if (page_fault_handle_page_track(vcpu, fault))
return RET_PF_EMULATE;
r = fast_page_fault(vcpu, fault);
if (r != RET_PF_INVALID)
return r;
r = mmu_topup_memory_caches(vcpu, false);
if (r)
return r;
mmu_seq = vcpu->kvm->mmu_notifier_seq;
smp_rmb();
if (kvm_faultin_pfn(vcpu, fault, &r))
return r;
if (handle_abnormal_pfn(vcpu, fault, ACC_ALL, &r))
return r;
r = RET_PF_RETRY;
if (is_tdp_mmu_fault)
read_lock(&vcpu->kvm->mmu_lock);
else
write_lock(&vcpu->kvm->mmu_lock);
if (is_page_fault_stale(vcpu, fault, mmu_seq))
goto out_unlock;
r = make_mmu_pages_available(vcpu);
if (r)
goto out_unlock;
if (is_tdp_mmu_fault)
r = kvm_tdp_mmu_map(vcpu, fault);
else
r = __direct_map(vcpu, fault);
out_unlock:
if (is_tdp_mmu_fault)
read_unlock(&vcpu->kvm->mmu_lock);
else
write_unlock(&vcpu->kvm->mmu_lock);
kvm_release_pfn_clean(fault->pfn);
return r;
}
| 0 |
[
"CWE-476"
] |
linux
|
9f46c187e2e680ecd9de7983e4d081c3391acc76
| 217,074,184,658,069,530,000,000,000,000,000,000,000 | 57 |
KVM: x86/mmu: fix NULL pointer dereference on guest INVPCID
With shadow paging enabled, the INVPCID instruction results in a call
to kvm_mmu_invpcid_gva. If INVPCID is executed with CR0.PG=0, the
invlpg callback is not set and the result is a NULL pointer dereference.
Fix it trivially by checking for mmu->invlpg before every call.
There are other possibilities:
- check for CR0.PG, because KVM (like all Intel processors after P5)
flushes guest TLB on CR0.PG changes so that INVPCID/INVLPG are a
nop with paging disabled
- check for EFER.LMA, because KVM syncs and flushes when switching
MMU contexts outside of 64-bit mode
All of these are tricky, go for the simple solution. This is CVE-2022-1789.
Reported-by: Yongkang Jia <[email protected]>
Cc: [email protected]
Signed-off-by: Paolo Bonzini <[email protected]>
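A minimal sketch of the guard the fix adds, using an illustrative struct; the real patch tests mmu->invlpg inside kvm_mmu_invpcid_gva() before every call.

#include <stddef.h>

struct mmu_like {
    void (*invlpg)(unsigned long gva);  /* unset while CR0.PG = 0 */
};

static void invalidate_gva(struct mmu_like *mmu, unsigned long gva)
{
    if (mmu->invlpg)        /* the whole fix: test the optional callback */
        mmu->invlpg(gva);   /* only dereference when the hook exists */
}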
|
void tfdt_box_del(GF_Box *s)
{
gf_free(s);
}
| 0 |
[
"CWE-787"
] |
gpac
|
77510778516803b7f7402d7423c6d6bef50254c3
| 85,731,231,595,751,140,000,000,000,000,000,000,000 | 4 |
fixed #2255
|
SMB2_ioctl(const unsigned int xid, struct cifs_tcon *tcon, u64 persistent_fid,
u64 volatile_fid, u32 opcode, bool is_fsctl,
char *in_data, u32 indatalen, u32 max_out_data_len,
char **out_data, u32 *plen /* returned data len */)
{
struct smb_rqst rqst;
struct smb2_ioctl_rsp *rsp = NULL;
struct cifs_ses *ses;
struct kvec iov[SMB2_IOCTL_IOV_SIZE];
struct kvec rsp_iov = {NULL, 0};
int resp_buftype = CIFS_NO_BUFFER;
int rc = 0;
int flags = 0;
cifs_dbg(FYI, "SMB2 IOCTL\n");
if (out_data != NULL)
*out_data = NULL;
/* zero out returned data len, in case of error */
if (plen)
*plen = 0;
if (tcon)
ses = tcon->ses;
else
return -EIO;
if (!ses || !(ses->server))
return -EIO;
if (smb3_encryption_required(tcon))
flags |= CIFS_TRANSFORM_REQ;
memset(&rqst, 0, sizeof(struct smb_rqst));
memset(&iov, 0, sizeof(iov));
rqst.rq_iov = iov;
rqst.rq_nvec = SMB2_IOCTL_IOV_SIZE;
rc = SMB2_ioctl_init(tcon, &rqst, persistent_fid, volatile_fid, opcode,
is_fsctl, in_data, indatalen, max_out_data_len);
if (rc)
goto ioctl_exit;
rc = cifs_send_recv(xid, ses, &rqst, &resp_buftype, flags,
&rsp_iov);
rsp = (struct smb2_ioctl_rsp *)rsp_iov.iov_base;
if (rc != 0)
trace_smb3_fsctl_err(xid, persistent_fid, tcon->tid,
ses->Suid, 0, opcode, rc);
if ((rc != 0) && (rc != -EINVAL)) {
cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE);
goto ioctl_exit;
} else if (rc == -EINVAL) {
if ((opcode != FSCTL_SRV_COPYCHUNK_WRITE) &&
(opcode != FSCTL_SRV_COPYCHUNK)) {
cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE);
goto ioctl_exit;
}
}
/* check if caller wants to look at return data or just return rc */
if ((plen == NULL) || (out_data == NULL))
goto ioctl_exit;
*plen = le32_to_cpu(rsp->OutputCount);
/* We check for obvious errors in the output buffer length and offset */
if (*plen == 0)
goto ioctl_exit; /* server returned no data */
else if (*plen > rsp_iov.iov_len || *plen > 0xFF00) {
cifs_dbg(VFS, "srv returned invalid ioctl length: %d\n", *plen);
*plen = 0;
rc = -EIO;
goto ioctl_exit;
}
if (rsp_iov.iov_len - *plen < le32_to_cpu(rsp->OutputOffset)) {
cifs_dbg(VFS, "Malformed ioctl resp: len %d offset %d\n", *plen,
le32_to_cpu(rsp->OutputOffset));
*plen = 0;
rc = -EIO;
goto ioctl_exit;
}
*out_data = kmemdup((char *)rsp + le32_to_cpu(rsp->OutputOffset),
*plen, GFP_KERNEL);
if (*out_data == NULL) {
rc = -ENOMEM;
goto ioctl_exit;
}
ioctl_exit:
SMB2_ioctl_free(&rqst);
free_rsp_buf(resp_buftype, rsp);
return rc;
}
| 0 |
[
"CWE-416",
"CWE-200"
] |
linux
|
6a3eb3360667170988f8a6477f6686242061488a
| 86,487,791,495,872,740,000,000,000,000,000,000,000 | 99 |
cifs: Fix use-after-free in SMB2_write
There is a KASAN use-after-free:
BUG: KASAN: use-after-free in SMB2_write+0x1342/0x1580
Read of size 8 at addr ffff8880b6a8e450 by task ln/4196
Should not release the 'req' because it will be used in the trace.
Fixes: eccb4422cf97 ("smb3: Add ftrace tracepoints for improved SMB3 debugging")
Signed-off-by: ZhangXiaoxu <[email protected]>
Signed-off-by: Steve French <[email protected]>
CC: Stable <[email protected]> 4.18+
Reviewed-by: Pavel Shilovsky <[email protected]>
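A hedged sketch of the general fix for this use-after-free class (names are invented, not the cifs API): capture whatever the tracepoint needs before the request buffer is released.

#include <stdio.h>
#include <stdlib.h>

struct req_like { unsigned long long mid; };

static void trace_write_done(unsigned long long mid)
{
    printf("smb3 write done, mid=%llu\n", mid);
}

static void finish_request(struct req_like *req)
{
    unsigned long long mid = req->mid;  /* copy what the trace needs... */
    free(req);                          /* ...before the buffer is gone */
    trace_write_done(mid);              /* never touches freed memory */
}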
|
void tq_freeze(struct thread_q *tq)
{
tq_freezethaw(tq, true);
}
| 0 |
[
"CWE-20",
"CWE-703"
] |
sgminer
|
910c36089940e81fb85c65b8e63dcd2fac71470c
| 325,135,419,859,798,200,000,000,000,000,000,000,000 | 4 |
stratum: parse_notify(): Don't die on malformed bbversion/prev_hash/nbit/ntime.
Might have introduced a memory leak, don't have time to check. :(
Should the other hex2bin()'s be checked?
Thanks to Mick Ayzenberg <mick.dejavusecurity.com> for finding this.
|
autoar_extractor_signal_error (AutoarExtractor *self)
{
if (self->error != NULL) {
if (self->error->domain == G_IO_ERROR &&
self->error->code == G_IO_ERROR_CANCELLED) {
g_error_free (self->error);
self->error = NULL;
autoar_extractor_signal_cancelled (self);
} else {
autoar_common_g_signal_emit (self, self->in_thread,
autoar_extractor_signals[AR_ERROR], 0,
self->error);
}
}
}
| 0 |
[
"CWE-22"
] |
gnome-autoar
|
adb067e645732fdbe7103516e506d09eb6a54429
| 210,285,819,663,176,340,000,000,000,000,000,000,000 | 15 |
AutoarExtractor: Do not extract files outside the destination dir
Currently, a malicious archive can cause that the files are extracted
outside of the destination dir. This can happen if the archive contains
a file whose parent is a symbolic link, which points outside of the
destination dir. This is potentially a security threat similar to
CVE-2020-11736. Let's skip such problematic files when extracting.
Fixes: https://gitlab.gnome.org/GNOME/gnome-autoar/-/issues/7
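A rough user-space sketch of the containment check the fix performs; gnome-autoar actually works on GFile objects, so the path-based helper below is illustrative only.

#include <limits.h>
#include <stdlib.h>
#include <string.h>

/* Returns 1 when 'parent' (already present on disk, possibly reached via a
 * symlink) resolves to a location under 'dest'. */
static int parent_stays_inside(const char *dest, const char *parent)
{
    char rdest[PATH_MAX], rparent[PATH_MAX];
    size_t n;

    if (!realpath(dest, rdest) || !realpath(parent, rparent))
        return 0;
    n = strlen(rdest);
    return strncmp(rparent, rdest, n) == 0 &&
           (rparent[n] == '/' || rparent[n] == '\0');
}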
|
int evm_inode_setattr(struct dentry *dentry, struct iattr *attr)
{
unsigned int ia_valid = attr->ia_valid;
enum integrity_status evm_status;
if (!(ia_valid & (ATTR_MODE | ATTR_UID | ATTR_GID)))
return 0;
evm_status = evm_verify_current_integrity(dentry);
if ((evm_status == INTEGRITY_PASS) ||
(evm_status == INTEGRITY_NOXATTRS))
return 0;
integrity_audit_msg(AUDIT_INTEGRITY_METADATA, d_backing_inode(dentry),
dentry->d_name.name, "appraise_metadata",
integrity_status_msg[evm_status], -EPERM, 0);
return -EPERM;
}
| 0 |
[
"CWE-241",
"CWE-19"
] |
linux
|
613317bd212c585c20796c10afe5daaa95d4b0a1
| 151,570,008,107,514,180,000,000,000,000,000,000,000 | 16 |
EVM: Use crypto_memneq() for digest comparisons
This patch fixes vulnerability CVE-2016-2085. The problem exists
because the evm_verify_hmac() function includes a use of memcmp().
Unfortunately, this allows timing side channel attacks; specifically
a MAC forgery complexity drop from 2^128 to 2^12. This patch changes
the memcmp() to the cryptographically safe crypto_memneq().
Reported-by: Xiaofei Rex Guo <[email protected]>
Signed-off-by: Ryan Ware <[email protected]>
Cc: [email protected]
Signed-off-by: Mimi Zohar <[email protected]>
Signed-off-by: James Morris <[email protected]>
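A minimal sketch of what a crypto_memneq()-style comparison does differently from memcmp(): it examines every byte, so the running time leaks nothing about where the first mismatch occurs.

#include <stddef.h>

/* Returns nonzero when the buffers differ. */
static int ct_memneq(const void *a, const void *b, size_t n)
{
    const unsigned char *pa = a, *pb = b;
    unsigned char diff = 0;
    size_t i;

    for (i = 0; i < n; i++)
        diff |= pa[i] ^ pb[i];   /* no early exit on mismatch */
    return diff != 0;
}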
|
R_API RBinFile *r_bin_file_find_by_id(RBin *bin, ut32 binfile_id) {
RBinFile *binfile = NULL;
RListIter *iter = NULL;
r_list_foreach (bin->binfiles, iter, binfile) {
if (binfile->id == binfile_id) {
break;
}
binfile = NULL;
}
return binfile;
}
| 0 |
[
"CWE-125"
] |
radare2
|
3fcf41ed96ffa25b38029449520c8d0a198745f3
| 262,223,107,939,649,900,000,000,000,000,000,000,000 | 11 |
Fix #9902 - Fix oobread in RBin.string_scan_range
|
static inline int parse_mask(NM self, const char *str, int flags) {
char *p;
uint32_t v;
struct in6_addr s6;
struct in_addr s;
v = strtoul(str, &p, 0);
if(*p == '\0') {
/* read it as a CIDR value */
if(is_v4(self)) {
if(v < 0 || v > 32) return 0;
v += 96;
} else {
if(v < 0 || v > 128) return 0;
}
self->mask = u128_cidr(v);
} else if(inet_pton(AF_INET6, str, &s6)) {
self->mask = u128_of_s6(&s6);
/* flip cisco style masks */
if(u128_cmp(
u128_lit(0, 0),
u128_and(
u128_lit(1ULL << 63, 1),
u128_xor(u128_lit(0, 1), self->mask)
)
) == 0) {
self->mask = u128_neg(self->mask);
}
self->domain = AF_INET6;
} else if(self->domain == AF_INET && inet_aton(str, &s)) {
v = htonl(s.s_addr);
if(v & 1 && ~v >> 31) /* flip cisco style masks */
v = ~v;
/* since mask is currently all 1s, mask ^ ~m will
* set the low 32. */
self->mask = u128_xor(self->mask, u128_lit(0, ~v));
} else {
return 0;
}
if(!chkmask(self->mask))
return 0;
/* apply mask to neta */
self->neta = u128_and(self->neta, self->mask);
return 1;
}
| 1 |
[] |
netmask
|
29a9c239bd1008363f5b34ffd6c2cef906f3660c
| 269,093,484,129,150,300,000,000,000,000,000,000,000 | 45 |
bump version to 2.4.4
* remove checks for negative unsigned ints, fixes #2
* harden error logging functions, fixes #3
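A small sketch of the cleanup described in the first bullet, with invented names: on an unsigned value the v < 0 test is dead code, so only the upper-bound check remains.

#include <stdint.h>
#include <stdlib.h>

/* Parses a v4 CIDR prefix length. 'v' is unsigned, so 'v < 0' can never
 * be true; the only meaningful bound is the upper one. */
static int parse_cidr_v4(const char *str, uint32_t *out)
{
    char *end;
    unsigned long v = strtoul(str, &end, 0);

    if (*end != '\0' || v > 32)
        return 0;
    *out = (uint32_t)v;
    return 1;
}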
|
template<typename tf, typename tc, typename te>
CImg<floatT> get_elevation3d(CImgList<tf>& primitives, CImgList<tc>& colors, const CImg<te>& elevation) const {
if (!is_sameXY(elevation) || elevation._depth>1 || elevation._spectrum>1)
throw CImgArgumentException(_cimg_instance
"get_elevation3d(): Instance and specified elevation (%u,%u,%u,%u,%p) "
"have incompatible dimensions.",
cimg_instance,
elevation._width,elevation._height,elevation._depth,
elevation._spectrum,elevation._data);
if (is_empty()) return *this;
float m, M = (float)max_min(m);
if (M==m) ++M;
colors.assign();
const unsigned int size_x1 = _width - 1, size_y1 = _height - 1;
for (unsigned int y = 0; y<size_y1; ++y)
for (unsigned int x = 0; x<size_x1; ++x) {
const unsigned char
r = (unsigned char)(((*this)(x,y,0) - m)*255/(M-m)),
g = (unsigned char)(_spectrum>1?((*this)(x,y,1) - m)*255/(M-m):r),
b = (unsigned char)(_spectrum>2?((*this)(x,y,2) - m)*255/(M-m):_spectrum>1?0:r);
CImg<tc>::vector((tc)r,(tc)g,(tc)b).move_to(colors);
}
const typename CImg<te>::_functor2d_int func(elevation);
return elevation3d(primitives,func,0,0,_width - 1.0f,_height - 1.0f,_width,_height);
| 0 |
[
"CWE-125"
] |
CImg
|
10af1e8c1ad2a58a0a3342a856bae63e8f257abb
| 266,508,455,434,780,260,000,000,000,000,000,000,000 | 24 |
Fix other issues in 'CImg<T>::load_bmp()'.
|
static int tg3_wait_macro_done(struct tg3 *tp)
{
int limit = 100;
while (limit--) {
u32 tmp32;
if (!tg3_readphy(tp, MII_TG3_DSP_CONTROL, &tmp32)) {
if ((tmp32 & 0x1000) == 0)
break;
}
}
if (limit < 0)
return -EBUSY;
return 0;
}
| 0 |
[
"CWE-476",
"CWE-119"
] |
linux
|
715230a44310a8cf66fbfb5a46f9a62a9b2de424
| 97,866,134,296,339,960,000,000,000,000,000,000,000 | 17 |
tg3: fix length overflow in VPD firmware parsing
Commit 184b89044fb6e2a74611dafa69b1dce0d98612c6 ("tg3: Use VPD fw version
when present") introduced VPD parsing that contained a potential length
overflow.
Limit the hardware's reported firmware string length (max 255 bytes) to
stay inside the driver's firmware string length (32 bytes). On overflow,
truncate the formatted firmware string instead of potentially overwriting
portions of the tg3 struct.
http://cansecwest.com/slides/2013/PrivateCore%20CSW%202013.pdf
Signed-off-by: Kees Cook <[email protected]>
Reported-by: Oded Horovitz <[email protected]>
Reported-by: Brad Spengler <[email protected]>
Cc: [email protected]
Cc: Matt Carlson <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
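A hedged sketch of the truncation described above, assuming an illustrative 32-byte driver-side buffer and a device-supplied length.

#include <string.h>

#define FW_VER_LEN 32   /* illustrative driver-side buffer size */

/* 'vpd_len' is device-controlled (up to 255); clamp it so the copy can
 * never overrun the fixed buffer, truncating the string instead. */
static void copy_fw_version(char dst[FW_VER_LEN], const char *src, size_t vpd_len)
{
    size_t n = vpd_len < FW_VER_LEN ? vpd_len : FW_VER_LEN - 1;

    memcpy(dst, src, n);
    dst[n] = '\0';
}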
|
static int snd_pcm_hw_params_choose(struct snd_pcm_substream *pcm,
struct snd_pcm_hw_params *params)
{
static const int vars[] = {
SNDRV_PCM_HW_PARAM_ACCESS,
SNDRV_PCM_HW_PARAM_FORMAT,
SNDRV_PCM_HW_PARAM_SUBFORMAT,
SNDRV_PCM_HW_PARAM_CHANNELS,
SNDRV_PCM_HW_PARAM_RATE,
SNDRV_PCM_HW_PARAM_PERIOD_TIME,
SNDRV_PCM_HW_PARAM_BUFFER_SIZE,
SNDRV_PCM_HW_PARAM_TICK_TIME,
-1
};
const int *v;
struct snd_mask old_mask;
struct snd_interval old_interval;
int changed;
for (v = vars; *v != -1; v++) {
/* Keep old parameter to trace. */
if (trace_hw_mask_param_enabled()) {
if (hw_is_mask(*v))
old_mask = *hw_param_mask(params, *v);
}
if (trace_hw_interval_param_enabled()) {
if (hw_is_interval(*v))
old_interval = *hw_param_interval(params, *v);
}
if (*v != SNDRV_PCM_HW_PARAM_BUFFER_SIZE)
changed = snd_pcm_hw_param_first(pcm, params, *v, NULL);
else
changed = snd_pcm_hw_param_last(pcm, params, *v, NULL);
if (changed < 0)
return changed;
if (changed == 0)
continue;
/* Trace the changed parameter. */
if (hw_is_mask(*v)) {
trace_hw_mask_param(pcm, *v, 0, &old_mask,
hw_param_mask(params, *v));
}
if (hw_is_interval(*v)) {
trace_hw_interval_param(pcm, *v, 0, &old_interval,
hw_param_interval(params, *v));
}
}
return 0;
}
| 0 |
[
"CWE-125"
] |
linux
|
92ee3c60ec9fe64404dc035e7c41277d74aa26cb
| 309,339,799,132,213,960,000,000,000,000,000,000,000 | 51 |
ALSA: pcm: Fix races among concurrent hw_params and hw_free calls
Currently we have neither proper check nor protection against the
concurrent calls of PCM hw_params and hw_free ioctls, which may result
in a UAF. Since the existing PCM stream lock can't be used for
protecting the whole ioctl operations, we need a new mutex to protect
those racy calls.
This patch introduced a new mutex, runtime->buffer_mutex, and applies
it to both hw_params and hw_free ioctl code paths. Along with it, the
both functions are slightly modified (the mmap_count check is moved
into the state-check block) for code simplicity.
Reported-by: Hu Jiahui <[email protected]>
Cc: <[email protected]>
Reviewed-by: Jaroslav Kysela <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Takashi Iwai <[email protected]>
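A user-space sketch of the new locking scheme, using a pthread mutex as a stand-in for the kernel's runtime->buffer_mutex; struct and function names are illustrative.

#include <pthread.h>
#include <stdlib.h>

struct runtime_like {
    pthread_mutex_t buffer_mutex;   /* stand-in for runtime->buffer_mutex */
    void *dma_buffer;
};

/* hw_params replaces the buffer; hw_free releases it. Without the mutex,
 * a concurrent free/replace pair can free the same pointer twice. */
static int hw_params_like(struct runtime_like *rt, size_t bytes)
{
    int ok;

    pthread_mutex_lock(&rt->buffer_mutex);
    free(rt->dma_buffer);
    rt->dma_buffer = malloc(bytes);
    ok = (rt->dma_buffer != NULL);
    pthread_mutex_unlock(&rt->buffer_mutex);
    return ok ? 0 : -1;
}

static void hw_free_like(struct runtime_like *rt)
{
    pthread_mutex_lock(&rt->buffer_mutex);
    free(rt->dma_buffer);
    rt->dma_buffer = NULL;
    pthread_mutex_unlock(&rt->buffer_mutex);
}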
|
Item_datetime_literal(THD *thd, MYSQL_TIME *ltime, uint dec_arg):
Item_temporal_literal(thd, ltime, dec_arg)
{
max_length= MAX_DATETIME_WIDTH + (decimals ? decimals + 1 : 0);
fixed= 1;
// See the comment on maybe_null in Item_date_literal
maybe_null= !ltime->month || !ltime->day;
}
| 0 |
[
"CWE-617"
] |
server
|
2e7891080667c59ac80f788eef4d59d447595772
| 128,106,558,223,069,500,000,000,000,000,000,000,000 | 8 |
MDEV-25635 Assertion failure when pushing from HAVING into WHERE of view
This bug could manifest itself after pushing a where condition over a
mergeable derived table / view / CTE DT into a grouping view / derived
table / CTE V whose item list contained set functions with constant
arguments such as MIN(2), SUM(1) etc. In such cases the field references
used in the condition pushed into the view V that correspond to set functions
are wrapped into Item_direct_view_ref wrappers. Due to a wrong implementation
of the virtual method const_item() for the class Item_direct_view_ref the
wrapped set functions with constant arguments could be erroneously taken
for constant items. This could lead to a wrong result set returned by the
main select query in 10.2. In 10.4 where a possibility of pushing condition
from HAVING into WHERE had been added this could cause a crash.
Approved by Sergey Petrunya <[email protected]>
|
GF_Err gf_isom_set_ipod_compatible(GF_ISOFile *the_file, u32 trackNumber)
{
GF_TrackBox *trak;
GF_Err e;
GF_MPEGVisualSampleEntryBox *entry;
e = CanAccessMovie(the_file, GF_ISOM_OPEN_WRITE);
if (e) return e;
trak = gf_isom_get_track_from_file(the_file, trackNumber);
if (!trak || !trak->Media) return GF_BAD_PARAM;
entry = (GF_MPEGVisualSampleEntryBox*)gf_list_get(trak->Media->information->sampleTable->SampleDescription->child_boxes, 0);
if (!entry) return GF_OK;
switch (entry->type) {
case GF_ISOM_BOX_TYPE_AVC1:
case GF_ISOM_BOX_TYPE_AVC2:
case GF_ISOM_BOX_TYPE_AVC3:
case GF_ISOM_BOX_TYPE_AVC4:
case GF_ISOM_BOX_TYPE_SVC1:
case GF_ISOM_BOX_TYPE_MVC1:
case GF_ISOM_BOX_TYPE_HVC1:
case GF_ISOM_BOX_TYPE_HEV1:
case GF_ISOM_BOX_TYPE_HVT1:
break;
default:
return GF_OK;
}
if (!entry->ipod_ext) {
entry->ipod_ext = (GF_UnknownUUIDBox *) gf_isom_box_new_parent(&entry->child_boxes, GF_ISOM_BOX_TYPE_UUID);
if (!entry->ipod_ext) return GF_OUT_OF_MEM;
}
memcpy(entry->ipod_ext->uuid, GF_ISOM_IPOD_EXT, sizeof(u8)*16);
entry->ipod_ext->dataSize = 4;
entry->ipod_ext->data = gf_malloc(sizeof(u8)*4);
if (!entry->ipod_ext->data) return GF_OUT_OF_MEM;
memset(entry->ipod_ext->data, 0, sizeof(u8)*4);
return GF_OK;
}
| 0 |
[
"CWE-476"
] |
gpac
|
ebfa346eff05049718f7b80041093b4c5581c24e
| 47,609,732,661,615,120,000,000,000,000,000,000,000 | 38 |
fixed #1706
|
ctnetlink_update_status(struct nf_conn *ct, const struct nlattr * const cda[])
{
unsigned int status = ntohl(nla_get_be32(cda[CTA_STATUS]));
unsigned long d = ct->status ^ status;
if (d & IPS_SEEN_REPLY && !(status & IPS_SEEN_REPLY))
/* SEEN_REPLY bit can only be set */
return -EBUSY;
if (d & IPS_ASSURED && !(status & IPS_ASSURED))
/* ASSURED bit can only be set */
return -EBUSY;
/* This check is less strict than ctnetlink_change_status()
* because callers often flip IPS_EXPECTED bits when sending
* an NFQA_CT attribute to the kernel. So ignore the
* unchangeable bits but do not error out. Also user programs
* are allowed to clear the bits that they are allowed to change.
*/
__ctnetlink_change_status(ct, status, ~status);
return 0;
}
| 0 |
[
"CWE-120"
] |
linux
|
1cc5ef91d2ff94d2bf2de3b3585423e8a1051cb6
| 149,477,185,349,715,520,000,000,000,000,000,000,000 | 22 |
netfilter: ctnetlink: add a range check for l3/l4 protonum
The indexes to the nf_nat_l[34]protos arrays come from userspace. So
check the tuple's family, e.g. l3num, when creating the conntrack in
order to prevent an OOB memory access during setup. Here is an example
kernel panic on 4.14.180 when userspace passes in an index greater than
NFPROTO_NUMPROTO.
Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
Modules linked in:...
Process poc (pid: 5614, stack limit = 0x00000000a3933121)
CPU: 4 PID: 5614 Comm: poc Tainted: G S W O 4.14.180-g051355490483
Hardware name: Qualcomm Technologies, Inc. SM8150 V2 PM8150 Google Inc. MSM
task: 000000002a3dfffe task.stack: 00000000a3933121
pc : __cfi_check_fail+0x1c/0x24
lr : __cfi_check_fail+0x1c/0x24
...
Call trace:
__cfi_check_fail+0x1c/0x24
name_to_dev_t+0x0/0x468
nfnetlink_parse_nat_setup+0x234/0x258
ctnetlink_parse_nat_setup+0x4c/0x228
ctnetlink_new_conntrack+0x590/0xc40
nfnetlink_rcv_msg+0x31c/0x4d4
netlink_rcv_skb+0x100/0x184
nfnetlink_rcv+0xf4/0x180
netlink_unicast+0x360/0x770
netlink_sendmsg+0x5a0/0x6a4
___sys_sendmsg+0x314/0x46c
SyS_sendmsg+0xb4/0x108
el0_svc_naked+0x34/0x38
This crash no longer happens on 5.4+; however, ctnetlink still
allows creating entries with an unsupported layer 3 protocol number.
Fixes: c1d10adb4a521 ("[NETFILTER]: Add ctnetlink port for nf_conntrack")
Signed-off-by: Will McVicker <[email protected]>
[[email protected]: rebased original patch on top of nf.git]
Signed-off-by: Pablo Neira Ayuso <[email protected]>
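A minimal sketch of the added range check, with an invented table size standing in for the real NFPROTO_NUMPROTO value.

#include <stdint.h>
#include <stddef.h>

#define NFPROTO_NUMPROTO_ 13   /* illustrative table size */

static const void *l3proto_table[NFPROTO_NUMPROTO_];

/* 'l3num' arrives in a netlink attribute from user space; bound it before
 * it is ever used as an index. */
static const void *lookup_l3proto(uint16_t l3num)
{
    if (l3num >= NFPROTO_NUMPROTO_)
        return NULL;            /* reject instead of reading out of bounds */
    return l3proto_table[l3num];
}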
|
PHP_FUNCTION(mkdir)
{
char *dir;
int dir_len;
zval *zcontext = NULL;
long mode = 0777;
zend_bool recursive = 0;
php_stream_context *context;
if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "p|lbr", &dir, &dir_len, &mode, &recursive, &zcontext) == FAILURE) {
RETURN_FALSE;
}
context = php_stream_context_from_zval(zcontext, 0);
RETURN_BOOL(php_stream_mkdir(dir, mode, (recursive ? PHP_STREAM_MKDIR_RECURSIVE : 0) | REPORT_ERRORS, context));
}
| 0 |
[
"CWE-19"
] |
php-src
|
be9b2a95adb504abd5acdc092d770444ad6f6854
| 272,977,691,917,554,400,000,000,000,000,000,000,000 | 17 |
Fixed bug #69418 - more s->p fixes for filenames
|
cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
{
struct task_group *parent = css_tg(parent_css);
struct task_group *tg;
if (!parent) {
/* This is early initialization for the top cgroup */
return &root_task_group.css;
}
tg = sched_create_group(parent);
if (IS_ERR(tg))
return ERR_PTR(-ENOMEM);
return &tg->css;
}
| 0 |
[
"CWE-200"
] |
linux
|
4efbc454ba68def5ef285b26ebfcfdb605b52755
| 43,390,279,912,493,730,000,000,000,000,000,000,000 | 16 |
sched: Fix information leak in sys_sched_getattr()
We're copying the on-stack structure to userspace, but forgot to give
the right number of bytes to copy. This allows the calling process to
obtain up to PAGE_SIZE bytes from the stack (and possibly adjacent
kernel memory).
This fix copies only as much as we actually have on the stack
(attr->size defaults to the size of the struct) and leaves the rest of
the userspace-provided buffer untouched.
Found using kmemcheck + trinity.
Fixes: d50dde5a10f30 ("sched: Add new scheduler syscalls to support an extended scheduling parameters ABI")
Cc: Dario Faggioli <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Ingo Molnar <[email protected]>
Signed-off-by: Vegard Nossum <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
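A sketch of the bounded copy the fix performs; memcpy() stands in for copy_to_user(), and the struct layout is illustrative.

#include <string.h>

struct attr_like { unsigned int size; char payload[44]; };

/* Copy out no more than the kernel actually initialized (kattr->size) and
 * no more than the caller's buffer holds. */
static int copy_attr_out(void *ubuf, size_t usize, const struct attr_like *kattr)
{
    size_t n = kattr->size;

    if (n > sizeof(*kattr))
        n = sizeof(*kattr);     /* never walk past the kernel struct */
    if (n > usize)
        n = usize;              /* never walk past the user buffer */
    memcpy(ubuf, kattr, n);     /* stand-in for copy_to_user() */
    return 0;
}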
|
const struct sockaddr_storage *smbXcli_conn_remote_sockaddr(struct smbXcli_conn *conn)
{
return &conn->remote_ss;
}
| 0 |
[
"CWE-20"
] |
samba
|
a819d2b440aafa3138d95ff6e8b824da885a70e9
| 70,602,610,150,303,300,000,000,000,000,000,000,000 | 4 |
CVE-2015-5296: libcli/smb: make sure we require signing when we demand encryption on a session
BUG: https://bugzilla.samba.org/show_bug.cgi?id=11536
Signed-off-by: Stefan Metzmacher <[email protected]>
Reviewed-by: Jeremy Allison <[email protected]>
|
ip_invoke_with_position(argc, argv, obj, position)
int argc;
VALUE *argv;
VALUE obj;
Tcl_QueuePosition position;
{
struct invoke_queue *ivq;
#ifdef RUBY_USE_NATIVE_THREAD
struct tcltkip *ptr;
#endif
int *alloc_done;
int thr_crit_bup;
volatile VALUE current = rb_thread_current();
volatile VALUE ip_obj = obj;
volatile VALUE result;
volatile VALUE ret;
struct timeval t;
#if TCL_MAJOR_VERSION >= 8
Tcl_Obj **av = (Tcl_Obj **)NULL;
#else /* TCL_MAJOR_VERSION < 8 */
char **av = (char **)NULL;
#endif
if (argc < 1) {
rb_raise(rb_eArgError, "command name missing");
}
#ifdef RUBY_USE_NATIVE_THREAD
ptr = get_ip(ip_obj);
DUMP2("invoke status: ptr->tk_thread_id %p", ptr->tk_thread_id);
DUMP2("invoke status: Tcl_GetCurrentThread %p", Tcl_GetCurrentThread());
#else
DUMP2("status: Tcl_GetCurrentThread %p", Tcl_GetCurrentThread());
#endif
DUMP2("status: eventloopt_thread %"PRIxVALUE, eventloop_thread);
if (
#ifdef RUBY_USE_NATIVE_THREAD
(ptr->tk_thread_id == 0 || ptr->tk_thread_id == Tcl_GetCurrentThread())
&&
#endif
(NIL_P(eventloop_thread) || current == eventloop_thread)
) {
if (NIL_P(eventloop_thread)) {
DUMP2("invoke from thread:%"PRIxVALUE" but no eventloop", current);
} else {
DUMP2("invoke from current eventloop %"PRIxVALUE, current);
}
result = ip_invoke_real(argc, argv, ip_obj);
if (rb_obj_is_kind_of(result, rb_eException)) {
rb_exc_raise(result);
}
return result;
}
DUMP2("invoke from thread %"PRIxVALUE" (NOT current eventloop)", current);
thr_crit_bup = rb_thread_critical;
rb_thread_critical = Qtrue;
/* allocate memory (for arguments) */
av = alloc_invoke_arguments(argc, argv);
/* allocate memory (keep result) */
/* alloc_done = (int*)ALLOC(int); */
alloc_done = RbTk_ALLOC_N(int, 1);
#if 0 /* use Tcl_Preserve/Release */
Tcl_Preserve((ClientData)alloc_done); /* XXXXXXXX */
#endif
*alloc_done = 0;
/* allocate memory (freed by Tcl_ServiceEvent) */
/* ivq = (struct invoke_queue *)Tcl_Alloc(sizeof(struct invoke_queue)); */
ivq = RbTk_ALLOC_N(struct invoke_queue, 1);
#if 0 /* use Tcl_Preserve/Release */
Tcl_Preserve((ClientData)ivq); /* XXXXXXXX */
#endif
/* allocate result obj */
result = rb_ary_new3(1, Qnil);
/* construct event data */
ivq->done = alloc_done;
ivq->argc = argc;
ivq->argv = av;
ivq->interp = ip_obj;
ivq->result = result;
ivq->thread = current;
ivq->safe_level = rb_safe_level();
ivq->ev.proc = invoke_queue_handler;
/* add the handler to Tcl event queue */
DUMP1("add handler");
#ifdef RUBY_USE_NATIVE_THREAD
if (ptr->tk_thread_id) {
/* Tcl_ThreadQueueEvent(ptr->tk_thread_id, &(ivq->ev), position); */
Tcl_ThreadQueueEvent(ptr->tk_thread_id, (Tcl_Event*)ivq, position);
Tcl_ThreadAlert(ptr->tk_thread_id);
} else if (tk_eventloop_thread_id) {
/* Tcl_ThreadQueueEvent(tk_eventloop_thread_id,
&(ivq->ev), position); */
Tcl_ThreadQueueEvent(tk_eventloop_thread_id,
(Tcl_Event*)ivq, position);
Tcl_ThreadAlert(tk_eventloop_thread_id);
} else {
/* Tcl_QueueEvent(&(ivq->ev), position); */
Tcl_QueueEvent((Tcl_Event*)ivq, position);
}
#else
/* Tcl_QueueEvent(&(ivq->ev), position); */
Tcl_QueueEvent((Tcl_Event*)ivq, position);
#endif
rb_thread_critical = thr_crit_bup;
/* wait for the handler to be processed */
t.tv_sec = 0;
t.tv_usec = (long)((EVENT_HANDLER_TIMEOUT)*1000.0);
DUMP2("ivq wait for handler (current thread:%"PRIxVALUE")", current);
while(*alloc_done >= 0) {
/* rb_thread_stop(); */
/* rb_thread_sleep_forever(); */
rb_thread_wait_for(t);
DUMP2("*** ivq wakeup (current thread:%"PRIxVALUE")", current);
DUMP2("*** (eventloop thread:%"PRIxVALUE")", eventloop_thread);
if (NIL_P(eventloop_thread)) {
DUMP1("*** ivq lost eventloop thread");
break;
}
}
DUMP2("back from handler (current thread:%"PRIxVALUE")", current);
/* get result & free allocated memory */
ret = RARRAY_AREF(result, 0);
#if 0 /* use Tcl_EventuallyFree */
Tcl_EventuallyFree((ClientData)alloc_done, TCL_DYNAMIC); /* XXXXXXXX */
#else
#if 0 /* use Tcl_Preserve/Release */
Tcl_Release((ClientData)alloc_done); /* XXXXXXXX */
#else
/* free(alloc_done); */
ckfree((char*)alloc_done);
#endif
#endif
#if 0 /* ivq is freed by Tcl_ServiceEvent */
#if 0 /* use Tcl_EventuallyFree */
Tcl_EventuallyFree((ClientData)ivq, TCL_DYNAMIC); /* XXXXXXXX */
#else
#if 0 /* use Tcl_Preserve/Release */
Tcl_Release(ivq);
#else
ckfree((char*)ivq);
#endif
#endif
#endif
/* free allocated memory */
free_invoke_arguments(argc, av);
/* exception? */
if (rb_obj_is_kind_of(ret, rb_eException)) {
DUMP1("raise exception");
/* rb_exc_raise(ret); */
rb_exc_raise(rb_exc_new3(rb_obj_class(ret),
rb_funcallv(ret, ID_to_s, 0, 0)));
}
DUMP1("exit ip_invoke");
return ret;
}
| 0 |
[] |
tk
|
d098136e3f62a4879a7d7cd34bbd50f482ba3331
| 165,380,260,633,092,320,000,000,000,000,000,000,000 | 173 |
tcltklib.c: use StringValueCStr [ci skip]
* ext/tk/tcltklib.c (set_max_block_time, tcl_protect_core,
ip_init, ip_create_slave_core, get_obj_from_str,
ip_cancel_eval_core, lib_set_system_encoding,
alloc_invoke_arguments, lib_merge_tklist): use StringValueCStr
instead of StringValuePtr for values to be passed to Tcl
interpreter.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@55842 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
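A small C sketch of the distinction the commit relies on (illustrative helper; assumes the buffer is NUL-terminated at buf[len], as Ruby strings are): StringValueCStr-style access rejects embedded NULs instead of silently truncating at the first one.

#include <string.h>

/* A StringValuePtr-style user trusts the trailing terminator and never
 * notices an embedded NUL, so Tcl would see a silently truncated command. */
static const char *cstr_or_null(const char *buf, size_t len)
{
    if (memchr(buf, '\0', len) != NULL)
        return NULL;   /* embedded NUL: refuse, as StringValueCStr raises */
    return buf;        /* safe to hand to a NUL-terminated C API */
}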
|
void preventCommandReplication(client *c) {
c->flags |= CLIENT_PREVENT_REPL_PROP;
}
| 0 |
[
"CWE-770"
] |
redis
|
5674b0057ff2903d43eaff802017eddf37c360f8
| 89,387,347,546,800,370,000,000,000,000,000,000,000 | 3 |
Prevent unauthenticated client from easily consuming lots of memory (CVE-2021-32675)
This change sets a low limit for multibulk and bulk length in the
protocol for unauthenticated connections, so that they can't easily
cause redis to allocate massive amounts of memory by sending just a few
characters on the network.
The new limits are 10 arguments of 16kb each (instead of 1m of 512mb)
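A sketch of the pre-authentication limits described above; the constants mirror the numbers in the message, but the function is illustrative, not Redis's actual parser.

#include <stdbool.h>

#define NOAUTH_MAX_ARGS 10
#define NOAUTH_MAX_BULK (16 * 1024)
#define AUTH_MAX_ARGS   (1024 * 1024)
#define AUTH_MAX_BULK   (512LL * 1024 * 1024)

/* Before AUTH succeeds, cap multibulk argument count and bulk string
 * length hard, so a handful of bytes cannot force huge allocations. */
static bool request_size_ok(bool authenticated, long long nargs, long long bulklen)
{
    long long max_args = authenticated ? AUTH_MAX_ARGS : NOAUTH_MAX_ARGS;
    long long max_bulk = authenticated ? AUTH_MAX_BULK : NOAUTH_MAX_BULK;

    return nargs <= max_args && bulklen <= max_bulk;
}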
|
void RGWPutBucketTags_ObjStore_S3::send_response()
{
if (op_ret)
set_req_state_err(s, op_ret);
dump_errno(s);
end_header(s, this, "application/xml");
dump_start(s);
}
| 0 |
[
"CWE-79"
] |
ceph
|
8f90658c731499722d5f4393c8ad70b971d05f77
| 261,281,775,946,825,400,000,000,000,000,000,000,000 | 8 |
rgw: reject unauthenticated response-header actions
Signed-off-by: Matt Benjamin <[email protected]>
Reviewed-by: Casey Bodley <[email protected]>
(cherry picked from commit d8dd5e513c0c62bbd7d3044d7e2eddcd897bd400)
|
static void crypto_rfc4309_exit_tfm(struct crypto_aead *tfm)
{
struct crypto_rfc4309_ctx *ctx = crypto_aead_ctx(tfm);
crypto_free_aead(ctx->child);
}
| 0 |
[
"CWE-119",
"CWE-787"
] |
linux
|
3b30460c5b0ed762be75a004e924ec3f8711e032
| 75,113,388,487,953,380,000,000,000,000,000,000,000 | 6 |
crypto: ccm - move cbcmac input off the stack
Commit f15f05b0a5de ("crypto: ccm - switch to separate cbcmac driver")
refactored the CCM driver to allow separate implementations of the
underlying MAC to be provided by a platform. However, in doing so, it
moved some data from the linear region to the stack, which violates the
SG constraints when the stack is virtually mapped.
So move idata/odata back to the request ctx struct, which we can
reasonably expect to have been allocated using kmalloc() et al.
Reported-by: Johannes Berg <[email protected]>
Fixes: f15f05b0a5de ("crypto: ccm - switch to separate cbcmac driver")
Signed-off-by: Ard Biesheuvel <[email protected]>
Tested-by: Johannes Berg <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
int r_jwe_add_key_symmetric(jwe_t * jwe, const unsigned char * key, size_t key_len) {
int ret = RHN_OK;
jwa_alg alg;
jwk_t * j_key = NULL;
if (jwe != NULL && key != NULL && key_len) {
if (r_jwk_init(&j_key) == RHN_OK && r_jwk_import_from_symmetric_key(j_key, key, key_len) == RHN_OK) {
if (r_jwks_append_jwk(jwe->jwks_privkey, j_key) != RHN_OK || r_jwks_append_jwk(jwe->jwks_pubkey, j_key) != RHN_OK) {
y_log_message(Y_LOG_LEVEL_ERROR, "r_jwe_add_enc_key_symmetric - Error setting key");
ret = RHN_ERROR;
}
if (jwe->alg == R_JWA_ALG_UNKNOWN && (alg = r_str_to_jwa_alg(r_jwk_get_property_str(j_key, "alg"))) != R_JWA_ALG_NONE) {
r_jwe_set_alg(jwe, alg);
}
} else {
y_log_message(Y_LOG_LEVEL_ERROR, "r_jwe_add_enc_key_symmetric - Error parsing key");
ret = RHN_ERROR;
}
r_jwk_free(j_key);
} else {
ret = RHN_ERROR_PARAM;
}
return ret;
}
| 0 |
[
"CWE-787"
] |
rhonabwy
|
b4c2923a1ba4fabf9b55a89244127e153a3e549b
| 267,274,443,920,288,780,000,000,000,000,000,000,000 | 24 |
Fix buffer overflow on r_jwe_aesgcm_key_unwrap
|
archive_acl_from_text_l(struct archive_acl *acl, const char *text,
int want_type, struct archive_string_conv *sc)
{
struct {
const char *start;
const char *end;
} field[6], name;
const char *s, *st;
int numfields, fields, n, r, sol, ret;
int type, types, tag, permset, id;
size_t len;
char sep;
switch (want_type) {
case ARCHIVE_ENTRY_ACL_TYPE_POSIX1E:
want_type = ARCHIVE_ENTRY_ACL_TYPE_ACCESS;
__LA_FALLTHROUGH;
case ARCHIVE_ENTRY_ACL_TYPE_ACCESS:
case ARCHIVE_ENTRY_ACL_TYPE_DEFAULT:
numfields = 5;
break;
case ARCHIVE_ENTRY_ACL_TYPE_NFS4:
numfields = 6;
break;
default:
return (ARCHIVE_FATAL);
}
ret = ARCHIVE_OK;
types = 0;
while (text != NULL && *text != '\0') {
/*
* Parse the fields out of the next entry,
* advance 'text' to start of next entry.
*/
fields = 0;
do {
const char *start, *end;
next_field(&text, &start, &end, &sep);
if (fields < numfields) {
field[fields].start = start;
field[fields].end = end;
}
++fields;
} while (sep == ':');
/* Set remaining fields to blank. */
for (n = fields; n < numfields; ++n)
field[n].start = field[n].end = NULL;
if (field[0].start != NULL && *(field[0].start) == '#') {
/* Comment, skip entry */
continue;
}
n = 0;
sol = 0;
id = -1;
permset = 0;
name.start = name.end = NULL;
if (want_type != ARCHIVE_ENTRY_ACL_TYPE_NFS4) {
/* POSIX.1e ACLs */
/*
* Default keyword "default:user::rwx"
* if found, we have one more field
*
* We also support old Solaris extension:
* "defaultuser::rwx" is the default ACL corresponding
* to "user::rwx", etc. valid only for first field
*/
s = field[0].start;
len = field[0].end - field[0].start;
if (*s == 'd' && (len == 1 || (len >= 7
&& memcmp((s + 1), "efault", 6) == 0))) {
type = ARCHIVE_ENTRY_ACL_TYPE_DEFAULT;
if (len > 7)
field[0].start += 7;
else
n = 1;
} else
type = want_type;
/* Check for a numeric ID in field n+1 or n+3. */
isint(field[n + 1].start, field[n + 1].end, &id);
/* Field n+3 is optional. */
if (id == -1 && fields > (n + 3))
isint(field[n + 3].start, field[n + 3].end,
&id);
tag = 0;
s = field[n].start;
st = field[n].start + 1;
len = field[n].end - field[n].start;
if (len == 0) {
ret = ARCHIVE_WARN;
continue;
}
switch (*s) {
case 'u':
if (len == 1 || (len == 4
&& memcmp(st, "ser", 3) == 0))
tag = ARCHIVE_ENTRY_ACL_USER_OBJ;
break;
case 'g':
if (len == 1 || (len == 5
&& memcmp(st, "roup", 4) == 0))
tag = ARCHIVE_ENTRY_ACL_GROUP_OBJ;
break;
case 'o':
if (len == 1 || (len == 5
&& memcmp(st, "ther", 4) == 0))
tag = ARCHIVE_ENTRY_ACL_OTHER;
break;
case 'm':
if (len == 1 || (len == 4
&& memcmp(st, "ask", 3) == 0))
tag = ARCHIVE_ENTRY_ACL_MASK;
break;
default:
break;
}
switch (tag) {
case ARCHIVE_ENTRY_ACL_OTHER:
case ARCHIVE_ENTRY_ACL_MASK:
if (fields == (n + 2)
&& field[n + 1].start < field[n + 1].end
&& ismode(field[n + 1].start,
field[n + 1].end, &permset)) {
/* This is Solaris-style "other:rwx" */
sol = 1;
} else if (fields == (n + 3) &&
field[n + 1].start < field[n + 1].end) {
/* Invalid mask or other field */
ret = ARCHIVE_WARN;
continue;
}
break;
case ARCHIVE_ENTRY_ACL_USER_OBJ:
case ARCHIVE_ENTRY_ACL_GROUP_OBJ:
if (id != -1 ||
field[n + 1].start < field[n + 1].end) {
name = field[n + 1];
if (tag == ARCHIVE_ENTRY_ACL_USER_OBJ)
tag = ARCHIVE_ENTRY_ACL_USER;
else
tag = ARCHIVE_ENTRY_ACL_GROUP;
}
break;
default:
/* Invalid tag, skip entry */
ret = ARCHIVE_WARN;
continue;
}
/*
* Without "default:" we expect mode in field 3
* Exception: Solaris other and mask fields
*/
if (permset == 0 && !ismode(field[n + 2 - sol].start,
field[n + 2 - sol].end, &permset)) {
/* Invalid mode, skip entry */
ret = ARCHIVE_WARN;
continue;
}
} else {
/* NFS4 ACLs */
s = field[0].start;
len = field[0].end - field[0].start;
tag = 0;
switch (len) {
case 4:
if (memcmp(s, "user", 4) == 0)
tag = ARCHIVE_ENTRY_ACL_USER;
break;
case 5:
if (memcmp(s, "group", 5) == 0)
tag = ARCHIVE_ENTRY_ACL_GROUP;
break;
case 6:
if (memcmp(s, "owner@", 6) == 0)
tag = ARCHIVE_ENTRY_ACL_USER_OBJ;
else if (memcmp(s, "group@", 6) == 0)
tag = ARCHIVE_ENTRY_ACL_GROUP_OBJ;
break;
case 9:
if (memcmp(s, "everyone@", 9) == 0)
tag = ARCHIVE_ENTRY_ACL_EVERYONE;
break;
default:
break;
}
if (tag == 0) {
/* Invalid tag, skip entry */
ret = ARCHIVE_WARN;
continue;
} else if (tag == ARCHIVE_ENTRY_ACL_USER ||
tag == ARCHIVE_ENTRY_ACL_GROUP) {
n = 1;
name = field[1];
isint(name.start, name.end, &id);
} else
n = 0;
if (!is_nfs4_perms(field[1 + n].start,
field[1 + n].end, &permset)) {
/* Invalid NFSv4 perms, skip entry */
ret = ARCHIVE_WARN;
continue;
}
if (!is_nfs4_flags(field[2 + n].start,
field[2 + n].end, &permset)) {
/* Invalid NFSv4 flags, skip entry */
ret = ARCHIVE_WARN;
continue;
}
s = field[3 + n].start;
len = field[3 + n].end - field[3 + n].start;
type = 0;
if (len == 4) {
if (memcmp(s, "deny", 4) == 0)
type = ARCHIVE_ENTRY_ACL_TYPE_DENY;
} else if (len == 5) {
if (memcmp(s, "allow", 5) == 0)
type = ARCHIVE_ENTRY_ACL_TYPE_ALLOW;
else if (memcmp(s, "audit", 5) == 0)
type = ARCHIVE_ENTRY_ACL_TYPE_AUDIT;
else if (memcmp(s, "alarm", 5) == 0)
type = ARCHIVE_ENTRY_ACL_TYPE_ALARM;
}
if (type == 0) {
/* Invalid entry type, skip entry */
ret = ARCHIVE_WARN;
continue;
}
isint(field[4 + n].start, field[4 + n].end,
&id);
}
/* Add entry to the internal list. */
r = archive_acl_add_entry_len_l(acl, type, permset,
tag, id, name.start, name.end - name.start, sc);
if (r < ARCHIVE_WARN)
return (r);
if (r != ARCHIVE_OK)
ret = ARCHIVE_WARN;
types |= type;
}
/* Reset ACL */
archive_acl_reset(acl, types);
return (ret);
}
| 0 |
[
"CWE-476"
] |
libarchive
|
15bf44fd2c1ad0e3fd87048b3fcc90c4dcff1175
| 194,837,968,984,706,200,000,000,000,000,000,000,000 | 261 |
Skip 0-length ACL fields
Currently, it is possible to create an archive that crashes bsdtar
with a malformed ACL:
Program received signal SIGSEGV, Segmentation fault.
archive_acl_from_text_l (acl=<optimised out>, text=0x7e2e92 "", want_type=<optimised out>, sc=<optimised out>) at libarchive/archive_acl.c:1726
1726 switch (*s) {
(gdb) p n
$1 = 1
(gdb) p field[n]
$2 = {start = 0x0, end = 0x0}
Stop this by checking that the length is not zero before beginning
the switch statement.
I am pretty sure this is the bug mentioned in the qsym paper [1],
and I was able to replicate it with a qsym + AFL + afl-rb setup.
[1] https://www.usenix.org/conference/usenixsecurity18/presentation/yun
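A minimal sketch of the zero-length guard, with an invented field struct; it mirrors the len == 0 check the function above performs before its tag switch.

#include <stddef.h>

struct acl_field { const char *start, *end; };

/* A malformed ACL can produce a field with start == end (or both NULL);
 * check before the tag switch dereferences the first byte. */
static int classify_tag(const struct acl_field *f)
{
    if (f->start == NULL || f->start == f->end)
        return -1;              /* skip the entry, as the fix does */
    switch (f->start[0]) {
    case 'u': return 1;         /* user  */
    case 'g': return 2;         /* group */
    case 'o': return 3;         /* other */
    default:  return 0;
    }
}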
|
static Image *ReadSGIImage(const ImageInfo *image_info,ExceptionInfo *exception)
{
Image
*image;
MagickBooleanType
status;
MagickSizeType
number_pixels;
MemoryInfo
*pixel_info;
register Quantum
*q;
register ssize_t
i,
x;
register unsigned char
*p;
SGIInfo
iris_info;
size_t
bytes_per_pixel,
quantum;
ssize_t
count,
y,
z;
unsigned char
*pixels;
/*
Open image file.
*/
assert(image_info != (const ImageInfo *) NULL);
assert(image_info->signature == MagickCoreSignature);
if (image_info->debug != MagickFalse)
(void) LogMagickEvent(TraceEvent,GetMagickModule(),"%s",
image_info->filename);
assert(exception != (ExceptionInfo *) NULL);
assert(exception->signature == MagickCoreSignature);
image=AcquireImage(image_info,exception);
status=OpenBlob(image_info,image,ReadBinaryBlobMode,exception);
if (status == MagickFalse)
{
image=DestroyImageList(image);
return((Image *) NULL);
}
/*
Read SGI raster header.
*/
iris_info.magic=ReadBlobMSBShort(image);
do
{
/*
Verify SGI identifier.
*/
if (iris_info.magic != 0x01DA)
ThrowReaderException(CorruptImageError,"ImproperImageHeader");
iris_info.storage=(unsigned char) ReadBlobByte(image);
switch (iris_info.storage)
{
case 0x00: image->compression=NoCompression; break;
case 0x01: image->compression=RLECompression; break;
default:
ThrowReaderException(CorruptImageError,"ImproperImageHeader");
}
iris_info.bytes_per_pixel=(unsigned char) ReadBlobByte(image);
if ((iris_info.bytes_per_pixel == 0) || (iris_info.bytes_per_pixel > 2))
ThrowReaderException(CorruptImageError,"ImproperImageHeader");
iris_info.dimension=ReadBlobMSBShort(image);
iris_info.columns=ReadBlobMSBShort(image);
iris_info.rows=ReadBlobMSBShort(image);
iris_info.depth=ReadBlobMSBShort(image);
if ((iris_info.depth == 0) || (iris_info.depth > 4))
ThrowReaderException(CorruptImageError,"ImproperImageHeader");
iris_info.minimum_value=ReadBlobMSBLong(image);
iris_info.maximum_value=ReadBlobMSBLong(image);
iris_info.sans=ReadBlobMSBLong(image);
count=ReadBlob(image,sizeof(iris_info.name),(unsigned char *)
iris_info.name);
if (count != sizeof(iris_info.name))
ThrowReaderException(CorruptImageError,"ImproperImageHeader");
iris_info.name[sizeof(iris_info.name)-1]='\0';
if (*iris_info.name != '\0')
(void) SetImageProperty(image,"label",iris_info.name,exception);
iris_info.pixel_format=ReadBlobMSBLong(image);
if (iris_info.pixel_format != 0)
ThrowReaderException(CorruptImageError,"ImproperImageHeader");
count=ReadBlob(image,sizeof(iris_info.filler),iris_info.filler);
if (count != sizeof(iris_info.filler))
ThrowReaderException(CorruptImageError,"ImproperImageHeader");
image->columns=iris_info.columns;
image->rows=iris_info.rows;
image->depth=(size_t) MagickMin(iris_info.depth,MAGICKCORE_QUANTUM_DEPTH);
if (iris_info.pixel_format == 0)
image->depth=(size_t) MagickMin((size_t) 8*iris_info.bytes_per_pixel,
MAGICKCORE_QUANTUM_DEPTH);
if (iris_info.depth < 3)
{
image->storage_class=PseudoClass;
image->colors=iris_info.bytes_per_pixel > 1 ? 65535 : 256;
}
if ((image_info->ping != MagickFalse) && (image_info->number_scenes != 0))
if (image->scene >= (image_info->scene+image_info->number_scenes-1))
break;
status=SetImageExtent(image,image->columns,image->rows,exception);
if (status == MagickFalse)
return(DestroyImageList(image));
/*
Allocate SGI pixels.
*/
bytes_per_pixel=(size_t) iris_info.bytes_per_pixel;
number_pixels=(MagickSizeType) iris_info.columns*iris_info.rows;
if ((4*bytes_per_pixel*number_pixels) != ((MagickSizeType) (size_t)
(4*bytes_per_pixel*number_pixels)))
ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed");
pixel_info=AcquireVirtualMemory(iris_info.columns,iris_info.rows*4*
bytes_per_pixel*sizeof(*pixels));
if (pixel_info == (MemoryInfo *) NULL)
ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed");
pixels=(unsigned char *) GetVirtualMemoryBlob(pixel_info);
if ((int) iris_info.storage != 0x01)
{
unsigned char
*scanline;
/*
Read standard image format.
*/
scanline=(unsigned char *) AcquireQuantumMemory(iris_info.columns,
bytes_per_pixel*sizeof(*scanline));
if (scanline == (unsigned char *) NULL)
ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed");
for (z=0; z < (ssize_t) iris_info.depth; z++)
{
p=pixels+bytes_per_pixel*z;
for (y=0; y < (ssize_t) iris_info.rows; y++)
{
count=ReadBlob(image,bytes_per_pixel*iris_info.columns,scanline);
if (EOFBlob(image) != MagickFalse)
break;
if (bytes_per_pixel == 2)
for (x=0; x < (ssize_t) iris_info.columns; x++)
{
*p=scanline[2*x];
*(p+1)=scanline[2*x+1];
p+=8;
}
else
for (x=0; x < (ssize_t) iris_info.columns; x++)
{
*p=scanline[x];
p+=4;
}
}
}
scanline=(unsigned char *) RelinquishMagickMemory(scanline);
}
else
{
MemoryInfo
*packet_info;
size_t
*runlength;
ssize_t
offset,
*offsets;
unsigned char
*packets;
unsigned int
data_order;
/*
Read runlength-encoded image format.
*/
offsets=(ssize_t *) AcquireQuantumMemory((size_t) iris_info.rows,
iris_info.depth*sizeof(*offsets));
runlength=(size_t *) AcquireQuantumMemory(iris_info.rows,
iris_info.depth*sizeof(*runlength));
packet_info=AcquireVirtualMemory((size_t) iris_info.columns+10UL,4UL*
sizeof(*packets));
if ((offsets == (ssize_t *) NULL) ||
(runlength == (size_t *) NULL) ||
(packet_info == (MemoryInfo *) NULL))
{
if (offsets == (ssize_t *) NULL)
offsets=(ssize_t *) RelinquishMagickMemory(offsets);
if (runlength == (size_t *) NULL)
runlength=(size_t *) RelinquishMagickMemory(runlength);
if (packet_info == (MemoryInfo *) NULL)
packet_info=RelinquishVirtualMemory(packet_info);
ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed");
}
packets=(unsigned char *) GetVirtualMemoryBlob(packet_info);
for (i=0; i < (ssize_t) (iris_info.rows*iris_info.depth); i++)
offsets[i]=ReadBlobMSBSignedLong(image);
for (i=0; i < (ssize_t) (iris_info.rows*iris_info.depth); i++)
{
runlength[i]=ReadBlobMSBLong(image);
if (runlength[i] > (4*(size_t) iris_info.columns+10))
ThrowReaderException(CorruptImageError,"ImproperImageHeader");
}
/*
Check data order.
*/
offset=0;
data_order=0;
for (y=0; ((y < (ssize_t) iris_info.rows) && (data_order == 0)); y++)
for (z=0; ((z < (ssize_t) iris_info.depth) && (data_order == 0)); z++)
{
if (offsets[y+z*iris_info.rows] < offset)
data_order=1;
offset=offsets[y+z*iris_info.rows];
}
offset=(ssize_t) TellBlob(image);
if (data_order == 1)
{
for (z=0; z < (ssize_t) iris_info.depth; z++)
{
p=pixels;
for (y=0; y < (ssize_t) iris_info.rows; y++)
{
if (offset != offsets[y+z*iris_info.rows])
{
offset=offsets[y+z*iris_info.rows];
offset=(ssize_t) SeekBlob(image,(ssize_t) offset,SEEK_SET);
}
count=ReadBlob(image,(size_t) runlength[y+z*iris_info.rows],
packets);
if (EOFBlob(image) != MagickFalse)
break;
offset+=(ssize_t) runlength[y+z*iris_info.rows];
status=SGIDecode(bytes_per_pixel,(ssize_t)
(runlength[y+z*iris_info.rows]/bytes_per_pixel),packets,
1L*iris_info.columns,p+bytes_per_pixel*z);
if (status == MagickFalse)
ThrowReaderException(CorruptImageError,"ImproperImageHeader");
p+=(iris_info.columns*4*bytes_per_pixel);
}
}
}
else
{
MagickOffsetType
position;
position=TellBlob(image);
p=pixels;
for (y=0; y < (ssize_t) iris_info.rows; y++)
{
for (z=0; z < (ssize_t) iris_info.depth; z++)
{
if (offset != offsets[y+z*iris_info.rows])
{
offset=offsets[y+z*iris_info.rows];
offset=(ssize_t) SeekBlob(image,(ssize_t) offset,SEEK_SET);
}
count=ReadBlob(image,(size_t) runlength[y+z*iris_info.rows],
packets);
if (EOFBlob(image) != MagickFalse)
break;
offset+=(ssize_t) runlength[y+z*iris_info.rows];
status=SGIDecode(bytes_per_pixel,(ssize_t)
(runlength[y+z*iris_info.rows]/bytes_per_pixel),packets,
1L*iris_info.columns,p+bytes_per_pixel*z);
if (status == MagickFalse)
ThrowReaderException(CorruptImageError,"ImproperImageHeader");
}
p+=(iris_info.columns*4*bytes_per_pixel);
}
offset=(ssize_t) SeekBlob(image,position,SEEK_SET);
}
packet_info=RelinquishVirtualMemory(packet_info);
runlength=(size_t *) RelinquishMagickMemory(runlength);
offsets=(ssize_t *) RelinquishMagickMemory(offsets);
}
/*
Initialize image structure.
*/
image->alpha_trait=iris_info.depth == 4 ? BlendPixelTrait :
UndefinedPixelTrait;
image->columns=iris_info.columns;
image->rows=iris_info.rows;
/*
Convert SGI raster image to pixel packets.
*/
if (image->storage_class == DirectClass)
{
/*
Convert SGI image to DirectClass pixel packets.
*/
if (bytes_per_pixel == 2)
{
for (y=0; y < (ssize_t) image->rows; y++)
{
p=pixels+(image->rows-y-1)*8*image->columns;
q=QueueAuthenticPixels(image,0,y,image->columns,1,exception);
if (q == (Quantum *) NULL)
break;
for (x=0; x < (ssize_t) image->columns; x++)
{
SetPixelRed(image,ScaleShortToQuantum((unsigned short)
((*(p+0) << 8) | (*(p+1)))),q);
SetPixelGreen(image,ScaleShortToQuantum((unsigned short)
((*(p+2) << 8) | (*(p+3)))),q);
SetPixelBlue(image,ScaleShortToQuantum((unsigned short)
((*(p+4) << 8) | (*(p+5)))),q);
SetPixelAlpha(image,OpaqueAlpha,q);
if (image->alpha_trait != UndefinedPixelTrait)
SetPixelAlpha(image,ScaleShortToQuantum((unsigned short)
((*(p+6) << 8) | (*(p+7)))),q);
p+=8;
q+=GetPixelChannels(image);
}
if (SyncAuthenticPixels(image,exception) == MagickFalse)
break;
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,LoadImageTag,(MagickOffsetType)
y,image->rows);
if (status == MagickFalse)
break;
}
}
}
else
for (y=0; y < (ssize_t) image->rows; y++)
{
p=pixels+(image->rows-y-1)*4*image->columns;
q=QueueAuthenticPixels(image,0,y,image->columns,1,exception);
if (q == (Quantum *) NULL)
break;
for (x=0; x < (ssize_t) image->columns; x++)
{
SetPixelRed(image,ScaleCharToQuantum(*p),q);
SetPixelGreen(image,ScaleCharToQuantum(*(p+1)),q);
SetPixelBlue(image,ScaleCharToQuantum(*(p+2)),q);
SetPixelAlpha(image,OpaqueAlpha,q);
if (image->alpha_trait != UndefinedPixelTrait)
SetPixelAlpha(image,ScaleCharToQuantum(*(p+3)),q);
p+=4;
q+=GetPixelChannels(image);
}
if (SyncAuthenticPixels(image,exception) == MagickFalse)
break;
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,LoadImageTag,(MagickOffsetType) y,
image->rows);
if (status == MagickFalse)
break;
}
}
}
else
{
/*
Create grayscale map.
*/
if (AcquireImageColormap(image,image->colors,exception) == MagickFalse)
ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed");
/*
Convert SGI image to PseudoClass pixel packets.
*/
if (bytes_per_pixel == 2)
{
for (y=0; y < (ssize_t) image->rows; y++)
{
p=pixels+(image->rows-y-1)*8*image->columns;
q=QueueAuthenticPixels(image,0,y,image->columns,1,exception);
if (q == (Quantum *) NULL)
break;
for (x=0; x < (ssize_t) image->columns; x++)
{
quantum=(*p << 8);
quantum|=(*(p+1));
SetPixelIndex(image,(Quantum) quantum,q);
p+=8;
q+=GetPixelChannels(image);
}
if (SyncAuthenticPixels(image,exception) == MagickFalse)
break;
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,LoadImageTag,(MagickOffsetType)
y,image->rows);
if (status == MagickFalse)
break;
}
}
}
else
for (y=0; y < (ssize_t) image->rows; y++)
{
p=pixels+(image->rows-y-1)*4*image->columns;
q=QueueAuthenticPixels(image,0,y,image->columns,1,exception);
if (q == (Quantum *) NULL)
break;
for (x=0; x < (ssize_t) image->columns; x++)
{
SetPixelIndex(image,*p,q);
p+=4;
q+=GetPixelChannels(image);
}
if (SyncAuthenticPixels(image,exception) == MagickFalse)
break;
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,LoadImageTag,(MagickOffsetType) y,
image->rows);
if (status == MagickFalse)
break;
}
}
(void) SyncImage(image,exception);
}
pixel_info=RelinquishVirtualMemory(pixel_info);
if (EOFBlob(image) != MagickFalse)
{
ThrowFileException(exception,CorruptImageError,"UnexpectedEndOfFile",
image->filename);
break;
}
/*
Proceed to next image.
*/
if (image_info->number_scenes != 0)
if (image->scene >= (image_info->scene+image_info->number_scenes-1))
break;
iris_info.magic=ReadBlobMSBShort(image);
if (iris_info.magic == 0x01DA)
{
/*
Allocate next image structure.
*/
AcquireNextImage(image_info,image,exception);
if (GetNextImageInList(image) == (Image *) NULL)
{
image=DestroyImageList(image);
return((Image *) NULL);
}
image=SyncNextImageInList(image);
status=SetImageProgress(image,LoadImagesTag,TellBlob(image),
GetBlobSize(image));
if (status == MagickFalse)
break;
}
} while (iris_info.magic == 0x01DA);
(void) CloseBlob(image);
return(GetFirstImageInList(image));
}
| 0 |
[
"CWE-125"
] |
ImageMagick
|
8f8959033e4e59418d6506b345829af1f7a71127
| 42,688,279,263,493,930,000,000,000,000,000,000,000 | 464 |
...
|
CImg<T>& _load_raw(std::FILE *const file, const char *const filename,
const unsigned int size_x, const unsigned int size_y,
const unsigned int size_z, const unsigned int size_c,
const bool is_multiplexed, const bool invert_endianness,
const ulongT offset) {
if (!file && !filename)
throw CImgArgumentException(_cimg_instance
"load_raw(): Specified filename is (null).",
cimg_instance);
if (cimg::is_directory(filename))
throw CImgArgumentException(_cimg_instance
"load_raw(): Specified filename '%s' is a directory.",
cimg_instance,filename);
ulongT siz = (ulongT)size_x*size_y*size_z*size_c;
unsigned int
_size_x = size_x,
_size_y = size_y,
_size_z = size_z,
_size_c = size_c;
std::FILE *const nfile = file?file:cimg::fopen(filename,"rb");
if (!siz) { // Retrieve file size
const longT fpos = cimg::ftell(nfile);
if (fpos<0) throw CImgArgumentException(_cimg_instance
"load_raw(): Cannot determine size of input file '%s'.",
cimg_instance,filename?filename:"(FILE*)");
cimg::fseek(nfile,0,SEEK_END);
siz = cimg::ftell(nfile)/sizeof(T);
_size_y = (unsigned int)siz;
_size_x = _size_z = _size_c = 1;
cimg::fseek(nfile,fpos,SEEK_SET);
}
cimg::fseek(nfile,offset,SEEK_SET);
assign(_size_x,_size_y,_size_z,_size_c,0);
if (siz && (!is_multiplexed || size_c==1)) {
cimg::fread(_data,siz,nfile);
if (invert_endianness) cimg::invert_endianness(_data,siz);
} else if (siz) {
CImg<T> buf(1,1,1,_size_c);
cimg_forXYZ(*this,x,y,z) {
cimg::fread(buf._data,_size_c,nfile);
if (invert_endianness) cimg::invert_endianness(buf._data,_size_c);
set_vector_at(buf,x,y,z);
}
}
if (!file) cimg::fclose(nfile);
return *this;
| 0 |
[
"CWE-119",
"CWE-787"
] |
CImg
|
ac8003393569aba51048c9d67e1491559877b1d1
| 145,792,470,679,987,790,000,000,000,000,000,000,000 | 48 |
.
|
static inline u64 perf_cgroup_event_cgrp_time(struct perf_event *event)
{
return 0;
}
| 0 |
[
"CWE-703",
"CWE-189"
] |
linux
|
8176cced706b5e5d15887584150764894e94e02f
| 258,052,789,467,048,340,000,000,000,000,000,000,000 | 4 |
perf: Treat attr.config as u64 in perf_swevent_init()
Trinity discovered that we fail to check all 64 bits of
attr.config passed by user space, resulting to out-of-bounds
access of the perf_swevent_enabled array in
sw_perf_event_destroy().
Introduced in commit b0a873ebb ("perf: Register PMU
implementations").
Signed-off-by: Tommi Rantala <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: [email protected]
Cc: Paul Mackerras <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
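A sketch of the full-width check; the table size is invented, and the point is that the comparison happens on the untruncated 64-bit value.

#include <stdint.h>

#define PERF_COUNT_SW_MAX_ 12   /* illustrative table size */

static int sw_event_enabled[PERF_COUNT_SW_MAX_];

/* Compare the untruncated 64-bit config: narrowing it to int first would
 * let a value such as 0x100000000ULL alias index 0 yet pass the check. */
static int swevent_init(uint64_t config)
{
    if (config >= PERF_COUNT_SW_MAX_)
        return -1;              /* -EINVAL in the real code */
    sw_event_enabled[config]++;
    return 0;
}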
|
int waitForRWData(int fd, bool waitForRead, int seconds, int useconds)
{
int ret;
struct pollfd pfd;
memset(&pfd, 0, sizeof(pfd));
pfd.fd = fd;
if(waitForRead)
pfd.events=POLLIN;
else
pfd.events=POLLOUT;
ret = poll(&pfd, 1, seconds * 1000 + useconds/1000);
if ( ret == -1 )
errno = ETIMEDOUT; // ???
return ret;
}
| 0 |
[
"CWE-399"
] |
pdns
|
881b5b03a590198d03008e4200dd00cc537712f3
| 154,061,024,335,211,130,000,000,000,000,000,000,000 | 19 |
Reject qnames with wirelength > 255; make `chopOff()` handle dots inside labels
|
void OSD::_add_heartbeat_peer(int p)
{
if (p == whoami)
return;
HeartbeatInfo *hi;
map<int,HeartbeatInfo>::iterator i = heartbeat_peers.find(p);
if (i == heartbeat_peers.end()) {
pair<ConnectionRef,ConnectionRef> cons = service.get_con_osd_hb(p, osdmap->get_epoch());
if (!cons.first)
return;
hi = &heartbeat_peers[p];
hi->peer = p;
HeartbeatSession *s = new HeartbeatSession(p);
hi->con_back = cons.first.get();
hi->con_back->set_priv(s->get());
if (cons.second) {
hi->con_front = cons.second.get();
hi->con_front->set_priv(s->get());
dout(10) << "_add_heartbeat_peer: new peer osd." << p
<< " " << hi->con_back->get_peer_addr()
<< " " << hi->con_front->get_peer_addr()
<< dendl;
} else {
hi->con_front.reset(NULL);
dout(10) << "_add_heartbeat_peer: new peer osd." << p
<< " " << hi->con_back->get_peer_addr()
<< dendl;
}
s->put();
} else {
hi = &i->second;
}
hi->epoch = osdmap->get_epoch();
}
| 0 |
[
"CWE-287",
"CWE-284"
] |
ceph
|
5ead97120e07054d80623dada90a5cc764c28468
| 101,246,970,032,854,430,000,000,000,000,000,000,000 | 35 |
auth/cephx: add authorizer challenge
Allow the accepting side of a connection to reject an initial authorizer
with a random challenge. The connecting side then has to respond with an
updated authorizer proving they are able to decrypt the service's challenge
and that the new authorizer was produced for this specific connection
instance.
The accepting side requires this challenge and response unconditionally
if the client side advertises they have the feature bit. Servers wishing
to require this improved level of authentication simply have to require
the appropriate feature.
Signed-off-by: Sage Weil <[email protected]>
(cherry picked from commit f80b848d3f830eb6dba50123e04385173fa4540b)
# Conflicts:
# src/auth/Auth.h
# src/auth/cephx/CephxProtocol.cc
# src/auth/cephx/CephxProtocol.h
# src/auth/none/AuthNoneProtocol.h
# src/msg/Dispatcher.h
# src/msg/async/AsyncConnection.cc
- const_iterator
- ::decode vs decode
- AsyncConnection ctor arg noise
- get_random_bytes(), not cct->random()
|
onig_set_callout_user_data_of_match_param(OnigMatchParam* param, void* user_data)
{
#ifdef USE_CALLOUT
param->callout_user_data = user_data;
return ONIG_NORMAL;
#else
return ONIG_NO_SUPPORT_CONFIG;
#endif
}
| 0 |
[
"CWE-125"
] |
oniguruma
|
d3e402928b6eb3327f8f7d59a9edfa622fec557b
| 251,898,441,219,789,650,000,000,000,000,000,000,000 | 9 |
fix heap-buffer-overflow
|
void kvm_lapic_sync_to_vapic(struct kvm_vcpu *vcpu)
{
u32 data, tpr;
int max_irr, max_isr;
struct kvm_lapic *apic = vcpu->arch.apic;
void *vapic;
apic_sync_pv_eoi_to_guest(vcpu, apic);
if (!test_bit(KVM_APIC_CHECK_VAPIC, &vcpu->arch.apic_attention))
return;
tpr = kvm_apic_get_reg(apic, APIC_TASKPRI) & 0xff;
max_irr = apic_find_highest_irr(apic);
if (max_irr < 0)
max_irr = 0;
max_isr = apic_find_highest_isr(apic);
if (max_isr < 0)
max_isr = 0;
data = (tpr & 0xff) | ((max_isr & 0xf0) << 8) | (max_irr << 24);
vapic = kmap_atomic(vcpu->arch.apic->vapic_page);
*(u32 *)(vapic + offset_in_page(vcpu->arch.apic->vapic_addr)) = data;
kunmap_atomic(vapic);
}
| 1 |
[
"CWE-20"
] |
linux
|
fda4e2e85589191b123d31cdc21fd33ee70f50fd
| 189,376,184,270,113,900,000,000,000,000,000,000,000 | 25 |
KVM: x86: Convert vapic synchronization to _cached functions (CVE-2013-6368)
In kvm_lapic_sync_from_vapic and kvm_lapic_sync_to_vapic there is the
potential to corrupt kernel memory if userspace provides an address that
is at the end of a page. This patches concerts those functions to use
kvm_write_guest_cached and kvm_read_guest_cached. It also checks the
vapic_address specified by userspace during ioctl processing and returns
an error to userspace if the address is not a valid GPA.
This is generally not guest triggerable, because the required write is
done by firmware that runs before the guest. Also, it only affects AMD
processors and oldish Intel that do not have the FlexPriority feature
(unless you disable FlexPriority, of course; then newer processors are
also affected).
Fixes: b93463aa59d6 ('KVM: Accelerated apic support')
Reported-by: Andrew Honig <[email protected]>
Cc: [email protected]
Signed-off-by: Andrew Honig <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
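A sketch of the boundary condition the pre-fix code never checked (macros are illustrative): a 4-byte vapic slot must not straddle a page when written through a direct mapping.

#include <stdint.h>

#define PAGE_SIZE_ 4096u
#define offset_in_page_(a) ((uint32_t)((a) & (PAGE_SIZE_ - 1)))

/* An address at the very end of a page would make the kmap-based
 * 32-bit write spill into the next page. */
static int vapic_addr_ok(uint64_t gpa)
{
    return offset_in_page_(gpa) + sizeof(uint32_t) <= PAGE_SIZE_;
}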
|
static struct inode *ext4_get_journal_inode(struct super_block *sb,
unsigned int journal_inum)
{
struct inode *journal_inode;
/*
* Test for the existence of a valid inode on disk. Bad things
* happen if we iget() an unused inode, as the subsequent iput()
* will try to delete it.
*/
journal_inode = ext4_iget(sb, journal_inum, EXT4_IGET_SPECIAL);
if (IS_ERR(journal_inode)) {
ext4_msg(sb, KERN_ERR, "no journal found");
return NULL;
}
if (!journal_inode->i_nlink) {
make_bad_inode(journal_inode);
iput(journal_inode);
ext4_msg(sb, KERN_ERR, "journal inode is deleted");
return NULL;
}
jbd_debug(2, "Journal inode found at %p: %lld bytes\n",
journal_inode, journal_inode->i_size);
if (!S_ISREG(journal_inode->i_mode)) {
ext4_msg(sb, KERN_ERR, "invalid journal inode");
iput(journal_inode);
return NULL;
}
return journal_inode;
}
| 0 |
[
"CWE-416",
"CWE-401"
] |
linux
|
4ea99936a1630f51fc3a2d61a58ec4a1c4b7d55a
| 313,703,761,472,682,800,000,000,000,000,000,000,000 | 31 |
ext4: add more paranoia checking in ext4_expand_extra_isize handling
It's possible to specify a non-zero s_want_extra_isize via debugging
option, and this can cause bad things(tm) to happen when using a file
system with an inode size of 128 bytes.
Add better checking when the file system is mounted, as well as when
we are actually trying to do the inode expansion.
Link: https://lore.kernel.org/r/[email protected]
Reported-by: [email protected]
Reported-by: [email protected]
Reported-by: [email protected]
Signed-off-by: Theodore Ts'o <[email protected]>
Cc: [email protected]
|
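A hedged sketch of the mount-time bound the message implies: whatever extra isize is requested must fit between the legacy 128-byte inode and the actual on-disk inode size. Only EXT4_GOOD_OLD_INODE_SIZE matches ext4; the helper itself is hypothetical.

#include <stdint.h>

#define EXT4_GOOD_OLD_INODE_SIZE 128  /* ext4's legacy on-disk inode size */

int want_extra_isize_ok(uint16_t want_extra_isize, uint16_t inode_size)
{
    if (inode_size <= EXT4_GOOD_OLD_INODE_SIZE)
        return want_extra_isize == 0;   /* 128-byte inodes have no room */
    return want_extra_isize <= inode_size - EXT4_GOOD_OLD_INODE_SIZE;
}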
__must_hold(&req->ctx->completion_lock)
{
struct io_kiocb *nxt, *link = req->link;
req->link = NULL;
while (link) {
nxt = link->link;
link->link = NULL;
trace_io_uring_fail_link(req, link);
io_cqring_fill_event(link->ctx, link->user_data, -ECANCELED, 0);
io_put_req_deferred(link, 2);
link = nxt;
}
}
| 0 |
[
"CWE-787"
] |
linux
|
d1f82808877bb10d3deee7cf3374a4eb3fb582db
| 29,150,748,885,869,050,000,000,000,000,000,000,000 | 15 |
io_uring: truncate lengths larger than MAX_RW_COUNT on provide buffers
Read and write operations are capped to MAX_RW_COUNT. Some read ops rely on
that limit, and that is not guaranteed by the IORING_OP_PROVIDE_BUFFERS.
Truncate those lengths when doing io_add_buffers, so buffer addresses still
use the uncapped length.
Also, take the chance and change struct io_buffer len member to __u32, so
it matches struct io_provide_buffer len member.
This fixes CVE-2021-3491, also reported as ZDI-CAN-13546.
Fixes: ddf0322db79c ("io_uring: add IORING_OP_PROVIDE_BUFFERS")
Reported-by: Billy Jheng Bing-Jhong (@st424204)
Signed-off-by: Thadeu Lima de Souza Cascardo <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
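A minimal sketch of the truncation described above, assuming the Linux definition of MAX_RW_COUNT (INT_MAX & PAGE_MASK); the struct is a simplified stand-in for io_uring's io_buffer. The address keeps the full caller-supplied stride so subsequent buffers land where the caller expects, while the recorded length is capped.

#include <stdint.h>

#define MAX_RW_COUNT 0x7ffff000u   /* INT_MAX & PAGE_MASK, as in Linux */

struct io_buffer { uint64_t addr; uint32_t len; };

void io_add_buffer(struct io_buffer *buf, uint64_t addr, uint64_t len)
{
    buf->addr = addr;   /* uncapped: address math uses the full length */
    buf->len  = len > MAX_RW_COUNT ? MAX_RW_COUNT : (uint32_t)len;
}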
static void FixSDTPInTRAF(GF_MovieFragmentBox *moof)
{
u32 k;
if (!moof)
return;
for (k = 0; k < gf_list_count(moof->TrackList); k++) {
GF_TrackFragmentBox *traf = gf_list_get(moof->TrackList, k);
if (traf->sdtp) {
GF_TrackFragmentRunBox *trun;
u32 j = 0, sample_index = 0;
if (traf->sdtp->sampleCount == gf_list_count(traf->TrackRuns)) {
GF_LOG(GF_LOG_WARNING, GF_LOG_CONTAINER, ("[iso file] Warning: TRAF box of track id=%u contains a SDTP. Converting to TRUN sample flags.\n", traf->tfhd->trackID));
}
while ((trun = (GF_TrackFragmentRunBox*)gf_list_enum(traf->TrackRuns, &j))) {
u32 i;
trun->flags |= GF_ISOM_TRUN_FLAGS;
for (i=0; i<trun->nb_samples; i++) {
GF_TrunEntry *entry = &trun->samples[i];
const u8 info = traf->sdtp->sample_info[sample_index];
entry->flags |= GF_ISOM_GET_FRAG_DEPEND_FLAGS(info >> 6, info >> 4, info >> 2, info);
sample_index++;
if (sample_index > traf->sdtp->sampleCount) {
GF_LOG(GF_LOG_ERROR, GF_LOG_CONTAINER, ("[iso file] Error: TRAF box of track id=%u contained an inconsistent SDTP.\n", traf->tfhd->trackID));
return;
}
}
}
if (sample_index < traf->sdtp->sampleCount) {
GF_LOG(GF_LOG_ERROR, GF_LOG_CONTAINER, ("[iso file] Error: TRAF box of track id=%u list less samples than SDTP.\n", traf->tfhd->trackID));
}
gf_isom_box_del_parent(&traf->child_boxes, (GF_Box*)traf->sdtp);
traf->sdtp = NULL;
}
}
}
| 0 |
[
"CWE-401"
] |
gpac
|
fe5155cf047252d1c4cb91602048bfa682af0ea7
| 204,338,853,142,673,200,000,000,000,000,000,000,000 | 38 |
fixed #1783 (fuzz)
|
const Publisher* DataWriterImpl::get_publisher() const
{
return publisher_->get_publisher();
}
| 0 |
[
"CWE-284"
] |
Fast-DDS
|
d2aeab37eb4fad4376b68ea4dfbbf285a2926384
| 78,573,602,639,793,590,000,000,000,000,000,000,000 | 4 |
check remote permissions (#1387)
* Refs 5346. Blackbox test
Signed-off-by: Iker Luengo <[email protected]>
* Refs 5346. one-way string compare
Signed-off-by: Iker Luengo <[email protected]>
* Refs 5346. Do not add partition separator on last partition
Signed-off-by: Iker Luengo <[email protected]>
* Refs 5346. Uncrustify
Signed-off-by: Iker Luengo <[email protected]>
* Refs 5346. Uncrustify
Signed-off-by: Iker Luengo <[email protected]>
* Refs 3680. Access control unit testing
It only covers Partition and Topic permissions
Signed-off-by: Iker Luengo <[email protected]>
* Refs #3680. Fix partition check on Permissions plugin.
Signed-off-by: Iker Luengo <[email protected]>
* Refs 3680. Uncrustify
Signed-off-by: Iker Luengo <[email protected]>
* Refs 3680. Fix tests on mac
Signed-off-by: Iker Luengo <[email protected]>
* Refs 3680. Fix windows tests
Signed-off-by: Iker Luengo <[email protected]>
* Refs 3680. Avoid memory leak on test
Signed-off-by: Iker Luengo <[email protected]>
* Refs 3680. Proxy data mocks should not return temporary objects
Signed-off-by: Iker Luengo <[email protected]>
* refs 3680. uncrustify
Signed-off-by: Iker Luengo <[email protected]>
Co-authored-by: Miguel Company <[email protected]>
|
static int raw_cmd_copyout(int cmd, void __user *param,
struct floppy_raw_cmd *ptr)
{
int ret;
while (ptr) {
struct floppy_raw_cmd cmd = *ptr;
cmd.next = NULL;
cmd.kernel_data = NULL;
ret = copy_to_user(param, &cmd, sizeof(cmd));
if (ret)
return -EFAULT;
param += sizeof(struct floppy_raw_cmd);
if ((ptr->flags & FD_RAW_READ) && ptr->buffer_length) {
if (ptr->length >= 0 &&
ptr->length <= ptr->buffer_length) {
long length = ptr->buffer_length - ptr->length;
ret = fd_copyout(ptr->data, ptr->kernel_data,
length);
if (ret)
return ret;
}
}
ptr = ptr->next;
}
return 0;
}
| 0 |
[
"CWE-200",
"CWE-264",
"CWE-754"
] |
linux
|
2145e15e0557a01b9195d1c7199a1b92cb9be81f
| 223,610,657,874,578,700,000,000,000,000,000,000,000 | 28 |
floppy: don't write kernel-only members to FDRAWCMD ioctl output
Do not leak kernel-only floppy_raw_cmd structure members to userspace.
This includes the linked-list pointer and the pointer to the allocated
DMA space.
Signed-off-by: Matthew Daley <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
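The function above already shows the fix in place; the reusable pattern is: copy the kernel struct to a stack temporary, blank every kernel-only pointer, and only then copy the temporary to userspace. A standalone sketch with illustrative names:

#include <stddef.h>

struct raw_cmd {
    int flags;                /* user-visible */
    void *kernel_data;        /* kernel-only: must never leak */
    struct raw_cmd *next;     /* kernel-only: must never leak */
};

void scrub_for_user(struct raw_cmd *out, const struct raw_cmd *in)
{
    *out = *in;               /* copies the user-visible fields too */
    out->kernel_data = NULL;
    out->next = NULL;         /* erase kernel addresses before copyout */
}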
MagickPrivate void CacheComponentTerminus(void)
{
if (cache_semaphore == (SemaphoreInfo *) NULL)
ActivateSemaphoreInfo(&cache_semaphore);
/* no op-- nothing to destroy */
RelinquishSemaphoreInfo(&cache_semaphore);
}
| 0 |
[
"CWE-772"
] |
ImageMagick
|
7a42f63927e7f2e26846b7ed4560e9cb4984af7b
| 24,166,932,899,745,200,000,000,000,000,000,000,000 | 7 |
https://github.com/ImageMagick/ImageMagick/issues/903
|
main (int argc, char *argv[])
{
g_test_init (&argc, &argv, NULL);
g_test_add_func ("/socket-client/happy-eyeballs/slow", test_happy_eyeballs);
g_test_add_func ("/socket-client/happy-eyeballs/cancellation", test_happy_eyeballs_cancel);
return g_test_run ();
}
| 0 |
[
"CWE-754"
] |
glib
|
d553d92d6e9f53cbe5a34166fcb919ba652c6a8e
| 51,749,024,232,509,160,000,000,000,000,000,000,000 | 9 |
gsocketclient: Fix criticals
This ensures the parent GTask is kept alive as long as an enumeration
is running and trying to connect.
Closes #1646
Closes #1649
|
static void i40e_pci_error_resume(struct pci_dev *pdev)
{
struct i40e_pf *pf = pci_get_drvdata(pdev);
dev_dbg(&pdev->dev, "%s\n", __func__);
if (test_bit(__I40E_SUSPENDED, pf->state))
return;
i40e_handle_reset_warning(pf, false);
}
| 0 |
[
"CWE-400",
"CWE-401"
] |
linux
|
27d461333459d282ffa4a2bdb6b215a59d493a8f
| 277,529,256,858,870,000,000,000,000,000,000,000,000 | 10 |
i40e: prevent memory leak in i40e_setup_macvlans
In i40e_setup_macvlans if i40e_setup_channel fails the allocated memory
for ch should be released.
Signed-off-by: Navid Emamdoost <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
|
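The leak described above is the classic allocate-then-fail pattern inside a loop. A minimal sketch, with setup_channel() standing in for i40e_setup_channel(); on success the channel is assumed to be linked into driver state.

#include <stdlib.h>

struct channel { int id; };
static int setup_channel(struct channel *ch) { (void)ch; return -1; }

int setup_macvlans(int n)
{
    for (int i = 0; i < n; i++) {
        struct channel *ch = calloc(1, sizeof(*ch));
        if (!ch)
            return -1;
        if (setup_channel(ch) != 0) {
            free(ch);          /* without this, ch leaks on every failure */
            return -1;
        }
    }
    return 0;
}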
static MngInfo *MngInfoFreeStruct(MngInfo *mng_info)
{
register ssize_t
i;
if (mng_info == (MngInfo *) NULL)
return((MngInfo *) NULL);
for (i=1; i < MNG_MAX_OBJECTS; i++)
MngInfoDiscardObject(mng_info,i);
if (mng_info->global_plte != (png_colorp) NULL)
mng_info->global_plte=(png_colorp)
RelinquishMagickMemory(mng_info->global_plte);
return((MngInfo *) RelinquishMagickMemory(mng_info));
}
| 0 |
[
"CWE-476"
] |
ImageMagick
|
816ecab6c532ae086ff4186b3eaf4aa7092d536f
| 252,149,932,303,677,040,000,000,000,000,000,000,000 | 17 |
https://github.com/ImageMagick/ImageMagick/issues/58
|
static int edge_calc_num_ports(struct usb_serial *serial,
struct usb_serial_endpoints *epds)
{
struct device *dev = &serial->interface->dev;
unsigned char num_ports = serial->type->num_ports;
/* Make sure we have the required endpoints when in download mode. */
if (serial->interface->cur_altsetting->desc.bNumEndpoints > 1) {
if (epds->num_bulk_in < num_ports ||
epds->num_bulk_out < num_ports ||
epds->num_interrupt_in < 1) {
dev_err(dev, "required endpoints missing\n");
return -ENODEV;
}
}
return num_ports;
}
| 0 |
[
"CWE-369"
] |
linux
|
6aeb75e6adfaed16e58780309613a578fe1ee90b
| 318,657,606,695,179,150,000,000,000,000,000,000,000 | 18 |
USB: serial: io_ti: fix div-by-zero in set_termios
Fix a division-by-zero in set_termios when debugging is enabled and a
high-enough speed has been requested so that the divisor value becomes
zero.
Instead of just fixing the offending debug statement, cap the baud rate
at the base as a zero divisor value also appears to crash the firmware.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Cc: stable <[email protected]> # 2.6.12
Reviewed-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Johan Hovold <[email protected]>
|
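A hedged sketch of the capping the message describes: clamp the requested baud rate at the UART base clock so the computed divisor can never reach zero. The function and the fallback value are illustrative, not the io_ti code.

#include <stdint.h>

uint32_t calc_divisor(uint32_t base, uint32_t baud)
{
    if (baud == 0)
        baud = 9600;           /* illustrative fallback */
    if (baud > base)
        baud = base;           /* the cap the commit describes */
    return (base + baud / 2) / baud;   /* rounded, always >= 1 */
}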
xmlBufferShrink(xmlBufferPtr buf, unsigned int len) {
if (buf == NULL) return(-1);
if (len == 0) return(0);
if (len > buf->use) return(-1);
buf->use -= len;
if ((buf->alloc == XML_BUFFER_ALLOC_IMMUTABLE) ||
((buf->alloc == XML_BUFFER_ALLOC_IO) && (buf->contentIO != NULL))) {
/*
* we just move the content pointer, but also make sure
* the perceived buffer size has shrunk accordingly
*/
buf->content += len;
buf->size -= len;
/*
* sometimes though it maybe be better to really shrink
* on IO buffers
*/
if ((buf->alloc == XML_BUFFER_ALLOC_IO) && (buf->contentIO != NULL)) {
size_t start_buf = buf->content - buf->contentIO;
if (start_buf >= buf->size) {
memmove(buf->contentIO, &buf->content[0], buf->use);
buf->content = buf->contentIO;
buf->content[buf->use] = 0;
buf->size += start_buf;
}
}
} else {
memmove(buf->content, &buf->content[len], buf->use);
buf->content[buf->use] = 0;
}
return(len);
}
| 0 |
[
"CWE-190"
] |
libxml2
|
6c283d83eccd940bcde15634ac8c7f100e3caefd
| 95,660,936,943,638,900,000,000,000,000,000,000,000 | 34 |
[CVE-2022-29824] Fix integer overflows in xmlBuf and xmlBuffer
In several places, the code handling string buffers didn't check for
integer overflow or used wrong types for buffer sizes. This could
result in out-of-bounds writes or other memory errors when working on
large, multi-gigabyte buffers.
Thanks to Felix Wilhelm for the report.
|
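A minimal sketch of the overflow-safe growth the fix calls for: do all size arithmetic in size_t and check for wrap before every reallocation. This illustrates the pattern, not the actual xmlBuf code.

#include <stdint.h>
#include <stdlib.h>

int buf_grow(char **content, size_t *size, size_t use, size_t extra)
{
    if (use >= SIZE_MAX - 1 || extra > SIZE_MAX - use - 1)
        return -1;                   /* use + extra + 1 would wrap */
    size_t need = use + extra + 1;
    if (need <= *size)
        return 0;                    /* already large enough */
    if (need > SIZE_MAX / 2)
        return -1;                   /* doubling would wrap */
    size_t newsize = need * 2;       /* amortized doubling, checked */
    char *p = realloc(*content, newsize);
    if (p == NULL)
        return -1;
    *content = p;
    *size = newsize;
    return 0;
}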
static int sctp_v4_skb_iif(const struct sk_buff *skb)
{
return inet_iif(skb);
}
| 0 |
[
"CWE-119",
"CWE-787"
] |
linux
|
8e2d61e0aed2b7c4ecb35844fe07e0b2b762dee4
| 66,075,278,684,288,800,000,000,000,000,000,000,000 | 4 |
sctp: fix race on protocol/netns initialization
Consider sctp module is unloaded and is being requested because an user
is creating a sctp socket.
During initialization, sctp will add the new protocol type and then
initialize pernet subsys:
status = sctp_v4_protosw_init();
if (status)
goto err_protosw_init;
status = sctp_v6_protosw_init();
if (status)
goto err_v6_protosw_init;
status = register_pernet_subsys(&sctp_net_ops);
The problem is that after those calls to sctp_v{4,6}_protosw_init(), it
is possible for userspace to create SCTP sockets like if the module is
already fully loaded. If that happens, one of the possible effects is
that we will have readers for net->sctp.local_addr_list list earlier
than expected and sctp_net_init() does not take precautions while
dealing with that list, leading to a potential panic but not limited to
that, as sctp_sock_init() will copy a bunch of blank/partially
initialized values from net->sctp.
The race happens like this:
CPU 0 | CPU 1
socket() |
__sock_create | socket()
inet_create | __sock_create
list_for_each_entry_rcu( |
answer, &inetsw[sock->type], |
list) { | inet_create
/* no hits */ |
if (unlikely(err)) { |
... |
request_module() |
/* socket creation is blocked |
* the module is fully loaded |
*/ |
sctp_init |
sctp_v4_protosw_init |
inet_register_protosw |
list_add_rcu(&p->list, |
last_perm); |
| list_for_each_entry_rcu(
| answer, &inetsw[sock->type],
sctp_v6_protosw_init | list) {
| /* hit, so assumes protocol
| * is already loaded
| */
| /* socket creation continues
| * before netns is initialized
| */
register_pernet_subsys |
Simply inverting the initialization order between
register_pernet_subsys() and sctp_v4_protosw_init() is not possible
because register_pernet_subsys() will create a control sctp socket, so
the protocol must be already visible by then. Deferring the socket
creation to a work-queue is not good, especially because we lose the
ability to handle its errors.
So, as suggested by Vlad, the fix is to split netns initialization in
two moments: defaults and control socket, so that the defaults are
already loaded by when we register the protocol, while control socket
initialization is kept at the same moment it is today.
Fixes: 4db67e808640 ("sctp: Make the address lists per network namespace")
Signed-off-by: Vlad Yasevich <[email protected]>
Signed-off-by: Marcelo Ricardo Leitner <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
int TABLE::fix_vcol_exprs(THD *thd)
{
for (Field **vf= vfield; vf && *vf; vf++)
if (fix_session_vcol_expr(thd, (*vf)->vcol_info))
return 1;
for (Field **df= default_field; df && *df; df++)
if ((*df)->default_value &&
fix_session_vcol_expr(thd, (*df)->default_value))
return 1;
for (Virtual_column_info **cc= check_constraints; cc && *cc; cc++)
if (fix_session_vcol_expr(thd, (*cc)))
return 1;
return 0;
}
| 1 |
[
"CWE-416"
] |
server
|
c02ebf3510850ba78a106be9974c94c3b97d8585
| 291,222,374,320,680,330,000,000,000,000,000,000,000 | 17 |
MDEV-24176 Preparations
1. moved fix_vcol_exprs() call to open_table()
mysql_alter_table() doesn't do lock_tables() so it cannot win from
fix_vcol_exprs() from there. Tests affected: main.default_session
2. Vanilla cleanups and comments.
|
if(epollfd < 0 && errno == ENOSYS)
# endif
{
DBGPRINTF("imptcp uses epoll_create()\n");
/* reading the docs, the number of epoll events passed to
* epoll_create() seems not to be used at all in kernels. So
* we just provide "a" number, happens to be 10.
*/
epollfd = epoll_create(10);
}
| 0 |
[
"CWE-190"
] |
rsyslog
|
0381a0de64a5a048c3d48b79055bd9848d0c7fc2
| 201,214,233,922,273,140,000,000,000,000,000,000,000 | 10 |
imptcp: fix Segmentation Fault when octet count is too high
|
onig2posix_error_code(int code)
{
static const O2PERR o2p[] = {
{ ONIG_MISMATCH, REG_NOMATCH },
{ ONIG_NO_SUPPORT_CONFIG, REG_EONIG_INTERNAL },
{ ONIGERR_MEMORY, REG_ESPACE },
{ ONIGERR_MATCH_STACK_LIMIT_OVER, REG_EONIG_INTERNAL },
{ ONIGERR_TYPE_BUG, REG_EONIG_INTERNAL },
{ ONIGERR_PARSER_BUG, REG_EONIG_INTERNAL },
{ ONIGERR_STACK_BUG, REG_EONIG_INTERNAL },
{ ONIGERR_UNDEFINED_BYTECODE, REG_EONIG_INTERNAL },
{ ONIGERR_UNEXPECTED_BYTECODE, REG_EONIG_INTERNAL },
{ ONIGERR_DEFAULT_ENCODING_IS_NOT_SETTED, REG_EONIG_BADARG },
{ ONIGERR_SPECIFIED_ENCODING_CANT_CONVERT_TO_WIDE_CHAR, REG_EONIG_BADARG },
{ ONIGERR_INVALID_ARGUMENT, REG_EONIG_BADARG },
{ ONIGERR_END_PATTERN_AT_LEFT_BRACE, REG_EBRACE },
{ ONIGERR_END_PATTERN_AT_LEFT_BRACKET, REG_EBRACK },
{ ONIGERR_EMPTY_CHAR_CLASS, REG_ECTYPE },
{ ONIGERR_PREMATURE_END_OF_CHAR_CLASS, REG_ECTYPE },
{ ONIGERR_END_PATTERN_AT_ESCAPE, REG_EESCAPE },
{ ONIGERR_END_PATTERN_AT_META, REG_EESCAPE },
{ ONIGERR_END_PATTERN_AT_CONTROL, REG_EESCAPE },
{ ONIGERR_META_CODE_SYNTAX, REG_BADPAT },
{ ONIGERR_CONTROL_CODE_SYNTAX, REG_BADPAT },
{ ONIGERR_CHAR_CLASS_VALUE_AT_END_OF_RANGE, REG_ECTYPE },
{ ONIGERR_CHAR_CLASS_VALUE_AT_START_OF_RANGE, REG_ECTYPE },
{ ONIGERR_UNMATCHED_RANGE_SPECIFIER_IN_CHAR_CLASS, REG_ECTYPE },
{ ONIGERR_TARGET_OF_REPEAT_OPERATOR_NOT_SPECIFIED, REG_BADRPT },
{ ONIGERR_TARGET_OF_REPEAT_OPERATOR_INVALID, REG_BADRPT },
{ ONIGERR_NESTED_REPEAT_OPERATOR, REG_BADRPT },
{ ONIGERR_UNMATCHED_CLOSE_PARENTHESIS, REG_EPAREN },
{ ONIGERR_END_PATTERN_WITH_UNMATCHED_PARENTHESIS, REG_EPAREN },
{ ONIGERR_END_PATTERN_IN_GROUP, REG_BADPAT },
{ ONIGERR_UNDEFINED_GROUP_OPTION, REG_BADPAT },
{ ONIGERR_INVALID_POSIX_BRACKET_TYPE, REG_BADPAT },
{ ONIGERR_INVALID_LOOK_BEHIND_PATTERN, REG_BADPAT },
{ ONIGERR_INVALID_REPEAT_RANGE_PATTERN, REG_BADPAT },
{ ONIGERR_TOO_BIG_NUMBER, REG_BADPAT },
{ ONIGERR_TOO_BIG_NUMBER_FOR_REPEAT_RANGE, REG_BADBR },
{ ONIGERR_UPPER_SMALLER_THAN_LOWER_IN_REPEAT_RANGE, REG_BADBR },
{ ONIGERR_EMPTY_RANGE_IN_CHAR_CLASS, REG_ECTYPE },
{ ONIGERR_MISMATCH_CODE_LENGTH_IN_CLASS_RANGE, REG_ECTYPE },
{ ONIGERR_TOO_MANY_MULTI_BYTE_RANGES, REG_ECTYPE },
{ ONIGERR_TOO_SHORT_MULTI_BYTE_STRING, REG_BADPAT },
{ ONIGERR_TOO_BIG_BACKREF_NUMBER, REG_ESUBREG },
{ ONIGERR_INVALID_BACKREF, REG_ESUBREG },
{ ONIGERR_NUMBERED_BACKREF_OR_CALL_NOT_ALLOWED, REG_BADPAT },
{ ONIGERR_TOO_BIG_WIDE_CHAR_VALUE, REG_EONIG_BADWC },
{ ONIGERR_TOO_LONG_WIDE_CHAR_VALUE, REG_EONIG_BADWC },
{ ONIGERR_INVALID_CODE_POINT_VALUE, REG_EONIG_BADWC },
{ ONIGERR_EMPTY_GROUP_NAME, REG_BADPAT },
{ ONIGERR_INVALID_GROUP_NAME, REG_BADPAT },
{ ONIGERR_INVALID_CHAR_IN_GROUP_NAME, REG_BADPAT },
{ ONIGERR_UNDEFINED_NAME_REFERENCE, REG_BADPAT },
{ ONIGERR_UNDEFINED_GROUP_REFERENCE, REG_BADPAT },
{ ONIGERR_MULTIPLEX_DEFINED_NAME, REG_BADPAT },
{ ONIGERR_MULTIPLEX_DEFINITION_NAME_CALL, REG_BADPAT },
{ ONIGERR_NEVER_ENDING_RECURSION, REG_BADPAT },
{ ONIGERR_GROUP_NUMBER_OVER_FOR_CAPTURE_HISTORY, REG_BADPAT },
{ ONIGERR_INVALID_CHAR_PROPERTY_NAME, REG_BADPAT },
{ ONIGERR_NOT_SUPPORTED_ENCODING_COMBINATION, REG_EONIG_BADARG },
{ ONIGERR_OVER_THREAD_PASS_LIMIT_COUNT, REG_EONIG_THREAD }
};
int i;
if (code >= 0) return 0;
for (i = 0; i < (int )(sizeof(o2p) / sizeof(o2p[0])); i++) {
if (code == o2p[i].onig_err)
return o2p[i].posix_err;
}
return REG_EONIG_INTERNAL; /* but, unknown error code */
}
| 0 |
[
"CWE-125"
] |
oniguruma
|
65a9b1aa03c9bc2dc01b074295b9603232cb3b78
| 180,305,197,950,815,900,000,000,000,000,000,000,000 | 76 |
onig-5.9.2
|
compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
{
unsigned int max_util, util_cfs, cpu_util, cpu_cap;
unsigned long sum_util, energy = 0;
struct task_struct *tsk;
int cpu;
for (; pd; pd = pd->next) {
struct cpumask *pd_mask = perf_domain_span(pd);
/*
* The energy model mandates all the CPUs of a performance
* domain have the same capacity.
*/
cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
max_util = sum_util = 0;
/*
* The capacity state of CPUs of the current rd can be driven by
* CPUs of another rd if they belong to the same performance
* domain. So, account for the utilization of these CPUs too
* by masking pd with cpu_online_mask instead of the rd span.
*
* If an entire performance domain is outside of the current rd,
* it will not appear in its pd list and will not be accounted
* by compute_energy().
*/
for_each_cpu_and(cpu, pd_mask, cpu_online_mask) {
util_cfs = cpu_util_next(cpu, p, dst_cpu);
/*
* Busy time computation: utilization clamping is not
* required since the ratio (sum_util / cpu_capacity)
* is already enough to scale the EM reported power
* consumption at the (eventually clamped) cpu_capacity.
*/
sum_util += schedutil_cpu_util(cpu, util_cfs, cpu_cap,
ENERGY_UTIL, NULL);
/*
* Performance domain frequency: utilization clamping
* must be considered since it affects the selection
* of the performance domain frequency.
* NOTE: in case RT tasks are running, by default the
* FREQUENCY_UTIL's utilization can be max OPP.
*/
tsk = cpu == dst_cpu ? p : NULL;
cpu_util = schedutil_cpu_util(cpu, util_cfs, cpu_cap,
FREQUENCY_UTIL, tsk);
max_util = max(max_util, cpu_util);
}
energy += em_pd_energy(pd->em_pd, max_util, sum_util);
}
return energy;
}
| 0 |
[
"CWE-400",
"CWE-703"
] |
linux
|
de53fd7aedb100f03e5d2231cfce0e4993282425
| 7,381,471,348,294,076,000,000,000,000,000,000,000 | 57 |
sched/fair: Fix low cpu usage with high throttling by removing expiration of cpu-local slices
It has been observed that highly-threaded, non-cpu-bound applications
running under cpu.cfs_quota_us constraints can hit a high percentage of
periods throttled while simultaneously not consuming the allocated
amount of quota. This use case is typical of user-interactive non-cpu
bound applications, such as those running in kubernetes or mesos when
run on multiple cpu cores.
This has been root caused to cpu-local run queue being allocated per cpu
bandwidth slices, and then not fully using that slice within the period.
At which point the slice and quota expires. This expiration of unused
slice results in applications not being able to utilize the quota for
which they are allocated.
The non-expiration of per-cpu slices was recently fixed by
'commit 512ac999d275 ("sched/fair: Fix bandwidth timer clock drift
condition")'. Prior to that it appears that this had been broken since
at least 'commit 51f2176d74ac ("sched/fair: Fix unlocked reads of some
cfs_b->quota/period")' which was introduced in v3.16-rc1 in 2014. That
added the following conditional which resulted in slices never being
expired.
if (cfs_rq->runtime_expires != cfs_b->runtime_expires) {
/* extend local deadline, drift is bounded above by 2 ticks */
cfs_rq->runtime_expires += TICK_NSEC;
Because this was broken for nearly 5 years, and has recently been fixed
and is now being noticed by many users running kubernetes
(https://github.com/kubernetes/kubernetes/issues/67577) it is my opinion
that the mechanisms around expiring runtime should be removed
altogether.
This allows quota already allocated to per-cpu run-queues to live longer
than the period boundary. This allows threads on runqueues that do not
use much CPU to continue to use their remaining slice over a longer
period of time than cpu.cfs_period_us. However, this helps prevent the
above condition of hitting throttling while also not fully utilizing
your cpu quota.
This theoretically allows a machine to use slightly more than its
allotted quota in some periods. This overflow would be bounded by the
remaining quota left on each per-cpu runqueue. This is typically no
more than min_cfs_rq_runtime=1ms per cpu. For CPU bound tasks this will
change nothing, as they should theoretically fully utilize all of their
quota in each period. For user-interactive tasks as described above this
provides a much better user/application experience as their cpu
utilization will more closely match the amount they requested when they
hit throttling. This means that cpu limits no longer strictly apply per
period for non-cpu bound applications, but that they are still accurate
over longer timeframes.
This greatly improves performance of high-thread-count, non-cpu bound
applications with low cfs_quota_us allocation on high-core-count
machines. In the case of an artificial testcase (10ms/100ms of quota on
80 CPU machine), this commit resulted in almost 30x performance
improvement, while still maintaining correct cpu quota restrictions.
That testcase is available at https://github.com/indeedeng/fibtest.
Fixes: 512ac999d275 ("sched/fair: Fix bandwidth timer clock drift condition")
Signed-off-by: Dave Chiluk <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Phil Auld <[email protected]>
Reviewed-by: Ben Segall <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: John Hammond <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Kyle Anderson <[email protected]>
Cc: Gabriel Munos <[email protected]>
Cc: Peter Oskolkov <[email protected]>
Cc: Cong Wang <[email protected]>
Cc: Brendan Gregg <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
|
f_test_autochdir(typval_T *argvars UNUSED, typval_T *rettv UNUSED)
{
#if defined(FEAT_AUTOCHDIR)
test_autochdir = TRUE;
#endif
}
| 0 |
[
"CWE-78"
] |
vim
|
8c62a08faf89663e5633dc5036cd8695c80f1075
| 38,632,903,561,891,030,000,000,000,000,000,000,000 | 6 |
patch 8.1.0881: can execute shell commands in rvim through interfaces
Problem: Can execute shell commands in rvim through interfaces.
Solution: Disable using interfaces in restricted mode. Allow for writing
file with writefile(), histadd() and a few others.
|
ecma_bigint_add_sub (ecma_value_t left_value, /**< left BigInt value */
ecma_value_t right_value, /**< right BigInt value */
bool is_add) /**< true if add operation should be performed */
{
JERRY_ASSERT (ecma_is_value_bigint (left_value) && ecma_is_value_bigint (right_value));
if (right_value == ECMA_BIGINT_ZERO)
{
return ecma_copy_value (left_value);
}
ecma_extended_primitive_t *right_p = ecma_get_extended_primitive_from_value (right_value);
if (left_value == ECMA_BIGINT_ZERO)
{
if (!is_add)
{
return ecma_bigint_negate (right_p);
}
ecma_ref_extended_primitive (right_p);
return right_value;
}
ecma_extended_primitive_t *left_p = ecma_get_extended_primitive_from_value (left_value);
uint32_t sign = is_add ? 0 : ECMA_BIGINT_SIGN;
if (((left_p->u.bigint_sign_and_size ^ right_p->u.bigint_sign_and_size) & ECMA_BIGINT_SIGN) == sign)
{
ecma_extended_primitive_t *result_p = ecma_big_uint_add (left_p, right_p);
if (JERRY_UNLIKELY (result_p == NULL))
{
return ecma_bigint_raise_memory_error ();
}
result_p->u.bigint_sign_and_size |= left_p->u.bigint_sign_and_size & ECMA_BIGINT_SIGN;
return ecma_make_extended_primitive_value (result_p, ECMA_TYPE_BIGINT);
}
int compare_result = ecma_big_uint_compare (left_p, right_p);
ecma_extended_primitive_t *result_p;
if (compare_result == 0)
{
return ECMA_BIGINT_ZERO;
}
if (compare_result > 0)
{
sign = left_p->u.bigint_sign_and_size & ECMA_BIGINT_SIGN;
result_p = ecma_big_uint_sub (left_p, right_p);
}
else
{
sign = right_p->u.bigint_sign_and_size & ECMA_BIGINT_SIGN;
if (!is_add)
{
sign ^= ECMA_BIGINT_SIGN;
}
result_p = ecma_big_uint_sub (right_p, left_p);
}
if (JERRY_UNLIKELY (result_p == NULL))
{
return ecma_bigint_raise_memory_error ();
}
result_p->u.bigint_sign_and_size |= sign;
return ecma_make_extended_primitive_value (result_p, ECMA_TYPE_BIGINT);
} /* ecma_bigint_add_sub */
| 0 |
[
"CWE-416"
] |
jerryscript
|
3bcd48f72d4af01d1304b754ef19fe1a02c96049
| 236,709,778,993,526,300,000,000,000,000,000,000,000 | 73 |
Improve parse_identifier (#4691)
Ascii string length is no longer computed during string allocation.
JerryScript-DCO-1.0-Signed-off-by: Daniel Batiz [email protected]
|
ConnectClientToTcpAddrWithTimeout(unsigned int host, int port, unsigned int timeout)
{
rfbSocket sock;
struct sockaddr_in addr;
int one = 1;
addr.sin_family = AF_INET;
addr.sin_port = htons(port);
addr.sin_addr.s_addr = host;
sock = socket(AF_INET, SOCK_STREAM, 0);
if (sock == RFB_INVALID_SOCKET) {
#ifdef WIN32
errno=WSAGetLastError();
#endif
rfbClientErr("ConnectToTcpAddr: socket (%s)\n",strerror(errno));
return RFB_INVALID_SOCKET;
}
if (!SetNonBlocking(sock))
return FALSE;
if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
#ifdef WIN32
errno=WSAGetLastError();
#endif
if (!((errno == EWOULDBLOCK || errno == EINPROGRESS) && WaitForConnected(sock, timeout))) {
rfbClientErr("ConnectToTcpAddr: connect\n");
rfbCloseSocket(sock);
return RFB_INVALID_SOCKET;
}
}
if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
(char *)&one, sizeof(one)) < 0) {
rfbClientErr("ConnectToTcpAddr: setsockopt\n");
rfbCloseSocket(sock);
return RFB_INVALID_SOCKET;
}
return sock;
}
| 0 |
[
"CWE-703",
"CWE-835"
] |
libvncserver
|
57433015f856cc12753378254ce4f1c78f5d9c7b
| 205,873,676,897,792,940,000,000,000,000,000,000,000 | 42 |
libvncclient: handle half-open TCP connections
When a connection is not reset properly at the TCP level (e.g. sudden
power loss or process crash) the TCP connection becomes half-open and
read() always returns -1 with errno = EAGAIN while select() always
returns 0. This leads to an infinite loop and can be fixed by closing
the connection after a certain number of retries (based on a timeout)
has been exceeded.
|
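A minimal sketch of the bounded-retry idea from the message: keep tolerating EAGAIN only until a deadline passes, then declare the connection dead instead of looping forever. The name and backoff interval are illustrative; the fd is assumed non-blocking, as in libvncclient after SetNonBlocking().

#include <errno.h>
#include <time.h>
#include <unistd.h>

ssize_t read_with_deadline(int fd, void *buf, size_t n, int timeout_s)
{
    time_t deadline = time(NULL) + timeout_s;
    for (;;) {
        ssize_t r = read(fd, buf, n);
        if (r >= 0)
            return r;                 /* data, or 0 for clean EOF */
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;                /* hard error */
        if (time(NULL) >= deadline)
            return -1;                /* half-open peer: give up */
        usleep(100 * 1000);           /* brief backoff between retries */
    }
}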
parse_device(dev_t *pdev, struct archive *a, char *val)
{
#define MAX_PACK_ARGS 3
unsigned long numbers[MAX_PACK_ARGS];
char *p, *dev;
int argc;
pack_t *pack;
dev_t result;
const char *error = NULL;
memset(pdev, 0, sizeof(*pdev));
if ((dev = strchr(val, ',')) != NULL) {
/*
* Device's major/minor are given in a specified format.
* Decode and pack it accordingly.
*/
*dev++ = '\0';
if ((pack = pack_find(val)) == NULL) {
archive_set_error(a, ARCHIVE_ERRNO_FILE_FORMAT,
"Unknown format `%s'", val);
return ARCHIVE_WARN;
}
argc = 0;
while ((p = la_strsep(&dev, ",")) != NULL) {
if (*p == '\0') {
archive_set_error(a, ARCHIVE_ERRNO_FILE_FORMAT,
"Missing number");
return ARCHIVE_WARN;
}
numbers[argc++] = (unsigned long)mtree_atol(&p);
if (argc > MAX_PACK_ARGS) {
archive_set_error(a, ARCHIVE_ERRNO_FILE_FORMAT,
"Too many arguments");
return ARCHIVE_WARN;
}
}
if (argc < 2) {
archive_set_error(a, ARCHIVE_ERRNO_FILE_FORMAT,
"Not enough arguments");
return ARCHIVE_WARN;
}
result = (*pack)(argc, numbers, &error);
if (error != NULL) {
archive_set_error(a, ARCHIVE_ERRNO_FILE_FORMAT,
"%s", error);
return ARCHIVE_WARN;
}
} else {
/* file system raw value. */
result = (dev_t)mtree_atol(&val);
}
*pdev = result;
return ARCHIVE_OK;
#undef MAX_PACK_ARGS
}
| 1 |
[
"CWE-476",
"CWE-119"
] |
libarchive
|
a550daeecf6bc689ade371349892ea17b5b97c77
| 338,341,661,334,186,700,000,000,000,000,000,000,000 | 55 |
Fix libarchive/archive_read_support_format_mtree.c:1388:11: error: array subscript is above array bounds
|
MemoryRegion *vga_init_io(VGACommonState *s, Object *obj,
const MemoryRegionPortio **vga_ports,
const MemoryRegionPortio **vbe_ports)
{
MemoryRegion *vga_mem;
*vga_ports = vga_portio_list;
*vbe_ports = vbe_portio_list;
vga_mem = g_malloc(sizeof(*vga_mem));
memory_region_init_io(vga_mem, obj, &vga_mem_ops, s,
"vga-lowmem", 0x20000);
memory_region_set_flush_coalesced(vga_mem);
return vga_mem;
}
| 0 |
[
"CWE-200"
] |
qemu
|
c1b886c45dc70f247300f549dce9833f3fa2def5
| 241,316,143,910,334,340,000,000,000,000,000,000,000 | 16 |
vbe: rework sanity checks
Plug a bunch of holes in the bochs dispi interface parameter checking.
Add a function doing verification on all registers. Call that
unconditionally on every register write. That way we should catch
everything, even changing one register affecting the valid range of
another register.
Some of the holes have been added by commit
e9c6149f6ae6873f14a12eea554925b6aa4c4dec. Before that commit the
maximum possible framebuffer (VBE_DISPI_MAX_XRES * VBE_DISPI_MAX_YRES *
32 bpp) has been smaller than the qemu vga memory (8MB) and the checking
for VBE_DISPI_MAX_XRES + VBE_DISPI_MAX_YRES + VBE_DISPI_MAX_BPP was ok.
Some of the holes have been there forever, such as
VBE_DISPI_INDEX_X_OFFSET and VBE_DISPI_INDEX_Y_OFFSET register writes
lacking any verification.
Security impact:
(1) Guest can make the ui (gtk/vnc/...) use memory ranges outside the vga
frame buffer as source -> host memory leak. Memory isn't leaked to
the guest but to the vnc client though.
(2) Qemu will segfault in case the memory range happens to include
unmapped areas -> Guest can DoS itself.
The guest can not modify host memory, so I don't think this can be used
by the guest to escape.
CVE-2014-3615
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Gerd Hoffmann <[email protected]>
Reviewed-by: Laszlo Ersek <[email protected]>
|
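A hedged sketch of the validate-everything-on-every-write approach the message describes: recompute and clamp the whole geometry so scanline, height, and offsets always stay inside vram. Field names are illustrative and 32 bpp is assumed; this is not the QEMU code, which centralizes the checks in a function called on every register write.

#include <stdint.h>

struct vbe { uint32_t xres, yres, x_off, y_off, vram_size; };

void vbe_fixup_regs(struct vbe *s)
{
    if (s->xres == 0 || s->xres > 16000 || s->yres == 0) {
        s->xres = s->yres = 0;                 /* nonsense geometry */
        return;
    }
    uint32_t line = s->xres * 4;               /* bytes per scanline */
    if (s->vram_size < line) {
        s->xres = s->yres = 0;
        return;
    }
    if (s->yres > s->vram_size / line)
        s->yres = s->vram_size / line;         /* clamp height to vram */
    uint64_t start = (uint64_t)s->y_off * line + (uint64_t)s->x_off * 4;
    if (start + (uint64_t)line * s->yres > s->vram_size)
        s->x_off = s->y_off = 0;               /* offsets must stay inside */
}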
dp_packet_size(const struct dp_packet *b)
{
return b->mbuf.pkt_len;
}
| 0 |
[
"CWE-400"
] |
ovs
|
79349cbab0b2a755140eedb91833ad2760520a83
| 69,115,694,704,401,630,000,000,000,000,000,000,000 | 4 |
flow: Support extra padding length.
Although not required, padding can be optionally added until
the packet length is MTU bytes. A packet with extra padding
currently fails sanity checks.
Vulnerability: CVE-2020-35498
Fixes: fa8d9001a624 ("miniflow_extract: Properly handle small IP packets.")
Reported-by: Joakim Hindersson <[email protected]>
Acked-by: Ilya Maximets <[email protected]>
Signed-off-by: Flavio Leitner <[email protected]>
Signed-off-by: Ilya Maximets <[email protected]>
|
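A minimal sketch of the relaxed check: Ethernet may pad a frame past the IP total length, so the declared length must be at most (not exactly) the L2 payload length, and parsing then trims to the declared length. The helper is illustrative, not OVS code; l3 points at the IPv4 header.

#include <stddef.h>
#include <stdint.h>

int ipv4_len_ok(const uint8_t *l3, size_t l2_payload_len, size_t *ip_len)
{
    if (l2_payload_len < 20)
        return 0;                       /* shorter than a minimal header */
    uint16_t tot_len = (uint16_t)((l3[2] << 8) | l3[3]);
    if (tot_len < 20 || tot_len > l2_payload_len)
        return 0;                       /* truncated packets stay invalid */
    *ip_len = tot_len;                  /* extra bytes are padding: ignore */
    return 1;
}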
sp_variable *LEX::sp_param_init(LEX_CSTRING *name)
{
if (spcont->find_variable(name, true))
{
my_error(ER_SP_DUP_PARAM, MYF(0), name->str);
return NULL;
}
sp_variable *spvar= spcont->add_variable(thd, name);
init_last_field(&spvar->field_def, name,
thd->variables.collation_database);
return spvar;
}
| 0 |
[
"CWE-703"
] |
server
|
39feab3cd31b5414aa9b428eaba915c251ac34a2
| 276,338,735,705,203,000,000,000,000,000,000,000,000 | 12 |
MDEV-26412 Server crash in Item_field::fix_outer_field for INSERT SELECT
If an INSERT/REPLACE SELECT statement contained an ON expression in the top
level select and this expression used a subquery with a column reference
that could not be resolved then an attempt to resolve this reference as
an outer reference caused a crash of the server. This happened because the
outer context field in the Name_resolution_context structure was not set
to NULL for such references. Rather it pointed to the first element in
the select_stack.
Note that starting from 10.4 we cannot use the SELECT_LEX::outer_select()
method when parsing a SELECT construct.
Approved by Oleksandr Byelkin <[email protected]>
|
bool LEX::stmt_execute(const Lex_ident_sys_st &ident, List<Item> *params)
{
sql_command= SQLCOM_EXECUTE;
prepared_stmt.set(ident, NULL, params);
return stmt_prepare_validate("EXECUTE..USING");
}
| 0 |
[
"CWE-703"
] |
server
|
39feab3cd31b5414aa9b428eaba915c251ac34a2
| 22,680,583,703,445,467,000,000,000,000,000,000,000 | 6 |
MDEV-26412 Server crash in Item_field::fix_outer_field for INSERT SELECT
If an INSERT/REPLACE SELECT statement contained an ON expression in the top
level select and this expression used a subquery with a column reference
that could not be resolved then an attempt to resolve this reference as
an outer reference caused a crash of the server. This happened because the
outer context field in the Name_resolution_context structure was not set
to NULL for such references. Rather it pointed to the first element in
the select_stack.
Note that starting from 10.4 we cannot use the SELECT_LEX::outer_select()
method when parsing a SELECT construct.
Approved by Oleksandr Byelkin <[email protected]>
|
GF_Err vmhd_Write(GF_Box *s, GF_BitStream *bs)
{
GF_Err e;
GF_VideoMediaHeaderBox *ptr = (GF_VideoMediaHeaderBox *)s;
e = gf_isom_full_box_write(s, bs);
if (e) return e;
gf_bs_write_u64(bs, ptr->reserved);
return GF_OK;
}
| 0 |
[
"CWE-125"
] |
gpac
|
bceb03fd2be95097a7b409ea59914f332fb6bc86
| 184,637,246,758,701,400,000,000,000,000,000,000,000 | 10 |
fixed 2 possible heap overflows (inc. #1088)
|
Client::sentRequestBody(const CommIoCbParams &io)
{
debugs(11, 5, "sentRequestBody: FD " << io.fd << ": size " << io.size << ": errflag " << io.flag << ".");
debugs(32,3,HERE << "sentRequestBody called");
requestSender = NULL;
if (io.size > 0) {
fd_bytes(io.fd, io.size, FD_WRITE);
statCounter.server.all.kbytes_out += io.size;
// kids should increment their counters
}
if (io.flag == Comm::ERR_CLOSING)
return;
if (!requestBodySource) {
debugs(9,3, HERE << "detected while-we-were-sending abort");
return; // do nothing;
}
// both successful and failed writes affect response times
request->hier.notePeerWrite();
if (io.flag) {
debugs(11, DBG_IMPORTANT, "sentRequestBody error: FD " << io.fd << ": " << xstrerr(io.xerrno));
ErrorState *err;
err = new ErrorState(ERR_WRITE_ERROR, Http::scBadGateway, fwd->request);
err->xerrno = io.xerrno;
fwd->fail(err);
abortOnData("I/O error while sending request body");
return;
}
if (EBIT_TEST(entry->flags, ENTRY_ABORTED)) {
abortOnData("store entry aborted while sending request body");
return;
}
if (!requestBodySource->exhausted())
sendMoreRequestBody();
else if (receivedWholeRequestBody)
doneSendingRequestBody();
else
debugs(9,3, HERE << "waiting for body production end or abort");
}
| 0 |
[
"CWE-20"
] |
squid
|
1e05a85bd28c22c9ca5d3ac9f5e86d6269ec0a8c
| 209,120,136,158,985,620,000,000,000,000,000,000,000 | 46 |
Handle more partial responses (#791)
|
radix__hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
unsigned long len, unsigned long pgoff,
unsigned long flags)
{
struct mm_struct *mm = current->mm;
struct vm_area_struct *vma;
struct hstate *h = hstate_file(file);
struct vm_unmapped_area_info info;
if (unlikely(addr > mm->context.addr_limit && addr < TASK_SIZE))
mm->context.addr_limit = TASK_SIZE;
if (len & ~huge_page_mask(h))
return -EINVAL;
if (len > mm->task_size)
return -ENOMEM;
if (flags & MAP_FIXED) {
if (prepare_hugepage_range(file, addr, len))
return -EINVAL;
return addr;
}
if (addr) {
addr = ALIGN(addr, huge_page_size(h));
vma = find_vma(mm, addr);
if (mm->task_size - len >= addr &&
(!vma || addr + len <= vma->vm_start))
return addr;
}
/*
* We are always doing an topdown search here. Slice code
* does that too.
*/
info.flags = VM_UNMAPPED_AREA_TOPDOWN;
info.length = len;
info.low_limit = PAGE_SIZE;
info.high_limit = current->mm->mmap_base;
info.align_mask = PAGE_MASK & ~huge_page_mask(h);
info.align_offset = 0;
if (addr > DEFAULT_MAP_WINDOW)
info.high_limit += mm->context.addr_limit - DEFAULT_MAP_WINDOW;
return vm_unmapped_area(&info);
}
| 1 |
[
"CWE-119"
] |
linux
|
1be7107fbe18eed3e319a6c3e83c78254b693acb
| 241,306,794,657,289,400,000,000,000,000,000,000,000 | 46 |
mm: larger stack guard gap, between vmas
Stack guard page is a useful feature to reduce a risk of stack smashing
into a different mapping. We have been using a single page gap which
is sufficient to prevent having stack adjacent to a different mapping.
But this seems to be insufficient in the light of the stack usage in
userspace. E.g. glibc uses as large as 64kB alloca() in many commonly
used functions. Others use constructs liks gid_t buffer[NGROUPS_MAX]
which is 256kB or stack strings with MAX_ARG_STRLEN.
This will become especially dangerous for suid binaries and the default
no limit for the stack size limit because those applications can be
tricked to consume a large portion of the stack and a single glibc call
could jump over the guard page. These attacks are not theoretical,
unfortunately.
Make those attacks less probable by increasing the stack guard gap
to 1MB (on systems with 4k pages; but make it depend on the page size
because systems with larger base pages might cap stack allocations in
the PAGE_SIZE units) which should cover larger alloca() and VLA stack
allocations. It is obviously not a full fix because the problem is
somehow inherent, but it should reduce attack space a lot.
One could argue that the gap size should be configurable from userspace,
but that can be done later when somebody finds that the new 1MB is wrong
for some special case applications. For now, add a kernel command line
option (stack_guard_gap) to specify the stack gap size (in page units).
Implementation wise, first delete all the old code for stack guard page:
because although we could get away with accounting one extra page in a
stack vma, accounting a larger gap can break userspace - case in point,
a program run with "ulimit -S -v 20000" failed when the 1MB gap was
counted for RLIMIT_AS; similar problems could come with RLIMIT_MLOCK
and strict non-overcommit mode.
Instead of keeping gap inside the stack vma, maintain the stack guard
gap as a gap between vmas: using vm_start_gap() in place of vm_start
(or vm_end_gap() in place of vm_end if VM_GROWSUP) in just those few
places which need to respect the gap - mainly arch_get_unmapped_area(),
and the vma tree's subtree_gap support for that.
Original-patch-by: Oleg Nesterov <[email protected]>
Original-patch-by: Michal Hocko <[email protected]>
Signed-off-by: Hugh Dickins <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Tested-by: Helge Deller <[email protected]> # parisc
Signed-off-by: Linus Torvalds <[email protected]>
|
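A sketch of vm_start_gap() as the message describes it, close to the upstream shape: a downward-growing stack vma reports a start lowered by stack_guard_gap, so gap-respecting callers see the reserved hole without the gap being accounted inside the vma.

#define VM_GROWSDOWN 0x00000100u

/* Default gap: 256 pages, i.e. 1 MiB with 4 KiB pages (scaled by the
 * stack_guard_gap boot parameter the commit adds). */
static unsigned long stack_guard_gap = 256ul << 12;

struct vma { unsigned long vm_start, vm_end, vm_flags; };

unsigned long vm_start_gap(const struct vma *v)
{
    unsigned long start = v->vm_start;
    if (v->vm_flags & VM_GROWSDOWN) {
        start -= stack_guard_gap;
        if (start > v->vm_start)      /* wrapped below zero: clamp */
            start = 0;
    }
    return start;
}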
static void ext2_xattr_rehash(struct ext2_xattr_header *header,
struct ext2_xattr_entry *entry)
{
struct ext2_xattr_entry *here;
__u32 hash = 0;
ext2_xattr_hash_entry(header, entry);
here = ENTRY(header+1);
while (!IS_LAST_ENTRY(here)) {
if (!here->e_hash) {
/* Block is not shared if an entry's hash value == 0 */
hash = 0;
break;
}
hash = (hash << BLOCK_HASH_SHIFT) ^
(hash >> (8*sizeof(hash) - BLOCK_HASH_SHIFT)) ^
le32_to_cpu(here->e_hash);
here = EXT2_XATTR_NEXT(here);
}
header->h_hash = cpu_to_le32(hash);
}
| 0 |
[
"CWE-241",
"CWE-19"
] |
linux
|
be0726d33cb8f411945884664924bed3cb8c70ee
| 181,304,242,134,108,000,000,000,000,000,000,000,000 | 21 |
ext2: convert to mbcache2
The conversion is generally straightforward. We convert the filesystem from
a global cache to a per-fs one. Similarly to ext4, the tricky part is that
xattr block corresponding to found mbcache entry can get freed before we
get buffer lock for that block. So we have to check whether the entry is
still valid after getting the buffer lock.
Signed-off-by: Jan Kara <[email protected]>
Signed-off-by: Theodore Ts'o <[email protected]>
|
void CWebServer::Cmd_SystemReboot(WebEmSession & session, const request& req, Json::Value &root)
{
if (session.rights != 2)
{
session.reply_status = reply::forbidden;
return; //Only admin user allowed
}
#ifdef WIN32
int ret = system("shutdown -r -f -t 1 -d up:125:1");
#else
int ret = system("sudo shutdown -r now");
#endif
if (ret != 0)
{
_log.Log(LOG_ERROR, "Error executing reboot command. returned: %d", ret);
return;
}
root["title"] = "SystemReboot";
root["status"] = "OK";
}
| 0 |
[
"CWE-89"
] |
domoticz
|
ee70db46f81afa582c96b887b73bcd2a86feda00
| 156,360,856,354,366,340,000,000,000,000,000,000,000 | 20 |
Fixed possible SQL Injection Vulnerability (Thanks to Fabio Carretto!)
|
static unsigned char *get_params(unsigned char *p, int *param, int *len)
{
int n;
*len = 0;
while (*p != '\0') {
while (*p == ' ' || *p == '\t') {
p++;
}
if (isdigit(*p)) {
for (n = 0; isdigit(*p); p++) {
n = n * 10 + (*p - '0');
}
if (*len < 10) {
param[(*len)++] = n;
}
while (*p == ' ' || *p == '\t') {
p++;
}
if (*p == ';') {
p++;
}
} else if (*p == ';') {
if (*len < 10) {
param[(*len)++] = 0;
}
p++;
} else
break;
}
return p;
}
| 0 |
[
"CWE-399",
"CWE-119"
] |
ImageMagick
|
eedd0c35bb2d8af7aa05f215689fdebd11633fa1
| 157,558,428,896,910,800,000,000,000,000,000,000,000 | 32 |
Prevent buffer overflow in SIXEL, PDB, MAP, and CALS coders (bug report from Donghai Zhu)
|
static void f3(struct mg_connection *c, int ev, void *ev_data, void *fn_data) {
int *ok = (int *) fn_data;
// LOG(LL_INFO, ("%d", ev));
if (ev == MG_EV_CONNECT) {
// c->is_hexdumping = 1;
mg_printf(c, "GET / HTTP/1.0\r\nHost: %s\r\n\r\n",
c->peer.is_ip6 ? "ipv6.google.com" : "cesanta.com");
} else if (ev == MG_EV_HTTP_MSG) {
struct mg_http_message *hm = (struct mg_http_message *) ev_data;
// LOG(LL_INFO, ("-->[%.*s]", (int) hm->message.len, hm->message.ptr));
// ASSERT(mg_vcmp(&hm->method, "HTTP/1.1") == 0);
// ASSERT(mg_vcmp(&hm->uri, "301") == 0);
*ok = atoi(hm->uri.ptr);
} else if (ev == MG_EV_CLOSE) {
if (*ok == 0) *ok = 888;
} else if (ev == MG_EV_ERROR) {
if (*ok == 0) *ok = 777;
}
}
| 0 |
[
"CWE-552"
] |
mongoose
|
c65c8fdaaa257e0487ab0aaae9e8f6b439335945
| 115,032,596,977,364,680,000,000,000,000,000,000,000 | 19 |
Protect against the directory traversal in mg_upload()
|
static void sco_skb_put_cmsg(struct sk_buff *skb, struct msghdr *msg,
struct sock *sk)
{
if (sco_pi(sk)->cmsg_mask & SCO_CMSG_PKT_STATUS)
put_cmsg(msg, SOL_BLUETOOTH, BT_SCM_PKT_STATUS,
sizeof(bt_cb(skb)->sco.pkt_status),
&bt_cb(skb)->sco.pkt_status);
}
| 0 |
[] |
linux
|
f6b8c6b5543983e9de29dc14716bfa4eb3f157c4
| 225,527,747,106,409,500,000,000,000,000,000,000,000 | 8 |
Bluetooth: sco: Fix crash when using BT_SNDMTU/BT_RCVMTU option
This commit adds a validity check for a connected socket; without it, the
following crash occurs due to sco_pi(sk)->conn being NULL:
KASAN: null-ptr-deref in range [0x0000000000000050-0x0000000000000057]
CPU: 3 PID: 4284 Comm: test_sco Not tainted 5.10.0-rc3+ #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1 04/01/2014
RIP: 0010:sco_sock_getsockopt+0x45d/0x8e0
Code: 48 c1 ea 03 80 3c 02 00 0f 85 ca 03 00 00 49 8b 9d f8 04 00 00 48 b8 00
00 00 00 00 fc ff df 48 8d 7b 50 48 89 fa 48 c1 ea 03 <0f> b6 04 02 84
c0 74 08 3c 03 0f 8e b5 03 00 00 8b 43 50 48 8b 0c
RSP: 0018:ffff88801bb17d88 EFLAGS: 00010206
RAX: dffffc0000000000 RBX: 0000000000000000 RCX: ffffffff83a4ecdf
RDX: 000000000000000a RSI: ffffc90002fce000 RDI: 0000000000000050
RBP: 1ffff11003762fb4 R08: 0000000000000001 R09: ffff88810e1008c0
R10: ffffffffbd695dcf R11: fffffbfff7ad2bb9 R12: 0000000000000000
R13: ffff888018ff1000 R14: dffffc0000000000 R15: 000000000000000d
FS: 00007fb4f76c1700(0000) GS:ffff88811af80000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00005555e3b7a938 CR3: 00000001117be001 CR4: 0000000000770ee0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
? sco_skb_put_cmsg+0x80/0x80
? sco_skb_put_cmsg+0x80/0x80
__sys_getsockopt+0x12a/0x220
? __ia32_sys_setsockopt+0x150/0x150
? syscall_enter_from_user_mode+0x18/0x50
? rcu_read_lock_bh_held+0xb0/0xb0
__x64_sys_getsockopt+0xba/0x150
? syscall_enter_from_user_mode+0x1d/0x50
do_syscall_64+0x33/0x40
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Fixes: 0fc1a726f897 ("Bluetooth: sco: new getsockopt options BT_SNDMTU/BT_RCVMTU")
Reported-by: Hulk Robot <[email protected]>
Signed-off-by: Wei Yongjun <[email protected]>
Reviewed-by: Luiz Augusto Von Dentz <[email protected]>
Signed-off-by: Marcel Holtmann <[email protected]>
Signed-off-by: Johan Hedberg <[email protected]>
|
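A minimal sketch of the guard being added: check the socket state before dereferencing the connection. Constants and struct layout are simplified stand-ins for the kernel's.

#define BT_CONNECTED 1
#define ENOTCONN     107

struct sco_conn { int mtu; };
struct sco_sock { int sk_state; struct sco_conn *conn; };

int sco_get_mtu(const struct sco_sock *sk, int *mtu)
{
    if (sk->sk_state != BT_CONNECTED)
        return -ENOTCONN;       /* the validity check being added */
    *mtu = sk->conn->mtu;       /* safe: conn is non-NULL once connected */
    return 0;
}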
static inline void ide_abort_command(IDEState *s)
{
ide_transfer_stop(s);
s->status = READY_STAT | ERR_STAT;
s->error = ABRT_ERR;
}
| 0 |
[
"CWE-399"
] |
qemu
|
3251bdcf1c67427d964517053c3d185b46e618e8
| 104,883,885,242,131,400,000,000,000,000,000,000,000 | 6 |
ide: Correct handling of malformed/short PRDTs
This impacts both BMDMA and AHCI HBA interfaces for IDE.
Currently, we conflate a PRDT having "0 bytes" with a PRDT having
"0 complete sectors."
When we receive an incomplete sector, inconsistent error checking
leads to an infinite loop wherein the call succeeds, but it
didn't give us enough bytes -- leading us to re-call the
DMA chain over and over again. This leads to, in the BMDMA case,
leaked memory for short PRDTs, and infinite loops and resource
usage in the AHCI case.
The .prepare_buf() callback is reworked to return the number of
bytes that it successfully prepared. 0 is a valid, non-error
answer that means the table was empty and described no bytes.
-1 indicates an error.
Our current implementation uses the io_buffer in IDEState to
ultimately describe the size of a prepared scatter-gather list.
Even though the AHCI PRDT/SGList can be as large as 256GiB, the
AHCI command header limits transactions to just 4GiB. ATA8-ACS3,
however, defines the largest transaction to be an LBA48 command
that transfers 65,536 sectors. With a 512 byte sector size, this
is just 32MiB.
Since our current state structures use the int type to describe
the size of the buffer, and this state is migrated as int32, we
are limited to describing 2GiB buffer sizes unless we change the
migration protocol.
For this reason, this patch begins to unify the assertions in the
IDE pathways that the scatter-gather list provided by either the
AHCI PRDT or the PCI BMDMA PRDs can only describe, at a maximum,
2GiB. This should be resilient enough unless we need a sector
size that exceeds 32KiB.
Further, the likelihood of any guest operating system actually
attempting to transfer this much data in a single operation is
very slim.
To this end, the IDEState variables have been updated to more
explicitly clarify our maximum supported size. Callers to the
prepare_buf callback have been reworked to understand the new
return code, and all versions of the prepare_buf callback have
been adjusted accordingly.
Lastly, the ahci_populate_sglist helper, relied upon by the
AHCI implementation of .prepare_buf() as well as the PCI
implementation of the callback have had overflow assertions
added to help make clear the reasonings behind the various
type changes.
[Added %d -> %"PRId64" fix John sent because off_pos changed from int to
int64_t.
--Stefan]
Signed-off-by: John Snow <[email protected]>
Reviewed-by: Paolo Bonzini <[email protected]>
Message-id: [email protected]
Signed-off-by: Stefan Hajnoczi <[email protected]>
|
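A sketch of the reworked .prepare_buf() contract from the message: return the number of bytes actually described, with 0 a legal empty table and -1 an error, so callers can tell "empty" apart from "failed" and stop re-issuing the DMA chain forever. Types are illustrative; the 2 GiB cap mirrors the migration constraint described above.

#include <stdint.h>

int32_t prepare_buf(const uint32_t *prd_len, int entries)
{
    int64_t total = 0;
    for (int i = 0; i < entries; i++) {
        total += prd_len[i];
        if (total > INT32_MAX)
            return -1;            /* error: exceeds the 2 GiB limit */
    }
    return (int32_t)total;        /* 0 is valid: an empty PRDT */
}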
flatpak_save_override_keyfile (GKeyFile *metakey,
const char *app_id,
gboolean user,
GError **error)
{
g_autoptr(GFile) base_dir = NULL;
g_autoptr(GFile) override_dir = NULL;
g_autoptr(GFile) file = NULL;
g_autofree char *filename = NULL;
g_autofree char *parent = NULL;
if (user)
base_dir = flatpak_get_user_base_dir_location ();
else
base_dir = flatpak_get_system_default_base_dir_location ();
override_dir = g_file_get_child (base_dir, "overrides");
if (app_id)
file = g_file_get_child (override_dir, app_id);
else
file = g_file_get_child (override_dir, "global");
filename = g_file_get_path (file);
parent = g_path_get_dirname (filename);
if (g_mkdir_with_parents (parent, 0755))
{
glnx_set_error_from_errno (error);
return FALSE;
}
return g_key_file_save_to_file (metakey, filename, error);
}
| 0 |
[
"CWE-668"
] |
flatpak
|
cd2142888fc4c199723a0dfca1f15ea8788a5483
| 202,040,079,617,642,950,000,000,000,000,000,000,000 | 33 |
Don't expose /proc when running apply_extra
As shown by CVE-2019-5736, it is sometimes possible for the sandbox
app to access outside files using /proc/self/exe. This is not
typically an issue for flatpak as the sandbox runs as the user which
has no permissions to e.g. modify the host files.
However, when installing apps using extra-data into the system repo
we *do* actually run a sandbox as root. So, in this case we disable mounting
/proc in the sandbox, which will neuter attacks like this.
|
sec_establish_key(void)
{
uint32 length = g_server_public_key_len + SEC_PADDING_SIZE;
uint32 flags = SEC_EXCHANGE_PKT;
STREAM s;
s = sec_init(flags, length + 4);
out_uint32_le(s, length);
out_uint8p(s, g_sec_crypted_random, g_server_public_key_len);
out_uint8s(s, SEC_PADDING_SIZE);
s_mark_end(s);
sec_send(s, flags);
}
| 0 |
[
"CWE-119",
"CWE-125",
"CWE-703",
"CWE-787"
] |
rdesktop
|
4dca546d04321a610c1835010b5dad85163b65e1
| 98,374,837,213,370,420,000,000,000,000,000,000,000 | 15 |
Malicious RDP server security fixes
This commit includes fixes for a set of 21 vulnerabilities in
rdesktop when a malicious RDP server is used.
All vulnerabilities was identified and reported by Eyal Itkin.
* Add rdp_protocol_error function that is used in several fixes
* Refactor of process_bitmap_updates
* Fix possible integer overflow in s_check_rem() on 32bit arch
* Fix memory corruption in process_bitmap_data - CVE-2018-8794
* Fix remote code execution in process_bitmap_data - CVE-2018-8795
* Fix remote code execution in process_plane - CVE-2018-8797
* Fix Denial of Service in mcs_recv_connect_response - CVE-2018-20175
* Fix Denial of Service in mcs_parse_domain_params - CVE-2018-20175
* Fix Denial of Service in sec_parse_crypt_info - CVE-2018-20176
* Fix Denial of Service in sec_recv - CVE-2018-20176
* Fix minor information leak in rdpdr_process - CVE-2018-8791
* Fix Denial of Service in cssp_read_tsrequest - CVE-2018-8792
* Fix remote code execution in cssp_read_tsrequest - CVE-2018-8793
* Fix Denial of Service in process_bitmap_data - CVE-2018-8796
* Fix minor information leak in rdpsnd_process_ping - CVE-2018-8798
* Fix Denial of Service in process_secondary_order - CVE-2018-8799
* Fix remote code execution in in ui_clip_handle_data - CVE-2018-8800
* Fix major information leak in ui_clip_handle_data - CVE-2018-20174
* Fix memory corruption in rdp_in_unistr - CVE-2018-20177
* Fix Denial of Service in process_demand_active - CVE-2018-20178
* Fix remote code execution in lspci_process - CVE-2018-20179
* Fix remote code execution in rdpsnddbg_process - CVE-2018-20180
* Fix remote code execution in seamless_process - CVE-2018-20181
* Fix remote code execution in seamless_process_line - CVE-2018-20182
|
d2s_clip_array (const double *src, int count, short *dest, double scale)
{ while (--count >= 0)
{ double tmp = scale * src [count] ;
if (CPU_CLIPS_POSITIVE == 0 && tmp > 32767.0)
dest [count] = SHRT_MAX ;
else if (CPU_CLIPS_NEGATIVE == 0 && tmp < -32768.0)
dest [count] = SHRT_MIN ;
else
dest [count] = lrint (tmp) ;
} ;
} /* d2s_clip_array */
| 0 |
[
"CWE-369"
] |
libsndfile
|
85c877d5072866aadbe8ed0c3e0590fbb5e16788
| 22,499,077,447,043,435,000,000,000,000,000,000,000 | 12 |
double64_init: Check psf->sf.channels against upper bound
This prevents division by zero later in the code.
While the trivial case to catch this (i.e. sf.channels < 1) has already
been covered, a crafted file may report a number of channels that is
so high (i.e. > INT_MAX/sizeof(double)) that it "somehow" gets
miscalculated to zero (if this makes sense) in the determination of the
blockwidth. Since we only support a limited number of channels anyway,
make sure to check here as well.
CVE-2017-14634
Closes: https://github.com/erikd/libsndfile/issues/318
Signed-off-by: Erik de Castro Lopo <[email protected]>
|
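A minimal sketch of the added bound: besides rejecting channel counts below 1, reject counts whose product with sizeof(double) would overflow an int, since that wrap is what drives the derived blockwidth to zero. The helper name is illustrative.

#include <limits.h>
#include <stddef.h>

int channels_valid(int channels)
{
    if (channels < 1)
        return 0;                                   /* trivial case */
    if ((size_t)channels > INT_MAX / sizeof(double))
        return 0;            /* channels * 8 would overflow blockwidth */
    return 1;
}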
nfsd4_encode_readdir(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4_readdir *readdir)
{
int maxcount;
int bytes_left;
loff_t offset;
__be64 wire_offset;
struct xdr_stream *xdr = &resp->xdr;
int starting_len = xdr->buf->len;
__be32 *p;
if (nfserr)
return nfserr;
p = xdr_reserve_space(xdr, NFS4_VERIFIER_SIZE);
if (!p)
return nfserr_resource;
/* XXX: Following NFSv3, we ignore the READDIR verifier for now. */
*p++ = cpu_to_be32(0);
*p++ = cpu_to_be32(0);
resp->xdr.buf->head[0].iov_len = ((char *)resp->xdr.p)
- (char *)resp->xdr.buf->head[0].iov_base;
/*
* Number of bytes left for directory entries allowing for the
* final 8 bytes of the readdir and a following failed op:
*/
bytes_left = xdr->buf->buflen - xdr->buf->len
- COMPOUND_ERR_SLACK_SPACE - 8;
if (bytes_left < 0) {
nfserr = nfserr_resource;
goto err_no_verf;
}
maxcount = min_t(u32, readdir->rd_maxcount, INT_MAX);
/*
* Note the rfc defines rd_maxcount as the size of the
* READDIR4resok structure, which includes the verifier above
* and the 8 bytes encoded at the end of this function:
*/
if (maxcount < 16) {
nfserr = nfserr_toosmall;
goto err_no_verf;
}
maxcount = min_t(int, maxcount-16, bytes_left);
/* RFC 3530 14.2.24 allows us to ignore dircount when it's 0: */
if (!readdir->rd_dircount)
readdir->rd_dircount = INT_MAX;
readdir->xdr = xdr;
readdir->rd_maxcount = maxcount;
readdir->common.err = 0;
readdir->cookie_offset = 0;
offset = readdir->rd_cookie;
nfserr = nfsd_readdir(readdir->rd_rqstp, readdir->rd_fhp,
&offset,
&readdir->common, nfsd4_encode_dirent);
if (nfserr == nfs_ok &&
readdir->common.err == nfserr_toosmall &&
xdr->buf->len == starting_len + 8) {
/* nothing encoded; which limit did we hit?: */
if (maxcount - 16 < bytes_left)
/* It was the fault of rd_maxcount: */
nfserr = nfserr_toosmall;
else
/* We ran out of buffer space: */
nfserr = nfserr_resource;
}
if (nfserr)
goto err_no_verf;
if (readdir->cookie_offset) {
wire_offset = cpu_to_be64(offset);
write_bytes_to_xdr_buf(xdr->buf, readdir->cookie_offset,
&wire_offset, 8);
}
p = xdr_reserve_space(xdr, 8);
if (!p) {
WARN_ON_ONCE(1);
goto err_no_verf;
}
*p++ = 0; /* no more entries */
*p++ = htonl(readdir->common.err == nfserr_eof);
return 0;
err_no_verf:
xdr_truncate_encode(xdr, starting_len);
return nfserr;
}
| 0 |
[
"CWE-20",
"CWE-129"
] |
linux
|
f961e3f2acae94b727380c0b74e2d3954d0edf79
| 236,006,352,750,615,500,000,000,000,000,000,000,000 | 91 |
nfsd: encoders mustn't use unitialized values in error cases
In error cases, lgp->lg_layout_type may be out of bounds; so we
shouldn't be using it until after the check of nfserr.
This was seen to crash nfsd threads when the server receives a LAYOUTGET
request with a large layout type.
GETDEVICEINFO has the same problem.
Reported-by: Ari Kauppi <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Cc: [email protected]
Signed-off-by: J. Bruce Fields <[email protected]>
|
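A minimal sketch of the ordering fix: return on nfserr before reading any request field that is only initialized on success, and bounds-check the layout type before indexing anything with it. Names and the table bound are illustrative.

#include <stddef.h>

#define LAYOUT_TYPE_MAX 8   /* illustrative bound */

struct layoutget { unsigned int lg_layout_type; };

static const void *layout_ops[LAYOUT_TYPE_MAX];

const void *encode_layoutget(int nfserr, const struct layoutget *lgp)
{
    if (nfserr != 0)
        return NULL;        /* bail before touching lg_layout_type */
    if (lgp->lg_layout_type >= LAYOUT_TYPE_MAX)
        return NULL;        /* and bounds-check the index regardless */
    return layout_ops[lgp->lg_layout_type];
}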
static bool is_inf(const long double val) {
#ifdef isinf
return (bool)isinf(val);
#else
return !is_nan(val) && (val<cimg::type<half>::min() || val>cimg::type<half>::max());
#endif
}
| 0 |
[
"CWE-770"
] |
cimg
|
619cb58dd90b4e03ac68286c70ed98acbefd1c90
| 110,500,332,030,821,030,000,000,000,000,000,000,000 | 7 |
CImg<>::load_bmp() and CImg<>::load_pandore(): Check that dimensions encoded in the file do not exceed the file size.
|
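The CImg fix above caps resource consumption (CWE-770) by refusing header
dimensions that imply more data than the file actually contains. A hedged
sketch of such a validation with an overflow-safe multiply; the parameter
names are assumptions, not CImg's API:

#include <stddef.h>
#include <stdint.h>

/* Validate image dimensions read from a header against the number of
 * bytes left in the file, guarding every multiply against overflow. */
static int dims_fit_file(uint64_t width, uint64_t height, uint64_t channels,
                         uint64_t bytes_per_sample, uint64_t bytes_left)
{
    uint64_t need = width;
    const uint64_t factors[] = { height, channels, bytes_per_sample };
    for (size_t i = 0; i < sizeof(factors) / sizeof(factors[0]); ++i) {
        if (factors[i] && need > UINT64_MAX / factors[i])
            return 0;             /* multiplication would overflow */
        need *= factors[i];
    }
    return need <= bytes_left;    /* CWE-770: cap allocation by file size */
}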
BOOL update_write_cache_bitmap_v2_order(wStream* s, CACHE_BITMAP_V2_ORDER* cache_bitmap_v2,
BOOL compressed, UINT16* flags)
{
BYTE bitsPerPixelId;
if (!Stream_EnsureRemainingCapacity(
s, update_approximate_cache_bitmap_v2_order(cache_bitmap_v2, compressed, flags)))
return FALSE;
bitsPerPixelId = BPP_CBR2[cache_bitmap_v2->bitmapBpp];
*flags = (cache_bitmap_v2->cacheId & 0x0003) | (bitsPerPixelId << 3) |
((cache_bitmap_v2->flags << 7) & 0xFF80);
if (cache_bitmap_v2->flags & CBR2_PERSISTENT_KEY_PRESENT)
{
Stream_Write_UINT32(s, cache_bitmap_v2->key1); /* key1 (4 bytes) */
Stream_Write_UINT32(s, cache_bitmap_v2->key2); /* key2 (4 bytes) */
}
if (cache_bitmap_v2->flags & CBR2_HEIGHT_SAME_AS_WIDTH)
{
if (!update_write_2byte_unsigned(s, cache_bitmap_v2->bitmapWidth)) /* bitmapWidth */
return FALSE;
}
else
{
if (!update_write_2byte_unsigned(s, cache_bitmap_v2->bitmapWidth) || /* bitmapWidth */
!update_write_2byte_unsigned(s, cache_bitmap_v2->bitmapHeight)) /* bitmapHeight */
return FALSE;
}
if (cache_bitmap_v2->flags & CBR2_DO_NOT_CACHE)
cache_bitmap_v2->cacheIndex = BITMAP_CACHE_WAITING_LIST_INDEX;
if (!update_write_4byte_unsigned(s, cache_bitmap_v2->bitmapLength) || /* bitmapLength */
!update_write_2byte_unsigned(s, cache_bitmap_v2->cacheIndex)) /* cacheIndex */
return FALSE;
if (compressed)
{
if (!(cache_bitmap_v2->flags & CBR2_NO_BITMAP_COMPRESSION_HDR))
{
Stream_Write_UINT16(
s, cache_bitmap_v2->cbCompFirstRowSize); /* cbCompFirstRowSize (2 bytes) */
Stream_Write_UINT16(
s, cache_bitmap_v2->cbCompMainBodySize); /* cbCompMainBodySize (2 bytes) */
Stream_Write_UINT16(s, cache_bitmap_v2->cbScanWidth); /* cbScanWidth (2 bytes) */
Stream_Write_UINT16(
s, cache_bitmap_v2->cbUncompressedSize); /* cbUncompressedSize (2 bytes) */
cache_bitmap_v2->bitmapLength = cache_bitmap_v2->cbCompMainBodySize;
}
if (!Stream_EnsureRemainingCapacity(s, cache_bitmap_v2->bitmapLength))
return FALSE;
Stream_Write(s, cache_bitmap_v2->bitmapDataStream, cache_bitmap_v2->bitmapLength);
}
else
{
if (!Stream_EnsureRemainingCapacity(s, cache_bitmap_v2->bitmapLength))
return FALSE;
Stream_Write(s, cache_bitmap_v2->bitmapDataStream, cache_bitmap_v2->bitmapLength);
}
cache_bitmap_v2->compressed = compressed;
return TRUE;
}
| 1 |
[
"CWE-125"
] |
FreeRDP
|
b8beb55913471952f92770c90c372139d78c16c0
| 162,081,631,190,877,050,000,000,000,000,000,000,000 | 68 |
Fixed OOB read in update_read_cache_bitmap_v3_order
CVE-2020-11096, thanks @antonio-morales for finding this.
|
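The CVE above was an out-of-bounds read in the corresponding read path:
bitmapLength came from the wire and was used to copy data without checking
how many bytes the input stream actually held. A minimal sketch of the
missing guard; the stream type below is a stand-in, not FreeRDP's wStream:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct { const uint8_t *p; size_t remaining; } stream_t;

/* Before copying bitmapLength bytes out of the input stream, verify the
 * stream actually contains that many bytes (prevents CWE-125). */
static bool read_bitmap_data(stream_t *s, uint8_t *dst, uint32_t bitmapLength)
{
    if (s->remaining < bitmapLength)  /* would read past the buffer */
        return false;
    memcpy(dst, s->p, bitmapLength);
    s->p += bitmapLength;
    s->remaining -= bitmapLength;
    return true;
}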
static long usbdev_do_ioctl(struct file *file, unsigned int cmd,
void __user *p)
{
struct usb_dev_state *ps = file->private_data;
struct inode *inode = file_inode(file);
struct usb_device *dev = ps->dev;
int ret = -ENOTTY;
if (!(file->f_mode & FMODE_WRITE))
return -EPERM;
usb_lock_device(dev);
/* Reap operations are allowed even after disconnection */
switch (cmd) {
case USBDEVFS_REAPURB:
snoop(&dev->dev, "%s: REAPURB\n", __func__);
ret = proc_reapurb(ps, p);
goto done;
case USBDEVFS_REAPURBNDELAY:
snoop(&dev->dev, "%s: REAPURBNDELAY\n", __func__);
ret = proc_reapurbnonblock(ps, p);
goto done;
#ifdef CONFIG_COMPAT
case USBDEVFS_REAPURB32:
snoop(&dev->dev, "%s: REAPURB32\n", __func__);
ret = proc_reapurb_compat(ps, p);
goto done;
case USBDEVFS_REAPURBNDELAY32:
snoop(&dev->dev, "%s: REAPURBNDELAY32\n", __func__);
ret = proc_reapurbnonblock_compat(ps, p);
goto done;
#endif
}
if (!connected(ps)) {
usb_unlock_device(dev);
return -ENODEV;
}
switch (cmd) {
case USBDEVFS_CONTROL:
snoop(&dev->dev, "%s: CONTROL\n", __func__);
ret = proc_control(ps, p);
if (ret >= 0)
inode->i_mtime = CURRENT_TIME;
break;
case USBDEVFS_BULK:
snoop(&dev->dev, "%s: BULK\n", __func__);
ret = proc_bulk(ps, p);
if (ret >= 0)
inode->i_mtime = CURRENT_TIME;
break;
case USBDEVFS_RESETEP:
snoop(&dev->dev, "%s: RESETEP\n", __func__);
ret = proc_resetep(ps, p);
if (ret >= 0)
inode->i_mtime = CURRENT_TIME;
break;
case USBDEVFS_RESET:
snoop(&dev->dev, "%s: RESET\n", __func__);
ret = proc_resetdevice(ps);
break;
case USBDEVFS_CLEAR_HALT:
snoop(&dev->dev, "%s: CLEAR_HALT\n", __func__);
ret = proc_clearhalt(ps, p);
if (ret >= 0)
inode->i_mtime = CURRENT_TIME;
break;
case USBDEVFS_GETDRIVER:
snoop(&dev->dev, "%s: GETDRIVER\n", __func__);
ret = proc_getdriver(ps, p);
break;
case USBDEVFS_CONNECTINFO:
snoop(&dev->dev, "%s: CONNECTINFO\n", __func__);
ret = proc_connectinfo(ps, p);
break;
case USBDEVFS_SETINTERFACE:
snoop(&dev->dev, "%s: SETINTERFACE\n", __func__);
ret = proc_setintf(ps, p);
break;
case USBDEVFS_SETCONFIGURATION:
snoop(&dev->dev, "%s: SETCONFIGURATION\n", __func__);
ret = proc_setconfig(ps, p);
break;
case USBDEVFS_SUBMITURB:
snoop(&dev->dev, "%s: SUBMITURB\n", __func__);
ret = proc_submiturb(ps, p);
if (ret >= 0)
inode->i_mtime = CURRENT_TIME;
break;
#ifdef CONFIG_COMPAT
case USBDEVFS_CONTROL32:
snoop(&dev->dev, "%s: CONTROL32\n", __func__);
ret = proc_control_compat(ps, p);
if (ret >= 0)
inode->i_mtime = CURRENT_TIME;
break;
case USBDEVFS_BULK32:
snoop(&dev->dev, "%s: BULK32\n", __func__);
ret = proc_bulk_compat(ps, p);
if (ret >= 0)
inode->i_mtime = CURRENT_TIME;
break;
case USBDEVFS_DISCSIGNAL32:
snoop(&dev->dev, "%s: DISCSIGNAL32\n", __func__);
ret = proc_disconnectsignal_compat(ps, p);
break;
case USBDEVFS_SUBMITURB32:
snoop(&dev->dev, "%s: SUBMITURB32\n", __func__);
ret = proc_submiturb_compat(ps, p);
if (ret >= 0)
inode->i_mtime = CURRENT_TIME;
break;
case USBDEVFS_IOCTL32:
snoop(&dev->dev, "%s: IOCTL32\n", __func__);
ret = proc_ioctl_compat(ps, ptr_to_compat(p));
break;
#endif
case USBDEVFS_DISCARDURB:
snoop(&dev->dev, "%s: DISCARDURB %p\n", __func__, p);
ret = proc_unlinkurb(ps, p);
break;
case USBDEVFS_DISCSIGNAL:
snoop(&dev->dev, "%s: DISCSIGNAL\n", __func__);
ret = proc_disconnectsignal(ps, p);
break;
case USBDEVFS_CLAIMINTERFACE:
snoop(&dev->dev, "%s: CLAIMINTERFACE\n", __func__);
ret = proc_claiminterface(ps, p);
break;
case USBDEVFS_RELEASEINTERFACE:
snoop(&dev->dev, "%s: RELEASEINTERFACE\n", __func__);
ret = proc_releaseinterface(ps, p);
break;
case USBDEVFS_IOCTL:
snoop(&dev->dev, "%s: IOCTL\n", __func__);
ret = proc_ioctl_default(ps, p);
break;
case USBDEVFS_CLAIM_PORT:
snoop(&dev->dev, "%s: CLAIM_PORT\n", __func__);
ret = proc_claim_port(ps, p);
break;
case USBDEVFS_RELEASE_PORT:
snoop(&dev->dev, "%s: RELEASE_PORT\n", __func__);
ret = proc_release_port(ps, p);
break;
case USBDEVFS_GET_CAPABILITIES:
ret = proc_get_capabilities(ps, p);
break;
case USBDEVFS_DISCONNECT_CLAIM:
ret = proc_disconnect_claim(ps, p);
break;
case USBDEVFS_ALLOC_STREAMS:
ret = proc_alloc_streams(ps, p);
break;
case USBDEVFS_FREE_STREAMS:
ret = proc_free_streams(ps, p);
break;
case USBDEVFS_DROP_PRIVILEGES:
ret = proc_drop_privileges(ps, p);
break;
}
done:
usb_unlock_device(dev);
if (ret >= 0)
inode->i_atime = CURRENT_TIME;
return ret;
}
| 0 |
[
"CWE-200"
] |
linux
|
681fef8380eb818c0b845fca5d2ab1dcbab114ee
| 260,129,296,747,565,380,000,000,000,000,000,000,000 | 194 |
USB: usbfs: fix potential infoleak in devio
The stack object “ci” has a total size of 8 bytes. Its last 3 bytes
are padding bytes which are not initialized and leaked to userland
via “copy_to_user”.
Signed-off-by: Kangjie Lu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
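The usbfs fix addresses a classic padding infoleak: a struct with
compiler-inserted padding is copied to userspace, leaking uninitialized
stack bytes. A userspace illustration of the fix pattern (zero the whole
object first); copy_to_user() is modeled with memcpy and the struct layout
is an assumption:

#include <string.h>

struct connectinfo {
    unsigned int devnum;   /* 4 bytes */
    unsigned char slow;    /* 1 byte + 3 padding bytes on most ABIs */
};

static void fill_connectinfo(void *user_buf, unsigned int devnum, int slow)
{
    struct connectinfo ci;
    memset(&ci, 0, sizeof(ci)); /* clears the padding, not just the fields */
    ci.devnum = devnum;
    ci.slow = (unsigned char)slow;
    memcpy(user_buf, &ci, sizeof(ci)); /* stands in for copy_to_user() */
}

Note that a designated initializer zeroes the named members but is not
guaranteed by the C standard to zero padding bytes, which is why an
explicit memset over the whole object is the conservative choice here.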
respip_inform_super(struct module_qstate* qstate, int id,
struct module_qstate* super)
{
struct respip_qstate* rq = (struct respip_qstate*)super->minfo[id];
struct reply_info* new_rep = NULL;
rq->state = RESPIP_SUBQUERY_FINISHED;
/* respip subquery should have always been created with a valid reply
* in super. */
log_assert(super->return_msg && super->return_msg->rep);
/* return_msg can be NULL when, e.g., the sub query resulted in
* SERVFAIL, in which case we regard it as a failure of the original
* query. Other checks are probably redundant, but we check them
* for safety. */
if(!qstate->return_msg || !qstate->return_msg->rep ||
qstate->return_rcode != LDNS_RCODE_NOERROR)
goto fail;
if(!respip_merge_cname(super->return_msg->rep, &qstate->qinfo,
qstate->return_msg->rep, super->client_info,
super->env->need_to_validate, &new_rep, super->region))
goto fail;
super->return_msg->rep = new_rep;
return;
fail:
super->return_rcode = LDNS_RCODE_SERVFAIL;
super->return_msg = NULL;
return;
}
| 0 |
[
"CWE-190"
] |
unbound
|
02080f6b180232f43b77f403d0c038e9360a460f
| 313,190,841,660,354,060,000,000,000,000,000,000,000 | 32 |
- Fix Integer Overflows in Size Calculations,
reported by X41 D-Sec.
|
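The unbound fix targets CWE-190 in size calculations: a count multiplied by
an element size can wrap and yield a too-small allocation. A generic C
sketch of the overflow-checked multiply such fixes introduce (not unbound's
actual helper):

#include <stddef.h>
#include <stdint.h>

/* Fail instead of wrapping when count * elem_size exceeds SIZE_MAX. */
static int checked_mul(size_t count, size_t elem_size, size_t *out)
{
    if (elem_size != 0 && count > SIZE_MAX / elem_size)
        return 0;  /* would overflow; caller must treat this as an error */
    *out = count * elem_size;
    return 1;
}

A caller would compute size_t total; and bail out unless
checked_mul(n, sizeof(struct entry), &total) succeeds, before allocating.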
void Field::hash(ulong *nr, ulong *nr2)
{
if (is_null())
{
*nr^= (*nr << 1) | 1;
}
else
{
uint len= pack_length();
CHARSET_INFO *cs= sort_charset();
cs->coll->hash_sort(cs, ptr, len, nr, nr2);
}
}
| 0 |
[
"CWE-416",
"CWE-703"
] |
server
|
08c7ab404f69d9c4ca6ca7a9cf7eec74c804f917
| 247,304,641,605,944,020,000,000,000,000,000,000,000 | 13 |
MDEV-24176 Server crashes after insert in the table with virtual
column generated using date_format() and if()
vcol_info->expr is allocated on expr_arena at the parsing stage. Since
the expr item is allocated on expr_arena, all the items it contains must
be allocated on expr_arena too. Otherwise fix_session_expr() will
encounter a prematurely freed item.
When the table is reopened from the cache, vcol_info contains a stale
expression. We refresh the expression via TABLE::vcol_fix_exprs(), but
first we must prepare a proper context (Vcol_expr_context) that meets
some requirements:
1. As noted above, the expr update must be done on expr_arena, since new
items may be created. This was a bug in fix_session_expr_for_read() that
simply never reproduced because there was no second refix; now that the
refix is done in more cases, it does reproduce. Tests affected: vcol.binlog
2. The name resolution context must also be narrowed to the single table.
Tested by: vcol.update main.default vcol.vcol_syntax gcol.gcol_bugfixes
3. sql_mode must be clean and must not fail the expr update.
sql_mode flags such as MODE_NO_BACKSLASH_ESCAPES, MODE_NO_ZERO_IN_DATE, etc.
must not affect the vcol expression update. If the table was created
successfully, any further evaluation must not fail. Tests affected:
main.func_like
Reviewed by: Sergei Golubchik <[email protected]>
|
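The MDEV-24176 message describes a lifetime invariant: every item a cached
expression references must come from the same long-lived arena as the
expression itself, or a later refix touches freed memory (CWE-416). A toy
bump-allocator sketch of that invariant; MariaDB's real arenas are
MEM_ROOT/Query_arena, and everything below (names, layout, the missing
alignment handling) is illustrative only:

#include <stddef.h>

typedef struct { char buf[4096]; size_t used; } arena_t;

/* Toy bump allocator; alignment is ignored for brevity. */
static void *arena_alloc(arena_t *a, size_t n)
{
    if (a->used + n > sizeof(a->buf))
        return NULL;
    void *p = a->buf + a->used;
    a->used += n;
    return p;
}

typedef struct item { const char *name; struct item *next; } item_t;

/* Correct pattern: a sub-item created while fixing an expression is
 * allocated from expr_arena, so it lives exactly as long as the
 * expression that points at it. Allocating it anywhere shorter-lived
 * would leave a dangling pointer once that storage is freed. */
static item_t *add_subitem(arena_t *expr_arena, item_t *expr, const char *name)
{
    item_t *it = arena_alloc(expr_arena, sizeof(*it));
    if (!it)
        return NULL;
    it->name = name;
    it->next = expr->next;
    expr->next = it;
    return it;
}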