func (string, 0-484k chars) | target (int64, 0-1) | cwe (list, 0-4 items) | project (string, 799 classes) | commit_id (string, 40 chars) | hash (float64) | size (int64, 1-24k) | message (string, 0-13.3k chars)
---|---|---|---|---|---|---|---|
GuestFileWrite *qmp_guest_file_write(int64_t handle, const char *buf_b64,
bool has_count, int64_t count,
Error **errp)
{
GuestFileWrite *write_data = NULL;
guchar *buf;
gsize buf_len;
int write_count;
GuestFileHandle *gfh = guest_file_handle_find(handle, errp);
FILE *fh;
if (!gfh) {
return NULL;
}
fh = gfh->fh;
if (gfh->state == RW_STATE_READING) {
int ret = fseek(fh, 0, SEEK_CUR);
if (ret == -1) {
error_setg_errno(errp, errno, "failed to seek file");
return NULL;
}
gfh->state = RW_STATE_NEW;
}
buf = qbase64_decode(buf_b64, -1, &buf_len, errp);
if (!buf) {
return NULL;
}
if (!has_count) {
count = buf_len;
} else if (count < 0 || count > buf_len) {
error_setg(errp, "value '%" PRId64 "' is invalid for argument count",
count);
g_free(buf);
return NULL;
}
write_count = fwrite(buf, 1, count, fh);
if (ferror(fh)) {
error_setg_errno(errp, errno, "failed to write to file");
slog("guest-file-write failed, handle: %" PRId64, handle);
} else {
write_data = g_new0(GuestFileWrite, 1);
write_data->count = write_count;
write_data->eof = feof(fh);
gfh->state = RW_STATE_WRITING;
}
g_free(buf);
clearerr(fh);
return write_data;
}
| 0 |
[
"CWE-190"
] |
qemu
|
141b197408ab398c4f474ac1a728ab316e921f2b
| 191,176,751,688,744,240,000,000,000,000,000,000,000 | 55 |
qga: check bytes count read by guest-file-read
While reading file content via the 'guest-file-read' command, the
'qmp_guest_file_read' routine allocates a buffer of count+1 bytes.
This could overflow for large values of 'count'.
Add a check to avoid it.
Reported-by: Fakhri Zulkifli <[email protected]>
Signed-off-by: Prasad J Pandit <[email protected]>
Cc: [email protected]
Signed-off-by: Michael Roth <[email protected]>
|
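A minimal sketch of the bounds check the commit message above calls for: validate a caller-supplied 'count' before allocating count+1 bytes. The limit and helper name are illustrative assumptions, not the actual QEMU constants.
#include <stdbool.h>
#include <stdint.h>
/* Illustrative only: reject counts for which the count+1 allocation
 * described above could overflow or exhaust memory. */
#define MAX_READ_COUNT (48 * 1024 * 1024)   /* assumed cap, not QEMU's value */
static bool read_count_is_valid(int64_t count)
{
    return count >= 0 && count <= MAX_READ_COUNT;  /* count + 1 cannot wrap */
}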
__vma_link(struct mm_struct *mm, struct vm_area_struct *vma,
struct vm_area_struct *prev, struct rb_node **rb_link,
struct rb_node *rb_parent)
{
__vma_link_list(mm, vma, prev, rb_parent);
__vma_link_rb(mm, vma, rb_link, rb_parent);
}
| 0 |
[
"CWE-119"
] |
linux
|
1be7107fbe18eed3e319a6c3e83c78254b693acb
| 20,113,987,283,230,552,000,000,000,000,000,000,000 | 7 |
mm: larger stack guard gap, between vmas
Stack guard page is a useful feature to reduce a risk of stack smashing
into a different mapping. We have been using a single page gap which
is sufficient to prevent having stack adjacent to a different mapping.
But this seems to be insufficient in the light of the stack usage in
userspace. E.g. glibc uses as large as 64kB alloca() in many commonly
used functions. Others use constructs like gid_t buffer[NGROUPS_MAX]
which is 256kB or stack strings with MAX_ARG_STRLEN.
This will become especially dangerous for suid binaries and the default
no limit for the stack size limit because those applications can be
tricked to consume a large portion of the stack and a single glibc call
could jump over the guard page. These attacks are not theoretical,
unfortunately.
Make those attacks less probable by increasing the stack guard gap
to 1MB (on systems with 4k pages; but make it depend on the page size
because systems with larger base pages might cap stack allocations in
the PAGE_SIZE units) which should cover larger alloca() and VLA stack
allocations. It is obviously not a full fix because the problem is
somehow inherent, but it should reduce attack space a lot.
One could argue that the gap size should be configurable from userspace,
but that can be done later when somebody finds that the new 1MB is wrong
for some special case applications. For now, add a kernel command line
option (stack_guard_gap) to specify the stack gap size (in page units).
Implementation wise, first delete all the old code for stack guard page:
because although we could get away with accounting one extra page in a
stack vma, accounting a larger gap can break userspace - case in point,
a program run with "ulimit -S -v 20000" failed when the 1MB gap was
counted for RLIMIT_AS; similar problems could come with RLIMIT_MLOCK
and strict non-overcommit mode.
Instead of keeping gap inside the stack vma, maintain the stack guard
gap as a gap between vmas: using vm_start_gap() in place of vm_start
(or vm_end_gap() in place of vm_end if VM_GROWSUP) in just those few
places which need to respect the gap - mainly arch_get_unmapped_area(),
and the vma tree's subtree_gap support for that.
Original-patch-by: Oleg Nesterov <[email protected]>
Original-patch-by: Michal Hocko <[email protected]>
Signed-off-by: Hugh Dickins <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Tested-by: Helge Deller <[email protected]> # parisc
Signed-off-by: Linus Torvalds <[email protected]>
|
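The message's central mechanism is vm_start_gap(): report a VM_GROWSDOWN vma's start as if it began stack_guard_gap bytes lower, clamped against underflow, so callers such as arch_get_unmapped_area() respect the gap. A sketch of that helper as a kernel-style fragment (field and flag names follow the usual conventions; this is an illustration, not the verbatim patch):
/* Illustrative fragment: expose the guard gap below a stack vma by
 * adjusting the reported start address, clamped so it cannot wrap. */
static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
{
	unsigned long vm_start = vma->vm_start;
	if (vma->vm_flags & VM_GROWSDOWN) {
		vm_start -= stack_guard_gap;
		if (vm_start > vma->vm_start)	/* underflow wrapped around */
			vm_start = 0;
	}
	return vm_start;
}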
htmlCtxtReadFd(htmlParserCtxtPtr ctxt, int fd,
const char *URL, const char *encoding, int options)
{
xmlParserInputBufferPtr input;
xmlParserInputPtr stream;
if (fd < 0)
return (NULL);
if (ctxt == NULL)
return (NULL);
xmlInitParser();
htmlCtxtReset(ctxt);
input = xmlParserInputBufferCreateFd(fd, XML_CHAR_ENCODING_NONE);
if (input == NULL)
return (NULL);
stream = xmlNewIOInputStream(ctxt, input, XML_CHAR_ENCODING_NONE);
if (stream == NULL) {
xmlFreeParserInputBuffer(input);
return (NULL);
}
inputPush(ctxt, stream);
return (htmlDoRead(ctxt, URL, encoding, options, 1));
}
| 0 |
[
"CWE-119"
] |
libxml2
|
e724879d964d774df9b7969fc846605aa1bac54c
| 204,193,843,939,935,580,000,000,000,000,000,000,000 | 26 |
Fix parsing short unclosed comment uninitialized access
For https://bugzilla.gnome.org/show_bug.cgi?id=746048
The HTML parser was too optimistic when processing comments and
didn't check for the end of the stream on the first 2 characters
|
ip6t_do_table(struct sk_buff *skb,
const struct nf_hook_state *state,
struct xt_table *table)
{
unsigned int hook = state->hook;
static const char nulldevname[IFNAMSIZ] __attribute__((aligned(sizeof(long))));
/* Initializing verdict to NF_DROP keeps gcc happy. */
unsigned int verdict = NF_DROP;
const char *indev, *outdev;
const void *table_base;
struct ip6t_entry *e, **jumpstack;
unsigned int stackidx, cpu;
const struct xt_table_info *private;
struct xt_action_param acpar;
unsigned int addend;
/* Initialization */
stackidx = 0;
indev = state->in ? state->in->name : nulldevname;
outdev = state->out ? state->out->name : nulldevname;
/* We handle fragments by dealing with the first fragment as
* if it was a normal packet. All other fragments are treated
* normally, except that they will NEVER match rules that ask
* things we don't know, ie. tcp syn flag or ports). If the
* rule is also a fragment-specific rule, non-fragments won't
* match it. */
acpar.hotdrop = false;
acpar.state = state;
WARN_ON(!(table->valid_hooks & (1 << hook)));
local_bh_disable();
addend = xt_write_recseq_begin();
private = READ_ONCE(table->private); /* Address dependency. */
cpu = smp_processor_id();
table_base = private->entries;
jumpstack = (struct ip6t_entry **)private->jumpstack[cpu];
/* Switch to alternate jumpstack if we're being invoked via TEE.
* TEE issues XT_CONTINUE verdict on original skb so we must not
* clobber the jumpstack.
*
* For recursion via REJECT or SYNPROXY the stack will be clobbered
* but it is no problem since absolute verdict is issued by these.
*/
if (static_key_false(&xt_tee_enabled))
jumpstack += private->stacksize * __this_cpu_read(nf_skb_duplicated);
e = get_entry(table_base, private->hook_entry[hook]);
do {
const struct xt_entry_target *t;
const struct xt_entry_match *ematch;
struct xt_counters *counter;
WARN_ON(!e);
acpar.thoff = 0;
if (!ip6_packet_match(skb, indev, outdev, &e->ipv6,
&acpar.thoff, &acpar.fragoff, &acpar.hotdrop)) {
no_match:
e = ip6t_next_entry(e);
continue;
}
xt_ematch_foreach(ematch, e) {
acpar.match = ematch->u.kernel.match;
acpar.matchinfo = ematch->data;
if (!acpar.match->match(skb, &acpar))
goto no_match;
}
counter = xt_get_this_cpu_counter(&e->counters);
ADD_COUNTER(*counter, skb->len, 1);
t = ip6t_get_target_c(e);
WARN_ON(!t->u.kernel.target);
#if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE)
/* The packet is traced: log it */
if (unlikely(skb->nf_trace))
trace_packet(state->net, skb, hook, state->in,
state->out, table->name, private, e);
#endif
/* Standard target? */
if (!t->u.kernel.target->target) {
int v;
v = ((struct xt_standard_target *)t)->verdict;
if (v < 0) {
/* Pop from stack? */
if (v != XT_RETURN) {
verdict = (unsigned int)(-v) - 1;
break;
}
if (stackidx == 0)
e = get_entry(table_base,
private->underflow[hook]);
else
e = ip6t_next_entry(jumpstack[--stackidx]);
continue;
}
if (table_base + v != ip6t_next_entry(e) &&
!(e->ipv6.flags & IP6T_F_GOTO)) {
jumpstack[stackidx++] = e;
}
e = get_entry(table_base, v);
continue;
}
acpar.target = t->u.kernel.target;
acpar.targinfo = t->data;
verdict = t->u.kernel.target->target(skb, &acpar);
if (verdict == XT_CONTINUE)
e = ip6t_next_entry(e);
else
/* Verdict */
break;
} while (!acpar.hotdrop);
xt_write_recseq_end(addend);
local_bh_enable();
if (acpar.hotdrop)
return NF_DROP;
else return verdict;
}
| 1 |
[
"CWE-476"
] |
linux
|
57ebd808a97d7c5b1e1afb937c2db22beba3c1f8
| 148,808,271,292,760,140,000,000,000,000,000,000,000 | 128 |
netfilter: add back stackpointer size checks
The rationale for removing the check is only correct for rulesets
generated by ip(6)tables.
In iptables, a jump can only occur to a user-defined chain, i.e.
because we size the stack based on number of user-defined chains we
cannot exceed stack size.
However, the underlying binary format has no such restriction,
and the validation step only ensures that the jump target is a
valid rule start point.
IOW, its possible to build a rule blob that has no user-defined
chains but does contain a jump.
If this happens, no jump stack gets allocated, and a crash occurs when
the jump is taken because no jumpstack exists.
Fixes: 7814b6ec6d0d6 ("netfilter: xtables: don't save/restore jumpstack offset")
Reported-by: [email protected]
Signed-off-by: Florian Westphal <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
|
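The fix the message describes boils down to re-checking the jump-stack index against the allocated stack size before pushing a return entry, so a crafted blob with jumps but no user-defined chains cannot dereference an unallocated jumpstack. A sketch in the context of the jump handling shown in ip6t_do_table above (the exact upstream placement may differ):
/* Illustrative fragment: drop the packet instead of pushing past the
 * end of (or into a missing) jumpstack. */
if (table_base + v != ip6t_next_entry(e) &&
    !(e->ipv6.flags & IP6T_F_GOTO)) {
	if (unlikely(stackidx >= private->stacksize)) {
		verdict = NF_DROP;
		break;
	}
	jumpstack[stackidx++] = e;
}
e = get_entry(table_base, v);
continue;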
ZEND_VM_COLD_CONSTCONST_HANDLER(16, ZEND_IS_IDENTICAL, CONST|TMP|VAR|CV, CONST|TMP|VAR|CV, SPEC(COMMUTATIVE))
{
USE_OPLINE
zend_free_op free_op1, free_op2;
zval *op1, *op2;
zend_bool result;
SAVE_OPLINE();
op1 = GET_OP1_ZVAL_PTR_DEREF(BP_VAR_R);
op2 = GET_OP2_ZVAL_PTR_DEREF(BP_VAR_R);
result = fast_is_identical_function(op1, op2);
FREE_OP1();
FREE_OP2();
ZEND_VM_SMART_BRANCH(result, 1);
ZVAL_BOOL(EX_VAR(opline->result.var), result);
ZEND_VM_NEXT_OPCODE_CHECK_EXCEPTION();
}
| 0 |
[
"CWE-787"
] |
php-src
|
f1ce8d5f5839cb2069ea37ff424fb96b8cd6932d
| 216,469,347,371,048,050,000,000,000,000,000,000,000 | 17 |
Fix #73122: Integer Overflow when concatenating strings
We must avoid integer overflows in memory allocations, so we introduce
an additional check in the VM, and bail out in the rare case of an
overflow. Since the recent fix for bug #74960 still doesn't catch all
possible overflows, we fix that right away.
|
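A self-contained sketch of the overflow check the message describes: make sure two lengths (plus a terminator) can be added without wrapping before allocating the result. The helper name and bail-out behaviour are illustrative, not the actual Zend VM change.
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
/* Illustrative helper: concatenate two buffers, failing instead of
 * letting the size computation wrap around SIZE_MAX. */
static char *concat_checked(const char *a, size_t alen,
                            const char *b, size_t blen)
{
    if (alen >= SIZE_MAX - blen)        /* alen + blen + 1 would overflow */
        return NULL;
    char *out = malloc(alen + blen + 1);
    if (!out)
        return NULL;
    memcpy(out, a, alen);
    memcpy(out + alen, b, blen);
    out[alen + blen] = '\0';
    return out;
}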
static int ZEND_FASTCALL ZEND_BW_XOR_SPEC_CONST_CONST_HANDLER(ZEND_OPCODE_HANDLER_ARGS)
{
zend_op *opline = EX(opline);
bitwise_xor_function(&EX_T(opline->result.u.var).tmp_var,
&opline->op1.u.constant,
&opline->op2.u.constant TSRMLS_CC);
ZEND_VM_NEXT_OPCODE();
}
| 0 |
[] |
php-src
|
ce96fd6b0761d98353761bf78d5bfb55291179fd
| 76,731,876,499,685,310,000,000,000,000,000,000,000 | 12 |
- fix #39863, do not accept paths with NULL in them. See http://news.php.net/php.internals/50191, trunk will have the patch later (adding a macro and/or changing (some) APIs. Patch by Rasmus
|
static int cac_read_binary(sc_card_t *card, unsigned int idx,
unsigned char *buf, size_t count, unsigned long flags)
{
cac_private_data_t * priv = CAC_DATA(card);
int r = 0;
u8 *tl = NULL, *val = NULL;
u8 *tl_ptr, *val_ptr, *tlv_ptr, *tl_start;
u8 *cert_ptr;
size_t tl_len, val_len, tlv_len;
size_t len, tl_head_len, cert_len;
u8 cert_type, tag;
SC_FUNC_CALLED(card->ctx, SC_LOG_DEBUG_VERBOSE);
/* if we didn't return it all last time, return the remainder */
if (priv->cached) {
sc_debug(card->ctx, SC_LOG_DEBUG_NORMAL,
"returning cached value idx=%d count=%"SC_FORMAT_LEN_SIZE_T"u",
idx, count);
if (idx > priv->cache_buf_len) {
SC_FUNC_RETURN(card->ctx, SC_LOG_DEBUG_NORMAL, SC_ERROR_FILE_END_REACHED);
}
len = MIN(count, priv->cache_buf_len-idx);
memcpy(buf, &priv->cache_buf[idx], len);
SC_FUNC_RETURN(card->ctx, SC_LOG_DEBUG_NORMAL, len);
}
sc_debug(card->ctx, SC_LOG_DEBUG_NORMAL,
"clearing cache idx=%d count=%"SC_FORMAT_LEN_SIZE_T"u",
idx, count);
if (priv->cache_buf) {
free(priv->cache_buf);
priv->cache_buf = NULL;
priv->cache_buf_len = 0;
}
if (priv->object_type <= 0)
SC_FUNC_RETURN(card->ctx, SC_LOG_DEBUG_NORMAL, SC_ERROR_INTERNAL);
r = cac_read_file(card, CAC_FILE_TAG, &tl, &tl_len);
if (r < 0) {
goto done;
}
r = cac_read_file(card, CAC_FILE_VALUE, &val, &val_len);
if (r < 0)
goto done;
switch (priv->object_type) {
case CAC_OBJECT_TYPE_TLV_FILE:
tlv_len = tl_len + val_len;
priv->cache_buf = malloc(tlv_len);
if (priv->cache_buf == NULL) {
r = SC_ERROR_OUT_OF_MEMORY;
goto done;
}
priv->cache_buf_len = tlv_len;
for (tl_ptr = tl, val_ptr=val, tlv_ptr = priv->cache_buf;
tl_len >= 2 && tlv_len > 0;
val_len -= len, tlv_len -= len, val_ptr += len, tlv_ptr += len) {
/* get the tag and the length */
tl_start = tl_ptr;
if (sc_simpletlv_read_tag(&tl_ptr, tl_len, &tag, &len) != SC_SUCCESS)
break;
tl_head_len = (tl_ptr - tl_start);
sc_simpletlv_put_tag(tag, len, tlv_ptr, tlv_len, &tlv_ptr);
tlv_len -= tl_head_len;
tl_len -= tl_head_len;
/* don't crash on bad data */
if (val_len < len) {
len = val_len;
}
/* if we run out of return space, truncate */
if (tlv_len < len) {
len = tlv_len;
}
memcpy(tlv_ptr, val_ptr, len);
}
break;
case CAC_OBJECT_TYPE_CERT:
/* read file */
sc_debug(card->ctx, SC_LOG_DEBUG_NORMAL,
" obj= cert_file, val_len=%"SC_FORMAT_LEN_SIZE_T"u (0x%04"SC_FORMAT_LEN_SIZE_T"x)",
val_len, val_len);
cert_len = 0;
cert_ptr = NULL;
cert_type = 0;
for (tl_ptr = tl, val_ptr = val; tl_len >= 2;
val_len -= len, val_ptr += len, tl_len -= tl_head_len) {
tl_start = tl_ptr;
if (sc_simpletlv_read_tag(&tl_ptr, tl_len, &tag, &len) != SC_SUCCESS)
break;
tl_head_len = tl_ptr - tl_start;
/* incomplete value */
if (val_len < len)
break;
if (tag == CAC_TAG_CERTIFICATE) {
cert_len = len;
cert_ptr = val_ptr;
}
if (tag == CAC_TAG_CERTINFO) {
if ((len >= 1) && (val_len >=1)) {
cert_type = *val_ptr;
}
}
if (tag == CAC_TAG_MSCUID) {
sc_log_hex(card->ctx, "MSCUID", val_ptr, len);
}
if ((val_len < len) || (tl_len < tl_head_len)) {
break;
}
}
/* if the info byte is 1, then the cert is compressed, decompress it */
if ((cert_type & 0x3) == 1) {
#ifdef ENABLE_ZLIB
r = sc_decompress_alloc(&priv->cache_buf, &priv->cache_buf_len,
cert_ptr, cert_len, COMPRESSION_AUTO);
#else
sc_log(card->ctx, "CAC compression not supported, no zlib");
r = SC_ERROR_NOT_SUPPORTED;
#endif
if (r)
goto done;
} else if (cert_len > 0) {
priv->cache_buf = malloc(cert_len);
if (priv->cache_buf == NULL) {
r = SC_ERROR_OUT_OF_MEMORY;
goto done;
}
priv->cache_buf_len = cert_len;
memcpy(priv->cache_buf, cert_ptr, cert_len);
} else {
sc_log(card->ctx, "Can't read zero-length certificate");
goto done;
}
break;
case CAC_OBJECT_TYPE_GENERIC:
/* TODO
* We have some two buffers in unknown encoding that we
* need to present in PKCS#15 layer.
*/
default:
/* Unknown object type */
sc_log(card->ctx, "Unknown object type: %x", priv->object_type);
r = SC_ERROR_INTERNAL;
goto done;
}
/* OK we've read the data, now copy the required portion out to the callers buffer */
priv->cached = 1;
len = MIN(count, priv->cache_buf_len-idx);
memcpy(buf, &priv->cache_buf[idx], len);
r = len;
done:
if (tl)
free(tl);
if (val)
free(val);
SC_FUNC_RETURN(card->ctx, SC_LOG_DEBUG_NORMAL, r);
}
| 0 |
[
"CWE-415",
"CWE-119"
] |
OpenSC
|
360e95d45ac4123255a4c796db96337f332160ad
| 279,652,844,804,698,900,000,000,000,000,000,000,000 | 166 |
fixed out of bounds writes
Thanks to Eric Sesterhenn from X41 D-SEC GmbH
for reporting the problems.
|
GF_Err tsro_box_size(GF_Box *s)
{
s->size += 4;
return GF_OK;
}
| 0 |
[
"CWE-787"
] |
gpac
|
77510778516803b7f7402d7423c6d6bef50254c3
| 283,997,969,432,080,000,000,000,000,000,000,000,000 | 5 |
fixed #2255
|
static void SFDDumpTtfInstrsExplicit(FILE *sfd,uint8 *ttf_instrs, int16 ttf_instrs_len )
{
char *instrs = _IVUnParseInstrs( ttf_instrs, ttf_instrs_len );
char *pt;
fprintf( sfd, "TtInstrs:\n" );
for ( pt=instrs; *pt!='\0'; ++pt )
putc(*pt,sfd);
if ( pt[-1]!='\n' )
putc('\n',sfd);
free(instrs);
fprintf( sfd, "%s\n", end_tt_instrs );
}
| 0 |
[
"CWE-416"
] |
fontforge
|
048a91e2682c1a8936ae34dbc7bd70291ec05410
| 62,342,185,062,866,690,000,000,000,000,000,000,000 | 12 |
Fix for #4084 Use-after-free (heap) in the SFD_GetFontMetaData() function
Fix for #4086 NULL pointer dereference in the SFDGetSpiros() function
Fix for #4088 NULL pointer dereference in the SFD_AssignLookups() function
Add empty sf->fontname string if it isn't set, fixing #4089 #4090 and many
other potential issues (many downstream calls to strlen() on the value).
|
R_API bool r_buf_prepend_bytes(RBuffer *b, const ut8 *buf, ut64 length) {
r_return_val_if_fail (b && buf && !b->readonly, false);
return r_buf_insert_bytes (b, 0, buf, length) >= 0;
}
| 0 |
[
"CWE-400",
"CWE-703"
] |
radare2
|
634b886e84a5c568d243e744becc6b3223e089cf
| 317,283,941,958,243,520,000,000,000,000,000,000,000 | 4 |
Fix DoS in PE/QNX/DYLDCACHE/PSX parsers ##crash
* Reported by lazymio
* Reproducer: AAA4AAAAAB4=
|
TEST_F(RenameCollectionTest, RenameCollectionForApplyOpsAcrossDatabaseWithTargetUuid) {
_createCollection(_opCtx.get(), _sourceNss);
auto dbName = _sourceNss.db().toString();
auto uuid = UUID::gen();
auto uuidDoc = BSON("ui" << uuid);
auto cmd = BSON("renameCollection" << _sourceNss.ns() << "to" << _targetNssDifferentDb.ns()
<< "dropTarget"
<< true);
ASSERT_OK(renameCollectionForApplyOps(_opCtx.get(), dbName, uuidDoc["ui"], cmd, {}));
ASSERT_FALSE(_collectionExists(_opCtx.get(), _sourceNss));
ASSERT_EQUALS(uuid, _getCollectionUuid(_opCtx.get(), _targetNssDifferentDb));
}
| 0 |
[
"CWE-20"
] |
mongo
|
35c1b1f588f04926a958ad2fe4d9c59d79f81e8b
| 22,292,785,604,600,706,000,000,000,000,000,000,000 | 12 |
SERVER-35636 renameCollectionForApplyOps checks for complete namespace
|
static inline int complete_emulated_io(struct kvm_vcpu *vcpu)
{
int r;
vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
r = kvm_emulate_instruction(vcpu, EMULTYPE_NO_DECODE);
srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
if (r != EMULATE_DONE)
return 0;
return 1;
}
| 0 |
[
"CWE-476"
] |
linux
|
e97f852fd4561e77721bb9a4e0ea9d98305b1e93
| 17,507,847,468,353,647,000,000,000,000,000,000,000 | 10 |
KVM: X86: Fix scan ioapic use-before-initialization
Reported by syzkaller:
BUG: unable to handle kernel NULL pointer dereference at 00000000000001c8
PGD 80000003ec4da067 P4D 80000003ec4da067 PUD 3f7bfa067 PMD 0
Oops: 0000 [#1] PREEMPT SMP PTI
CPU: 7 PID: 5059 Comm: debug Tainted: G OE 4.19.0-rc5 #16
RIP: 0010:__lock_acquire+0x1a6/0x1990
Call Trace:
lock_acquire+0xdb/0x210
_raw_spin_lock+0x38/0x70
kvm_ioapic_scan_entry+0x3e/0x110 [kvm]
vcpu_enter_guest+0x167e/0x1910 [kvm]
kvm_arch_vcpu_ioctl_run+0x35c/0x610 [kvm]
kvm_vcpu_ioctl+0x3e9/0x6d0 [kvm]
do_vfs_ioctl+0xa5/0x690
ksys_ioctl+0x6d/0x80
__x64_sys_ioctl+0x1a/0x20
do_syscall_64+0x83/0x6e0
entry_SYSCALL_64_after_hwframe+0x49/0xbe
The reason is that the testcase writes hyperv synic HV_X64_MSR_SINT6 msr
and triggers scan ioapic logic to load synic vectors into EOI exit bitmap.
However, irqchip is not initialized by this simple testcase, ioapic/apic
objects should not be accessed.
This can be triggered by the following program:
#define _GNU_SOURCE
#include <endian.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
uint64_t r[3] = {0xffffffffffffffff, 0xffffffffffffffff, 0xffffffffffffffff};
int main(void)
{
syscall(__NR_mmap, 0x20000000, 0x1000000, 3, 0x32, -1, 0);
long res = 0;
memcpy((void*)0x20000040, "/dev/kvm", 9);
res = syscall(__NR_openat, 0xffffffffffffff9c, 0x20000040, 0, 0);
if (res != -1)
r[0] = res;
res = syscall(__NR_ioctl, r[0], 0xae01, 0);
if (res != -1)
r[1] = res;
res = syscall(__NR_ioctl, r[1], 0xae41, 0);
if (res != -1)
r[2] = res;
memcpy(
(void*)0x20000080,
"\x01\x00\x00\x00\x00\x5b\x61\xbb\x96\x00\x00\x40\x00\x00\x00\x00\x01\x00"
"\x08\x00\x00\x00\x00\x00\x0b\x77\xd1\x78\x4d\xd8\x3a\xed\xb1\x5c\x2e\x43"
"\xaa\x43\x39\xd6\xff\xf5\xf0\xa8\x98\xf2\x3e\x37\x29\x89\xde\x88\xc6\x33"
"\xfc\x2a\xdb\xb7\xe1\x4c\xac\x28\x61\x7b\x9c\xa9\xbc\x0d\xa0\x63\xfe\xfe"
"\xe8\x75\xde\xdd\x19\x38\xdc\x34\xf5\xec\x05\xfd\xeb\x5d\xed\x2e\xaf\x22"
"\xfa\xab\xb7\xe4\x42\x67\xd0\xaf\x06\x1c\x6a\x35\x67\x10\x55\xcb",
106);
syscall(__NR_ioctl, r[2], 0x4008ae89, 0x20000080);
syscall(__NR_ioctl, r[2], 0xae80, 0);
return 0;
}
This patch fixes it by bailing out scan ioapic if ioapic is not initialized in
kernel.
Reported-by: Wei Wu <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Radim Krčmář <[email protected]>
Cc: Wei Wu <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
Cc: [email protected]
Signed-off-by: Paolo Bonzini <[email protected]>
|
static inline unsigned int shash_align_buffer_size(unsigned len,
unsigned long mask)
{
typedef u8 __aligned_largest u8_aligned;
return len + (mask & ~(__alignof__(u8_aligned) - 1));
}
| 0 |
[
"CWE-787"
] |
linux
|
af3ff8045bbf3e32f1a448542e73abb4c8ceb6f1
| 104,774,452,030,046,270,000,000,000,000,000,000,000 | 6 |
crypto: hmac - require that the underlying hash algorithm is unkeyed
Because the HMAC template didn't check that its underlying hash
algorithm is unkeyed, trying to use "hmac(hmac(sha3-512-generic))"
through AF_ALG or through KEYCTL_DH_COMPUTE resulted in the inner HMAC
being used without having been keyed, resulting in sha3_update() being
called without sha3_init(), causing a stack buffer overflow.
This is a very old bug, but it seems to have only started causing real
problems when SHA-3 support was added (requires CONFIG_CRYPTO_SHA3)
because the innermost hash's state is ->import()ed from a zeroed buffer,
and it just so happens that other hash algorithms are fine with that,
but SHA-3 is not. However, there could be arch or hardware-dependent
hash algorithms also affected; I couldn't test everything.
Fix the bug by introducing a function crypto_shash_alg_has_setkey()
which tests whether a shash algorithm is keyed. Then update the HMAC
template to require that its underlying hash algorithm is unkeyed.
Here is a reproducer:
#include <linux/if_alg.h>
#include <sys/socket.h>
int main()
{
int algfd;
struct sockaddr_alg addr = {
.salg_type = "hash",
.salg_name = "hmac(hmac(sha3-512-generic))",
};
char key[4096] = { 0 };
algfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
bind(algfd, (const struct sockaddr *)&addr, sizeof(addr));
setsockopt(algfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
}
Here was the KASAN report from syzbot:
BUG: KASAN: stack-out-of-bounds in memcpy include/linux/string.h:341 [inline]
BUG: KASAN: stack-out-of-bounds in sha3_update+0xdf/0x2e0 crypto/sha3_generic.c:161
Write of size 4096 at addr ffff8801cca07c40 by task syzkaller076574/3044
CPU: 1 PID: 3044 Comm: syzkaller076574 Not tainted 4.14.0-mm1+ #25
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:17 [inline]
dump_stack+0x194/0x257 lib/dump_stack.c:53
print_address_description+0x73/0x250 mm/kasan/report.c:252
kasan_report_error mm/kasan/report.c:351 [inline]
kasan_report+0x25b/0x340 mm/kasan/report.c:409
check_memory_region_inline mm/kasan/kasan.c:260 [inline]
check_memory_region+0x137/0x190 mm/kasan/kasan.c:267
memcpy+0x37/0x50 mm/kasan/kasan.c:303
memcpy include/linux/string.h:341 [inline]
sha3_update+0xdf/0x2e0 crypto/sha3_generic.c:161
crypto_shash_update+0xcb/0x220 crypto/shash.c:109
shash_finup_unaligned+0x2a/0x60 crypto/shash.c:151
crypto_shash_finup+0xc4/0x120 crypto/shash.c:165
hmac_finup+0x182/0x330 crypto/hmac.c:152
crypto_shash_finup+0xc4/0x120 crypto/shash.c:165
shash_digest_unaligned+0x9e/0xd0 crypto/shash.c:172
crypto_shash_digest+0xc4/0x120 crypto/shash.c:186
hmac_setkey+0x36a/0x690 crypto/hmac.c:66
crypto_shash_setkey+0xad/0x190 crypto/shash.c:64
shash_async_setkey+0x47/0x60 crypto/shash.c:207
crypto_ahash_setkey+0xaf/0x180 crypto/ahash.c:200
hash_setkey+0x40/0x90 crypto/algif_hash.c:446
alg_setkey crypto/af_alg.c:221 [inline]
alg_setsockopt+0x2a1/0x350 crypto/af_alg.c:254
SYSC_setsockopt net/socket.c:1851 [inline]
SyS_setsockopt+0x189/0x360 net/socket.c:1830
entry_SYSCALL_64_fastpath+0x1f/0x96
Reported-by: syzbot <[email protected]>
Cc: <[email protected]>
Signed-off-by: Eric Biggers <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
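The message introduces crypto_shash_alg_has_setkey() so the HMAC template can refuse a keyed underlying hash. A sketch of the idea as a kernel-style fragment; treating "has a ->setkey other than the default no-op handler" as the keyed test is my reading of the description, not a verbatim copy of the patch:
/* Illustrative fragment: an shash algorithm counts as keyed if it
 * overrides the default no-op ->setkey; the hmac template's create
 * routine can then bail out instead of instantiating hmac(<keyed hash>). */
bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
{
	return alg->setkey != shash_no_setkey;
}
/* ...in the hmac create routine (sketch): */
if (crypto_shash_alg_has_setkey(salg))
	return -EINVAL;		/* refuse a keyed underlying hash */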
static int send_release(uint32_t server, uint32_t ciaddr)
{
struct dhcp_packet packet;
/* Fill in: op, htype, hlen, cookie, chaddr, random xid fields,
* client-id option (unless -C), message type option:
*/
init_packet(&packet, DHCPRELEASE);
/* DHCPRELEASE uses ciaddr, not "requested ip", to store IP being released */
packet.ciaddr = ciaddr;
udhcp_add_simple_option(&packet, DHCP_SERVER_ID, server);
bb_info_msg("Sending release...");
/* Note: normally we unicast here since "server" is not zero.
* However, there _are_ people who run "address-less" DHCP servers,
* and reportedly ISC dhcp client and Windows allow that.
*/
return bcast_or_ucast(&packet, ciaddr, server);
}
| 0 |
[
"CWE-119"
] |
busybox
|
352f79acbd759c14399e39baef21fc4ffe180ac2
| 241,134,143,235,755,540,000,000,000,000,000,000,000 | 21 |
udhcpc: fix OPTION_6RD parsing (could overflow its malloced buffer)
Signed-off-by: Denys Vlasenko <[email protected]>
|
qemuProcessReadLog(qemuDomainLogContextPtr logCtxt,
char **msg,
size_t max)
{
char *buf;
ssize_t got;
char *eol;
char *filter_next;
size_t skip;
if ((got = qemuDomainLogContextRead(logCtxt, &buf)) < 0)
return -1;
/* Filter out debug messages from intermediate libvirt process */
filter_next = buf;
while ((eol = strchr(filter_next, '\n'))) {
*eol = '\0';
if (virLogProbablyLogMessage(filter_next) ||
strstr(filter_next, "char device redirected to")) {
skip = (eol + 1) - filter_next;
memmove(filter_next, eol + 1, buf + got - eol);
got -= skip;
} else {
filter_next = eol + 1;
*eol = '\n';
}
}
filter_next = NULL; /* silence false coverity warning */
if (got > 0 &&
buf[got - 1] == '\n') {
buf[got - 1] = '\0';
got--;
}
if (max > 0 && got > max) {
skip = got - max;
if (buf[skip - 1] != '\n' &&
(eol = strchr(buf + skip, '\n')) &&
!virStringIsEmpty(eol + 1))
skip = eol + 1 - buf;
memmove(buf, buf + skip, got - skip + 1);
got -= skip;
}
buf = g_renew(char, buf, got + 1);
*msg = buf;
return 0;
}
| 0 |
[
"CWE-416"
] |
libvirt
|
1ac703a7d0789e46833f4013a3876c2e3af18ec7
| 311,589,794,573,956,700,000,000,000,000,000,000,000 | 51 |
qemu: Add missing lock in qemuProcessHandleMonitorEOF
qemuMonitorUnregister will be called in multiple threads (e.g. threads
in rpc worker pool and the vm event thread). In some cases, it isn't
protected by the monitor lock, which may lead to g_source_unref being
called more than once and, eventually, a use-after-free.
Add the missing lock in qemuProcessHandleMonitorEOF (the only place I
found where the monitor lock is missing).
Suggested-by: Michal Privoznik <[email protected]>
Signed-off-by: Peng Liang <[email protected]>
Signed-off-by: Michal Privoznik <[email protected]>
Reviewed-by: Michal Privoznik <[email protected]>
|
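A sketch of the locking the message adds, assuming the monitor object can be locked with the usual virObjectLock()/virObjectUnlock() helpers; the placement mirrors the description rather than the exact upstream diff.
/* Illustrative fragment: serialize unregistration so concurrent EOF
 * handling cannot drop the monitor's GSource reference twice. */
virObjectLock(mon);
qemuMonitorUnregister(mon);
virObjectUnlock(mon);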
int ssl3_get_client_certificate(SSL *s)
{
int i, ok, al, ret = -1;
X509 *x = NULL;
unsigned long l, nc, llen, n;
const unsigned char *p, *q;
unsigned char *d;
STACK_OF(X509) *sk = NULL;
n = s->method->ssl_get_message(s,
SSL3_ST_SR_CERT_A,
SSL3_ST_SR_CERT_B,
-1, s->max_cert_list, &ok);
if (!ok)
return ((int)n);
if (s->s3->tmp.message_type == SSL3_MT_CLIENT_KEY_EXCHANGE) {
if ((s->verify_mode & SSL_VERIFY_PEER) &&
(s->verify_mode & SSL_VERIFY_FAIL_IF_NO_PEER_CERT)) {
SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE,
SSL_R_PEER_DID_NOT_RETURN_A_CERTIFICATE);
al = SSL_AD_HANDSHAKE_FAILURE;
goto f_err;
}
/*
* If tls asked for a client cert, the client must return a 0 list
*/
if ((s->version > SSL3_VERSION) && s->s3->tmp.cert_request) {
SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE,
SSL_R_TLS_PEER_DID_NOT_RESPOND_WITH_CERTIFICATE_LIST);
al = SSL_AD_UNEXPECTED_MESSAGE;
goto f_err;
}
s->s3->tmp.reuse_message = 1;
return (1);
}
if (s->s3->tmp.message_type != SSL3_MT_CERTIFICATE) {
al = SSL_AD_UNEXPECTED_MESSAGE;
SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, SSL_R_WRONG_MESSAGE_TYPE);
goto f_err;
}
p = d = (unsigned char *)s->init_msg;
if ((sk = sk_X509_new_null()) == NULL) {
SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, ERR_R_MALLOC_FAILURE);
goto err;
}
n2l3(p, llen);
if (llen + 3 != n) {
al = SSL_AD_DECODE_ERROR;
SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, SSL_R_LENGTH_MISMATCH);
goto f_err;
}
for (nc = 0; nc < llen;) {
n2l3(p, l);
if ((l + nc + 3) > llen) {
al = SSL_AD_DECODE_ERROR;
SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE,
SSL_R_CERT_LENGTH_MISMATCH);
goto f_err;
}
q = p;
x = d2i_X509(NULL, &p, l);
if (x == NULL) {
SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, ERR_R_ASN1_LIB);
goto err;
}
if (p != (q + l)) {
al = SSL_AD_DECODE_ERROR;
SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE,
SSL_R_CERT_LENGTH_MISMATCH);
goto f_err;
}
if (!sk_X509_push(sk, x)) {
SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, ERR_R_MALLOC_FAILURE);
goto err;
}
x = NULL;
nc += l + 3;
}
if (sk_X509_num(sk) <= 0) {
/* TLS does not mind 0 certs returned */
if (s->version == SSL3_VERSION) {
al = SSL_AD_HANDSHAKE_FAILURE;
SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE,
SSL_R_NO_CERTIFICATES_RETURNED);
goto f_err;
}
/* Fail for TLS only if we required a certificate */
else if ((s->verify_mode & SSL_VERIFY_PEER) &&
(s->verify_mode & SSL_VERIFY_FAIL_IF_NO_PEER_CERT)) {
SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE,
SSL_R_PEER_DID_NOT_RETURN_A_CERTIFICATE);
al = SSL_AD_HANDSHAKE_FAILURE;
goto f_err;
}
/* No client certificate so digest cached records */
if (s->s3->handshake_buffer && !ssl3_digest_cached_records(s)) {
al = SSL_AD_INTERNAL_ERROR;
goto f_err;
}
} else {
i = ssl_verify_cert_chain(s, sk);
if (i <= 0) {
al = ssl_verify_alarm_type(s->verify_result);
SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE,
SSL_R_CERTIFICATE_VERIFY_FAILED);
goto f_err;
}
}
if (s->session->peer != NULL) /* This should not be needed */
X509_free(s->session->peer);
s->session->peer = sk_X509_shift(sk);
s->session->verify_result = s->verify_result;
/*
* With the current implementation, sess_cert will always be NULL when we
* arrive here.
*/
if (s->session->sess_cert == NULL) {
s->session->sess_cert = ssl_sess_cert_new();
if (s->session->sess_cert == NULL) {
SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, ERR_R_MALLOC_FAILURE);
goto err;
}
}
if (s->session->sess_cert->cert_chain != NULL)
sk_X509_pop_free(s->session->sess_cert->cert_chain, X509_free);
s->session->sess_cert->cert_chain = sk;
/*
* Inconsistency alert: cert_chain does *not* include the peer's own
* certificate, while we do include it in s3_clnt.c
*/
sk = NULL;
ret = 1;
if (0) {
f_err:
ssl3_send_alert(s, SSL3_AL_FATAL, al);
err:
s->state = SSL_ST_ERR;
}
if (x != NULL)
X509_free(x);
if (sk != NULL)
sk_X509_pop_free(sk, X509_free);
return (ret);
}
| 0 |
[
"CWE-362"
] |
openssl
|
3c66a669dfc7b3792f7af0758ea26fe8502ce70c
| 316,980,437,181,037,100,000,000,000,000,000,000,000 | 156 |
Fix PSK handling.
The PSK identity hint should be stored in the SSL_SESSION structure
and not in the parent context (which will overwrite values used
by other SSL structures with the same SSL_CTX).
Use BUF_strndup when copying identity as it may not be null terminated.
Reviewed-by: Tim Hudson <[email protected]>
|
ber_parse_header(STREAM s, int tagval, uint32 *length)
{
int tag, len;
if (tagval > 0xff)
{
in_uint16_be(s, tag);
}
else
{
in_uint8(s, tag);
}
if (tag != tagval)
{
error("expected tag %d, got %d\n", tagval, tag);
return False;
}
in_uint8(s, len);
if (len & 0x80)
{
len &= ~0x80;
*length = 0;
while (len--)
next_be(s, *length);
}
else
*length = len;
return s_check(s);
}
| 0 |
[
"CWE-787"
] |
rdesktop
|
766ebcf6f23ccfe8323ac10242ae6e127d4505d2
| 192,307,064,125,810,200,000,000,000,000,000,000,000 | 33 |
Malicious RDP server security fixes
This commit includes fixes for a set of 21 vulnerabilities in
rdesktop when a malicious RDP server is used.
All vulnerabilities were identified and reported by Eyal Itkin.
* Add rdp_protocol_error function that is used in several fixes
* Refactor of process_bitmap_updates
* Fix possible integer overflow in s_check_rem() on 32bit arch
* Fix memory corruption in process_bitmap_data - CVE-2018-8794
* Fix remote code execution in process_bitmap_data - CVE-2018-8795
* Fix remote code execution in process_plane - CVE-2018-8797
* Fix Denial of Service in mcs_recv_connect_response - CVE-2018-20175
* Fix Denial of Service in mcs_parse_domain_params - CVE-2018-20175
* Fix Denial of Service in sec_parse_crypt_info - CVE-2018-20176
* Fix Denial of Service in sec_recv - CVE-2018-20176
* Fix minor information leak in rdpdr_process - CVE-2018-8791
* Fix Denial of Service in cssp_read_tsrequest - CVE-2018-8792
* Fix remote code execution in cssp_read_tsrequest - CVE-2018-8793
* Fix Denial of Service in process_bitmap_data - CVE-2018-8796
* Fix minor information leak in rdpsnd_process_ping - CVE-2018-8798
* Fix Denial of Service in process_secondary_order - CVE-2018-8799
* Fix remote code execution in in ui_clip_handle_data - CVE-2018-8800
* Fix major information leak in ui_clip_handle_data - CVE-2018-20174
* Fix memory corruption in rdp_in_unistr - CVE-2018-20177
* Fix Denial of Service in process_demand_active - CVE-2018-20178
* Fix remote code execution in lspci_process - CVE-2018-20179
* Fix remote code execution in rdpsnddbg_process - CVE-2018-20180
* Fix remote code execution in seamless_process - CVE-2018-20181
* Fix remote code execution in seamless_process_line - CVE-2018-20182
|
bool operator==(const LowerCaseString& rhs) const { return string_ == rhs.string_; }
| 0 |
[] |
envoy
|
2c60632d41555ec8b3d9ef5246242be637a2db0f
| 305,453,085,153,286,300,000,000,000,000,000,000,000 | 1 |
http: header map security fixes for duplicate headers (#197)
Previously header matching did not match on all headers for
non-inline headers. This patch changes the default behavior to
always logically match on all headers. Multiple individual
headers will be logically concatenated with ',' similar to what
is done with inline headers. This makes the behavior effectively
consistent. This behavior can be temporary reverted by setting
the runtime value "envoy.reloadable_features.header_match_on_all_headers"
to "false".
Targeted fixes have been additionally performed on the following
extensions which make them consider all duplicate headers by default as
a comma concatenated list:
1) Any extension using CEL matching on headers.
2) The header to metadata filter.
3) The JWT filter.
4) The Lua filter.
Like primary header matching used in routing, RBAC, etc. this behavior
can be disabled by setting the runtime value
"envoy.reloadable_features.header_match_on_all_headers" to false.
Finally, the setCopy() header map API previously only set the first
header in the case of duplicate non-inline headers. setCopy() now
behaves similarly to the other set*() APIs and replaces all found
headers with a single value. This may have had security implications
in the extauth filter which uses this API. This behavior can be disabled
by setting the runtime value
"envoy.reloadable_features.http_set_copy_replace_all_headers" to false.
Fixes https://github.com/envoyproxy/envoy-setec/issues/188
Signed-off-by: Matt Klein <[email protected]>
|
void MainWindow::onMultitrackModified()
{
setWindowModified(true);
// Reflect this playlist info onto the producer for keyframes dock.
if (!m_timelineDock->selection().isEmpty()) {
int trackIndex = m_timelineDock->selection().first().y();
int clipIndex = m_timelineDock->selection().first().x();
QScopedPointer<Mlt::ClipInfo> info(m_timelineDock->getClipInfo(trackIndex, clipIndex));
if (info && info->producer && info->producer->is_valid()) {
int expected = info->frame_in;
QScopedPointer<Mlt::ClipInfo> info2(m_timelineDock->getClipInfo(trackIndex, clipIndex - 1));
if (info2 && info2->producer && info2->producer->is_valid()
&& info2->producer->get(kShotcutTransitionProperty)) {
// Factor in a transition left of the clip.
expected -= info2->frame_count;
info->producer->set(kPlaylistStartProperty, info2->start);
} else {
info->producer->set(kPlaylistStartProperty, info->start);
}
if (expected != info->producer->get_int(kFilterInProperty)) {
int delta = expected - info->producer->get_int(kFilterInProperty);
info->producer->set(kFilterInProperty, expected);
emit m_filtersDock->producerInChanged(delta);
}
expected = info->frame_out;
info2.reset(m_timelineDock->getClipInfo(trackIndex, clipIndex + 1));
if (info2 && info2->producer && info2->producer->is_valid()
&& info2->producer->get(kShotcutTransitionProperty)) {
// Factor in a transition right of the clip.
expected += info2->frame_count;
}
if (expected != info->producer->get_int(kFilterOutProperty)) {
int delta = expected - info->producer->get_int(kFilterOutProperty);
info->producer->set(kFilterOutProperty, expected);
emit m_filtersDock->producerOutChanged(delta);
}
}
}
}
| 0 |
[
"CWE-89",
"CWE-327",
"CWE-295"
] |
shotcut
|
f008adc039642307f6ee3378d378cdb842e52c1d
| 328,472,306,940,687,070,000,000,000,000,000,000,000 | 40 |
fix upgrade check is not using TLS correctly
|
R_API int r_bin_java_load_bin(RBinJavaObj *bin, const ut8 *buf, ut64 buf_sz) {
ut64 adv = 0;
R_BIN_JAVA_GLOBAL_BIN = bin;
if (!bin) {
return false;
}
r_bin_java_reset_bin_info (bin);
memcpy ((ut8 *) &bin->cf, buf, 10);
if (memcmp (bin->cf.cafebabe, "\xCA\xFE\xBA\xBE", 4)) {
eprintf ("r_bin_java_new_bin: Invalid header (%02x %02x %02x %02x)\n",
bin->cf.cafebabe[0], bin->cf.cafebabe[1],
bin->cf.cafebabe[2], bin->cf.cafebabe[3]);
return false;
}
if (bin->cf.major[0] == bin->cf.major[1] && bin->cf.major[0] == 0) {
eprintf ("Java CLASS with MACH0 header?\n");
return false;
}
adv += 8;
// -2 so that the cp_count will be parsed
adv += r_bin_java_parse_cp_pool (bin, adv, buf, buf_sz);
if (adv > buf_sz) {
eprintf ("[X] r_bin_java: Error unable to parse remainder of classfile after Constant Pool.\n");
return true;
}
adv += r_bin_java_read_class_file2 (bin, adv, buf, buf_sz);
if (adv > buf_sz) {
eprintf ("[X] r_bin_java: Error unable to parse remainder of classfile after class file info.\n");
return true;
}
IFDBG eprintf("This class: %d %s\n", bin->cf2.this_class, bin->cf2.this_class_name);
IFDBG eprintf("0x%"PFMT64x " Access flags: 0x%04x\n", adv, bin->cf2.access_flags);
adv += r_bin_java_parse_interfaces (bin, adv, buf, buf_sz);
if (adv > buf_sz) {
eprintf ("[X] r_bin_java: Error unable to parse remainder of classfile after Interfaces.\n");
return true;
}
adv += r_bin_java_parse_fields (bin, adv, buf, buf_sz);
if (adv > buf_sz) {
eprintf ("[X] r_bin_java: Error unable to parse remainder of classfile after Fields.\n");
return true;
}
adv += r_bin_java_parse_methods (bin, adv, buf, buf_sz);
if (adv > buf_sz) {
eprintf ("[X] r_bin_java: Error unable to parse remainder of classfile after Methods.\n");
return true;
}
adv += r_bin_java_parse_attrs (bin, adv, buf, buf_sz);
bin->calc_size = adv;
// if (adv > buf_sz) {
// eprintf ("[X] r_bin_java: Error unable to parse remainder of classfile after Attributes.\n");
// return true;
// }
// add_cp_objs_to_sdb(bin);
// add_method_infos_to_sdb(bin);
// add_field_infos_to_sdb(bin);
return true;
}
| 0 |
[
"CWE-787"
] |
radare2
|
9650e3c352f675687bf6c6f65ff2c4a3d0e288fa
| 185,958,795,959,761,040,000,000,000,000,000,000,000 | 59 |
Fix oobread segfault in java arith8.class ##crash
* Reported by Cen Zhang via huntr.dev
|
static int uvc_register_terms(struct uvc_device *dev,
struct uvc_video_chain *chain)
{
struct uvc_streaming *stream;
struct uvc_entity *term;
int ret;
list_for_each_entry(term, &chain->entities, chain) {
if (UVC_ENTITY_TYPE(term) != UVC_TT_STREAMING)
continue;
stream = uvc_stream_by_id(dev, term->id);
if (stream == NULL) {
uvc_printk(KERN_INFO, "No streaming interface found "
"for terminal %u.", term->id);
continue;
}
stream->chain = chain;
ret = uvc_register_video(dev, stream);
if (ret < 0)
return ret;
/* Register a metadata node, but ignore a possible failure,
* complete registration of video nodes anyway.
*/
uvc_meta_register(stream);
term->vdev = &stream->vdev;
}
return 0;
}
| 0 |
[
"CWE-269"
] |
linux
|
68035c80e129c4cfec659aac4180354530b26527
| 172,000,632,023,694,200,000,000,000,000,000,000,000 | 33 |
media: uvcvideo: Avoid cyclic entity chains due to malformed USB descriptors
Way back in 2017, fuzzing the 4.14-rc2 USB stack with syzkaller kicked
up the following WARNING from the UVC chain scanning code:
| list_add double add: new=ffff880069084010, prev=ffff880069084010,
| next=ffff880067d22298.
| ------------[ cut here ]------------
| WARNING: CPU: 1 PID: 1846 at lib/list_debug.c:31 __list_add_valid+0xbd/0xf0
| Modules linked in:
| CPU: 1 PID: 1846 Comm: kworker/1:2 Not tainted
| 4.14.0-rc2-42613-g1488251d1a98 #238
| Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
| Workqueue: usb_hub_wq hub_event
| task: ffff88006b01ca40 task.stack: ffff880064358000
| RIP: 0010:__list_add_valid+0xbd/0xf0 lib/list_debug.c:29
| RSP: 0018:ffff88006435ddd0 EFLAGS: 00010286
| RAX: 0000000000000058 RBX: ffff880067d22298 RCX: 0000000000000000
| RDX: 0000000000000058 RSI: ffffffff85a58800 RDI: ffffed000c86bbac
| RBP: ffff88006435dde8 R08: 1ffff1000c86ba52 R09: 0000000000000000
| R10: 0000000000000002 R11: 0000000000000000 R12: ffff880069084010
| R13: ffff880067d22298 R14: ffff880069084010 R15: ffff880067d222a0
| FS: 0000000000000000(0000) GS:ffff88006c900000(0000) knlGS:0000000000000000
| CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
| CR2: 0000000020004ff2 CR3: 000000006b447000 CR4: 00000000000006e0
| Call Trace:
| __list_add ./include/linux/list.h:59
| list_add_tail+0x8c/0x1b0 ./include/linux/list.h:92
| uvc_scan_chain_forward.isra.8+0x373/0x416
| drivers/media/usb/uvc/uvc_driver.c:1471
| uvc_scan_chain drivers/media/usb/uvc/uvc_driver.c:1585
| uvc_scan_device drivers/media/usb/uvc/uvc_driver.c:1769
| uvc_probe+0x77f2/0x8f00 drivers/media/usb/uvc/uvc_driver.c:2104
Looking into the output from usbmon, the interesting part is the
following data packet:
ffff880069c63e00 30710169 C Ci:1:002:0 0 143 = 09028f00 01030080
00090403 00000e01 00000924 03000103 7c003328 010204db
If we drop the lead configuration and interface descriptors, we're left
with an output terminal descriptor describing a generic display:
/* Output terminal descriptor */
buf[0] 09
buf[1] 24
buf[2] 03 /* UVC_VC_OUTPUT_TERMINAL */
buf[3] 00 /* ID */
buf[4] 01 /* type == 0x0301 (UVC_OTT_DISPLAY) */
buf[5] 03
buf[6] 7c
buf[7] 00 /* source ID refers to self! */
buf[8] 33
The problem with this descriptor is that it is self-referential: the
source ID of 0 matches itself! This causes the 'struct uvc_entity'
representing the display to be added to its chain list twice during
'uvc_scan_chain()': once via 'uvc_scan_chain_entity()' when it is
processed directly from the 'dev->entities' list and then again
immediately afterwards when trying to follow the source ID in
'uvc_scan_chain_forward()'
Add a check before adding an entity to a chain list to ensure that the
entity is not already part of a chain.
Link: https://lore.kernel.org/linux-media/CAAeHK+z+Si69jUR+N-SjN9q4O+o5KFiNManqEa-PjUta7EOb7A@mail.gmail.com/
Cc: <[email protected]>
Fixes: c0efd232929c ("V4L/DVB (8145a): USB Video Class driver")
Reported-by: Andrey Konovalov <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Laurent Pinchart <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
|
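A sketch of the check the message calls for before linking an entity into a chain: if the entity's chain list node is already in use (assuming it is initialized when the entity is allocated), refuse to add it again. The placement, log text and error code are assumptions.
/* Illustrative fragment: a self-referential descriptor makes the same
 * entity reachable twice; do not add it to a chain a second time. */
if (!list_empty(&entity->chain)) {
	uvc_printk(KERN_INFO, "Found reference to entity %d already in chain.\n",
		   entity->id);
	return -EINVAL;
}
list_add_tail(&entity->chain, &chain->entities);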
static rpmRC rpmpkgRead(rpmKeyring keyring, rpmVSFlags vsflags,
FD_t fd,
Header * hdrp, unsigned int *keyidp, char **msg)
{
pgpDigParams sig = NULL;
Header sigh = NULL;
rpmTagVal sigtag;
struct rpmtd_s sigtd;
struct sigtInfo_s sinfo;
Header h = NULL;
rpmRC rc = RPMRC_FAIL; /* assume failure */
int leadtype = -1;
headerGetFlags hgeflags = HEADERGET_DEFAULT;
if (hdrp) *hdrp = NULL;
rpmtdReset(&sigtd);
if ((rc = rpmLeadRead(fd, NULL, &leadtype, msg)) != RPMRC_OK) {
/* Avoid message spew on manifests */
if (rc == RPMRC_NOTFOUND) {
*msg = _free(*msg);
}
goto exit;
}
/* Read the signature header. */
rc = rpmReadSignature(fd, &sigh, RPMSIGTYPE_HEADERSIG, msg);
if (rc != RPMRC_OK) {
goto exit;
}
#define _chk(_mask, _tag) \
(sigtag == 0 && !(vsflags & (_mask)) && headerIsEntry(sigh, (_tag)))
/*
* Figger the most effective means of verification available, prefer
* signatures over digests. Legacy header+payload entries are not used.
* DSA will be preferred over RSA if both exist because tested first.
*/
sigtag = 0;
if (_chk(RPMVSF_NODSAHEADER, RPMSIGTAG_DSA)) {
sigtag = RPMSIGTAG_DSA;
} else if (_chk(RPMVSF_NORSAHEADER, RPMSIGTAG_RSA)) {
sigtag = RPMSIGTAG_RSA;
} else if (_chk(RPMVSF_NOSHA1HEADER, RPMSIGTAG_SHA1)) {
sigtag = RPMSIGTAG_SHA1;
}
/* Read the metadata, computing digest(s) on the fly. */
h = NULL;
rc = rpmpkgReadHeader(keyring, vsflags, fd, &h, msg);
if (rc != RPMRC_OK || h == NULL) {
goto exit;
}
/* Any digests or signatures to check? */
if (sigtag == 0) {
rc = RPMRC_OK;
goto exit;
}
/* Free up any previous "ok" message before signature/digest check */
*msg = _free(*msg);
/* Retrieve the tag parameters from the signature header. */
if (!headerGet(sigh, sigtag, &sigtd, hgeflags)) {
rc = RPMRC_FAIL;
goto exit;
}
if (rpmSigInfoParse(&sigtd, "package", &sinfo, &sig, msg) == RPMRC_OK) {
struct rpmtd_s utd;
DIGEST_CTX ctx = rpmDigestInit(sinfo.hashalgo, RPMDIGEST_NONE);
if (headerGet(h, RPMTAG_HEADERIMMUTABLE, &utd, hgeflags)) {
rpmDigestUpdate(ctx, rpm_header_magic, sizeof(rpm_header_magic));
rpmDigestUpdate(ctx, utd.data, utd.count);
rpmtdFreeData(&utd);
}
/** @todo Implement disable/enable/warn/error/anal policy. */
rc = rpmVerifySignature(keyring, &sigtd, sig, ctx, msg);
rpmDigestFinal(ctx, NULL, NULL, 0);
} else {
rc = RPMRC_FAIL;
}
exit:
if (rc != RPMRC_FAIL && h != NULL && hdrp != NULL) {
/* Retrofit RPMTAG_SOURCEPACKAGE to srpms for compatibility */
if (leadtype == RPMLEAD_SOURCE && headerIsSource(h)) {
if (!headerIsEntry(h, RPMTAG_SOURCEPACKAGE)) {
uint32_t one = 1;
headerPutUint32(h, RPMTAG_SOURCEPACKAGE, &one, 1);
}
}
/*
* Try to make sure binary rpms have RPMTAG_SOURCERPM set as that's
* what we use for differentiating binary vs source elsewhere.
*/
if (!headerIsEntry(h, RPMTAG_SOURCEPACKAGE) && headerIsSource(h)) {
headerPutString(h, RPMTAG_SOURCERPM, "(none)");
}
/*
* Convert legacy headers on the fly. Not having immutable region
* equals a truly ancient package, do full retrofit. OTOH newer
* packages might have been built with --nodirtokens, test and handle
* the non-compressed filelist case separately.
*/
if (!headerIsEntry(h, RPMTAG_HEADERIMMUTABLE))
headerConvert(h, HEADERCONV_RETROFIT_V3);
else if (headerIsEntry(h, RPMTAG_OLDFILENAMES))
headerConvert(h, HEADERCONV_COMPRESSFILELIST);
/* Append (and remap) signature tags to the metadata. */
headerMergeLegacySigs(h, sigh);
/* Bump reference count for return. */
*hdrp = headerLink(h);
if (keyidp)
*keyidp = getKeyid(sig);
}
rpmtdFreeData(&sigtd);
h = headerFree(h);
pgpDigParamsFree(sig);
sigh = rpmFreeSignature(sigh);
return rc;
}
| 0 |
[] |
rpm
|
8e847d52c811e9a57239e18672d40f781e0ec48e
| 24,508,561,729,324,430,000,000,000,000,000,000,000 | 133 |
Sanity check that there is at least one tag in header region
|
static int add_memory_block(unsigned long base_section_nr)
{
int section_count = 0;
unsigned long nr;
for (nr = base_section_nr; nr < base_section_nr + sections_per_block;
nr++)
if (present_section_nr(nr))
section_count++;
if (section_count == 0)
return 0;
return init_memory_block(memory_block_id(base_section_nr),
MEM_ONLINE);
}
| 0 |
[
"CWE-787"
] |
linux
|
aa838896d87af561a33ecefea1caa4c15a68bc47
| 51,674,717,449,115,355,000,000,000,000,000,000,000 | 15 |
drivers core: Use sysfs_emit and sysfs_emit_at for show(device *...) functions
Convert the various sprintf fmaily calls in sysfs device show functions
to sysfs_emit and sysfs_emit_at for PAGE_SIZE buffer safety.
Done with:
$ spatch -sp-file sysfs_emit_dev.cocci --in-place --max-width=80 .
And cocci script:
$ cat sysfs_emit_dev.cocci
@@
identifier d_show;
identifier dev, attr, buf;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
return
- sprintf(buf,
+ sysfs_emit(buf,
...);
...>
}
@@
identifier d_show;
identifier dev, attr, buf;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
return
- snprintf(buf, PAGE_SIZE,
+ sysfs_emit(buf,
...);
...>
}
@@
identifier d_show;
identifier dev, attr, buf;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
return
- scnprintf(buf, PAGE_SIZE,
+ sysfs_emit(buf,
...);
...>
}
@@
identifier d_show;
identifier dev, attr, buf;
expression chr;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
return
- strcpy(buf, chr);
+ sysfs_emit(buf, chr);
...>
}
@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
len =
- sprintf(buf,
+ sysfs_emit(buf,
...);
...>
return len;
}
@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
len =
- snprintf(buf, PAGE_SIZE,
+ sysfs_emit(buf,
...);
...>
return len;
}
@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
len =
- scnprintf(buf, PAGE_SIZE,
+ sysfs_emit(buf,
...);
...>
return len;
}
@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
- len += scnprintf(buf + len, PAGE_SIZE - len,
+ len += sysfs_emit_at(buf, len,
...);
...>
return len;
}
@@
identifier d_show;
identifier dev, attr, buf;
expression chr;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
...
- strcpy(buf, chr);
- return strlen(buf);
+ return sysfs_emit(buf, chr);
}
Signed-off-by: Joe Perches <[email protected]>
Link: https://lore.kernel.org/r/3d033c33056d88bbe34d4ddb62afd05ee166ab9a.1600285923.git.joe@perches.com
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
uint32_t smb2cli_conn_max_trans_size(struct smbXcli_conn *conn)
{
return conn->smb2.server.max_trans_size;
}
| 0 |
[
"CWE-20"
] |
samba
|
a819d2b440aafa3138d95ff6e8b824da885a70e9
| 111,874,072,251,336,100,000,000,000,000,000,000,000 | 4 |
CVE-2015-5296: libcli/smb: make sure we require signing when we demand encryption on a session
BUG: https://bugzilla.samba.org/show_bug.cgi?id=11536
Signed-off-by: Stefan Metzmacher <[email protected]>
Reviewed-by: Jeremy Allison <[email protected]>
|
int sqlite3Fts3GetVarint(const char *pBuf, sqlite_int64 *v){
const unsigned char *p = (const unsigned char*)pBuf;
const unsigned char *pStart = p;
u32 a;
u64 b;
int shift;
GETVARINT_INIT(a, p, 0, 0x00, 0x80, *v, 1);
GETVARINT_STEP(a, p, 7, 0x7F, 0x4000, *v, 2);
GETVARINT_STEP(a, p, 14, 0x3FFF, 0x200000, *v, 3);
GETVARINT_STEP(a, p, 21, 0x1FFFFF, 0x10000000, *v, 4);
b = (a & 0x0FFFFFFF );
for(shift=28; shift<=63; shift+=7){
u64 c = *p++;
b += (c&0x7F) << shift;
if( (c & 0x80)==0 ) break;
}
*v = b;
return (int)(p - pStart);
}
| 0 |
[
"CWE-787"
] |
sqlite
|
c72f2fb7feff582444b8ffdc6c900c69847ce8a9
| 216,372,937,620,268,800,000,000,000,000,000,000,000 | 21 |
More improvements to shadow table corruption detection in FTS3.
FossilOrigin-Name: 51525f9c3235967bc00a090e84c70a6400698c897aa4742e817121c725b8c99d
|
static void parse_capture(dictionary *ubridge_config, const char *bridge_name, bridge_t *bridge)
{
const char *pcap_file = NULL;
const char *pcap_linktype = "EN10MB";
getstr(ubridge_config, bridge_name, "pcap_protocol", &pcap_linktype);
if (getstr(ubridge_config, bridge_name, "pcap_file", &pcap_file)) {
printf("Starting packet capture to %s with protocol %s\n", pcap_file, pcap_linktype);
bridge->capture = create_pcap_capture(pcap_file, pcap_linktype);
}
}
| 0 |
[
"CWE-200",
"CWE-269"
] |
ubridge
|
2eb0d1dab6a6de76cf3556130a2d52af101077db
| 20,729,611,538,078,477,000,000,000,000,000,000,000 | 11 |
Hide errored line content by default when parsing the configuration INI file
|
file_asynch_zero (struct rw *rw, struct command *command,
nbd_completion_callback cb, bool allocate)
{
int dummy = 0;
if (!file_synch_zero (rw, command->offset, command->slice.len, allocate))
return false;
if (cb.callback (cb.user_data, &dummy) == -1) {
perror (rw->name);
exit (EXIT_FAILURE);
}
return true;
}
| 1 |
[
"CWE-252"
] |
libnbd
|
8d444b41d09a700c7ee6f9182a649f3f2d325abb
| 93,632,573,361,955,670,000,000,000,000,000,000,000 | 13 |
copy: CVE-2022-0485: Fail nbdcopy if NBD read or write fails
nbdcopy has a nasty bug when performing multi-threaded copies using
asynchronous nbd calls - it was blindly treating the completion of an
asynchronous command as successful, rather than checking the *error
parameter. This can result in the silent creation of a corrupted
image in two different ways: when a read fails, we blindly wrote
garbage to the destination; when a write fails, we did not flag that
the destination was not written.
Since nbdcopy already calls exit() on a synchronous read or write
failure to a file, doing the same for an asynchronous op to an NBD
server is the simplest solution. A nicer solution, but more invasive
to code and thus not done here, might be to allow up to N retries of
the transaction (in case the read or write failure was transient), or
even having a mode where as much data is copied as possible (portions
of the copy that failed would be logged on stderr, and nbdcopy would
still fail with a non-zero exit status, but this would copy more than
just stopping at the first error, as can be done with rsync or
ddrescue).
Note that since we rely on auto-retiring and do NOT call
nbd_aio_command_completed, our completion callbacks must always return
1 (if they do not exit() first), even when acting on *error, so as not
leave the command allocated until nbd_close. As such, there is no
sane way to return an error to a manual caller of the callback, and
therefore we can drop dead code that calls perror() and exit() if the
callback "failed". It is also worth documenting the contract on when
we must manually call the callback during the asynch_zero callback, so
that we do not leak or double-free the command; thankfully, all the
existing code paths were correct.
The added testsuite script demonstrates several scenarios, some of
which fail without the rest of this patch in place, and others which
showcase ways in which sparse images can bypass errors.
Once backports are complete, a followup patch on the main branch will
edit docs/libnbd-security.pod with the mailing list announcement of
the stable branch commit ids and release versions that incorporate
this fix.
Reported-by: Nir Soffer <[email protected]>
Fixes: bc896eec4d ("copy: Implement multi-conn, multiple threads, multiple requests in flight.", v1.5.6)
Fixes: https://bugzilla.redhat.com/2046194
Message-Id: <[email protected]>
Acked-by: Richard W.M. Jones <[email protected]>
Acked-by: Nir Soffer <[email protected]>
[eblake: fix error message per Nir, tweak requires lines in unit test per Rich]
Reviewed-by: Laszlo Ersek <[email protected]>
|
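A sketch of the completion-callback behaviour the message describes for file_asynch_zero's NBD counterparts: inspect *error instead of assuming success, exit on failure, and always return 1 so the auto-retired command is freed. The callback name is illustrative; the (void *user_data, int *error) shape is libnbd's completion callback signature.
/* Illustrative completion callback: abort the copy if the asynchronous
 * NBD read/write reported an error, otherwise let auto-retirement free
 * the command by returning 1. */
static int
copy_completed (void *user_data, int *error)
{
  struct rw *rw = user_data;	/* assumed to carry a printable name */
  if (*error) {
    errno = *error;
    perror (rw->name);
    exit (EXIT_FAILURE);
  }
  return 1;			/* auto-retire */
}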
uint8_t* FAST_FUNC udhcp_get_option32(struct dhcp_packet *packet, int code)
{
uint8_t *r = udhcp_get_option(packet, code);
if (r) {
if (r[-1] != 4)
r = NULL;
}
return r;
}
| 1 |
[
"CWE-125"
] |
busybox
|
74d9f1ba37010face4bd1449df4d60dd84450b06
| 241,600,937,410,668,120,000,000,000,000,000,000,000 | 9 |
udhcpc: when decoding DHCP_SUBNET, ensure it is 4 bytes long
function old new delta
udhcp_run_script 795 801 +6
Signed-off-by: Denys Vlasenko <[email protected]>
|
void qemu_chr_fe_take_focus(CharBackend *b)
{
if (!b->chr) {
return;
}
if (b->chr->is_mux) {
mux_set_focus(b->chr->opaque, b->tag);
}
}
| 0 |
[
"CWE-416"
] |
qemu
|
a4afa548fc6dd9842ed86639b4d37d4d1c4ad480
| 241,843,425,439,471,330,000,000,000,000,000,000,000 | 10 |
char: move front end handlers in CharBackend
Since the hanlders are associated with a CharBackend, rather than the
CharDriverState, it is more appropriate to store in CharBackend. This
avoids the handler copy dance in qemu_chr_fe_set_handlers() then
mux_chr_update_read_handler(), by storing the CharBackend pointer
directly.
Also a mux CharDriver should go through mux->backends[focused], since
chr->be will stay NULL. Before that, it was possible to call
chr->handler by mistake with surprising results, for ex through
qemu_chr_be_can_write(), which would result in calling the last set
handler front end, not the one with focus.
Signed-off-by: Marc-André Lureau <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
static MagickBooleanType RenderPostscript(Image *image,
const DrawInfo *draw_info,const PointInfo *offset,TypeMetric *metrics)
{
char
filename[MaxTextExtent],
geometry[MaxTextExtent],
*text;
FILE
*file;
Image
*annotate_image;
ImageInfo
*annotate_info;
int
unique_file;
MagickBooleanType
identity;
PointInfo
extent,
point,
resolution;
register ssize_t
i;
size_t
length;
ssize_t
y;
/*
Render label with a Postscript font.
*/
if (image->debug != MagickFalse)
(void) LogMagickEvent(AnnotateEvent,GetMagickModule(),
"Font %s; pointsize %g",draw_info->font != (char *) NULL ?
draw_info->font : "none",draw_info->pointsize);
file=(FILE *) NULL;
unique_file=AcquireUniqueFileResource(filename);
if (unique_file != -1)
file=fdopen(unique_file,"wb");
if ((unique_file == -1) || (file == (FILE *) NULL))
{
ThrowFileException(&image->exception,FileOpenError,"UnableToOpenFile",
filename);
return(MagickFalse);
}
(void) FormatLocaleFile(file,"%%!PS-Adobe-3.0\n");
(void) FormatLocaleFile(file,"/ReencodeType\n");
(void) FormatLocaleFile(file,"{\n");
(void) FormatLocaleFile(file," findfont dup length\n");
(void) FormatLocaleFile(file,
" dict begin { 1 index /FID ne {def} {pop pop} ifelse } forall\n");
(void) FormatLocaleFile(file,
" /Encoding ISOLatin1Encoding def currentdict end definefont pop\n");
(void) FormatLocaleFile(file,"} bind def\n");
/*
Sample to compute bounding box.
*/
identity=(fabs(draw_info->affine.sx-draw_info->affine.sy) < MagickEpsilon) &&
(fabs(draw_info->affine.rx) < MagickEpsilon) &&
(fabs(draw_info->affine.ry) < MagickEpsilon) ? MagickTrue : MagickFalse;
extent.x=0.0;
extent.y=0.0;
length=strlen(draw_info->text);
for (i=0; i <= (ssize_t) (length+2); i++)
{
point.x=fabs(draw_info->affine.sx*i*draw_info->pointsize+
draw_info->affine.ry*2.0*draw_info->pointsize);
point.y=fabs(draw_info->affine.rx*i*draw_info->pointsize+
draw_info->affine.sy*2.0*draw_info->pointsize);
if (point.x > extent.x)
extent.x=point.x;
if (point.y > extent.y)
extent.y=point.y;
}
(void) FormatLocaleFile(file,"%g %g moveto\n",identity != MagickFalse ? 0.0 :
extent.x/2.0,extent.y/2.0);
(void) FormatLocaleFile(file,"%g %g scale\n",draw_info->pointsize,
draw_info->pointsize);
if ((draw_info->font == (char *) NULL) || (*draw_info->font == '\0') ||
(strchr(draw_info->font,'/') != (char *) NULL))
(void) FormatLocaleFile(file,
"/Times-Roman-ISO dup /Times-Roman ReencodeType findfont setfont\n");
else
(void) FormatLocaleFile(file,
"/%s-ISO dup /%s ReencodeType findfont setfont\n",draw_info->font,
draw_info->font);
(void) FormatLocaleFile(file,"[%g %g %g %g 0 0] concat\n",
draw_info->affine.sx,-draw_info->affine.rx,-draw_info->affine.ry,
draw_info->affine.sy);
text=EscapeParenthesis(draw_info->text);
if (identity == MagickFalse)
(void) FormatLocaleFile(file,"(%s) stringwidth pop -0.5 mul -0.5 rmoveto\n",
text);
(void) FormatLocaleFile(file,"(%s) show\n",text);
text=DestroyString(text);
(void) FormatLocaleFile(file,"showpage\n");
(void) fclose(file);
(void) FormatLocaleString(geometry,MaxTextExtent,"%.20gx%.20g+0+0!",
floor(extent.x+0.5),floor(extent.y+0.5));
annotate_info=AcquireImageInfo();
(void) FormatLocaleString(annotate_info->filename,MaxTextExtent,"ps:%s",
filename);
(void) CloneString(&annotate_info->page,geometry);
if (draw_info->density != (char *) NULL)
(void) CloneString(&annotate_info->density,draw_info->density);
annotate_info->antialias=draw_info->text_antialias;
annotate_image=ReadImage(annotate_info,&image->exception);
CatchException(&image->exception);
annotate_info=DestroyImageInfo(annotate_info);
(void) RelinquishUniqueFileResource(filename);
if (annotate_image == (Image *) NULL)
return(MagickFalse);
resolution.x=DefaultResolution;
resolution.y=DefaultResolution;
if (draw_info->density != (char *) NULL)
{
GeometryInfo
geometry_info;
MagickStatusType
flags;
flags=ParseGeometry(draw_info->density,&geometry_info);
resolution.x=geometry_info.rho;
resolution.y=geometry_info.sigma;
if ((flags & SigmaValue) == 0)
resolution.y=resolution.x;
}
if (identity == MagickFalse)
(void) TransformImage(&annotate_image,"0x0",(char *) NULL);
else
{
RectangleInfo
crop_info;
crop_info=GetImageBoundingBox(annotate_image,&annotate_image->exception);
crop_info.height=(size_t) ((resolution.y/DefaultResolution)*
ExpandAffine(&draw_info->affine)*draw_info->pointsize+0.5);
crop_info.y=(ssize_t) ceil((resolution.y/DefaultResolution)*extent.y/8.0-
0.5);
(void) FormatLocaleString(geometry,MaxTextExtent,
"%.20gx%.20g%+.20g%+.20g",(double) crop_info.width,(double)
crop_info.height,(double) crop_info.x,(double) crop_info.y);
(void) TransformImage(&annotate_image,geometry,(char *) NULL);
}
metrics->pixels_per_em.x=(resolution.y/DefaultResolution)*
ExpandAffine(&draw_info->affine)*draw_info->pointsize;
metrics->pixels_per_em.y=metrics->pixels_per_em.x;
metrics->ascent=metrics->pixels_per_em.x;
metrics->descent=metrics->pixels_per_em.y/-5.0;
metrics->width=(double) annotate_image->columns/
ExpandAffine(&draw_info->affine);
metrics->height=1.152*metrics->pixels_per_em.x;
metrics->max_advance=metrics->pixels_per_em.x;
metrics->bounds.x1=0.0;
metrics->bounds.y1=metrics->descent;
metrics->bounds.x2=metrics->ascent+metrics->descent;
metrics->bounds.y2=metrics->ascent+metrics->descent;
metrics->underline_position=(-2.0);
metrics->underline_thickness=1.0;
if (draw_info->render == MagickFalse)
{
annotate_image=DestroyImage(annotate_image);
return(MagickTrue);
}
if (draw_info->fill.opacity != TransparentOpacity)
{
ExceptionInfo
*exception;
MagickBooleanType
sync;
PixelPacket
fill_color;
CacheView
*annotate_view;
/*
Render fill color.
*/
if (image->matte == MagickFalse)
(void) SetImageAlphaChannel(image,OpaqueAlphaChannel);
if (annotate_image->matte == MagickFalse)
(void) SetImageAlphaChannel(annotate_image,OpaqueAlphaChannel);
fill_color=draw_info->fill;
exception=(&image->exception);
annotate_view=AcquireAuthenticCacheView(annotate_image,exception);
for (y=0; y < (ssize_t) annotate_image->rows; y++)
{
register ssize_t
x;
register PixelPacket
*magick_restrict q;
q=GetCacheViewAuthenticPixels(annotate_view,0,y,annotate_image->columns,
1,exception);
if (q == (PixelPacket *) NULL)
break;
for (x=0; x < (ssize_t) annotate_image->columns; x++)
{
(void) GetFillColor(draw_info,x,y,&fill_color);
SetPixelAlpha(q,ClampToQuantum((((QuantumRange-GetPixelIntensity(
annotate_image,q))*(QuantumRange-fill_color.opacity))/
QuantumRange)));
SetPixelRed(q,fill_color.red);
SetPixelGreen(q,fill_color.green);
SetPixelBlue(q,fill_color.blue);
q++;
}
sync=SyncCacheViewAuthenticPixels(annotate_view,exception);
if (sync == MagickFalse)
break;
}
annotate_view=DestroyCacheView(annotate_view);
(void) CompositeImage(image,OverCompositeOp,annotate_image,
(ssize_t) ceil(offset->x-0.5),(ssize_t) ceil(offset->y-(metrics->ascent+
metrics->descent)-0.5));
}
annotate_image=DestroyImage(annotate_image);
return(MagickTrue);
}
| 0 |
[
"CWE-125"
] |
ImageMagick6
|
f6ffc702c6eecd963587273a429dcd608c648984
| 315,218,130,775,010,200,000,000,000,000,000,000,000 | 233 |
https://github.com/ImageMagick/ImageMagick/issues/1588
|
//! Fill pixel values along the Z-axis at a specified pixel position \overloading.
CImg<T>& fillZ(const unsigned int x, const unsigned int y, const unsigned int c, const double a0, ...) {
const ulongT wh = (ulongT)_width*_height;
if (x<_width && y<_height && c<_spectrum) _cimg_fill1(x,y,0,c,wh,_depth,double);
return *this;
| 0 |
[
"CWE-125"
] |
CImg
|
10af1e8c1ad2a58a0a3342a856bae63e8f257abb
| 26,084,850,221,884,940,000,000,000,000,000,000,000 | 5 |
Fix other issues in 'CImg<T>::load_bmp()'.
|
static void fuse_do_prepare_interrupt(fuse_req_t req, struct fuse_intr_data *d)
{
d->id = pthread_self();
pthread_cond_init(&d->cond, NULL);
d->finished = 0;
fuse_req_interrupt_func(req, fuse_interrupt, d);
}
| 0 |
[] |
ntfs-3g
|
fb28eef6f1c26170566187c1ab7dc913a13ea43c
| 3,828,533,961,322,445,500,000,000,000,000,000,000 | 7 |
Hardened the checking of directory offset requested by a readdir
When asked for the next directory entries, make sure the chunk offset
is within valid values, otherwise return no more entries in chunk.
|
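The hardening described in the ntfs-3g message above amounts to validating a caller-supplied directory offset before it is used as an index; a small illustrative sketch, with an invented fill context rather than ntfs-3g's real types:

#include <stddef.h>
#include <stdint.h>

struct fill_ctx {
    const char **entries;    /* cached directory entries */
    size_t nr_entries;       /* number of valid entries */
};

/* Return the entry at 'offset', or NULL ("no more entries in chunk")
 * when the requested offset is outside the valid range. */
static const char *
next_entry(const struct fill_ctx *ctx, int64_t offset)
{
    if (offset < 0 || (uint64_t) offset >= ctx->nr_entries)
        return NULL;         /* bogus offset: end the chunk, do not index */
    return ctx->entries[offset];
}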
void DNP3_Base::PrecomputeCRCTable()
{
for( unsigned int i = 0; i < 256; i++)
{
unsigned int crc = i;
for ( unsigned int j = 0; j < 8; ++j )
{
if ( crc & 0x0001 )
crc = (crc >> 1) ^ 0xA6BC; // Generating polynomial.
else
crc >>= 1;
}
crc_table[i] = crc;
}
}
| 0 |
[
"CWE-119",
"CWE-787"
] |
bro
|
6cedd67c381ff22fde653adf02ee31caf66c81a0
| 215,580,513,702,955,260,000,000,000,000,000,000,000 | 17 |
DNP3: fix reachable assertion and buffer over-read/overflow.
A DNP3 packet using a link layer header that specifies a zero length can
trigger an assertion failure if assertions are enabled. Assertions are
enabled unless Bro is compiled with the NDEBUG preprocessor macro
defined. The default configuration of Bro will define this macro and so
disables assertions, but using the --enable-debug option in the
configure script will enable assertions. When assertions are disabled,
or also for certain length values, the DNP3 parser may attempt to pass a
negative value as the third argument to memcpy (number of bytes to copy)
and result in a buffer over-read or overflow.
Reported by Travis Emmert.
|
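The Bro/DNP3 message above describes a length taken from the link-layer header reaching memcpy as a negative count; a hedged sketch of the kind of guard that prevents this (the header size constant and function shape are illustrative, not the actual parser):

#include <string.h>
#include <stdbool.h>
#include <stddef.h>

#define LINK_HEADER_LEN 5        /* illustrative, not Bro's constant */

/* Copy the payload out of a frame, refusing declared lengths that would
 * go negative after subtracting the header, exceed the received frame,
 * or overflow the destination buffer. */
static bool
copy_payload(unsigned char *dst, size_t dst_len,
             const unsigned char *frame, size_t frame_len,
             unsigned int declared_len)
{
    if (declared_len < LINK_HEADER_LEN)   /* subtraction would go negative */
        return false;
    if (declared_len > frame_len)         /* header lies about frame size */
        return false;
    size_t payload_len = declared_len - LINK_HEADER_LEN;
    if (payload_len > dst_len)            /* destination too small */
        return false;
    memcpy(dst, frame + LINK_HEADER_LEN, payload_len);
    return true;
}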
static MagickBooleanType TIFFGetProfiles(TIFF *tiff,Image *image)
{
MagickBooleanType
status;
uint32
length = 0;
unsigned char
*profile = (unsigned char *) NULL;
status=MagickTrue;
#if defined(TIFFTAG_ICCPROFILE)
if ((TIFFGetField(tiff,TIFFTAG_ICCPROFILE,&length,&profile) == 1) &&
(profile != (unsigned char *) NULL))
status=ReadProfile(image,"icc",profile,(ssize_t) length);
#endif
#if defined(TIFFTAG_PHOTOSHOP)
if ((TIFFGetField(tiff,TIFFTAG_PHOTOSHOP,&length,&profile) == 1) &&
(profile != (unsigned char *) NULL))
status=ReadProfile(image,"8bim",profile,(ssize_t) length);
#endif
#if defined(TIFFTAG_RICHTIFFIPTC) && (TIFFLIB_VERSION >= 20191103)
if ((TIFFGetField(tiff,TIFFTAG_RICHTIFFIPTC,&length,&profile) == 1) &&
(profile != (unsigned char *) NULL))
{
const TIFFField
*field;
field=TIFFFieldWithTag(tiff,TIFFTAG_RICHTIFFIPTC);
if (TIFFFieldDataType(field) != TIFF_LONG)
status=ReadProfile(image,"iptc",profile,length);
else
{
if (TIFFIsByteSwapped(tiff) != 0)
TIFFSwabArrayOfLong((uint32 *) profile,(size_t) length);
status=ReadProfile(image,"iptc",profile,4L*length);
}
}
#endif
#if defined(TIFFTAG_XMLPACKET)
if ((TIFFGetField(tiff,TIFFTAG_XMLPACKET,&length,&profile) == 1) &&
(profile != (unsigned char *) NULL))
{
StringInfo
*dng;
status=ReadProfile(image,"xmp",profile,(ssize_t) length);
dng=BlobToStringInfo(profile,length);
if (dng != (StringInfo *) NULL)
{
const char
*target = "dc:format=\"image/dng\"";
if (strstr((char *) GetStringInfoDatum(dng),target) != (char *) NULL)
(void) CopyMagickString(image->magick,"DNG",MagickPathExtent);
dng=DestroyStringInfo(dng);
}
}
#endif
if ((TIFFGetField(tiff,34118,&length,&profile) == 1) &&
(profile != (unsigned char *) NULL))
status=ReadProfile(image,"tiff:34118",profile,(ssize_t) length);
if ((TIFFGetField(tiff,37724,&length,&profile) == 1) &&
(profile != (unsigned char *) NULL))
status=ReadProfile(image,"tiff:37724",profile,(ssize_t) length);
return(status);
}
| 0 |
[
"CWE-401"
] |
ImageMagick6
|
cd7f9fb7751b0d59d5a74b12d971155caad5a792
| 300,796,328,705,394,900,000,000,000,000,000,000,000 | 68 |
https://github.com/ImageMagick/ImageMagick/issues/3540
|
DLLIMPORT int cfg_opt_setnfloat(cfg_opt_t *opt, double value, unsigned int index)
{
cfg_value_t *val;
if (!opt || opt->type != CFGT_FLOAT) {
errno = EINVAL;
return CFG_FAIL;
}
val = cfg_opt_getval(opt, index);
if (!val)
return CFG_FAIL;
val->fpnumber = value;
opt->flags |= CFGF_MODIFIED;
return CFG_SUCCESS;
}
| 0 |
[] |
libconfuse
|
d73777c2c3566fb2647727bb56d9a2295b81669b
| 299,631,404,556,491,460,000,000,000,000,000,000,000 | 18 |
Fix #163: unterminated username used with getpwnam()
Signed-off-by: Joachim Wiberg <[email protected]>
|
static int raw_enable_allfilters(struct net_device *dev, struct sock *sk)
{
struct raw_sock *ro = raw_sk(sk);
int err;
err = raw_enable_filters(dev, sk, ro->filter, ro->count);
if (!err) {
err = raw_enable_errfilter(dev, sk, ro->err_mask);
if (err)
raw_disable_filters(dev, sk, ro->filter, ro->count);
}
return err;
}
| 0 |
[
"CWE-200"
] |
linux-2.6
|
e84b90ae5eb3c112d1f208964df1d8156a538289
| 279,517,016,522,530,870,000,000,000,000,000,000,000 | 14 |
can: Fix raw_getname() leak
raw_getname() can leak 10 bytes of kernel memory to user
(two bytes hole between can_family and can_ifindex,
8 bytes at the end of sockaddr_can structure)
Signed-off-by: Eric Dumazet <[email protected]>
Acked-by: Oliver Hartkopp <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
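The raw_getname() leak described above is the usual "uninitialized padding copied out of the kernel" pattern; a user-space flavoured sketch of the fix, zeroing the whole sockaddr before filling its fields (the helper signature is simplified and not the kernel's getname prototype):

#include <sys/socket.h>
#include <linux/can.h>
#include <string.h>

/* Fill a sockaddr_can for a getname-style call.  memset() first so the
 * padding hole after can_family and the unused trailing bytes never
 * carry stale memory. */
static int fill_can_name(struct sockaddr_can *addr, int ifindex)
{
    memset(addr, 0, sizeof(*addr));
    addr->can_family  = AF_CAN;
    addr->can_ifindex = ifindex;
    return sizeof(*addr);
}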
static void dw210x_led_ctrl(struct dvb_frontend *fe, int offon)
{
static u8 led_off[] = { 0 };
static u8 led_on[] = { 1 };
struct i2c_msg msg = {
.addr = DW2102_LED_CTRL,
.flags = 0,
.buf = led_off,
.len = 1
};
struct dvb_usb_adapter *udev_adap =
(struct dvb_usb_adapter *)(fe->dvb->priv);
if (offon)
msg.buf = led_on;
i2c_transfer(&udev_adap->dev->i2c_adap, &msg, 1);
}
| 0 |
[
"CWE-476",
"CWE-119"
] |
linux
|
606142af57dad981b78707234cfbd15f9f7b7125
| 4,198,195,057,153,125,000,000,000,000,000,000,000 | 17 |
[media] dw2102: don't do DMA on stack
On Kernel 4.9, WARNINGs about doing DMA on stack are hit at
the dw2102 driver: one in su3000_power_ctrl() and the other in tt_s2_4600_frontend_attach().
Both were due to the use of buffers on the stack as parameters to
dvb_usb_generic_rw() and the resulting attempt to do DMA with them.
The device was non-functional as a result.
So, switch this driver over to use a buffer within the device state
structure, as has been done with other DVB-USB drivers.
Tested with TechnoTrend TT-connect S2-4600.
[[email protected]: fixed a warning at su3000_i2c_transfer() where the
state var was dereferenced before checking 'd']
Signed-off-by: Jonathan McDowell <[email protected]>
Cc: <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
|
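A user-space analogy of the pattern the dw2102 commit describes: the transfer buffer lives in long-lived, heap-allocated device state guarded by a lock, rather than on the caller's stack. The structure, lock and transfer callback below are invented for the sketch and are not the dvb-usb API.

#include <pthread.h>
#include <string.h>
#include <stddef.h>

struct dev_state {
    pthread_mutex_t lock;
    unsigned char buf[64];   /* DMA-safe scratch buffer owned by the device */
};

/* Issue a command through the shared state buffer instead of a stack array. */
static int send_cmd(struct dev_state *st, const unsigned char *cmd, size_t len,
                    int (*do_transfer)(unsigned char *buf, size_t len))
{
    int rc;

    if (len > sizeof(st->buf))
        return -1;
    pthread_mutex_lock(&st->lock);
    memcpy(st->buf, cmd, len);
    rc = do_transfer(st->buf, len);   /* buffer is stable for the transfer */
    pthread_mutex_unlock(&st->lock);
    return rc;
}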
check_options(
dle_t *dle)
{
if (GPOINTER_TO_INT(dle->estimatelist->data) == ES_CALCSIZE) {
need_calcsize=1;
}
if (strcmp(dle->program,"GNUTAR") == 0) {
need_gnutar=1;
if(dle->device && dle->device[0] == '/' && dle->device[1] == '/') {
if(dle->exclude_file && dle->exclude_file->nb_element > 1) {
g_printf(_("ERROR [samba support only one exclude file]\n"));
}
if (dle->exclude_list && dle->exclude_list->nb_element > 0 &&
dle->exclude_optional==0) {
g_printf(_("ERROR [samba does not support exclude list]\n"));
}
if (dle->include_file && dle->include_file->nb_element > 0) {
g_printf(_("ERROR [samba does not support include file]\n"));
}
if (dle->include_list && dle->include_list->nb_element > 0 &&
dle->include_optional==0) {
g_printf(_("ERROR [samba does not support include list]\n"));
}
need_samba=1;
} else {
int nb_exclude = 0;
int nb_include = 0;
char *file_exclude = NULL;
char *file_include = NULL;
if (dle->exclude_file) nb_exclude += dle->exclude_file->nb_element;
if (dle->exclude_list) nb_exclude += dle->exclude_list->nb_element;
if (dle->include_file) nb_include += dle->include_file->nb_element;
if (dle->include_list) nb_include += dle->include_list->nb_element;
if (nb_exclude > 0) file_exclude = build_exclude(dle, 1);
if (nb_include > 0) file_include = build_include(dle, 1);
amfree(file_exclude);
amfree(file_include);
need_runtar=1;
}
}
if (strcmp(dle->program,"DUMP") == 0) {
if (dle->exclude_file && dle->exclude_file->nb_element > 0) {
g_printf(_("ERROR [DUMP does not support exclude file]\n"));
}
if (dle->exclude_list && dle->exclude_list->nb_element > 0) {
g_printf(_("ERROR [DUMP does not support exclude list]\n"));
}
if (dle->include_file && dle->include_file->nb_element > 0) {
g_printf(_("ERROR [DUMP does not support include file]\n"));
}
if (dle->include_list && dle->include_list->nb_element > 0) {
g_printf(_("ERROR [DUMP does not support include list]\n"));
}
#ifdef USE_RUNDUMP
need_rundump=1;
#endif
#ifndef AIX_BACKUP
#ifdef VDUMP
#ifdef DUMP
if (dle->device && strcmp(amname_to_fstype(dle->device), "advfs") == 0)
#else
if (1)
#endif
{
need_vdump=1;
need_rundump=1;
if (dle->create_index)
need_vrestore=1;
}
else
#endif /* VDUMP */
#ifdef XFSDUMP
#ifdef DUMP
if (dle->device && strcmp(amname_to_fstype(dle->device), "xfs") == 0)
#else
if (1)
#endif
{
need_xfsdump=1;
need_rundump=1;
if (dle->create_index)
need_xfsrestore=1;
}
else
#endif /* XFSDUMP */
#ifdef VXDUMP
#ifdef DUMP
if (dle->device && strcmp(amname_to_fstype(dle->device), "vxfs") == 0)
#else
if (1)
#endif
{
need_vxdump=1;
if (dle->create_index)
need_vxrestore=1;
}
else
#endif /* VXDUMP */
{
need_dump=1;
if (dle->create_index)
need_restore=1;
}
#else
/* AIX backup program */
need_dump=1;
if (dle->create_index)
need_restore=1;
#endif
}
if ((dle->compress == COMP_BEST) || (dle->compress == COMP_FAST)
|| (dle->compress == COMP_CUST)) {
need_compress_path=1;
}
if (dle->auth && amandad_auth) {
if (strcasecmp(dle->auth, amandad_auth) != 0) {
g_fprintf(stdout,_("ERROR [client configured for auth=%s while server requested '%s']\n"),
amandad_auth, dle->auth);
if (strcmp(dle->auth, "ssh") == 0) {
g_fprintf(stderr, _("ERROR [The auth in ~/.ssh/authorized_keys "
"should be \"--auth=ssh\", or use another auth "
" for the DLE]\n"));
}
else {
g_fprintf(stderr, _("ERROR [The auth in the inetd/xinetd configuration "
" must be the same as the DLE]\n"));
}
}
}
}
| 0 |
[
"CWE-264"
] |
amanda
|
4bf5b9b356848da98560ffbb3a07a9cb5c4ea6d7
| 91,701,197,371,226,640,000,000,000,000,000,000,000 | 136 |
* Add a /etc/amanda-security.conf file
git-svn-id: https://svn.code.sf.net/p/amanda/code/amanda/branches/3_3@6486 a8d146d6-cc15-0410-8900-af154a0219e0
|
ossl_cipher_initialize(VALUE self, VALUE str)
{
EVP_CIPHER_CTX *ctx;
const EVP_CIPHER *cipher;
char *name;
unsigned char dummy_key[EVP_MAX_KEY_LENGTH] = { 0 };
name = StringValueCStr(str);
GetCipherInit(self, ctx);
if (ctx) {
ossl_raise(rb_eRuntimeError, "Cipher already inititalized!");
}
AllocCipher(self, ctx);
if (!(cipher = EVP_get_cipherbyname(name))) {
ossl_raise(rb_eRuntimeError, "unsupported cipher algorithm (%"PRIsVALUE")", str);
}
/*
* EVP_CipherInit_ex() allows to specify NULL to key and IV, however some
* ciphers don't handle well (OpenSSL's bug). [Bug #2768]
*
* The EVP which has EVP_CIPH_RAND_KEY flag (such as DES3) allows
* uninitialized key, but other EVPs (such as AES) does not allow it.
* Calling EVP_CipherUpdate() without initializing key causes SEGV so we
* set the data filled with "\0" as the key by default.
*/
if (EVP_CipherInit_ex(ctx, cipher, NULL, dummy_key, NULL, -1) != 1)
ossl_raise(eCipherError, NULL);
return self;
}
| 1 |
[
"CWE-326",
"CWE-310",
"CWE-703"
] |
openssl
|
8108e0a6db133f3375608303fdd2083eb5115062
| 103,402,460,480,922,620,000,000,000,000,000,000,000 | 30 |
cipher: don't set dummy encryption key in Cipher#initialize
Remove the encryption key initialization from Cipher#initialize. This
is effectively a revert of r32723 ("Avoid possible SEGV from AES
encryption/decryption", 2011-07-28).
r32723, which added the key initialization, was a workaround for
Ruby Bug #2768. For some certain ciphers, calling EVP_CipherUpdate()
before setting an encryption key caused segfault. It was not a problem
until OpenSSL implemented GCM mode - the encryption key could be
overridden by repeated calls of EVP_CipherInit_ex(). But, it is not the
case for AES-GCM ciphers. Setting a key, an IV, a key, in this order
causes the IV to be reset to an all-zero IV.
The problem of Bug #2768 persists on the current versions of OpenSSL.
So, make Cipher#update raise an exception if a key is not yet set by the
user. Since encrypting or decrypting without key does not make any
sense, this should not break existing applications.
Users can still call Cipher#key= and Cipher#iv= multiple times with
their own responsibility.
Reference: https://bugs.ruby-lang.org/issues/2768
Reference: https://bugs.ruby-lang.org/issues/8221
Reference: https://github.com/ruby/openssl/issues/49
|
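Since the message above turns on the key/IV ordering quirk of AES-GCM, a small sketch with the standard OpenSSL EVP API may help: select the cipher first, then hand the real key and IV over together, instead of key, then IV, then key again (which can silently reset the IV). Demo key/IV values only; this is not the ruby/openssl code.

#include <openssl/evp.h>
#include <stdio.h>

int main(void)
{
    unsigned char key[32] = {0}, iv[12] = {0};   /* demo values only */
    unsigned char in[16] = "attack at dawn!", out[32];
    int outl = 0;
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

    if (ctx == NULL)
        return 1;
    /* Cipher first, then key and IV in one call. */
    if (EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, NULL, NULL) != 1 ||
        EVP_EncryptInit_ex(ctx, NULL, NULL, key, iv) != 1) {
        fprintf(stderr, "init failed\n");
        EVP_CIPHER_CTX_free(ctx);
        return 1;
    }
    EVP_EncryptUpdate(ctx, out, &outl, in, sizeof(in));
    EVP_CIPHER_CTX_free(ctx);
    return 0;
}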
static void vmx_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
{
if (!is_guest_mode(vcpu)) {
vmx_set_rvi(max_irr);
return;
}
if (max_irr == -1)
return;
/*
* In guest mode. If a vmexit is needed, vmx_check_nested_events
* handles it.
*/
if (nested_exit_on_intr(vcpu))
return;
/*
* Else, fall back to pre-APICv interrupt injection since L2
* is run without virtual interrupt delivery.
*/
if (!kvm_event_needs_reinjection(vcpu) &&
vmx_interrupt_allowed(vcpu)) {
kvm_queue_interrupt(vcpu, max_irr, false);
vmx_inject_irq(vcpu);
}
}
| 0 |
[
"CWE-20",
"CWE-617"
] |
linux
|
3a8b0677fc6180a467e26cc32ce6b0c09a32f9bb
| 33,198,532,030,006,890,000,000,000,000,000,000,000 | 27 |
KVM: VMX: Do not BUG() on out-of-bounds guest IRQ
The value of the guest_irq argument to vmx_update_pi_irte() is
ultimately coming from a KVM_IRQFD API call. Do not BUG() in
vmx_update_pi_irte() if the value is out of bounds. (Especially,
since KVM as a whole seems to hang after that.)
Instead, print a message only once if we find that we don't have a
route for a certain IRQ (which can be out-of-bounds or within the
array).
This fixes CVE-2017-1000252.
Fixes: efc644048ecde54 ("KVM: x86: Update IRTE for posted-interrupts")
Signed-off-by: Jan H. Schönherr <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
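A generic C illustration of the "validate and warn once instead of BUG()" approach from the KVM message above; the route table, its size and the warn-once flag are invented for the sketch and are not KVM data structures.

#include <stdio.h>
#include <stdbool.h>

#define NR_ROUTES 256
static int irq_routes[NR_ROUTES];

/* Reject an out-of-range IRQ instead of asserting; log only once so a
 * hostile caller cannot flood the log. */
static int update_route(unsigned int guest_irq, int value)
{
    static bool warned;

    if (guest_irq >= NR_ROUTES) {
        if (!warned) {
            fprintf(stderr, "no route for irq %u, ignoring\n", guest_irq);
            warned = true;
        }
        return -1;
    }
    irq_routes[guest_irq] = value;
    return 0;
}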
Header::pixelAspectRatio ()
{
return static_cast <FloatAttribute &>
((*this)["pixelAspectRatio"]).value();
}
| 0 |
[
"CWE-125"
] |
openexr
|
e79d2296496a50826a15c667bf92bdc5a05518b4
| 201,838,723,759,029,020,000,000,000,000,000,000,000 | 5 |
fix memory leaks and invalid memory accesses
Signed-off-by: Peter Hillman <[email protected]>
|
void Monitor::_generate_command_map(map<string,cmd_vartype>& cmdmap,
map<string,string> ¶m_str_map)
{
for (map<string,cmd_vartype>::const_iterator p = cmdmap.begin();
p != cmdmap.end(); ++p) {
if (p->first == "prefix")
continue;
if (p->first == "caps") {
vector<string> cv;
if (cmd_getval(g_ceph_context, cmdmap, "caps", cv) &&
cv.size() % 2 == 0) {
for (unsigned i = 0; i < cv.size(); i += 2) {
string k = string("caps_") + cv[i];
param_str_map[k] = cv[i + 1];
}
continue;
}
}
param_str_map[p->first] = cmd_vartype_stringify(p->second);
}
}
| 0 |
[
"CWE-287",
"CWE-284"
] |
ceph
|
5ead97120e07054d80623dada90a5cc764c28468
| 327,298,404,513,869,800,000,000,000,000,000,000,000 | 21 |
auth/cephx: add authorizer challenge
Allow the accepting side of a connection to reject an initial authorizer
with a random challenge. The connecting side then has to respond with an
updated authorizer proving they are able to decrypt the service's challenge
and that the new authorizer was produced for this specific connection
instance.
The accepting side requires this challenge and response unconditionally
if the client side advertises they have the feature bit. Servers wishing
to require this improved level of authentication simply have to require
the appropriate feature.
Signed-off-by: Sage Weil <[email protected]>
(cherry picked from commit f80b848d3f830eb6dba50123e04385173fa4540b)
# Conflicts:
# src/auth/Auth.h
# src/auth/cephx/CephxProtocol.cc
# src/auth/cephx/CephxProtocol.h
# src/auth/none/AuthNoneProtocol.h
# src/msg/Dispatcher.h
# src/msg/async/AsyncConnection.cc
- const_iterator
- ::decode vs decode
- AsyncConnection ctor arg noise
- get_random_bytes(), not cct->random()
|
static int netlink_seq_show(struct seq_file *seq, void *v)
{
if (v == SEQ_START_TOKEN)
seq_puts(seq,
"sk Eth Pid Groups "
"Rmem Wmem Dump Locks Drops Inode\n");
else {
struct sock *s = v;
struct netlink_sock *nlk = nlk_sk(s);
seq_printf(seq, "%pK %-3d %-6d %08x %-8d %-8d %pK %-8d %-8d %-8lu\n",
s,
s->sk_protocol,
nlk->pid,
nlk->groups ? (u32)nlk->groups[0] : 0,
sk_rmem_alloc_get(s),
sk_wmem_alloc_get(s),
nlk->cb,
atomic_read(&s->sk_refcnt),
atomic_read(&s->sk_drops),
sock_i_ino(s)
);
}
return 0;
}
| 0 |
[] |
linux-2.6
|
16e5726269611b71c930054ffe9b858c1cea88eb
| 178,364,504,466,687,100,000,000,000,000,000,000,000 | 26 |
af_unix: dont send SCM_CREDENTIALS by default
Since commit 7361c36c5224 (af_unix: Allow credentials to work across
user and pid namespaces) af_unix performance dropped a lot.
This is because we now take a reference on pid and cred in each write(),
and release them in read(), usually done from another process,
eventually from another cpu. This triggers false sharing.
# Events: 154K cycles
#
# Overhead Command Shared Object Symbol
# ........ ....... .................. .........................
#
10.40% hackbench [kernel.kallsyms] [k] put_pid
8.60% hackbench [kernel.kallsyms] [k] unix_stream_recvmsg
7.87% hackbench [kernel.kallsyms] [k] unix_stream_sendmsg
6.11% hackbench [kernel.kallsyms] [k] do_raw_spin_lock
4.95% hackbench [kernel.kallsyms] [k] unix_scm_to_skb
4.87% hackbench [kernel.kallsyms] [k] pid_nr_ns
4.34% hackbench [kernel.kallsyms] [k] cred_to_ucred
2.39% hackbench [kernel.kallsyms] [k] unix_destruct_scm
2.24% hackbench [kernel.kallsyms] [k] sub_preempt_count
1.75% hackbench [kernel.kallsyms] [k] fget_light
1.51% hackbench [kernel.kallsyms] [k]
__mutex_lock_interruptible_slowpath
1.42% hackbench [kernel.kallsyms] [k] sock_alloc_send_pskb
This patch includes SCM_CREDENTIALS information in an af_unix message/skb
only if requested by the sender [see man 7 unix for details on how to include
ancillary data using the sendmsg() system call]
Note: This might break buggy applications that expected SCM_CREDENTIAL
from an unaware write() system call, and receiver not using SO_PASSCRED
socket option.
If SOCK_PASSCRED is set on source or destination socket, we still
include credentials for mere write() syscalls.
Performance boost in hackbench : more than 50% gain on a 16 thread
machine (2 quad-core cpus, 2 threads per core)
hackbench 20 thread 2000
4.228 sec instead of 9.102 sec
Signed-off-by: Eric Dumazet <[email protected]>
Acked-by: Tim Chen <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
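The bracketed pointer to man 7 unix above refers to ancillary data; a self-contained sketch of explicitly attaching SCM_CREDENTIALS to a sendmsg() call on a connected AF_UNIX socket (socket setup and error handling trimmed to the essentials):

#define _GNU_SOURCE
#include <sys/socket.h>
#include <string.h>
#include <unistd.h>

/* Send 'data' with the caller's credentials attached as ancillary data. */
static ssize_t send_with_creds(int fd, const void *data, size_t len)
{
    struct iovec iov = { .iov_base = (void *) data, .iov_len = len };
    union {
        char buf[CMSG_SPACE(sizeof(struct ucred))];
        struct cmsghdr align;
    } u;
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
    };
    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    struct ucred creds = { .pid = getpid(), .uid = getuid(), .gid = getgid() };

    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type  = SCM_CREDENTIALS;
    cmsg->cmsg_len   = CMSG_LEN(sizeof(creds));
    memcpy(CMSG_DATA(cmsg), &creds, sizeof(creds));
    return sendmsg(fd, &msg, 0);
}

The receiver only sees these credentials if it has enabled SO_PASSCRED on its socket, which matches the opt-in behaviour the patch introduces.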
int crypt_key_in_keyring(struct crypt_device *cd)
{
return cd ? cd->key_in_keyring : 0;
}
| 0 |
[
"CWE-345"
] |
cryptsetup
|
0113ac2d889c5322659ad0596d4cfc6da53e356c
| 121,607,408,849,556,120,000,000,000,000,000,000,000 | 4 |
Fix CVE-2021-4122 - LUKS2 reencryption crash recovery attack
Fix possible attacks against data confidentiality through LUKS2 online
reencryption extension crash recovery.
An attacker can modify on-disk metadata to simulate decryption in
progress with crashed (unfinished) reencryption step and persistently
decrypt part of the LUKS device.
This attack requires repeated physical access to the LUKS device but
no knowledge of user passphrases.
The decryption step is performed after a valid user activates
the device with a correct passphrase and modified metadata.
There are no visible warnings for the user that such recovery happened
(except using the luksDump command). The attack can also be reversed
afterward (simulating crashed encryption from a plaintext) with
possible modification of revealed plaintext.
The problem was caused by reusing a mechanism designed for actual
reencryption operation without reassessing the security impact for new
encryption and decryption operations. While the reencryption requires
calculating and verifying both key digests, no digest was needed to
initiate decryption recovery if the destination is plaintext (no
encryption key). Also, some metadata (like encryption cipher) is not
protected, and an attacker could change it. Note that LUKS2 protects
visible metadata only when a random change occurs. It does not protect
against intentional modification but such modification must not cause
a violation of data confidentiality.
The fix introduces additional digest protection of reencryption
metadata. The digest is calculated from known keys and critical
reencryption metadata. Now an attacker cannot create correct metadata
digest without knowledge of a passphrase for used keyslots.
For more details, see LUKS2 On-Disk Format Specification version 1.1.0.
|
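Conceptually the fix above binds the reencryption metadata to key material through a keyed digest that an attacker without a passphrase cannot reproduce; a toy sketch using OpenSSL's HMAC over an already-serialized metadata blob (the serialization and key handling are invented for illustration and are not the LUKS2 on-disk format):

#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <stddef.h>
#include <stdbool.h>

/* Keyed digest over the critical metadata: without the volume key an
 * attacker cannot forge metadata that simulates a crashed recovery step. */
static bool metadata_digest(const unsigned char *key, size_t key_len,
                            const unsigned char *metadata, size_t meta_len,
                            unsigned char out[32])
{
    unsigned int out_len = 0;

    return HMAC(EVP_sha256(), key, (int) key_len,
                metadata, meta_len, out, &out_len) != NULL && out_len == 32;
}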
for_samples_continue(i_ctx_t *i_ctx_p)
{
os_ptr op = osp;
es_ptr ep = esp;
int var = ep[-4].value.intval;
float a = ep[-3].value.realval;
int n = ep[-2].value.intval;
float b = ep[-1].value.realval;
if (var > n) {
esp -= 6; /* pop everything */
return o_pop_estack;
}
push(1);
make_real(op, ((n - var) * a + var * b) / n);
ep[-4].value.intval = var + 1;
ref_assign_inline(ep + 2, ep); /* saved proc */
esp = ep + 2;
return o_push_estack;
}
| 0 |
[
"CWE-200"
] |
ghostpdl
|
34cc326eb2c5695833361887fe0b32e8d987741c
| 138,380,776,381,948,270,000,000,000,000,000,000,000 | 20 |
Bug 699927: don't include operator arrays in execstack output
When we transfer the contents of the execution stack into the array, take the
extra step of replacing any operator arrays on the stack with the operator
that references them.
This prevents the contents of Postscript-defined, internal-only operators (those
created with .makeoperator) from being exposed via execstack (and thus via error
handling).
This necessitates a change in the resource remapping 'resource', which contains
a procedure that relies on the contents of the operator arrays being present.
As we already had internal-only variants of countexecstack and execstack
(.countexecstack and .execstack) - using those, and leaving their operation
including the operator arrays, means the procedure continues to work correctly.
Both .countexecstack and .execstack are undefined after initialization.
Also, when we store the execstack (or part thereof) for an execstackoverflow
error, make the same oparray/operator substitution as above for execstack.
|
GST_START_TEST (test_GstDateTime_iso8601)
{
GstDateTime *dt, *dt2;
gchar *str, *str2;
GDateTime *gdt, *gdt2;
dt = gst_date_time_new_now_utc ();
fail_unless (gst_date_time_has_year (dt));
fail_unless (gst_date_time_has_month (dt));
fail_unless (gst_date_time_has_day (dt));
fail_unless (gst_date_time_has_time (dt));
fail_unless (gst_date_time_has_second (dt));
str = gst_date_time_to_iso8601_string (dt);
fail_unless (str != NULL);
fail_unless_equals_int (strlen (str), strlen ("2012-06-26T22:46:43Z"));
fail_unless (g_str_has_suffix (str, "Z"));
dt2 = gst_date_time_new_from_iso8601_string (str);
fail_unless (gst_date_time_get_year (dt) == gst_date_time_get_year (dt2));
fail_unless (gst_date_time_get_month (dt) == gst_date_time_get_month (dt2));
fail_unless (gst_date_time_get_day (dt) == gst_date_time_get_day (dt2));
fail_unless (gst_date_time_get_hour (dt) == gst_date_time_get_hour (dt2));
fail_unless (gst_date_time_get_minute (dt) == gst_date_time_get_minute (dt2));
fail_unless (gst_date_time_get_second (dt) == gst_date_time_get_second (dt2));
/* This will succeed because we're not comparing microseconds when
* checking for equality */
fail_unless (date_times_are_equal (dt, dt2));
str2 = gst_date_time_to_iso8601_string (dt2);
fail_unless_equals_string (str, str2);
g_free (str2);
gst_date_time_unref (dt2);
g_free (str);
gst_date_time_unref (dt);
/* ---- year only ---- */
dt = gst_date_time_new_y (2010);
fail_unless (gst_date_time_has_year (dt));
fail_unless (!gst_date_time_has_month (dt));
fail_unless (!gst_date_time_has_day (dt));
fail_unless (!gst_date_time_has_time (dt));
fail_unless (!gst_date_time_has_second (dt));
str = gst_date_time_to_iso8601_string (dt);
fail_unless (str != NULL);
fail_unless_equals_string (str, "2010");
dt2 = gst_date_time_new_from_iso8601_string (str);
fail_unless (gst_date_time_get_year (dt) == gst_date_time_get_year (dt2));
fail_unless (date_times_are_equal (dt, dt2));
str2 = gst_date_time_to_iso8601_string (dt2);
fail_unless_equals_string (str, str2);
g_free (str2);
gst_date_time_unref (dt2);
g_free (str);
gst_date_time_unref (dt);
/* ---- year and month ---- */
dt = gst_date_time_new_ym (2010, 10);
fail_unless (gst_date_time_has_year (dt));
fail_unless (gst_date_time_has_month (dt));
fail_unless (!gst_date_time_has_day (dt));
fail_unless (!gst_date_time_has_time (dt));
fail_unless (!gst_date_time_has_second (dt));
str = gst_date_time_to_iso8601_string (dt);
fail_unless (str != NULL);
fail_unless_equals_string (str, "2010-10");
dt2 = gst_date_time_new_from_iso8601_string (str);
fail_unless (gst_date_time_get_year (dt) == gst_date_time_get_year (dt2));
fail_unless (gst_date_time_get_month (dt) == gst_date_time_get_month (dt2));
fail_unless (date_times_are_equal (dt, dt2));
str2 = gst_date_time_to_iso8601_string (dt2);
fail_unless_equals_string (str, str2);
g_free (str2);
gst_date_time_unref (dt2);
g_free (str);
gst_date_time_unref (dt);
/* ---- year and month ---- */
dt = gst_date_time_new_ymd (2010, 10, 30);
fail_unless (gst_date_time_has_year (dt));
fail_unless (gst_date_time_has_month (dt));
fail_unless (gst_date_time_has_day (dt));
fail_unless (!gst_date_time_has_time (dt));
fail_unless (!gst_date_time_has_second (dt));
str = gst_date_time_to_iso8601_string (dt);
fail_unless (str != NULL);
fail_unless_equals_string (str, "2010-10-30");
dt2 = gst_date_time_new_from_iso8601_string (str);
fail_unless (gst_date_time_get_year (dt) == gst_date_time_get_year (dt2));
fail_unless (gst_date_time_get_month (dt) == gst_date_time_get_month (dt2));
fail_unless (gst_date_time_get_day (dt) == gst_date_time_get_day (dt2));
fail_unless (date_times_are_equal (dt, dt2));
str2 = gst_date_time_to_iso8601_string (dt2);
fail_unless_equals_string (str, str2);
g_free (str2);
gst_date_time_unref (dt2);
g_free (str);
gst_date_time_unref (dt);
/* ---- date and time, but no seconds ---- */
dt = gst_date_time_new (-4.5, 2010, 10, 30, 15, 50, -1);
fail_unless (gst_date_time_has_year (dt));
fail_unless (gst_date_time_has_month (dt));
fail_unless (gst_date_time_has_day (dt));
fail_unless (gst_date_time_has_time (dt));
fail_unless (!gst_date_time_has_second (dt));
str = gst_date_time_to_iso8601_string (dt);
fail_unless (str != NULL);
fail_unless_equals_string (str, "2010-10-30T15:50-0430");
dt2 = gst_date_time_new_from_iso8601_string (str);
fail_unless (gst_date_time_get_year (dt) == gst_date_time_get_year (dt2));
fail_unless (gst_date_time_get_month (dt) == gst_date_time_get_month (dt2));
fail_unless (gst_date_time_get_day (dt) == gst_date_time_get_day (dt2));
fail_unless (gst_date_time_get_hour (dt) == gst_date_time_get_hour (dt2));
fail_unless (gst_date_time_get_minute (dt) == gst_date_time_get_minute (dt2));
fail_unless (date_times_are_equal (dt, dt2));
str2 = gst_date_time_to_iso8601_string (dt2);
fail_unless_equals_string (str, str2);
g_free (str2);
gst_date_time_unref (dt2);
g_free (str);
gst_date_time_unref (dt);
/* ---- date and time, but no seconds (UTC) ---- */
dt = gst_date_time_new (0, 2010, 10, 30, 15, 50, -1);
fail_unless (gst_date_time_has_year (dt));
fail_unless (gst_date_time_has_month (dt));
fail_unless (gst_date_time_has_day (dt));
fail_unless (gst_date_time_has_time (dt));
fail_unless (!gst_date_time_has_second (dt));
str = gst_date_time_to_iso8601_string (dt);
fail_unless (str != NULL);
fail_unless_equals_string (str, "2010-10-30T15:50Z");
dt2 = gst_date_time_new_from_iso8601_string (str);
fail_unless (gst_date_time_get_year (dt) == gst_date_time_get_year (dt2));
fail_unless (gst_date_time_get_month (dt) == gst_date_time_get_month (dt2));
fail_unless (gst_date_time_get_day (dt) == gst_date_time_get_day (dt2));
fail_unless (gst_date_time_get_hour (dt) == gst_date_time_get_hour (dt2));
fail_unless (gst_date_time_get_minute (dt) == gst_date_time_get_minute (dt2));
fail_unless (date_times_are_equal (dt, dt2));
str2 = gst_date_time_to_iso8601_string (dt2);
fail_unless_equals_string (str, str2);
g_free (str2);
gst_date_time_unref (dt2);
g_free (str);
gst_date_time_unref (dt);
/* ---- date and time, with seconds ---- */
dt = gst_date_time_new (-4.5, 2010, 10, 30, 15, 50, 0);
fail_unless (gst_date_time_has_year (dt));
fail_unless (gst_date_time_has_month (dt));
fail_unless (gst_date_time_has_day (dt));
fail_unless (gst_date_time_has_time (dt));
fail_unless (gst_date_time_has_second (dt));
str = gst_date_time_to_iso8601_string (dt);
fail_unless (str != NULL);
fail_unless_equals_string (str, "2010-10-30T15:50:00-0430");
dt2 = gst_date_time_new_from_iso8601_string (str);
fail_unless (gst_date_time_get_year (dt) == gst_date_time_get_year (dt2));
fail_unless (gst_date_time_get_month (dt) == gst_date_time_get_month (dt2));
fail_unless (gst_date_time_get_day (dt) == gst_date_time_get_day (dt2));
fail_unless (gst_date_time_get_hour (dt) == gst_date_time_get_hour (dt2));
fail_unless (gst_date_time_get_minute (dt) == gst_date_time_get_minute (dt2));
fail_unless (date_times_are_equal (dt, dt2));
str2 = gst_date_time_to_iso8601_string (dt2);
fail_unless_equals_string (str, str2);
g_free (str2);
gst_date_time_unref (dt2);
g_free (str);
gst_date_time_unref (dt);
/* ---- date and time, with seconds (UTC) ---- */
dt = gst_date_time_new (0, 2010, 10, 30, 15, 50, 0);
fail_unless (gst_date_time_has_year (dt));
fail_unless (gst_date_time_has_month (dt));
fail_unless (gst_date_time_has_day (dt));
fail_unless (gst_date_time_has_time (dt));
fail_unless (gst_date_time_has_second (dt));
str = gst_date_time_to_iso8601_string (dt);
fail_unless (str != NULL);
fail_unless_equals_string (str, "2010-10-30T15:50:00Z");
dt2 = gst_date_time_new_from_iso8601_string (str);
fail_unless (gst_date_time_get_year (dt) == gst_date_time_get_year (dt2));
fail_unless (gst_date_time_get_month (dt) == gst_date_time_get_month (dt2));
fail_unless (gst_date_time_get_day (dt) == gst_date_time_get_day (dt2));
fail_unless (gst_date_time_get_hour (dt) == gst_date_time_get_hour (dt2));
fail_unless (gst_date_time_get_minute (dt) == gst_date_time_get_minute (dt2));
fail_unless (date_times_are_equal (dt, dt2));
str2 = gst_date_time_to_iso8601_string (dt2);
fail_unless_equals_string (str, str2);
g_free (str2);
gst_date_time_unref (dt2);
g_free (str);
gst_date_time_unref (dt);
/* ---- date and time, but without the 'T' and without timezone */
dt = gst_date_time_new_from_iso8601_string ("2010-10-30 15:50");
fail_unless (gst_date_time_get_year (dt) == 2010);
fail_unless (gst_date_time_get_month (dt) == 10);
fail_unless (gst_date_time_get_day (dt) == 30);
fail_unless (gst_date_time_get_hour (dt) == 15);
fail_unless (gst_date_time_get_minute (dt) == 50);
fail_unless (!gst_date_time_has_second (dt));
gst_date_time_unref (dt);
/* ---- date and time+secs, but without the 'T' and without timezone */
dt = gst_date_time_new_from_iso8601_string ("2010-10-30 15:50:33");
fail_unless (gst_date_time_get_year (dt) == 2010);
fail_unless (gst_date_time_get_month (dt) == 10);
fail_unless (gst_date_time_get_day (dt) == 30);
fail_unless (gst_date_time_get_hour (dt) == 15);
fail_unless (gst_date_time_get_minute (dt) == 50);
fail_unless (gst_date_time_get_second (dt) == 33);
gst_date_time_unref (dt);
/* ---- dates with 00s */
dt = gst_date_time_new_from_iso8601_string ("2010-10-00");
fail_unless (gst_date_time_get_year (dt) == 2010);
fail_unless (gst_date_time_get_month (dt) == 10);
fail_unless (!gst_date_time_has_day (dt));
fail_unless (!gst_date_time_has_time (dt));
gst_date_time_unref (dt);
dt = gst_date_time_new_from_iso8601_string ("2010-00-00");
fail_unless (gst_date_time_get_year (dt) == 2010);
fail_unless (!gst_date_time_has_month (dt));
fail_unless (!gst_date_time_has_day (dt));
fail_unless (!gst_date_time_has_time (dt));
gst_date_time_unref (dt);
dt = gst_date_time_new_from_iso8601_string ("2010-00-30");
fail_unless (gst_date_time_get_year (dt) == 2010);
fail_unless (!gst_date_time_has_month (dt));
fail_unless (!gst_date_time_has_day (dt));
fail_unless (!gst_date_time_has_time (dt));
gst_date_time_unref (dt);
/* completely invalid */
dt = gst_date_time_new_from_iso8601_string ("0000-00-00");
fail_unless (dt == NULL);
/* partially invalid - here we'll just extract the year */
dt = gst_date_time_new_from_iso8601_string ("2010/05/30");
fail_unless (gst_date_time_get_year (dt) == 2010);
fail_unless (!gst_date_time_has_month (dt));
fail_unless (!gst_date_time_has_day (dt));
fail_unless (!gst_date_time_has_time (dt));
gst_date_time_unref (dt);
/* only time provided - we assume today's date */
gdt = g_date_time_new_now_utc ();
dt = gst_date_time_new_from_iso8601_string ("15:50:33");
fail_unless (gst_date_time_get_year (dt) == g_date_time_get_year (gdt));
fail_unless (gst_date_time_get_month (dt) == g_date_time_get_month (gdt));
fail_unless (gst_date_time_get_day (dt) ==
g_date_time_get_day_of_month (gdt));
fail_unless (gst_date_time_get_hour (dt) == 15);
fail_unless (gst_date_time_get_minute (dt) == 50);
fail_unless (gst_date_time_get_second (dt) == 33);
gst_date_time_unref (dt);
dt = gst_date_time_new_from_iso8601_string ("15:50:33Z");
fail_unless (gst_date_time_get_year (dt) == g_date_time_get_year (gdt));
fail_unless (gst_date_time_get_month (dt) == g_date_time_get_month (gdt));
fail_unless (gst_date_time_get_day (dt) ==
g_date_time_get_day_of_month (gdt));
fail_unless (gst_date_time_get_hour (dt) == 15);
fail_unless (gst_date_time_get_minute (dt) == 50);
fail_unless (gst_date_time_get_second (dt) == 33);
gst_date_time_unref (dt);
dt = gst_date_time_new_from_iso8601_string ("15:50");
fail_unless (gst_date_time_get_year (dt) == g_date_time_get_year (gdt));
fail_unless (gst_date_time_get_month (dt) == g_date_time_get_month (gdt));
fail_unless (gst_date_time_get_day (dt) ==
g_date_time_get_day_of_month (gdt));
fail_unless (gst_date_time_get_hour (dt) == 15);
fail_unless (gst_date_time_get_minute (dt) == 50);
fail_unless (!gst_date_time_has_second (dt));
gst_date_time_unref (dt);
dt = gst_date_time_new_from_iso8601_string ("15:50Z");
fail_unless (gst_date_time_get_year (dt) == g_date_time_get_year (gdt));
fail_unless (gst_date_time_get_month (dt) == g_date_time_get_month (gdt));
fail_unless (gst_date_time_get_day (dt) ==
g_date_time_get_day_of_month (gdt));
fail_unless (gst_date_time_get_hour (dt) == 15);
fail_unless (gst_date_time_get_minute (dt) == 50);
fail_unless (!gst_date_time_has_second (dt));
gst_date_time_unref (dt);
gdt2 = g_date_time_add_minutes (gdt, -270);
g_date_time_unref (gdt);
dt = gst_date_time_new_from_iso8601_string ("15:50:33-0430");
fail_unless (gst_date_time_get_year (dt) == g_date_time_get_year (gdt2));
fail_unless (gst_date_time_get_month (dt) == g_date_time_get_month (gdt2));
fail_unless (gst_date_time_get_day (dt) ==
g_date_time_get_day_of_month (gdt2));
fail_unless (gst_date_time_get_hour (dt) == 15);
fail_unless (gst_date_time_get_minute (dt) == 50);
fail_unless (gst_date_time_get_second (dt) == 33);
gst_date_time_unref (dt);
dt = gst_date_time_new_from_iso8601_string ("15:50-0430");
fail_unless (gst_date_time_get_year (dt) == g_date_time_get_year (gdt2));
fail_unless (gst_date_time_get_month (dt) == g_date_time_get_month (gdt2));
fail_unless (gst_date_time_get_day (dt) ==
g_date_time_get_day_of_month (gdt2));
fail_unless (gst_date_time_get_hour (dt) == 15);
fail_unless (gst_date_time_get_minute (dt) == 50);
fail_unless (!gst_date_time_has_second (dt));
gst_date_time_unref (dt);
g_date_time_unref (gdt2);
}
| 1 |
[
"CWE-125"
] |
gstreamer
|
9398b7f1a75b38844ae7050b5a7967e4cdebe24f
| 129,521,491,408,753,320,000,000,000,000,000,000,000 | 322 |
datetime: fix potential out-of-bound read on malformed datetime string
https://bugzilla.gnome.org/show_bug.cgi?id=777263
|
static void jbd2_freeze_jh_data(struct journal_head *jh)
{
struct page *page;
int offset;
char *source;
struct buffer_head *bh = jh2bh(jh);
J_EXPECT_JH(jh, buffer_uptodate(bh), "Possible IO failure.\n");
page = bh->b_page;
offset = offset_in_page(bh->b_data);
source = kmap_atomic(page);
/* Fire data frozen trigger just before we copy the data */
jbd2_buffer_frozen_trigger(jh, source + offset, jh->b_triggers);
memcpy(jh->b_frozen_data, source + offset, bh->b_size);
kunmap_atomic(source);
/*
* Now that the frozen data is saved off, we need to store any matching
* triggers.
*/
jh->b_frozen_triggers = jh->b_triggers;
}
| 0 |
[
"CWE-787"
] |
linux
|
e09463f220ca9a1a1ecfda84fcda658f99a1f12a
| 315,162,550,773,451,560,000,000,000,000,000,000,000 | 22 |
jbd2: don't mark block as modified if the handle is out of credits
Do not set the b_modified flag in a block's journal head until after
we're sure that jbd2_journal_dirty_metadata() will not abort with an
error due to there not being enough space reserved in the jbd2 handle.
Otherwise, future attempts to modify the buffer may lead to a large
number of spurious errors and warnings.
This addresses CVE-2018-10883.
https://bugzilla.kernel.org/show_bug.cgi?id=200071
Signed-off-by: Theodore Ts'o <[email protected]>
Cc: [email protected]
|
static void jas_image_cmpt_destroy(jas_image_cmpt_t *cmpt)
{
if (cmpt->stream_) {
jas_stream_close(cmpt->stream_);
}
jas_free(cmpt);
}
| 0 |
[
"CWE-189"
] |
jasper
|
3c55b399c36ef46befcb21e4ebc4799367f89684
| 308,261,290,152,105,640,000,000,000,000,000,000,000 | 7 |
At many places in the code, jas_malloc or jas_recalloc was being
invoked with the size argument being computed in a manner that would not
allow integer overflow to be detected. Now, these places in the code
have been modified to use special-purpose memory allocation functions
(e.g., jas_alloc2, jas_alloc3, jas_realloc2) that check for overflow.
This should fix many security problems.
|
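A minimal sketch of what overflow-checking allocators like the jas_alloc2/jas_alloc3 helpers mentioned above typically look like; the names and exact behaviour below are illustrative stand-ins, not JasPer's implementations.

#include <stdlib.h>
#include <stdint.h>

/* Allocate num * size bytes, failing cleanly if the multiplication would
 * overflow instead of silently wrapping to a small request. */
static void *alloc2(size_t num, size_t size)
{
    if (size != 0 && num > SIZE_MAX / size)
        return NULL;
    return malloc(num * size);
}

/* Three-factor variant: num * width * height. */
static void *alloc3(size_t num, size_t width, size_t height)
{
    if (width != 0 && height > SIZE_MAX / width)
        return NULL;
    return alloc2(num, width * height);
}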
void Http1ServerConnectionImplTest::expect400(Protocol p, bool allow_absolute_url,
Buffer::OwnedImpl& buffer,
absl::string_view details) {
InSequence sequence;
if (allow_absolute_url) {
codec_settings_.allow_absolute_url_ = allow_absolute_url;
codec_ = std::make_unique<ServerConnectionImpl>(
connection_, http1CodecStats(), callbacks_, codec_settings_, max_request_headers_kb_,
max_request_headers_count_, envoy::config::core::v3::HttpProtocolOptions::ALLOW);
}
MockRequestDecoder decoder;
Http::ResponseEncoder* response_encoder = nullptr;
EXPECT_CALL(callbacks_, newStream(_, _))
.WillOnce(Invoke([&](ResponseEncoder& encoder, bool) -> RequestDecoder& {
response_encoder = &encoder;
return decoder;
}));
EXPECT_CALL(decoder, sendLocalReply(_, Http::Code::BadRequest, "Bad Request", _, _, _, _));
auto status = codec_->dispatch(buffer);
EXPECT_TRUE(isCodecProtocolError(status));
EXPECT_EQ(p, codec_->protocol());
if (!details.empty()) {
EXPECT_EQ(details, response_encoder->getStream().responseDetails());
}
}
| 0 |
[
"CWE-770"
] |
envoy
|
7ca28ff7d46454ae930e193d97b7d08156b1ba59
| 117,035,684,661,478,890,000,000,000,000,000,000,000 | 28 |
[http1] Include request URL in request header size computation, and reject partial headers that exceed configured limits (#145)
Signed-off-by: antonio <[email protected]>
|
basic_obj_respond_to(mrb_state *mrb, mrb_value obj, mrb_sym id, int pub)
{
return mrb_respond_to(mrb, obj, id);
}
| 0 |
[
"CWE-824"
] |
mruby
|
b64ce17852b180dfeea81cf458660be41a78974d
| 194,736,205,902,274,150,000,000,000,000,000,000,000 | 4 |
Should not call `initialize_copy` for `TT_ICLASS`; fix #4027
Since `TT_ICLASS` is an internal object that should never be revealed
to the Ruby world.
|
acl_fetch_shdr_val(struct proxy *px, struct session *l4, void *l7, int dir,
struct acl_expr *expr, struct acl_test *test)
{
struct http_txn *txn = l7;
if (!txn)
return 0;
if (txn->rsp.msg_state < HTTP_MSG_BODY)
return 0;
return acl_fetch_hdr_val(px, l4, txn, txn->rsp.sol, expr, test);
}
| 0 |
[] |
haproxy-1.4
|
dc80672211e085c211f1fc47e15cfe57ab587d38
| 76,896,009,605,335,550,000,000,000,000,000,000,000 | 13 |
BUG/CRITICAL: using HTTP information in tcp-request content may crash the process
During normal HTTP request processing, request buffers are realigned if
there are less than global.maxrewrite bytes available after them, in
order to leave enough room for rewriting headers after the request. This
is done in http_wait_for_request().
However, if some HTTP inspection happens during a "tcp-request content"
rule, this realignment is not performed. In theory this is not a problem
because empty buffers are always aligned and TCP inspection happens at
the beginning of a connection. But with HTTP keep-alive, it also happens
at the beginning of each subsequent request. So if a second request was
pipelined by the client before the first one had a chance to be forwarded,
the second request will not be realigned. Then, http_wait_for_request()
will not perform such a realignment either because the request was
already parsed and marked as such. The consequence of this is that the
rewrite of a sufficient number of such pipelined, unaligned requests may
leave less room past the request being processed than the configured
reserve, which can lead to a buffer overflow if request processing appends
some data past the end of the buffer.
A number of conditions are required for the bug to be triggered :
- HTTP keep-alive must be enabled ;
- HTTP inspection in TCP rules must be used ;
- some request appending rules are needed (reqadd, x-forwarded-for)
- since empty buffers are always realigned, the client must pipeline
enough requests so that the buffer always contains something till
the point where there is no more room for rewriting.
While such a configuration is quite unlikely to be met (which is
confirmed by the bug's lifetime), a few people do use these features
together for very specific usages. And more importantly, writing such
a configuration and the request to attack it is trivial.
A quick workaround consists in forcing keep-alive off by adding
"option httpclose" or "option forceclose" in the frontend. Alternatively,
disabling HTTP-based TCP inspection rules is enough if the application
supports it.
At first glance, this bug does not look like it could lead to remote code
execution, as the overflowing part is controlled by the configuration and
not by the user. But some deeper analysis should be performed to confirm
this. And anyway, corrupting the process' memory and crashing it is quite
trivial.
Special thanks go to Yves Lafon from the W3C who reported this bug and
deployed significant efforts to collect the relevant data needed to
understand it in less than one week.
CVE-2013-1912 was assigned to this issue.
Note that 1.4 is also affected so the fix must be backported.
(cherry picked from commit aae75e3279c6c9bd136413a72dafdcd4986bb89a)
|
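The realignment the HAProxy message describes boils down to: before parsing a request, if the free space left after the pending data is smaller than the rewrite reserve, slide the data back to the start of the buffer. A simplified sketch with an invented flat-buffer layout (HAProxy's real buffer structure is ring-like and more involved):

#include <string.h>
#include <stddef.h>

struct reqbuf {
    char  *data;     /* start of storage */
    size_t size;     /* total storage */
    size_t off;      /* offset of pending request bytes */
    size_t len;      /* number of pending bytes */
};

/* Keep at least 'reserve' bytes after the pending data so later header
 * rewrites (reqadd, x-forwarded-for, ...) cannot run off the end. */
static void realign_if_needed(struct reqbuf *b, size_t reserve)
{
    size_t room_after = b->size - (b->off + b->len);

    if (room_after < reserve && b->off > 0) {
        memmove(b->data, b->data + b->off, b->len);
        b->off = 0;
    }
}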
void Inspect::operator()(Custom_Error_Ptr e)
{
append_token(e->message(), e);
}
| 0 |
[
"CWE-476"
] |
libsass
|
38f4c3699d06b64128bebc7cf1e8b3125be74dc4
| 201,017,811,934,433,740,000,000,000,000,000,000,000 | 4 |
Fix possible bug with handling empty reference combinators
Fixes #2665
|
static inline void crypto_drop_ahash(struct crypto_ahash_spawn *spawn)
{
crypto_drop_spawn(&spawn->base);
}
| 0 |
[
"CWE-835"
] |
linux
|
ef0579b64e93188710d48667cb5e014926af9f1b
| 66,533,193,039,656,940,000,000,000,000,000,000,000 | 4 |
crypto: ahash - Fix EINPROGRESS notification callback
The ahash API modifies the request's callback function in order
to clean up after itself in some corner cases (unaligned final
and missing finup).
When the request is complete ahash will restore the original
callback and everything is fine. However, when the request gets
an EBUSY on a full queue, an EINPROGRESS callback is made while
the request is still ongoing.
In this case the ahash API will incorrectly call its own callback.
This patch fixes the problem by creating a temporary request
object on the stack which is used to relay EINPROGRESS back to
the original completion function.
This patch also adds code to preserve the original flags value.
Fixes: ab6bf4e5e5e4 ("crypto: hash - Fix the pointer voodoo in...")
Cc: <[email protected]>
Reported-by: Sabrina Dubroca <[email protected]>
Tested-by: Sabrina Dubroca <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
static int md5_import(struct shash_desc *desc, const void *in)
{
struct md5_state *ctx = shash_desc_ctx(desc);
memcpy(ctx, in, sizeof(*ctx));
return 0;
}
| 0 |
[
"CWE-703"
] |
linux
|
bc0b96b54a21246e377122d54569eef71cec535f
| 272,369,897,977,253,500,000,000,000,000,000,000,000 | 7 |
crypto: Move md5_transform to lib/md5.c
We are going to use this for TCP/IP sequence number and fragment ID
generation.
Signed-off-by: David S. Miller <[email protected]>
|
void cgi_load_variables(void)
{
static char *line;
char *p, *s, *tok;
int len, i;
FILE *f = stdin;
#ifdef DEBUG_COMMENTS
char dummy[100]="";
print_title(dummy);
d_printf("<!== Start dump in cgi_load_variables() %s ==>\n",__FILE__);
#endif
if (!content_length) {
p = getenv("CONTENT_LENGTH");
len = p?atoi(p):0;
} else {
len = content_length;
}
if (len > 0 &&
(request_post ||
((s=getenv("REQUEST_METHOD")) &&
strequal(s,"POST")))) {
while (len && (line=grab_line(f, &len))) {
p = strchr_m(line,'=');
if (!p) continue;
*p = 0;
variables[num_variables].name = SMB_STRDUP(line);
variables[num_variables].value = SMB_STRDUP(p+1);
SAFE_FREE(line);
if (!variables[num_variables].name ||
!variables[num_variables].value)
continue;
plus_to_space_unescape(variables[num_variables].value);
rfc1738_unescape(variables[num_variables].value);
plus_to_space_unescape(variables[num_variables].name);
rfc1738_unescape(variables[num_variables].name);
#ifdef DEBUG_COMMENTS
printf("<!== POST var %s has value \"%s\" ==>\n",
variables[num_variables].name,
variables[num_variables].value);
#endif
num_variables++;
if (num_variables == MAX_VARIABLES) break;
}
}
fclose(stdin);
open("/dev/null", O_RDWR);
if ((s=query_string) || (s=getenv("QUERY_STRING"))) {
char *saveptr;
for (tok=strtok_r(s, "&;", &saveptr); tok;
tok=strtok_r(NULL, "&;", &saveptr)) {
p = strchr_m(tok,'=');
if (!p) continue;
*p = 0;
variables[num_variables].name = SMB_STRDUP(tok);
variables[num_variables].value = SMB_STRDUP(p+1);
if (!variables[num_variables].name ||
!variables[num_variables].value)
continue;
plus_to_space_unescape(variables[num_variables].value);
rfc1738_unescape(variables[num_variables].value);
plus_to_space_unescape(variables[num_variables].name);
rfc1738_unescape(variables[num_variables].name);
#ifdef DEBUG_COMMENTS
printf("<!== Commandline var %s has value \"%s\" ==>\n",
variables[num_variables].name,
variables[num_variables].value);
#endif
num_variables++;
if (num_variables == MAX_VARIABLES) break;
}
}
#ifdef DEBUG_COMMENTS
printf("<!== End dump in cgi_load_variables() ==>\n");
#endif
/* variables from the client are in UTF-8 - convert them
to our internal unix charset before use */
for (i=0;i<num_variables;i++) {
TALLOC_CTX *frame = talloc_stackframe();
char *dest = NULL;
size_t dest_len;
convert_string_talloc(frame, CH_UTF8, CH_UNIX,
variables[i].name, strlen(variables[i].name),
&dest, &dest_len, True);
SAFE_FREE(variables[i].name);
variables[i].name = SMB_STRDUP(dest ? dest : "");
dest = NULL;
convert_string_talloc(frame, CH_UTF8, CH_UNIX,
variables[i].value, strlen(variables[i].value),
&dest, &dest_len, True);
SAFE_FREE(variables[i].value);
variables[i].value = SMB_STRDUP(dest ? dest : "");
TALLOC_FREE(frame);
}
}
| 0 |
[] |
samba
|
91f4275873ebeda8f57684f09df67162ae80515a
| 321,422,790,982,812,880,000,000,000,000,000,000,000 | 116 |
swat: Use additional nonce on XSRF protection
If the user had a weak password on the root account of a machine running
SWAT, there still was a chance of being targetted by an XSRF on a
malicious web site targetting the SWAT setup.
Use a random nonce stored in secrets.tdb to close this possible attack
window. Thanks to Jann Horn for reporting this issue.
Signed-off-by: Kai Blin <[email protected]>
Fix bug #9577: CVE-2013-0214: Potential XSRF in SWAT.
|
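A toy sketch of the nonce-hardened XSRF token idea from the message above: mix a secret per-installation nonce into the token each form must echo back, and compare the submitted value in constant time. The token construction and field choices are illustrative, not SWAT's actual scheme.

#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/crypto.h>
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* Token bound to the secret nonce plus user and page name. */
static unsigned int make_token(const unsigned char *nonce, size_t nonce_len,
                               const char *user, const char *page,
                               unsigned char out[32])
{
    unsigned int len = 0;
    char msg[256];

    snprintf(msg, sizeof(msg), "%s:%s", user, page);
    HMAC(EVP_sha256(), nonce, (int) nonce_len,
         (const unsigned char *) msg, strlen(msg), out, &len);
    return len;
}

/* Verify a submitted token without leaking timing information. */
static bool check_token(const unsigned char *expected,
                        const unsigned char *got, size_t len)
{
    return CRYPTO_memcmp(expected, got, len) == 0;
}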
TEST_F(HeaderToMetadataTest, BadProtobufValueTypeInBase64UrlTest) {
const std::string response_config_yaml = R"EOF(
response_rules:
- header: x-authenticated
on_header_present:
key: auth
type: PROTOBUF_VALUE
encode: BASE64
)EOF";
initializeFilter(response_config_yaml);
std::string data = "invalid";
const auto encoded = Base64::encode(data.c_str(), data.size());
Http::TestResponseHeaderMapImpl incoming_headers{{"x-authenticated", encoded}};
EXPECT_CALL(encoder_callbacks_, streamInfo()).WillRepeatedly(ReturnRef(req_info_));
EXPECT_CALL(req_info_, setDynamicMetadata(_, _)).Times(0);
EXPECT_EQ(Http::FilterHeadersStatus::Continue, filter_->encodeHeaders(incoming_headers, false));
}
| 0 |
[] |
envoy
|
2c60632d41555ec8b3d9ef5246242be637a2db0f
| 268,223,430,636,782,580,000,000,000,000,000,000,000 | 18 |
http: header map security fixes for duplicate headers (#197)
Previously header matching did not match on all headers for
non-inline headers. This patch changes the default behavior to
always logically match on all headers. Multiple individual
headers will be logically concatenated with ',' similar to what
is done with inline headers. This makes the behavior effectively
consistent. This behavior can be temporary reverted by setting
the runtime value "envoy.reloadable_features.header_match_on_all_headers"
to "false".
Targeted fixes have been additionally performed on the following
extensions which make them consider all duplicate headers by default as
a comma concatenated list:
1) Any extension using CEL matching on headers.
2) The header to metadata filter.
3) The JWT filter.
4) The Lua filter.
Like primary header matching used in routing, RBAC, etc. this behavior
can be disabled by setting the runtime value
"envoy.reloadable_features.header_match_on_all_headers" to false.
Finally, the setCopy() header map API previously only set the first
header in the case of duplicate non-inline headers. setCopy() now
behaves similarly to the other set*() APIs and replaces all found
headers with a single value. This may have had security implications
in the extauth filter which uses this API. This behavior can be disabled
by setting the runtime value
"envoy.reloadable_features.http_set_copy_replace_all_headers" to false.
Fixes https://github.com/envoyproxy/envoy-setec/issues/188
Signed-off-by: Matt Klein <[email protected]>
|
dwg_free_object (Dwg_Object *obj)
{
int error = 0;
long unsigned int j;
Dwg_Data *dwg;
Bit_Chain *dat = &pdat;
if (obj && obj->parent)
{
dwg = obj->parent;
dat->version = dwg->header.version;
}
else
return;
if (obj->type == DWG_TYPE_FREED || obj->tio.object == NULL)
return;
dat->from_version = dat->version;
if (obj->supertype == DWG_SUPERTYPE_UNKNOWN)
goto unhandled;
switch (obj->type)
{
case DWG_TYPE_TEXT:
dwg_free_TEXT (dat, obj);
break;
case DWG_TYPE_ATTRIB:
dwg_free_ATTRIB (dat, obj);
break;
case DWG_TYPE_ATTDEF:
dwg_free_ATTDEF (dat, obj);
break;
case DWG_TYPE_BLOCK:
dwg_free_BLOCK (dat, obj);
break;
case DWG_TYPE_ENDBLK:
dwg_free_ENDBLK (dat, obj);
break;
case DWG_TYPE_SEQEND:
dwg_free_SEQEND (dat, obj);
break;
case DWG_TYPE_INSERT:
dwg_free_INSERT (dat, obj);
break;
case DWG_TYPE_MINSERT:
dwg_free_MINSERT (dat, obj);
break;
case DWG_TYPE_VERTEX_2D:
dwg_free_VERTEX_2D (dat, obj);
break;
case DWG_TYPE_VERTEX_3D:
dwg_free_VERTEX_3D (dat, obj);
break;
case DWG_TYPE_VERTEX_MESH:
dwg_free_VERTEX_MESH (dat, obj);
break;
case DWG_TYPE_VERTEX_PFACE:
dwg_free_VERTEX_PFACE (dat, obj);
break;
case DWG_TYPE_VERTEX_PFACE_FACE:
dwg_free_VERTEX_PFACE_FACE (dat, obj);
break;
case DWG_TYPE_POLYLINE_2D:
dwg_free_POLYLINE_2D (dat, obj);
break;
case DWG_TYPE_POLYLINE_3D:
dwg_free_POLYLINE_3D (dat, obj);
break;
case DWG_TYPE_ARC:
dwg_free_ARC (dat, obj);
break;
case DWG_TYPE_CIRCLE:
dwg_free_CIRCLE (dat, obj);
break;
case DWG_TYPE_LINE:
dwg_free_LINE (dat, obj);
break;
case DWG_TYPE_DIMENSION_ORDINATE:
dwg_free_DIMENSION_ORDINATE (dat, obj);
break;
case DWG_TYPE_DIMENSION_LINEAR:
dwg_free_DIMENSION_LINEAR (dat, obj);
break;
case DWG_TYPE_DIMENSION_ALIGNED:
dwg_free_DIMENSION_ALIGNED (dat, obj);
break;
case DWG_TYPE_DIMENSION_ANG3PT:
dwg_free_DIMENSION_ANG3PT (dat, obj);
break;
case DWG_TYPE_DIMENSION_ANG2LN:
dwg_free_DIMENSION_ANG2LN (dat, obj);
break;
case DWG_TYPE_DIMENSION_RADIUS:
dwg_free_DIMENSION_RADIUS (dat, obj);
break;
case DWG_TYPE_DIMENSION_DIAMETER:
dwg_free_DIMENSION_DIAMETER (dat, obj);
break;
case DWG_TYPE_POINT:
dwg_free_POINT (dat, obj);
break;
case DWG_TYPE__3DFACE:
dwg_free__3DFACE (dat, obj);
break;
case DWG_TYPE_POLYLINE_PFACE:
dwg_free_POLYLINE_PFACE (dat, obj);
break;
case DWG_TYPE_POLYLINE_MESH:
dwg_free_POLYLINE_MESH (dat, obj);
break;
case DWG_TYPE_SOLID:
dwg_free_SOLID (dat, obj);
break;
case DWG_TYPE_TRACE:
dwg_free_TRACE (dat, obj);
break;
case DWG_TYPE_SHAPE:
dwg_free_SHAPE (dat, obj);
break;
case DWG_TYPE_VIEWPORT:
dwg_free_VIEWPORT (dat, obj);
break;
case DWG_TYPE_ELLIPSE:
dwg_free_ELLIPSE (dat, obj);
break;
case DWG_TYPE_SPLINE:
dwg_free_SPLINE (dat, obj);
break;
case DWG_TYPE_REGION:
dwg_free_REGION (dat, obj);
break;
case DWG_TYPE__3DSOLID:
dwg_free__3DSOLID (dat, obj);
break; /* Check the type of the object */
case DWG_TYPE_BODY:
dwg_free_BODY (dat, obj);
break;
case DWG_TYPE_RAY:
dwg_free_RAY (dat, obj);
break;
case DWG_TYPE_XLINE:
dwg_free_XLINE (dat, obj);
break;
case DWG_TYPE_DICTIONARY:
dwg_free_DICTIONARY (dat, obj);
break;
case DWG_TYPE_MTEXT:
dwg_free_MTEXT (dat, obj);
break;
case DWG_TYPE_LEADER:
dwg_free_LEADER (dat, obj);
break;
case DWG_TYPE_TOLERANCE:
dwg_free_TOLERANCE (dat, obj);
break;
case DWG_TYPE_MLINE:
dwg_free_MLINE (dat, obj);
break;
case DWG_TYPE_BLOCK_CONTROL:
dwg_free_BLOCK_CONTROL (dat, obj);
break;
case DWG_TYPE_BLOCK_HEADER:
dwg_free_BLOCK_HEADER (dat, obj);
break;
case DWG_TYPE_LAYER_CONTROL:
dwg_free_LAYER_CONTROL (dat, obj);
break;
case DWG_TYPE_LAYER:
dwg_free_LAYER (dat, obj);
break;
case DWG_TYPE_STYLE_CONTROL:
dwg_free_STYLE_CONTROL (dat, obj);
break;
case DWG_TYPE_STYLE:
dwg_free_STYLE (dat, obj);
break;
case DWG_TYPE_LTYPE_CONTROL:
dwg_free_LTYPE_CONTROL (dat, obj);
break;
case DWG_TYPE_LTYPE:
dwg_free_LTYPE (dat, obj);
break;
case DWG_TYPE_VIEW_CONTROL:
dwg_free_VIEW_CONTROL (dat, obj);
break;
case DWG_TYPE_VIEW:
dwg_free_VIEW (dat, obj);
break;
case DWG_TYPE_UCS_CONTROL:
dwg_free_UCS_CONTROL (dat, obj);
break;
case DWG_TYPE_UCS:
dwg_free_UCS (dat, obj);
break;
case DWG_TYPE_VPORT_CONTROL:
dwg_free_VPORT_CONTROL (dat, obj);
break;
case DWG_TYPE_VPORT:
dwg_free_VPORT (dat, obj);
break;
case DWG_TYPE_APPID_CONTROL:
dwg_free_APPID_CONTROL (dat, obj);
break;
case DWG_TYPE_APPID:
dwg_free_APPID (dat, obj);
break;
case DWG_TYPE_DIMSTYLE_CONTROL:
dwg_free_DIMSTYLE_CONTROL (dat, obj);
break;
case DWG_TYPE_DIMSTYLE:
dwg_free_DIMSTYLE (dat, obj);
break;
case DWG_TYPE_VPORT_ENTITY_CONTROL:
dwg_free_VPORT_ENTITY_CONTROL (dat, obj);
break;
case DWG_TYPE_VPORT_ENTITY_HEADER:
dwg_free_VPORT_ENTITY_HEADER (dat, obj);
break;
case DWG_TYPE_GROUP:
dwg_free_GROUP (dat, obj);
break;
case DWG_TYPE_MLINESTYLE:
dwg_free_MLINESTYLE (dat, obj);
break;
case DWG_TYPE_OLE2FRAME:
dwg_free_OLE2FRAME (dat, obj);
break;
case DWG_TYPE_DUMMY:
dwg_free_DUMMY (dat, obj);
break;
case DWG_TYPE_LONG_TRANSACTION:
dwg_free_LONG_TRANSACTION (dat, obj);
break;
case DWG_TYPE_LWPOLYLINE:
dwg_free_LWPOLYLINE (dat, obj);
break;
case DWG_TYPE_HATCH:
dwg_free_HATCH (dat, obj);
break;
case DWG_TYPE_XRECORD:
dwg_free_XRECORD (dat, obj);
break;
case DWG_TYPE_PLACEHOLDER:
dwg_free_PLACEHOLDER (dat, obj);
break;
case DWG_TYPE_OLEFRAME:
dwg_free_OLEFRAME (dat, obj);
break;
#ifdef DEBUG_VBA_PROJECT
case DWG_TYPE_VBA_PROJECT:
dwg_free_VBA_PROJECT (dat, obj);
break;
#endif
case DWG_TYPE_LAYOUT:
dwg_free_LAYOUT (dat, obj);
break;
case DWG_TYPE_PROXY_ENTITY:
dwg_free_PROXY_ENTITY (dat, obj);
break;
case DWG_TYPE_PROXY_OBJECT:
dwg_free_PROXY_OBJECT (dat, obj);
break;
default:
if (obj->type == obj->parent->layout_type)
{
SINCE (R_13)
{
dwg_free_LAYOUT (dat, obj); // XXX avoid double-free, esp. in eed
}
}
else if ((error = dwg_free_variable_type (obj->parent, obj))
& DWG_ERR_UNHANDLEDCLASS)
{
int is_entity;
int i;
Dwg_Class *klass;
unhandled:
is_entity = 0;
i = obj->type - 500;
klass = NULL;
dwg = obj->parent;
if (dwg->dwg_class && i >= 0 && i < (int)dwg->num_classes)
{
klass = &dwg->dwg_class[i];
is_entity = klass ? dwg_class_is_entity (klass) : 0;
}
// indxf (and later injson) already creates some DEBUGGING classes
if (obj->fixedtype == DWG_TYPE_TABLE)
{
// just the preview, i.e. common. plus some colors: leak
dwg_free_UNKNOWN_ENT (dat, obj);
}
else if (obj->fixedtype == DWG_TYPE_DATATABLE)
{
dwg_free_UNKNOWN_OBJ (dat, obj);
}
else if (klass && !is_entity)
{
dwg_free_UNKNOWN_OBJ (dat, obj);
}
else if (klass && is_entity)
{
dwg_free_UNKNOWN_ENT (dat, obj);
}
else // not a class
{
FREE_IF (obj->tio.unknown);
}
}
}
/* With this importer the dxfname is dynamic, just the name is const */
if (dwg->opts & DWG_OPTS_INDXF)
FREE_IF (obj->dxfname);
obj->type = DWG_TYPE_FREED;
}
| 1 |
[
"CWE-703",
"CWE-835"
] |
libredwg
|
c6f6668b82bfe595899cc820279ac37bb9ef16f5
| 152,824,459,625,928,600,000,000,000,000,000,000,000 | 317 |
cleanup tio.unknown
not needed anymore, we only have UNKNOWN_OBJ or UNKNOWN_ENT with full common
entity_data.
Fixes GH #178 heap_overflow2
|
static inline bool is_supported_parser_charset(CHARSET_INFO *cs)
{
return MY_TEST(cs->mbminlen == 1 && cs->number != 17 /* filename */);
}
| 0 |
[
"CWE-416"
] |
server
|
4681b6f2d8c82b4ec5cf115e83698251963d80d5
| 49,443,100,334,604,240,000,000,000,000,000,000,000 | 4 |
MDEV-26281 ASAN use-after-poison when complex conversion is involved in blob
the bug was that in_vector array in Item_func_in was allocated in the
statement arena, not in the table->expr_arena.
revert part of the 5acd391e8b2d. Instead, change the arena correctly
in fix_all_session_vcol_exprs().
Remove TABLE_ARENA, that was introduced in 5acd391e8b2d to force
item tree changes to be rolled back (because they were allocated in the
wrong arena and didn't persist. now they do)
|
convertToLinear (unsigned short s[16])
{
for (int i = 0; i < 16; ++i)
s[i] = logTable[s[i]];
}
| 0 |
[
"CWE-190"
] |
openexr
|
eec0dba242bedd2778c973ae4af112107b33d9c9
| 302,670,966,695,398,600,000,000,000,000,000,000,000 | 5 |
fix undefined behavior: ignore unused bits in B44 mode detection (#832)
Signed-off-by: Peter Hillman <[email protected]>
|
static int alloc_debug_processing(struct kmem_cache *s, struct page *page,
void *object, void *addr)
{
if (!check_slab(s, page))
goto bad;
if (!on_freelist(s, page, object)) {
object_err(s, page, object, "Object already allocated");
goto bad;
}
if (!check_valid_pointer(s, page, object)) {
object_err(s, page, object, "Freelist Pointer check fails");
goto bad;
}
if (!check_object(s, page, object, 0))
goto bad;
/* Success perform special debug activities for allocs */
if (s->flags & SLAB_STORE_USER)
set_track(s, object, TRACK_ALLOC, addr);
trace(s, page, object, 1);
init_object(s, object, 1);
return 1;
bad:
if (PageSlab(page)) {
/*
* If this is a slab page then lets do the best we can
* to avoid issues in the future. Marking all objects
* as used avoids touching the remaining objects.
*/
slab_fix(s, "Marking all objects used");
page->inuse = page->objects;
page->freelist = NULL;
}
return 0;
}
| 0 |
[
"CWE-189"
] |
linux
|
f8bd2258e2d520dff28c855658bd24bdafb5102d
| 226,878,867,428,597,900,000,000,000,000,000,000,000 | 39 |
remove div_long_long_rem
x86 is the only arch right now that provides an optimized version of
div_long_long_rem, and it has the downside that one has to be very careful that
the divide doesn't overflow.
The API is a little awkward, as the arguments for the unsigned divide are
signed. The signed version also doesn't handle a negative divisor and
produces worse code on 64bit archs.
There is little incentive to keep this API alive, so this converts the few
users to the new API.
Signed-off-by: Roman Zippel <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: john stultz <[email protected]>
Cc: Christoph Lameter <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
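As a hedged illustration of the arithmetic behind the removed helper, the sketch below computes a 64-bit quotient and remainder in standard, user-space C using lldiv(); the kernel itself uses helpers such as do_div(), so none of the names below are kernel APIs.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    long long ns = 1234567890123LL;   /* e.g. a nanosecond count */
    long long hz = 1000000000LL;      /* divisor */

    /* Standard C: lldiv() returns quotient and remainder together. */
    lldiv_t r = lldiv(ns, hz);

    /* Equivalent open-coded form; a full 64-bit '/' and '%' cannot
     * overflow the way a 64/32 helper like div_long_long_rem could. */
    long long q   = ns / hz;
    long long rem = ns % hz;

    printf("q=%lld rem=%lld (lldiv: %lld %lld)\n", q, rem, r.quot, r.rem);
    return (q == r.quot && rem == r.rem) ? 0 : 1;
}
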
flatpak_dir_set_active (FlatpakDir *self,
FlatpakDecomposed *ref,
const char *active_id,
GCancellable *cancellable,
GError **error)
{
gboolean ret = FALSE;
g_autoptr(GFile) deploy_base = NULL;
g_autoptr(GFile) active_tmp_link = NULL;
g_autoptr(GFile) active_link = NULL;
g_autoptr(GError) my_error = NULL;
g_autofree char *tmpname = g_strdup (".active-XXXXXX");
deploy_base = flatpak_dir_get_deploy_dir (self, ref);
active_link = g_file_get_child (deploy_base, "active");
if (active_id != NULL)
{
glnx_gen_temp_name (tmpname);
active_tmp_link = g_file_get_child (deploy_base, tmpname);
if (!g_file_make_symbolic_link (active_tmp_link, active_id, cancellable, error))
goto out;
if (!flatpak_file_rename (active_tmp_link,
active_link,
cancellable, error))
goto out;
}
else
{
if (!g_file_delete (active_link, cancellable, &my_error) &&
!g_error_matches (my_error, G_IO_ERROR, G_IO_ERROR_NOT_FOUND))
{
g_propagate_error (error, my_error);
my_error = NULL;
goto out;
}
}
ret = TRUE;
out:
return ret;
}
| 0 |
[
"CWE-74"
] |
flatpak
|
fb473cad801c6b61706353256cab32330557374a
| 73,805,730,378,474,750,000,000,000,000,000,000,000 | 43 |
dir: Pass environment via bwrap --setenv when running apply_extra
This means we can systematically pass the environment variables
through bwrap(1), even if it is setuid and thus is filtering out
security-sensitive environment variables. bwrap ends up being
run with an empty environment instead.
As with the previous commit, this regressed while fixing CVE-2021-21261.
Fixes: 6d1773d2 "run: Convert all environment variables into bwrap arguments"
Signed-off-by: Simon McVittie <[email protected]>
|
ecma_string_pad (ecma_value_t original_string_p, /**< Input ecma string */
ecma_value_t max_length, /**< Length to pad to, including original length */
ecma_value_t fill_string, /**< The string to pad with */
bool pad_on_start) /**< true - if we are padding to the start, calling with padStart
false - if we are padding to the end, calling with padEnd */
{
/* 3 */
ecma_length_t int_max_length;
if (ECMA_IS_VALUE_ERROR (ecma_op_to_length (max_length, &int_max_length)))
{
return ECMA_VALUE_ERROR;
}
/* 4 */
ecma_string_t *original_str_val_p = ecma_get_string_from_value (original_string_p);
const uint32_t string_length = ecma_string_get_length (original_str_val_p);
/* 5 */
if (int_max_length <= string_length)
{
ecma_ref_ecma_string (original_str_val_p);
return original_string_p;
}
ecma_string_t *filler_p = ecma_get_magic_string (LIT_MAGIC_STRING_SPACE_CHAR);
/* 6 - 7 */
if (!ecma_is_value_undefined (fill_string))
{
filler_p = ecma_op_to_string (fill_string);
if (filler_p == NULL)
{
return ECMA_VALUE_ERROR;
}
if (ecma_string_is_empty (filler_p))
{
ecma_ref_ecma_string (original_str_val_p);
return original_string_p;
}
}
if (int_max_length >= UINT32_MAX)
{
ecma_deref_ecma_string (filler_p);
return ecma_raise_range_error (ECMA_ERR_MSG ("Maximum string length is reached"));
}
/* 9 */
uint32_t fill_len = (uint32_t) int_max_length - string_length;
/* 10 */
uint32_t filler_length = ecma_string_get_length (filler_p);
uint32_t prepend_count = fill_len / filler_length;
ecma_stringbuilder_t builder = ecma_stringbuilder_create ();
if (!pad_on_start)
{
ecma_stringbuilder_append (&builder, original_str_val_p);
}
for (uint32_t i = 0; i < prepend_count; i++)
{
ecma_stringbuilder_append (&builder, filler_p);
}
uint32_t remaining = fill_len - (prepend_count * filler_length);
ECMA_STRING_TO_UTF8_STRING (filler_p, start_p, utf8_str_size);
const lit_utf8_byte_t *temp_start_p = start_p;
while (remaining > 0)
{
ecma_char_t ch;
lit_utf8_size_t read_size = lit_read_code_unit_from_cesu8 (temp_start_p, &ch);
ecma_stringbuilder_append_char (&builder, ch);
temp_start_p += read_size;
remaining--;
}
ECMA_FINALIZE_UTF8_STRING (start_p, utf8_str_size);
ecma_deref_ecma_string (filler_p);
/* 11 - 12 */
if (pad_on_start)
{
ecma_stringbuilder_append (&builder, original_str_val_p);
}
return ecma_make_string_value (ecma_stringbuilder_finalize (&builder));
} /* ecma_string_pad */
| 0 |
[
"CWE-416"
] |
jerryscript
|
3bcd48f72d4af01d1304b754ef19fe1a02c96049
| 209,704,052,164,652,940,000,000,000,000,000,000,000 | 86 |
Improve parse_identifier (#4691)
Ascii string length is no longer computed during string allocation.
JerryScript-DCO-1.0-Signed-off-by: Daniel Batiz [email protected]
|
static SQInteger array_insert(HSQUIRRELVM v)
{
SQObject &o=stack_get(v,1);
SQObject &idx=stack_get(v,2);
SQObject &val=stack_get(v,3);
if(!_array(o)->Insert(tointeger(idx),val))
return sq_throwerror(v,_SC("index out of range"));
sq_pop(v,2);
return 1;
}
| 0 |
[
"CWE-703",
"CWE-787"
] |
squirrel
|
a6413aa690e0bdfef648c68693349a7b878fe60d
| 122,788,159,013,536,350,000,000,000,000,000,000,000 | 10 |
fix in thread.call
|
static json_t * oidc_get_state_from_cookie(request_rec *r, oidc_cfg *c,
const char *cookieValue) {
json_t *result = NULL;
oidc_util_jwt_verify(r, c->crypto_passphrase, cookieValue, &result);
return result;
}
| 0 |
[
"CWE-20"
] |
mod_auth_openidc
|
612e309bfffd6f9b8ad7cdccda3019fc0865f3b4
| 216,959,981,298,358,700,000,000,000,000,000,000,000 | 6 |
don't echo query params on invalid requests to redirect URI; closes #212
thanks @LukasReschke; I'm sure there's some OWASP guideline that warns
against this
|
static int php_stdiop_flush(php_stream *stream)
{
php_stdio_stream_data *data = (php_stdio_stream_data*)stream->abstract;
assert(data != NULL);
/*
* stdio buffers data in user land. By calling fflush(3), this
 * data is sent to the kernel using write(2). fsync'ing is
* something completely different.
*/
if (data->file) {
return fflush(data->file);
}
return 0;
}
| 0 |
[
"CWE-264"
] |
php-src
|
e3133e4db70476fb7adfdedb738483e2255ce0e1
| 261,124,643,189,467,000,000,000,000,000,000,000,000 | 16 |
Fix bug #77630 - safer rename() procedure
In order to rename safer, we do the following:
- set umask to 077 (unfortunately, not TS, so excluding ZTS)
- chown() first, to set proper group before allowing group access
- chmod() after, even if chown() fails
|
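A minimal sketch of the ordering described above (restrictive umask, chown() before chmod(), chmod() even if chown() fails), assuming plain POSIX calls; the helper name, the uid/gid parameters and the error reporting are illustrative, not PHP's actual stream code.

#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Illustrative only: move a file into place without a window in which
 * the destination is readable by the wrong owner or group. */
static int safer_rename(const char *src, const char *dst,
                        uid_t uid, gid_t gid, mode_t mode)
{
    /* 1. Restrictive umask while the destination may be created. */
    mode_t old_umask = umask(077);

    int ret = rename(src, dst);
    if (ret == 0) {
        /* 2. Fix ownership first, so the group bits granted by the
         *    later chmod() apply to the intended group. */
        if (chown(dst, uid, gid) != 0)
            perror("chown");
        /* 3. chmod() afterwards, even if chown() failed. */
        if (chmod(dst, mode) != 0)
            perror("chmod");
    }

    umask(old_umask);
    return ret;
}
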
bool Parser::peek_newline(const char* start)
{
return peek_linefeed(start ? start : position)
&& ! peek_css<exactly<'{'>>(start);
}
| 0 |
[
"CWE-125"
] |
libsass
|
eb15533b07773c30dc03c9d742865604f47120ef
| 260,768,409,840,141,600,000,000,000,000,000,000,000 | 5 |
Fix memory leak in `parse_ie_keyword_arg`
`kwd_arg` would never get freed when there was a parse error in
`parse_ie_keyword_arg`.
Closes #2656
|
static void __exit snd_msnd_exit(void)
{
#ifdef CONFIG_PNP
if (pnp_registered)
pnp_unregister_card_driver(&msnd_pnpc_driver);
if (isa_registered)
#endif
isa_unregister_driver(&snd_msnd_driver);
}
| 0 |
[
"CWE-125",
"CWE-401"
] |
linux
|
20e2b791796bd68816fa115f12be5320de2b8021
| 91,299,725,954,360,920,000,000,000,000,000,000,000 | 9 |
ALSA: msnd: Optimize / harden DSP and MIDI loops
The ISA msnd drivers have loops fetching the ring-buffer head, tail
and size values inside the loops. Such code is inefficient and
fragile.
This patch optimizes it, and also adds the sanity check to avoid the
endless loops.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=196131
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=196133
Signed-off-by: Takashi Iwai <[email protected]>
|
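A hedged sketch of the hardening pattern the message describes: snapshot the ring-buffer indices once, validate them, and bound the loop so a misbehaving device cannot cause an endless loop. The structure layout and the limits are invented for the example and are not the msnd driver's real ones.

#include <stdint.h>

struct ring {
    volatile uint16_t head;     /* advanced by the device  */
    volatile uint16_t tail;     /* advanced by the driver  */
    uint16_t size;              /* number of slots         */
    uint16_t data[256];
};

/* Re-reading head/tail/size inside the loop is what made the original
 * code fragile; here they are read once and sanity-checked. */
static int drain_ring(struct ring *r, void (*handle)(uint16_t))
{
    uint16_t head = r->head;            /* snapshot once */
    uint16_t tail = r->tail;
    uint16_t size = r->size;

    if (size == 0 || size > 256 || head >= size || tail >= size)
        return -1;                      /* corrupted state: bail out */

    unsigned int budget = size;         /* hard bound on iterations */
    while (tail != head && budget-- > 0) {
        handle(r->data[tail]);
        tail = (uint16_t)((tail + 1) % size);
    }
    r->tail = tail;
    return 0;
}
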
hwaddr virtio_queue_get_addr(VirtIODevice *vdev, int n)
{
return vdev->vq[n].pa;
}
| 0 |
[
"CWE-94"
] |
qemu
|
cc45995294b92d95319b4782750a3580cabdbc0c
| 95,109,781,586,511,000,000,000,000,000,000,000,000 | 4 |
virtio: out-of-bounds buffer write on invalid state load
CVE-2013-4151 QEMU 1.0 out-of-bounds buffer write in
virtio_load@hw/virtio/virtio.c
So we have this code since way back when:
num = qemu_get_be32(f);
for (i = 0; i < num; i++) {
vdev->vq[i].vring.num = qemu_get_be32(f);
array of vqs has size VIRTIO_PCI_QUEUE_MAX, so
on invalid input this will write beyond end of buffer.
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Michael Roth <[email protected]>
Signed-off-by: Juan Quintela <[email protected]>
|
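A minimal sketch of the missing check, with a made-up QUEUE_MAX and a plain FILE * standing in for QEMU's migration stream: the count read from untrusted input is validated against the array size before the loop writes anything.

#include <stdint.h>
#include <stdio.h>

#define QUEUE_MAX 64          /* stands in for VIRTIO_PCI_QUEUE_MAX */

struct vq { uint32_t num; };

static int load_queues(FILE *f, struct vq vq[QUEUE_MAX])
{
    uint32_t num;
    if (fread(&num, sizeof(num), 1, f) != 1)
        return -1;

    /* Reject invalid saved state instead of writing past the array. */
    if (num > QUEUE_MAX)
        return -1;

    for (uint32_t i = 0; i < num; i++) {
        if (fread(&vq[i].num, sizeof(vq[i].num), 1, f) != 1)
            return -1;
    }
    return 0;
}
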
asn1_der_decoding (asn1_node * element, const void *ider, int len,
char *errorDescription)
{
asn1_node node, p, p2, p3;
char temp[128];
int counter, len2, len3, len4, move, ris, tlen;
unsigned char class;
unsigned long tag;
int indefinite, result;
const unsigned char *der = ider;
node = *element;
if (errorDescription != NULL)
errorDescription[0] = 0;
if (node == NULL)
return ASN1_ELEMENT_NOT_FOUND;
if (node->type & CONST_OPTION)
{
result = ASN1_GENERIC_ERROR;
goto cleanup;
}
counter = 0;
move = DOWN;
p = node;
while (1)
{
ris = ASN1_SUCCESS;
if (move != UP)
{
if (p->type & CONST_SET)
{
p2 = _asn1_find_up (p);
len2 = _asn1_strtol (p2->value, NULL, 10);
if (len2 == -1)
{
if (!der[counter] && !der[counter + 1])
{
p = p2;
move = UP;
counter += 2;
continue;
}
}
else if (counter == len2)
{
p = p2;
move = UP;
continue;
}
else if (counter > len2)
{
result = ASN1_DER_ERROR;
goto cleanup;
}
p2 = p2->down;
while (p2)
{
if ((p2->type & CONST_SET) && (p2->type & CONST_NOT_USED))
{
ris =
extract_tag_der_recursive (p2, der + counter,
len - counter, &len2);
if (ris == ASN1_SUCCESS)
{
p2->type &= ~CONST_NOT_USED;
p = p2;
break;
}
}
p2 = p2->right;
}
if (p2 == NULL)
{
result = ASN1_DER_ERROR;
goto cleanup;
}
}
if ((p->type & CONST_OPTION) || (p->type & CONST_DEFAULT))
{
p2 = _asn1_find_up (p);
len2 = _asn1_strtol (p2->value, NULL, 10);
if (counter == len2)
{
if (p->right)
{
p2 = p->right;
move = RIGHT;
}
else
move = UP;
if (p->type & CONST_OPTION)
asn1_delete_structure (&p);
p = p2;
continue;
}
}
if (type_field (p->type) == ASN1_ETYPE_CHOICE)
{
while (p->down)
{
if (counter < len)
ris =
extract_tag_der_recursive (p->down, der + counter,
len - counter, &len2);
else
ris = ASN1_DER_ERROR;
if (ris == ASN1_SUCCESS)
{
delete_unneeded_choice_fields(p->down);
break;
}
else if (ris == ASN1_ERROR_TYPE_ANY)
{
result = ASN1_ERROR_TYPE_ANY;
goto cleanup;
}
else
{
p2 = p->down;
asn1_delete_structure (&p2);
}
}
if (p->down == NULL)
{
if (!(p->type & CONST_OPTION))
{
result = ASN1_DER_ERROR;
goto cleanup;
}
}
else if (type_field (p->type) != ASN1_ETYPE_CHOICE)
p = p->down;
}
if ((p->type & CONST_OPTION) || (p->type & CONST_DEFAULT))
{
p2 = _asn1_find_up (p);
len2 = _asn1_strtol (p2->value, NULL, 10);
if ((len2 != -1) && (counter > len2))
ris = ASN1_TAG_ERROR;
}
if (ris == ASN1_SUCCESS)
ris =
extract_tag_der_recursive (p, der + counter, len - counter, &len2);
if (ris != ASN1_SUCCESS)
{
if (p->type & CONST_OPTION)
{
p->type |= CONST_NOT_USED;
move = RIGHT;
}
else if (p->type & CONST_DEFAULT)
{
_asn1_set_value (p, NULL, 0);
move = RIGHT;
}
else
{
if (errorDescription != NULL)
_asn1_error_description_tag_error (p, errorDescription);
result = ASN1_TAG_ERROR;
goto cleanup;
}
}
else
counter += len2;
}
if (ris == ASN1_SUCCESS)
{
switch (type_field (p->type))
{
case ASN1_ETYPE_NULL:
if (der[counter])
{
result = ASN1_DER_ERROR;
goto cleanup;
}
counter++;
move = RIGHT;
break;
case ASN1_ETYPE_BOOLEAN:
if (der[counter++] != 1)
{
result = ASN1_DER_ERROR;
goto cleanup;
}
if (der[counter++] == 0)
_asn1_set_value (p, "F", 1);
else
_asn1_set_value (p, "T", 1);
move = RIGHT;
break;
case ASN1_ETYPE_INTEGER:
case ASN1_ETYPE_ENUMERATED:
len2 =
asn1_get_length_der (der + counter, len - counter, &len3);
if (len2 < 0)
{
result = ASN1_DER_ERROR;
goto cleanup;
}
_asn1_set_value (p, der + counter, len3 + len2);
counter += len3 + len2;
move = RIGHT;
break;
case ASN1_ETYPE_OBJECT_ID:
result =
_asn1_get_objectid_der (der + counter, len - counter, &len2,
temp, sizeof (temp));
if (result != ASN1_SUCCESS)
goto cleanup;
tlen = strlen (temp);
if (tlen > 0)
_asn1_set_value (p, temp, tlen + 1);
counter += len2;
move = RIGHT;
break;
case ASN1_ETYPE_GENERALIZED_TIME:
case ASN1_ETYPE_UTC_TIME:
result =
_asn1_get_time_der (der + counter, len - counter, &len2, temp,
sizeof (temp) - 1);
if (result != ASN1_SUCCESS)
goto cleanup;
tlen = strlen (temp);
if (tlen > 0)
_asn1_set_value (p, temp, tlen);
counter += len2;
move = RIGHT;
break;
case ASN1_ETYPE_OCTET_STRING:
len3 = len - counter;
result = _asn1_get_octet_string (der + counter, p, &len3);
if (result != ASN1_SUCCESS)
goto cleanup;
counter += len3;
move = RIGHT;
break;
case ASN1_ETYPE_GENERALSTRING:
case ASN1_ETYPE_NUMERIC_STRING:
case ASN1_ETYPE_IA5_STRING:
case ASN1_ETYPE_TELETEX_STRING:
case ASN1_ETYPE_PRINTABLE_STRING:
case ASN1_ETYPE_UNIVERSAL_STRING:
case ASN1_ETYPE_BMP_STRING:
case ASN1_ETYPE_UTF8_STRING:
case ASN1_ETYPE_VISIBLE_STRING:
case ASN1_ETYPE_BIT_STRING:
len2 =
asn1_get_length_der (der + counter, len - counter, &len3);
if (len2 < 0)
{
result = ASN1_DER_ERROR;
goto cleanup;
}
_asn1_set_value (p, der + counter, len3 + len2);
counter += len3 + len2;
move = RIGHT;
break;
case ASN1_ETYPE_SEQUENCE:
case ASN1_ETYPE_SET:
if (move == UP)
{
len2 = _asn1_strtol (p->value, NULL, 10);
_asn1_set_value (p, NULL, 0);
if (len2 == -1)
{ /* indefinite length method */
if (len - counter + 1 > 0)
{
if ((der[counter]) || der[counter + 1])
{
result = ASN1_DER_ERROR;
goto cleanup;
}
}
else
{
result = ASN1_DER_ERROR;
goto cleanup;
}
counter += 2;
}
else
{ /* definite length method */
if (len2 != counter)
{
result = ASN1_DER_ERROR;
goto cleanup;
}
}
move = RIGHT;
}
else
{ /* move==DOWN || move==RIGHT */
len3 =
asn1_get_length_der (der + counter, len - counter, &len2);
if (len3 < -1)
{
result = ASN1_DER_ERROR;
goto cleanup;
}
counter += len2;
if (len3 > 0)
{
_asn1_ltostr (counter + len3, temp);
tlen = strlen (temp);
if (tlen > 0)
_asn1_set_value (p, temp, tlen + 1);
move = DOWN;
}
else if (len3 == 0)
{
p2 = p->down;
while (p2)
{
if (type_field (p2->type) != ASN1_ETYPE_TAG)
{
p3 = p2->right;
asn1_delete_structure (&p2);
p2 = p3;
}
else
p2 = p2->right;
}
move = RIGHT;
}
else
{ /* indefinite length method */
_asn1_set_value (p, "-1", 3);
move = DOWN;
}
}
break;
case ASN1_ETYPE_SEQUENCE_OF:
case ASN1_ETYPE_SET_OF:
if (move == UP)
{
len2 = _asn1_strtol (p->value, NULL, 10);
if (len2 == -1)
{ /* indefinite length method */
if ((counter + 2) > len)
{
result = ASN1_DER_ERROR;
goto cleanup;
}
if ((der[counter]) || der[counter + 1])
{
_asn1_append_sequence_set (p);
p = p->down;
while (p->right)
p = p->right;
move = RIGHT;
continue;
}
_asn1_set_value (p, NULL, 0);
counter += 2;
}
else
{ /* definite length method */
if (len2 > counter)
{
_asn1_append_sequence_set (p);
p = p->down;
while (p->right)
p = p->right;
move = RIGHT;
continue;
}
_asn1_set_value (p, NULL, 0);
if (len2 != counter)
{
result = ASN1_DER_ERROR;
goto cleanup;
}
}
}
else
{ /* move==DOWN || move==RIGHT */
len3 =
asn1_get_length_der (der + counter, len - counter, &len2);
if (len3 < -1)
{
result = ASN1_DER_ERROR;
goto cleanup;
}
counter += len2;
if (len3)
{
if (len3 > 0)
{ /* definite length method */
_asn1_ltostr (counter + len3, temp);
tlen = strlen (temp);
if (tlen > 0)
_asn1_set_value (p, temp, tlen + 1);
}
else
{ /* indefinite length method */
_asn1_set_value (p, "-1", 3);
}
p2 = p->down;
while ((type_field (p2->type) == ASN1_ETYPE_TAG)
|| (type_field (p2->type) == ASN1_ETYPE_SIZE))
p2 = p2->right;
if (p2->right == NULL)
_asn1_append_sequence_set (p);
p = p2;
}
}
move = RIGHT;
break;
case ASN1_ETYPE_ANY:
if (asn1_get_tag_der
(der + counter, len - counter, &class, &len2,
&tag) != ASN1_SUCCESS)
{
result = ASN1_DER_ERROR;
goto cleanup;
}
if (counter + len2 > len)
{
result = ASN1_DER_ERROR;
goto cleanup;
}
len4 =
asn1_get_length_der (der + counter + len2,
len - counter - len2, &len3);
if (len4 < -1)
{
result = ASN1_DER_ERROR;
goto cleanup;
}
if (len4 != -1)
{
len2 += len4;
_asn1_set_value_lv (p, der + counter, len2 + len3);
counter += len2 + len3;
}
else
{ /* indefinite length */
		  /* Check indefinite length method in an EXPLICIT TAG */
if ((p->type & CONST_TAG) && (der[counter - 1] == 0x80))
indefinite = 1;
else
indefinite = 0;
len2 = len - counter;
result =
_asn1_get_indefinite_length_string (der + counter, &len2);
if (result != ASN1_SUCCESS)
goto cleanup;
_asn1_set_value_lv (p, der + counter, len2);
counter += len2;
/* Check if a couple of 0x00 are present due to an EXPLICIT TAG with
an indefinite length method. */
if (indefinite)
{
if (!der[counter] && !der[counter + 1])
{
counter += 2;
}
else
{
result = ASN1_DER_ERROR;
goto cleanup;
}
}
}
move = RIGHT;
break;
default:
move = (move == UP) ? RIGHT : DOWN;
break;
}
}
if (p == node && move != DOWN)
break;
if (move == DOWN)
{
if (p->down)
p = p->down;
else
move = RIGHT;
}
if ((move == RIGHT) && !(p->type & CONST_SET))
{
if (p->right)
p = p->right;
else
move = UP;
}
if (move == UP)
p = _asn1_find_up (p);
}
_asn1_delete_not_used (*element);
if (counter != len)
{
result = ASN1_DER_ERROR;
goto cleanup;
}
return ASN1_SUCCESS;
cleanup:
asn1_delete_structure (element);
return result;
}
| 1 |
[] |
libtasn1
|
154909136c12cfa5c60732b7210827dfb1ec6aee
| 115,604,692,679,076,540,000,000,000,000,000,000,000 | 532 |
More precise tracking of data.
|
bool operator()(const CPUDevice& d,
typename TTypes<float, 4>::ConstTensor grads,
typename TTypes<T, 4>::ConstTensor image,
typename TTypes<float, 2>::ConstTensor boxes,
typename TTypes<int32, 1>::ConstTensor box_index,
typename TTypes<float, 2>::Tensor grads_boxes) {
const int batch_size = image.dimension(0);
const int image_height = image.dimension(1);
const int image_width = image.dimension(2);
const int num_boxes = grads.dimension(0);
const int crop_height = grads.dimension(1);
const int crop_width = grads.dimension(2);
const int depth = grads.dimension(3);
grads_boxes.setZero();
for (int b = 0; b < num_boxes; ++b) {
const float y1 = boxes(b, 0);
const float x1 = boxes(b, 1);
const float y2 = boxes(b, 2);
const float x2 = boxes(b, 3);
const int32_t b_in = box_index(b);
if (!FastBoundsCheck(b_in, batch_size)) {
continue;
}
const float height_ratio =
(crop_height > 1)
? static_cast<float>(image_height - 1) / (crop_height - 1)
: 0;
const float width_ratio =
(crop_width > 1)
? static_cast<float>(image_width - 1) / (crop_width - 1)
: 0;
const float height_scale =
(crop_height > 1) ? (y2 - y1) * height_ratio : 0;
const float width_scale = (crop_width > 1) ? (x2 - x1) * width_ratio : 0;
for (int y = 0; y < crop_height; ++y) {
const float in_y = (crop_height > 1)
? y1 * (image_height - 1) + y * height_scale
: 0.5 * (y1 + y2) * (image_height - 1);
if (in_y < 0 || in_y > image_height - 1) {
continue;
}
const int top_y_index = floorf(in_y);
const int bottom_y_index = ceilf(in_y);
const float y_lerp = in_y - top_y_index;
for (int x = 0; x < crop_width; ++x) {
const float in_x = (crop_width > 1)
? x1 * (image_width - 1) + x * width_scale
: 0.5 * (x1 + x2) * (image_width - 1);
if (in_x < 0 || in_x > image_width - 1) {
continue;
}
const int left_x_index = floorf(in_x);
const int right_x_index = ceilf(in_x);
const float x_lerp = in_x - left_x_index;
for (int d = 0; d < depth; ++d) {
const float top_left(
static_cast<float>(image(b_in, top_y_index, left_x_index, d)));
const float top_right(
static_cast<float>(image(b_in, top_y_index, right_x_index, d)));
const float bottom_left(static_cast<float>(
image(b_in, bottom_y_index, left_x_index, d)));
const float bottom_right(static_cast<float>(
image(b_in, bottom_y_index, right_x_index, d)));
// Compute the image gradient.
float image_grad_y = (1 - x_lerp) * (bottom_left - top_left) +
x_lerp * (bottom_right - top_right);
float image_grad_x = (1 - y_lerp) * (top_right - top_left) +
y_lerp * (bottom_right - bottom_left);
// Modulate the image gradient with the incoming gradient.
const float top_grad = grads(b, y, x, d);
image_grad_y *= top_grad;
image_grad_x *= top_grad;
// dy1, dy2
if (crop_height > 1) {
grads_boxes(b, 0) +=
image_grad_y * (image_height - 1 - y * height_ratio);
grads_boxes(b, 2) += image_grad_y * (y * height_ratio);
} else {
grads_boxes(b, 0) += image_grad_y * 0.5 * (image_height - 1);
grads_boxes(b, 2) += image_grad_y * 0.5 * (image_height - 1);
}
// dx1, dx2
if (crop_width > 1) {
grads_boxes(b, 1) +=
image_grad_x * (image_width - 1 - x * width_ratio);
grads_boxes(b, 3) += image_grad_x * (x * width_ratio);
} else {
grads_boxes(b, 1) += image_grad_x * 0.5 * (image_width - 1);
grads_boxes(b, 3) += image_grad_x * 0.5 * (image_width - 1);
}
}
}
}
}
return true;
}
| 0 |
[
"CWE-190"
] |
tensorflow
|
7c1692bd417eb4f9b33ead749a41166d6080af85
| 306,788,366,940,198,140,000,000,000,000,000,000,000 | 105 |
PR #51732: Fix crash of tf.image.crop_and_resize when input is large number
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/51732
This PR is part of the effort in #46890 where
tf.image.crop_and_resize will crash if shape consists of large number.
Signed-off-by: Yong Tang <[email protected]>
Copybara import of the project:
--
c8d87055a56d8740d27ad8bdc74a7459ede6900e by Yong Tang <[email protected]>:
Fix crash of tf.image.crop_and_resize when input is large number
This PR is part of the effort in 46890 where
tf.image.crop_and_resize will crash if shape consists of large number.
Signed-off-by: Yong Tang <[email protected]>
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/51732 from yongtang:46890-tf.image.crop_and_resize c8d87055a56d8740d27ad8bdc74a7459ede6900e
PiperOrigin-RevId: 394109830
Change-Id: If049dad0844df9353722029ee95bc76819eda1f4
|
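A hedged, generic C sketch of the kind of guard that prevents such crashes: reject shapes whose element count would overflow a signed 64-bit counter before any buffer is sized or indexed. The real TensorFlow fix is in C++ and uses the framework's own shape checks; the function below is only illustrative.

#include <stdint.h>

/* Returns 1 and stores batch*height*width*depth if the product fits in
 * int64_t, 0 otherwise. */
static int checked_num_elements(int64_t batch, int64_t height,
                                int64_t width, int64_t depth, int64_t *out)
{
    const int64_t dims[4] = { batch, height, width, depth };
    int64_t n = 1;

    for (int i = 0; i < 4; i++) {
        if (dims[i] < 0)
            return 0;
        if (dims[i] != 0 && n > INT64_MAX / dims[i])
            return 0;                   /* would overflow */
        n *= dims[i];
    }
    *out = n;
    return 1;
}
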
static PHP_INI_MH(OnTypeLibFileUpdate)
{
FILE *typelib_file;
char *typelib_name_buffer;
char *strtok_buf = NULL;
int cached;
if (NULL == new_value || !new_value->val[0] || (typelib_file = VCWD_FOPEN(new_value->val, "r"))==NULL) {
return FAILURE;
}
typelib_name_buffer = (char *) emalloc(sizeof(char)*1024);
while (fgets(typelib_name_buffer, 1024, typelib_file)) {
ITypeLib *pTL;
char *typelib_name;
char *modifier, *ptr;
int mode = CONST_CS | CONST_PERSISTENT; /* CONST_PERSISTENT is ok here */
if (typelib_name_buffer[0]==';') {
continue;
}
typelib_name = php_strtok_r(typelib_name_buffer, "\r\n", &strtok_buf); /* get rid of newlines */
if (typelib_name == NULL) {
continue;
}
typelib_name = php_strtok_r(typelib_name, "#", &strtok_buf);
modifier = php_strtok_r(NULL, "#", &strtok_buf);
if (modifier != NULL) {
if (!strcmp(modifier, "cis") || !strcmp(modifier, "case_insensitive")) {
mode &= ~CONST_CS;
}
}
/* Remove leading/training white spaces on search_string */
while (isspace(*typelib_name)) {/* Ends on '\0' in worst case */
typelib_name ++;
}
ptr = typelib_name + strlen(typelib_name) - 1;
while ((ptr != typelib_name) && isspace(*ptr)) {
*ptr = '\0';
ptr--;
}
if ((pTL = php_com_load_typelib_via_cache(typelib_name, COMG(code_page), &cached)) != NULL) {
if (!cached) {
php_com_import_typelib(pTL, mode, COMG(code_page));
}
ITypeLib_Release(pTL);
}
}
efree(typelib_name_buffer);
fclose(typelib_file);
return SUCCESS;
}
| 0 |
[
"CWE-502"
] |
php-src
|
115ee49b0be12e3df7d2c7027609fbe1a1297e42
| 330,280,181,138,137,940,000,000,000,000,000,000,000 | 57 |
Fix #77177: Serializing or unserializing COM objects crashes
Firstly, we avoid returning NULL from the get_property handler, but
instead return an empty HashTable, which already prevents the crashes.
Secondly, since (de-)serialization obviously makes no sense for COM,
DOTNET and VARIANT objects (at least with the current implementation),
we prohibit it right away.
|
void * nedmalloc(size_t size) THROWSPEC { return nedpmalloc(0, size); }
| 0 |
[
"CWE-119",
"CWE-787"
] |
git
|
34fa79a6cde56d6d428ab0d3160cb094ebad3305
| 237,200,272,537,773,120,000,000,000,000,000,000,000 | 1 |
prefer memcpy to strcpy
When we already know the length of a string (e.g., because
we just malloc'd to fit it), it's nicer to use memcpy than
strcpy, as it makes it more obvious that we are not going to
overflow the buffer (because the size we pass matches the
size in the allocation).
This also eliminates calls to strcpy, which make auditing
the code base harder.
Signed-off-by: Jeff King <[email protected]>
Signed-off-by: Junio C Hamano <[email protected]>
|
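A minimal sketch of the pattern the message recommends: derive the copy length from the same value used for the allocation, so the memcpy() visibly cannot outgrow the buffer.

#include <stdlib.h>
#include <string.h>

/* Duplicate a string; the memcpy() length is the same value passed to
 * malloc(), including the NUL terminator. */
static char *dup_string(const char *s)
{
    size_t len = strlen(s) + 1;
    char *copy = malloc(len);
    if (copy != NULL)
        memcpy(copy, s, len);    /* instead of strcpy(copy, s) */
    return copy;
}

The same reasoning applies to any buffer whose size was just computed: reusing that size in the copy makes the bound auditable at a glance.
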
deref_function_name(
char_u **arg,
char_u **tofree,
evalarg_T *evalarg,
int verbose)
{
typval_T ref;
char_u *name = *arg;
int save_flags = 0;
ref.v_type = VAR_UNKNOWN;
if (evalarg != NULL)
{
// need to evaluate this to get an import, like in "a.Func"
save_flags = evalarg->eval_flags;
evalarg->eval_flags |= EVAL_EVALUATE;
}
if (eval9(arg, &ref, evalarg, FALSE) == FAIL)
{
dictitem_T *v;
// If <SID>VarName was used it would not be found, try another way.
v = find_var_also_in_script(name, NULL, FALSE);
if (v == NULL)
{
name = NULL;
goto theend;
}
copy_tv(&v->di_tv, &ref);
}
if (*skipwhite(*arg) != NUL)
{
if (verbose)
semsg(_(e_trailing_characters_str), *arg);
name = NULL;
}
else if (ref.v_type == VAR_FUNC && ref.vval.v_string != NULL)
{
name = ref.vval.v_string;
ref.vval.v_string = NULL;
*tofree = name;
}
else if (ref.v_type == VAR_PARTIAL && ref.vval.v_partial != NULL)
{
if (ref.vval.v_partial->pt_argc > 0
|| ref.vval.v_partial->pt_dict != NULL)
{
if (verbose)
emsg(_(e_cannot_use_partial_here));
name = NULL;
}
else
{
name = vim_strsave(partial_name(ref.vval.v_partial));
*tofree = name;
}
}
else
{
if (verbose)
semsg(_(e_not_callable_type_str), name);
name = NULL;
}
theend:
clear_tv(&ref);
if (evalarg != NULL)
evalarg->eval_flags = save_flags;
return name;
}
| 0 |
[
"CWE-476"
] |
vim
|
69082916c8b5d321545d60b9f5facad0a2dd5a4e
| 220,351,947,254,618,130,000,000,000,000,000,000,000 | 70 |
patch 9.0.0552: crash when using NUL in buffer that uses :source
Problem: Crash when using NUL in buffer that uses :source.
Solution: Don't get a next line when skipping over NL.
|
size_t RemoteIo::size() const
{
return (long) p_->size_;
}
| 0 |
[
"CWE-125"
] |
exiv2
|
6e3855aed7ba8bb4731fc4087ca7f9078b2f3d97
| 78,992,547,486,235,440,000,000,000,000,000,000,000 | 4 |
Fix https://github.com/Exiv2/exiv2/issues/55
|
size_t olm_pk_signature_length(void) {
return olm::encode_base64_length(ED25519_SIGNATURE_LENGTH);
}
| 0 |
[
"CWE-787"
] |
olm
|
ccc0d122ee1b4d5e5ca4ec1432086be17d5f901b
| 216,757,520,469,136,820,000,000,000,000,000,000,000 | 3 |
olm_pk_decrypt: Ensure inputs are of correct length.
|
BlockDriver *bdrv_find_format(const char *format_name)
{
BlockDriver *drv1;
QLIST_FOREACH(drv1, &bdrv_drivers, list) {
if (!strcmp(drv1->format_name, format_name)) {
return drv1;
}
}
return NULL;
}
| 0 |
[
"CWE-190"
] |
qemu
|
8f4754ede56e3f9ea3fd7207f4a7c4453e59285b
| 177,212,086,327,015,330,000,000,000,000,000,000,000 | 10 |
block: Limit request size (CVE-2014-0143)
Limiting the size of a single request to INT_MAX not only fixes a
direct integer overflow in bdrv_check_request() (which would only
trigger bad behaviour with ridiculously huge images, as in close to
2^64 bytes), but can also prevent overflows in all block drivers.
Signed-off-by: Kevin Wolf <[email protected]>
Reviewed-by: Max Reitz <[email protected]>
Signed-off-by: Stefan Hajnoczi <[email protected]>
|
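A hedged sketch of an INT_MAX-style request limit; SECTOR_SIZE and the function name are placeholders rather than QEMU's exact macros, but the idea is the same: bound the request before any driver performs int arithmetic on it.

#include <limits.h>
#include <stdint.h>

#define SECTOR_SIZE 512       /* placeholder for BDRV_SECTOR_SIZE */

/* Reject a request whose byte length would not fit in an int, and one
 * whose end offset would wrap a 64-bit sector number. */
static int check_request(int64_t sector_num, int nb_sectors)
{
    if (nb_sectors < 0 || nb_sectors > INT_MAX / SECTOR_SIZE)
        return -1;
    if (sector_num < 0 || sector_num > INT64_MAX - nb_sectors)
        return -1;
    return 0;
}
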
int BN_bn2lebinpad(const BIGNUM *a, unsigned char *to, int tolen)
{
int i;
BN_ULONG l;
bn_check_top(a);
i = BN_num_bytes(a);
if (tolen < i)
return -1;
/* Add trailing zeroes if necessary */
if (tolen > i)
memset(to + i, 0, tolen - i);
to += i;
while (i--) {
l = a->d[i / BN_BYTES];
to--;
*to = (unsigned char)(l >> (8 * (i % BN_BYTES))) & 0xff;
}
return tolen;
}
| 0 |
[
"CWE-310"
] |
openssl
|
aab7c770353b1dc4ba045938c8fb446dd1c4531e
| 61,507,084,604,781,420,000,000,000,000,000,000,000 | 19 |
Elliptic curve scalar multiplication with timing attack defenses
Co-authored-by: Nicola Tuveri <[email protected]>
Co-authored-by: Cesar Pereida Garcia <[email protected]>
Co-authored-by: Sohaib ul Hassan <[email protected]>
Reviewed-by: Andy Polyakov <[email protected]>
Reviewed-by: Matt Caswell <[email protected]>
(Merged from https://github.com/openssl/openssl/pull/6009)
(cherry picked from commit 40e48e54582e46c1a01e184ecf5bd31f4f7f8294)
|
int dccp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
size_t len, int nonblock, int flags, int *addr_len)
{
const struct dccp_hdr *dh;
long timeo;
lock_sock(sk);
if (sk->sk_state == DCCP_LISTEN) {
len = -ENOTCONN;
goto out;
}
timeo = sock_rcvtimeo(sk, nonblock);
do {
struct sk_buff *skb = skb_peek(&sk->sk_receive_queue);
if (skb == NULL)
goto verify_sock_status;
dh = dccp_hdr(skb);
switch (dh->dccph_type) {
case DCCP_PKT_DATA:
case DCCP_PKT_DATAACK:
goto found_ok_skb;
case DCCP_PKT_CLOSE:
case DCCP_PKT_CLOSEREQ:
if (!(flags & MSG_PEEK))
dccp_finish_passive_close(sk);
/* fall through */
case DCCP_PKT_RESET:
dccp_pr_debug("found fin (%s) ok!\n",
dccp_packet_name(dh->dccph_type));
len = 0;
goto found_fin_ok;
default:
dccp_pr_debug("packet_type=%s\n",
dccp_packet_name(dh->dccph_type));
sk_eat_skb(sk, skb, false);
}
verify_sock_status:
if (sock_flag(sk, SOCK_DONE)) {
len = 0;
break;
}
if (sk->sk_err) {
len = sock_error(sk);
break;
}
if (sk->sk_shutdown & RCV_SHUTDOWN) {
len = 0;
break;
}
if (sk->sk_state == DCCP_CLOSED) {
if (!sock_flag(sk, SOCK_DONE)) {
/* This occurs when user tries to read
* from never connected socket.
*/
len = -ENOTCONN;
break;
}
len = 0;
break;
}
if (!timeo) {
len = -EAGAIN;
break;
}
if (signal_pending(current)) {
len = sock_intr_errno(timeo);
break;
}
sk_wait_data(sk, &timeo);
continue;
found_ok_skb:
if (len > skb->len)
len = skb->len;
else if (len < skb->len)
msg->msg_flags |= MSG_TRUNC;
if (skb_copy_datagram_iovec(skb, 0, msg->msg_iov, len)) {
/* Exception. Bailout! */
len = -EFAULT;
break;
}
if (flags & MSG_TRUNC)
len = skb->len;
found_fin_ok:
if (!(flags & MSG_PEEK))
sk_eat_skb(sk, skb, false);
break;
} while (1);
out:
release_sock(sk);
return len;
}
| 1 |
[] |
linux
|
7bced397510ab569d31de4c70b39e13355046387
| 222,684,846,263,845,800,000,000,000,000,000,000,000 | 105 |
net_dma: simple removal
Per commit "77873803363c net_dma: mark broken" net_dma is no longer used
and there is no plan to fix it.
This is the mechanical removal of bits in CONFIG_NET_DMA ifdef guards.
Reverting the remainder of the net_dma induced changes is deferred to
subsequent patches.
Marked for stable due to Roman's report of a memory leak in
dma_pin_iovec_pages():
https://lkml.org/lkml/2014/9/3/177
Cc: Dave Jiang <[email protected]>
Cc: Vinod Koul <[email protected]>
Cc: David Whipple <[email protected]>
Cc: Alexander Duyck <[email protected]>
Cc: <[email protected]>
Reported-by: Roman Gushchin <[email protected]>
Acked-by: David S. Miller <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
|
dictionary * iniparser_load(const char * ininame)
{
FILE * in ;
char line [ASCIILINESZ+1] ;
char section [ASCIILINESZ+1] ;
char key [ASCIILINESZ+1] ;
char tmp [(ASCIILINESZ * 2) + 1] ;
char val [ASCIILINESZ+1] ;
int last=0 ;
int len ;
int lineno=0 ;
int errs=0;
dictionary * dict ;
if ((in=fopen(ininame, "r"))==NULL) {
fprintf(stderr, "iniparser: cannot open %s\n", ininame);
return NULL ;
}
dict = dictionary_new(0) ;
if (!dict) {
fclose(in);
return NULL ;
}
memset(line, 0, ASCIILINESZ);
memset(section, 0, ASCIILINESZ);
memset(key, 0, ASCIILINESZ);
memset(val, 0, ASCIILINESZ);
last=0 ;
while (fgets(line+last, ASCIILINESZ-last, in)!=NULL) {
lineno++ ;
len = (int)strlen(line)-1;
if (len<=0)
continue;
/* Safety check against buffer overflows */
if (line[len]!='\n' && !feof(in)) {
fprintf(stderr,
"iniparser: input line too long in %s (%d)\n",
ininame,
lineno);
dictionary_del(dict);
fclose(in);
return NULL ;
}
/* Get rid of \n and spaces at end of line */
while ((len>=0) &&
((line[len]=='\n') || (isspace(line[len])))) {
line[len]=0 ;
len-- ;
}
if (len < 0) { /* Line was entirely \n and/or spaces */
len = 0;
}
/* Detect multi-line */
if (line[len]=='\\') {
/* Multi-line value */
last=len ;
continue ;
} else {
last=0 ;
}
switch (iniparser_line(line, section, key, val)) {
case LINE_EMPTY:
case LINE_COMMENT:
break ;
case LINE_SECTION:
errs = dictionary_set(dict, section, NULL);
break ;
case LINE_VALUE:
sprintf(tmp, "%s:%s", section, key);
errs = dictionary_set(dict, tmp, val) ;
break ;
case LINE_ERROR:
fprintf(stderr, "iniparser: syntax error in %s (%d):\n",
ininame,
lineno);
fprintf(stderr, "-> %s\n", line);
errs++ ;
break;
default:
break ;
}
memset(line, 0, ASCIILINESZ);
last=0;
if (errs<0) {
fprintf(stderr, "iniparser: memory allocation failure\n");
break ;
}
}
if (errs) {
dictionary_del(dict);
dict = NULL ;
}
fclose(in);
return dict ;
}
| 0 |
[] |
iniparser
|
4f870752abbb756911d7b11405d49e9769d082bd
| 121,284,349,865,156,460,000,000,000,000,000,000,000 | 105 |
Fix #68 when reading file with only \0 char
|
static int is_accepting_streams(h2_proxy_session *session)
{
switch (session->state) {
case H2_PROXYS_ST_IDLE:
case H2_PROXYS_ST_BUSY:
case H2_PROXYS_ST_WAIT:
return 1;
default:
return 0;
}
}
| 0 |
[
"CWE-770"
] |
mod_h2
|
dd05d49abe0f67512ce9ed5ba422d7711effecfb
| 62,914,385,516,494,030,000,000,000,000,000,000,000 | 11 |
* fixes Timeout vs. KeepAliveTimeout behaviour, see PR 63534 (for trunk now,
mpm event backport to 2.4.x up for vote).
* Fixes stream cleanup when connection throttling is in place.
* Counts stream resets by client on streams initiated by client as cause
for connection throttling.
* Header length checks are now logged similar to HTTP/1.1 protocol handler (thanks @mkaufmann)
* Header length is checked also on the merged value from several header instances
and results in a 431 response.
|
static int StreamTcpTest16 (void)
{
Packet *p = SCMalloc(SIZE_OF_PACKET);
if (unlikely(p == NULL))
return 0;
Flow f;
ThreadVars tv;
StreamTcpThread stt;
TCPHdr tcph;
uint8_t payload[4];
struct in_addr addr;
IPV4Hdr ipv4h;
char os_policy_name[10] = "windows";
const char *ip_addr;
PacketQueue pq;
memset(&pq,0,sizeof(PacketQueue));
memset(p, 0, SIZE_OF_PACKET);
memset (&f, 0, sizeof(Flow));
memset(&tv, 0, sizeof (ThreadVars));
memset(&stt, 0, sizeof (StreamTcpThread));
memset(&tcph, 0, sizeof (TCPHdr));
memset(&addr, 0, sizeof(addr));
memset(&ipv4h, 0, sizeof(ipv4h));
FLOW_INITIALIZE(&f);
p->flow = &f;
int ret = 0;
StreamTcpUTInit(&stt.ra_ctx);
/* Load the config string in to parser */
ConfCreateContextBackup();
ConfInit();
ConfYamlLoadString(dummy_conf_string1, strlen(dummy_conf_string1));
/* Get the IP address as string and add it to Host info tree for lookups */
ip_addr = StreamTcpParseOSPolicy(os_policy_name);
SCHInfoAddHostOSInfo(os_policy_name, ip_addr, -1);
strlcpy(os_policy_name, "linux\0", sizeof(os_policy_name));
ip_addr = StreamTcpParseOSPolicy(os_policy_name);
SCHInfoAddHostOSInfo(os_policy_name, ip_addr, -1);
addr.s_addr = inet_addr("192.168.0.1");
tcph.th_win = htons(5480);
tcph.th_seq = htonl(10);
tcph.th_ack = htonl(20);
tcph.th_flags = TH_ACK|TH_PUSH;
p->tcph = &tcph;
p->dst.family = AF_INET;
p->dst.address.address_un_data32[0] = addr.s_addr;
p->ip4h = &ipv4h;
StreamTcpCreateTestPacket(payload, 0x41, 3, sizeof(payload)); /*AAA*/
p->payload = payload;
p->payload_len = 3;
FLOWLOCK_WRLOCK(&f);
if (StreamTcpPacket(&tv, p, &stt, &pq) == -1)
goto end;
p->tcph->th_seq = htonl(20);
p->tcph->th_ack = htonl(13);
p->tcph->th_flags = TH_ACK|TH_PUSH;
p->flowflags = FLOW_PKT_TOCLIENT;
StreamTcpCreateTestPacket(payload, 0x42, 3, sizeof(payload)); /*BBB*/
p->payload = payload;
p->payload_len = 3;
if (StreamTcpPacket(&tv, p, &stt, &pq) == -1)
goto end;
p->tcph->th_seq = htonl(15);
p->tcph->th_ack = htonl(23);
p->tcph->th_flags = TH_ACK|TH_PUSH;
p->flowflags = FLOW_PKT_TOSERVER;
StreamTcpCreateTestPacket(payload, 0x43, 3, sizeof(payload)); /*CCC*/
p->payload = payload;
p->payload_len = 3;
if (StreamTcpPacket(&tv, p, &stt, &pq) == -1)
goto end;
p->tcph->th_seq = htonl(14);
p->tcph->th_ack = htonl(23);
p->tcph->th_flags = TH_ACK|TH_PUSH;
p->flowflags = FLOW_PKT_TOSERVER;
StreamTcpCreateTestPacket(payload, 0x43, 3, sizeof(payload)); /*CCC*/
p->payload = payload;
p->payload_len = 3;
if (StreamTcpPacket(&tv, p, &stt, &pq) == -1)
goto end;
addr.s_addr = inet_addr("192.168.1.1");
p->tcph->th_seq = htonl(25);
p->tcph->th_ack = htonl(13);
p->tcph->th_flags = TH_ACK|TH_PUSH;
p->flowflags = FLOW_PKT_TOCLIENT;
p->dst.address.address_un_data32[0] = addr.s_addr;
StreamTcpCreateTestPacket(payload, 0x44, 3, sizeof(payload)); /*DDD*/
p->payload = payload;
p->payload_len = 3;
if (StreamTcpPacket(&tv, p, &stt, &pq) == -1)
goto end;
p->tcph->th_seq = htonl(24);
p->tcph->th_ack = htonl(13);
p->tcph->th_flags = TH_ACK|TH_PUSH;
p->flowflags = FLOW_PKT_TOCLIENT;
StreamTcpCreateTestPacket(payload, 0x44, 3, sizeof(payload)); /*DDD*/
p->payload = payload;
p->payload_len = 3;
if (StreamTcpPacket(&tv, p, &stt, &pq) == -1)
goto end;
if (stream_config.midstream != TRUE) {
ret = 1;
goto end;
}
if (((TcpSession *)(p->flow->protoctx))->state != TCP_ESTABLISHED)
goto end;
if (((TcpSession *)(p->flow->protoctx))->client.next_seq != 13 &&
((TcpSession *)(p->flow->protoctx))->server.next_seq != 23) {
printf("failed in next_seq match client.next_seq %"PRIu32""
" server.next_seq %"PRIu32"\n",
((TcpSession *)(p->flow->protoctx))->client.next_seq,
((TcpSession *)(p->flow->protoctx))->server.next_seq);
goto end;
}
if (((TcpSession *)(p->flow->protoctx))->client.os_policy !=
OS_POLICY_LINUX && ((TcpSession *)
(p->flow->protoctx))->server.os_policy != OS_POLICY_WINDOWS)
{
printf("failed in setting up OS policy, client.os_policy: %"PRIu8""
" should be %"PRIu8" and server.os_policy: %"PRIu8""
" should be %"PRIu8"\n", ((TcpSession *)
(p->flow->protoctx))->client.os_policy, (uint8_t)OS_POLICY_LINUX,
((TcpSession *)(p->flow->protoctx))->server.os_policy,
(uint8_t)OS_POLICY_WINDOWS);
goto end;
}
StreamTcpSessionPktFree(p);
ret = 1;
end:
ConfDeInit();
ConfRestoreContextBackup();
FLOWLOCK_UNLOCK(&f);
SCFree(p);
FLOW_DESTROY(&f);
StreamTcpUTDeinit(stt.ra_ctx);
return ret;
}
| 0 |
[] |
suricata
|
843d0b7a10bb45627f94764a6c5d468a24143345
| 2,103,765,273,083,562,300,000,000,000,000,000,000 | 161 |
stream: support RST getting lost/ignored
In case of a valid RST on a SYN, the state is switched to 'TCP_CLOSED'.
However, the target of the RST may not have received it, or may not
have accepted it. Also, the RST may have been injected, so the supposed
sender may not actually be aware of the RST that was sent in it's name.
In this case the previous behavior was to switch the state to CLOSED and
accept no further TCP updates or stream reassembly.
This patch changes this. It still switches the state to CLOSED, as this
is by far the most likely to be correct. However, it will reconsider
the state if the receiver continues to talk.
To do this on each state change the previous state will be recorded in
TcpSession::pstate. If a non-RST packet is received after a RST, this
TcpSession::pstate is used to try to continue the conversation.
If the (supposed) sender of the RST is also continuing the conversation
as normal, it's highly likely it didn't send the RST. In this case
a stream event is generated.
Ticket: #2501
Reported-By: Kirill Shipulin
|
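A hedged sketch of the pstate idea from the message above, with the session structure and state set reduced to the minimum; Suricata's real transition handling and stream-event generation are omitted.

enum tcp_state { TCP_NONE, TCP_SYN_SENT, TCP_ESTABLISHED, TCP_CLOSED };

struct tcp_session {
    enum tcp_state state;
    enum tcp_state pstate;      /* state before the last transition */
};

static void set_state(struct tcp_session *ssn, enum tcp_state next)
{
    ssn->pstate = ssn->state;   /* remember where we came from */
    ssn->state = next;
}

/* If both sides keep talking after a RST moved the session to CLOSED,
 * the RST was probably lost, ignored or injected: fall back to the
 * previous state (a real implementation would also raise an event). */
static void on_post_rst_packet(struct tcp_session *ssn)
{
    if (ssn->state == TCP_CLOSED && ssn->pstate != TCP_NONE)
        ssn->state = ssn->pstate;
}
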
static int ntop_load_dump_prefs(lua_State* vm) {
NetworkInterfaceView *ntop_interface = getCurrentInterface(vm);
ntop->getTrace()->traceEvent(TRACE_INFO, "%s() called", __FUNCTION__);
ntop_interface->loadDumpPrefs();
return(CONST_LUA_OK);
}
| 0 |
[
"CWE-254"
] |
ntopng
|
2e0620be3410f5e22c9aa47e261bc5a12be692c6
| 93,155,109,667,392,370,000,000,000,000,000,000,000 | 8 |
Added security fix to avoid escalating privileges to non-privileged users
Many thanks to Dolev Farhi for reporting it
|
static const unsigned char *sha1_access(size_t pos, void *table)
{
struct pack_idx_entry **index = table;
return index[pos]->sha1;
}
| 0 |
[
"CWE-119",
"CWE-787"
] |
git
|
de1e67d0703894cb6ea782e36abb63976ab07e60
| 195,194,752,097,010,730,000,000,000,000,000,000,000 | 5 |
list-objects: pass full pathname to callbacks
When we find a blob at "a/b/c", we currently pass this to
our show_object_fn callbacks as two components: "a/b/" and
"c". Callbacks which want the full value then call
path_name(), which concatenates the two. But this is an
inefficient interface; the path is a strbuf, and we could
simply append "c" to it temporarily, then roll back the
length, without creating a new copy.
So we could improve this by teaching the callsites of
path_name() this trick (and there are only 3). But we can
also notice that no callback actually cares about the
broken-down representation, and simply pass each callback
the full path "a/b/c" as a string. The callback code becomes
even simpler, then, as we do not have to worry about freeing
an allocated buffer, nor rolling back our modification to
the strbuf.
This is theoretically less efficient, as some callbacks
would not bother to format the final path component. But in
practice this is not measurable. Since we use the same
strbuf over and over, our work to grow it is amortized, and
we really only pay to memcpy a few bytes.
Signed-off-by: Jeff King <[email protected]>
Signed-off-by: Junio C Hamano <[email protected]>
|
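A minimal sketch of the append-then-roll-back trick the message describes, using a toy fixed-size buffer in place of git's strbuf API.

#include <string.h>

struct buf {
    char data[4096];
    size_t len;
};

static void buf_add(struct buf *b, const char *s)
{
    size_t n = strlen(s);
    if (b->len + n < sizeof(b->data)) {
        memcpy(b->data + b->len, s, n + 1);   /* copies the NUL too */
        b->len += n;
    }
}

/* Temporarily extend the path with one component, hand the full string
 * to the callback, then roll the length back instead of copying. */
static void with_full_path(struct buf *path, const char *component,
                           void (*show)(const char *))
{
    size_t saved = path->len;
    buf_add(path, component);
    show(path->data);
    path->len = saved;               /* roll back */
    path->data[saved] = '\0';
}
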
static inline void update_cgrp_time_from_cpuctx(struct perf_cpu_context *cpuctx)
{
}
| 0 |
[
"CWE-703",
"CWE-189"
] |
linux
|
8176cced706b5e5d15887584150764894e94e02f
| 70,913,845,147,926,730,000,000,000,000,000,000,000 | 3 |
perf: Treat attr.config as u64 in perf_swevent_init()
Trinity discovered that we fail to check all 64 bits of
attr.config passed by user space, resulting to out-of-bounds
access of the perf_swevent_enabled array in
sw_perf_event_destroy().
Introduced in commit b0a873ebb ("perf: Register PMU
implementations").
Signed-off-by: Tommi Rantala <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: [email protected]
Cc: Paul Mackerras <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
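A hedged sketch of the fix's principle: compare the user-supplied value as a full 64-bit integer before narrowing it to an array index. The array and its size below are placeholders, not perf's real tables.

#include <stdint.h>

#define NR_EVENTS 16u

static int event_enabled[NR_EVENTS];

/* 'config' arrives from user space as a u64. Checking it as u64 means
 * the high 32 bits cannot be silently truncated away before the bounds
 * check, which is what allowed the out-of-bounds access. */
static int event_init(uint64_t config)
{
    if (config >= NR_EVENTS)
        return -1;                  /* reject before narrowing */

    unsigned int id = (unsigned int)config;   /* provably in range now */
    event_enabled[id]++;
    return 0;
}
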
get_compl_len(void)
{
int off = (int)curwin->w_cursor.col - (int)compl_col;
if (off < 0)
return 0;
return off;
}
| 0 |
[
"CWE-125"
] |
vim
|
f12129f1714f7d2301935bb21d896609bdac221c
| 29,735,961,103,781,857,000,000,000,000,000,000,000 | 8 |
patch 9.0.0020: with some completion reading past end of string
Problem: With some completion reading past end of string.
Solution: Check the length of the string.
|
static const char *cmd_audit_log_relevant_status(cmd_parms *cmd, void *_dcfg,
const char *p1)
{
directory_config *dcfg = _dcfg;
dcfg->auditlog_relevant_regex = msc_pregcomp(cmd->pool, p1, PCRE_DOTALL, NULL, NULL);
if (dcfg->auditlog_relevant_regex == NULL) {
return apr_psprintf(cmd->pool, "ModSecurity: Invalid regular expression: %s", p1);
}
return NULL;
}
| 0 |
[
"CWE-20",
"CWE-611"
] |
ModSecurity
|
d4d80b38aa85eccb26e3c61b04d16e8ca5de76fe
| 31,864,758,016,586,020,000,000,000,000,000,000,000 | 12 |
Added SecXmlExternalEntity
|
int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
struct kvm_dirty_log *log)
{
int r;
struct kvm_memory_slot *memslot;
unsigned long n, nr_dirty_pages;
mutex_lock(&kvm->slots_lock);
r = -EINVAL;
if (log->slot >= KVM_MEMORY_SLOTS)
goto out;
memslot = id_to_memslot(kvm->memslots, log->slot);
r = -ENOENT;
if (!memslot->dirty_bitmap)
goto out;
n = kvm_dirty_bitmap_bytes(memslot);
nr_dirty_pages = memslot->nr_dirty_pages;
/* If nothing is dirty, don't bother messing with page tables. */
if (nr_dirty_pages) {
struct kvm_memslots *slots, *old_slots;
unsigned long *dirty_bitmap, *dirty_bitmap_head;
dirty_bitmap = memslot->dirty_bitmap;
dirty_bitmap_head = memslot->dirty_bitmap_head;
if (dirty_bitmap == dirty_bitmap_head)
dirty_bitmap_head += n / sizeof(long);
memset(dirty_bitmap_head, 0, n);
r = -ENOMEM;
slots = kmemdup(kvm->memslots, sizeof(*kvm->memslots), GFP_KERNEL);
if (!slots)
goto out;
memslot = id_to_memslot(slots, log->slot);
memslot->nr_dirty_pages = 0;
memslot->dirty_bitmap = dirty_bitmap_head;
update_memslots(slots, NULL);
old_slots = kvm->memslots;
rcu_assign_pointer(kvm->memslots, slots);
synchronize_srcu_expedited(&kvm->srcu);
kfree(old_slots);
write_protect_slot(kvm, memslot, dirty_bitmap, nr_dirty_pages);
r = -EFAULT;
if (copy_to_user(log->dirty_bitmap, dirty_bitmap, n))
goto out;
} else {
r = -EFAULT;
if (clear_user(log->dirty_bitmap, n))
goto out;
}
r = 0;
out:
mutex_unlock(&kvm->slots_lock);
return r;
}
| 0 |
[] |
kvm
|
0769c5de24621141c953fbe1f943582d37cb4244
| 192,537,544,699,657,100,000,000,000,000,000,000,000 | 63 |
KVM: x86: extend "struct x86_emulate_ops" with "get_cpuid"
In order to be able to perform checks on CPU-specific properties
within the emulator, the function "get_cpuid" is introduced.
With "get_cpuid" it is possible to virtually call the guest's
"cpuid" opcode without changing the VM's context.
[mtosatti: cleanup/beautify code]
Signed-off-by: Stephan Baerwolf <[email protected]>
Signed-off-by: Marcelo Tosatti <[email protected]>
|
int __split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
unsigned long addr, int new_below)
{
struct vm_area_struct *new;
int err;
if (is_vm_hugetlb_page(vma) && (addr &
~(huge_page_mask(hstate_vma(vma)))))
return -EINVAL;
new = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
if (!new)
return -ENOMEM;
/* most fields are the same, copy all, and then fixup */
*new = *vma;
INIT_LIST_HEAD(&new->anon_vma_chain);
if (new_below)
new->vm_end = addr;
else {
new->vm_start = addr;
new->vm_pgoff += ((addr - vma->vm_start) >> PAGE_SHIFT);
}
err = vma_dup_policy(vma, new);
if (err)
goto out_free_vma;
err = anon_vma_clone(new, vma);
if (err)
goto out_free_mpol;
if (new->vm_file)
get_file(new->vm_file);
if (new->vm_ops && new->vm_ops->open)
new->vm_ops->open(new);
if (new_below)
err = vma_adjust(vma, addr, vma->vm_end, vma->vm_pgoff +
((addr - new->vm_start) >> PAGE_SHIFT), new);
else
err = vma_adjust(vma, vma->vm_start, addr, vma->vm_pgoff, new);
/* Success. */
if (!err)
return 0;
/* Clean everything up if vma_adjust failed. */
if (new->vm_ops && new->vm_ops->close)
new->vm_ops->close(new);
if (new->vm_file)
fput(new->vm_file);
unlink_anon_vmas(new);
out_free_mpol:
mpol_put(vma_policy(new));
out_free_vma:
kmem_cache_free(vm_area_cachep, new);
return err;
}
| 0 |
[
"CWE-119"
] |
linux
|
1be7107fbe18eed3e319a6c3e83c78254b693acb
| 170,903,807,571,604,930,000,000,000,000,000,000,000 | 62 |
mm: larger stack guard gap, between vmas
Stack guard page is a useful feature to reduce a risk of stack smashing
into a different mapping. We have been using a single page gap which
is sufficient to prevent having stack adjacent to a different mapping.
But this seems to be insufficient in the light of the stack usage in
userspace. E.g. glibc uses as large as 64kB alloca() in many commonly
used functions. Others use constructs like gid_t buffer[NGROUPS_MAX]
which is 256kB or stack strings with MAX_ARG_STRLEN.
This will become especially dangerous for suid binaries and the default
no limit for the stack size limit because those applications can be
tricked to consume a large portion of the stack and a single glibc call
could jump over the guard page. These attacks are not theoretical,
unfortunately.
Make those attacks less probable by increasing the stack guard gap
to 1MB (on systems with 4k pages; but make it depend on the page size
because systems with larger base pages might cap stack allocations in
the PAGE_SIZE units) which should cover larger alloca() and VLA stack
allocations. It is obviously not a full fix because the problem is
somehow inherent, but it should reduce attack space a lot.
One could argue that the gap size should be configurable from userspace,
but that can be done later when somebody finds that the new 1MB is wrong
for some special case applications. For now, add a kernel command line
option (stack_guard_gap) to specify the stack gap size (in page units).
Implementation wise, first delete all the old code for stack guard page:
because although we could get away with accounting one extra page in a
stack vma, accounting a larger gap can break userspace - case in point,
a program run with "ulimit -S -v 20000" failed when the 1MB gap was
counted for RLIMIT_AS; similar problems could come with RLIMIT_MLOCK
and strict non-overcommit mode.
Instead of keeping gap inside the stack vma, maintain the stack guard
gap as a gap between vmas: using vm_start_gap() in place of vm_start
(or vm_end_gap() in place of vm_end if VM_GROWSUP) in just those few
places which need to respect the gap - mainly arch_get_unmapped_area(),
and the vma tree's subtree_gap support for that.
Original-patch-by: Oleg Nesterov <[email protected]>
Original-patch-by: Michal Hocko <[email protected]>
Signed-off-by: Hugh Dickins <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Tested-by: Helge Deller <[email protected]> # parisc
Signed-off-by: Linus Torvalds <[email protected]>
|
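The fix keeps the guard gap between vmas rather than inside the stack vma, and has callers use vm_start_gap()/vm_end_gap() instead of vm_start/vm_end. The sketch below models only that helper in plain userspace C with a simplified vma struct; the 1 MB default and the page-unit stack_guard_gap override follow the description above, but the types and layout are stand-ins, not the kernel's.
#include <stdint.h>
#include <stdio.h>

#define VM_GROWSDOWN 0x1u

/* Simplified stand-in for the kernel's vm_area_struct. */
struct vma {
    uint64_t vm_start;
    uint64_t vm_end;
    unsigned int vm_flags;
};

/* Guard gap in bytes: 256 pages of 4 KB = 1 MB, overridable in the real
 * kernel via the stack_guard_gap= command line option (in page units). */
static uint64_t stack_guard_gap = 256ULL << 12;

/* Effective start of a vma once the guard gap is accounted for: code that
 * searches for free address space uses this instead of vm_start, so the gap
 * lives *between* vmas rather than being charged to the stack vma itself. */
static uint64_t vm_start_gap(const struct vma *vma)
{
    uint64_t start = vma->vm_start;

    if (vma->vm_flags & VM_GROWSDOWN) {
        if (start > stack_guard_gap)
            start -= stack_guard_gap;
        else
            start = 0;
    }
    return start;
}

int main(void)
{
    struct vma stack = { 0x7f0000100000ULL, 0x7f0000200000ULL, VM_GROWSDOWN };

    printf("vm_start     = %#llx\n", (unsigned long long)stack.vm_start);
    printf("vm_start_gap = %#llx\n", (unsigned long long)vm_start_gap(&stack));
    return 0;
}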
ruby_glob0(const char *path, int fd, const char *base, int flags,
const ruby_glob_funcs_t *funcs, VALUE arg,
rb_encoding *enc)
{
struct glob_pattern *list;
const char *root, *start;
char *buf;
size_t n, baselen = 0;
int status, dirsep = FALSE;
start = root = path;
if (*root == '{') {
struct push_glob0_args args;
args.fd = fd;
args.base = base;
args.flags = flags;
args.funcs = funcs;
args.arg = arg;
return ruby_brace_expand(path, flags, push_glob0_caller, (VALUE)&args, enc, Qfalse);
}
flags |= FNM_SYSCASE;
#if defined DOSISH
root = rb_enc_path_skip_prefix(root, root + strlen(root), enc);
#endif
if (*root == '/') root++;
n = root - start;
if (!n && base) {
n = strlen(base);
baselen = n;
start = base;
dirsep = TRUE;
}
buf = GLOB_ALLOC_N(char, n + 1);
if (!buf) return -1;
MEMCPY(buf, start, char, n);
buf[n] = '\0';
list = glob_make_pattern(root, root + strlen(root), flags, enc);
if (!list) {
GLOB_FREE(buf);
return -1;
}
status = glob_helper(fd, buf, baselen, n-baselen, dirsep,
path_unknown, &list, &list + 1,
flags, funcs, arg, enc);
glob_free_pattern(list);
GLOB_FREE(buf);
return status;
}
| 0 |
[] |
ruby
|
a0a2640b398cffd351f87d3f6243103add66575b
| 75,933,004,547,874,000,000,000,000,000,000,000,000 | 54 |
Fix for wrong fnmatch pattern
* dir.c (file_s_fnmatch): ensure that pattern does not contain a
NUL character. https://hackerone.com/reports/449617
|
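The fix referenced above rejects patterns containing an embedded NUL before they reach the C string APIs, since fnmatch()/glob() would otherwise silently match only the prefix up to the first '\0'. A hedged, self-contained C sketch of that check; safe_fnmatch is an illustrative wrapper, not Ruby's actual code.
#include <fnmatch.h>
#include <stdio.h>
#include <string.h>

/* Reject patterns with an embedded NUL: a length-carrying string handed to a
 * C string API would be silently truncated at the first '\0', so the rest of
 * the attacker-supplied pattern would never take part in the match. */
static int safe_fnmatch(const char *pattern, size_t pattern_len,
                        const char *name, int flags)
{
    if (memchr(pattern, '\0', pattern_len) != NULL)
        return -1;  /* refuse instead of matching only a prefix */
    return fnmatch(pattern, name, flags);
}

int main(void)
{
    const char evil[] = "quack\0*";  /* 7 pattern bytes with a NUL inside */

    printf("%d\n", safe_fnmatch(evil, sizeof(evil) - 1, "quack.exe", 0));
    printf("%d\n", safe_fnmatch("quack*", 6, "quack.exe", 0));
    return 0;
}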
keystr_from_pk_with_sub (PKT_public_key *main_pk, PKT_public_key *sub_pk)
{
keyid_from_pk (main_pk, NULL);
if (sub_pk)
keyid_from_pk (sub_pk, NULL);
return keystr_with_sub (main_pk->keyid, sub_pk? sub_pk->keyid:NULL);
}
| 0 |
[
"CWE-20"
] |
gnupg
|
2183683bd633818dd031b090b5530951de76f392
| 249,975,140,685,110,500,000,000,000,000,000,000,000 | 8 |
Use inline functions to convert buffer data to scalars.
* common/host2net.h (buf16_to_ulong, buf16_to_uint): New.
(buf16_to_ushort, buf16_to_u16): New.
(buf32_to_size_t, buf32_to_ulong, buf32_to_uint, buf32_to_u32): New.
--
Commit 91b826a38880fd8a989318585eb502582636ddd8 was not enough to
avoid all sign extension on shift problems. Hanno Böck found a case
with an invalid read due to this problem. To fix that once and for
all almost all uses of "<< 24" and "<< 8" are changed by this patch to
use an inline function from host2net.h.
Signed-off-by: Werner Koch <[email protected]>
|
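The commit introduces inline helpers (buf16_to_u16, buf32_to_u32, and friends) that widen buffer bytes to unsigned types before shifting, removing the sign-extension hazard of expressions like b[0] << 24. The bodies below are a plausible reconstruction for illustration only; the authoritative definitions live in common/host2net.h.
#include <stdint.h>
#include <stdio.h>

/* Big-endian buffer-to-scalar conversion: each byte is cast to an unsigned
 * type *before* the shift, so the shift can never land in the sign bit of a
 * promoted signed int. */
static inline uint16_t buf16_to_u16(const void *buffer)
{
    const unsigned char *p = buffer;

    return (uint16_t)(((uint16_t)p[0] << 8) | p[1]);
}

static inline uint32_t buf32_to_u32(const void *buffer)
{
    const unsigned char *p = buffer;

    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

int main(void)
{
    const unsigned char pkt[] = { 0xff, 0x00, 0x12, 0x34 };

    /* With a plain "pkt[0] << 24", 0xff would be promoted to int and shifted
     * into the sign bit; the unsigned casts keep the result well defined. */
    printf("%#x\n", (unsigned)buf32_to_u32(pkt));
    printf("%#x\n", (unsigned)buf16_to_u16(pkt));
    return 0;
}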
static struct file *aio_private_file(struct kioctx *ctx, loff_t nr_pages)
{
struct qstr this = QSTR_INIT("[aio]", 5);
struct file *file;
struct path path;
struct inode *inode = alloc_anon_inode(aio_mnt->mnt_sb);
if (IS_ERR(inode))
return ERR_CAST(inode);
inode->i_mapping->a_ops = &aio_ctx_aops;
inode->i_mapping->private_data = ctx;
inode->i_size = PAGE_SIZE * nr_pages;
path.dentry = d_alloc_pseudo(aio_mnt->mnt_sb, &this);
if (!path.dentry) {
iput(inode);
return ERR_PTR(-ENOMEM);
}
path.mnt = mntget(aio_mnt);
d_instantiate(path.dentry, inode);
file = alloc_file(&path, FMODE_READ | FMODE_WRITE, &aio_ring_fops);
if (IS_ERR(file)) {
path_put(&path);
return file;
}
file->f_flags = O_RDWR;
file->private_data = ctx;
return file;
}
| 0 |
[
"CWE-200"
] |
linux
|
edfbbf388f293d70bf4b7c0bc38774d05e6f711a
| 206,510,938,226,349,300,000,000,000,000,000,000,000 | 31 |
aio: fix kernel memory disclosure in io_getevents() introduced in v3.10
A kernel memory disclosure was introduced in aio_read_events_ring() in v3.10
by commit a31ad380bed817aa25f8830ad23e1a0480fef797. The changes made to
aio_read_events_ring() failed to correctly limit the index into
ctx->ring_pages[], allowing an attacker to cause the subsequent kmap() of
an arbitrary page with a copy_to_user() to copy the contents into userspace.
This vulnerability has been assigned CVE-2014-0206. Thanks to Mateusz and
Petr for disclosing this issue.
This patch applies to v3.12+. A separate backport is needed for 3.10/3.11.
Signed-off-by: Benjamin LaHaise <[email protected]>
Cc: Mateusz Guzik <[email protected]>
Cc: Petr Matousek <[email protected]>
Cc: Kent Overstreet <[email protected]>
Cc: Jeff Moyer <[email protected]>
Cc: [email protected]
|
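The disclosure came from using an unbounded index into ctx->ring_pages[]. The toy model below shows the general defence, reducing untrusted head/tail values modulo the ring capacity before they are used for addressing; it is a simplified illustration, not the actual aio_read_events_ring() fix.
#include <stdint.h>
#include <stdio.h>

/* Toy completion ring: the indices live in a header that userspace can
 * scribble on, so they are untrusted and must be reduced modulo the ring
 * size before being used to pick a backing page. */
struct toy_ring {
    uint32_t nr_events;   /* capacity of the ring, always non-zero */
    uint32_t head, tail;  /* free-running counters, wrap via modulo */
};

static uint32_t clamp_index(const struct toy_ring *ring, uint32_t idx)
{
    /* Without this reduction a corrupted head/tail would index far past the
     * pages that actually back the ring, leaking unrelated memory. */
    return idx % ring->nr_events;
}

int main(void)
{
    struct toy_ring ring = { .nr_events = 128, .head = 0xdeadbeef, .tail = 5 };

    printf("bounded head = %u\n", clamp_index(&ring, ring.head));
    printf("bounded tail = %u\n", clamp_index(&ring, ring.tail));
    return 0;
}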
static unsigned long segment_base(u16 selector)
{
struct desc_struct *table;
unsigned long v;
if (!(selector & ~SEGMENT_RPL_MASK))
return 0;
table = get_current_gdt_ro();
if ((selector & SEGMENT_TI_MASK) == SEGMENT_LDT) {
u16 ldt_selector = kvm_read_ldt();
if (!(ldt_selector & ~SEGMENT_RPL_MASK))
return 0;
table = (struct desc_struct *)segment_base(ldt_selector);
}
v = get_desc_base(&table[selector >> 3]);
return v;
}
| 0 |
[
"CWE-284"
] |
linux
|
727ba748e110b4de50d142edca9d6a9b7e6111d8
| 263,497,604,163,547,270,000,000,000,000,000,000,000 | 21 |
kvm: nVMX: Enforce cpl=0 for VMX instructions
VMX instructions executed inside a L1 VM will always trigger a VM exit
even when executed with cpl 3. This means we must perform the
privilege check in software.
Fixes: 70f3aac964ae("kvm: nVMX: Remove superfluous VMX instruction fault checks")
Cc: [email protected]
Signed-off-by: Felix Wilhelm <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
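Because VMX instructions executed in L1 always trigger a VM exit regardless of privilege level, the privilege check has to be redone in software. A minimal userspace model of that decision; the struct and helper names are invented for illustration, and KVM itself reads the CPL through its segment accessors rather than a plain field.
#include <stdio.h>

struct vcpu_model {
    int cpl;    /* privilege level the guest was executing at */
};

enum vmx_disposition { INJECT_GP, EMULATE };

/* A VMX instruction intercepted from the guest is refused with #GP unless
 * the guest was running at CPL 0, matching what hardware would do. */
static enum vmx_disposition handle_vmx_instruction(const struct vcpu_model *vcpu)
{
    if (vcpu->cpl > 0)
        return INJECT_GP;
    return EMULATE;
}

int main(void)
{
    struct vcpu_model ring3 = { .cpl = 3 }, ring0 = { .cpl = 0 };

    printf("cpl3 -> %s\n",
           handle_vmx_instruction(&ring3) == INJECT_GP ? "#GP" : "emulate");
    printf("cpl0 -> %s\n",
           handle_vmx_instruction(&ring0) == INJECT_GP ? "#GP" : "emulate");
    return 0;
}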
MagickExport Image *EnhanceImage(const Image *image,ExceptionInfo *exception)
{
#define EnhanceImageTag "Enhance/Image"
#define EnhancePixel(weight) \
mean=QuantumScale*((double) GetPixelRed(image,r)+pixel.red)/2.0; \
distance=QuantumScale*((double) GetPixelRed(image,r)-pixel.red); \
distance_squared=(4.0+mean)*distance*distance; \
mean=QuantumScale*((double) GetPixelGreen(image,r)+pixel.green)/2.0; \
distance=QuantumScale*((double) GetPixelGreen(image,r)-pixel.green); \
distance_squared+=(7.0-mean)*distance*distance; \
mean=QuantumScale*((double) GetPixelBlue(image,r)+pixel.blue)/2.0; \
distance=QuantumScale*((double) GetPixelBlue(image,r)-pixel.blue); \
distance_squared+=(5.0-mean)*distance*distance; \
mean=QuantumScale*((double) GetPixelBlack(image,r)+pixel.black)/2.0; \
distance=QuantumScale*((double) GetPixelBlack(image,r)-pixel.black); \
distance_squared+=(5.0-mean)*distance*distance; \
mean=QuantumScale*((double) GetPixelAlpha(image,r)+pixel.alpha)/2.0; \
distance=QuantumScale*((double) GetPixelAlpha(image,r)-pixel.alpha); \
distance_squared+=(5.0-mean)*distance*distance; \
if (distance_squared < 0.069) \
{ \
aggregate.red+=(weight)*GetPixelRed(image,r); \
aggregate.green+=(weight)*GetPixelGreen(image,r); \
aggregate.blue+=(weight)*GetPixelBlue(image,r); \
aggregate.black+=(weight)*GetPixelBlack(image,r); \
aggregate.alpha+=(weight)*GetPixelAlpha(image,r); \
total_weight+=(weight); \
} \
r+=GetPixelChannels(image);
CacheView
*enhance_view,
*image_view;
Image
*enhance_image;
MagickBooleanType
status;
MagickOffsetType
progress;
ssize_t
y;
/*
Initialize enhanced image attributes.
*/
assert(image != (const Image *) NULL);
assert(image->signature == MagickCoreSignature);
if (image->debug != MagickFalse)
(void) LogMagickEvent(TraceEvent,GetMagickModule(),"%s",image->filename);
assert(exception != (ExceptionInfo *) NULL);
assert(exception->signature == MagickCoreSignature);
enhance_image=CloneImage(image,0,0,MagickTrue,
exception);
if (enhance_image == (Image *) NULL)
return((Image *) NULL);
if (SetImageStorageClass(enhance_image,DirectClass,exception) == MagickFalse)
{
enhance_image=DestroyImage(enhance_image);
return((Image *) NULL);
}
/*
Enhance image.
*/
status=MagickTrue;
progress=0;
image_view=AcquireVirtualCacheView(image,exception);
enhance_view=AcquireAuthenticCacheView(enhance_image,exception);
#if defined(MAGICKCORE_OPENMP_SUPPORT)
#pragma omp parallel for schedule(static) shared(progress,status) \
magick_number_threads(image,enhance_image,image->rows,1)
#endif
for (y=0; y < (ssize_t) image->rows; y++)
{
PixelInfo
pixel;
register const Quantum
*magick_restrict p;
register Quantum
*magick_restrict q;
register ssize_t
x;
ssize_t
center;
if (status == MagickFalse)
continue;
p=GetCacheViewVirtualPixels(image_view,-2,y-2,image->columns+4,5,exception);
q=QueueCacheViewAuthenticPixels(enhance_view,0,y,enhance_image->columns,1,
exception);
if ((p == (const Quantum *) NULL) || (q == (Quantum *) NULL))
{
status=MagickFalse;
continue;
}
center=(ssize_t) GetPixelChannels(image)*(2*(image->columns+4)+2);
GetPixelInfo(image,&pixel);
for (x=0; x < (ssize_t) image->columns; x++)
{
double
distance,
distance_squared,
mean,
total_weight;
PixelInfo
aggregate;
register const Quantum
*magick_restrict r;
GetPixelInfo(image,&aggregate);
total_weight=0.0;
GetPixelInfoPixel(image,p+center,&pixel);
r=p;
EnhancePixel(5.0); EnhancePixel(8.0); EnhancePixel(10.0);
EnhancePixel(8.0); EnhancePixel(5.0);
r=p+GetPixelChannels(image)*(image->columns+4);
EnhancePixel(8.0); EnhancePixel(20.0); EnhancePixel(40.0);
EnhancePixel(20.0); EnhancePixel(8.0);
r=p+2*GetPixelChannels(image)*(image->columns+4);
EnhancePixel(10.0); EnhancePixel(40.0); EnhancePixel(80.0);
EnhancePixel(40.0); EnhancePixel(10.0);
r=p+3*GetPixelChannels(image)*(image->columns+4);
EnhancePixel(8.0); EnhancePixel(20.0); EnhancePixel(40.0);
EnhancePixel(20.0); EnhancePixel(8.0);
r=p+4*GetPixelChannels(image)*(image->columns+4);
EnhancePixel(5.0); EnhancePixel(8.0); EnhancePixel(10.0);
EnhancePixel(8.0); EnhancePixel(5.0);
if (total_weight > MagickEpsilon)
{
pixel.red=((aggregate.red+total_weight/2.0)/total_weight);
pixel.green=((aggregate.green+total_weight/2.0)/total_weight);
pixel.blue=((aggregate.blue+total_weight/2.0)/total_weight);
pixel.black=((aggregate.black+total_weight/2.0)/total_weight);
pixel.alpha=((aggregate.alpha+total_weight/2.0)/total_weight);
}
SetPixelViaPixelInfo(enhance_image,&pixel,q);
p+=GetPixelChannels(image);
q+=GetPixelChannels(enhance_image);
}
if (SyncCacheViewAuthenticPixels(enhance_view,exception) == MagickFalse)
status=MagickFalse;
if (image->progress_monitor != (MagickProgressMonitor) NULL)
{
MagickBooleanType
proceed;
#if defined(MAGICKCORE_OPENMP_SUPPORT)
#pragma omp atomic
#endif
progress++;
proceed=SetImageProgress(image,EnhanceImageTag,progress,image->rows);
if (proceed == MagickFalse)
status=MagickFalse;
}
}
enhance_view=DestroyCacheView(enhance_view);
image_view=DestroyCacheView(image_view);
if (status == MagickFalse)
enhance_image=DestroyImage(enhance_image);
return(enhance_image);
}
| 0 |
[
"CWE-399",
"CWE-119",
"CWE-787"
] |
ImageMagick
|
d4fc44b58a14f76b1ac997517d742ee12c9dc5d3
| 302,976,961,672,308,500,000,000,000,000,000,000,000 | 170 |
https://github.com/ImageMagick/ImageMagick/issues/1611
|
int ndisc_ifinfo_sysctl_change(struct ctl_table *ctl, int write, void __user *buffer, size_t *lenp, loff_t *ppos)
{
struct net_device *dev = ctl->extra1;
struct inet6_dev *idev;
int ret;
if ((strcmp(ctl->procname, "retrans_time") == 0) ||
(strcmp(ctl->procname, "base_reachable_time") == 0))
ndisc_warn_deprecated_sysctl(ctl, "syscall", dev ? dev->name : "default");
if (strcmp(ctl->procname, "retrans_time") == 0)
ret = neigh_proc_dointvec(ctl, write, buffer, lenp, ppos);
else if (strcmp(ctl->procname, "base_reachable_time") == 0)
ret = neigh_proc_dointvec_jiffies(ctl, write,
buffer, lenp, ppos);
else if ((strcmp(ctl->procname, "retrans_time_ms") == 0) ||
(strcmp(ctl->procname, "base_reachable_time_ms") == 0))
ret = neigh_proc_dointvec_ms_jiffies(ctl, write,
buffer, lenp, ppos);
else
ret = -1;
if (write && ret == 0 && dev && (idev = in6_dev_get(dev)) != NULL) {
if (ctl->data == &NEIGH_VAR(idev->nd_parms, BASE_REACHABLE_TIME))
idev->nd_parms->reachable_time =
neigh_rand_reach_time(NEIGH_VAR(idev->nd_parms, BASE_REACHABLE_TIME));
idev->tstamp = jiffies;
inet6_ifinfo_notify(RTM_NEWLINK, idev);
in6_dev_put(idev);
}
return ret;
}
| 0 |
[
"CWE-17"
] |
linux
|
6fd99094de2b83d1d4c8457f2c83483b2828e75a
| 4,023,364,495,039,905,500,000,000,000,000,000,000 | 34 |
ipv6: Don't reduce hop limit for an interface
A local route may have a lower hop_limit set than global routes do.
RFC 3756, Section 4.2.7, "Parameter Spoofing"
> 1. The attacker includes a Current Hop Limit of one or another small
> number which the attacker knows will cause legitimate packets to
> be dropped before they reach their destination.
> As an example, one possible approach to mitigate this threat is to
> ignore very small hop limits. The nodes could implement a
> configurable minimum hop limit, and ignore attempts to set it below
> said limit.
Signed-off-by: D.S. Ljungmark <[email protected]>
Acked-by: Hannes Frederic Sowa <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
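The mitigation is to refuse Router Advertisements that would lower the hop limit already in use, so a spoofed RA carrying a tiny Cur Hop Limit cannot make legitimate packets expire in transit. A small userspace model of that policy, with an invented accept_ra_hop_limit() helper standing in for the RA processing path.
#include <stdio.h>

/* Returns the hop limit to use after processing one RA: an advertised value
 * smaller than the current one is ignored, a zero value means "unspecified"
 * and leaves the setting alone. */
static int accept_ra_hop_limit(int current_hop_limit, int advertised)
{
    if (advertised != 0 && advertised < current_hop_limit) {
        printf("RA ignored: would lower hop limit %d -> %d\n",
               current_hop_limit, advertised);
        return current_hop_limit;
    }
    return advertised ? advertised : current_hop_limit;
}

int main(void)
{
    int hl = 64;

    hl = accept_ra_hop_limit(hl, 1);    /* spoofed RA, rejected */
    hl = accept_ra_hop_limit(hl, 255);  /* legitimate raise, accepted */
    printf("hop limit now %d\n", hl);
    return 0;
}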
void get_whoami_test()
{
print_whoami("a/b/c/quack1");
print_whoami("a/b/c/quack2.exe");
print_whoami("a\\b\\c\\quack3");
print_whoami("a\\b\\c\\quack4.exe");
}
| 0 |
[
"CWE-125"
] |
qpdf
|
6d46346eb93d5032c08cf1e39023b5d57260a766
| 189,028,178,539,225,400,000,000,000,000,000,000,000 | 7 |
Detect integer overflow/underflow
|
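The commit message only states the goal, and the qpdf fix itself is C++; as a generic illustration of the pattern it names, here is a self-contained C sketch of pre-checked signed addition that reports overflow and underflow instead of invoking undefined behaviour.
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

/* Overflow-checked signed addition: the range test runs *before* the add,
 * because signed overflow itself is undefined behaviour in C. */
static bool checked_add(long long a, long long b, long long *out)
{
    if ((b > 0 && a > LLONG_MAX - b) || (b < 0 && a < LLONG_MIN - b))
        return false;   /* would overflow or underflow */
    *out = a + b;
    return true;
}

int main(void)
{
    long long r;

    printf("ok=%d\n", checked_add(LLONG_MAX, 1, &r));     /* detected */
    printf("ok=%d r=%lld\n", checked_add(40, 2, &r), r);  /* fine */
    return 0;
}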
static int __io_sqe_files_update(struct io_ring_ctx *ctx,
struct io_uring_files_update *up,
unsigned nr_args)
{
struct fixed_file_data *data = ctx->file_data;
struct fixed_file_ref_node *ref_node;
struct file *file;
__s32 __user *fds;
int fd, i, err;
__u32 done;
bool needs_switch = false;
if (check_add_overflow(up->offset, nr_args, &done))
return -EOVERFLOW;
if (done > ctx->nr_user_files)
return -EINVAL;
ref_node = alloc_fixed_file_ref_node(ctx);
if (IS_ERR(ref_node))
return PTR_ERR(ref_node);
done = 0;
fds = u64_to_user_ptr(up->fds);
while (nr_args) {
struct fixed_file_table *table;
unsigned index;
err = 0;
if (copy_from_user(&fd, &fds[done], sizeof(fd))) {
err = -EFAULT;
break;
}
i = array_index_nospec(up->offset, ctx->nr_user_files);
table = &ctx->file_data->table[i >> IORING_FILE_TABLE_SHIFT];
index = i & IORING_FILE_TABLE_MASK;
if (table->files[index]) {
file = table->files[index];
err = io_queue_file_removal(data, file);
if (err)
break;
table->files[index] = NULL;
needs_switch = true;
}
if (fd != -1) {
file = fget(fd);
if (!file) {
err = -EBADF;
break;
}
/*
* Don't allow io_uring instances to be registered. If
* UNIX isn't enabled, then this causes a reference
* cycle and this instance can never get freed. If UNIX
* is enabled we'll handle it just fine, but there's
* still no point in allowing a ring fd as it doesn't
* support regular read/write anyway.
*/
if (file->f_op == &io_uring_fops) {
fput(file);
err = -EBADF;
break;
}
table->files[index] = file;
err = io_sqe_file_register(ctx, file, i);
if (err) {
table->files[index] = NULL;
fput(file);
break;
}
}
nr_args--;
done++;
up->offset++;
}
if (needs_switch) {
percpu_ref_kill(data->cur_refs);
spin_lock(&data->lock);
list_add(&ref_node->node, &data->ref_list);
data->cur_refs = &ref_node->refs;
spin_unlock(&data->lock);
percpu_ref_get(&ctx->file_data->refs);
} else
destroy_fixed_file_ref_node(ref_node);
return done ? done : err;
}
| 0 |
[] |
linux
|
0f2122045b946241a9e549c2a76cea54fa58a7ff
| 23,563,316,250,476,817,000,000,000,000,000,000,000 | 87 |
io_uring: don't rely on weak ->files references
Grab actual references to the files_struct. To avoid circular references
issues due to this, we add a per-task note that keeps track of what
io_uring contexts a task has used. When the task execs or exits its
assigned files, we cancel requests based on this tracking.
With that, we can grab proper references to the files table, and no
longer need to rely on stashing away ring_fd and ring_file to check
if the ring_fd may have been closed.
Cc: [email protected] # v5.5+
Reviewed-by: Pavel Begunkov <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
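The core idea above is to hold a real reference on the task's file table instead of a weak pointer that can disappear underneath the ring. The toy ownership model below illustrates only that refcounting idea; the names and the single counter are invented, and the real change also adds per-task context tracking and request cancellation that is not modelled here.
#include <stdio.h>

struct files_table {
    int refcount;
};

static struct files_table *files_get(struct files_table *f)
{
    f->refcount++;          /* take a real reference for the ring */
    return f;
}

static void files_put(struct files_table *f)
{
    if (--f->refcount == 0)
        printf("file table really freed\n");
}

struct ring_ctx {
    struct files_table *files;   /* owned reference, not a weak pointer */
};

int main(void)
{
    struct files_table table = { .refcount = 1 };
    struct ring_ctx ctx = { .files = files_get(&table) };

    files_put(&table);      /* task exits: table stays alive for the ring */
    files_put(ctx.files);   /* ring teardown drops the last reference */
    return 0;
}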
copy_attr_error (struct error_context *ctx, char const *fmt, ...)
{
int err = errno;
va_list ap;
if (err != ENOSYS && err != ENOTSUP && err != EPERM)
{
/* use verror module to print error message */
va_start (ap, fmt);
verror (0, err, fmt, ap);
va_end (ap);
}
}
| 0 |
[
"CWE-59"
] |
patch
|
dce4683cbbe107a95f1f0d45fabc304acfb5d71a
| 135,115,459,267,985,390,000,000,000,000,000,000,000 | 13 |
Don't follow symlinks unless --follow-symlinks is given
* src/inp.c (plan_a, plan_b), src/util.c (copy_to_fd, copy_file,
append_to_file): Unless the --follow-symlinks option is given, open files with
the O_NOFOLLOW flag to avoid following symlinks. So far, we were only doing
that consistently for input files.
* src/util.c (create_backup): When creating empty backup files, (re)create them
with O_CREAT | O_EXCL to avoid following symlinks in that case as well.
|
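The fix opens files with O_NOFOLLOW unless --follow-symlinks is given, and recreates empty backups with O_CREAT | O_EXCL so a pre-planted symlink is never followed. A hedged sketch of those two open paths; open_target and create_backup are illustrative helpers, not GNU patch's functions.
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Open a file to be patched: without the explicit opt-in, O_NOFOLLOW makes
 * the open fail (ELOOP) when the final path component is a symlink, so a
 * crafted patch cannot redirect writes elsewhere. */
static int open_target(const char *path, int follow_symlinks)
{
    int flags = O_RDWR;

    if (!follow_symlinks)
        flags |= O_NOFOLLOW;
    return open(path, flags);
}

/* Create a fresh, empty backup file: O_CREAT | O_EXCL refuses to reuse an
 * existing path, so a pre-planted symlink cannot be followed either. */
static int create_backup(const char *path)
{
    return open(path, O_WRONLY | O_CREAT | O_EXCL, 0644);
}

int main(void)
{
    int fd = open_target("target.txt", 0);
    int bak = create_backup("target.txt.orig");

    if (fd < 0)
        perror("open_target");
    else
        close(fd);
    if (bak < 0)
        perror("create_backup");
    else
        close(bak);
    return 0;
}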
static void free_cgrp_cset_links(struct list_head *links_to_free)
{
struct cgrp_cset_link *link, *tmp_link;
list_for_each_entry_safe(link, tmp_link, links_to_free, cset_link) {
list_del(&link->cset_link);
kfree(link);
}
}
| 0 |
[
"CWE-416"
] |
linux
|
a06247c6804f1a7c86a2e5398a4c1f1db1471848
| 39,336,353,933,636,320,000,000,000,000,000,000,000 | 9 |
psi: Fix uaf issue when psi trigger is destroyed while being polled
With write operation on psi files replacing old trigger with a new one,
the lifetime of its waitqueue is totally arbitrary. Overwriting an
existing trigger causes its waitqueue to be freed and pending poll()
will stumble on trigger->event_wait which was destroyed.
Fix this by disallowing to redefine an existing psi trigger. If a write
operation is used on a file descriptor with an already existing psi
trigger, the operation will fail with EBUSY error.
Also bypass a check for psi_disabled in the psi_trigger_destroy as the
flag can be flipped after the trigger is created, leading to a memory
leak.
Fixes: 0e94682b73bf ("psi: introduce psi monitor")
Reported-by: [email protected]
Suggested-by: Linus Torvalds <[email protected]>
Analyzed-by: Eric Biggers <[email protected]>
Signed-off-by: Suren Baghdasaryan <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Eric Biggers <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]
|
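Per the message above, a second write to a file that already has a psi trigger now fails with EBUSY instead of replacing (and freeing) a trigger that a pending poll() may still be sleeping on. A toy model of that policy; psi_file and psi_write_trigger are invented names standing in for the cgroup/psi plumbing.
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct psi_file {
    void *trigger;   /* NULL until the first successful write */
};

/* Installing a trigger succeeds once; any further attempt is refused rather
 * than tearing down a waitqueue that a poller could still be using. */
static int psi_write_trigger(struct psi_file *f, const char *spec)
{
    if (f->trigger)
        return -EBUSY;
    f->trigger = malloc(1);          /* stands in for trigger creation */
    printf("trigger '%s' installed\n", spec);
    return 0;
}

int main(void)
{
    struct psi_file f = { 0 };

    printf("first  write -> %d\n", psi_write_trigger(&f, "some 150000 1000000"));
    printf("second write -> %d\n", psi_write_trigger(&f, "full 50000 1000000"));
    free(f.trigger);
    return 0;
}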
TEST_F(HttpConnectionManagerImplTest, TestAccessLog) {
static constexpr char local_address[] = "0.0.0.0";
static constexpr char xff_address[] = "1.2.3.4";
// stream_info.downstreamRemoteAddress will infer the address from request
// headers instead of the physical connection
use_remote_address_ = false;
setup(false, "");
std::shared_ptr<MockStreamDecoderFilter> filter(new NiceMock<MockStreamDecoderFilter>());
std::shared_ptr<AccessLog::MockInstance> handler(new NiceMock<AccessLog::MockInstance>());
EXPECT_CALL(filter_factory_, createFilterChain(_))
.WillOnce(Invoke([&](FilterChainFactoryCallbacks& callbacks) -> void {
callbacks.addStreamDecoderFilter(filter);
callbacks.addAccessLogHandler(handler);
}));
EXPECT_CALL(*handler, log(_, _, _, _))
.WillOnce(Invoke([](const HeaderMap*, const HeaderMap*, const HeaderMap*,
const StreamInfo::StreamInfo& stream_info) {
EXPECT_TRUE(stream_info.responseCode());
EXPECT_EQ(stream_info.responseCode().value(), uint32_t(200));
EXPECT_NE(nullptr, stream_info.downstreamLocalAddress());
EXPECT_NE(nullptr, stream_info.downstreamRemoteAddress());
EXPECT_NE(nullptr, stream_info.downstreamDirectRemoteAddress());
EXPECT_NE(nullptr, stream_info.routeEntry());
EXPECT_EQ(stream_info.downstreamRemoteAddress()->ip()->addressAsString(), xff_address);
EXPECT_EQ(stream_info.downstreamDirectRemoteAddress()->ip()->addressAsString(),
local_address);
}));
StreamDecoder* decoder = nullptr;
NiceMock<MockStreamEncoder> encoder;
EXPECT_CALL(*codec_, dispatch(_)).WillRepeatedly(Invoke([&](Buffer::Instance& data) -> void {
decoder = &conn_manager_->newStream(encoder);
HeaderMapPtr headers{
new TestHeaderMapImpl{{":method", "GET"},
{":authority", "host"},
{":path", "/"},
{"x-forwarded-for", xff_address},
{"x-request-id", "125a4afb-6f55-a4ba-ad80-413f09f48a28"}}};
decoder->decodeHeaders(std::move(headers), true);
HeaderMapPtr response_headers{new TestHeaderMapImpl{{":status", "200"}}};
filter->callbacks_->encodeHeaders(std::move(response_headers), true);
data.drain(4);
}));
Buffer::OwnedImpl fake_input("1234");
conn_manager_->onData(fake_input, false);
}
| 0 |
[
"CWE-400",
"CWE-703"
] |
envoy
|
afc39bea36fd436e54262f150c009e8d72db5014
| 159,101,042,868,530,670,000,000,000,000,000,000,000 | 55 |
Track byteSize of HeaderMap internally.
Introduces a cached byte size updated internally in HeaderMap. The value
is stored as an optional, and is cleared whenever a non-const pointer or
reference to a HeaderEntry is accessed. The cached value can be set with
refreshByteSize() which performs an iteration over the HeaderMap to sum
the size of each key and value in the HeaderMap.
Signed-off-by: Asra Ali <[email protected]>
|
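The design is a cached aggregate size that stays valid only until mutable access to an entry is handed out, after which refreshByteSize() must recompute it. The envoy change is C++; the sketch below restates the same invalidation rule in plain C with an invented fixed-size header_map, purely for illustration.
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct header_map {
    const char *keys[8];
    const char *values[8];
    int count;
    size_t cached_size;
    bool cache_valid;
};

/* Recompute and re-validate the cached total of key and value lengths. */
static size_t refresh_byte_size(struct header_map *m)
{
    size_t total = 0;

    for (int i = 0; i < m->count; i++)
        total += strlen(m->keys[i]) + strlen(m->values[i]);
    m->cached_size = total;
    m->cache_valid = true;
    return total;
}

/* Handing out a writable entry invalidates the cache, mirroring the rule
 * that non-const access clears the stored size. */
static const char **mutable_value(struct header_map *m, int i)
{
    m->cache_valid = false;
    return &m->values[i];
}

int main(void)
{
    struct header_map m = { .keys = { ":status" }, .values = { "200" },
                            .count = 1 };

    printf("size = %zu\n", refresh_byte_size(&m));
    *mutable_value(&m, 0) = "404 not found";
    printf("valid after mutation? %d\n", m.cache_valid);
    printf("size = %zu\n", refresh_byte_size(&m));
    return 0;
}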
rend_service_path(const rend_service_t *service, const char *file_name)
{
char *file_path = NULL;
tor_assert(service->directory);
/* Can never fail: asserts rather than leaving file_path NULL. */
tor_asprintf(&file_path, "%s%s%s",
service->directory, PATH_SEPARATOR, file_name);
return file_path;
}
| 0 |
[
"CWE-532"
] |
tor
|
09ea89764a4d3a907808ed7d4fe42abfe64bd486
| 254,107,430,087,403,200,000,000,000,000,000,000,000 | 12 |
Fix log-uninitialized-stack bug in rend_service_intro_established.
Fixes bug 23490; bugfix on 0.2.7.2-alpha.
TROVE-2017-008
CVE-2017-0380
|