func (stringlengths 0-484k) | target (int64 0-1) | cwe (sequencelengths 0-4) | project (stringclasses 799 values) | commit_id (stringlengths 40-40) | hash (float64 1,215,700,430,453,689,100,000,000B to 340,281,914,521,452,260,000,000,000,000B) | size (int64 1-24k) | message (stringlengths 0-13.3k) |
---|---|---|---|---|---|---|---|
int save_in_field(Field *field, bool no_conversions)
{
field->set_notnull();
return field->store_hex_hybrid(str_value.ptr(), str_value.length());
} | 0 | [
"CWE-617"
] | server | 2e7891080667c59ac80f788eef4d59d447595772 | 326,047,647,818,030,050,000,000,000,000,000,000,000 | 5 | MDEV-25635 Assertion failure when pushing from HAVING into WHERE of view
This bug could manifest itself after pushing a where condition over a
mergeable derived table / view / CTE DT into a grouping view / derived
table / CTE V whose item list contained set functions with constant
arguments such as MIN(2), SUM(1) etc. In such cases the field references
used in the condition pushed into the view V that correspond to set functions
are wrapped into Item_direct_view_ref wrappers. Due to a wrong implementation
of the virtual method const_item() for the class Item_direct_view_ref the
wrapped set functions with constant arguments could be erroneously taken
for constant items. This could lead to a wrong result set returned by the
main select query in 10.2. In 10.4 where a possibility of pushing condition
from HAVING into WHERE had been added this could cause a crash.
Approved by Sergey Petrunya <[email protected]> |
printInfo(ev::sig &watcher, int revents) {
cerr << "---------- Begin LoggingAgent status ----------\n";
loggingServer->dump(cerr);
cerr.flush();
cerr << "---------- End LoggingAgent status ----------\n";
} | 0 | [
"CWE-59"
] | passenger | 9dda49f4a3ebe9bafc48da1bd45799f30ce19566 | 272,209,930,183,734,620,000,000,000,000,000,000,000 | 6 | Fixed a problem with graceful web server restarts.
This problem was introduced in 4.0.6 during the attempt to fix issue #910. |
static inline void control_tx_irq_watermark(struct cx23885_dev *dev,
enum tx_fifo_watermark level)
{
cx23888_ir_and_or4(dev, CX23888_IR_CNTRL_REG, ~CNTRL_TIC, level);
} | 0 | [
"CWE-400",
"CWE-401"
] | linux | a7b2df76b42bdd026e3106cf2ba97db41345a177 | 62,536,058,320,782,240,000,000,000,000,000,000,000 | 5 | media: rc: prevent memory leak in cx23888_ir_probe
In cx23888_ir_probe if kfifo_alloc fails the allocated memory for state
should be released.
Signed-off-by: Navid Emamdoost <[email protected]>
Signed-off-by: Sean Young <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]> |
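A minimal sketch of the error-path pattern this fix describes, assuming the driver's `state` object and the `CX23888_IR_RX_KFIFO_SIZE` constant from the surrounding code; the exact upstream change may differ:

```c
/* Sketch: if kfifo_alloc() fails, release the just-allocated state instead
 * of returning and leaking it. */
state = kzalloc(sizeof(*state), GFP_KERNEL);
if (state == NULL)
	return -ENOMEM;

if (kfifo_alloc(&state->rx_kfifo, CX23888_IR_RX_KFIFO_SIZE, GFP_KERNEL)) {
	kfree(state);	/* the fix: free "state" on the error path */
	return -ENOMEM;
}
```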
Next_select_func setup_end_select_func(JOIN *join, JOIN_TAB *tab)
{
TMP_TABLE_PARAM *tmp_tbl= tab ? tab->tmp_table_param : &join->tmp_table_param;
/*
Choose method for presenting result to user. Use end_send_group
if the query requires grouping (has a GROUP BY clause and/or one or
more aggregate functions). Use end_send if the query should not
be grouped.
*/
if (join->sort_and_group && !tmp_tbl->precomputed_group_by)
{
DBUG_PRINT("info",("Using end_send_group"));
return end_send_group;
}
DBUG_PRINT("info",("Using end_send"));
return end_send;
} | 0 | [] | server | ff77a09bda884fe6bf3917eb29b9d3a2f53f919b | 75,060,888,505,972,080,000,000,000,000,000,000,000 | 18 | MDEV-22464 Server crash on UPDATE with nested subquery
Uninitialized ref_pointer_array[] because setup_fields() got empty
fields list. mysql_multi_update() for some reason does that by
substituting the fields list with empty total_list for the
mysql_select() call (looks like wrong merge since total_list is not
used anywhere else and is always empty). The fix would be to return
back the original fields list. But this fails update_use_source.test
case:
--error ER_BAD_FIELD_ERROR
update v1 set t1c1=2 order by 1;
Actually not failing the above seems to be ok.
The other fix would be to keep resolve_in_select_list false (and that
keeps outer context from being resolved in
Item_ref::fix_fields()). This fix is more consistent with how SELECT
behaves:
--error ER_SUBQUERY_NO_1_ROW
select a from t1 where a= (select 2 from t1 having (a = 3));
So this patch implements this fix. |
static void proc_set_tty(struct tty_struct *tty)
{
spin_lock_irq(&current->sighand->siglock);
__proc_set_tty(tty);
spin_unlock_irq(&current->sighand->siglock);
} | 0 | [
"CWE-200",
"CWE-362"
] | linux | 5c17c861a357e9458001f021a7afa7aab9937439 | 231,099,748,773,509,550,000,000,000,000,000,000,000 | 6 | tty: Fix unsafe ldisc reference via ioctl(TIOCGETD)
ioctl(TIOCGETD) retrieves the line discipline id directly from the
ldisc because the line discipline id (c_line) in termios is untrustworthy;
userspace may have set termios via ioctl(TCSETS*) without actually
changing the line discipline via ioctl(TIOCSETD).
However, directly accessing the current ldisc via tty->ldisc is
unsafe; the ldisc ptr dereferenced may be stale if the line discipline
is changing via ioctl(TIOCSETD) or hangup.
Wait for the line discipline reference (just like read() or write())
to retrieve the "current" line discipline id.
Cc: <[email protected]>
Signed-off-by: Peter Hurley <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]> |
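A hedged sketch of the approach described above: take a waiting line-discipline reference (as read() and write() do) before reading the ldisc number, instead of dereferencing tty->ldisc directly. The helper names follow the common tty ldisc API; the exact upstream code may differ slightly.

```c
/* Sketch: return the "current" line discipline id under a proper reference. */
static int tiocgetd(struct tty_struct *tty, int __user *p)
{
	struct tty_ldisc *ld;
	int ret;

	ld = tty_ldisc_ref_wait(tty);	/* waits out any TIOCSETD/hangup change */
	ret = put_user(ld->ops->num, p);
	tty_ldisc_deref(ld);
	return ret;
}
```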
void sock_kfree_s(struct sock *sk, void *mem, int size)
{
__sock_kfree_s(sk, mem, size, false);
} | 0 | [
"CWE-119",
"CWE-787"
] | linux | b98b0bc8c431e3ceb4b26b0dfc8db509518fb290 | 104,686,827,137,613,100,000,000,000,000,000,000,000 | 4 | net: avoid signed overflows for SO_{SND|RCV}BUFFORCE
CAP_NET_ADMIN users should not be allowed to set negative
sk_sndbuf or sk_rcvbuf values, as it can lead to various memory
corruptions, crashes, OOM...
Note that before commit 82981930125a ("net: cleanups in
sock_setsockopt()"), the bug was even more serious, since SO_SNDBUF
and SO_RCVBUF were vulnerable.
This needs to be backported to all known linux kernels.
Again, many thanks to syzkaller team for discovering this gem.
Signed-off-by: Eric Dumazet <[email protected]>
Reported-by: Andrey Konovalov <[email protected]>
Signed-off-by: David S. Miller <[email protected]> |
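A hedged sketch of the guard this describes for the FORCE variants in sock_setsockopt(); surrounding code is omitted and simplified, and the label name is taken from the existing non-FORCE path:

```c
case SO_SNDBUFFORCE:
	if (!capable(CAP_NET_ADMIN)) {
		ret = -EPERM;
		break;
	}
	/* Sketch of the fix: reject/clamp negative values, since val is
	 * doubled before being stored and a negative sk_sndbuf corrupts
	 * the socket's memory accounting. */
	if (val < 0)
		val = 0;
	goto set_sndbuf;
```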
int snd_timer_open(struct snd_timer_instance **ti,
char *owner, struct snd_timer_id *tid,
unsigned int slave_id)
{
struct snd_timer *timer;
struct snd_timer_instance *timeri = NULL;
if (tid->dev_class == SNDRV_TIMER_CLASS_SLAVE) {
/* open a slave instance */
if (tid->dev_sclass <= SNDRV_TIMER_SCLASS_NONE ||
tid->dev_sclass > SNDRV_TIMER_SCLASS_OSS_SEQUENCER) {
pr_debug("ALSA: timer: invalid slave class %i\n",
tid->dev_sclass);
return -EINVAL;
}
mutex_lock(&register_mutex);
timeri = snd_timer_instance_new(owner, NULL);
if (!timeri) {
mutex_unlock(&register_mutex);
return -ENOMEM;
}
timeri->slave_class = tid->dev_sclass;
timeri->slave_id = tid->device;
timeri->flags |= SNDRV_TIMER_IFLG_SLAVE;
list_add_tail(&timeri->open_list, &snd_timer_slave_list);
snd_timer_check_slave(timeri);
mutex_unlock(&register_mutex);
*ti = timeri;
return 0;
}
/* open a master instance */
mutex_lock(&register_mutex);
timer = snd_timer_find(tid);
#ifdef CONFIG_MODULES
if (!timer) {
mutex_unlock(&register_mutex);
snd_timer_request(tid);
mutex_lock(&register_mutex);
timer = snd_timer_find(tid);
}
#endif
if (!timer) {
mutex_unlock(&register_mutex);
return -ENODEV;
}
if (!list_empty(&timer->open_list_head)) {
timeri = list_entry(timer->open_list_head.next,
struct snd_timer_instance, open_list);
if (timeri->flags & SNDRV_TIMER_IFLG_EXCLUSIVE) {
mutex_unlock(&register_mutex);
return -EBUSY;
}
}
timeri = snd_timer_instance_new(owner, timer);
if (!timeri) {
mutex_unlock(&register_mutex);
return -ENOMEM;
}
timeri->slave_class = tid->dev_sclass;
timeri->slave_id = slave_id;
if (list_empty(&timer->open_list_head) && timer->hw.open)
timer->hw.open(timer);
list_add_tail(&timeri->open_list, &timer->open_list_head);
snd_timer_check_master(timeri);
mutex_unlock(&register_mutex);
*ti = timeri;
return 0;
} | 0 | [
"CWE-200",
"CWE-362"
] | linux | ee8413b01045c74340aa13ad5bdf905de32be736 | 15,464,896,072,493,654,000,000,000,000,000,000,000 | 69 | ALSA: timer: Fix double unlink of active_list
ALSA timer instance object has a couple of linked lists and they are
unlinked unconditionally at snd_timer_stop(). Meanwhile
snd_timer_interrupt() unlinks it, but it calls list_del() which leaves
the element list itself unchanged. This ends up with unlinking twice,
and it was caught by syzkaller fuzzer.
The fix is to use list_del_init() variant properly there, too.
Reported-by: Dmitry Vyukov <[email protected]>
Tested-by: Dmitry Vyukov <[email protected]>
Cc: <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]> |
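A hedged sketch of the change described above inside the interrupt path: unlinking with list_del_init() leaves the node re-initialized, so the later unconditional unlink in snd_timer_stop() becomes a no-op rather than a second unlink of stale pointers.

```c
/* Sketch only; locking and the rest of snd_timer_interrupt() are omitted. */
list_del_init(&ti->ack_list);		/* was list_del(), leaving stale links */
list_del_init(&ti->active_list);
```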
static double mp_linear_add(_cimg_math_parser& mp) {
return _mp_arg(2)*_mp_arg(3) + _mp_arg(4);
} | 0 | [
"CWE-125"
] | CImg | 10af1e8c1ad2a58a0a3342a856bae63e8f257abb | 243,660,661,266,380,540,000,000,000,000,000,000,000 | 3 | Fix other issues in 'CImg<T>::load_bmp()'. |
static enum write_one_status write_one(struct sha1file *f,
struct object_entry *e,
off_t *offset)
{
unsigned long size;
int recursing;
/*
* we set offset to 1 (which is an impossible value) to mark
* the fact that this object is involved in "write its base
* first before writing a deltified object" recursion.
*/
recursing = (e->idx.offset == 1);
if (recursing) {
warning("recursive delta detected for object %s",
sha1_to_hex(e->idx.sha1));
return WRITE_ONE_RECURSIVE;
} else if (e->idx.offset || e->preferred_base) {
/* offset is non zero if object is written already. */
return WRITE_ONE_SKIP;
}
/* if we are deltified, write out base object first. */
if (e->delta) {
e->idx.offset = 1; /* now recurse */
switch (write_one(f, e->delta, offset)) {
case WRITE_ONE_RECURSIVE:
/* we cannot depend on this one */
e->delta = NULL;
break;
default:
break;
case WRITE_ONE_BREAK:
e->idx.offset = recursing;
return WRITE_ONE_BREAK;
}
}
e->idx.offset = *offset;
size = write_object(f, e, *offset);
if (!size) {
e->idx.offset = recursing;
return WRITE_ONE_BREAK;
}
written_list[nr_written++] = &e->idx;
/* make sure off_t is sufficiently large not to wrap */
if (signed_add_overflows(*offset, size))
die("pack too large for current definition of off_t");
*offset += size;
return WRITE_ONE_WRITTEN;
} | 0 | [
"CWE-119",
"CWE-787"
] | git | de1e67d0703894cb6ea782e36abb63976ab07e60 | 148,525,408,013,273,290,000,000,000,000,000,000,000 | 52 | list-objects: pass full pathname to callbacks
When we find a blob at "a/b/c", we currently pass this to
our show_object_fn callbacks as two components: "a/b/" and
"c". Callbacks which want the full value then call
path_name(), which concatenates the two. But this is an
inefficient interface; the path is a strbuf, and we could
simply append "c" to it temporarily, then roll back the
length, without creating a new copy.
So we could improve this by teaching the callsites of
path_name() this trick (and there are only 3). But we can
also notice that no callback actually cares about the
broken-down representation, and simply pass each callback
the full path "a/b/c" as a string. The callback code becomes
even simpler, then, as we do not have to worry about freeing
an allocated buffer, nor rolling back our modification to
the strbuf.
This is theoretically less efficient, as some callbacks
would not bother to format the final path component. But in
practice this is not measurable. Since we use the same
strbuf over and over, our work to grow it is amortized, and
we really only pay to memcpy a few bytes.
Signed-off-by: Jeff King <[email protected]>
Signed-off-by: Junio C Hamano <[email protected]> |
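A hedged sketch of the strbuf append-and-roll-back idea mentioned in the message, using git's strbuf API with illustrative variable names:

```c
/* Sketch: temporarily extend the shared path buffer, hand the callback the
 * full "a/b/c" string, then roll the buffer back without any allocation. */
size_t base_len = path->len;

strbuf_addstr(path, entry_name);	/* "a/b/" -> "a/b/c" */
show(obj, path->buf, cb_data);		/* callback receives the full pathname */
strbuf_setlen(path, base_len);		/* roll back to "a/b/" */
```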
cJSON *cJSON_CreateBool(int b) {cJSON *item=cJSON_New_Item();if(item)item->type=b?cJSON_True:cJSON_False;return item;} | 0 | [
"CWE-120",
"CWE-119",
"CWE-787"
] | iperf | 91f2fa59e8ed80dfbf400add0164ee0e508e412a | 167,215,786,972,382,780,000,000,000,000,000,000,000 | 1 | Fix a buffer overflow / heap corruption issue that could occur if a
malformed JSON string was passed on the control channel. This issue,
present in the cJSON library, was already fixed upstream, so was
addressed here in iperf3 by importing a newer version of cJSON (plus
local ESnet modifications).
Discovered and reported by Dave McDaniel, Cisco Talos.
Based on a patch by @dopheide-esnet, with input from @DaveGamble.
Cross-references: TALOS-CAN-0164, ESNET-SECADV-2016-0001,
CVE-2016-4303
(cherry picked from commit ed94082be27d971a5e1b08b666e2c217cf470a40)
Signed-off-by: Bruce A. Mah <[email protected]> |
static void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
{
unsigned long hw_cr0 = (cr0 & ~KVM_GUEST_CR0_MASK) |
KVM_VM_CR0_ALWAYS_ON;
vmx_fpu_deactivate(vcpu);
if (vcpu->arch.rmode.active && (cr0 & X86_CR0_PE))
enter_pmode(vcpu);
if (!vcpu->arch.rmode.active && !(cr0 & X86_CR0_PE))
enter_rmode(vcpu);
#ifdef CONFIG_X86_64
if (vcpu->arch.shadow_efer & EFER_LME) {
if (!is_paging(vcpu) && (cr0 & X86_CR0_PG))
enter_lmode(vcpu);
if (is_paging(vcpu) && !(cr0 & X86_CR0_PG))
exit_lmode(vcpu);
}
#endif
if (vm_need_ept())
ept_update_paging_mode_cr0(&hw_cr0, cr0, vcpu);
vmcs_writel(CR0_READ_SHADOW, cr0);
vmcs_writel(GUEST_CR0, hw_cr0);
vcpu->arch.cr0 = cr0;
if (!(cr0 & X86_CR0_TS) || !(cr0 & X86_CR0_PE))
vmx_fpu_activate(vcpu);
} | 0 | [
"CWE-20"
] | linux-2.6 | 16175a796d061833aacfbd9672235f2d2725df65 | 280,484,419,707,230,100,000,000,000,000,000,000,000 | 32 | KVM: VMX: Don't allow uninhibited access to EFER on i386
vmx_set_msr() does not allow i386 guests to touch EFER, but they can still
do so through the default: label in the switch. If they set EFER_LME, they
can oops the host.
Fix by having EFER access through the normal channel (which will check for
EFER_LME) even on i386.
Reported-and-tested-by: Benjamin Gilbert <[email protected]>
Cc: [email protected]
Signed-off-by: Avi Kivity <[email protected]> |
do_unmount (UnmountData *data)
{
GMountOperation *mount_op;
if (data->mount_operation)
{
mount_op = g_object_ref (data->mount_operation);
}
else
{
mount_op = gtk_mount_operation_new (data->parent_window);
}
if (data->eject)
{
g_mount_eject_with_operation (data->mount,
0,
mount_op,
NULL,
unmount_mount_callback,
data);
}
else
{
g_mount_unmount_with_operation (data->mount,
0,
mount_op,
NULL,
unmount_mount_callback,
data);
}
g_object_unref (mount_op);
} | 0 | [
"CWE-20"
] | nautilus | 1630f53481f445ada0a455e9979236d31a8d3bb0 | 245,214,132,343,058,070,000,000,000,000,000,000,000 | 32 | mime-actions: use file metadata for trusting desktop files
Currently we only trust desktop files that have the executable bit
set, and don't replace the displayed icon or the displayed name until
it's trusted, which prevents for running random programs by a malicious
desktop file.
However, the executable permission is preserved if the desktop file
comes from a compressed file.
To prevent this, add a metadata::trusted metadata to the file once the
user acknowledges the file as trusted. This adds metadata to the file,
which cannot be added unless it has access to the computer.
Also remove the SHEBANG "trusted" content we were putting inside the
desktop file, since that doesn't add more security since it can come
with the file itself.
https://bugzilla.gnome.org/show_bug.cgi?id=777991 |
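A hedged sketch of how such a trust marker can be recorded with GIO once the user acknowledges the launcher; the attribute name comes from the commit message, while the exact call Nautilus uses may differ:

```c
/* Sketch: persist the trust decision as file metadata, which an archive
 * cannot carry along the way it can preserve an executable bit. */
g_file_set_attribute_string (file,
                             "metadata::trusted",
                             "true",
                             G_FILE_QUERY_INFO_NONE,
                             NULL /* cancellable */,
                             NULL /* error */);
```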
string_to_compress_algo(const char *string)
{
/* TRANSLATORS: See doc/TRANSLATE about this string. */
if(match_multistr(_("uncompressed|none"),string))
return 0;
else if(ascii_strcasecmp(string,"uncompressed")==0)
return 0;
else if(ascii_strcasecmp(string,"none")==0)
return 0;
else if(ascii_strcasecmp(string,"zip")==0)
return 1;
else if(ascii_strcasecmp(string,"zlib")==0)
return 2;
#ifdef HAVE_BZIP2
else if(ascii_strcasecmp(string,"bzip2")==0)
return 3;
#endif
else if(ascii_strcasecmp(string,"z0")==0)
return 0;
else if(ascii_strcasecmp(string,"z1")==0)
return 1;
else if(ascii_strcasecmp(string,"z2")==0)
return 2;
#ifdef HAVE_BZIP2
else if(ascii_strcasecmp(string,"z3")==0)
return 3;
#endif
else
return -1;
} | 0 | [
"CWE-20"
] | gnupg | 2183683bd633818dd031b090b5530951de76f392 | 256,713,126,905,033,000,000,000,000,000,000,000,000 | 30 | Use inline functions to convert buffer data to scalars.
* common/host2net.h (buf16_to_ulong, buf16_to_uint): New.
(buf16_to_ushort, buf16_to_u16): New.
(buf32_to_size_t, buf32_to_ulong, buf32_to_uint, buf32_to_u32): New.
--
Commit 91b826a38880fd8a989318585eb502582636ddd8 was not enough to
avoid all sign extension on shift problems. Hanno Böck found a case
with an invalid read due to this problem. To fix that once and for
all almost all uses of "<< 24" and "<< 8" are changed by this patch to
use an inline function from host2net.h.
Signed-off-by: Werner Koch <[email protected]> |
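A hedged sketch of one of the inline helpers named above: the bytes are widened to an unsigned type before shifting, so "<< 24" can no longer sign-extend.

```c
/* Sketch of buf32_to_u32(): assemble a big-endian 32-bit value from
 * unsigned bytes, avoiding signed-shift sign extension. */
static inline u32
buf32_to_u32 (const void *buffer)
{
  const unsigned char *p = buffer;

  return (((u32)p[0] << 24) | ((u32)p[1] << 16)
          | ((u32)p[2] << 8) | (u32)p[3]);
}
```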
cli_file_t cli_get_container_type(cli_ctx *ctx, int index)
{
if (index < 0)
index = ctx->recursion + index + 1;
if (index >= 0 && index <= ctx->recursion)
return ctx->containers[index].type;
return CL_TYPE_ANY;
} | 1 | [] | clamav-devel | 167c0079292814ec5523d0b97a9e1b002bf8819b | 27,427,583,034,634,850,000,000,000,000,000,000,000 | 8 | fix 0.99.3 false negative of virus Pdf.Exploit.CVE_2016_1046-1. |
check_signals_and_traps ()
{
check_signals ();
run_pending_traps ();
} | 0 | [] | bash | 955543877583837c85470f7fb8a97b7aa8d45e6c | 271,044,569,725,834,100,000,000,000,000,000,000,000 | 6 | bash-4.4-rc2 release |
static int __htab_lru_percpu_map_update_elem(struct bpf_map *map, void *key,
void *value, u64 map_flags,
bool onallcpus)
{
struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
struct htab_elem *l_new = NULL, *l_old;
struct hlist_nulls_head *head;
unsigned long flags;
struct bucket *b;
u32 key_size, hash;
int ret;
if (unlikely(map_flags > BPF_EXIST))
/* unknown flags */
return -EINVAL;
WARN_ON_ONCE(!rcu_read_lock_held() && !rcu_read_lock_trace_held() &&
!rcu_read_lock_bh_held());
key_size = map->key_size;
hash = htab_map_hash(key, key_size, htab->hashrnd);
b = __select_bucket(htab, hash);
head = &b->head;
/* For LRU, we need to alloc before taking bucket's
* spinlock because LRU's elem alloc may need
* to remove older elem from htab and this removal
* operation will need a bucket lock.
*/
if (map_flags != BPF_EXIST) {
l_new = prealloc_lru_pop(htab, key, hash);
if (!l_new)
return -ENOMEM;
}
ret = htab_lock_bucket(htab, b, hash, &flags);
if (ret)
return ret;
l_old = lookup_elem_raw(head, hash, key, key_size);
ret = check_flags(htab, l_old, map_flags);
if (ret)
goto err;
if (l_old) {
bpf_lru_node_set_ref(&l_old->lru_node);
/* per-cpu hash map can update value in-place */
pcpu_copy_value(htab, htab_elem_get_ptr(l_old, key_size),
value, onallcpus);
} else {
pcpu_init_value(htab, htab_elem_get_ptr(l_new, key_size),
value, onallcpus);
hlist_nulls_add_head_rcu(&l_new->hash_node, head);
l_new = NULL;
}
ret = 0;
err:
htab_unlock_bucket(htab, b, hash, flags);
if (l_new)
bpf_lru_push_free(&htab->lru, &l_new->lru_node);
return ret;
} | 0 | [
"CWE-787"
] | bpf | c4eb1f403243fc7bbb7de644db8587c03de36da6 | 319,919,010,287,507,270,000,000,000,000,000,000,000 | 66 | bpf: Fix integer overflow involving bucket_size
In __htab_map_lookup_and_delete_batch(), hash buckets are iterated
over to count the number of elements in each bucket (bucket_size).
If bucket_size is large enough, the multiplication to calculate
kvmalloc() size could overflow, resulting in out-of-bounds write
as reported by KASAN:
[...]
[ 104.986052] BUG: KASAN: vmalloc-out-of-bounds in __htab_map_lookup_and_delete_batch+0x5ce/0xb60
[ 104.986489] Write of size 4194224 at addr ffffc9010503be70 by task crash/112
[ 104.986889]
[ 104.987193] CPU: 0 PID: 112 Comm: crash Not tainted 5.14.0-rc4 #13
[ 104.987552] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
[ 104.988104] Call Trace:
[ 104.988410] dump_stack_lvl+0x34/0x44
[ 104.988706] print_address_description.constprop.0+0x21/0x140
[ 104.988991] ? __htab_map_lookup_and_delete_batch+0x5ce/0xb60
[ 104.989327] ? __htab_map_lookup_and_delete_batch+0x5ce/0xb60
[ 104.989622] kasan_report.cold+0x7f/0x11b
[ 104.989881] ? __htab_map_lookup_and_delete_batch+0x5ce/0xb60
[ 104.990239] kasan_check_range+0x17c/0x1e0
[ 104.990467] memcpy+0x39/0x60
[ 104.990670] __htab_map_lookup_and_delete_batch+0x5ce/0xb60
[ 104.990982] ? __wake_up_common+0x4d/0x230
[ 104.991256] ? htab_of_map_free+0x130/0x130
[ 104.991541] bpf_map_do_batch+0x1fb/0x220
[...]
In hashtable, if the elements' keys have the same jhash() value, the
elements will be put into the same bucket. By putting a lot of elements
into a single bucket, the value of bucket_size can be increased to
trigger the integer overflow.
Triggering the overflow is possible for both callers with CAP_SYS_ADMIN
and callers without CAP_SYS_ADMIN.
It will be trivial for a caller with CAP_SYS_ADMIN to intentionally
reach this overflow by enabling BPF_F_ZERO_SEED. As this flag will set
the random seed passed to jhash() to 0, it will be easy for the caller
to prepare keys which will be hashed into the same value, and thus put
all the elements into the same bucket.
If the caller does not have CAP_SYS_ADMIN, BPF_F_ZERO_SEED cannot be
used. However, it will be still technically possible to trigger the
overflow, by guessing the random seed value passed to jhash() (32bit)
and repeating the attempt to trigger the overflow. In this case,
the probability to trigger the overflow will be low and will take
a very long time.
Fix the integer overflow by calling kvmalloc_array() instead of
kvmalloc() to allocate memory.
Fixes: 057996380a42 ("bpf: Add batch ops to all htab bpf map")
Signed-off-by: Tatsuhiko Yasumatsu <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected] |
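A hedged sketch of the allocation change described for __htab_map_lookup_and_delete_batch(): kvmalloc_array() performs the count-times-size multiplication with overflow checking instead of open-coding it. Variable names follow the commit message; the real call sites carry more context.

```c
/* Sketch: overflow-checked allocations sized by the bucket element count. */
keys = kvmalloc_array(key_size, bucket_size, GFP_USER | __GFP_NOWARN);
values = kvmalloc_array(value_size, bucket_size, GFP_USER | __GFP_NOWARN);
```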
do_add_counters(struct net *net, const void __user *user,
unsigned int len, int compat)
{
unsigned int i;
struct xt_counters_info tmp;
struct xt_counters *paddc;
struct xt_table *t;
const struct xt_table_info *private;
int ret = 0;
struct ipt_entry *iter;
unsigned int addend;
paddc = xt_copy_counters_from_user(user, len, &tmp, compat);
if (IS_ERR(paddc))
return PTR_ERR(paddc);
t = xt_find_table_lock(net, AF_INET, tmp.name);
if (IS_ERR(t)) {
ret = PTR_ERR(t);
goto free;
}
local_bh_disable();
private = t->private;
if (private->number != tmp.num_counters) {
ret = -EINVAL;
goto unlock_up_free;
}
i = 0;
addend = xt_write_recseq_begin();
xt_entry_foreach(iter, private->entries, private->size) {
struct xt_counters *tmp;
tmp = xt_get_this_cpu_counter(&iter->counters);
ADD_COUNTER(*tmp, paddc[i].bcnt, paddc[i].pcnt);
++i;
}
xt_write_recseq_end(addend);
unlock_up_free:
local_bh_enable();
xt_table_unlock(t);
module_put(t->me);
free:
vfree(paddc);
return ret;
} | 0 | [
"CWE-476"
] | linux | 57ebd808a97d7c5b1e1afb937c2db22beba3c1f8 | 190,973,438,484,682,200,000,000,000,000,000,000,000 | 48 | netfilter: add back stackpointer size checks
The rationale for removing the check is only correct for rulesets
generated by ip(6)tables.
In iptables, a jump can only occur to a user-defined chain, i.e.
because we size the stack based on number of user-defined chains we
cannot exceed stack size.
However, the underlying binary format has no such restriction,
and the validation step only ensures that the jump target is a
valid rule start point.
IOW, its possible to build a rule blob that has no user-defined
chains but does contain a jump.
If this happens, no jump stack gets allocated and crash occurs
because no jumpstack was allocated.
Fixes: 7814b6ec6d0d6 ("netfilter: xtables: don't save/restore jumpstack offset")
Reported-by: [email protected]
Signed-off-by: Florian Westphal <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]> |
static bool winbind_name_list_to_sid_string_list(struct pwb_context *ctx,
const char *user,
const char *name_list,
char *sid_list_buffer,
int sid_list_buffer_size)
{
bool result = false;
char *current_name = NULL;
const char *search_location;
const char *comma;
int len;
if (sid_list_buffer_size > 0) {
sid_list_buffer[0] = 0;
}
search_location = name_list;
while ((comma = strchr(search_location, ',')) != NULL) {
current_name = strndup(search_location,
comma - search_location);
if (NULL == current_name) {
goto out;
}
if (!winbind_name_to_sid_string(ctx, user,
current_name,
sid_list_buffer,
sid_list_buffer_size)) {
/*
* If one group name failed, we must not fail
* the authentication totally, continue with
* the following group names. If user belongs to
* one of the valid groups, we must allow it
* login. -- BoYang
*/
_pam_log(ctx, LOG_INFO, "cannot convert group %s to sid, "
"check if group %s is valid group.", current_name,
current_name);
_make_remark_format(ctx, PAM_TEXT_INFO, _("Cannot convert group %s "
"to sid, please contact your administrator to see "
"if group %s is valid."), current_name, current_name);
SAFE_FREE(current_name);
search_location = comma + 1;
continue;
}
SAFE_FREE(current_name);
if (!safe_append_string(sid_list_buffer, ",",
sid_list_buffer_size)) {
goto out;
}
search_location = comma + 1;
}
if (!winbind_name_to_sid_string(ctx, user, search_location,
sid_list_buffer,
sid_list_buffer_size)) {
_pam_log(ctx, LOG_INFO, "cannot convert group %s to sid, "
"check if group %s is valid group.", search_location,
search_location);
_make_remark_format(ctx, PAM_TEXT_INFO, _("Cannot convert group %s "
"to sid, please contact your administrator to see "
"if group %s is valid."), search_location, search_location);
/* If no valid groups were converted we should fail outright */
if (name_list != NULL && strlen(sid_list_buffer) == 0) {
result = false;
goto out;
}
/*
* The lookup of the last name failed..
* It results in require_member_of_sid ends with ','
* It is malformated parameter here, overwrite the last ','.
*/
len = strlen(sid_list_buffer);
if ((len != 0) && (sid_list_buffer[len - 1] == ',')) {
sid_list_buffer[len - 1] = '\0';
}
}
result = true;
out:
SAFE_FREE(current_name);
return result;
} | 0 | [
"CWE-20"
] | samba | f62683956a3b182f6a61cc7a2b4ada2e74cde243 | 59,746,306,607,684,390,000,000,000,000,000,000,000 | 89 | fail authentication for single group name which cannot be converted to sid
furthermore if more than one name is supplied and no sid is converted
then also fail.
Bug: https://bugzilla.samba.org/show_bug.cgi?id=8598
Signed-off-by: Noel Power <[email protected]>
Reviewed-by: Andreas Schneider <[email protected]>
Reviewed-by: David Disseldorp <[email protected]>
Autobuild-User(master): David Disseldorp <[email protected]>
Autobuild-Date(master): Fri Nov 29 15:45:11 CET 2013 on sn-devel-104 |
static void sig_usr1_handler(int sig)
{
sig_usr1_handler_called = 1;
} | 0 | [
"CWE-287",
"CWE-284"
] | booth | 35bf0b7b048d715f671eb68974fb6b4af6528c67 | 265,755,758,471,631,900,000,000,000,000,000,000,000 | 4 | Revert "Refactor: main: substitute is_auth_req macro"
This reverts commit da79b8ba28ad4837a0fee13e5f8fb6f89fe0e24c.
authfile != authkey
Signed-off-by: Jan Friesse <[email protected]> |
void test_nghttp2_session_pack_data_with_padding(void) {
nghttp2_session *session;
my_user_data ud;
nghttp2_session_callbacks callbacks;
nghttp2_data_provider data_prd;
nghttp2_frame *frame;
size_t datalen = 55;
nghttp2_mem *mem;
mem = nghttp2_mem_default();
memset(&callbacks, 0, sizeof(callbacks));
callbacks.send_callback = block_count_send_callback;
callbacks.on_frame_send_callback = on_frame_send_callback;
callbacks.select_padding_callback = select_padding_callback;
data_prd.read_callback = fixed_length_data_source_read_callback;
nghttp2_session_client_new(&session, &callbacks, &ud);
ud.padlen = 63;
nghttp2_submit_request(session, NULL, NULL, 0, &data_prd, NULL);
ud.block_count = 1;
ud.data_source_length = datalen;
/* Sends HEADERS */
CU_ASSERT(0 == nghttp2_session_send(session));
CU_ASSERT(NGHTTP2_HEADERS == ud.sent_frame_type);
frame = &session->aob.item->frame;
CU_ASSERT(ud.padlen == frame->data.padlen);
CU_ASSERT(frame->hd.flags & NGHTTP2_FLAG_PADDED);
/* Check reception of this DATA frame */
check_session_recv_data_with_padding(&session->aob.framebufs, datalen, mem);
nghttp2_session_del(session);
} | 0 | [] | nghttp2 | 0a6ce87c22c69438ecbffe52a2859c3a32f1620f | 247,825,625,068,621,620,000,000,000,000,000,000,000 | 39 | Add nghttp2_option_set_max_outbound_ack |
static Image *ReadPNGImage(const ImageInfo *image_info,ExceptionInfo *exception)
{
Image
*image;
MagickBooleanType
logging,
status;
MngInfo
*mng_info;
char
magic_number[MaxTextExtent];
ssize_t
count;
/*
Open image file.
*/
assert(image_info != (const ImageInfo *) NULL);
assert(image_info->signature == MagickCoreSignature);
if (image_info->debug != MagickFalse)
(void) LogMagickEvent(TraceEvent,GetMagickModule(),"%s",
image_info->filename);
assert(exception != (ExceptionInfo *) NULL);
assert(exception->signature == MagickCoreSignature);
logging=LogMagickEvent(CoderEvent,GetMagickModule(),"Enter ReadPNGImage()");
image=AcquireImage(image_info);
mng_info=(MngInfo *) NULL;
status=OpenBlob(image_info,image,ReadBinaryBlobMode,exception);
if (status == MagickFalse)
return(DestroyImageList(image));
/*
Verify PNG signature.
*/
count=ReadBlob(image,8,(unsigned char *) magic_number);
if ((count < 8) || (memcmp(magic_number,"\211PNG\r\n\032\n",8) != 0))
ThrowReaderException(CorruptImageError,"ImproperImageHeader");
/*
Verify that file size large enough to contain a PNG datastream.
*/
if (GetBlobSize(image) < 61)
ThrowReaderException(CorruptImageError,"InsufficientImageDataInFile");
/*
Allocate a MngInfo structure.
*/
mng_info=(MngInfo *) AcquireMagickMemory(sizeof(MngInfo));
if (mng_info == (MngInfo *) NULL)
ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed");
/*
Initialize members of the MngInfo structure.
*/
(void) memset(mng_info,0,sizeof(MngInfo));
mng_info->image=image;
image=ReadOnePNGImage(mng_info,image_info,exception);
mng_info=MngInfoFreeStruct(mng_info);
if (image == (Image *) NULL)
{
if (logging != MagickFalse)
(void) LogMagickEvent(CoderEvent,GetMagickModule(),
"exit ReadPNGImage() with error");
return((Image *) NULL);
}
(void) CloseBlob(image);
if ((image->columns == 0) || (image->rows == 0))
{
if (logging != MagickFalse)
(void) LogMagickEvent(CoderEvent,GetMagickModule(),
"exit ReadPNGImage() with error.");
ThrowReaderException(CorruptImageError,"CorruptImage");
}
if ((IssRGBColorspace(image->colorspace) != MagickFalse) &&
(image->gamma > .75) &&
!(image->chromaticity.red_primary.x>0.6399f &&
image->chromaticity.red_primary.x<0.6401f &&
image->chromaticity.red_primary.y>0.3299f &&
image->chromaticity.red_primary.y<0.3301f &&
image->chromaticity.green_primary.x>0.2999f &&
image->chromaticity.green_primary.x<0.3001f &&
image->chromaticity.green_primary.y>0.5999f &&
image->chromaticity.green_primary.y<0.6001f &&
image->chromaticity.blue_primary.x>0.1499f &&
image->chromaticity.blue_primary.x<0.1501f &&
image->chromaticity.blue_primary.y>0.0599f &&
image->chromaticity.blue_primary.y<0.0601f &&
image->chromaticity.white_point.x>0.3126f &&
image->chromaticity.white_point.x<0.3128f &&
image->chromaticity.white_point.y>0.3289f &&
image->chromaticity.white_point.y<0.3291f))
{
(void) LogMagickEvent(CoderEvent,GetMagickModule(),
"SetImageColorspace to RGBColorspace");
SetImageColorspace(image,RGBColorspace);
}
if (logging != MagickFalse)
(void) LogMagickEvent(CoderEvent,GetMagickModule(),
" page.w: %.20g, page.h: %.20g,page.x: %.20g, page.y: %.20g.",
(double) image->page.width,(double) image->page.height,
(double) image->page.x,(double) image->page.y);
if (logging != MagickFalse)
(void) LogMagickEvent(CoderEvent,GetMagickModule(),"exit ReadPNGImage()");
return(image);
} | 0 | [
"CWE-125"
] | ImageMagick6 | 34adc98afd5c7e7fb774d2ebdaea39e831c24dce | 76,211,082,897,281,800,000,000,000,000,000,000,000 | 124 | https://github.com/ImageMagick/ImageMagick/issues/1561 |
struct kvm_vcpu __percpu **kvm_get_running_vcpus(void)
{
return &kvm_arm_running_vcpu;
} | 0 | [
"CWE-399",
"CWE-284"
] | linux | e8180dcaa8470ceca21109f143876fdcd9fe050a | 2,350,542,899,566,891,200,000,000,000,000,000,000 | 4 | ARM: KVM: prevent NULL pointer dereferences with KVM VCPU ioctl
Some ARM KVM VCPU ioctls require the vCPU to be properly initialized
with the KVM_ARM_VCPU_INIT ioctl before being used with further
requests. KVM_RUN checks whether this initialization has been
done, but other ioctls do not.
Namely KVM_GET_REG_LIST will dereference an array with index -1
without initialization and thus leads to a kernel oops.
Fix this by adding checks before executing the ioctl handlers.
[ Removed superflous comment from static function - Christoffer ]
Changes from v1:
* moved check into a static function with a meaningful name
Signed-off-by: Andre Przywara <[email protected]>
Signed-off-by: Christoffer Dall <[email protected]> |
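A hedged sketch of the guard described in the message: a small helper with a meaningful name, consulted before the affected ioctls (such as KVM_GET_REG_LIST) are handled, with uninitialized VCPUs rejected (e.g. with -ENOEXEC). The "target >= 0" test is my assumption of how initialization is tracked.

```c
/* Sketch: KVM_ARM_VCPU_INIT sets the target; before that, VCPU state such as
 * the register list must not be touched. */
static int kvm_vcpu_initialized(struct kvm_vcpu *vcpu)
{
	return vcpu->arch.target >= 0;
}
```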
restore_original_file_attributes (GHashTable *created_files,
GCancellable *cancellable)
{
GHashTableIter iter;
gpointer key, value;
g_hash_table_iter_init (&iter, created_files);
while (g_hash_table_iter_next (&iter, &key, &value)) {
GFile *file = key;
GFileInfo *info = value;
_g_file_set_attributes_from_info (file, info, cancellable, NULL);
}
} | 0 | [
"CWE-22"
] | file-roller | 21dfcdbfe258984db89fb65243a1a888924e45a0 | 275,519,960,365,304,970,000,000,000,000,000,000,000 | 14 | libarchive: do not follow external links when extracting files
Do not extract a file if its parent is a symbolic link to a
directory external to the destination. |
gcab_folder_new (gint comptype)
{
return g_object_new (GCAB_TYPE_FOLDER,
"comptype", comptype,
NULL);
} | 0 | [
"CWE-787"
] | gcab | c512f6ff0c82a1139b36db2b28f93edc01c74b4b | 178,859,928,285,457,700,000,000,000,000,000,000,000 | 6 | trivial: Allocate cdata_t on the heap
Using a 91kB stack allocation for one object isn't awesome, and it also allows
us to use g_autoptr() to simplify gcab_folder_extract() |
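A hedged sketch of the allocation change described above; cdata_t is the type named in the commit message and g_autofree stands in for whatever auto-cleanup form the project actually adopted:

```c
/* Sketch: one heap allocation, freed automatically at end of scope, instead
 * of a ~91kB cdata_t object on the stack. */
g_autofree cdata_t *cdata = g_new0 (cdata_t, 1);
```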
int rsCStrStartsWithSzStr(cstr_t *pCS1, uchar *psz, size_t iLenSz)
{
register size_t i;
rsCHECKVALIDOBJECT(pCS1, OIDrsCStr);
assert(psz != NULL);
assert(iLenSz == strlen((char*)psz)); /* just make sure during debugging! */
if(pCS1->iStrLen >= iLenSz) {
/* we are using iLenSz below, because we need to check
* iLenSz characters at maximum (start with!)
*/
if(iLenSz == 0)
return 0; /* yes, it starts with a zero-sized string ;) */
else { /* we now have something to compare, so let's do it... */
for(i = 0 ; i < iLenSz ; ++i) {
if(pCS1->pBuf[i] != psz[i])
return pCS1->pBuf[i] - psz[i];
}
/* if we arrive here, the string actually starts with psz */
return 0;
}
}
else
return -1; /* pCS1 is less then psz */
} | 0 | [
"CWE-189"
] | rsyslog | 6bad782f154b7f838c7371bf99c13f6dc4ec4101 | 67,714,659,217,875,180,000,000,000,000,000,000,000 | 25 | bugfix: abort if imfile reads file line of more than 64KiB
Thanks to Peter Eisentraut for reporting and analysing this problem.
bug tracker: http://bugzilla.adiscon.com/show_bug.cgi?id=221 |
void set_mm_exe_file(struct mm_struct *mm, struct file *new_exe_file)
{
if (new_exe_file)
get_file(new_exe_file);
if (mm->exe_file)
fput(mm->exe_file);
mm->exe_file = new_exe_file;
mm->num_exe_file_vmas = 0;
} | 0 | [
"CWE-20",
"CWE-362",
"CWE-416"
] | linux | 86acdca1b63e6890540fa19495cfc708beff3d8b | 221,023,335,865,394,300,000,000,000,000,000,000,000 | 9 | fix autofs/afs/etc. magic mountpoint breakage
We end up trying to kfree() nd.last.name on open("/mnt/tmp", O_CREAT)
if /mnt/tmp is an autofs direct mount. The reason is that nd.last_type
is bogus here; we want LAST_BIND for everything of that kind and we
get LAST_NORM left over from finding parent directory.
So make sure that it *is* set properly; set to LAST_BIND before
doing ->follow_link() - for normal symlinks it will be changed
by __vfs_follow_link() and everything else needs it set that way.
Signed-off-by: Al Viro <[email protected]> |
void exit_thread(struct task_struct *tsk)
{
struct thread_struct *t = &tsk->thread;
struct fpu *fpu = &t->fpu;
if (test_thread_flag(TIF_IO_BITMAP))
io_bitmap_exit(tsk);
free_vm86(t);
fpu__drop(fpu);
} | 0 | [] | linux | dbbe2ad02e9df26e372f38cc3e70dab9222c832e | 311,282,340,668,658,000,000,000,000,000,000,000,000 | 12 | x86/speculation: Prevent rogue cross-process SSBD shutdown
On context switch the change of TIF_SSBD and TIF_SPEC_IB are evaluated
to adjust the mitigations accordingly. This is optimized to avoid the
expensive MSR write if not needed.
This optimization is buggy and allows an attacker to shutdown the SSBD
protection of a victim process.
The update logic reads the cached base value for the speculation control
MSR which has neither the SSBD nor the STIBP bit set. It then OR's the
SSBD bit only when TIF_SSBD is different and requests the MSR update.
That means if TIF_SSBD of the previous and next task are the same, then
the base value is not updated, even if TIF_SSBD is set. The MSR write is
not requested.
Subsequently if the TIF_STIBP bit differs then the STIBP bit is updated
in the base value and the MSR is written with a wrong SSBD value.
This was introduced when the per task/process conditional STIPB
switching was added on top of the existing SSBD switching.
It is exploitable if the attacker creates a process which enforces SSBD
and has the contrary value of STIBP than the victim process (i.e. if the
victim process enforces STIBP, the attacker process must not enforce it;
if the victim process does not enforce STIBP, the attacker process must
enforce it) and schedule it on the same core as the victim process. If
the victim runs after the attacker the victim becomes vulnerable to
Spectre V4.
To fix this, update the MSR value independent of the TIF_SSBD difference
and dependent on the SSBD mitigation method available. This ensures that
a subsequent STIPB initiated MSR write has the correct state of SSBD.
[ tglx: Handle X86_FEATURE_VIRT_SSBD & X86_FEATURE_VIRT_SSBD correctly
and massaged changelog ]
Fixes: 5bfbe3ad5840 ("x86/speculation: Prepare for per task indirect branch speculation control")
Signed-off-by: Anthony Steinhauser <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: [email protected] |
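A hedged sketch of the idea in the changelog: recompute the SSBD and STIBP contributions from the incoming task's flags on every MSR update, instead of carrying them over only when a TIF bit differs. Constant and helper names come from the generic x86 speculation code; the actual fix is structured differently.

```c
/* Sketch: derive the SPEC_CTRL value for the next task from scratch. */
u64 msr = x86_spec_ctrl_base;

if (static_cpu_has(X86_FEATURE_SSBD) && test_tsk_thread_flag(next, TIF_SSBD))
	msr |= SPEC_CTRL_SSBD;
if (test_tsk_thread_flag(next, TIF_SPEC_IB))
	msr |= SPEC_CTRL_STIBP;

wrmsrl(MSR_IA32_SPEC_CTRL, msr);
```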
static struct basescript *SFDParseBaseScript(FILE *sfd,struct Base *base) {
struct basescript *bs;
int i, ch;
struct baselangextent *last, *cur;
if ( base==NULL )
return(NULL);
bs = chunkalloc(sizeof(struct basescript));
bs->script = gettag(sfd);
getint(sfd,&bs->def_baseline);
if ( base->baseline_cnt!=0 ) {
bs->baseline_pos = calloc(base->baseline_cnt,sizeof(int16));
for ( i=0; i<base->baseline_cnt; ++i )
getsint(sfd, &bs->baseline_pos[i]);
}
while ( (ch=nlgetc(sfd))==' ' );
last = NULL;
while ( ch=='{' ) {
ungetc(ch,sfd);
cur = ParseBaseLang(sfd);
if ( last==NULL )
bs->langs = cur;
else
last->next = cur;
last = cur;
while ( (ch=nlgetc(sfd))==' ' );
}
return( bs );
} | 0 | [
"CWE-416"
] | fontforge | 048a91e2682c1a8936ae34dbc7bd70291ec05410 | 262,534,115,816,421,670,000,000,000,000,000,000,000 | 31 | Fix for #4084 Use-after-free (heap) in the SFD_GetFontMetaData() function
Fix for #4086 NULL pointer dereference in the SFDGetSpiros() function
Fix for #4088 NULL pointer dereference in the SFD_AssignLookups() function
Add empty sf->fontname string if it isn't set, fixing #4089 #4090 and many
other potential issues (many downstream calls to strlen() on the value). |
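A hedged sketch of the empty-fontname guard mentioned above; copy() is assumed here to be FontForge's strdup-style helper:

```c
/* Sketch: guarantee sf->fontname is a valid (possibly empty) string so the
 * many downstream strlen()/strcmp() calls never dereference NULL. */
if ( sf->fontname==NULL )
    sf->fontname = copy("");
```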
aiff_get_chunk_size (SF_PRIVATE *psf, const SF_CHUNK_ITERATOR * iterator, SF_CHUNK_INFO * chunk_info)
{ int indx ;
if ((indx = psf_find_read_chunk_iterator (&psf->rchunks, iterator)) < 0)
return SFE_UNKNOWN_CHUNK ;
chunk_info->datalen = psf->rchunks.chunks [indx].len ;
return SFE_NO_ERROR ;
} /* aiff_get_chunk_size */ | 0 | [
"CWE-119",
"CWE-787"
] | libsndfile | f833c53cb596e9e1792949f762e0b33661822748 | 18,936,243,274,477,895,000,000,000,000,000,000,000 | 10 | src/aiff.c: Fix a buffer read overflow
Secunia Advisory SA76717.
Found by: Laurent Delosieres, Secunia Research at Flexera Software |
static int __io_async_wake(struct io_kiocb *req, struct io_poll_iocb *poll,
__poll_t mask, task_work_func_t func)
{
int ret;
/* for instances that support it check for an event match first: */
if (mask && !(mask & poll->events))
return 0;
trace_io_uring_task_add(req->ctx, req->opcode, req->user_data, mask);
list_del_init(&poll->wait.entry);
req->result = mask;
req->task_work.func = func;
percpu_ref_get(&req->ctx->refs);
/*
* If this fails, then the task is exiting. When a task exits, the
* work gets canceled, so just cancel this request as well instead
* of executing it. We can't safely execute it anyway, as we may not
* have the needed state needed for it anyway.
*/
ret = io_req_task_work_add(req);
if (unlikely(ret)) {
WRITE_ONCE(poll->canceled, true);
io_req_task_work_add_fallback(req, func);
}
return 1;
} | 0 | [
"CWE-667"
] | linux | 3ebba796fa251d042be42b929a2d916ee5c34a49 | 126,251,544,784,780,150,000,000,000,000,000,000,000 | 30 | io_uring: ensure that SQPOLL thread is started for exit
If we create it in a disabled state because IORING_SETUP_R_DISABLED is
set on ring creation, we need to ensure that we've kicked the thread if
we're exiting before it's been explicitly disabled. Otherwise we can run
into a deadlock where exit is waiting go park the SQPOLL thread, but the
SQPOLL thread itself is waiting to get a signal to start.
That results in the below trace of both tasks hung, waiting on each other:
INFO: task syz-executor458:8401 blocked for more than 143 seconds.
Not tainted 5.11.0-next-20210226-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor458 state:D stack:27536 pid: 8401 ppid: 8400 flags:0x00004004
Call Trace:
context_switch kernel/sched/core.c:4324 [inline]
__schedule+0x90c/0x21a0 kernel/sched/core.c:5075
schedule+0xcf/0x270 kernel/sched/core.c:5154
schedule_timeout+0x1db/0x250 kernel/time/timer.c:1868
do_wait_for_common kernel/sched/completion.c:85 [inline]
__wait_for_common kernel/sched/completion.c:106 [inline]
wait_for_common kernel/sched/completion.c:117 [inline]
wait_for_completion+0x168/0x270 kernel/sched/completion.c:138
io_sq_thread_park fs/io_uring.c:7115 [inline]
io_sq_thread_park+0xd5/0x130 fs/io_uring.c:7103
io_uring_cancel_task_requests+0x24c/0xd90 fs/io_uring.c:8745
__io_uring_files_cancel+0x110/0x230 fs/io_uring.c:8840
io_uring_files_cancel include/linux/io_uring.h:47 [inline]
do_exit+0x299/0x2a60 kernel/exit.c:780
do_group_exit+0x125/0x310 kernel/exit.c:922
__do_sys_exit_group kernel/exit.c:933 [inline]
__se_sys_exit_group kernel/exit.c:931 [inline]
__x64_sys_exit_group+0x3a/0x50 kernel/exit.c:931
do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x43e899
RSP: 002b:00007ffe89376d48 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
RAX: ffffffffffffffda RBX: 00000000004af2f0 RCX: 000000000043e899
RDX: 000000000000003c RSI: 00000000000000e7 RDI: 0000000000000000
RBP: 0000000000000000 R08: ffffffffffffffc0 R09: 0000000010000000
R10: 0000000000008011 R11: 0000000000000246 R12: 00000000004af2f0
R13: 0000000000000001 R14: 0000000000000000 R15: 0000000000000001
INFO: task iou-sqp-8401:8402 can't die for more than 143 seconds.
task:iou-sqp-8401 state:D stack:30272 pid: 8402 ppid: 8400 flags:0x00004004
Call Trace:
context_switch kernel/sched/core.c:4324 [inline]
__schedule+0x90c/0x21a0 kernel/sched/core.c:5075
schedule+0xcf/0x270 kernel/sched/core.c:5154
schedule_timeout+0x1db/0x250 kernel/time/timer.c:1868
do_wait_for_common kernel/sched/completion.c:85 [inline]
__wait_for_common kernel/sched/completion.c:106 [inline]
wait_for_common kernel/sched/completion.c:117 [inline]
wait_for_completion+0x168/0x270 kernel/sched/completion.c:138
io_sq_thread+0x27d/0x1ae0 fs/io_uring.c:6717
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
INFO: task iou-sqp-8401:8402 blocked for more than 143 seconds.
Reported-by: [email protected]
Signed-off-by: Jens Axboe <[email protected]> |
static void stbi__YCbCr_to_RGB_simd(stbi_uc *out, stbi_uc const *y, stbi_uc const *pcb, stbi_uc const *pcr, int count, int step)
{
int i = 0;
#ifdef STBI_SSE2
// step == 3 is pretty ugly on the final interleave, and i'm not convinced
// it's useful in practice (you wouldn't use it for textures, for example).
// so just accelerate step == 4 case.
if (step == 4) {
// this is a fairly straightforward implementation and not super-optimized.
__m128i signflip = _mm_set1_epi8(-0x80);
__m128i cr_const0 = _mm_set1_epi16( (short) ( 1.40200f*4096.0f+0.5f));
__m128i cr_const1 = _mm_set1_epi16( - (short) ( 0.71414f*4096.0f+0.5f));
__m128i cb_const0 = _mm_set1_epi16( - (short) ( 0.34414f*4096.0f+0.5f));
__m128i cb_const1 = _mm_set1_epi16( (short) ( 1.77200f*4096.0f+0.5f));
__m128i y_bias = _mm_set1_epi8((char) (unsigned char) 128);
__m128i xw = _mm_set1_epi16(255); // alpha channel
for (; i+7 < count; i += 8) {
// load
__m128i y_bytes = _mm_loadl_epi64((__m128i *) (y+i));
__m128i cr_bytes = _mm_loadl_epi64((__m128i *) (pcr+i));
__m128i cb_bytes = _mm_loadl_epi64((__m128i *) (pcb+i));
__m128i cr_biased = _mm_xor_si128(cr_bytes, signflip); // -128
__m128i cb_biased = _mm_xor_si128(cb_bytes, signflip); // -128
// unpack to short (and left-shift cr, cb by 8)
__m128i yw = _mm_unpacklo_epi8(y_bias, y_bytes);
__m128i crw = _mm_unpacklo_epi8(_mm_setzero_si128(), cr_biased);
__m128i cbw = _mm_unpacklo_epi8(_mm_setzero_si128(), cb_biased);
// color transform
__m128i yws = _mm_srli_epi16(yw, 4);
__m128i cr0 = _mm_mulhi_epi16(cr_const0, crw);
__m128i cb0 = _mm_mulhi_epi16(cb_const0, cbw);
__m128i cb1 = _mm_mulhi_epi16(cbw, cb_const1);
__m128i cr1 = _mm_mulhi_epi16(crw, cr_const1);
__m128i rws = _mm_add_epi16(cr0, yws);
__m128i gwt = _mm_add_epi16(cb0, yws);
__m128i bws = _mm_add_epi16(yws, cb1);
__m128i gws = _mm_add_epi16(gwt, cr1);
// descale
__m128i rw = _mm_srai_epi16(rws, 4);
__m128i bw = _mm_srai_epi16(bws, 4);
__m128i gw = _mm_srai_epi16(gws, 4);
// back to byte, set up for transpose
__m128i brb = _mm_packus_epi16(rw, bw);
__m128i gxb = _mm_packus_epi16(gw, xw);
// transpose to interleave channels
__m128i t0 = _mm_unpacklo_epi8(brb, gxb);
__m128i t1 = _mm_unpackhi_epi8(brb, gxb);
__m128i o0 = _mm_unpacklo_epi16(t0, t1);
__m128i o1 = _mm_unpackhi_epi16(t0, t1);
// store
_mm_storeu_si128((__m128i *) (out + 0), o0);
_mm_storeu_si128((__m128i *) (out + 16), o1);
out += 32;
}
}
#endif
#ifdef STBI_NEON
// in this version, step=3 support would be easy to add. but is there demand?
if (step == 4) {
// this is a fairly straightforward implementation and not super-optimized.
uint8x8_t signflip = vdup_n_u8(0x80);
int16x8_t cr_const0 = vdupq_n_s16( (short) ( 1.40200f*4096.0f+0.5f));
int16x8_t cr_const1 = vdupq_n_s16( - (short) ( 0.71414f*4096.0f+0.5f));
int16x8_t cb_const0 = vdupq_n_s16( - (short) ( 0.34414f*4096.0f+0.5f));
int16x8_t cb_const1 = vdupq_n_s16( (short) ( 1.77200f*4096.0f+0.5f));
for (; i+7 < count; i += 8) {
// load
uint8x8_t y_bytes = vld1_u8(y + i);
uint8x8_t cr_bytes = vld1_u8(pcr + i);
uint8x8_t cb_bytes = vld1_u8(pcb + i);
int8x8_t cr_biased = vreinterpret_s8_u8(vsub_u8(cr_bytes, signflip));
int8x8_t cb_biased = vreinterpret_s8_u8(vsub_u8(cb_bytes, signflip));
// expand to s16
int16x8_t yws = vreinterpretq_s16_u16(vshll_n_u8(y_bytes, 4));
int16x8_t crw = vshll_n_s8(cr_biased, 7);
int16x8_t cbw = vshll_n_s8(cb_biased, 7);
// color transform
int16x8_t cr0 = vqdmulhq_s16(crw, cr_const0);
int16x8_t cb0 = vqdmulhq_s16(cbw, cb_const0);
int16x8_t cr1 = vqdmulhq_s16(crw, cr_const1);
int16x8_t cb1 = vqdmulhq_s16(cbw, cb_const1);
int16x8_t rws = vaddq_s16(yws, cr0);
int16x8_t gws = vaddq_s16(vaddq_s16(yws, cb0), cr1);
int16x8_t bws = vaddq_s16(yws, cb1);
// undo scaling, round, convert to byte
uint8x8x4_t o;
o.val[0] = vqrshrun_n_s16(rws, 4);
o.val[1] = vqrshrun_n_s16(gws, 4);
o.val[2] = vqrshrun_n_s16(bws, 4);
o.val[3] = vdup_n_u8(255);
// store, interleaving r/g/b/a
vst4_u8(out, o);
out += 8*4;
}
}
#endif
for (; i < count; ++i) {
int y_fixed = (y[i] << 20) + (1<<19); // rounding
int r,g,b;
int cr = pcr[i] - 128;
int cb = pcb[i] - 128;
r = y_fixed + cr* stbi__float2fixed(1.40200f);
g = y_fixed + cr*-stbi__float2fixed(0.71414f) + ((cb*-stbi__float2fixed(0.34414f)) & 0xffff0000);
b = y_fixed + cb* stbi__float2fixed(1.77200f);
r >>= 20;
g >>= 20;
b >>= 20;
if ((unsigned) r > 255) { if (r < 0) r = 0; else r = 255; }
if ((unsigned) g > 255) { if (g < 0) g = 0; else g = 255; }
if ((unsigned) b > 255) { if (b < 0) b = 0; else b = 255; }
out[0] = (stbi_uc)r;
out[1] = (stbi_uc)g;
out[2] = (stbi_uc)b;
out[3] = 255;
out += step;
}
} | 0 | [
"CWE-787"
] | stb | 5ba0baaa269b3fd681828e0e3b3ac0f1472eaf40 | 107,558,674,875,923,870,000,000,000,000,000,000,000 | 132 | stb_image: Reject fractional JPEG component subsampling ratios
The component resamplers are not written to support this and I've
never seen it happen in a real (non-crafted) JPEG file so I'm
fine rejecting this as outright corrupt.
Fixes issue #1178. |
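A hedged sketch of the rejection described above, placed where the JPEG frame header records per-component sampling factors; the struct and field names are illustrative:

```c
// Sketch: each component's sampling factors must divide the frame maxima
// evenly; a fractional ratio is rejected as a corrupt JPEG.
if (h_max % z->img_comp[i].h != 0 || v_max % z->img_comp[i].v != 0)
   return stbi__err("bad H or V", "Corrupt JPEG");
```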
static void dccp_enqueue_skb(struct sock *sk, struct sk_buff *skb)
{
__skb_pull(skb, dccp_hdr(skb)->dccph_doff * 4);
__skb_queue_tail(&sk->sk_receive_queue, skb);
skb_set_owner_r(skb, sk);
sk->sk_data_ready(sk);
} | 0 | [
"CWE-200",
"CWE-415"
] | linux | 5edabca9d4cff7f1f2b68f0bac55ef99d9798ba4 | 320,994,527,448,250,300,000,000,000,000,000,000,000 | 7 | dccp: fix freeing skb too early for IPV6_RECVPKTINFO
In the current DCCP implementation an skb for a DCCP_PKT_REQUEST packet
is forcibly freed via __kfree_skb in dccp_rcv_state_process if
dccp_v6_conn_request successfully returns.
However, if IPV6_RECVPKTINFO is set on a socket, the address of the skb
is saved to ireq->pktopts and the ref count for skb is incremented in
dccp_v6_conn_request, so skb is still in use. Nevertheless, it gets freed
in dccp_rcv_state_process.
Fix by calling consume_skb instead of doing goto discard and therefore
calling __kfree_skb.
Similar fixes for TCP:
fb7e2399ec17f1004c0e0ccfd17439f8759ede01 [TCP]: skb is unexpectedly freed.
0aea76d35c9651d55bbaf746e7914e5f9ae5a25d tcp: SYN packets are now
simply consumed
Signed-off-by: Andrey Konovalov <[email protected]>
Acked-by: Eric Dumazet <[email protected]>
Signed-off-by: David S. Miller <[email protected]> |
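A hedged sketch of the change inside dccp_rcv_state_process() that the message describes, illustrated with the v6 handler it names: when the request is handled successfully, drop only this function's reference with consume_skb() instead of force-freeing an skb that ireq->pktopts may still hold.

```c
/* Sketch; the surrounding state-machine code is omitted. */
if (dccp_v6_conn_request(sk, skb) < 0)
	return 1;
consume_skb(skb);	/* was: goto discard -> __kfree_skb(skb) */
return 0;
```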
static int write_compression_header(blosc2_context* context,
bool extended_header) {
int32_t compformat;
int dont_split;
int dict_training = context->use_dict && (context->dict_cdict == NULL);
// Set the whole header to zeros so that the reserved values are zeroed
if (extended_header) {
memset(context->dest, 0, BLOSC_EXTENDED_HEADER_LENGTH);
}
else {
memset(context->dest, 0, BLOSC_MIN_HEADER_LENGTH);
}
/* Write version header for this block */
context->dest[0] = BLOSC_VERSION_FORMAT;
/* Write compressor format */
compformat = -1;
switch (context->compcode) {
case BLOSC_BLOSCLZ:
compformat = BLOSC_BLOSCLZ_FORMAT;
context->dest[1] = BLOSC_BLOSCLZ_VERSION_FORMAT;
break;
#if defined(HAVE_LZ4)
case BLOSC_LZ4:
compformat = BLOSC_LZ4_FORMAT;
context->dest[1] = BLOSC_LZ4_VERSION_FORMAT;
break;
case BLOSC_LZ4HC:
compformat = BLOSC_LZ4HC_FORMAT;
context->dest[1] = BLOSC_LZ4HC_VERSION_FORMAT;
break;
#endif /* HAVE_LZ4 */
#if defined(HAVE_LIZARD)
case BLOSC_LIZARD:
compformat = BLOSC_LIZARD_FORMAT;
context->dest[1] = BLOSC_LIZARD_VERSION_FORMAT;
break;
#endif /* HAVE_LIZARD */
#if defined(HAVE_SNAPPY)
case BLOSC_SNAPPY:
compformat = BLOSC_SNAPPY_FORMAT;
context->dest[1] = BLOSC_SNAPPY_VERSION_FORMAT;
break;
#endif /* HAVE_SNAPPY */
#if defined(HAVE_ZLIB)
case BLOSC_ZLIB:
compformat = BLOSC_ZLIB_FORMAT;
context->dest[1] = BLOSC_ZLIB_VERSION_FORMAT;
break;
#endif /* HAVE_ZLIB */
#if defined(HAVE_ZSTD)
case BLOSC_ZSTD:
compformat = BLOSC_ZSTD_FORMAT;
context->dest[1] = BLOSC_ZSTD_VERSION_FORMAT;
break;
#endif /* HAVE_ZSTD */
default: {
const char* compname;
compname = clibcode_to_clibname(compformat);
fprintf(stderr, "Blosc has not been compiled with '%s' ", compname);
fprintf(stderr, "compression support. Please use one having it.");
return -5; /* signals no compression support */
break;
}
}
if (context->clevel == 0) {
/* Compression level 0 means buffer to be memcpy'ed */
context->header_flags |= (uint8_t)BLOSC_MEMCPYED;
}
if (context->sourcesize < BLOSC_MIN_BUFFERSIZE) {
/* Buffer is too small. Try memcpy'ing. */
context->header_flags |= (uint8_t)BLOSC_MEMCPYED;
}
bool memcpyed = context->header_flags & (uint8_t)BLOSC_MEMCPYED;
context->dest[2] = 0; /* zeroes flags */
context->dest[3] = (uint8_t)context->typesize;
_sw32(context->dest + 4, (int32_t)context->sourcesize);
_sw32(context->dest + 8, (int32_t)context->blocksize);
if (extended_header) {
/* Mark that we are handling an extended header */
context->header_flags |= (BLOSC_DOSHUFFLE | BLOSC_DOBITSHUFFLE);
/* Store filter pipeline info at the end of the header */
uint8_t *filters = context->dest + BLOSC_MIN_HEADER_LENGTH;
uint8_t *filters_meta = filters + 8;
for (int i = 0; i < BLOSC2_MAX_FILTERS; i++) {
filters[i] = context->filters[i];
filters_meta[i] = context->filters_meta[i];
}
uint8_t* blosc2_flags = context->dest + 0x1F;
*blosc2_flags = 0; // zeroes flags
*blosc2_flags |= is_little_endian() ? 0 : BLOSC2_BIGENDIAN; // endianness
if (dict_training || memcpyed) {
context->bstarts = NULL;
context->output_bytes = BLOSC_EXTENDED_HEADER_LENGTH;
} else {
context->bstarts = (int32_t*)(context->dest + BLOSC_EXTENDED_HEADER_LENGTH);
context->output_bytes = BLOSC_EXTENDED_HEADER_LENGTH +
sizeof(int32_t) * context->nblocks;
}
if (context->use_dict) {
*blosc2_flags |= BLOSC2_USEDICT;
}
} else {
// Regular header
if (memcpyed) {
context->bstarts = NULL;
context->output_bytes = BLOSC_MIN_HEADER_LENGTH;
} else {
context->bstarts = (int32_t *) (context->dest + BLOSC_MIN_HEADER_LENGTH);
context->output_bytes = BLOSC_MIN_HEADER_LENGTH +
sizeof(int32_t) * context->nblocks;
}
}
// when memcpyed bit is set, there is no point in dealing with others
if (!memcpyed) {
if (context->filter_flags & BLOSC_DOSHUFFLE) {
/* Byte-shuffle is active */
context->header_flags |= BLOSC_DOSHUFFLE;
}
if (context->filter_flags & BLOSC_DOBITSHUFFLE) {
/* Bit-shuffle is active */
context->header_flags |= BLOSC_DOBITSHUFFLE;
}
if (context->filter_flags & BLOSC_DODELTA) {
/* Delta is active */
context->header_flags |= BLOSC_DODELTA;
}
dont_split = !split_block(context, context->typesize,
context->blocksize, extended_header);
context->header_flags |= dont_split << 4; /* dont_split is in bit 4 */
context->header_flags |= compformat << 5; /* codec starts at bit 5 */
}
// store header flags in dest
context->dest[2] = context->header_flags;
return 1;
} | 0 | [
"CWE-787"
] | c-blosc2 | c4c6470e88210afc95262c8b9fcc27e30ca043ee | 201,856,728,996,569,780,000,000,000,000,000,000,000 | 153 | Fixed asan heap buffer overflow when not enough space to write compressed block size. |
static int pfkey_spdget(struct sock *sk, struct sk_buff *skb, const struct sadb_msg *hdr, void * const *ext_hdrs)
{
struct net *net = sock_net(sk);
unsigned int dir;
int err = 0, delete;
struct sadb_x_policy *pol;
struct xfrm_policy *xp;
struct km_event c;
if ((pol = ext_hdrs[SADB_X_EXT_POLICY-1]) == NULL)
return -EINVAL;
dir = xfrm_policy_id2dir(pol->sadb_x_policy_id);
if (dir >= XFRM_POLICY_MAX)
return -EINVAL;
delete = (hdr->sadb_msg_type == SADB_X_SPDDELETE2);
xp = xfrm_policy_byid(net, DUMMY_MARK, XFRM_POLICY_TYPE_MAIN,
dir, pol->sadb_x_policy_id, delete, &err);
if (xp == NULL)
return -ENOENT;
if (delete) {
xfrm_audit_policy_delete(xp, err ? 0 : 1,
audit_get_loginuid(current),
audit_get_sessionid(current), 0);
if (err)
goto out;
c.seq = hdr->sadb_msg_seq;
c.portid = hdr->sadb_msg_pid;
c.data.byid = 1;
c.event = XFRM_MSG_DELPOLICY;
km_policy_notify(xp, dir, &c);
} else {
err = key_pol_get_resp(sk, xp, hdr, dir);
}
out:
xfrm_pol_put(xp);
if (delete && err == 0)
xfrm_garbage_collect(net);
return err;
} | 0 | [
"CWE-20",
"CWE-269"
] | linux | f3d3342602f8bcbf37d7c46641cb9bca7618eb1c | 263,933,714,021,575,450,000,000,000,000,000,000,000 | 44 | net: rework recvmsg handler msg_name and msg_namelen logic
This patch now always passes msg->msg_namelen as 0. recvmsg handlers must
set msg_namelen to the proper size <= sizeof(struct sockaddr_storage)
to return msg_name to the user.
This prevents numerous uninitialized memory leaks we had in the
recvmsg handlers and makes it harder for new code to accidentally leak
uninitialized memory.
Optimize for the case recvfrom is called with NULL as address. We don't
need to copy the address at all, so set it to NULL before invoking the
recvmsg handler. We can do so, because all the recvmsg handlers must
cope with the case a plain read() is called on them. read() also sets
msg_name to NULL.
Also document these changes in include/linux/net.h as suggested by David
Miller.
Changes since RFC:
Set msg->msg_name = NULL if user specified a NULL in msg_name but had a
non-null msg_namelen in verify_iovec/verify_compat_iovec. This doesn't
affect sendto as it would bail out earlier while trying to copy-in the
address. It also more naturally reflects the logic by the callers of
verify_iovec.
With this change in place I could remove "
if (!uaddr || msg_sys->msg_namelen == 0)
msg->msg_name = NULL
".
This change does not alter the user visible error logic as we ignore
msg_namelen as long as msg_name is NULL.
Also remove two unnecessary curly brackets in ___sys_recvmsg and change
comments to netdev style.
Cc: David Miller <[email protected]>
Suggested-by: Eric Dumazet <[email protected]>
Signed-off-by: Hannes Frederic Sowa <[email protected]>
Signed-off-by: David S. Miller <[email protected]> |
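The contract this commit describes — a recvmsg handler reports an address only by filling msg_name and setting msg_namelen to at most sizeof(struct sockaddr_storage), and otherwise leaves msg_namelen at 0 — can be sketched in plain C. This is an illustrative userspace helper, not the kernel code; the name fill_msg_name is invented here.

```c
#include <string.h>
#include <sys/socket.h>

/*
 * Copy the peer address into msg->msg_name only if the caller supplied a
 * buffer, and report the copied length (clamped to
 * sizeof(struct sockaddr_storage)) through msg_namelen. If msg_name is
 * NULL (a plain read()-style call), leave msg_namelen at 0 so no
 * uninitialized bytes are ever reported.
 */
static void fill_msg_name(struct msghdr *msg,
                          const struct sockaddr *peer, socklen_t peerlen)
{
    if (msg->msg_name == NULL) {
        msg->msg_namelen = 0;              /* nothing to report */
        return;
    }
    if (peerlen > sizeof(struct sockaddr_storage))
        peerlen = sizeof(struct sockaddr_storage);
    memcpy(msg->msg_name, peer, peerlen);
    msg->msg_namelen = peerlen;            /* set only after msg_name is filled */
}
```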
mono_assembly_get_object (MonoDomain *domain, MonoAssembly *assembly)
{
static MonoClass *System_Reflection_Assembly;
MonoReflectionAssembly *res;
CHECK_OBJECT (MonoReflectionAssembly *, assembly, NULL);
if (!System_Reflection_Assembly)
System_Reflection_Assembly = mono_class_from_name (
mono_defaults.corlib, "System.Reflection", "Assembly");
res = (MonoReflectionAssembly *)mono_object_new (domain, System_Reflection_Assembly);
res->assembly = assembly;
CACHE_OBJECT (MonoReflectionAssembly *, assembly, res, NULL);
} | 0 | [
"CWE-20"
] | mono | 4905ef1130feb26c3150b28b97e4a96752e0d399 | 336,677,163,835,691,100,000,000,000,000,000,000,000 | 14 | Handle invalid instantiation of generic methods.
* verify.c: Add new function to internal verifier API to check
method instantiations.
* reflection.c (mono_reflection_bind_generic_method_parameters):
Check the instantiation before returning it.
Fixes #655847 |
zbuilddevicecolorrendering1(i_ctx_t *i_ctx_p)
{
os_ptr op = osp;
gs_memory_t *mem = gs_gstate_memory(igs);
dict_param_list list;
gs_cie_render *pcrd = 0;
int code;
check_type(*op, t_dictionary);
code = dict_param_list_read(&list, op, NULL, false, iimemory);
if (code < 0)
return code;
code = gs_cie_render1_build(&pcrd, mem, ".builddevicecolorrendering1");
if (code >= 0) {
code = param_get_cie_render1(pcrd, (gs_param_list *) & list,
gs_currentdevice(igs));
if (code >= 0) {
/****** FIX refct ******/
/*rc_decrement(pcrd, ".builddevicecolorrendering1"); *//* build sets rc = 1 */
}
}
iparam_list_release(&list);
if (code < 0) {
rc_free_struct(pcrd, ".builddevicecolorrendering1");
return code;
}
istate->colorrendering.dict = *op;
make_istruct_new(op, a_readonly, pcrd);
return 0;
} | 0 | [
"CWE-704"
] | ghostpdl | 548bb434e81dadcc9f71adf891a3ef5bea8e2b4e | 193,723,742,103,188,100,000,000,000,000,000,000,000 | 30 | PS interpreter - add some type checking
These were 'probably' safe anyway, since they mostly treat the objects
as integers without checking, which at least can't result in a crash.
Nevertheless, we ought to check.
The return from comparedictkeys could be wrong if one of the keys had
a value which was not an array, it could incorrectly decide the two
were in fact the same. |
set_filename_bstab (string)
const char *string;
{
const char *s;
memset (filename_bstab, 0, sizeof (filename_bstab));
for (s = string; s && *s; s++)
filename_bstab[*s] = 1;
} | 0 | [
"CWE-20"
] | bash | 4f747edc625815f449048579f6e65869914dd715 | 93,220,543,676,882,220,000,000,000,000,000,000,000 | 9 | Bash-4.4 patch 7 |
static const char *wsgi_server_group(request_rec *r, const char *s)
{
const char *name = NULL;
const char *h = NULL;
apr_port_t p = 0;
if (!s)
return "";
if (*s != '%')
return s;
name = s + 1;
if (*name) {
if (!strcmp(name, "{SERVER}")) {
h = r->server->server_hostname;
p = ap_get_server_port(r);
if (p != DEFAULT_HTTP_PORT && p != DEFAULT_HTTPS_PORT)
return apr_psprintf(r->pool, "%s:%u", h, p);
else
return h;
}
if (!strcmp(name, "{GLOBAL}"))
return "";
}
return s;
} | 0 | [
"CWE-254"
] | mod_wsgi | 545354a80b9cc20d8b6916ca30542eab36c3b8bd | 128,510,165,744,500,280,000,000,000,000,000,000,000 | 32 | When there is any sort of error in setting up daemon process group, kill the process rather than risk running in an unexpected state. |
bytes_richcompare(PyBytesObject *a, PyBytesObject *b, int op)
{
int c;
Py_ssize_t len_a, len_b;
Py_ssize_t min_len;
PyObject *result;
int rc;
/* Make sure both arguments are strings. */
if (!(PyBytes_Check(a) && PyBytes_Check(b))) {
if (Py_BytesWarningFlag && (op == Py_EQ || op == Py_NE)) {
rc = PyObject_IsInstance((PyObject*)a,
(PyObject*)&PyUnicode_Type);
if (!rc)
rc = PyObject_IsInstance((PyObject*)b,
(PyObject*)&PyUnicode_Type);
if (rc < 0)
return NULL;
if (rc) {
if (PyErr_WarnEx(PyExc_BytesWarning,
"Comparison between bytes and string", 1))
return NULL;
}
}
result = Py_NotImplemented;
}
else if (a == b) {
switch (op) {
case Py_EQ:
case Py_LE:
case Py_GE:
/* a string is equal to itself */
result = Py_True;
break;
case Py_NE:
case Py_LT:
case Py_GT:
result = Py_False;
break;
default:
PyErr_BadArgument();
return NULL;
}
}
else if (op == Py_EQ || op == Py_NE) {
int eq = bytes_compare_eq(a, b);
eq ^= (op == Py_NE);
result = eq ? Py_True : Py_False;
}
else {
len_a = Py_SIZE(a);
len_b = Py_SIZE(b);
min_len = Py_MIN(len_a, len_b);
if (min_len > 0) {
c = Py_CHARMASK(*a->ob_sval) - Py_CHARMASK(*b->ob_sval);
if (c == 0)
c = memcmp(a->ob_sval, b->ob_sval, min_len);
}
else
c = 0;
if (c == 0)
c = (len_a < len_b) ? -1 : (len_a > len_b) ? 1 : 0;
switch (op) {
case Py_LT: c = c < 0; break;
case Py_LE: c = c <= 0; break;
case Py_GT: c = c > 0; break;
case Py_GE: c = c >= 0; break;
default:
PyErr_BadArgument();
return NULL;
}
result = c ? Py_True : Py_False;
}
Py_INCREF(result);
return result;
} | 0 | [
"CWE-190"
] | cpython | 6c004b40f9d51872d848981ef1a18bb08c2dfc42 | 284,096,448,858,167,480,000,000,000,000,000,000,000 | 77 | bpo-30657: Fix CVE-2017-1000158 (#4758)
Fixes possible integer overflow in PyBytes_DecodeEscape.
Co-Authored-By: Jay Bosamiya <[email protected]> |
static int handle_NPP_GetValue(rpc_connection_t *connection)
{
D(bug("handle_NPP_GetValue\n"));
int error;
PluginInstance *plugin;
int32_t variable;
error = rpc_method_get_args(connection,
RPC_TYPE_NPW_PLUGIN_INSTANCE, &plugin,
RPC_TYPE_INT32, &variable,
RPC_TYPE_INVALID);
if (error != RPC_ERROR_NO_ERROR) {
npw_printf("ERROR: could not get NPP_GetValue variable\n");
return error;
}
NPError ret = NPERR_GENERIC_ERROR;
int variable_type = rpc_type_of_NPPVariable(variable);
switch (variable_type) {
case RPC_TYPE_STRING:
{
char *str = NULL;
ret = g_NPP_GetValue(PLUGIN_INSTANCE_NPP(plugin), variable, (void *)&str);
return rpc_method_send_reply(connection, RPC_TYPE_INT32, ret, RPC_TYPE_STRING, str, RPC_TYPE_INVALID);
}
case RPC_TYPE_INT32:
{
uint32_t n = 0;
ret = g_NPP_GetValue(PLUGIN_INSTANCE_NPP(plugin), variable, (void *)&n);
return rpc_method_send_reply(connection, RPC_TYPE_INT32, ret, RPC_TYPE_INT32, n, RPC_TYPE_INVALID);
}
case RPC_TYPE_BOOLEAN:
{
NPBool b = FALSE;
ret = g_NPP_GetValue(PLUGIN_INSTANCE_NPP(plugin), variable, (void *)&b);
return rpc_method_send_reply(connection, RPC_TYPE_INT32, ret, RPC_TYPE_BOOLEAN, b, RPC_TYPE_INVALID);
}
case RPC_TYPE_NP_OBJECT:
{
NPObject *npobj = NULL;
ret = g_NPP_GetValue(PLUGIN_INSTANCE_NPP(plugin), variable, (void *)&npobj);
return rpc_method_send_reply(connection, RPC_TYPE_INT32, ret, RPC_TYPE_NP_OBJECT, npobj, RPC_TYPE_INVALID);
}
}
abort();
} | 0 | [
"CWE-264"
] | nspluginwrapper | 7e4ab8e1189846041f955e6c83f72bc1624e7a98 | 143,523,618,956,164,080,000,000,000,000,000,000,000 | 50 | Support all the new variables added |
TfLiteStatus Subgraph::SetTensorParametersReadOnly(
int tensor_index, TfLiteType type, const char* name, const size_t rank,
const int* dims, TfLiteQuantization quantization, const char* buffer,
size_t bytes, const Allocation* allocation, TfLiteSparsity* sparsity) {
// Ensure quantization cleanup on failure.
ScopedTfLiteQuantization scoped_quantization(&quantization);
ScopedTfLiteSparsity scoped_sparsity(sparsity);
if (state_ == kStateInvokableAndImmutable) {
ReportError(
"SetTensorParametersReadOnly is disallowed when graph is immutable.");
return kTfLiteError;
}
TF_LITE_ENSURE(&context_,
tensor_index < context_.tensors_size && tensor_index >= 0);
// For most tensors we know exactly how much memory is necessary so we can
// ensure the buffer is large enough. However, we need to skip string tensors
// and sparse tensors because their sizes change with the contents.
// TODO(b/145615516): Extend BytesRequired to check sparse tensors.
if (type != kTfLiteString && sparsity == nullptr) {
size_t required_bytes;
TF_LITE_ENSURE_OK(&context_,
BytesRequired(type, dims, rank, &required_bytes));
TF_LITE_ENSURE_EQ(&context_, required_bytes, bytes);
}
TfLiteTensor& tensor = context_.tensors[tensor_index];
if (type == tensor.type &&
EqualArrayAndTfLiteIntArray(tensor.dims, rank, dims)) {
// Fast path which does not invalidate the invokable property.
TfLiteTensorDataFree(&tensor);
TfLiteQuantizationFree(&tensor.quantization);
tensor.data.raw = const_cast<char*>(buffer);
if (!tensor.dims) tensor.dims = ConvertArrayToTfLiteIntArray(rank, dims);
tensor.params = GetLegacyQuantization(quantization);
tensor.quantization = *scoped_quantization.release();
tensor.sparsity = scoped_sparsity.release();
tensor.allocation_type = kTfLiteMmapRo;
tensor.allocation = allocation;
} else {
state_ = kStateUninvokable;
TfLiteTensorReset(type, name, ConvertArrayToTfLiteIntArray(rank, dims),
GetLegacyQuantization(quantization),
const_cast<char*>(buffer), bytes, kTfLiteMmapRo,
allocation, false, &tensor);
// TODO(suharshs): Update TfLiteTensorReset to include the new quantization
// if there are other required callers.
tensor.quantization = *scoped_quantization.release();
tensor.sparsity = scoped_sparsity.release();
}
return kTfLiteOk;
} | 0 | [
"CWE-20",
"CWE-787"
] | tensorflow | d58c96946b2880991d63d1dacacb32f0a4dfa453 | 178,139,227,380,250,000,000,000,000,000,000,000,000 | 53 | [tflite] Ensure inputs and outputs don't overlap.
If a model uses the same tensor for both an input and an output then this can result in data loss and memory corruption. This should not happen.
PiperOrigin-RevId: 332522916
Change-Id: If0905b142415a9dfceaf2d181872f2a8fb88f48a |
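The condition this change guards against — the same tensor index appearing in both the input and output lists — amounts to a simple overlap check. A hedged sketch in plain C over integer index arrays, not the actual TensorFlow Lite types or the real validation code:

```c
#include <stdbool.h>
#include <stddef.h>

/* Return true if any tensor index appears in both lists. */
static bool tensors_overlap(const int *inputs, size_t n_in,
                            const int *outputs, size_t n_out)
{
    for (size_t i = 0; i < n_in; i++)
        for (size_t j = 0; j < n_out; j++)
            if (inputs[i] == outputs[j])
                return true;
    return false;
}
```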
cmsBool AllCurvesAreLinear(cmsStage* mpe)
{
cmsToneCurve** Curves;
cmsUInt32Number i, n;
Curves = _cmsStageGetPtrToCurveSet(mpe);
if (Curves == NULL) return FALSE;
n = cmsStageOutputChannels(mpe);
for (i=0; i < n; i++) {
if (!cmsIsToneCurveLinear(Curves[i])) return FALSE;
}
return TRUE;
} | 0 | [] | Little-CMS | 41d222df1bc6188131a8f46c32eab0a4d4cdf1b6 | 136,137,962,083,068,680,000,000,000,000,000,000,000 | 16 | Memory squeezing fix: lcms2 cmsPipeline construction
When creating a new pipeline, lcms would often try to allocate a stage
and pass it to cmsPipelineInsertStage without checking whether the
allocation succeeded. cmsPipelineInsertStage would then assert (or crash)
if it had not.
The fix here is to change cmsPipelineInsertStage to check and return
an error value. All calling code is then checked to test this return
value and cope. |
static noinline int free_debug_processing(
struct kmem_cache *s, struct page *page,
void *head, void *tail, int bulk_cnt,
unsigned long addr)
{
struct kmem_cache_node *n = get_node(s, page_to_nid(page));
void *object = head;
int cnt = 0;
unsigned long uninitialized_var(flags);
int ret = 0;
spin_lock_irqsave(&n->list_lock, flags);
slab_lock(page);
if (s->flags & SLAB_CONSISTENCY_CHECKS) {
if (!check_slab(s, page))
goto out;
}
next_object:
cnt++;
if (s->flags & SLAB_CONSISTENCY_CHECKS) {
if (!free_consistency_checks(s, page, object, addr))
goto out;
}
if (s->flags & SLAB_STORE_USER)
set_track(s, object, TRACK_FREE, addr);
trace(s, page, object, 0);
/* Freepointer not overwritten by init_object(), SLAB_POISON moved it */
init_object(s, object, SLUB_RED_INACTIVE);
/* Reached end of constructed freelist yet? */
if (object != tail) {
object = get_freepointer(s, object);
goto next_object;
}
ret = 1;
out:
if (cnt != bulk_cnt)
slab_err(s, page, "Bulk freelist count(%d) invalid(%d)\n",
bulk_cnt, cnt);
slab_unlock(page);
spin_unlock_irqrestore(&n->list_lock, flags);
if (!ret)
slab_fix(s, "Object at 0x%p not freed", object);
return ret;
} | 0 | [] | linux | fd4d9c7d0c71866ec0c2825189ebd2ce35bd95b8 | 150,522,492,727,481,440,000,000,000,000,000,000,000 | 51 | mm: slub: add missing TID bump in kmem_cache_alloc_bulk()
When kmem_cache_alloc_bulk() attempts to allocate N objects from a percpu
freelist of length M, and N > M > 0, it will first remove the M elements
from the percpu freelist, then call ___slab_alloc() to allocate the next
element and repopulate the percpu freelist. ___slab_alloc() can re-enable
IRQs via allocate_slab(), so the TID must be bumped before ___slab_alloc()
to properly commit the freelist head change.
Fix it by unconditionally bumping c->tid when entering the slowpath.
Cc: [email protected]
Fixes: ebe909e0fdb3 ("slub: improve bulk alloc strategy")
Signed-off-by: Jann Horn <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> |
static char* get_private_subtags(const char* loc_name)
{
char* result =NULL;
int singletonPos = 0;
int len =0;
const char* mod_loc_name =NULL;
if( loc_name && (len = strlen(loc_name)>0 ) ){
mod_loc_name = loc_name ;
len = strlen(mod_loc_name);
while( (singletonPos = getSingletonPos(mod_loc_name))!= -1){
if( singletonPos!=-1){
if( (*(mod_loc_name+singletonPos)=='x') || (*(mod_loc_name+singletonPos)=='X') ){
/* private subtag start found */
if( singletonPos + 2 == len){
/* loc_name ends with '-x-' ; return NULL */
}
else{
/* result = mod_loc_name + singletonPos +2; */
result = estrndup(mod_loc_name + singletonPos+2 , (len -( singletonPos +2) ) );
}
break;
}
else{
if( singletonPos + 1 >= len){
/* String end */
break;
} else {
/* singleton found but not a private subtag , hence check further in the string for the private subtag */
mod_loc_name = mod_loc_name + singletonPos +1;
len = strlen(mod_loc_name);
}
}
}
} /* end of while */
}
return result;
} | 0 | [
"CWE-125"
] | php-src | 97eff7eb57fc2320c267a949cffd622c38712484 | 276,790,279,879,323,400,000,000,000,000,000,000,000 | 41 | Fix bug #72241: get_icu_value_internal out-of-bounds read |
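One detail worth noting in the function above: `len = strlen(loc_name)>0` parses as `len = (strlen(loc_name) > 0)`, since `>` binds tighter than `=`, so `len` briefly holds 0 or 1 rather than the length (the code recomputes it just below, but the grouping is easy to misread). A small illustrative helper showing the intended grouping — not the PHP source:

```c
#include <stddef.h>
#include <string.h>

/*
 * Illustrative only: return the length of s, or 0 for NULL/empty.
 * Note the parentheses: (len = strlen(s)) > 0, not len = (strlen(s) > 0).
 */
static size_t nonempty_len(const char *s)
{
    size_t len;

    if (s != NULL && (len = strlen(s)) > 0)
        return len;
    return 0;
}
```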
PHP_FUNCTION(imagedashedline)
{
zval *IM;
long x1, y1, x2, y2, col;
gdImagePtr im;
if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "rlllll", &IM, &x1, &y1, &x2, &y2, &col) == FAILURE) {
return;
}
ZEND_FETCH_RESOURCE(im, gdImagePtr, &IM, -1, "Image", le_gd);
gdImageDashedLine(im, x1, y1, x2, y2, col);
RETURN_TRUE;
} | 0 | [
"CWE-703",
"CWE-189"
] | php-src | 2938329ce19cb8c4197dec146c3ec887c6f61d01 | 82,961,033,033,725,770,000,000,000,000,000,000,000 | 14 | Fixed bug #66356 (Heap Overflow Vulnerability in imagecrop())
And also fixed the bug: arguments are altered after some calls |
evdns_base_free_and_unlock(struct evdns_base *base, int fail_requests)
{
struct nameserver *server, *server_next;
struct search_domain *dom, *dom_next;
int i;
/* Requires that we hold the lock. */
/* TODO(nickm) we might need to refcount here. */
for (i = 0; i < base->n_req_heads; ++i) {
while (base->req_heads[i]) {
if (fail_requests)
reply_schedule_callback(base->req_heads[i], 0, DNS_ERR_SHUTDOWN, NULL);
request_finished(base->req_heads[i], &REQ_HEAD(base, base->req_heads[i]->trans_id), 1);
}
}
while (base->req_waiting_head) {
if (fail_requests)
reply_schedule_callback(base->req_waiting_head, 0, DNS_ERR_SHUTDOWN, NULL);
request_finished(base->req_waiting_head, &base->req_waiting_head, 1);
}
base->global_requests_inflight = base->global_requests_waiting = 0;
for (server = base->server_head; server; server = server_next) {
server_next = server->next;
/** already done something before */
server->probe_request = NULL;
evdns_nameserver_free(server);
if (server_next == base->server_head)
break;
}
base->server_head = NULL;
base->global_good_nameservers = 0;
if (base->global_search_state) {
for (dom = base->global_search_state->head; dom; dom = dom_next) {
dom_next = dom->next;
mm_free(dom);
}
mm_free(base->global_search_state);
base->global_search_state = NULL;
}
{
struct hosts_entry *victim;
while ((victim = TAILQ_FIRST(&base->hostsdb))) {
TAILQ_REMOVE(&base->hostsdb, victim, next);
mm_free(victim);
}
}
mm_free(base->req_heads);
EVDNS_UNLOCK(base);
EVTHREAD_FREE_LOCK(base->lock, EVTHREAD_LOCKTYPE_RECURSIVE);
mm_free(base);
} | 0 | [
"CWE-125"
] | libevent | ec65c42052d95d2c23d1d837136d1cf1d9ecef9e | 335,857,248,859,036,650,000,000,000,000,000,000,000 | 59 | evdns: fix searching empty hostnames
From #332:
Here follows a bug report by **Guido Vranken** via the _Tor bug bounty program_. Please credit Guido accordingly.
## Bug report
The DNS code of Libevent contains this rather obvious OOB read:
```c
static char *
search_make_new(const struct search_state *const state, int n, const char *const base_name) {
const size_t base_len = strlen(base_name);
const char need_to_append_dot = base_name[base_len - 1] == '.' ? 0 : 1;
```
If the length of ```base_name``` is 0, then line 3125 reads 1 byte before the buffer. This will trigger a crash on ASAN-protected builds.
To reproduce:
Build libevent with ASAN:
```
$ CFLAGS='-fomit-frame-pointer -fsanitize=address' ./configure && make -j4
```
Put the attached ```resolv.conf``` and ```poc.c``` in the source directory and then do:
```
$ gcc -fsanitize=address -fomit-frame-pointer poc.c .libs/libevent.a
$ ./a.out
=================================================================
==22201== ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60060000efdf at pc 0x4429da bp 0x7ffe1ed47300 sp 0x7ffe1ed472f8
READ of size 1 at 0x60060000efdf thread T0
```
P.S. we can add a check earlier, but since this is very uncommon, I didn't add it.
Fixes: #332 |
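The report above shows `base_name[base_len - 1]` reading one byte before the buffer when the name is empty; the "check earlier" the author mentions amounts to rejecting empty names before indexing. A hedged sketch of that guard as a standalone helper, not the libevent internals:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* True if `name` is non-empty and already ends with a dot. */
static bool ends_with_dot(const char *name)
{
    size_t len = name ? strlen(name) : 0;

    if (len == 0)
        return false;               /* avoid reading name[-1] */
    return name[len - 1] == '.';
}
```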
xfs_inode_ag_iterator(
struct xfs_mount *mp,
int (*execute)(struct xfs_inode *ip, int flags,
void *args),
int flags,
void *args)
{
return xfs_inode_ag_iterator_flags(mp, execute, flags, args, 0);
} | 0 | [
"CWE-476"
] | linux | afca6c5b2595fc44383919fba740c194b0b76aff | 301,124,222,340,836,140,000,000,000,000,000,000,000 | 9 | xfs: validate cached inodes are free when allocated
A recent fuzzed filesystem image cached random dcache corruption
when the reproducer was run. This often showed up as panics in
lookup_slow() on a null inode->i_ops pointer when doing pathwalks.
BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
....
Call Trace:
lookup_slow+0x44/0x60
walk_component+0x3dd/0x9f0
link_path_walk+0x4a7/0x830
path_lookupat+0xc1/0x470
filename_lookup+0x129/0x270
user_path_at_empty+0x36/0x40
path_listxattr+0x98/0x110
SyS_listxattr+0x13/0x20
do_syscall_64+0xf5/0x280
entry_SYSCALL_64_after_hwframe+0x42/0xb7
but had many different failure modes including deadlocks trying to
lock the inode that was just allocated or KASAN reports of
use-after-free violations.
The cause of the problem was a corrupt INOBT on a v4 fs where the
root inode was marked as free in the inobt record. Hence when we
allocated an inode, it chose the root inode to allocate, found it in
the cache and re-initialised it.
We recently fixed a similar inode allocation issue caused by inobt
record corruption problem in xfs_iget_cache_miss() in commit
ee457001ed6c ("xfs: catch inode allocation state mismatch
corruption"). This change adds similar checks to the cache-hit path
to catch it, and turns the reproducer into a corruption shutdown
situation.
Reported-by: Wen Xu <[email protected]>
Signed-Off-By: Dave Chinner <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Carlos Maiolino <[email protected]>
Reviewed-by: Darrick J. Wong <[email protected]>
[darrick: fix typos in comment]
Signed-off-by: Darrick J. Wong <[email protected]> |
void SetUp() override {
TestUtility::loadFromYaml(kConfigTemplate, proto_config_);
for (auto& it : *(proto_config_.mutable_providers())) {
it.second.mutable_local_jwks()->set_inline_string(PublicKey);
}
} | 0 | [
"CWE-303",
"CWE-703"
] | envoy | ea39e3cba652bcc4b11bb0d5c62b017e584d2e5a | 100,579,973,814,928,260,000,000,000,000,000,000,000 | 6 | jwt_authn: fix a bug where JWT with wrong issuer is allowed in allow_missing case (#15194)
[jwt] When allow_missing is used inside RequiresAny, the requests with JWT with wrong issuer are accepted. This is a bug, allow_missing should only allow requests without any JWT. This change fixed the above issue by preserving JwtUnknownIssuer in allow_missing case.
Signed-off-by: Wayne Zhang <[email protected]> |
static int ntop_get_interface_dump_max_pkts(lua_State* vm) {
NetworkInterface *ntop_interface = getCurrentInterface(vm);
int max_pkts;
ntop->getTrace()->traceEvent(TRACE_DEBUG, "%s() called", __FUNCTION__);
if(!ntop_interface)
return(CONST_LUA_ERROR);
max_pkts = ntop_interface->getDumpTrafficMaxPktsPerFile();
lua_pushnumber(vm, max_pkts);
return(CONST_LUA_OK);
} | 0 | [
"CWE-476"
] | ntopng | 01f47e04fd7c8d54399c9e465f823f0017069f8f | 318,360,707,117,437,170,000,000,000,000,000,000,000 | 15 | Security fix: prevents empty host from being used |
_flatpak_dir_find_new_flatpakrepos (FlatpakDir *self, OstreeRepo *repo)
{
g_autoptr(GHashTable) flatpakrepos = NULL;
g_autoptr(GFile) conf_dir = NULL;
g_autoptr(GFileEnumerator) dir_enum = NULL;
g_autoptr(GError) my_error = NULL;
g_autofree char *config_dir = NULL;
g_auto(GStrv) remotes = NULL;
g_auto(GStrv) applied_remotes = NULL;
g_assert (repo != NULL);
/* Predefined remotes only applies for the default system installation */
if (self->user ||
(self->extra_data != NULL &&
strcmp (self->extra_data->id, SYSTEM_DIR_DEFAULT_ID) != 0))
return NULL;
config_dir = g_strdup_printf ("%s/%s",
get_config_dir_location (),
SYSCONF_REMOTES_DIR);
conf_dir = g_file_new_for_path (config_dir);
dir_enum = g_file_enumerate_children (conf_dir,
G_FILE_ATTRIBUTE_STANDARD_NAME "," G_FILE_ATTRIBUTE_STANDARD_TYPE,
G_FILE_QUERY_INFO_NONE,
NULL, &my_error);
if (my_error != NULL)
return NULL;
remotes = ostree_repo_remote_list (repo, NULL);
applied_remotes = g_key_file_get_string_list (ostree_repo_get_config (repo),
"core", "xa.applied-remotes", NULL, NULL);
while (TRUE)
{
GFileInfo *file_info;
GFile *path;
const char *name;
guint32 type;
if (!g_file_enumerator_iterate (dir_enum, &file_info, &path,
NULL, &my_error))
{
g_debug ("Unexpected error reading file in %s: %s",
config_dir, my_error->message);
break;
}
if (file_info == NULL)
break;
name = g_file_info_get_name (file_info);
type = g_file_info_get_file_type (file_info);
if (type == G_FILE_TYPE_REGULAR && g_str_has_suffix (name, SYSCONF_REMOTES_FILE_EXT))
{
g_autofree char *remote_name = g_strndup (name, strlen (name) - strlen (SYSCONF_REMOTES_FILE_EXT));
if (remotes && g_strv_contains ((const char * const *)remotes, remote_name))
continue;
if (applied_remotes && g_strv_contains ((const char * const *)applied_remotes, remote_name))
continue;
if (flatpakrepos == NULL)
flatpakrepos = g_hash_table_new_full (g_str_hash, g_str_equal, g_free, g_object_unref);
g_hash_table_insert (flatpakrepos, g_steal_pointer (&remote_name), g_file_enumerator_get_child (dir_enum, file_info));
}
}
return g_steal_pointer (&flatpakrepos);
} | 0 | [
"CWE-74"
] | flatpak | fb473cad801c6b61706353256cab32330557374a | 176,738,497,951,489,800,000,000,000,000,000,000,000 | 73 | dir: Pass environment via bwrap --setenv when running apply_extra
This means we can systematically pass the environment variables
through bwrap(1), even if it is setuid and thus is filtering out
security-sensitive environment variables. bwrap ends up being
run with an empty environment instead.
As with the previous commit, this regressed while fixing CVE-2021-21261.
Fixes: 6d1773d2 "run: Convert all environment variables into bwrap arguments"
Signed-off-by: Simon McVittie <[email protected]> |
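The approach described here — passing the child environment as explicit arguments so a setuid bwrap that scrubs its inherited environment still applies the intended variables — boils down to emitting repeated `--setenv VAR VALUE` pairs on the command line. A minimal sketch; the variable names and the trivial sandbox setup are placeholders, and bubblewrap's `--setenv`/`--bind` options are assumed from its documentation:

```c
#include <unistd.h>

int main(void)
{
    /* bwrap(1) takes repeated --setenv VAR VALUE pairs before the command. */
    char *const argv[] = {
        "bwrap",
        "--setenv", "FLATPAK_ID", "org.example.App",   /* placeholder value */
        "--setenv", "LANG",       "C.UTF-8",           /* placeholder value */
        "--bind", "/", "/",                            /* trivial sandbox   */
        "/bin/true",
        NULL
    };

    execvp("bwrap", argv);
    return 1;   /* reached only if exec failed */
}
```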
int AVI_set_audio_position(avi_t *AVI, int byte)
{
int n0, n1;
if(AVI->mode==AVI_MODE_WRITE) {
AVI_errno = AVI_ERR_NOT_PERM;
return -1;
}
if(!AVI->track[AVI->aptr].audio_index) {
AVI_errno = AVI_ERR_NO_IDX;
return -1;
}
if(byte < 0) byte = 0;
/* Binary search in the audio chunks */
n0 = 0;
n1 = AVI->track[AVI->aptr].audio_chunks;
while(n0<n1-1)
{
int n = (n0+n1)/2;
if(AVI->track[AVI->aptr].audio_index[n].tot>(u32) byte)
n1 = n;
else
n0 = n;
}
AVI->track[AVI->aptr].audio_posc = n0;
AVI->track[AVI->aptr].audio_posb = (u32) (byte - AVI->track[AVI->aptr].audio_index[n0].tot);
return 0;
} | 0 | [
"CWE-835"
] | gpac | 7f060bbb72966cae80d6fee338d0b07fa3fc06e1 | 158,167,899,914,870,620,000,000,000,000,000,000,000 | 34 | fixed #2159 |
static int fts3TermSelectFinishMerge(Fts3Table *p, TermSelect *pTS){
char *aOut = 0;
int nOut = 0;
int i;
/* Loop through the doclists in the aaOutput[] array. Merge them all
** into a single doclist.
*/
for(i=0; i<SizeofArray(pTS->aaOutput); i++){
if( pTS->aaOutput[i] ){
if( !aOut ){
aOut = pTS->aaOutput[i];
nOut = pTS->anOutput[i];
pTS->aaOutput[i] = 0;
}else{
int nNew;
char *aNew;
int rc = fts3DoclistOrMerge(p->bDescIdx,
pTS->aaOutput[i], pTS->anOutput[i], aOut, nOut, &aNew, &nNew
);
if( rc!=SQLITE_OK ){
sqlite3_free(aOut);
return rc;
}
sqlite3_free(pTS->aaOutput[i]);
sqlite3_free(aOut);
pTS->aaOutput[i] = 0;
aOut = aNew;
nOut = nNew;
}
}
}
pTS->aaOutput[0] = aOut;
pTS->anOutput[0] = nOut;
return SQLITE_OK;
} | 0 | [
"CWE-787"
] | sqlite | c72f2fb7feff582444b8ffdc6c900c69847ce8a9 | 317,782,553,954,137,300,000,000,000,000,000,000,000 | 39 | More improvements to shadow table corruption detection in FTS3.
FossilOrigin-Name: 51525f9c3235967bc00a090e84c70a6400698c897aa4742e817121c725b8c99d |
void ERR_free_strings(void)
{
// handled internally
} | 0 | [
"CWE-254"
] | mysql-server | e7061f7e5a96c66cb2e0bf46bec7f6ff35801a69 | 291,799,593,988,953,820,000,000,000,000,000,000,000 | 4 | Bug #22738607: YASSL FUNCTION X509_NAME_GET_INDEX_BY_NID IS NOT WORKING AS EXPECTED. |
int tls12_copy_sigalgs(SSL *s, WPACKET *pkt,
const uint16_t *psig, size_t psiglen)
{
size_t i;
int rv = 0;
for (i = 0; i < psiglen; i++, psig++) {
const SIGALG_LOOKUP *lu = tls1_lookup_sigalg(*psig);
if (!tls12_sigalg_allowed(s, SSL_SECOP_SIGALG_SUPPORTED, lu))
continue;
if (!WPACKET_put_bytes_u16(pkt, *psig))
return 0;
/*
* If TLS 1.3 must have at least one valid TLS 1.3 message
* signing algorithm: i.e. neither RSA nor SHA1/SHA224
*/
if (rv == 0 && (!SSL_IS_TLS13(s)
|| (lu->sig != EVP_PKEY_RSA
&& lu->hash != NID_sha1
&& lu->hash != NID_sha224)))
rv = 1;
}
if (rv == 0)
SSLerr(SSL_F_TLS12_COPY_SIGALGS, SSL_R_NO_SUITABLE_SIGNATURE_ALGORITHM);
return rv;
} | 0 | [
"CWE-476"
] | openssl | 5235ef44b93306a14d0b6c695b13c64b16e1fdec | 339,561,213,569,919,430,000,000,000,000,000,000,000 | 27 | Fix SSL_check_chain()
The function SSL_check_chain() can be used by applications to check that
a cert and chain is compatible with the negotiated parameters. This could
be useful (for example) from the certificate callback. Unfortunately this
function was applying TLSv1.2 sig algs rules and did not work correctly if
TLSv1.3 was negotiated.
We refactor tls_choose_sigalg to split it up and create a new function
find_sig_alg which can (optionally) take a certificate and key as
parameters and find an appropriate sig alg if one exists. If the cert and
key are not supplied then we try to find a cert and key from the ones we
have available that matches the shared sig algs.
Reviewed-by: Tomas Mraz <[email protected]>
(Merged from https://github.com/openssl/openssl/pull/9442) |
void CLASS redcine_load_raw()
{
#ifndef NO_JASPER
int c, row, col;
jas_stream_t *in;
jas_image_t *jimg;
jas_matrix_t *jmat;
jas_seqent_t *data;
ushort *img, *pix;
jas_init();
#ifndef LIBRAW_LIBRARY_BUILD
in = jas_stream_fopen (ifname, "rb");
#else
in = (jas_stream_t*)ifp->make_jas_stream();
if(!in)
throw LIBRAW_EXCEPTION_DECODE_JPEG2000;
#endif
jas_stream_seek (in, data_offset+20, SEEK_SET);
jimg = jas_image_decode (in, -1, 0);
#ifndef LIBRAW_LIBRARY_BUILD
if (!jimg) longjmp (failure, 3);
#else
if(!jimg)
{
jas_stream_close (in);
throw LIBRAW_EXCEPTION_DECODE_JPEG2000;
}
#endif
jmat = jas_matrix_create (height/2, width/2);
merror (jmat, "redcine_load_raw()");
img = (ushort *) calloc ((height+2), (width+2)*2);
merror (img, "redcine_load_raw()");
#ifdef LIBRAW_LIBRARY_BUILD
bool fastexitflag = false;
try {
#endif
FORC4 {
#ifdef LIBRAW_LIBRARY_BUILD
checkCancel();
#endif
jas_image_readcmpt (jimg, c, 0, 0, width/2, height/2, jmat);
data = jas_matrix_getref (jmat, 0, 0);
for (row = c >> 1; row < height; row+=2)
for (col = c & 1; col < width; col+=2)
img[(row+1)*(width+2)+col+1] = data[(row/2)*(width/2)+col/2];
}
for (col=1; col <= width; col++) {
img[col] = img[2*(width+2)+col];
img[(height+1)*(width+2)+col] = img[(height-1)*(width+2)+col];
}
for (row=0; row < height+2; row++) {
img[row*(width+2)] = img[row*(width+2)+2];
img[(row+1)*(width+2)-1] = img[(row+1)*(width+2)-3];
}
for (row=1; row <= height; row++) {
#ifdef LIBRAW_LIBRARY_BUILD
checkCancel();
#endif
pix = img + row*(width+2) + (col = 1 + (FC(row,1) & 1));
for ( ; col <= width; col+=2, pix+=2) {
c = (((pix[0] - 0x800) << 3) +
pix[-(width+2)] + pix[width+2] + pix[-1] + pix[1]) >> 2;
pix[0] = LIM(c,0,4095);
}
}
for (row=0; row < height; row++)
{
#ifdef LIBRAW_LIBRARY_BUILD
checkCancel();
#endif
for (col=0; col < width; col++)
RAW(row,col) = curve[img[(row+1)*(width+2)+col+1]];
}
#ifdef LIBRAW_LIBRARY_BUILD
} catch (...) {
fastexitflag=true;
}
#endif
free (img);
jas_matrix_destroy (jmat);
jas_image_destroy (jimg);
jas_stream_close (in);
#ifdef LIBRAW_LIBRARY_BUILD
if(fastexitflag)
throw LIBRAW_EXCEPTION_CANCELLED_BY_CALLBACK;
#endif
#endif
} | 0 | [] | LibRaw | 9ae25d8c3a6bfb40c582538193264f74c9b93bc0 | 300,279,864,316,522,920,000,000,000,000,000,000,000 | 89 | backported 0.15.4 datachecks |
template<typename t>
CImgList<_cimg_Tt> operator,(const CImgList<t>& list) const {
return CImgList<_cimg_Tt>(list,false).insert(*this,0); | 0 | [
"CWE-125"
] | CImg | 10af1e8c1ad2a58a0a3342a856bae63e8f257abb | 195,761,335,917,644,750,000,000,000,000,000,000,000 | 3 | Fix other issues in 'CImg<T>::load_bmp()'. |
struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb)
{
struct tcp_options_received tcp_opt;
struct inet_request_sock *ireq;
struct tcp_request_sock *treq;
struct ipv6_pinfo *np = inet6_sk(sk);
struct tcp_sock *tp = tcp_sk(sk);
const struct tcphdr *th = tcp_hdr(skb);
__u32 cookie = ntohl(th->ack_seq) - 1;
struct sock *ret = sk;
struct request_sock *req;
int mss;
struct dst_entry *dst;
__u8 rcv_wscale;
if (!sysctl_tcp_syncookies || !th->ack || th->rst)
goto out;
if (tcp_synq_no_recent_overflow(sk))
goto out;
mss = __cookie_v6_check(ipv6_hdr(skb), th, cookie);
if (mss == 0) {
NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_SYNCOOKIESFAILED);
goto out;
}
NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_SYNCOOKIESRECV);
/* check for timestamp cookie support */
memset(&tcp_opt, 0, sizeof(tcp_opt));
tcp_parse_options(skb, &tcp_opt, 0, NULL);
if (!cookie_timestamp_decode(&tcp_opt))
goto out;
ret = NULL;
req = inet_reqsk_alloc(&tcp6_request_sock_ops, sk, false);
if (!req)
goto out;
ireq = inet_rsk(req);
treq = tcp_rsk(req);
treq->tfo_listener = false;
if (security_inet_conn_request(sk, skb, req))
goto out_free;
req->mss = mss;
ireq->ir_rmt_port = th->source;
ireq->ir_num = ntohs(th->dest);
ireq->ir_v6_rmt_addr = ipv6_hdr(skb)->saddr;
ireq->ir_v6_loc_addr = ipv6_hdr(skb)->daddr;
if (ipv6_opt_accepted(sk, skb, &TCP_SKB_CB(skb)->header.h6) ||
np->rxopt.bits.rxinfo || np->rxopt.bits.rxoinfo ||
np->rxopt.bits.rxhlim || np->rxopt.bits.rxohlim) {
atomic_inc(&skb->users);
ireq->pktopts = skb;
}
ireq->ir_iif = sk->sk_bound_dev_if;
/* So that link locals have meaning */
if (!sk->sk_bound_dev_if &&
ipv6_addr_type(&ireq->ir_v6_rmt_addr) & IPV6_ADDR_LINKLOCAL)
ireq->ir_iif = tcp_v6_iif(skb);
ireq->ir_mark = inet_request_mark(sk, skb);
req->num_retrans = 0;
ireq->snd_wscale = tcp_opt.snd_wscale;
ireq->sack_ok = tcp_opt.sack_ok;
ireq->wscale_ok = tcp_opt.wscale_ok;
ireq->tstamp_ok = tcp_opt.saw_tstamp;
req->ts_recent = tcp_opt.saw_tstamp ? tcp_opt.rcv_tsval : 0;
treq->snt_synack.v64 = 0;
treq->rcv_isn = ntohl(th->seq) - 1;
treq->snt_isn = cookie;
/*
* We need to lookup the dst_entry to get the correct window size.
* This is taken from tcp_v6_syn_recv_sock. Somebody please enlighten
* me if there is a preferred way.
*/
{
struct in6_addr *final_p, final;
struct flowi6 fl6;
memset(&fl6, 0, sizeof(fl6));
fl6.flowi6_proto = IPPROTO_TCP;
fl6.daddr = ireq->ir_v6_rmt_addr;
final_p = fl6_update_dst(&fl6, rcu_dereference(np->opt), &final);
fl6.saddr = ireq->ir_v6_loc_addr;
fl6.flowi6_oif = sk->sk_bound_dev_if;
fl6.flowi6_mark = ireq->ir_mark;
fl6.fl6_dport = ireq->ir_rmt_port;
fl6.fl6_sport = inet_sk(sk)->inet_sport;
security_req_classify_flow(req, flowi6_to_flowi(&fl6));
dst = ip6_dst_lookup_flow(sk, &fl6, final_p);
if (IS_ERR(dst))
goto out_free;
}
req->rsk_window_clamp = tp->window_clamp ? :dst_metric(dst, RTAX_WINDOW);
tcp_select_initial_window(tcp_full_space(sk), req->mss,
&req->rsk_rcv_wnd, &req->rsk_window_clamp,
ireq->wscale_ok, &rcv_wscale,
dst_metric(dst, RTAX_INITRWND));
ireq->rcv_wscale = rcv_wscale;
ireq->ecn_ok = cookie_ecn_ok(&tcp_opt, sock_net(sk), dst);
ret = tcp_get_cookie_sock(sk, skb, req, dst);
out:
return ret;
out_free:
reqsk_free(req);
return NULL;
} | 0 | [
"CWE-416",
"CWE-284",
"CWE-264"
] | linux | 45f6fad84cc305103b28d73482b344d7f5b76f39 | 274,418,277,542,228,700,000,000,000,000,000,000,000 | 118 | ipv6: add complete rcu protection around np->opt
This patch addresses multiple problems :
UDP/RAW sendmsg() need to get a stable struct ipv6_txoptions
while socket is not locked : Other threads can change np->opt
concurrently. Dmitry posted a syzkaller
(http://github.com/google/syzkaller) program desmonstrating
use-after-free.
Starting with TCP/DCCP lockless listeners, tcp_v6_syn_recv_sock()
and dccp_v6_request_recv_sock() also need to use RCU protection
to dereference np->opt once (before calling ipv6_dup_options())
This patch adds full RCU protection to np->opt
Reported-by: Dmitry Vyukov <[email protected]>
Signed-off-by: Eric Dumazet <[email protected]>
Acked-by: Hannes Frederic Sowa <[email protected]>
Signed-off-by: David S. Miller <[email protected]> |
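The protection added here follows the standard RCU publish/read pattern: readers dereference the pointer once inside a read-side critical section, and writers publish a new copy, then free the old one only after a grace period. A hedged userspace sketch with liburcu — the struct and field names are placeholders, each thread is assumed to have called rcu_register_thread(), and the kernel's actual ipv6_txoptions handling is not reproduced:

```c
#include <urcu.h>      /* userspace RCU (liburcu), default flavor */
#include <stdlib.h>

struct opts { int hop_limit; };

static struct opts *global_opt;    /* RCU-protected pointer */

/* Reader: dereference once inside a read-side critical section. */
static int read_hop_limit(void)
{
    int v = 0;

    rcu_read_lock();
    struct opts *o = rcu_dereference(global_opt);
    if (o)
        v = o->hop_limit;          /* o cannot be freed while we hold the lock */
    rcu_read_unlock();
    return v;
}

/* Writer: publish the new options, free the old ones after a grace period. */
static void update_opts(struct opts *new_opt)
{
    struct opts *old = global_opt;

    rcu_assign_pointer(global_opt, new_opt);
    synchronize_rcu();             /* wait until no reader can still see `old` */
    free(old);
}
```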
vte_terminal_emit_restore_window(VteTerminal *terminal)
{
_vte_debug_print(VTE_DEBUG_SIGNALS,
"Emitting `restore-window'.\n");
g_signal_emit_by_name(terminal, "restore-window");
} | 0 | [] | vte | 58bc3a942f198a1a8788553ca72c19d7c1702b74 | 147,047,734,631,403,290,000,000,000,000,000,000,000 | 6 | fix bug #548272
svn path=/trunk/; revision=2365 |
TfLiteStatus Subgraph::SetVariables(std::vector<int> variables) {
TF_LITE_ENSURE_OK(&context_, CheckTensorIndices("variables", variables.data(),
variables.size()));
variables_ = std::move(variables);
return kTfLiteOk;
} | 0 | [
"CWE-20",
"CWE-787"
] | tensorflow | d58c96946b2880991d63d1dacacb32f0a4dfa453 | 102,457,867,979,016,010,000,000,000,000,000,000,000 | 6 | [tflite] Ensure inputs and outputs don't overlap.
If a model uses the same tensor for both an input and an output then this can result in data loss and memory corruption. This should not happen.
PiperOrigin-RevId: 332522916
Change-Id: If0905b142415a9dfceaf2d181872f2a8fb88f48a |
show_entry_machine (ExifEntry *e, void *data)
{
unsigned int *ids = data;
char v[TAG_VALUE_BUF];
ExifIfd ifd = exif_entry_get_ifd (e);
if (*ids) {
fprintf (stdout, "0x%04x", e->tag);
} else {
fputs (CN (exif_tag_get_title_in_ifd (e->tag, ifd)), stdout);
}
fputc ('\t', stdout);
fputs (CN (exif_entry_get_value (e, v, sizeof (v))), stdout);
fputc ('\n', stdout);
} | 0 | [
"CWE-476"
] | exif | f6334d9d32437ef13dc902f0a88a2be0063d9d1c | 243,593,963,538,360,440,000,000,000,000,000,000,000 | 15 | added empty strign check, which would lead to NULL ptr deref/crash in exif XML display. fixes https://github.com/libexif/exif/issues/4 |
int nghttp2_session_resume_data(nghttp2_session *session, int32_t stream_id) {
int rv;
nghttp2_stream *stream;
stream = nghttp2_session_get_stream(session, stream_id);
if (stream == NULL || !nghttp2_stream_check_deferred_item(stream)) {
return NGHTTP2_ERR_INVALID_ARGUMENT;
}
rv = nghttp2_stream_resume_deferred_item(stream,
NGHTTP2_STREAM_FLAG_DEFERRED_USER);
if (nghttp2_is_fatal(rv)) {
return rv;
}
return 0;
} | 0 | [] | nghttp2 | 0a6ce87c22c69438ecbffe52a2859c3a32f1620f | 124,554,535,995,979,930,000,000,000,000,000,000,000 | 17 | Add nghttp2_option_set_max_outbound_ack |
cJSON *cJSON_Duplicate(cJSON *item,int recurse)
{
cJSON *newitem,*cptr,*nptr=0,*newchild;
/* Bail on bad ptr */
if (!item) return 0;
/* Create new item */
newitem=cJSON_New_Item();
if (!newitem) return 0;
/* Copy over all vars */
newitem->type=item->type&(~cJSON_IsReference),newitem->valueint=item->valueint,newitem->valuedouble=item->valuedouble;
if (item->valuestring) {newitem->valuestring=cJSON_strdup(item->valuestring); if (!newitem->valuestring) {cJSON_Delete(newitem);return 0;}}
if (item->string) {newitem->string=cJSON_strdup(item->string); if (!newitem->string) {cJSON_Delete(newitem);return 0;}}
/* If non-recursive, then we're done! */
if (!recurse) return newitem;
/* Walk the ->next chain for the child. */
cptr=item->child;
while (cptr)
{
newchild=cJSON_Duplicate(cptr,1); /* Duplicate (with recurse) each item in the ->next chain */
if (!newchild) {cJSON_Delete(newitem);return 0;}
if (nptr) {nptr->next=newchild,newchild->prev=nptr;nptr=newchild;} /* If newitem->child already set, then crosswire ->prev and ->next and move on */
else {newitem->child=newchild;nptr=newchild;} /* Set newitem->child and move to it */
cptr=cptr->next;
}
return newitem;
} | 0 | [
"CWE-120",
"CWE-119",
"CWE-787"
] | iperf | 91f2fa59e8ed80dfbf400add0164ee0e508e412a | 116,706,774,455,562,400,000,000,000,000,000,000,000 | 26 | Fix a buffer overflow / heap corruption issue that could occur if a
malformed JSON string was passed on the control channel. This issue,
present in the cJSON library, was already fixed upstream, so was
addressed here in iperf3 by importing a newer version of cJSON (plus
local ESnet modifications).
Discovered and reported by Dave McDaniel, Cisco Talos.
Based on a patch by @dopheide-esnet, with input from @DaveGamble.
Cross-references: TALOS-CAN-0164, ESNET-SECADV-2016-0001,
CVE-2016-4303
(cherry picked from commit ed94082be27d971a5e1b08b666e2c217cf470a40)
Signed-off-by: Bruce A. Mah <[email protected]> |
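The underlying cJSON issue was a parser that trusted a malformed string token and wrote past its output buffer. A hedged, generic sketch of the defensive pattern — bound every copy by both the remaining input and the destination capacity (escape handling omitted; this is not the cJSON code):

```c
#include <stddef.h>

/*
 * Copy the quoted token starting at `src` (with `src_len` bytes remaining)
 * into dst, NUL-terminating it and never writing more than dst_cap bytes.
 * Returns the number of input bytes consumed, or 0 on malformed input.
 */
static size_t copy_string_token(char *dst, size_t dst_cap,
                                const char *src, size_t src_len)
{
    size_t i, o = 0;

    if (dst_cap == 0 || src_len == 0 || src[0] != '"')
        return 0;
    i = 1;                                   /* skip opening quote */
    while (i < src_len && src[i] != '"') {
        if (o + 1 >= dst_cap)                /* leave room for the NUL */
            return 0;
        dst[o++] = src[i++];
    }
    if (i >= src_len)                        /* no closing quote found */
        return 0;
    dst[o] = '\0';
    return i + 1;                            /* include the closing quote */
}
```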
static void unzzip_cat_file(ZZIP_DIR* disk, char* name, FILE* out)
{
ZZIP_FILE* file = zzip_file_open (disk, name, 0);
if (file)
{
char buffer[1024]; int len;
while ((len = zzip_file_read (file, buffer, 1024)))
{
fwrite (buffer, 1, len, out);
}
zzip_file_close (file);
}
} | 1 | [
"CWE-835"
] | zziplib | ac9ae39ef419e9f0f83da1e583314d8c7cda34a6 | 213,540,364,375,667,920,000,000,000,000,000,000,000 | 14 | #68 ssize_t return value of zzip_file_read is a signed value being possibly -1 |
static void queue_request(struct avdtp *session, struct pending_req *req,
gboolean priority)
{
if (priority)
session->prio_queue = g_slist_append(session->prio_queue, req);
else
session->req_queue = g_slist_append(session->req_queue, req);
} | 0 | [
"CWE-703"
] | bluez | 7a80d2096f1b7125085e21448112aa02f49f5e9a | 11,858,019,165,500,434,000,000,000,000,000,000,000 | 8 | avdtp: Fix accepting invalid/malformed capabilities
Check if capabilities are valid before attempting to copy them. |
**/
T& atNXY(const int pos, const int x, const int y, const int z, const int c, const T& out_value) {
return (pos<0 || pos>=width())?(cimg::temporary(out_value)=out_value):_data[pos].atXY(x,y,z,c,out_value); | 0 | [
"CWE-119",
"CWE-787"
] | CImg | ac8003393569aba51048c9d67e1491559877b1d1 | 165,203,300,049,162,900,000,000,000,000,000,000,000 | 3 | . |
Image::AutoPtr ImageFactory::open(const std::wstring& wpath, bool useCurl)
{
Image::AutoPtr image = open(ImageFactory::createIo(wpath, useCurl)); // may throw
if (image.get() == 0) throw WError(kerFileContainsUnknownImageType, wpath);
return image;
} | 0 | [
"CWE-835"
] | exiv2 | ae49250942f4395639961abeed3c15920fcd7241 | 139,519,184,386,269,220,000,000,000,000,000,000,000 | 6 | Check in Image::printIFDStructure if seek and reads are OK |
static void gdImageHLine(gdImagePtr im, int y, int x1, int x2, int col)
{
if (im->thick > 1) {
int thickhalf = im->thick >> 1;
gdImageFilledRectangle(im, x1, y - thickhalf, x2, y + im->thick - thickhalf - 1, col);
} else {
if (x2 < x1) {
int t = x2;
x2 = x1;
x1 = t;
}
for (;x1 <= x2; x1++) {
gdImageSetPixel(im, x1, y, col);
}
}
return;
} | 0 | [
"CWE-119"
] | php-src | e7f2356665c2569191a946b6fc35b437f0ae1384 | 91,038,157,956,486,540,000,000,000,000,000,000,000 | 18 | Fix #66387: Stack overflow with imagefilltoborder
The stack overflow is caused by the recursive algorithm in combination with a
very large negative coordinate passed to gdImageFillToBorder(). As there is
already a clipping for large positive coordinates to the width and height of
the image, it seems to be consequent to clip to zero also. |
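The clipping the commit describes is a plain clamp of the seed coordinates into the image bounds before the recursive fill starts. A hedged sketch of that helper, standalone rather than the libgd code:

```c
/* Clamp a coordinate into [0, limit-1]; callers skip the fill on empty images. */
static int clamp_coord(int v, int limit)
{
    if (v < 0)
        return 0;
    if (v >= limit)
        return limit - 1;
    return v;
}

/* usage sketch, before entering the recursive border fill:
 *   x = clamp_coord(x, image_width);
 *   y = clamp_coord(y, image_height);
 */
```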
GF_Err fiel_box_read(GF_Box *s, GF_BitStream *bs)
{
GF_FieldInfoBox *ptr = (GF_FieldInfoBox *)s;
ISOM_DECREASE_SIZE(s, 2);
ptr->field_count = gf_bs_read_u8(bs);
ptr->field_order = gf_bs_read_u8(bs);
return GF_OK;
} | 0 | [
"CWE-476"
] | gpac | 6170024568f4dda310e98ef7508477b425c58d09 | 296,333,299,803,844,900,000,000,000,000,000,000,000 | 10 | fixed potential crash - cf #1263 |
int VvcVpsUnit::deserialize()
{
int rez = VvcUnit::deserialize();
if (rez)
return rez;
try
{
vps_id = m_reader.getBits(4);
vps_max_layers = m_reader.getBits(6) + 1;
vps_max_sublayers = m_reader.getBits(3) + 1;
bool vps_default_ptl_dpb_hrd_max_tid_flag =
(vps_max_layers > 1 && vps_max_sublayers > 1) ? m_reader.getBit() : 1;
int vps_all_independent_layers_flag = (vps_max_layers > 1) ? m_reader.getBit() : 1;
for (int i = 0; i < vps_max_layers; i++)
{
m_reader.skipBits(6); // vps_layer_id[i]
if (i > 0 && !vps_all_independent_layers_flag)
{
if (!m_reader.getBit()) // vps_independent_layer_flag[i]
{
bool vps_max_tid_ref_present_flag = m_reader.getBit();
for (int j = 0; j < i; j++)
{
bool vps_direct_ref_layer_flag = m_reader.getBit();
if (vps_max_tid_ref_present_flag && vps_direct_ref_layer_flag)
m_reader.skipBits(3); // vps_max_tid_il_ref_pics_plus1[i][j]
}
}
}
}
bool vps_each_layer_is_an_ols_flag = 1;
int vps_num_ptls = 1;
int vps_ols_mode_idc = 2;
int olsModeIdc = 4;
int TotalNumOlss = vps_max_layers;
if (vps_max_layers > 1)
{
vps_each_layer_is_an_ols_flag = 0;
if (vps_all_independent_layers_flag)
vps_each_layer_is_an_ols_flag = m_reader.getBit();
if (!vps_each_layer_is_an_ols_flag)
{
if (!vps_all_independent_layers_flag)
{
vps_ols_mode_idc = m_reader.getBits(2);
olsModeIdc = vps_ols_mode_idc;
}
if (vps_ols_mode_idc == 2)
{
int vps_num_output_layer_sets_minus2 = m_reader.getBits(8);
TotalNumOlss = vps_num_output_layer_sets_minus2 + 2;
for (int i = 1; i <= vps_num_output_layer_sets_minus2 + 1; i++)
for (int j = 0; j < vps_max_layers; j++) m_reader.skipBit(); // vps_ols_output_layer_flag[i][j]
}
}
vps_num_ptls = m_reader.getBits(8) + 1;
}
std::vector<bool> vps_pt_present_flag;
std::vector<int> vps_ptl_max_tid;
vps_pt_present_flag.resize(vps_num_ptls);
vps_ptl_max_tid.resize(vps_num_ptls);
for (int i = 0; i < vps_num_ptls; i++)
{
if (i > 0)
vps_pt_present_flag[i] = m_reader.getBit();
if (!vps_default_ptl_dpb_hrd_max_tid_flag)
vps_ptl_max_tid[i] = m_reader.getBits(3);
}
m_reader.skipBits(m_reader.getBitsLeft() % 8); // vps_ptl_alignment_zero_bit
for (int i = 0; i < vps_num_ptls; i++)
if (profile_tier_level(vps_pt_present_flag[i], vps_ptl_max_tid[i]) != 0)
return 1;
for (int i = 0; i < TotalNumOlss; i++)
if (vps_num_ptls > 1 && vps_num_ptls != TotalNumOlss)
m_reader.skipBits(8); // vps_ols_ptl_idx[i]
if (!vps_each_layer_is_an_ols_flag)
{
unsigned NumMultiLayerOlss = 0;
int NumLayersInOls = 0;
for (int i = 1; i < TotalNumOlss; i++)
{
if (vps_each_layer_is_an_ols_flag)
NumLayersInOls = 1;
else if (vps_ols_mode_idc == 0 || vps_ols_mode_idc == 1)
NumLayersInOls = i + 1;
else if (vps_ols_mode_idc == 2)
{
int j = 0;
for (int k = 0; k < vps_max_layers; k++) NumLayersInOls = j;
}
if (NumLayersInOls > 1)
NumMultiLayerOlss++;
}
unsigned vps_num_dpb_params = extractUEGolombCode() + 1;
if (vps_num_dpb_params >= NumMultiLayerOlss)
return 1;
unsigned VpsNumDpbParams;
if (vps_each_layer_is_an_ols_flag)
VpsNumDpbParams = 0;
else
VpsNumDpbParams = vps_num_dpb_params;
bool vps_sublayer_dpb_params_present_flag =
(vps_max_sublayers > 1) ? vps_sublayer_dpb_params_present_flag = m_reader.getBit() : 0;
for (size_t i = 0; i < VpsNumDpbParams; i++)
{
int vps_dpb_max_tid = vps_max_sublayers - 1;
if (!vps_default_ptl_dpb_hrd_max_tid_flag)
vps_dpb_max_tid = m_reader.getBits(3);
if (dpb_parameters(vps_dpb_max_tid, vps_sublayer_dpb_params_present_flag))
return 1;
}
for (size_t i = 0; i < NumMultiLayerOlss; i++)
{
extractUEGolombCode(); // vps_ols_dpb_pic_width
extractUEGolombCode(); // vps_ols_dpb_pic_height
m_reader.skipBits(2); // vps_ols_dpb_chroma_format
unsigned vps_ols_dpb_bitdepth_minus8 = extractUEGolombCode();
if (vps_ols_dpb_bitdepth_minus8 > 2)
return 1;
if (VpsNumDpbParams > 1 && VpsNumDpbParams != NumMultiLayerOlss)
{
unsigned vps_ols_dpb_params_idx = extractUEGolombCode();
if (vps_ols_dpb_params_idx >= VpsNumDpbParams)
return 1;
}
}
bool vps_timing_hrd_params_present_flag = m_reader.getBit();
if (vps_timing_hrd_params_present_flag)
{
if (m_vps_hrd.general_timing_hrd_parameters())
return 1;
bool vps_sublayer_cpb_params_present_flag = (vps_max_sublayers > 1) ? m_reader.getBit() : 0;
unsigned vps_num_ols_timing_hrd_params = extractUEGolombCode() + 1;
if (vps_num_ols_timing_hrd_params > NumMultiLayerOlss)
return 1;
for (size_t i = 0; i <= vps_num_ols_timing_hrd_params; i++)
{
int vps_hrd_max_tid = 1;
if (!vps_default_ptl_dpb_hrd_max_tid_flag)
vps_hrd_max_tid = m_reader.getBits(3);
int firstSubLayer = vps_sublayer_cpb_params_present_flag ? 0 : vps_hrd_max_tid;
m_vps_hrd.ols_timing_hrd_parameters(firstSubLayer, vps_hrd_max_tid);
}
if (vps_num_ols_timing_hrd_params > 1 && vps_num_ols_timing_hrd_params != NumMultiLayerOlss)
{
for (size_t i = 0; i < NumMultiLayerOlss; i++)
{
unsigned vps_ols_timing_hrd_idx = extractUEGolombCode();
if (vps_ols_timing_hrd_idx >= vps_num_ols_timing_hrd_params)
return 1;
}
}
}
}
return rez;
}
catch (VodCoreException& e)
{
return NOT_ENOUGH_BUFFER;
}
} | 0 | [
"CWE-22"
] | tsMuxer | 3763dd34755a8944d903aa19578fa22cd3734165 | 123,442,046,737,267,450,000,000,000,000,000,000,000 | 173 | Fix Buffer Overflow
Fixes issue #509. |
static void coroutine_fn v9fs_op_not_supp(void *opaque)
{
V9fsPDU *pdu = opaque;
pdu_complete(pdu, -EOPNOTSUPP);
} | 0 | [
"CWE-362"
] | qemu | 89fbea8737e8f7b954745a1ffc4238d377055305 | 117,859,872,142,294,370,000,000,000,000,000,000,000 | 5 | 9pfs: Fully restart unreclaim loop (CVE-2021-20181)
Depending on the client activity, the server can be asked to open a huge
number of file descriptors and eventually hit RLIMIT_NOFILE. This is
currently mitigated using a reclaim logic : the server closes the file
descriptors of idle fids, based on the assumption that it will be able
to re-open them later. This assumption doesn't hold of course if the
client requests the file to be unlinked. In this case, we loop on the
entire fid list and mark all related fids as unreclaimable (the reclaim
logic will just ignore them) and, of course, we open or re-open their
file descriptors if needed since we're about to unlink the file.
This is the purpose of v9fs_mark_fids_unreclaim(). Since the actual
opening of a file can cause the coroutine to yield, another client
request could possibly add a new fid that we may want to mark as
non-reclaimable as well. The loop is thus restarted if the re-open
request was actually transmitted to the backend. This is achieved
by keeping a reference on the first fid (head) before traversing
the list.
This is wrong in several ways:
- a potential clunk request from the client could tear the first
fid down and cause the reference to be stale. This leads to a
use-after-free error that can be detected with ASAN, using a
custom 9p client
- fids are added at the head of the list : restarting from the
previous head will always miss fids added by a some other
potential request
All these problems could be avoided if fids were being added at the
end of the list. This can be achieved with a QSIMPLEQ, but this is
probably too much change for a bug fix. For now let's keep it
simple and just restart the loop from the current head.
Fixes: CVE-2021-20181
Buglink: https://bugs.launchpad.net/qemu/+bug/1911666
Reported-by: Zero Day Initiative <[email protected]>
Reviewed-by: Christian Schoenebeck <[email protected]>
Reviewed-by: Stefano Stabellini <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Greg Kurz <[email protected]> |
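The safe-iteration idea here — never keep a pointer into the list across a point where the coroutine may yield, and restart from whatever the head is now — can be sketched with an ordinary singly-linked list. Reference counting, locking, and QEMU's actual fid/coroutine types are deliberately omitted; the names below are invented for illustration:

```c
#include <stdbool.h>
#include <stddef.h>

struct fid {
    struct fid *next;
    bool unreclaimable;
    bool needs_reopen;
};

/* May block/yield; other requests can add or remove fids meanwhile. */
extern bool reopen_fid(struct fid *f);

static void mark_all_unreclaimable(struct fid **head)
{
again:
    for (struct fid *f = *head; f != NULL; f = f->next) {
        f->unreclaimable = true;
        if (f->needs_reopen) {
            f->needs_reopen = false;
            reopen_fid(f);      /* may yield...                               */
            goto again;         /* ...so re-read *head, don't trust old nodes */
        }
    }
}
```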
job_still_running(void)
{
struct job *job;
LIST_FOREACH(job, &all_jobs, entry) {
if ((~job->flags & JOB_NOWAIT) && job->state == JOB_RUNNING)
return (1);
}
return (0);
} | 0 | [] | src | b32e1d34e10a0da806823f57f02a4ae6e93d756e | 135,343,172,075,003,060,000,000,000,000,000,000,000 | 10 | evbuffer_new and bufferevent_new can both fail (when malloc fails) and
return NULL. GitHub issue 1547. |
igmp_scount(struct ip_mc_list *pmc, int type, int gdeleted, int sdeleted)
{
struct ip_sf_list *psf;
int scount = 0;
for (psf=pmc->sources; psf; psf=psf->sf_next) {
if (!is_in(pmc, psf, type, gdeleted, sdeleted))
continue;
scount++;
}
return scount;
} | 0 | [
"CWE-399",
"CWE-703",
"CWE-369"
] | linux | a8c1f65c79cbbb2f7da782d4c9d15639a9b94b27 | 145,980,978,627,234,960,000,000,000,000,000,000,000 | 12 | igmp: Avoid zero delay when receiving odd mixture of IGMP queries
Commit 5b7c84066733c5dfb0e4016d939757b38de189e4 ('ipv4: correct IGMP
behavior on v3 query during v2-compatibility mode') added yet another
case for query parsing, which can result in max_delay = 0. Substitute
a value of 1, as in the usual v3 case.
Reported-by: Simon McVittie <[email protected]>
References: http://bugs.debian.org/654876
Signed-off-by: Ben Hutchings <[email protected]>
Signed-off-by: David S. Miller <[email protected]> |
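The substitution mentioned above is just a floor on the computed delay; a one-function sketch of the pattern:

```c
/* Never schedule a zero delay: floor the computed value at one tick,
 * matching what the v3 query path already does. */
static unsigned long nonzero_delay(unsigned long max_delay)
{
    return max_delay ? max_delay : 1;
}
```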
static int jas_cmputint(long **bufptr, int sgnd, int prec, long val)
{
int m;
if (sgnd) {
m = (1 << (prec - 1));
if (val < -m || val >= m)
return -1;
} else {
if (val < 0 || val >= (1 << prec))
return -1;
}
**bufptr = val;
++(*bufptr);
return 0;
} | 0 | [
"CWE-189"
] | jasper | 3c55b399c36ef46befcb21e4ebc4799367f89684 | 175,097,441,858,662,900,000,000,000,000,000,000,000 | 15 | At many places in the code, jas_malloc or jas_recalloc was being
invoked with the size argument being computed in a manner that would not
allow integer overflow to be detected. Now, these places in the code
have been modified to use special-purpose memory allocation functions
(e.g., jas_alloc2, jas_alloc3, jas_realloc2) that check for overflow.
This should fix many security problems. |
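The special-purpose allocators named in this message (jas_alloc2, jas_alloc3, jas_realloc2) exist so that the size computation itself is overflow-safe. A hedged sketch of the idea — the real JasPer signatures may differ:

```c
#include <stdint.h>
#include <stdlib.h>

/* Allocate num * size bytes, failing cleanly if the product would overflow. */
static void *alloc2_checked(size_t num, size_t size)
{
    if (size != 0 && num > SIZE_MAX / size)
        return NULL;
    return malloc(num * size);
}

/* Allocate num1 * num2 * size bytes with the same overflow checks. */
static void *alloc3_checked(size_t num1, size_t num2, size_t size)
{
    if (num2 != 0 && num1 > SIZE_MAX / num2)
        return NULL;
    return alloc2_checked(num1 * num2, size);
}
```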
f_py3eval(typval_T *argvars, typval_T *rettv)
{
char_u *str;
char_u buf[NUMBUFLEN];
if (p_pyx == 0)
p_pyx = 3;
str = tv_get_string_buf(&argvars[0], buf);
do_py3eval(str, rettv);
} | 1 | [
"CWE-78"
] | vim | 8c62a08faf89663e5633dc5036cd8695c80f1075 | 191,800,715,166,976,230,000,000,000,000,000,000,000 | 11 | patch 8.1.0881: can execute shell commands in rvim through interfaces
Problem: Can execute shell commands in rvim through interfaces.
Solution: Disable using interfaces in restricted mode. Allow for writing
file with writefile(), histadd() and a few others. |
static void SVGEndElement(void *context,const xmlChar *name)
{
SVGInfo
*svg_info;
/*
Called when the end of an element has been detected.
*/
(void) LogMagickEvent(CoderEvent,GetMagickModule(),
" SAX.endElement(%s)",name);
svg_info=(SVGInfo *) context;
if (strchr((char *) name,':') != (char *) NULL)
{
/*
Skip over namespace.
*/
for ( ; *name != ':'; name++) ;
name++;
}
switch (*name)
{
case 'C':
case 'c':
{
if (LocaleCompare((const char *) name,"circle") == 0)
{
(void) FormatLocaleFile(svg_info->file,"class \"circle\"\n");
(void) FormatLocaleFile(svg_info->file,"circle %g,%g %g,%g\n",
svg_info->element.cx,svg_info->element.cy,svg_info->element.cx,
svg_info->element.cy+svg_info->element.minor);
(void) FormatLocaleFile(svg_info->file,"pop graphic-context\n");
break;
}
if (LocaleCompare((const char *) name,"clipPath") == 0)
{
(void) FormatLocaleFile(svg_info->file,"pop clip-path\n");
break;
}
break;
}
case 'D':
case 'd':
{
if (LocaleCompare((const char *) name,"defs") == 0)
{
(void) FormatLocaleFile(svg_info->file,"pop defs\n");
break;
}
if (LocaleCompare((const char *) name,"desc") == 0)
{
register char
*p;
if (*svg_info->text == '\0')
break;
(void) fputc('#',svg_info->file);
for (p=svg_info->text; *p != '\0'; p++)
{
(void) fputc(*p,svg_info->file);
if (*p == '\n')
(void) fputc('#',svg_info->file);
}
(void) fputc('\n',svg_info->file);
*svg_info->text='\0';
break;
}
break;
}
case 'E':
case 'e':
{
if (LocaleCompare((const char *) name,"ellipse") == 0)
{
double
angle;
(void) FormatLocaleFile(svg_info->file,"class \"ellipse\"\n");
angle=svg_info->element.angle;
(void) FormatLocaleFile(svg_info->file,"ellipse %g,%g %g,%g 0,360\n",
svg_info->element.cx,svg_info->element.cy,
angle == 0.0 ? svg_info->element.major : svg_info->element.minor,
angle == 0.0 ? svg_info->element.minor : svg_info->element.major);
(void) FormatLocaleFile(svg_info->file,"pop graphic-context\n");
break;
}
break;
}
case 'F':
case 'f':
{
if (LocaleCompare((const char *) name,"foreignObject") == 0)
{
(void) FormatLocaleFile(svg_info->file,"pop graphic-context\n");
break;
}
break;
}
case 'G':
case 'g':
{
if (LocaleCompare((const char *) name,"g") == 0)
{
(void) FormatLocaleFile(svg_info->file,"pop graphic-context\n");
break;
}
break;
}
case 'I':
case 'i':
{
if (LocaleCompare((const char *) name,"image") == 0)
{
(void) FormatLocaleFile(svg_info->file,
"image Over %g,%g %g,%g \"%s\"\n",svg_info->bounds.x,
svg_info->bounds.y,svg_info->bounds.width,svg_info->bounds.height,
svg_info->url);
(void) FormatLocaleFile(svg_info->file,"pop graphic-context\n");
break;
}
break;
}
case 'L':
case 'l':
{
if (LocaleCompare((const char *) name,"line") == 0)
{
(void) FormatLocaleFile(svg_info->file,"class \"line\"\n");
(void) FormatLocaleFile(svg_info->file,"line %g,%g %g,%g\n",
svg_info->segment.x1,svg_info->segment.y1,svg_info->segment.x2,
svg_info->segment.y2);
(void) FormatLocaleFile(svg_info->file,"pop graphic-context\n");
break;
}
if (LocaleCompare((const char *) name,"linearGradient") == 0)
{
(void) FormatLocaleFile(svg_info->file,"pop gradient\n");
break;
}
break;
}
case 'M':
case 'm':
{
if (LocaleCompare((const char *) name,"mask") == 0)
{
(void) FormatLocaleFile(svg_info->file,"pop mask\n");
break;
}
break;
}
case 'P':
case 'p':
{
if (LocaleCompare((const char *) name,"pattern") == 0)
{
(void) FormatLocaleFile(svg_info->file,"pop pattern\n");
break;
}
if (LocaleCompare((const char *) name,"path") == 0)
{
(void) FormatLocaleFile(svg_info->file,"class \"path\"\n");
(void) FormatLocaleFile(svg_info->file,"path \"%s\"\n",
svg_info->vertices);
(void) FormatLocaleFile(svg_info->file,"pop graphic-context\n");
break;
}
if (LocaleCompare((const char *) name,"polygon") == 0)
{
(void) FormatLocaleFile(svg_info->file,"class \"polygon\"\n");
(void) FormatLocaleFile(svg_info->file,"polygon %s\n",
svg_info->vertices);
(void) FormatLocaleFile(svg_info->file,"pop graphic-context\n");
break;
}
if (LocaleCompare((const char *) name,"polyline") == 0)
{
(void) FormatLocaleFile(svg_info->file,"class \"polyline\"\n");
(void) FormatLocaleFile(svg_info->file,"polyline %s\n",
svg_info->vertices);
(void) FormatLocaleFile(svg_info->file,"pop graphic-context\n");
break;
}
break;
}
case 'R':
case 'r':
{
if (LocaleCompare((const char *) name,"radialGradient") == 0)
{
(void) FormatLocaleFile(svg_info->file,"pop gradient\n");
break;
}
if (LocaleCompare((const char *) name,"rect") == 0)
{
if ((svg_info->radius.x == 0.0) && (svg_info->radius.y == 0.0))
{
(void) FormatLocaleFile(svg_info->file,"class \"rect\"\n");
if ((fabs(svg_info->bounds.width-1.0) < MagickEpsilon) &&
(fabs(svg_info->bounds.height-1.0) < MagickEpsilon))
(void) FormatLocaleFile(svg_info->file,"point %g,%g\n",
svg_info->bounds.x,svg_info->bounds.y);
else
(void) FormatLocaleFile(svg_info->file,
"rectangle %g,%g %g,%g\n",svg_info->bounds.x,
svg_info->bounds.y,svg_info->bounds.x+svg_info->bounds.width,
svg_info->bounds.y+svg_info->bounds.height);
(void) FormatLocaleFile(svg_info->file,"pop graphic-context\n");
break;
}
if (svg_info->radius.x == 0.0)
svg_info->radius.x=svg_info->radius.y;
if (svg_info->radius.y == 0.0)
svg_info->radius.y=svg_info->radius.x;
(void) FormatLocaleFile(svg_info->file,
"roundRectangle %g,%g %g,%g %g,%g\n",
svg_info->bounds.x,svg_info->bounds.y,svg_info->bounds.x+
svg_info->bounds.width,svg_info->bounds.y+svg_info->bounds.height,
svg_info->radius.x,svg_info->radius.y);
svg_info->radius.x=0.0;
svg_info->radius.y=0.0;
(void) FormatLocaleFile(svg_info->file,"pop graphic-context\n");
break;
}
break;
}
case 'S':
case 's':
{
if (LocaleCompare((const char *) name,"stop") == 0)
{
(void) FormatLocaleFile(svg_info->file,"stop-color \"%s\" %s\n",
svg_info->stop_color,svg_info->offset);
break;
}
if (LocaleCompare((char *) name,"style") == 0)
{
char
*keyword,
**tokens,
*value;
register ssize_t
j;
size_t
number_tokens;
/*
Find style definitions in svg_info->text.
*/
tokens=SVGKeyValuePairs(context,'{','}',svg_info->text,
&number_tokens);
if (tokens == (char **) NULL)
break;
for (j=0; j < (ssize_t) (number_tokens-1); j+=2)
{
keyword=(char *) tokens[j];
value=(char *) tokens[j+1];
(void) FormatLocaleFile(svg_info->file,"push class \"%s\"\n",
*keyword == '.' ? keyword+1 : keyword);
SVGProcessStyleElement(context,name,value);
(void) FormatLocaleFile(svg_info->file,"pop class\n");
}
break;
}
if (LocaleCompare((const char *) name,"svg") == 0)
{
(void) FormatLocaleFile(svg_info->file,"pop graphic-context\n");
svg_info->svgDepth--;
break;
}
if (LocaleCompare((const char *) name,"symbol") == 0)
{
(void) FormatLocaleFile(svg_info->file,"pop symbol\n");
break;
}
break;
}
case 'T':
case 't':
{
if (LocaleCompare((const char *) name,"text") == 0)
{
if (*svg_info->text != '\0')
{
char
*text;
SVGStripString(MagickTrue,svg_info->text);
text=EscapeString(svg_info->text,'\"');
(void) FormatLocaleFile(svg_info->file,"text 0,0 \"%s\"\n",text);
text=DestroyString(text);
*svg_info->text='\0';
svg_info->center.x=0.0;
svg_info->center.y=0.0;
}
(void) FormatLocaleFile(svg_info->file,"pop graphic-context\n");
break;
}
if (LocaleCompare((const char *) name,"tspan") == 0)
{
if (*svg_info->text != '\0')
{
char
*text;
(void) FormatLocaleFile(svg_info->file,"class \"tspan\"\n");
text=EscapeString(svg_info->text,'\"');
(void) FormatLocaleFile(svg_info->file,"text %g,%g \"%s\"\n",
svg_info->bounds.x-svg_info->center.x,svg_info->bounds.y-
svg_info->center.y,text);
text=DestroyString(text);
*svg_info->text='\0';
}
(void) FormatLocaleFile(svg_info->file,"pop graphic-context\n");
break;
}
if (LocaleCompare((const char *) name,"title") == 0)
{
if (*svg_info->text == '\0')
break;
(void) CloneString(&svg_info->title,svg_info->text);
*svg_info->text='\0';
break;
}
break;
}
case 'U':
case 'u':
{
if (LocaleCompare((char *) name,"use") == 0)
{
if ((svg_info->bounds.x != 0.0) || (svg_info->bounds.y != 0.0))
(void) FormatLocaleFile(svg_info->file,"translate %g,%g\n",
svg_info->bounds.x,svg_info->bounds.y);
(void) FormatLocaleFile(svg_info->file,"use \"url(%s)\"\n",
svg_info->url);
(void) FormatLocaleFile(svg_info->file,"pop graphic-context\n");
break;
}
break;
}
default:
break;
}
*svg_info->text='\0';
(void) memset(&svg_info->element,0,sizeof(svg_info->element));
(void) memset(&svg_info->segment,0,sizeof(svg_info->segment));
svg_info->n--;
} | 1 | [
"CWE-401"
] | ImageMagick6 | e3417aebe17cbe274b7361aa92c83226ca5b646b | 213,422,284,416,318,250,000,000,000,000,000,000,000 | 350 | https://github.com/ImageMagick/ImageMagick/issues/1533 |
static int handle_invept(struct kvm_vcpu *vcpu)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
u32 vmx_instruction_info, types;
unsigned long type;
gva_t gva;
struct x86_exception e;
struct {
u64 eptp, gpa;
} operand;
if (!(vmx->nested.msrs.secondary_ctls_high &
SECONDARY_EXEC_ENABLE_EPT) ||
!(vmx->nested.msrs.ept_caps & VMX_EPT_INVEPT_BIT)) {
kvm_queue_exception(vcpu, UD_VECTOR);
return 1;
}
if (!nested_vmx_check_permission(vcpu))
return 1;
vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
type = kvm_register_readl(vcpu, (vmx_instruction_info >> 28) & 0xf);
types = (vmx->nested.msrs.ept_caps >> VMX_EPT_EXTENT_SHIFT) & 6;
if (type >= 32 || !(types & (1 << type))) {
nested_vmx_failValid(vcpu,
VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID);
return kvm_skip_emulated_instruction(vcpu);
}
/* According to the Intel VMX instruction reference, the memory
* operand is read even if it isn't needed (e.g., for type==global)
*/
if (get_vmx_mem_address(vcpu, vmcs_readl(EXIT_QUALIFICATION),
vmx_instruction_info, false, &gva))
return 1;
if (kvm_read_guest_virt(&vcpu->arch.emulate_ctxt, gva, &operand,
sizeof(operand), &e)) {
kvm_inject_page_fault(vcpu, &e);
return 1;
}
switch (type) {
case VMX_EPT_EXTENT_GLOBAL:
/*
* TODO: track mappings and invalidate
* single context requests appropriately
*/
case VMX_EPT_EXTENT_CONTEXT:
kvm_mmu_sync_roots(vcpu);
kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
nested_vmx_succeed(vcpu);
break;
default:
BUG_ON(1);
break;
}
return kvm_skip_emulated_instruction(vcpu);
} | 0 | [
"CWE-284"
] | linux | 727ba748e110b4de50d142edca9d6a9b7e6111d8 | 126,979,135,897,222,730,000,000,000,000,000,000,000 | 62 | kvm: nVMX: Enforce cpl=0 for VMX instructions
VMX instructions executed inside a L1 VM will always trigger a VM exit
even when executed with cpl 3. This means we must perform the
privilege check in software.
Fixes: 70f3aac964ae("kvm: nVMX: Remove superfluous VMX instruction fault checks")
Cc: [email protected]
Signed-off-by: Felix Wilhelm <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]> |
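The privilege check described above has to live in software because the CPU still exits to the hypervisor even when a guest running at CPL 3 issues a VMX instruction. A minimal, self-contained model of that gate (struct vcpu, get_cpl() and inject_gp() are illustrative stand-ins, not KVM's API) might look like this:

#include <stdio.h>

struct vcpu { int cpl; };               /* stand-in for the real vCPU state */

static int get_cpl(struct vcpu *v) { return v->cpl; }
static void inject_gp(struct vcpu *v) { (void)v; printf("inject #GP(0)\n"); }

/* Model of the permission gate: VMX instructions are only legal at CPL 0. */
static int vmx_check_permission(struct vcpu *v)
{
    if (get_cpl(v) != 0) {      /* guest ring 3 attempted a VMX instruction */
        inject_gp(v);           /* reflect a fault instead of emulating it  */
        return 0;
    }
    return 1;
}

int main(void)
{
    struct vcpu user = { .cpl = 3 }, kern = { .cpl = 0 };
    printf("cpl3 allowed: %d\n", vmx_check_permission(&user)); /* 0 */
    printf("cpl0 allowed: %d\n", vmx_check_permission(&kern)); /* 1 */
    return 0;
}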
void CModule::OnModCTCP(const CString& sMessage) {} | 0 | [
"CWE-20",
"CWE-264"
] | znc | 8de9e376ce531fe7f3c8b0aa4876d15b479b7311 | 272,146,748,093,147,400,000,000,000,000,000,000,000 | 1 | Fix remote code execution and privilege escalation vulnerability.
To trigger this, need to have a user already.
Thanks for Jeriko One <[email protected]> for finding and reporting this.
CVE-2019-12816 |
free_akl (struct akl *akl)
{
if (akl->spec)
free_keyserver_spec (akl->spec);
xfree (akl);
} | 0 | [
"CWE-310"
] | gnupg | 4bde12206c5bf199dc6e12a74af8da4558ba41bf | 271,057,681,727,584,280,000,000,000,000,000,000,000 | 7 | gpg: Distinguish between missing and cleared key flags.
* include/cipher.h (PUBKEY_USAGE_NONE): New.
* g10/getkey.c (parse_key_usage): Set new flag.
--
We do not want to use the default capabilities (derived from the
algorithm) if any key flags are given in a signature. Thus if key
flags are used in any way, the default key capabilities are never
used.
This allows to create a key with key flags set to all zero so it can't
be used. This better reflects common sense. |
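The behaviour described above — once a key-flags subpacket exists, the algorithm defaults are never substituted, so all-zero flags yield an unusable key — can be modelled by a small helper. The flag constants below are illustrative, not GnuPG's internal values:

#include <stdio.h>

#define USAGE_SIG  0x01
#define USAGE_ENC  0x02
#define USAGE_NONE 0x80   /* "key flags present but all cleared" marker */

/* have_subpkt: a key-flags subpacket exists; flags: its raw bits. */
static unsigned key_usage(int have_subpkt, unsigned flags, unsigned algo_default)
{
    if (!have_subpkt)
        return algo_default;          /* no flags at all: fall back to algo  */
    if (flags == 0)
        return USAGE_NONE;            /* cleared flags: key may not be used  */
    return flags;                     /* otherwise honour exactly what's set */
}

int main(void)
{
    printf("%#x\n", key_usage(0, 0, USAGE_SIG | USAGE_ENC));         /* default  */
    printf("%#x\n", key_usage(1, 0, USAGE_SIG | USAGE_ENC));         /* NONE     */
    printf("%#x\n", key_usage(1, USAGE_SIG, USAGE_SIG | USAGE_ENC)); /* explicit */
    return 0;
}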
brcmf_set_wsec_mode(struct net_device *ndev,
struct cfg80211_connect_params *sme)
{
struct brcmf_if *ifp = netdev_priv(ndev);
struct wiphy *wiphy = ifp->drvr->wiphy;
struct brcmf_cfg80211_profile *profile = ndev_to_prof(ndev);
struct brcmf_cfg80211_security *sec;
s32 pval = 0;
s32 gval = 0;
s32 wsec;
s32 err = 0;
if (sme->crypto.n_ciphers_pairwise) {
switch (sme->crypto.ciphers_pairwise[0]) {
case WLAN_CIPHER_SUITE_WEP40:
case WLAN_CIPHER_SUITE_WEP104:
pval = WEP_ENABLED;
break;
case WLAN_CIPHER_SUITE_TKIP:
pval = TKIP_ENABLED;
break;
case WLAN_CIPHER_SUITE_CCMP:
pval = AES_ENABLED;
break;
case WLAN_CIPHER_SUITE_AES_CMAC:
pval = AES_ENABLED;
break;
default:
bphy_err(wiphy, "invalid cipher pairwise (%d)\n",
sme->crypto.ciphers_pairwise[0]);
return -EINVAL;
}
}
if (sme->crypto.cipher_group) {
switch (sme->crypto.cipher_group) {
case WLAN_CIPHER_SUITE_WEP40:
case WLAN_CIPHER_SUITE_WEP104:
gval = WEP_ENABLED;
break;
case WLAN_CIPHER_SUITE_TKIP:
gval = TKIP_ENABLED;
break;
case WLAN_CIPHER_SUITE_CCMP:
gval = AES_ENABLED;
break;
case WLAN_CIPHER_SUITE_AES_CMAC:
gval = AES_ENABLED;
break;
default:
bphy_err(wiphy, "invalid cipher group (%d)\n",
sme->crypto.cipher_group);
return -EINVAL;
}
}
brcmf_dbg(CONN, "pval (%d) gval (%d)\n", pval, gval);
/* In case of privacy, but no security and WPS then simulate */
/* setting AES. WPS-2.0 allows no security */
if (brcmf_find_wpsie(sme->ie, sme->ie_len) && !pval && !gval &&
sme->privacy)
pval = AES_ENABLED;
wsec = pval | gval;
err = brcmf_fil_bsscfg_int_set(ifp, "wsec", wsec);
if (err) {
bphy_err(wiphy, "error (%d)\n", err);
return err;
}
sec = &profile->sec;
sec->cipher_pairwise = sme->crypto.ciphers_pairwise[0];
sec->cipher_group = sme->crypto.cipher_group;
return err;
} | 0 | [
"CWE-787"
] | linux | 1b5e2423164b3670e8bc9174e4762d297990deff | 102,778,870,967,352,820,000,000,000,000,000,000,000 | 75 | brcmfmac: assure SSID length from firmware is limited
The SSID length as received from firmware should not exceed
IEEE80211_MAX_SSID_LEN as that would result in heap overflow.
Reviewed-by: Hante Meuleman <[email protected]>
Reviewed-by: Pieter-Paul Giesberts <[email protected]>
Reviewed-by: Franky Lin <[email protected]>
Signed-off-by: Arend van Spriel <[email protected]>
Signed-off-by: Kalle Valo <[email protected]> |
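The fix amounts to clamping a device-supplied length before copying into a fixed buffer. A stand-alone sketch of that pattern (the 32-byte limit matches IEEE80211_MAX_SSID_LEN; the structure names are illustrative, not the driver's):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_SSID_LEN 32   /* IEEE80211_MAX_SSID_LEN */

struct ssid { uint8_t len; uint8_t buf[MAX_SSID_LEN]; };

/* Copy an SSID reported by an untrusted source, clamping its length. */
static void copy_ssid(struct ssid *dst, const uint8_t *src, uint32_t reported_len)
{
    uint32_t len = reported_len;

    if (len > MAX_SSID_LEN)   /* never trust the device-supplied length */
        len = MAX_SSID_LEN;
    memcpy(dst->buf, src, len);
    dst->len = (uint8_t)len;
}

int main(void)
{
    uint8_t fw[64];
    struct ssid s;

    memset(fw, 'A', sizeof(fw));
    copy_ssid(&s, fw, 64);            /* oversized firmware length gets clamped */
    printf("stored len = %u\n", s.len);
    return 0;
}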
static ssize_t callback_static_file_uncompressed_stream(void * cls, uint64_t pos, char * buf, size_t max) {
(void)(pos);
if (cls != NULL) {
return fread (buf, sizeof(char), max, (FILE *)cls);
} else {
return U_STREAM_END;
}
} | 0 | [
"CWE-269",
"CWE-22"
] | glewlwyd | e3f7245c33897bf9b3a75acfcdb8b7b93974bf11 | 12,462,736,093,658,907,000,000,000,000,000,000,000 | 8 | Fix file access check for directory traversal, and fix call for callback_static_file_uncompressed if header not set |
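One common way to implement such a file-access check is to canonicalize the requested path and verify it still lies under the configured static-file root; a portable sketch (the root used in main() is only an example):

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Return 1 if `requested` resolves to a file inside `root`, else 0. */
static int path_is_under_root(const char *root, const char *requested)
{
    char resolved_root[PATH_MAX], resolved_req[PATH_MAX];
    size_t n;

    if (realpath(root, resolved_root) == NULL)
        return 0;
    if (realpath(requested, resolved_req) == NULL)
        return 0;                      /* nonexistent or unresolvable */
    n = strlen(resolved_root);
    return strncmp(resolved_req, resolved_root, n) == 0 &&
           (resolved_req[n] == '/' || resolved_req[n] == '\0');
}

int main(void)
{
    /* "../" components are resolved away before the prefix comparison. */
    printf("%d\n", path_is_under_root("/var/www", "/var/www/../etc/passwd"));
    return 0;
}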
static ssize_t waiting_for_supplier_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
bool val;
device_lock(dev);
mutex_lock(&wfs_lock);
val = !list_empty(&dev->links.needs_suppliers)
&& dev->links.need_for_probe;
mutex_unlock(&wfs_lock);
device_unlock(dev);
return sysfs_emit(buf, "%u\n", val);
} | 0 | [
"CWE-787"
] | linux | aa838896d87af561a33ecefea1caa4c15a68bc47 | 206,609,688,270,614,460,000,000,000,000,000,000,000 | 14 | drivers core: Use sysfs_emit and sysfs_emit_at for show(device *...) functions
Convert the various sprintf family calls in sysfs device show functions
to sysfs_emit and sysfs_emit_at for PAGE_SIZE buffer safety.
Done with:
$ spatch -sp-file sysfs_emit_dev.cocci --in-place --max-width=80 .
And cocci script:
$ cat sysfs_emit_dev.cocci
@@
identifier d_show;
identifier dev, attr, buf;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
return
- sprintf(buf,
+ sysfs_emit(buf,
...);
...>
}
@@
identifier d_show;
identifier dev, attr, buf;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
return
- snprintf(buf, PAGE_SIZE,
+ sysfs_emit(buf,
...);
...>
}
@@
identifier d_show;
identifier dev, attr, buf;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
return
- scnprintf(buf, PAGE_SIZE,
+ sysfs_emit(buf,
...);
...>
}
@@
identifier d_show;
identifier dev, attr, buf;
expression chr;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
return
- strcpy(buf, chr);
+ sysfs_emit(buf, chr);
...>
}
@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
len =
- sprintf(buf,
+ sysfs_emit(buf,
...);
...>
return len;
}
@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
len =
- snprintf(buf, PAGE_SIZE,
+ sysfs_emit(buf,
...);
...>
return len;
}
@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
len =
- scnprintf(buf, PAGE_SIZE,
+ sysfs_emit(buf,
...);
...>
return len;
}
@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
- len += scnprintf(buf + len, PAGE_SIZE - len,
+ len += sysfs_emit_at(buf, len,
...);
...>
return len;
}
@@
identifier d_show;
identifier dev, attr, buf;
expression chr;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
...
- strcpy(buf, chr);
- return strlen(buf);
+ return sysfs_emit(buf, chr);
}
Signed-off-by: Joe Perches <[email protected]>
Link: https://lore.kernel.org/r/3d033c33056d88bbe34d4ddb62afd05ee166ab9a.1600285923.git.joe@perches.com
Signed-off-by: Greg Kroah-Hartman <[email protected]> |
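The point of the conversion is that sysfs_emit() is bounded by PAGE_SIZE, whereas a bare sprintf() into the sysfs buffer is not. Outside the kernel the same idea is a small bounded formatter; this sketch assumes a 4096-byte page and is only an analogue of sysfs_emit(), not its implementation:

#include <stdarg.h>
#include <stdio.h>

#define PAGE_SIZE 4096   /* assumption: the page-sized buffer sysfs hands to show() */

/* Bounded formatter in the spirit of sysfs_emit(): never writes past one page. */
static int emit(char *buf, const char *fmt, ...)
{
    va_list ap;
    int n;

    va_start(ap, fmt);
    n = vsnprintf(buf, PAGE_SIZE, fmt, ap);   /* truncates instead of overflowing */
    va_end(ap);
    return n < 0 ? 0 : (n >= PAGE_SIZE ? PAGE_SIZE - 1 : n);
}

int main(void)
{
    char page[PAGE_SIZE];
    int len = emit(page, "%u\n", 1u);

    printf("wrote %d bytes: %s", len, page);
    return 0;
}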
static void oidc_revoke_tokens(request_rec *r, oidc_cfg *c,
oidc_session_t *session) {
char *response = NULL;
char *basic_auth = NULL;
char *bearer_auth = NULL;
apr_table_t *params = NULL;
const char *token = NULL;
oidc_provider_t *provider = NULL;
oidc_debug(r, "enter");
if (oidc_get_provider_from_session(r, c, session, &provider) == FALSE)
goto out;
oidc_debug(r, "revocation_endpoint=%s",
provider->revocation_endpoint_url ?
provider->revocation_endpoint_url : "(null)");
if (provider->revocation_endpoint_url == NULL)
goto out;
params = apr_table_make(r->pool, 4);
// add the token endpoint authentication credentials to the revocation endpoint call...
if (oidc_proto_token_endpoint_auth(r, c, provider->token_endpoint_auth,
provider->client_id, provider->client_secret,
provider->client_signing_keys, provider->token_endpoint_url, params,
NULL, &basic_auth, &bearer_auth) == FALSE)
goto out;
// TODO: use oauth.ssl_validate_server ...
token = oidc_session_get_refresh_token(r, session);
if (token != NULL) {
apr_table_addn(params, "token_type_hint", "refresh_token");
apr_table_addn(params, "token", token);
if (oidc_util_http_post_form(r, provider->revocation_endpoint_url,
params, basic_auth, bearer_auth, c->oauth.ssl_validate_server,
&response, c->http_timeout_long, c->outgoing_proxy,
oidc_dir_cfg_pass_cookies(r), NULL,
NULL) == FALSE) {
oidc_warn(r, "revoking refresh token failed");
}
apr_table_clear(params);
}
token = oidc_session_get_access_token(r, session);
if (token != NULL) {
apr_table_addn(params, "token_type_hint", "access_token");
apr_table_addn(params, "token", token);
if (oidc_util_http_post_form(r, provider->revocation_endpoint_url,
params, basic_auth, bearer_auth, c->oauth.ssl_validate_server,
&response, c->http_timeout_long, c->outgoing_proxy,
oidc_dir_cfg_pass_cookies(r), NULL,
NULL) == FALSE) {
oidc_warn(r, "revoking access token failed");
}
}
out:
oidc_debug(r, "leave");
} | 0 | [
"CWE-601"
] | mod_auth_openidc | 5c15dfb08106c2451c2c44ce7ace6813c216ba75 | 247,344,149,488,179,000,000,000,000,000,000,000,000 | 65 | improve validation of the post-logout URL; closes #449
- to avoid an open redirect; thanks AIMOTO Norihito
- release 2.4.0.1
Signed-off-by: Hans Zandbelt <[email protected]> |
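Validating a post-logout URL typically means accepting only relative paths or absolute URLs on an expected host, so the endpoint cannot be abused as an open redirector. A stand-alone sketch of that style of check (the allowed host is an assumption, and a real deployment would want a fuller URL parser):

#include <stdio.h>
#include <string.h>

/* Accept only same-site targets: relative paths, or https:// on the expected host. */
static int logout_url_allowed(const char *url, const char *expected_host)
{
    const char *scheme = "https://";
    const char *host;
    size_t slen, hlen;

    if (url == NULL || *url == '\0')
        return 0;
    if (url[0] == '/' && url[1] != '/')      /* plain relative path, not "//host" */
        return 1;
    slen = strlen(scheme);
    if (strncmp(url, scheme, slen) != 0)
        return 0;
    host = url + slen;
    hlen = strlen(expected_host);
    return strncmp(host, expected_host, hlen) == 0 &&
           (host[hlen] == '/' || host[hlen] == '\0');
}

int main(void)
{
    printf("%d\n", logout_url_allowed("/loggedout", "sso.example.org"));               /* 1 */
    printf("%d\n", logout_url_allowed("https://evil.example/x", "sso.example.org"));   /* 0 */
    printf("%d\n", logout_url_allowed("//evil.example", "sso.example.org"));           /* 0 */
    return 0;
}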
int X509_REQ_add_extensions_nid(X509_REQ *req, STACK_OF(X509_EXTENSION) *exts,
int nid)
{
ASN1_TYPE *at = NULL;
X509_ATTRIBUTE *attr = NULL;
if (!(at = ASN1_TYPE_new()) || !(at->value.sequence = ASN1_STRING_new()))
goto err;
at->type = V_ASN1_SEQUENCE;
/* Generate encoding of extensions */
at->value.sequence->length =
ASN1_item_i2d((ASN1_VALUE *)exts,
&at->value.sequence->data,
ASN1_ITEM_rptr(X509_EXTENSIONS));
if (!(attr = X509_ATTRIBUTE_new()))
goto err;
if (!(attr->value.set = sk_ASN1_TYPE_new_null()))
goto err;
if (!sk_ASN1_TYPE_push(attr->value.set, at))
goto err;
at = NULL;
attr->single = 0;
attr->object = OBJ_nid2obj(nid);
if (!req->req_info->attributes) {
if (!(req->req_info->attributes = sk_X509_ATTRIBUTE_new_null()))
goto err;
}
if (!sk_X509_ATTRIBUTE_push(req->req_info->attributes, attr))
goto err;
return 1;
err:
X509_ATTRIBUTE_free(attr);
ASN1_TYPE_free(at);
return 0;
} | 0 | [] | openssl | 28a00bcd8e318da18031b2ac8778c64147cd54f9 | 335,839,940,628,864,800,000,000,000,000,000,000,000 | 35 | Check public key is not NULL.
CVE-2015-0288
PR#3708
Reviewed-by: Matt Caswell <[email protected]> |
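The missing check is easy to reproduce: X509_get_pubkey() may return NULL when a certificate carries no key material, and callers must test for that before dereferencing the result. A minimal demonstration against OpenSSL:

#include <openssl/evp.h>
#include <openssl/x509.h>
#include <stdio.h>

int main(void)
{
    X509 *cert = X509_new();               /* empty certificate: no key material */
    EVP_PKEY *pkey = X509_get_pubkey(cert);

    if (pkey == NULL)                       /* the kind of check the fix adds */
        printf("no public key present, refusing to continue\n");
    else
        EVP_PKEY_free(pkey);

    X509_free(cert);
    return 0;
}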
static void __sync_task_rss_stat(struct task_struct *task, struct mm_struct *mm)
{
int i;
for (i = 0; i < NR_MM_COUNTERS; i++) {
if (task->rss_stat.count[i]) {
add_mm_counter(mm, i, task->rss_stat.count[i]);
task->rss_stat.count[i] = 0;
}
}
task->rss_stat.events = 0;
} | 0 | [
"CWE-264"
] | linux-2.6 | 1a5a9906d4e8d1976b701f889d8f35d54b928f25 | 33,903,573,889,085,660,000,000,000,000,000,000,000 | 12 | mm: thp: fix pmd_bad() triggering in code paths holding mmap_sem read mode
In some cases it may happen that pmd_none_or_clear_bad() is called with
the mmap_sem held in read mode. In those cases the huge page faults can
allocate hugepmds under pmd_none_or_clear_bad() and that can trigger a
false positive from pmd_bad() that will not like to see a pmd
materializing as trans huge.
It's not khugepaged causing the problem, khugepaged holds the mmap_sem
in write mode (and all those sites must hold the mmap_sem in read mode
to prevent pagetables to go away from under them, during code review it
seems vm86 mode on 32bit kernels requires that too unless it's
restricted to 1 thread per process or UP builds). The race is only with
the huge pagefaults that can convert a pmd_none() into a
pmd_trans_huge().
Effectively all these pmd_none_or_clear_bad() sites running with
mmap_sem in read mode are somewhat speculative with the page faults, and
the result is always undefined when they run simultaneously. This is
probably why it wasn't common to run into this. For example if the
madvise(MADV_DONTNEED) runs zap_page_range() shortly before the page
fault, the hugepage will not be zapped, if the page fault runs first it
will be zapped.
Altering pmd_bad() not to error out if it finds hugepmds won't be enough
to fix this, because zap_pmd_range would then proceed to call
zap_pte_range (which would be incorrect if the pmd become a
pmd_trans_huge()).
The simplest way to fix this is to read the pmd in the local stack
(regardless of what we read, no need of actual CPU barriers, only
compiler barrier needed), and be sure it is not changing under the code
that computes its value. Even if the real pmd is changing under the
value we hold on the stack, we don't care. If we actually end up in
zap_pte_range it means the pmd was not none already and it was not huge,
and it can't become huge from under us (khugepaged locking explained
above).
All we need is to enforce that there is no way anymore that in a code
path like below, pmd_trans_huge can be false, but pmd_none_or_clear_bad
can run into a hugepmd. The overhead of a barrier() is just a compiler
tweak and should not be measurable (I only added it for THP builds). I
don't exclude different compiler versions may have prevented the race
too by caching the value of *pmd on the stack (that hasn't been
verified, but it wouldn't be impossible considering
pmd_none_or_clear_bad, pmd_bad, pmd_trans_huge, pmd_none are all inlines
and there's no external function called in between pmd_trans_huge and
pmd_none_or_clear_bad).
if (pmd_trans_huge(*pmd)) {
if (next-addr != HPAGE_PMD_SIZE) {
VM_BUG_ON(!rwsem_is_locked(&tlb->mm->mmap_sem));
split_huge_page_pmd(vma->vm_mm, pmd);
} else if (zap_huge_pmd(tlb, vma, pmd, addr))
continue;
/* fall through */
}
if (pmd_none_or_clear_bad(pmd))
Because this race condition could be exercised without special
privileges this was reported in CVE-2012-1179.
The race was identified and fully explained by Ulrich who debugged it.
I'm quoting his accurate explanation below, for reference.
====== start quote =======
mapcount 0 page_mapcount 1
kernel BUG at mm/huge_memory.c:1384!
At some point prior to the panic, a "bad pmd ..." message similar to the
following is logged on the console:
mm/memory.c:145: bad pmd ffff8800376e1f98(80000000314000e7).
The "bad pmd ..." message is logged by pmd_clear_bad() before it clears
the page's PMD table entry.
143 void pmd_clear_bad(pmd_t *pmd)
144 {
-> 145 pmd_ERROR(*pmd);
146 pmd_clear(pmd);
147 }
After the PMD table entry has been cleared, there is an inconsistency
between the actual number of PMD table entries that are mapping the page
and the page's map count (_mapcount field in struct page). When the page
is subsequently reclaimed, __split_huge_page() detects this inconsistency.
1381 if (mapcount != page_mapcount(page))
1382 printk(KERN_ERR "mapcount %d page_mapcount %d\n",
1383 mapcount, page_mapcount(page));
-> 1384 BUG_ON(mapcount != page_mapcount(page));
The root cause of the problem is a race of two threads in a multithreaded
process. Thread B incurs a page fault on a virtual address that has never
been accessed (PMD entry is zero) while Thread A is executing an madvise()
system call on a virtual address within the same 2 MB (huge page) range.
virtual address space
.---------------------.
| |
| |
.-|---------------------|
| | |
| | |<-- B(fault)
| | |
2 MB | |/////////////////////|-.
huge < |/////////////////////| > A(range)
page | |/////////////////////|-'
| | |
| | |
'-|---------------------|
| |
| |
'---------------------'
- Thread A is executing an madvise(..., MADV_DONTNEED) system call
on the virtual address range "A(range)" shown in the picture.
sys_madvise
// Acquire the semaphore in shared mode.
down_read(¤t->mm->mmap_sem)
...
madvise_vma
switch (behavior)
case MADV_DONTNEED:
madvise_dontneed
zap_page_range
unmap_vmas
unmap_page_range
zap_pud_range
zap_pmd_range
//
// Assume that this huge page has never been accessed.
// I.e. content of the PMD entry is zero (not mapped).
//
if (pmd_trans_huge(*pmd)) {
// We don't get here due to the above assumption.
}
//
// Assume that Thread B incurred a page fault and
.---------> // sneaks in here as shown below.
| //
| if (pmd_none_or_clear_bad(pmd))
| {
| if (unlikely(pmd_bad(*pmd)))
| pmd_clear_bad
| {
| pmd_ERROR
| // Log "bad pmd ..." message here.
| pmd_clear
| // Clear the page's PMD entry.
| // Thread B incremented the map count
| // in page_add_new_anon_rmap(), but
| // now the page is no longer mapped
| // by a PMD entry (-> inconsistency).
| }
| }
|
v
- Thread B is handling a page fault on virtual address "B(fault)" shown
in the picture.
...
do_page_fault
__do_page_fault
// Acquire the semaphore in shared mode.
down_read_trylock(&mm->mmap_sem)
...
handle_mm_fault
if (pmd_none(*pmd) && transparent_hugepage_enabled(vma))
// We get here due to the above assumption (PMD entry is zero).
do_huge_pmd_anonymous_page
alloc_hugepage_vma
// Allocate a new transparent huge page here.
...
__do_huge_pmd_anonymous_page
...
spin_lock(&mm->page_table_lock)
...
page_add_new_anon_rmap
// Here we increment the page's map count (starts at -1).
atomic_set(&page->_mapcount, 0)
set_pmd_at
// Here we set the page's PMD entry which will be cleared
// when Thread A calls pmd_clear_bad().
...
spin_unlock(&mm->page_table_lock)
The mmap_sem does not prevent the race because both threads are acquiring
it in shared mode (down_read). Thread B holds the page_table_lock while
the page's map count and PMD table entry are updated. However, Thread A
does not synchronize on that lock.
====== end quote =======
[[email protected]: checkpatch fixes]
Reported-by: Ulrich Obergfell <[email protected]>
Signed-off-by: Andrea Arcangeli <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Dave Jones <[email protected]>
Acked-by: Larry Woodman <[email protected]>
Acked-by: Rik van Riel <[email protected]>
Cc: <[email protected]> [2.6.38+]
Cc: Mark Salter <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> |
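The core of the fix quoted above is to read the racy entry once into a local variable, with a compiler barrier so the compiler cannot reload it, and base every later decision on that snapshot. A user-space model of the pattern (the barrier() macro mirrors the kernel's compiler barrier; the bit layout is invented for illustration):

#include <stdio.h>

#define barrier() __asm__ __volatile__("" ::: "memory")  /* compiler-only fence */

/* Stands in for a page-table entry that another thread may rewrite at any time. */
static unsigned long shared_entry;

/* Decide based on ONE read of the entry, never on several reads that may differ. */
static void inspect_entry(void)
{
    unsigned long val = shared_entry;  /* local snapshot, like reading *pmd once */
    barrier();                         /* keep the compiler from re-loading it   */

    if (val == 0)
        printf("entry empty, nothing to do\n");
    else if (val & 1UL)
        printf("huge entry, take the huge path\n");
    else
        printf("regular entry: 0x%lx\n", val);
}

int main(void)
{
    shared_entry = 0x1000;
    inspect_entry();
    return 0;
}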
static int ssl3_get_record(SSL *s)
{
int ssl_major,ssl_minor,al;
int enc_err,n,i,ret= -1;
SSL3_RECORD *rr;
SSL_SESSION *sess;
unsigned char *p;
unsigned char md[EVP_MAX_MD_SIZE];
short version;
unsigned mac_size;
int clear=0;
size_t extra;
rr= &(s->s3->rrec);
sess=s->session;
if (s->options & SSL_OP_MICROSOFT_BIG_SSLV3_BUFFER)
extra=SSL3_RT_MAX_EXTRA;
else
extra=0;
if (extra != s->s3->rbuf.len - SSL3_RT_MAX_PACKET_SIZE)
{
/* actually likely an application error: SLS_OP_MICROSOFT_BIG_SSLV3_BUFFER
* set after ssl3_setup_buffers() was done */
SSLerr(SSL_F_SSL3_GET_RECORD, ERR_R_INTERNAL_ERROR);
return -1;
}
again:
/* check if we have the header */
if ( (s->rstate != SSL_ST_READ_BODY) ||
(s->packet_length < SSL3_RT_HEADER_LENGTH))
{
n=ssl3_read_n(s, SSL3_RT_HEADER_LENGTH, s->s3->rbuf.len, 0);
if (n <= 0) return(n); /* error or non-blocking */
s->rstate=SSL_ST_READ_BODY;
p=s->packet;
/* Pull apart the header into the SSL3_RECORD */
rr->type= *(p++);
ssl_major= *(p++);
ssl_minor= *(p++);
version=(ssl_major<<8)|ssl_minor;
n2s(p,rr->length);
/* Lets check version */
if (!s->first_packet)
{
if (version != s->version)
{
SSLerr(SSL_F_SSL3_GET_RECORD,SSL_R_WRONG_VERSION_NUMBER);
if ((s->version & 0xFF00) == (version & 0xFF00))
/* Send back error using their minor version number :-) */
s->version = (unsigned short)version;
al=SSL_AD_PROTOCOL_VERSION;
goto f_err;
}
}
if ((version>>8) != SSL3_VERSION_MAJOR)
{
SSLerr(SSL_F_SSL3_GET_RECORD,SSL_R_WRONG_VERSION_NUMBER);
goto err;
}
if (rr->length > SSL3_RT_MAX_ENCRYPTED_LENGTH+extra)
{
al=SSL_AD_RECORD_OVERFLOW;
SSLerr(SSL_F_SSL3_GET_RECORD,SSL_R_PACKET_LENGTH_TOO_LONG);
goto f_err;
}
/* now s->rstate == SSL_ST_READ_BODY */
}
/* s->rstate == SSL_ST_READ_BODY, get and decode the data */
if (rr->length > s->packet_length-SSL3_RT_HEADER_LENGTH)
{
/* now s->packet_length == SSL3_RT_HEADER_LENGTH */
i=rr->length;
n=ssl3_read_n(s,i,i,1);
if (n <= 0) return(n); /* error or non-blocking io */
/* now n == rr->length,
* and s->packet_length == SSL3_RT_HEADER_LENGTH + rr->length */
}
s->rstate=SSL_ST_READ_HEADER; /* set state for later operations */
/* At this point, s->packet_length == SSL3_RT_HEADER_LNGTH + rr->length,
* and we have that many bytes in s->packet
*/
rr->input= &(s->packet[SSL3_RT_HEADER_LENGTH]);
/* ok, we can now read from 's->packet' data into 'rr'
* rr->input points at rr->length bytes, which
* need to be copied into rr->data by either
* the decryption or by the decompression
* When the data is 'copied' into the rr->data buffer,
* rr->input will be pointed at the new buffer */
/* We now have - encrypted [ MAC [ compressed [ plain ] ] ]
* rr->length bytes of encrypted compressed stuff. */
/* check is not needed I believe */
if (rr->length > SSL3_RT_MAX_ENCRYPTED_LENGTH+extra)
{
al=SSL_AD_RECORD_OVERFLOW;
SSLerr(SSL_F_SSL3_GET_RECORD,SSL_R_ENCRYPTED_LENGTH_TOO_LONG);
goto f_err;
}
/* decrypt in place in 'rr->input' */
rr->data=rr->input;
enc_err = s->method->ssl3_enc->enc(s,0);
if (enc_err == 0)
{
/* SSLerr() and ssl3_send_alert() have been called */
goto err;
}
#ifdef TLS_DEBUG
printf("dec %d\n",rr->length);
{ unsigned int z; for (z=0; z<rr->length; z++) printf("%02X%c",rr->data[z],((z+1)%16)?' ':'\n'); }
printf("\n");
#endif
/* r->length is now the compressed data plus mac */
if ( (sess == NULL) ||
(s->enc_read_ctx == NULL) ||
(s->read_hash == NULL))
clear=1;
if (!clear)
{
/* !clear => s->read_hash != NULL => mac_size != -1 */
unsigned char *mac = NULL;
unsigned char mac_tmp[EVP_MAX_MD_SIZE];
mac_size=EVP_MD_size(s->read_hash);
OPENSSL_assert(mac_size <= EVP_MAX_MD_SIZE);
/* orig_len is the length of the record before any padding was
* removed. This is public information, as is the MAC in use,
* therefore we can safely process the record in a different
* amount of time if it's too short to possibly contain a MAC.
*/
if (rr->orig_len < mac_size ||
/* CBC records must have a padding length byte too. */
(EVP_CIPHER_CTX_mode(s->enc_read_ctx) == EVP_CIPH_CBC_MODE &&
rr->orig_len < mac_size+1))
{
al=SSL_AD_DECODE_ERROR;
SSLerr(SSL_F_SSL3_GET_RECORD,SSL_R_LENGTH_TOO_SHORT);
goto f_err;
}
if (EVP_CIPHER_CTX_mode(s->enc_read_ctx) == EVP_CIPH_CBC_MODE)
{
/* We update the length so that the TLS header bytes
* can be constructed correctly but we need to extract
* the MAC in constant time from within the record,
* without leaking the contents of the padding bytes.
* */
mac = mac_tmp;
ssl3_cbc_copy_mac(mac_tmp, rr, mac_size);
rr->length -= mac_size;
}
else
{
/* In this case there's no padding, so |rec->orig_len|
* equals |rec->length| and we checked that there's
* enough bytes for |mac_size| above. */
rr->length -= mac_size;
mac = &rr->data[rr->length];
}
i=s->method->ssl3_enc->mac(s,md,0 /* not send */);
if (i < 0 || mac == NULL || CRYPTO_memcmp(md, mac, (size_t)mac_size) != 0)
enc_err = -1;
if (rr->length > SSL3_RT_MAX_COMPRESSED_LENGTH+extra+mac_size)
enc_err = -1;
}
if (enc_err < 0)
{
/* A separate 'decryption_failed' alert was introduced with TLS 1.0,
* SSL 3.0 only has 'bad_record_mac'. But unless a decryption
* failure is directly visible from the ciphertext anyway,
* we should not reveal which kind of error occured -- this
* might become visible to an attacker (e.g. via a logfile) */
al=SSL_AD_BAD_RECORD_MAC;
SSLerr(SSL_F_SSL3_GET_RECORD,SSL_R_DECRYPTION_FAILED_OR_BAD_RECORD_MAC);
goto f_err;
}
/* r->length is now just compressed */
if (s->expand != NULL)
{
if (rr->length > SSL3_RT_MAX_COMPRESSED_LENGTH+extra)
{
al=SSL_AD_RECORD_OVERFLOW;
SSLerr(SSL_F_SSL3_GET_RECORD,SSL_R_COMPRESSED_LENGTH_TOO_LONG);
goto f_err;
}
if (!ssl3_do_uncompress(s))
{
al=SSL_AD_DECOMPRESSION_FAILURE;
SSLerr(SSL_F_SSL3_GET_RECORD,SSL_R_BAD_DECOMPRESSION);
goto f_err;
}
}
if (rr->length > SSL3_RT_MAX_PLAIN_LENGTH+extra)
{
al=SSL_AD_RECORD_OVERFLOW;
SSLerr(SSL_F_SSL3_GET_RECORD,SSL_R_DATA_LENGTH_TOO_LONG);
goto f_err;
}
rr->off=0;
/* So at this point the following is true
* ssl->s3->rrec.type is the type of record
* ssl->s3->rrec.length == number of bytes in record
* ssl->s3->rrec.off == offset to first valid byte
* ssl->s3->rrec.data == where to take bytes from, increment
* after use :-).
*/
/* we have pulled in a full packet so zero things */
s->packet_length=0;
/* just read a 0 length packet */
if (rr->length == 0) goto again;
return(1);
f_err:
ssl3_send_alert(s,SSL3_AL_FATAL,al);
err:
return(ret);
} | 1 | [
"CWE-310"
] | openssl | b3a959a337b8083bc855623f24cebaf43a477350 | 307,994,214,933,389,130,000,000,000,000,000,000,000 | 243 | Don't crash when processing a zero-length, TLS >= 1.1 record.
The previous CBC patch was bugged in that there was a path through enc()
in s3_pkt.c/d1_pkt.c which didn't set orig_len. orig_len would be left
at the previous value which could suggest that the packet was a
sufficient length when it wasn't.
(cherry picked from commit 6cb19b7681f600b2f165e4adc57547b097b475fd)
(cherry picked from commit 2c948c1bb218f4ae126e14fd3453d42c62b93235)
Conflicts:
ssl/s3_enc.c |
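The lesson of this bug is that an output field such as orig_len must be assigned on every path, including the zero-length one, before any early return, so later length checks never see a stale value. A small model of that discipline (the struct is illustrative, not OpenSSL's SSL3_RECORD):

#include <stddef.h>
#include <stdio.h>

struct record {
    size_t length;     /* payload length after processing      */
    size_t orig_len;   /* length as received, before stripping */
};

/* Process a record; orig_len is recorded FIRST so every exit path leaves it valid. */
static int process_record(struct record *rec, size_t wire_len, size_t mac_size)
{
    rec->orig_len = wire_len;          /* set on all paths, including wire_len == 0 */
    rec->length = wire_len;

    if (rec->orig_len < mac_size)      /* later checks can now trust orig_len */
        return -1;
    rec->length -= mac_size;
    return 0;
}

int main(void)
{
    struct record r = { 0, 0 };
    int rc;

    rc = process_record(&r, 0, 20);    /* zero-length record */
    printf("%d (orig_len=%zu)\n", rc, r.orig_len);
    rc = process_record(&r, 64, 20);
    printf("%d (orig_len=%zu)\n", rc, r.orig_len);
    return 0;
}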
SPL_METHOD(GlobIterator, count)
{
spl_filesystem_object *intern = Z_SPLFILESYSTEM_P(getThis());
if (zend_parse_parameters_none() == FAILURE) {
return;
}
if (intern->u.dir.dirp && php_stream_is(intern->u.dir.dirp ,&php_glob_stream_ops)) {
RETURN_LONG(php_glob_stream_get_count(intern->u.dir.dirp, NULL));
} else {
/* should not happen */
php_error_docref(NULL, E_ERROR, "GlobIterator lost glob state");
}
} | 0 | [
"CWE-74"
] | php-src | a5a15965da23c8e97657278fc8dfbf1dfb20c016 | 283,763,949,884,465,450,000,000,000,000,000,000,000 | 15 | Fix #78863: DirectoryIterator class silently truncates after a null byte
Since the constructor of DirectoryIterator and friends is supposed to
accepts paths (i.e. strings without NUL bytes), we must not accept
arbitrary strings. |
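Rejecting such strings comes down to checking that the buffer's declared length contains no embedded NUL byte; a sketch:

#include <stdio.h>
#include <string.h>

/* Return 1 if the buffer of `len` bytes is a clean path (no embedded NUL). */
static int path_is_clean(const char *path, size_t len)
{
    return path != NULL && memchr(path, '\0', len) == NULL;
}

int main(void)
{
    const char good[] = "/tmp/dir";
    const char bad[]  = "/tmp/dir\0/../../etc";   /* would truncate silently if accepted */

    printf("%d\n", path_is_clean(good, sizeof(good) - 1));       /* 1 */
    printf("%d\n", path_is_clean(bad,  sizeof(bad)  - 1));       /* 0 */
    return 0;
}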
static int mszip_make_decode_table(unsigned int nsyms, unsigned int nbits,
unsigned char *length, unsigned short *table)
{
register unsigned int leaf, reverse, fill;
register unsigned short sym, next_sym;
register unsigned char bit_num;
unsigned int pos = 0; /* the current position in the decode table */
unsigned int table_mask = 1 << nbits;
unsigned int mszip_bit_mask = table_mask >> 1; /* don't do 0 length codes */
/* fill entries for codes short enough for a direct mapping */
for (bit_num = 1; bit_num <= nbits; bit_num++) {
for (sym = 0; sym < nsyms; sym++) {
if (length[sym] != bit_num) continue;
/* reverse the significant bits */
fill = length[sym]; reverse = pos >> (nbits - fill); leaf = 0;
do {leaf <<= 1; leaf |= reverse & 1; reverse >>= 1;} while (--fill);
if((pos += mszip_bit_mask) > table_mask) return 1; /* table overrun */
/* fill all possible lookups of this symbol with the symbol itself */
fill = mszip_bit_mask; next_sym = 1 << bit_num;
do { table[leaf] = sym; leaf += next_sym; } while (--fill);
}
mszip_bit_mask >>= 1;
}
/* exit with success if table is now complete */
if (pos == table_mask) return 0;
/* mark all remaining table entries as unused */
for (sym = pos; sym < table_mask; sym++) {
reverse = sym; leaf = 0; fill = nbits;
do { leaf <<= 1; leaf |= reverse & 1; reverse >>= 1; } while (--fill);
table[leaf] = 0xFFFF;
}
/* where should the longer codes be allocated from? */
next_sym = ((table_mask >> 1) < nsyms) ? nsyms : (table_mask >> 1);
/* give ourselves room for codes to grow by up to 16 more bits.
* codes now start at bit nbits+16 and end at (nbits+16-codelength) */
pos <<= 16;
table_mask <<= 16;
mszip_bit_mask = 1 << 15;
for (bit_num = nbits+1; bit_num <= MSZIP_MAX_HUFFBITS; bit_num++) {
for (sym = 0; sym < nsyms; sym++) {
if (length[sym] != bit_num) continue;
/* leaf = the first nbits of the code, reversed */
reverse = pos >> 16; leaf = 0; fill = nbits;
do {leaf <<= 1; leaf |= reverse & 1; reverse >>= 1;} while (--fill);
for (fill = 0; fill < (bit_num - nbits); fill++) {
/* if this path hasn't been taken yet, 'allocate' two entries */
if (table[leaf] == 0xFFFF) {
table[(next_sym << 1) ] = 0xFFFF;
table[(next_sym << 1) + 1 ] = 0xFFFF;
table[leaf] = next_sym++;
}
/* follow the path and select either left or right for next bit */
leaf = (table[leaf] << 1) | ((pos >> (15 - fill)) & 1);
}
table[leaf] = sym;
if ((pos += mszip_bit_mask) > table_mask) return 1; /* table overflow */
}
mszip_bit_mask >>= 1;
}
/* full table? */
return (pos != table_mask) ? 1 : 0;
} | 0 | [] | clamav-devel | 158c35e81a25ea5fda55a2a7f62ea9fec2e883d9 | 304,185,382,710,024,950,000,000,000,000,000,000,000 | 75 | libclamav/mspack.c: improve unpacking of malformed cabinets (bb#1826) |
static int ext4_writepages(struct address_space *mapping,
struct writeback_control *wbc)
{
pgoff_t writeback_index = 0;
long nr_to_write = wbc->nr_to_write;
int range_whole = 0;
int cycled = 1;
handle_t *handle = NULL;
struct mpage_da_data mpd;
struct inode *inode = mapping->host;
int needed_blocks, rsv_blocks = 0, ret = 0;
struct ext4_sb_info *sbi = EXT4_SB(mapping->host->i_sb);
bool done;
struct blk_plug plug;
bool give_up_on_write = false;
trace_ext4_writepages(inode, wbc);
/*
* No pages to write? This is mainly a kludge to avoid starting
* a transaction for special inodes like journal inode on last iput()
* because that could violate lock ordering on umount
*/
if (!mapping->nrpages || !mapping_tagged(mapping, PAGECACHE_TAG_DIRTY))
goto out_writepages;
if (ext4_should_journal_data(inode)) {
struct blk_plug plug;
blk_start_plug(&plug);
ret = write_cache_pages(mapping, wbc, __writepage, mapping);
blk_finish_plug(&plug);
goto out_writepages;
}
/*
* If the filesystem has aborted, it is read-only, so return
* right away instead of dumping stack traces later on that
* will obscure the real source of the problem. We test
* EXT4_MF_FS_ABORTED instead of sb->s_flag's MS_RDONLY because
* the latter could be true if the filesystem is mounted
* read-only, and in that case, ext4_writepages should
* *never* be called, so if that ever happens, we would want
* the stack trace.
*/
if (unlikely(sbi->s_mount_flags & EXT4_MF_FS_ABORTED)) {
ret = -EROFS;
goto out_writepages;
}
if (ext4_should_dioread_nolock(inode)) {
/*
* We may need to convert up to one extent per block in
* the page and we may dirty the inode.
*/
rsv_blocks = 1 + (PAGE_CACHE_SIZE >> inode->i_blkbits);
}
/*
* If we have inline data and arrive here, it means that
* we will soon create the block for the 1st page, so
* we'd better clear the inline data here.
*/
if (ext4_has_inline_data(inode)) {
/* Just inode will be modified... */
handle = ext4_journal_start(inode, EXT4_HT_INODE, 1);
if (IS_ERR(handle)) {
ret = PTR_ERR(handle);
goto out_writepages;
}
BUG_ON(ext4_test_inode_state(inode,
EXT4_STATE_MAY_INLINE_DATA));
ext4_destroy_inline_data(handle, inode);
ext4_journal_stop(handle);
}
if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX)
range_whole = 1;
if (wbc->range_cyclic) {
writeback_index = mapping->writeback_index;
if (writeback_index)
cycled = 0;
mpd.first_page = writeback_index;
mpd.last_page = -1;
} else {
mpd.first_page = wbc->range_start >> PAGE_CACHE_SHIFT;
mpd.last_page = wbc->range_end >> PAGE_CACHE_SHIFT;
}
mpd.inode = inode;
mpd.wbc = wbc;
ext4_io_submit_init(&mpd.io_submit, wbc);
retry:
if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages)
tag_pages_for_writeback(mapping, mpd.first_page, mpd.last_page);
done = false;
blk_start_plug(&plug);
while (!done && mpd.first_page <= mpd.last_page) {
/* For each extent of pages we use new io_end */
mpd.io_submit.io_end = ext4_init_io_end(inode, GFP_KERNEL);
if (!mpd.io_submit.io_end) {
ret = -ENOMEM;
break;
}
/*
* We have two constraints: We find one extent to map and we
* must always write out whole page (makes a difference when
* blocksize < pagesize) so that we don't block on IO when we
* try to write out the rest of the page. Journalled mode is
* not supported by delalloc.
*/
BUG_ON(ext4_should_journal_data(inode));
needed_blocks = ext4_da_writepages_trans_blocks(inode);
/* start a new transaction */
handle = ext4_journal_start_with_reserve(inode,
EXT4_HT_WRITE_PAGE, needed_blocks, rsv_blocks);
if (IS_ERR(handle)) {
ret = PTR_ERR(handle);
ext4_msg(inode->i_sb, KERN_CRIT, "%s: jbd2_start: "
"%ld pages, ino %lu; err %d", __func__,
wbc->nr_to_write, inode->i_ino, ret);
/* Release allocated io_end */
ext4_put_io_end(mpd.io_submit.io_end);
break;
}
trace_ext4_da_write_pages(inode, mpd.first_page, mpd.wbc);
ret = mpage_prepare_extent_to_map(&mpd);
if (!ret) {
if (mpd.map.m_len)
ret = mpage_map_and_submit_extent(handle, &mpd,
&give_up_on_write);
else {
/*
* We scanned the whole range (or exhausted
* nr_to_write), submitted what was mapped and
* didn't find anything needing mapping. We are
* done.
*/
done = true;
}
}
ext4_journal_stop(handle);
/* Submit prepared bio */
ext4_io_submit(&mpd.io_submit);
/* Unlock pages we didn't use */
mpage_release_unused_pages(&mpd, give_up_on_write);
/* Drop our io_end reference we got from init */
ext4_put_io_end(mpd.io_submit.io_end);
if (ret == -ENOSPC && sbi->s_journal) {
/*
* Commit the transaction which would
* free blocks released in the transaction
* and try again
*/
jbd2_journal_force_commit_nested(sbi->s_journal);
ret = 0;
continue;
}
/* Fatal error - ENOMEM, EIO... */
if (ret)
break;
}
blk_finish_plug(&plug);
if (!ret && !cycled && wbc->nr_to_write > 0) {
cycled = 1;
mpd.last_page = writeback_index - 1;
mpd.first_page = 0;
goto retry;
}
/* Update index */
if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0))
/*
* Set the writeback_index so that range_cyclic
* mode will write it back later
*/
mapping->writeback_index = mpd.first_page;
out_writepages:
trace_ext4_writepages_result(inode, wbc, ret,
nr_to_write - wbc->nr_to_write);
return ret;
} | 0 | [
"CWE-362"
] | linux | ea3d7209ca01da209cda6f0dea8be9cc4b7a933b | 87,839,126,850,812,200,000,000,000,000,000,000,000 | 188 | ext4: fix races between page faults and hole punching
Currently, page faults and hole punching are completely unsynchronized.
This can result in page fault faulting in a page into a range that we
are punching after truncate_pagecache_range() has been called and thus
we can end up with a page mapped to disk blocks that will be shortly
freed. Filesystem corruption will shortly follow. Note that the same
race is avoided for truncate by checking page fault offset against
i_size but there isn't similar mechanism available for punching holes.
Fix the problem by creating new rw semaphore i_mmap_sem in inode and
grab it for writing over truncate, hole punching, and other functions
removing blocks from extent tree and for read over page faults. We
cannot easily use i_data_sem for this since that ranks below transaction
start and we need something ranking above it so that it can be held over
the whole truncate / hole punching operation. Also remove various
workarounds we had in the code to reduce race window when page fault
could have created pages with stale mapping information.
Signed-off-by: Jan Kara <[email protected]>
Signed-off-by: Theodore Ts'o <[email protected]> |
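The locking scheme described — hole punching holds the new semaphore exclusively while page faults take it shared — maps directly onto a reader/writer lock. A user-space analogue with pthreads (the function bodies only print what the real paths would do):

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t mmap_sem = PTHREAD_RWLOCK_INITIALIZER;

/* Page-fault path: many faults may proceed concurrently (shared). */
static void fault_in_page(long off)
{
    pthread_rwlock_rdlock(&mmap_sem);
    printf("fault maps page at %ld against a stable block mapping\n", off);
    pthread_rwlock_unlock(&mmap_sem);
}

/* Hole-punch path: exclusive, so no fault can map blocks being freed. */
static void punch_hole(long start, long len)
{
    pthread_rwlock_wrlock(&mmap_sem);
    printf("truncate page cache and free blocks [%ld, %ld)\n", start, start + len);
    pthread_rwlock_unlock(&mmap_sem);
}

int main(void)
{
    fault_in_page(4096);
    punch_hole(0, 8192);
    return 0;
}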
void ConnectionManagerImpl::initializeReadFilterCallbacks(Network::ReadFilterCallbacks& callbacks) {
read_callbacks_ = &callbacks;
stats_.named_.downstream_cx_total_.inc();
stats_.named_.downstream_cx_active_.inc();
if (read_callbacks_->connection().ssl()) {
stats_.named_.downstream_cx_ssl_total_.inc();
stats_.named_.downstream_cx_ssl_active_.inc();
}
read_callbacks_->connection().addConnectionCallbacks(*this);
if (config_.idleTimeout()) {
connection_idle_timer_ = read_callbacks_->connection().dispatcher().createTimer(
[this]() -> void { onIdleTimeout(); });
connection_idle_timer_->enableTimer(config_.idleTimeout().value());
}
if (config_.maxConnectionDuration()) {
connection_duration_timer_ = read_callbacks_->connection().dispatcher().createTimer(
[this]() -> void { onConnectionDurationTimeout(); });
connection_duration_timer_->enableTimer(config_.maxConnectionDuration().value());
}
read_callbacks_->connection().setDelayedCloseTimeout(config_.delayedCloseTimeout());
read_callbacks_->connection().setConnectionStats(
{stats_.named_.downstream_cx_rx_bytes_total_, stats_.named_.downstream_cx_rx_bytes_buffered_,
stats_.named_.downstream_cx_tx_bytes_total_, stats_.named_.downstream_cx_tx_bytes_buffered_,
nullptr, &stats_.named_.downstream_cx_delayed_close_timeout_});
} | 0 | [
"CWE-400"
] | envoy | 0e49a495826ea9e29134c1bd54fdeb31a034f40c | 274,926,383,262,637,380,000,000,000,000,000,000,000 | 30 | http/2: add stats and stream flush timeout (#139)
This commit adds a new stream flush timeout to guard against a
remote server that does not open window once an entire stream has
been buffered for flushing. Additional stats have also been added
to better understand the codecs view of active streams as well as
amount of data buffered.
Signed-off-by: Matt Klein <[email protected]> |
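The guard works by arming a timer when a stream still has data buffered for the peer and resetting the stream if the flow-control window never opens before the timer fires. A tiny C model of that bookkeeping (the 30-second value and the structure layout are assumptions, not Envoy's implementation):

#include <stdio.h>
#include <time.h>

#define FLUSH_TIMEOUT_SEC 30   /* assumed default */

struct stream {
    size_t buffered;     /* bytes waiting for peer flow-control window */
    time_t flush_armed;  /* 0 = flush timer not armed                  */
};

static void on_data_buffered(struct stream *s, size_t n, time_t now)
{
    s->buffered += n;
    if (s->flush_armed == 0)
        s->flush_armed = now;          /* arm the flush timeout */
}

static void on_tick(struct stream *s, time_t now)
{
    if (s->buffered && s->flush_armed &&
        now - s->flush_armed >= FLUSH_TIMEOUT_SEC)
        printf("flush timeout: reset stream, drop %zu buffered bytes\n", s->buffered);
}

int main(void)
{
    struct stream s = { 0, 0 };
    time_t t0 = time(NULL);

    on_data_buffered(&s, 65536, t0);
    on_tick(&s, t0 + 31);              /* peer never opened window */
    return 0;
}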
PHPAPI int php_lint_script(zend_file_handle *file)
{
zend_op_array *op_array;
int retval = FAILURE;
zend_try {
op_array = zend_compile_file(file, ZEND_INCLUDE);
zend_destroy_file_handle(file);
if (op_array) {
destroy_op_array(op_array);
efree(op_array);
retval = SUCCESS;
}
} zend_end_try();
if (EG(exception)) {
zend_exception_error(EG(exception), E_ERROR);
}
return retval;
} | 0 | [] | php-src | 9a07245b728714de09361ea16b9c6fcf70cb5685 | 17,632,733,470,227,389,000,000,000,000,000,000,000 | 21 | Fixed bug #71273 A wrong ext directory setup in php.ini leads to crash |
R_API RBinField *r_bin_field_new(ut64 paddr, ut64 vaddr, int size, const char *name, const char *comment, const char *format) {
RBinField *ptr;
if (!(ptr = R_NEW0 (RBinField))) {
return NULL;
}
ptr->name = strdup (name);
ptr->comment = (comment && *comment)? strdup (comment): NULL;
ptr->format = (format && *format)? strdup (format): NULL;
ptr->paddr = paddr;
ptr->size = size;
// ptr->visibility = ???
ptr->vaddr = vaddr;
return ptr;
} | 0 | [
"CWE-125"
] | radare2 | d31c4d3cbdbe01ea3ded16a584de94149ecd31d9 | 94,409,669,705,540,030,000,000,000,000,000,000,000 | 14 | Fix #8748 - Fix oobread on string search |
int security_file_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
return security_ops->file_ioctl(file, cmd, arg);
} | 0 | [] | linux-2.6 | ee18d64c1f632043a02e6f5ba5e045bb26a5465f | 84,611,148,197,438,220,000,000,000,000,000,000,000 | 4 | KEYS: Add a keyctl to install a process's session keyring on its parent [try #6]
Add a keyctl to install a process's session keyring onto its parent. This
replaces the parent's session keyring. Because the COW credential code does
not permit one process to change another process's credentials directly, the
change is deferred until userspace next starts executing again. Normally this
will be after a wait*() syscall.
To support this, three new security hooks have been provided:
cred_alloc_blank() to allocate unset security creds, cred_transfer() to fill in
the blank security creds and key_session_to_parent() - which asks the LSM if
the process may replace its parent's session keyring.
The replacement may only happen if the process has the same ownership details
as its parent, and the process has LINK permission on the session keyring, and
the session keyring is owned by the process, and the LSM permits it.
Note that this requires alteration to each architecture's notify_resume path.
This has been done for all arches barring blackfin, m68k* and xtensa, all of
which need assembly alteration to support TIF_NOTIFY_RESUME. This allows the
replacement to be performed at the point the parent process resumes userspace
execution.
This allows the userspace AFS pioctl emulation to fully emulate newpag() and
the VIOCSETTOK and VIOCSETTOK2 pioctls, all of which require the ability to
alter the parent process's PAG membership. However, since kAFS doesn't use
PAGs per se, but rather dumps the keys into the session keyring, the session
keyring of the parent must be replaced if, for example, VIOCSETTOK is passed
the newpag flag.
This can be tested with the following program:
#include <stdio.h>
#include <stdlib.h>
#include <keyutils.h>
#define KEYCTL_SESSION_TO_PARENT 18
#define OSERROR(X, S) do { if ((long)(X) == -1) { perror(S); exit(1); } } while(0)
int main(int argc, char **argv)
{
key_serial_t keyring, key;
long ret;
keyring = keyctl_join_session_keyring(argv[1]);
OSERROR(keyring, "keyctl_join_session_keyring");
key = add_key("user", "a", "b", 1, keyring);
OSERROR(key, "add_key");
ret = keyctl(KEYCTL_SESSION_TO_PARENT);
OSERROR(ret, "KEYCTL_SESSION_TO_PARENT");
return 0;
}
Compiled and linked with -lkeyutils, you should see something like:
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: _ses
355907932 --alswrv 4043 -1 \_ keyring: _uid.4043
[dhowells@andromeda ~]$ /tmp/newpag
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: _ses
1055658746 --alswrv 4043 4043 \_ user: a
[dhowells@andromeda ~]$ /tmp/newpag hello
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: hello
340417692 --alswrv 4043 4043 \_ user: a
Where the test program creates a new session keyring, sticks a user key named
'a' into it and then installs it on its parent.
Signed-off-by: David Howells <[email protected]>
Signed-off-by: James Morris <[email protected]> |
void ImapModelOpenConnectionTest::init( bool startTlsRequired )
{
Imap::Mailbox::AbstractCache* cache = new Imap::Mailbox::MemoryCache(this);
factory = new Streams::FakeSocketFactory(Imap::CONN_STATE_CONNECTED_PRETLS_PRECAPS);
factory->setStartTlsRequired( startTlsRequired );
Imap::Mailbox::TaskFactoryPtr taskFactory( new Imap::Mailbox::TaskFactory() ); // yes, the real one
model = new Imap::Mailbox::Model(this, cache, Imap::Mailbox::SocketFactoryPtr( factory ), std::move(taskFactory));
connect(model, SIGNAL(authRequested()), this, SLOT(provideAuthDetails()), Qt::QueuedConnection);
connect(model, SIGNAL(needsSslDecision(QList<QSslCertificate>,QList<QSslError>)),
this, SLOT(acceptSsl(QList<QSslCertificate>,QList<QSslError>)), Qt::QueuedConnection);
LibMailboxSync::setModelNetworkPolicy(model, Imap::Mailbox::NETWORK_ONLINE);
QCoreApplication::processEvents();
task = new Imap::Mailbox::OpenConnectionTask(model);
completedSpy = new QSignalSpy(task, SIGNAL(completed(Imap::Mailbox::ImapTask*)));
failedSpy = new QSignalSpy(task, SIGNAL(failed(QString)));
authSpy = new QSignalSpy(model, SIGNAL(authRequested()));
connErrorSpy = new QSignalSpy(model, SIGNAL(connectionError(QString)));
startTlsUpgradeSpy = new QSignalSpy(model, SIGNAL(requireStartTlsInFuture()));
} | 0 | [
"CWE-200"
] | trojita | 25fffa3e25cbad85bbca804193ad336b090a9ce1 | 285,752,368,617,241,500,000,000,000,000,000,000,000 | 19 | IMAP: refuse to work when STARTTLS is required but server sends PREAUTH
Oops, we cannot send STARTTLS when the connection is already authenticated.
This is serious enough to warrant an error; an attacker might be going after a
plaintext of a message we're going to APPEND, etc.
Thanks to Arnt Gulbrandsen on the imap-protocol ML for asking what happens when
we're configured to request STARTTLS and a PREAUTH is received, and to Michael M
Slusarz for starting that discussion.
Hope the error message is readable enough.
CVE: CVE-2014-2567 |
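The rule being enforced is that a client configured to require STARTTLS must drop the connection when the greeting is PREAUTH, because STARTTLS can no longer be issued on an already-authenticated connection. A sketch of the state check (plain C rather than the Qt/C++ used by Trojita):

#include <stdio.h>

enum greeting { GREETING_OK, GREETING_PREAUTH, GREETING_BYE };

/* Returns 0 to continue the handshake, -1 to drop the connection. */
static int handle_greeting(enum greeting g, int starttls_required)
{
    switch (g) {
    case GREETING_PREAUTH:
        if (starttls_required) {
            fprintf(stderr, "refusing PREAUTH: STARTTLS required but impossible\n");
            return -1;
        }
        return 0;
    case GREETING_BYE:
        return -1;
    case GREETING_OK:
    default:
        return 0;                      /* proceed, issue STARTTLS if required */
    }
}

int main(void)
{
    printf("%d\n", handle_greeting(GREETING_PREAUTH, 1));  /* -1 */
    printf("%d\n", handle_greeting(GREETING_OK, 1));       /*  0 */
    return 0;
}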
njs_vmcode_template_literal(njs_vm_t *vm, njs_value_t *invld1,
njs_value_t *retval)
{
njs_array_t *array;
njs_value_t *value;
njs_jump_off_t ret;
static const njs_function_t concat = {
.native = 1,
.args_offset = 1,
.u.native = njs_string_prototype_concat
};
value = njs_scope_valid_value(vm, (njs_index_t) retval);
if (!njs_is_primitive(value)) {
array = njs_array(value);
ret = njs_function_frame(vm, (njs_function_t *) &concat,
&njs_string_empty, array->start,
array->length, 0);
if (njs_slow_path(ret != NJS_OK)) {
return ret;
}
ret = njs_function_frame_invoke(vm, value);
if (njs_slow_path(ret != NJS_OK)) {
return ret;
}
}
return sizeof(njs_vmcode_template_literal_t);
} | 0 | [
"CWE-703",
"CWE-754"
] | njs | 222d6fdcf0c6485ec8e175f3a7b70d650c234b4e | 133,894,417,491,719,730,000,000,000,000,000,000,000 | 33 | Fixed njs_vmcode_interpreter() when "toString" conversion fails.
Previously, while interpreting a user function, njs_vmcode_interpreter()
might return prematurely when an error happens. This is not correct
because the current frame has to be unwound (or exception caught)
first.
The fix is to exit through only 5 appropriate exit points to ensure
proper unwinding.
This closes #467 issue on Github. |
AfterTriggerSetState(ConstraintsSetStmt *stmt)
{
int my_level = GetCurrentTransactionNestLevel();
/*
* Ignore call if we aren't in a transaction. (Shouldn't happen?)
*/
if (afterTriggers == NULL)
return;
/*
* If in a subtransaction, and we didn't save the current state already,
* save it so it can be restored if the subtransaction aborts.
*/
if (my_level > 1 &&
afterTriggers->state_stack[my_level] == NULL)
{
afterTriggers->state_stack[my_level] =
SetConstraintStateCopy(afterTriggers->state);
}
/*
* Handle SET CONSTRAINTS ALL ...
*/
if (stmt->constraints == NIL)
{
/*
* Forget any previous SET CONSTRAINTS commands in this transaction.
*/
afterTriggers->state->numstates = 0;
/*
* Set the per-transaction ALL state to known.
*/
afterTriggers->state->all_isset = true;
afterTriggers->state->all_isdeferred = stmt->deferred;
}
else
{
Relation conrel;
Relation tgrel;
List *conoidlist = NIL;
List *tgoidlist = NIL;
ListCell *lc;
/*
* Handle SET CONSTRAINTS constraint-name [, ...]
*
* First, identify all the named constraints and make a list of their
* OIDs. Since, unlike the SQL spec, we allow multiple constraints of
* the same name within a schema, the specifications are not
* necessarily unique. Our strategy is to target all matching
* constraints within the first search-path schema that has any
* matches, but disregard matches in schemas beyond the first match.
* (This is a bit odd but it's the historical behavior.)
*/
conrel = heap_open(ConstraintRelationId, AccessShareLock);
foreach(lc, stmt->constraints)
{
RangeVar *constraint = lfirst(lc);
bool found;
List *namespacelist;
ListCell *nslc;
if (constraint->catalogname)
{
if (strcmp(constraint->catalogname, get_database_name(MyDatabaseId)) != 0)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("cross-database references are not implemented: \"%s.%s.%s\"",
constraint->catalogname, constraint->schemaname,
constraint->relname)));
}
/*
* If we're given the schema name with the constraint, look only
* in that schema. If given a bare constraint name, use the
* search path to find the first matching constraint.
*/
if (constraint->schemaname)
{
Oid namespaceId = LookupExplicitNamespace(constraint->schemaname,
false);
namespacelist = list_make1_oid(namespaceId);
}
else
{
namespacelist = fetch_search_path(true);
}
found = false;
foreach(nslc, namespacelist)
{
Oid namespaceId = lfirst_oid(nslc);
SysScanDesc conscan;
ScanKeyData skey[2];
HeapTuple tup;
ScanKeyInit(&skey[0],
Anum_pg_constraint_conname,
BTEqualStrategyNumber, F_NAMEEQ,
CStringGetDatum(constraint->relname));
ScanKeyInit(&skey[1],
Anum_pg_constraint_connamespace,
BTEqualStrategyNumber, F_OIDEQ,
ObjectIdGetDatum(namespaceId));
conscan = systable_beginscan(conrel, ConstraintNameNspIndexId,
true, NULL, 2, skey);
while (HeapTupleIsValid(tup = systable_getnext(conscan)))
{
Form_pg_constraint con = (Form_pg_constraint) GETSTRUCT(tup);
if (con->condeferrable)
conoidlist = lappend_oid(conoidlist,
HeapTupleGetOid(tup));
else if (stmt->deferred)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("constraint \"%s\" is not deferrable",
constraint->relname)));
found = true;
}
systable_endscan(conscan);
/*
* Once we've found a matching constraint we do not search
* later parts of the search path.
*/
if (found)
break;
}
list_free(namespacelist);
/*
* Not found ?
*/
if (!found)
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_OBJECT),
errmsg("constraint \"%s\" does not exist",
constraint->relname)));
}
heap_close(conrel, AccessShareLock);
/*
* Now, locate the trigger(s) implementing each of these constraints,
* and make a list of their OIDs.
*/
tgrel = heap_open(TriggerRelationId, AccessShareLock);
foreach(lc, conoidlist)
{
Oid conoid = lfirst_oid(lc);
bool found;
ScanKeyData skey;
SysScanDesc tgscan;
HeapTuple htup;
found = false;
ScanKeyInit(&skey,
Anum_pg_trigger_tgconstraint,
BTEqualStrategyNumber, F_OIDEQ,
ObjectIdGetDatum(conoid));
tgscan = systable_beginscan(tgrel, TriggerConstraintIndexId, true,
NULL, 1, &skey);
while (HeapTupleIsValid(htup = systable_getnext(tgscan)))
{
Form_pg_trigger pg_trigger = (Form_pg_trigger) GETSTRUCT(htup);
/*
* Silently skip triggers that are marked as non-deferrable in
* pg_trigger. This is not an error condition, since a
* deferrable RI constraint may have some non-deferrable
* actions.
*/
if (pg_trigger->tgdeferrable)
tgoidlist = lappend_oid(tgoidlist,
HeapTupleGetOid(htup));
found = true;
}
systable_endscan(tgscan);
/* Safety check: a deferrable constraint should have triggers */
if (!found)
elog(ERROR, "no triggers found for constraint with OID %u",
conoid);
}
heap_close(tgrel, AccessShareLock);
/*
* Now we can set the trigger states of individual triggers for this
* xact.
*/
foreach(lc, tgoidlist)
{
Oid tgoid = lfirst_oid(lc);
SetConstraintState state = afterTriggers->state;
bool found = false;
int i;
for (i = 0; i < state->numstates; i++)
{
if (state->trigstates[i].sct_tgoid == tgoid)
{
state->trigstates[i].sct_tgisdeferred = stmt->deferred;
found = true;
break;
}
}
if (!found)
{
afterTriggers->state =
SetConstraintStateAddItem(state, tgoid, stmt->deferred);
}
}
}
/*
* SQL99 requires that when a constraint is set to IMMEDIATE, any deferred
* checks against that constraint must be made when the SET CONSTRAINTS
* command is executed -- i.e. the effects of the SET CONSTRAINTS command
* apply retroactively. We've updated the constraints state, so scan the
* list of previously deferred events to fire any that have now become
* immediate.
*
* Obviously, if this was SET ... DEFERRED then it can't have converted
* any unfired events to immediate, so we need do nothing in that case.
*/
if (!stmt->deferred)
{
AfterTriggerEventList *events = &afterTriggers->events;
bool snapshot_set = false;
while (afterTriggerMarkEvents(events, NULL, true))
{
CommandId firing_id = afterTriggers->firing_counter++;
/*
* Make sure a snapshot has been established in case trigger
* functions need one. Note that we avoid setting a snapshot if
* we don't find at least one trigger that has to be fired now.
* This is so that BEGIN; SET CONSTRAINTS ...; SET TRANSACTION
* ISOLATION LEVEL SERIALIZABLE; ... works properly. (If we are
* at the start of a transaction it's not possible for any trigger
* events to be queued yet.)
*/
if (!snapshot_set)
{
PushActiveSnapshot(GetTransactionSnapshot());
snapshot_set = true;
}
/*
* We can delete fired events if we are at top transaction level,
* but we'd better not if inside a subtransaction, since the
* subtransaction could later get rolled back.
*/
if (afterTriggerInvokeEvents(events, firing_id, NULL,
!IsSubTransaction()))
break; /* all fired */
}
if (snapshot_set)
PopActiveSnapshot();
}
} | 0 | [
"CWE-362"
] | postgres | 5f173040e324f6c2eebb90d86cf1b0cdb5890f0a | 311,837,499,081,200,450,000,000,000,000,000,000,000 | 279 | Avoid repeated name lookups during table and index DDL.
If the name lookups come to different conclusions due to concurrent
activity, we might perform some parts of the DDL on a different table
than other parts. At least in the case of CREATE INDEX, this can be
used to cause the permissions checks to be performed against a
different table than the index creation, allowing for a privilege
escalation attack.
This changes the calling convention for DefineIndex, CreateTrigger,
transformIndexStmt, transformAlterTableStmt, CheckIndexCompatible
(in 9.2 and newer), and AlterTable (in 9.1 and older). In addition,
CheckRelationOwnership is removed in 9.2 and newer and the calling
convention is changed in older branches. A field has also been added
to the Constraint node (FkConstraint in 8.4). Third-party code calling
these functions or using the Constraint node will require updating.
Report by Andres Freund. Patch by Robert Haas and Andres Freund,
reviewed by Tom Lane.
Security: CVE-2014-0062 |
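The underlying hazard is the familiar look-up-by-name race: resolving the same name twice can yield two different objects, so checks done against the first resolution do not protect the work done against the second. The cure, as in this commit, is to resolve once and pass the resolved handle around; a minimal file-system analogue of the same pattern:

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Race-prone: two lookups of the same name may observe different objects. */
static void check_then_use(const char *name)
{
    struct stat st;
    int fd;

    if (stat(name, &st) == 0)          /* lookup #1: the permission check */
        printf("checked object of size %lld\n", (long long)st.st_size);
    /* ... an attacker can swap `name` here ... */
    fd = open(name, O_RDONLY);         /* lookup #2: may be a different file */
    if (fd >= 0)
        close(fd);
}

/* Safe: resolve the name once, then do every check on the same handle. */
static void open_then_check(const char *name)
{
    int fd = open(name, O_RDONLY);
    struct stat st;

    if (fd < 0)
        return;
    if (fstat(fd, &st) == 0)           /* same object the later work will use */
        printf("checked and using the same inode, size %lld\n", (long long)st.st_size);
    close(fd);
}

int main(void)
{
    check_then_use("/etc/hostname");
    open_then_check("/etc/hostname");
    return 0;
}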
static int airspy_s_ctrl(struct v4l2_ctrl *ctrl)
{
struct airspy *s = container_of(ctrl->handler, struct airspy, hdl);
int ret;
switch (ctrl->id) {
case V4L2_CID_RF_TUNER_LNA_GAIN_AUTO:
case V4L2_CID_RF_TUNER_LNA_GAIN:
ret = airspy_set_lna_gain(s);
break;
case V4L2_CID_RF_TUNER_MIXER_GAIN_AUTO:
case V4L2_CID_RF_TUNER_MIXER_GAIN:
ret = airspy_set_mixer_gain(s);
break;
case V4L2_CID_RF_TUNER_IF_GAIN:
ret = airspy_set_if_gain(s);
break;
default:
dev_dbg(s->dev, "unknown ctrl: id=%d name=%s\n",
ctrl->id, ctrl->name);
ret = -EINVAL;
}
return ret;
} | 0 | [
"CWE-119"
] | media_tree | aa93d1fee85c890a34f2510a310e55ee76a27848 | 339,187,752,766,960,670,000,000,000,000,000,000,000 | 25 | media: fix airspy usb probe error path
Fix a memory leak on probe error of the airspy usb device driver.
The problem is triggered when more than 64 usb devices register with
v4l2 of type VFL_TYPE_SDR or VFL_TYPE_SUBDEV.
The memory leak is caused by the probe function of the airspy driver
mishandeling errors and not freeing the corresponding control structures
when an error occours registering the device to v4l2 core.
A BadUSB device can emulate 64 of these devices and then, through
continual emulated connect/disconnect of the 65th device, cause the
kernel to run out of RAM and crash, resulting in a local DoS
vulnerability.
Fixes CVE-2016-5400
Signed-off-by: James Patrick-Evans <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: [email protected] # 3.17+
Signed-off-by: Linus Torvalds <[email protected]> |
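The fix described above is essentially error-path hygiene: whatever probe allocates must be released when device registration fails. A small hedged sketch of that goto-unwind pattern follows; every name ending in _hypo is an illustrative stand-in, not the airspy driver or the V4L2 API.

#include <stdlib.h>

struct state_hypo {
    void *controls;              /* stands in for the v4l2 control handler state */
};

static int init_controls_hypo(struct state_hypo *s)
{
    s->controls = malloc(64);    /* the allocation that used to leak */
    return s->controls ? 0 : -1;
}

static void free_controls_hypo(struct state_hypo *s)
{
    free(s->controls);
}

static int register_device_hypo(struct state_hypo *s)
{
    (void)s;
    return -1;                   /* simulate the registration failure path */
}

static int probe_hypo(void)
{
    struct state_hypo *s = calloc(1, sizeof(*s));
    int ret;

    if (!s)
        return -1;

    ret = init_controls_hypo(s);
    if (ret != 0)
        goto err_free_state;

    ret = register_device_hypo(s);
    if (ret != 0)
        goto err_free_controls; /* this unwind step is what the fix adds */

    return 0;

err_free_controls:
    free_controls_hypo(s);       /* free the controls instead of leaking them */
err_free_state:
    free(s);
    return ret;
}

int main(void)
{
    return probe_hypo() == 0 ? 0 : 1;
}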
connection_threadmain(void *arg)
{
Slapi_PBlock *pb = slapi_pblock_new();
int32_t *snmp_vars_idx = (int32_t *) arg;
/* wait forever for new pb until one is available or shutdown */
int32_t interval = 0; /* used be 10 seconds */
Connection *conn = NULL;
Operation *op;
ber_tag_t tag = 0;
int thread_turbo_flag = 0;
int ret = 0;
int more_data = 0;
int replication_connection = 0; /* If this connection is from a replication supplier, we want to ensure that operation processing is serialized */
int doshutdown = 0;
int maxthreads = 0;
long bypasspollcnt = 0;
#if defined(hpux)
/* Arrange to ignore SIGPIPE signals. */
SIGNAL(SIGPIPE, SIG_IGN);
#endif
thread_private_snmp_vars_set_idx(*snmp_vars_idx);
while (1) {
int is_timedout = 0;
time_t curtime = 0;
if (op_shutdown) {
slapi_log_err(SLAPI_LOG_TRACE, "connection_threadmain",
"op_thread received shutdown signal\n");
slapi_pblock_destroy(pb);
g_decr_active_threadcnt();
return;
}
if (!thread_turbo_flag && !more_data) {
Connection *pb_conn = NULL;
/* If more data is left from the previous connection_read_operation,
we should finish the op now. Client might be thinking it's
done sending the request and wait for the response forever.
[blackflag 624234] */
ret = connection_wait_for_new_work(pb, interval);
switch (ret) {
case CONN_NOWORK:
PR_ASSERT(interval != 0); /* this should never happen */
continue;
case CONN_SHUTDOWN:
slapi_log_err(SLAPI_LOG_TRACE, "connection_threadmain",
"op_thread received shutdown signal\n");
slapi_pblock_destroy(pb);
g_decr_active_threadcnt();
return;
case CONN_FOUND_WORK_TO_DO:
/* note - don't need to lock here - connection should only
be used by this thread - since c_gettingber is set to 1
in connection_activity when the conn is added to the
work queue, setup_pr_read_pds won't add the connection prfd
to the poll list */
slapi_pblock_get(pb, SLAPI_CONNECTION, &pb_conn);
if (pb_conn == NULL) {
slapi_log_err(SLAPI_LOG_ERR, "connection_threadmain", "pb_conn is NULL\n");
slapi_pblock_destroy(pb);
g_decr_active_threadcnt();
return;
}
pthread_mutex_lock(&(pb_conn->c_mutex));
if (pb_conn->c_anonlimits_set == 0) {
/*
* We have a new connection, set the anonymous reslimit idletimeout
* if applicable.
*/
char *anon_dn = config_get_anon_limits_dn();
int idletimeout;
/* If an anonymous limits dn is set, use it to set the limits. */
if (anon_dn && (strlen(anon_dn) > 0)) {
Slapi_DN *anon_sdn = slapi_sdn_new_normdn_byref(anon_dn);
reslimit_update_from_dn(pb_conn, anon_sdn);
slapi_sdn_free(&anon_sdn);
if (slapi_reslimit_get_integer_limit(pb_conn,
pb_conn->c_idletimeout_handle,
&idletimeout) == SLAPI_RESLIMIT_STATUS_SUCCESS) {
pb_conn->c_idletimeout = idletimeout;
}
}
slapi_ch_free_string(&anon_dn);
/*
* Set connection as initialized to avoid setting anonymous limits
* multiple times on the same connection
*/
pb_conn->c_anonlimits_set = 1;
}
/* must hold c_mutex so that it synchronizes the IO layer push
* with a potential pending sasl bind that is registering the IO layer
*/
if (connection_call_io_layer_callbacks(pb_conn)) {
slapi_log_err(SLAPI_LOG_ERR, "connection_threadmain",
"Could not add/remove IO layers from connection\n");
}
pthread_mutex_unlock(&(pb_conn->c_mutex));
break;
default:
break;
}
} else {
/* The turbo mode may cause threads starvation.
Do a yield here to reduce the starving
*/
PR_Sleep(PR_INTERVAL_NO_WAIT);
pthread_mutex_lock(&(conn->c_mutex));
/* Make our own pb in turbo mode */
connection_make_new_pb(pb, conn);
if (connection_call_io_layer_callbacks(conn)) {
slapi_log_err(SLAPI_LOG_ERR, "connection_threadmain",
"Could not add/remove IO layers from connection\n");
}
pthread_mutex_unlock(&(conn->c_mutex));
if (!config_check_referral_mode()) {
slapi_counter_increment(g_get_per_thread_snmp_vars()->server_tbl.dsOpInitiated);
slapi_counter_increment(g_get_per_thread_snmp_vars()->ops_tbl.dsInOps);
}
}
/* Once we're here we have a pb */
slapi_pblock_get(pb, SLAPI_CONNECTION, &conn);
slapi_pblock_get(pb, SLAPI_OPERATION, &op);
if (conn == NULL || op == NULL) {
slapi_log_err(SLAPI_LOG_ERR, "connection_threadmain", "NULL param: conn (0x%p) op (0x%p)\n", conn, op);
slapi_pblock_destroy(pb);
g_decr_active_threadcnt();
return;
}
maxthreads = conn->c_max_threads_per_conn;
more_data = 0;
ret = connection_read_operation(conn, op, &tag, &more_data);
if ((ret == CONN_DONE) || (ret == CONN_TIMEDOUT)) {
slapi_log_err(SLAPI_LOG_CONNS, "connection_threadmain",
"conn %" PRIu64 " read not ready due to %d - thread_turbo_flag %d more_data %d "
"ops_initiated %d refcnt %d flags %d\n",
conn->c_connid, ret, thread_turbo_flag, more_data,
conn->c_opsinitiated, conn->c_refcnt, conn->c_flags);
} else if (ret == CONN_FOUND_WORK_TO_DO) {
slapi_log_err(SLAPI_LOG_CONNS, "connection_threadmain",
"conn %" PRIu64 " read operation successfully - thread_turbo_flag %d more_data %d "
"ops_initiated %d refcnt %d flags %d\n",
conn->c_connid, thread_turbo_flag, more_data,
conn->c_opsinitiated, conn->c_refcnt, conn->c_flags);
}
curtime = slapi_current_rel_time_t();
#define DB_PERF_TURBO 1
#if defined(DB_PERF_TURBO)
/* If it's been a while since we last did it ... */
if (curtime - conn->c_private->previous_count_check_time > CONN_TURBO_CHECK_INTERVAL) {
if (config_get_enable_turbo_mode()) {
int new_turbo_flag = 0;
/* Check the connection's activity level */
connection_check_activity_level(conn);
/* And if appropriate, change into or out of turbo mode */
connection_enter_leave_turbo(conn, thread_turbo_flag, &new_turbo_flag);
thread_turbo_flag = new_turbo_flag;
} else {
thread_turbo_flag = 0;
}
}
/* turn off turbo mode immediately if any pb waiting in global queue */
if (thread_turbo_flag && !WORK_Q_EMPTY) {
thread_turbo_flag = 0;
slapi_log_err(SLAPI_LOG_CONNS, "connection_threadmain",
"conn %" PRIu64 " leaving turbo mode - pb_q is not empty %d\n",
conn->c_connid, work_q_size);
}
#endif
switch (ret) {
case CONN_DONE:
/* This means that the connection was closed, so clear turbo mode */
/*FALLTHROUGH*/
case CONN_TIMEDOUT:
thread_turbo_flag = 0;
is_timedout = 1;
/* In the case of CONN_DONE, more_data could have been set to 1
* in connection_read_operation before an error was encountered.
* In that case, we need to set more_data to 0 - even if there is
* more data available, we're not going to use it anyway.
* In the case of CONN_TIMEDOUT, it is only used in one place, and
* more_data will never be set to 1, so it is safe to set it to 0 here.
* We need more_data to be 0 so the connection will be processed
* correctly at the end of this function.
*/
more_data = 0;
/* note:
* should call connection_make_readable after the op is removed
* connection_make_readable(conn);
*/
slapi_log_err(SLAPI_LOG_CONNS, "connection_threadmain",
"conn %" PRIu64 " leaving turbo mode due to %d\n",
conn->c_connid, ret);
goto done;
case CONN_SHUTDOWN:
slapi_log_err(SLAPI_LOG_TRACE, "connection_threadmain",
"op_thread received shutdown signal\n");
g_decr_active_threadcnt();
doshutdown = 1;
goto done; /* To destroy pb, jump to done once */
default:
break;
}
/* if we got here, then we had some read activity */
if (thread_turbo_flag) {
/* turbo mode avoids handle_pr_read_ready which avoids setting c_idlesince
update c_idlesince here since, if we got some read activity, we are
not idle */
conn->c_idlesince = curtime;
}
/*
* Do not put the connection back to the read ready poll list
* if the operation is unbind. Unbind will close the socket.
* Similarly, if we are in turbo mode, don't send the socket
* back to the poll set.
* more_data: [blackflag 624234]
* If the connection is from a replication supplier, don't make it readable here.
* We want to ensure that replication operations are processed strictly in the order
* they are received off the wire.
*/
replication_connection = conn->c_isreplication_session;
if ((tag != LDAP_REQ_UNBIND) && !thread_turbo_flag && !replication_connection) {
if (!more_data) {
conn->c_flags &= ~CONN_FLAG_MAX_THREADS;
pthread_mutex_lock(&(conn->c_mutex));
connection_make_readable_nolock(conn);
pthread_mutex_unlock(&(conn->c_mutex));
/* once the connection is readable, another thread may access conn,
* so need locking from here on */
signal_listner();
} else { /* more data in conn - just put back on work_q - bypass poll */
bypasspollcnt++;
pthread_mutex_lock(&(conn->c_mutex));
/* don't do this if it would put us over the max threads per conn */
if (conn->c_threadnumber < maxthreads) {
/* for turbo, c_idlesince is set above - for !turbo and
* !more_data, we put the conn back in the poll loop and
* c_idlesince is set in handle_pr_read_ready - since we
* are bypassing both of those, we set idlesince here
*/
conn->c_idlesince = curtime;
connection_activity(conn, maxthreads);
slapi_log_err(SLAPI_LOG_CONNS, "connection_threadmain", "conn %" PRIu64 " queued because more_data\n",
conn->c_connid);
} else {
/* keep count of how many times maxthreads has blocked an operation */
conn->c_maxthreadsblocked++;
if (conn->c_maxthreadsblocked == 1 && connection_has_psearch(conn)) {
slapi_log_err(SLAPI_LOG_NOTICE, "connection_threadmain",
"Connection (conn=%" PRIu64 ") has a running persistent search "
"that has exceeded the maximum allowed threads per connection. "
"New operations will be blocked.\n",
conn->c_connid);
}
}
pthread_mutex_unlock(&(conn->c_mutex));
}
}
/* are we in referral-only mode? */
if (config_check_referral_mode() && tag != LDAP_REQ_UNBIND) {
referral_mode_reply(pb);
goto done;
}
/* check if new password is required */
if (connection_need_new_password(conn, op, pb)) {
goto done;
}
/* if this is a bulk import, only "add" and "import done"
* are allowed */
if (conn->c_flags & CONN_FLAG_IMPORT) {
if ((tag != LDAP_REQ_ADD) && (tag != LDAP_REQ_EXTENDED)) {
/* no cookie for you. */
slapi_log_err(SLAPI_LOG_ERR, "connection_threadmain", "Attempted operation %lu "
"from within bulk import\n",
tag);
slapi_send_ldap_result(pb, LDAP_PROTOCOL_ERROR, NULL,
NULL, 0, NULL);
goto done;
}
}
/*
* Fix bz 1931820 issue (the check to set OP_FLAG_REPLICATED may be done
* before replication session is properly set).
*/
if (replication_connection) {
operation_set_flag(op, OP_FLAG_REPLICATED);
}
/*
* Call the do_<operation> function to process this request.
*/
connection_dispatch_operation(conn, op, pb);
done:
if (doshutdown) {
pthread_mutex_lock(&(conn->c_mutex));
connection_remove_operation_ext(pb, conn, op);
connection_make_readable_nolock(conn);
conn->c_threadnumber--;
slapi_counter_decrement(conns_in_maxthreads);
slapi_counter_decrement(g_get_per_thread_snmp_vars()->ops_tbl.dsConnectionsInMaxThreads);
connection_release_nolock(conn);
pthread_mutex_unlock(&(conn->c_mutex));
signal_listner();
slapi_pblock_destroy(pb);
return;
}
/*
* done with this operation. delete it from the op
* queue for this connection, delete the number of
* threads devoted to this connection, and see if
* there's more work to do right now on this conn.
*/
/* number of ops on this connection */
PR_AtomicIncrement(&conn->c_opscompleted);
/* total number of ops for the server */
slapi_counter_increment(g_get_per_thread_snmp_vars()->server_tbl.dsOpCompleted);
/* If this op isn't a persistent search, remove it */
if (op->o_flags & OP_FLAG_PS) {
/* Release the connection (i.e. decrease refcnt) at the condition
* this thread will not loop on it.
* If we are in turbo mode (dedicated to that connection) or
* more_data (continue reading buffered req) this thread
* continues to hold the connection
*/
if (!thread_turbo_flag && !more_data) {
pthread_mutex_lock(&(conn->c_mutex));
connection_release_nolock(conn); /* psearch acquires ref to conn - release this one now */
pthread_mutex_unlock(&(conn->c_mutex));
}
/* ps_add makes a shallow copy of the pb - so we
* can't free it or init it here - just set operation to NULL.
* ps_send_results will call connection_remove_operation_ext to free it
* The connection_thread private pblock ('pb') has be cloned and should only
* be reinit (slapi_pblock_init)
*/
slapi_pblock_set(pb, SLAPI_OPERATION, NULL);
slapi_pblock_init(pb);
} else {
/* delete from connection operation queue & decr refcnt */
int conn_closed = 0;
pthread_mutex_lock(&(conn->c_mutex));
connection_remove_operation_ext(pb, conn, op);
/* If we're in turbo mode, we keep our reference to the connection alive */
/* can't use the more_data var because connection could have changed in another thread */
slapi_log_err(SLAPI_LOG_CONNS, "connection_threadmain", "conn %" PRIu64 " check more_data %d thread_turbo_flag %d"
"repl_conn_bef %d, repl_conn_now %d\n",
conn->c_connid, more_data, thread_turbo_flag,
replication_connection, conn->c_isreplication_session);
if (!replication_connection && conn->c_isreplication_session) {
/* it a connection that was just flagged as replication connection */
more_data = 0;
} else {
/* normal connection or already established replication connection */
more_data = conn_buffered_data_avail_nolock(conn, &conn_closed) ? 1 : 0;
}
if (!more_data) {
if (!thread_turbo_flag) {
int32_t need_wakeup = 0;
/*
* Don't release the connection now.
* But note down what to do.
*/
if (replication_connection || (1 == is_timedout)) {
connection_make_readable_nolock(conn);
need_wakeup = 1;
}
if (!need_wakeup) {
if (conn->c_threadnumber == maxthreads) {
need_wakeup = 1;
} else {
need_wakeup = 0;
}
}
if (conn->c_threadnumber == maxthreads) {
conn->c_flags &= ~CONN_FLAG_MAX_THREADS;
slapi_counter_decrement(conns_in_maxthreads);
slapi_counter_decrement(g_get_per_thread_snmp_vars()->ops_tbl.dsConnectionsInMaxThreads);
}
conn->c_threadnumber--;
connection_release_nolock(conn);
/* If need_wakeup, call signal_listner once.
* Need to release the connection (refcnt--)
* before that call.
*/
if (need_wakeup) {
signal_listner();
need_wakeup = 0;
}
} else if (1 == is_timedout) {
/* covscan reports this code is unreachable (2019/6/4) */
connection_make_readable_nolock(conn);
signal_listner();
}
}
pthread_mutex_unlock(&(conn->c_mutex));
}
} /* while (1) */
} | 0 | [
"CWE-415"
] | 389-ds-base | a3c298f8140d3e4fa1bd5a670f1bb965a21a9b7b | 323,666,962,874,318,800,000,000,000,000,000,000,000 | 419 | Issue 5218 - double-free of the virtual attribute context in persistent search (#5219)
description:
A search is processed by a worker using a private pblock.
If the search is persistent, the worker spawns a thread
and duplicates its private pblock so that the spawned
thread can continue to process the persistent search.
The worker then ends the initial search, reinitializes (frees) its private pblock,
and returns to monitoring the wait_queue.
When the persistent search completes, it frees the duplicated
pblock.
The problem is that the private pblock and the duplicated pblock
refer to the same structure (pb_vattr_context).
That can lead to a double free.
Fix:
When cloning the pblock (slapi_pblock_clone), make sure
to transfer the references inside the original (private)
pblock to the target (cloned) one.
That includes the pb_vattr_context pointer.
Reviewed by: Mark Reynolds, James Chapman, Pierre Rogier (Thanks !)
Co-authored-by: Mark Reynolds <[email protected]> |
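The fix above hinges on ownership transfer during the clone. A hedged sketch of that idea follows; pblock_hypo and its fields are illustrative, not the real Slapi_PBlock layout or the slapi_pblock_clone() implementation.

#include <stdlib.h>

struct pblock_hypo {
    void *vattr_context;   /* owned pointer: whoever holds it frees it */
    int   op_flags;        /* plain value: safe to copy */
};

/* Move owned pointers to the clone and clear them in the source, so
 * exactly one side ever frees them. */
static void pblock_clone_hypo(struct pblock_hypo *dst, struct pblock_hypo *src)
{
    dst->op_flags      = src->op_flags;      /* values copy freely */
    dst->vattr_context = src->vattr_context; /* transfer ownership ... */
    src->vattr_context = NULL;               /* ... so src can no longer free it */
}

int main(void)
{
    struct pblock_hypo src = { malloc(16), 7 };
    struct pblock_hypo dst = { NULL, 0 };

    pblock_clone_hypo(&dst, &src);
    free(src.vattr_context);   /* frees NULL: harmless, no double free */
    free(dst.vattr_context);   /* the single real free */
    return 0;
}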
PHP_METHOD(Phar, addFile)
{
char *fname, *localname = NULL;
int fname_len, localname_len = 0;
php_stream *resource;
zval *zresource;
PHAR_ARCHIVE_OBJECT();
if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "s|s", &fname, &fname_len, &localname, &localname_len) == FAILURE) {
return;
}
#if PHP_API_VERSION < 20100412
if (PG(safe_mode) && (!php_checkuid(fname, NULL, CHECKUID_ALLOW_ONLY_FILE))) {
zend_throw_exception_ex(spl_ce_RuntimeException, 0 TSRMLS_CC, "phar error: unable to open file \"%s\" to add to phar archive, safe_mode restrictions prevent this", fname);
return;
}
#endif
if (!strstr(fname, "://") && php_check_open_basedir(fname TSRMLS_CC)) {
zend_throw_exception_ex(spl_ce_RuntimeException, 0 TSRMLS_CC, "phar error: unable to open file \"%s\" to add to phar archive, open_basedir restrictions prevent this", fname);
return;
}
if (!(resource = php_stream_open_wrapper(fname, "rb", 0, NULL))) {
zend_throw_exception_ex(spl_ce_RuntimeException, 0 TSRMLS_CC, "phar error: unable to open file \"%s\" to add to phar archive", fname);
return;
}
if (localname) {
fname = localname;
fname_len = localname_len;
}
MAKE_STD_ZVAL(zresource);
php_stream_to_zval(resource, zresource);
phar_add_file(&(phar_obj->arc.archive), fname, fname_len, NULL, 0, zresource TSRMLS_CC);
efree(zresource);
php_stream_close(resource);
} | 0 | [
"CWE-416"
] | php-src | b2cf3f064b8f5efef89bb084521b61318c71781b | 2,834,458,409,593,205,000,000,000,000,000,000,000 | 41 | Fixed bug #68901 (use after free) |
int qemu_chr_fe_get_msgfd(CharBackend *be)
{
CharDriverState *s = be->chr;
int fd;
int res = (qemu_chr_fe_get_msgfds(be, &fd, 1) == 1) ? fd : -1;
if (s && s->replay) {
fprintf(stderr,
"Replay: get msgfd is not supported for serial devices yet\n");
exit(1);
}
return res;
} | 0 | [
"CWE-416"
] | qemu | a4afa548fc6dd9842ed86639b4d37d4d1c4ad480 | 36,266,934,134,330,770,000,000,000,000,000,000,000 | 12 | char: move front end handlers in CharBackend
Since the handlers are associated with a CharBackend, rather than the
CharDriverState, it is more appropriate to store in CharBackend. This
avoids the handler copy dance in qemu_chr_fe_set_handlers() then
mux_chr_update_read_handler(), by storing the CharBackend pointer
directly.
Also a mux CharDriver should go through mux->backends[focused], since
chr->be will stay NULL. Before that, it was possible to call
chr->handler by mistake with surprising results, for ex through
qemu_chr_be_can_write(), which would result in calling the last set
handler front end, not the one with focus.
Signed-off-by: Marc-André Lureau <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]> |
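The refactoring described above moves the callbacks onto the per-front-end object so a multiplexer can dispatch to the backend that currently has focus. The sketch below illustrates that shape with hypothetical types; it is not the QEMU chardev API.

#include <stdio.h>

struct backend_hypo {
    void (*receive)(const char *buf);   /* handler owned by this front end */
};

struct mux_hypo {
    struct backend_hypo *backends[2];
    int focused;                        /* which front end gets the input */
};

static void console_receive(const char *buf) { printf("console: %s\n", buf); }
static void monitor_receive(const char *buf) { printf("monitor: %s\n", buf); }

static void mux_dispatch(struct mux_hypo *m, const char *buf)
{
    struct backend_hypo *be = m->backends[m->focused];

    if (be && be->receive)
        be->receive(buf);   /* the focused backend, not "the last handler set" */
}

int main(void)
{
    struct backend_hypo con = { console_receive };
    struct backend_hypo mon = { monitor_receive };
    struct mux_hypo mux = { { &con, &mon }, 0 };

    mux_dispatch(&mux, "hello");        /* goes to the console front end */
    mux.focused = 1;
    mux_dispatch(&mux, "info");         /* goes to the monitor front end */
    return 0;
}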
host_write_s2d (SF_PRIVATE *psf, const short *ptr, sf_count_t len)
{ BUF_UNION ubuf ;
int bufferlen, writecount ;
sf_count_t total = 0 ;
double scale ;
scale = (psf->scale_int_float == 0) ? 1.0 : 1.0 / 0x8000 ;
bufferlen = ARRAY_LEN (ubuf.dbuf) ;
while (len > 0)
{ if (len < bufferlen)
bufferlen = (int) len ;
s2d_array (ptr + total, ubuf.dbuf, bufferlen, scale) ;
if (psf->peak_info)
double64_peak_update (psf, ubuf.dbuf, bufferlen, total / psf->sf.channels) ;
if (psf->data_endswap == SF_TRUE)
endswap_double_array (ubuf.dbuf, bufferlen) ;
writecount = psf_fwrite (ubuf.dbuf, sizeof (double), bufferlen, psf) ;
total += writecount ;
if (writecount < bufferlen)
break ;
len -= writecount ;
} ;
return total ;
} /* host_write_s2d */ | 0 | [
"CWE-369"
] | libsndfile | 85c877d5072866aadbe8ed0c3e0590fbb5e16788 | 290,395,553,190,978,000,000,000,000,000,000,000,000 | 30 | double64_init: Check psf->sf.channels against upper bound
This prevents division by zero later in the code.
While the trivial case to catch this (i.e. sf.channels < 1) has already
been covered, a crafted file may report a number of channels that is
so high (i.e. > INT_MAX/sizeof(double)) that it "somehow" gets
miscalculated to zero (if this makes sense) in the determination of the
blockwidth. Since we only support a limited number of channels anyway,
make sure to check here as well.
CVE-2017-14634
Closes: https://github.com/erikd/libsndfile/issues/318
Signed-off-by: Erik de Castro Lopo <[email protected]> |
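The check described above guards against an integer overflow that would make the derived blockwidth zero and later act as a divisor. A small hedged sketch of such a bound check follows; validate_channels_hypo() is illustrative, not the actual libsndfile double64_init() code.

#include <limits.h>
#include <stdio.h>

static int validate_channels_hypo(int channels)
{
    if (channels < 1)
        return -1;                                  /* trivial case, already handled */
    if (channels > (int)(INT_MAX / sizeof(double)))
        return -1;                                  /* would overflow the blockwidth */
    return 0;
}

int main(void)
{
    printf("%d\n", validate_channels_hypo(2));          /* accepted */
    printf("%d\n", validate_channels_hypo(0x20000000)); /* rejected */
    return 0;
}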