| func (string, 0-484k) | target (int64, 0-1) | cwe (sequence, 0-4) | project (string, 799 classes) | commit_id (string, 40 chars) | hash (float64) | size (int64, 1-24k) | message (string, 0-13.3k) |
|---|---|---|---|---|---|---|---|
static int rbd_obj_copyup_current_snapc(struct rbd_obj_request *obj_req,
u32 bytes)
{
struct ceph_osd_request *osd_req;
int num_ops = count_write_ops(obj_req);
int which = 0;
int ret;
dout("%s obj_req %p bytes %u\n", __func__, obj_req, bytes);
if (bytes != MODS_ONLY)
num_ops++; /* copyup */
osd_req = rbd_obj_add_osd_request(obj_req, num_ops);
if (IS_ERR(osd_req))
return PTR_ERR(osd_req);
if (bytes != MODS_ONLY) {
ret = rbd_osd_setup_copyup(osd_req, which++, bytes);
if (ret)
return ret;
}
rbd_osd_setup_write_ops(osd_req, which);
rbd_osd_format_write(osd_req);
ret = ceph_osdc_alloc_messages(osd_req, GFP_NOIO);
if (ret)
return ret;
rbd_osd_submit(osd_req);
return 0;
} | 0 | [
"CWE-863"
] | linux | f44d04e696feaf13d192d942c4f14ad2e117065a | 129,828,732,894,609,540,000,000,000,000,000,000,000 | 33 | rbd: require global CAP_SYS_ADMIN for mapping and unmapping
It turns out that currently we rely only on sysfs attribute
permissions:
$ ll /sys/bus/rbd/{add*,remove*}
--w------- 1 root root 4096 Sep 3 20:37 /sys/bus/rbd/add
--w------- 1 root root 4096 Sep 3 20:37 /sys/bus/rbd/add_single_major
--w------- 1 root root 4096 Sep 3 20:37 /sys/bus/rbd/remove
--w------- 1 root root 4096 Sep 3 20:38 /sys/bus/rbd/remove_single_major
This means that images can be mapped and unmapped (i.e. block devices
can be created and deleted) by a UID 0 process even after it drops all
privileges or by any process with CAP_DAC_OVERRIDE in its user namespace
as long as UID 0 is mapped into that user namespace.
Be consistent with other virtual block devices (loop, nbd, dm, md, etc)
and require CAP_SYS_ADMIN in the initial user namespace for mapping and
unmapping, and also for dumping the configuration string and refreshing
the image header.
Cc: [email protected]
Signed-off-by: Ilya Dryomov <[email protected]>
Reviewed-by: Jeff Layton <[email protected]> |
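A minimal sketch of the hardening pattern this commit describes (the handler name and body are illustrative, not the exact rbd diff): bail out of the sysfs write handler unless the caller holds CAP_SYS_ADMIN in the initial user namespace, rather than trusting sysfs file permissions alone.

```c
#include <linux/capability.h>
#include <linux/device.h>

static ssize_t do_rbd_add(struct bus_type *bus, const char *buf, size_t count)
{
	if (!capable(CAP_SYS_ADMIN))	/* checked against the initial user ns */
		return -EPERM;
	/* ... parse buf and map the image as before ... */
	return count;
}
```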
static int parse_next_body_message_rfc822_init(struct message_parser_ctx *ctx,
struct message_block *block_r)
{
message_part_append(ctx);
return parse_next_header_init(ctx, block_r);
} | 0 | [
"CWE-20"
] | core | fb97a1cddbda4019e327fa736972a1c7433fedaa | 20,000,228,276,414,352,000,000,000,000,000,000,000 | 6 | lib-mail: message-parser - Fix assert-crash when enforcing MIME part limit
The limit could have been exceeded with message/rfc822 parts. |
static struct nsid_cache *netns_map_get_by_nsid(int nsid)
{
uint32_t h = NSID_HASH_NSID(nsid);
struct hlist_node *n;
hlist_for_each(n, &nsid_head[h]) {
struct nsid_cache *c = container_of(n, struct nsid_cache,
nsid_hash);
if (c->nsid == nsid)
return c;
}
return NULL;
} | 0 | [
"CWE-416"
] | iproute2 | 9bf2c538a0eb10d66e2365a655bf6c52f5ba3d10 | 48,138,501,704,960,310,000,000,000,000,000,000,000 | 14 | ipnetns: use-after-free problem in get_netnsid_from_name func
Reproduce with the following steps:
# ip netns add net1
# export MALLOC_MMAP_THRESHOLD_=0
# ip netns list
then Segmentation fault (core dumped) will occur.
In get_netnsid_from_name func, answer is freed before
rta_getattr_u32(tb[NETNSA_NSID]), where tb[] refers to answer's
content. If we set MALLOC_MMAP_THRESHOLD_=0, mmap will be adopted to
malloc memory, which will be freed immediately after calling free
func. So reading tb[NETNSA_NSID] will access the released memory
after free(answer).
Here, we move the rta_getattr_u32(tb[NETNSA_NSID]) read before free(answer).
Fixes: 86bf43c7c2f ("lib/libnetlink: update rtnl_talk to support malloc buff at run time")
Reported-by: Huiying Kou <[email protected]>
Signed-off-by: Zhiqiang Liu <[email protected]>
Acked-by: Phil Sutter <[email protected]>
Signed-off-by: Stephen Hemminger <[email protected]> |
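A hedged sketch of the reordering the commit describes (the wrapper function is invented for illustration; rta_getattr_u32() is libnetlink's accessor): read the attribute out of the netlink answer while the buffer is still live, then free it.

```c
#include <stdlib.h>
#include <stdint.h>
/* assumes libnetlink.h for struct rtattr, rta_getattr_u32() and NETNSA_NSID */

static uint32_t read_nsid_then_free(struct rtattr *tb[], void *answer)
{
	uint32_t nsid = (uint32_t)-1;

	if (tb[NETNSA_NSID])
		nsid = rta_getattr_u32(tb[NETNSA_NSID]); /* tb[] points into answer */
	free(answer);	/* safe: tb[] is no longer dereferenced */
	return nsid;
}
```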
size_t Magick::Image::channels() const
{
return(constImage()->number_channels);
} | 0 | [
"CWE-416"
] | ImageMagick | 8c35502217c1879cb8257c617007282eee3fe1cc | 170,643,364,628,948,130,000,000,000,000,000,000,000 | 4 | Added missing return to avoid use after free. |
appendInstructionChar(
const FileInfo *file, widechar *passInstructions, int *passIC, widechar ch) {
if (*passIC >= MAXSTRING) {
compileError(file, "multipass operand too long");
return 0;
}
passInstructions[(*passIC)++] = ch;
return 1;
} | 0 | [
"CWE-787"
] | liblouis | 2e4772befb2b1c37cb4b9d6572945115ee28630a | 30,460,265,145,993,846,000,000,000,000,000,000,000 | 9 | Prevent an invalid memory writes in compileRule
Thanks to Han Zheng for reporting it
Fixes #1214 |
ciMethod* ciEnv::get_method_by_index(constantPoolHandle cpool,
int index, Bytecodes::Code bc,
ciInstanceKlass* accessor) {
GUARDED_VM_ENTRY(return get_method_by_index_impl(cpool, index, bc, accessor);)
} | 0 | [] | jdk8u | 1dafef08cc922ee85a8e216387100dc681a5484d | 210,507,877,265,029,600,000,000,000,000,000,000,000 | 5 | 8281859: Improve class compilation
Reviewed-by: andrew
Backport-of: 3ac62a66efd05d0842076dd4cfbea0e53b12630f |
static int __init enable_swap_account(char *s)
{
/* consider enabled if no parameter or 1 is given */
if (!strcmp(s, "1"))
really_do_swap_account = 1;
else if (!strcmp(s, "0"))
really_do_swap_account = 0;
return 1;
} | 0 | [
"CWE-264"
] | linux-2.6 | 1a5a9906d4e8d1976b701f889d8f35d54b928f25 | 11,143,123,882,435,855,000,000,000,000,000,000,000 | 9 | mm: thp: fix pmd_bad() triggering in code paths holding mmap_sem read mode
In some cases it may happen that pmd_none_or_clear_bad() is called with
the mmap_sem hold in read mode. In those cases the huge page faults can
allocate hugepmds under pmd_none_or_clear_bad() and that can trigger a
false positive from pmd_bad() that will not like to see a pmd
materializing as trans huge.
It's not khugepaged causing the problem, khugepaged holds the mmap_sem
in write mode (and all those sites must hold the mmap_sem in read mode
to prevent pagetables to go away from under them, during code review it
seems vm86 mode on 32bit kernels requires that too unless it's
restricted to 1 thread per process or UP builds). The race is only with
the huge pagefaults that can convert a pmd_none() into a
pmd_trans_huge().
Effectively all these pmd_none_or_clear_bad() sites running with
mmap_sem in read mode are somewhat speculative with the page faults, and
the result is always undefined when they run simultaneously. This is
probably why it wasn't common to run into this. For example if the
madvise(MADV_DONTNEED) runs zap_page_range() shortly before the page
fault, the hugepage will not be zapped, if the page fault runs first it
will be zapped.
Altering pmd_bad() not to error out if it finds hugepmds won't be enough
to fix this, because zap_pmd_range would then proceed to call
zap_pte_range (which would be incorrect if the pmd become a
pmd_trans_huge()).
The simplest way to fix this is to read the pmd in the local stack
(regardless of what we read, no need of actual CPU barriers, only
compiler barrier needed), and be sure it is not changing under the code
that computes its value. Even if the real pmd is changing under the
value we hold on the stack, we don't care. If we actually end up in
zap_pte_range it means the pmd was not none already and it was not huge,
and it can't become huge from under us (khugepaged locking explained
above).
All we need is to enforce that there is no way anymore that in a code
path like below, pmd_trans_huge can be false, but pmd_none_or_clear_bad
can run into a hugepmd. The overhead of a barrier() is just a compiler
tweak and should not be measurable (I only added it for THP builds). I
don't exclude different compiler versions may have prevented the race
too by caching the value of *pmd on the stack (that hasn't been
verified, but it wouldn't be impossible considering
pmd_none_or_clear_bad, pmd_bad, pmd_trans_huge, pmd_none are all inlines
and there's no external function called in between pmd_trans_huge and
pmd_none_or_clear_bad).
if (pmd_trans_huge(*pmd)) {
if (next-addr != HPAGE_PMD_SIZE) {
VM_BUG_ON(!rwsem_is_locked(&tlb->mm->mmap_sem));
split_huge_page_pmd(vma->vm_mm, pmd);
} else if (zap_huge_pmd(tlb, vma, pmd, addr))
continue;
/* fall through */
}
if (pmd_none_or_clear_bad(pmd))
Because this race condition could be exercised without special
privileges this was reported in CVE-2012-1179.
The race was identified and fully explained by Ulrich who debugged it.
I'm quoting his accurate explanation below, for reference.
====== start quote =======
mapcount 0 page_mapcount 1
kernel BUG at mm/huge_memory.c:1384!
At some point prior to the panic, a "bad pmd ..." message similar to the
following is logged on the console:
mm/memory.c:145: bad pmd ffff8800376e1f98(80000000314000e7).
The "bad pmd ..." message is logged by pmd_clear_bad() before it clears
the page's PMD table entry.
143 void pmd_clear_bad(pmd_t *pmd)
144 {
-> 145 pmd_ERROR(*pmd);
146 pmd_clear(pmd);
147 }
After the PMD table entry has been cleared, there is an inconsistency
between the actual number of PMD table entries that are mapping the page
and the page's map count (_mapcount field in struct page). When the page
is subsequently reclaimed, __split_huge_page() detects this inconsistency.
1381 if (mapcount != page_mapcount(page))
1382 printk(KERN_ERR "mapcount %d page_mapcount %d\n",
1383 mapcount, page_mapcount(page));
-> 1384 BUG_ON(mapcount != page_mapcount(page));
The root cause of the problem is a race of two threads in a multithreaded
process. Thread B incurs a page fault on a virtual address that has never
been accessed (PMD entry is zero) while Thread A is executing an madvise()
system call on a virtual address within the same 2 MB (huge page) range.
virtual address space
.---------------------.
| |
| |
.-|---------------------|
| | |
| | |<-- B(fault)
| | |
2 MB | |/////////////////////|-.
huge < |/////////////////////| > A(range)
page | |/////////////////////|-'
| | |
| | |
'-|---------------------|
| |
| |
'---------------------'
- Thread A is executing an madvise(..., MADV_DONTNEED) system call
on the virtual address range "A(range)" shown in the picture.
sys_madvise
// Acquire the semaphore in shared mode.
down_read(&current->mm->mmap_sem)
...
madvise_vma
switch (behavior)
case MADV_DONTNEED:
madvise_dontneed
zap_page_range
unmap_vmas
unmap_page_range
zap_pud_range
zap_pmd_range
//
// Assume that this huge page has never been accessed.
// I.e. content of the PMD entry is zero (not mapped).
//
if (pmd_trans_huge(*pmd)) {
// We don't get here due to the above assumption.
}
//
// Assume that Thread B incurred a page fault and
.---------> // sneaks in here as shown below.
| //
| if (pmd_none_or_clear_bad(pmd))
| {
| if (unlikely(pmd_bad(*pmd)))
| pmd_clear_bad
| {
| pmd_ERROR
| // Log "bad pmd ..." message here.
| pmd_clear
| // Clear the page's PMD entry.
| // Thread B incremented the map count
| // in page_add_new_anon_rmap(), but
| // now the page is no longer mapped
| // by a PMD entry (-> inconsistency).
| }
| }
|
v
- Thread B is handling a page fault on virtual address "B(fault)" shown
in the picture.
...
do_page_fault
__do_page_fault
// Acquire the semaphore in shared mode.
down_read_trylock(&mm->mmap_sem)
...
handle_mm_fault
if (pmd_none(*pmd) && transparent_hugepage_enabled(vma))
// We get here due to the above assumption (PMD entry is zero).
do_huge_pmd_anonymous_page
alloc_hugepage_vma
// Allocate a new transparent huge page here.
...
__do_huge_pmd_anonymous_page
...
spin_lock(&mm->page_table_lock)
...
page_add_new_anon_rmap
// Here we increment the page's map count (starts at -1).
atomic_set(&page->_mapcount, 0)
set_pmd_at
// Here we set the page's PMD entry which will be cleared
// when Thread A calls pmd_clear_bad().
...
spin_unlock(&mm->page_table_lock)
The mmap_sem does not prevent the race because both threads are acquiring
it in shared mode (down_read). Thread B holds the page_table_lock while
the page's map count and PMD table entry are updated. However, Thread A
does not synchronize on that lock.
====== end quote =======
[[email protected]: checkpatch fixes]
Reported-by: Ulrich Obergfell <[email protected]>
Signed-off-by: Andrea Arcangeli <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Dave Jones <[email protected]>
Acked-by: Larry Woodman <[email protected]>
Acked-by: Rik van Riel <[email protected]>
Cc: <[email protected]> [2.6.38+]
Cc: Mark Salter <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> |
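A sketch of the shape of the fix (not necessarily the exact upstream helper): snapshot the pmd into a local variable, stabilize it with a compiler barrier, and test only the snapshot, so a hugepmd materializing under a reader holding mmap_sem in read mode can no longer be misclassified as a bad pmd and cleared.

```c
static inline int pmd_none_or_trans_huge_or_clear_bad(pmd_t *pmd)
{
	pmd_t pmdval = *pmd;	/* local snapshot of the pmd */

	/* Keep the compiler from re-reading *pmd in the checks below. */
	barrier();
	if (pmd_none(pmdval) || pmd_trans_huge(pmdval))
		return 1;	/* skip: empty, or handled by the THP paths */
	if (unlikely(pmd_bad(pmdval))) {
		pmd_clear_bad(pmd);
		return 1;
	}
	return 0;
}
```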
static __init int rb_hammer_test(void *arg)
{
while (!kthread_should_stop()) {
/* Send an IPI to all cpus to write data! */
smp_call_function(rb_ipi, NULL, 1);
/* No sleep, but for non preempt, let others run */
schedule();
}
return 0;
} | 0 | [
"CWE-190"
] | linux-stable | 59643d1535eb220668692a5359de22545af579f6 | 129,299,904,802,249,730,000,000,000,000,000,000,000 | 12 | ring-buffer: Prevent overflow of size in ring_buffer_resize()
If the size passed to ring_buffer_resize() is greater than ULONG_MAX - BUF_PAGE_SIZE
then the DIV_ROUND_UP() will return zero.
Here's the details:
# echo 18014398509481980 > /sys/kernel/debug/tracing/buffer_size_kb
tracing_entries_write() processes this and converts kb to bytes.
18014398509481980 << 10 = 18446744073709547520
and this is passed to ring_buffer_resize() as unsigned long size.
size = DIV_ROUND_UP(size, BUF_PAGE_SIZE);
Where DIV_ROUND_UP(a, b) is (a + b - 1)/b
BUF_PAGE_SIZE is 4080 and here
18446744073709547520 + 4080 - 1 = 18446744073709551599
where 18446744073709551599 is still smaller than 2^64
2^64 - 18446744073709551599 = 17
But now 18446744073709551599 / 4080 = 4521260802379792
and size = size * 4080 = 18446744073709551360
This is checked to make sure its still greater than 2 * 4080,
which it is.
Then we convert to the number of buffer pages needed.
nr_page = DIV_ROUND_UP(size, BUF_PAGE_SIZE)
but this time size is 18446744073709551360 and
2^64 - (18446744073709551360 + 4080 - 1) = -3823
Thus it overflows and the resulting number is less than 4080, which makes
3823 / 4080 = 0
and nr_pages is set to this. As we already checked against the minimum that
nr_pages may be, this causes the logic to fail as well, and we crash the
kernel.
There's no reason to have the two DIV_ROUND_UP() (that's just result of
historical code changes), clean up the code and fix this bug.
Cc: [email protected] # 3.5+
Fixes: 83f40318dab00 ("ring-buffer: Make removal of ring buffer pages atomic")
Signed-off-by: Steven Rostedt <[email protected]> |
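The arithmetic in the message can be reproduced verbatim in userspace; this standalone demo (values copied from the commit) shows the second DIV_ROUND_UP() wrapping to zero on a 64-bit machine.

```c
#include <stdio.h>

#define BUF_PAGE_SIZE 4080UL
#define DIV_ROUND_UP(a, b) (((a) + (b) - 1) / (b))

int main(void)
{
	unsigned long size = 18446744073709547520UL; /* 18014398509481980 << 10 */

	size = DIV_ROUND_UP(size, BUF_PAGE_SIZE);    /* ok: 4521260802379792 */
	size *= BUF_PAGE_SIZE;                       /* 18446744073709551360 */
	size = DIV_ROUND_UP(size, BUF_PAGE_SIZE);    /* (size + 4079) wraps to 3823 */
	printf("nr_pages = %lu\n", size);            /* prints 0 */
	return 0;
}
```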
lexer_hex_to_character (parser_context_t *context_p, /**< context */
const uint8_t *source_p, /**< current source position */
int length) /**< source length */
{
uint32_t result = 0;
do
{
uint32_t byte = *source_p++;
result <<= 4;
if (byte >= LIT_CHAR_0 && byte <= LIT_CHAR_9)
{
result += byte - LIT_CHAR_0;
}
else
{
byte = LEXER_TO_ASCII_LOWERCASE (byte);
if (byte >= LIT_CHAR_LOWERCASE_A && byte <= LIT_CHAR_LOWERCASE_F)
{
result += byte - (LIT_CHAR_LOWERCASE_A - 10);
}
else
{
parser_raise_error (context_p, PARSER_ERR_INVALID_ESCAPE_SEQUENCE);
}
}
}
while (--length > 0);
return (ecma_char_t) result;
} /* lexer_hex_to_character */ | 0 | [
"CWE-476"
] | jerryscript | e58f2880df608652aff7fd35c45b242467ec0e79 | 43,526,773,460,379,700,000,000,000,000,000,000,000 | 33 | Do not allocate memory for zero length strings. (#1844)
Fixes #1821.
JerryScript-DCO-1.0-Signed-off-by: Zoltan Herczeg [email protected] |
napi_status napi_escape_handle(napi_env env,
napi_escapable_handle_scope scope,
napi_value escapee,
napi_value* result) {
// Omit NAPI_PREAMBLE and GET_RETURN_STATUS because V8 calls here cannot throw
// JS exceptions.
CHECK_ENV(env);
CHECK_ARG(env, scope);
CHECK_ARG(env, escapee);
CHECK_ARG(env, result);
v8impl::EscapableHandleScopeWrapper* s =
v8impl::V8EscapableHandleScopeFromJsEscapableHandleScope(scope);
if (!s->escape_called()) {
*result = v8impl::JsValueFromV8LocalValue(
s->Escape(v8impl::V8LocalValueFromJsValue(escapee)));
return napi_clear_last_error(env);
}
return napi_set_last_error(env, napi_escape_called_twice);
} | 0 | [
"CWE-191"
] | node | 656260b4b65fec3b10f6da3fdc9f11fb941aafb5 | 295,811,786,881,047,400,000,000,000,000,000,000,000 | 20 | napi: fix memory corruption vulnerability
Fixes: https://hackerone.com/reports/784186
CVE-ID: CVE-2020-8174
PR-URL: https://github.com/nodejs-private/node-private/pull/195
Reviewed-By: Anna Henningsen <[email protected]>
Reviewed-By: Gabriel Schulhof <[email protected]>
Reviewed-By: Michael Dawson <[email protected]>
Reviewed-By: Colin Ihrig <[email protected]>
Reviewed-By: Rich Trott <[email protected]> |
cmd_str_r(const notify_script_t *script, char *buf, size_t len)
{
char *str_p;
int i;
size_t str_len;
str_p = buf;
for (i = 0; i < script->num_args; i++) {
/* Check there is enough room for the next word */
str_len = strlen(script->args[i]);
if (str_p + str_len + 2 + (i ? 1 : 0) >= buf + len)
return NULL;
if (i)
*str_p++ = ' ';
*str_p++ = '\'';
strcpy(str_p, script->args[i]);
str_p += str_len;
*str_p++ = '\'';
}
*str_p = '\0';
return buf;
} | 0 | [
"CWE-59",
"CWE-61"
] | keepalived | 04f2d32871bb3b11d7dc024039952f2fe2750306 | 25,335,271,424,374,764,000,000,000,000,000,000,000 | 25 | When opening files for write, ensure they aren't symbolic links
Issue #1048 identified that if, for example, a non-privileged user
created a symbolic link from /etc/keepalived.data to /etc/passwd,
writing to /etc/keepalived.data (which could be invoked via DBus)
would cause /etc/passwd to be overwritten.
This commit stops keepalived writing to pathnames where the ultimate
component is a symbolic link, by setting O_NOFOLLOW whenever opening
a file for writing.
This might break some setups, where, for example, /etc/keepalived.data
was a symbolic link to /home/fred/keepalived.data. If this was the case,
instead create a symbolic link from /home/fred/keepalived.data to
/tmp/keepalived.data, so that the file is still accessible via
/home/fred/keepalived.data.
There doesn't appear to be a way around this backward incompatibility,
since even checking if the pathname is a symbolic link prior to opening
for writing would create a race condition.
Signed-off-by: Quentin Armitage <[email protected]> |
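A minimal sketch of the described policy (the helper name is assumed, not keepalived's actual API): opening with O_NOFOLLOW makes the open itself fail with ELOOP when the final path component is a symlink, which avoids the check-then-open race mentioned above.

```c
#include <fcntl.h>
#include <stdio.h>

static FILE *fopen_for_write(const char *path)
{
	int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC | O_NOFOLLOW, 0644);

	if (fd == -1)
		return NULL;	/* errno == ELOOP if the last component is a symlink */
	return fdopen(fd, "w");
}
```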
R_API RList *r_anal_bb_list_new() {
RList *list = r_list_newf ((RListFree)r_anal_bb_free);
if (!list) {
return NULL;
}
return list;
} | 0 | [
"CWE-416"
] | radare2 | 90b71c017a7fa9732fe45fd21b245ee051b1f548 | 261,566,654,837,646,600,000,000,000,000,000,000,000 | 7 | Fix #10293 - Use-after-free in r_anal_bb_free() |
int tls1_alert_code(int code)
{
switch (code)
{
case SSL_AD_CLOSE_NOTIFY: return(SSL3_AD_CLOSE_NOTIFY);
case SSL_AD_UNEXPECTED_MESSAGE: return(SSL3_AD_UNEXPECTED_MESSAGE);
case SSL_AD_BAD_RECORD_MAC: return(SSL3_AD_BAD_RECORD_MAC);
case SSL_AD_DECRYPTION_FAILED: return(TLS1_AD_DECRYPTION_FAILED);
case SSL_AD_RECORD_OVERFLOW: return(TLS1_AD_RECORD_OVERFLOW);
case SSL_AD_DECOMPRESSION_FAILURE:return(SSL3_AD_DECOMPRESSION_FAILURE);
case SSL_AD_HANDSHAKE_FAILURE: return(SSL3_AD_HANDSHAKE_FAILURE);
case SSL_AD_NO_CERTIFICATE: return(-1);
case SSL_AD_BAD_CERTIFICATE: return(SSL3_AD_BAD_CERTIFICATE);
case SSL_AD_UNSUPPORTED_CERTIFICATE:return(SSL3_AD_UNSUPPORTED_CERTIFICATE);
case SSL_AD_CERTIFICATE_REVOKED:return(SSL3_AD_CERTIFICATE_REVOKED);
case SSL_AD_CERTIFICATE_EXPIRED:return(SSL3_AD_CERTIFICATE_EXPIRED);
case SSL_AD_CERTIFICATE_UNKNOWN:return(SSL3_AD_CERTIFICATE_UNKNOWN);
case SSL_AD_ILLEGAL_PARAMETER: return(SSL3_AD_ILLEGAL_PARAMETER);
case SSL_AD_UNKNOWN_CA: return(TLS1_AD_UNKNOWN_CA);
case SSL_AD_ACCESS_DENIED: return(TLS1_AD_ACCESS_DENIED);
case SSL_AD_DECODE_ERROR: return(TLS1_AD_DECODE_ERROR);
case SSL_AD_DECRYPT_ERROR: return(TLS1_AD_DECRYPT_ERROR);
case SSL_AD_EXPORT_RESTRICTION: return(TLS1_AD_EXPORT_RESTRICTION);
case SSL_AD_PROTOCOL_VERSION: return(TLS1_AD_PROTOCOL_VERSION);
case SSL_AD_INSUFFICIENT_SECURITY:return(TLS1_AD_INSUFFICIENT_SECURITY);
case SSL_AD_INTERNAL_ERROR: return(TLS1_AD_INTERNAL_ERROR);
case SSL_AD_USER_CANCELLED: return(TLS1_AD_USER_CANCELLED);
case SSL_AD_NO_RENEGOTIATION: return(TLS1_AD_NO_RENEGOTIATION);
case SSL_AD_UNSUPPORTED_EXTENSION: return(TLS1_AD_UNSUPPORTED_EXTENSION);
case SSL_AD_CERTIFICATE_UNOBTAINABLE: return(TLS1_AD_CERTIFICATE_UNOBTAINABLE);
case SSL_AD_UNRECOGNIZED_NAME: return(TLS1_AD_UNRECOGNIZED_NAME);
case SSL_AD_BAD_CERTIFICATE_STATUS_RESPONSE: return(TLS1_AD_BAD_CERTIFICATE_STATUS_RESPONSE);
case SSL_AD_BAD_CERTIFICATE_HASH_VALUE: return(TLS1_AD_BAD_CERTIFICATE_HASH_VALUE);
case SSL_AD_UNKNOWN_PSK_IDENTITY:return(TLS1_AD_UNKNOWN_PSK_IDENTITY);
#if 0 /* not appropriate for TLS, not used for DTLS */
case DTLS1_AD_MISSING_HANDSHAKE_MESSAGE: return
(DTLS1_AD_MISSING_HANDSHAKE_MESSAGE);
#endif
default: return(-1);
}
} | 1 | [] | openssl | edc032b5e3f3ebb1006a9c89e0ae00504f47966f | 221,298,453,108,972,230,000,000,000,000,000,000,000 | 41 | Add SRP support. |
static void hns_nic_drop_rx_fetch(struct hns_nic_ring_data *ring_data,
struct sk_buff *skb)
{
dev_kfree_skb_any(skb);
} | 0 | [
"CWE-416"
] | linux | 27463ad99f738ed93c7c8b3e2e5bc8c4853a2ff2 | 174,630,959,378,011,830,000,000,000,000,000,000,000 | 5 | net: hns: Fix a skb used after free bug
The skb may be freed in hns_nic_net_xmit_hw(), which still returns
NETDEV_TX_OK, causing hns_nic_net_xmit() to use a freed skb.
BUG: KASAN: use-after-free in hns_nic_net_xmit_hw+0x62c/0x940...
[17659.112635] alloc_debug_processing+0x18c/0x1a0
[17659.117208] __slab_alloc+0x52c/0x560
[17659.120909] kmem_cache_alloc_node+0xac/0x2c0
[17659.125309] __alloc_skb+0x6c/0x260
[17659.128837] tcp_send_ack+0x8c/0x280
[17659.132449] __tcp_ack_snd_check+0x9c/0xf0
[17659.136587] tcp_rcv_established+0x5a4/0xa70
[17659.140899] tcp_v4_do_rcv+0x27c/0x620
[17659.144687] tcp_prequeue_process+0x108/0x170
[17659.149085] tcp_recvmsg+0x940/0x1020
[17659.152787] inet_recvmsg+0x124/0x180
[17659.156488] sock_recvmsg+0x64/0x80
[17659.160012] SyS_recvfrom+0xd8/0x180
[17659.163626] __sys_trace_return+0x0/0x4
[17659.167506] INFO: Freed in kfree_skbmem+0xa0/0xb0 age=23 cpu=1 pid=13
[17659.174000] free_debug_processing+0x1d4/0x2c0
[17659.178486] __slab_free+0x240/0x390
[17659.182100] kmem_cache_free+0x24c/0x270
[17659.186062] kfree_skbmem+0xa0/0xb0
[17659.189587] __kfree_skb+0x28/0x40
[17659.193025] napi_gro_receive+0x168/0x1c0
[17659.197074] hns_nic_rx_up_pro+0x58/0x90
[17659.201038] hns_nic_rx_poll_one+0x518/0xbc0
[17659.205352] hns_nic_common_poll+0x94/0x140
[17659.209576] net_rx_action+0x458/0x5e0
[17659.213363] __do_softirq+0x1b8/0x480
[17659.217062] run_ksoftirqd+0x64/0x80
[17659.220679] smpboot_thread_fn+0x224/0x310
[17659.224821] kthread+0x150/0x170
[17659.228084] ret_from_fork+0x10/0x40
BUG: KASAN: use-after-free in hns_nic_net_xmit+0x8c/0xc0...
[17751.080490] __slab_alloc+0x52c/0x560
[17751.084188] kmem_cache_alloc+0x244/0x280
[17751.088238] __build_skb+0x40/0x150
[17751.091764] build_skb+0x28/0x100
[17751.095115] __alloc_rx_skb+0x94/0x150
[17751.098900] __napi_alloc_skb+0x34/0x90
[17751.102776] hns_nic_rx_poll_one+0x180/0xbc0
[17751.107097] hns_nic_common_poll+0x94/0x140
[17751.111333] net_rx_action+0x458/0x5e0
[17751.115123] __do_softirq+0x1b8/0x480
[17751.118823] run_ksoftirqd+0x64/0x80
[17751.122437] smpboot_thread_fn+0x224/0x310
[17751.126575] kthread+0x150/0x170
[17751.129838] ret_from_fork+0x10/0x40
[17751.133454] INFO: Freed in kfree_skbmem+0xa0/0xb0 age=19 cpu=7 pid=43
[17751.139951] free_debug_processing+0x1d4/0x2c0
[17751.144436] __slab_free+0x240/0x390
[17751.148051] kmem_cache_free+0x24c/0x270
[17751.152014] kfree_skbmem+0xa0/0xb0
[17751.155543] __kfree_skb+0x28/0x40
[17751.159022] napi_gro_receive+0x168/0x1c0
[17751.163074] hns_nic_rx_up_pro+0x58/0x90
[17751.167041] hns_nic_rx_poll_one+0x518/0xbc0
[17751.171358] hns_nic_common_poll+0x94/0x140
[17751.175585] net_rx_action+0x458/0x5e0
[17751.179373] __do_softirq+0x1b8/0x480
[17751.183076] run_ksoftirqd+0x64/0x80
[17751.186691] smpboot_thread_fn+0x224/0x310
[17751.190826] kthread+0x150/0x170
[17751.194093] ret_from_fork+0x10/0x40
Fixes: 13ac695e7ea1 ("net:hns: Add support of Hip06 SoC to the Hislicon Network Subsystem")
Signed-off-by: Yunsheng Lin <[email protected]>
Signed-off-by: lipeng <[email protected]>
Reported-by: Jun He <[email protected]>
Signed-off-by: David S. Miller <[email protected]> |
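A hedged, generic sketch of the fix pattern for this bug class (do_hw_xmit() is a placeholder, not the driver's real function): snapshot any skb fields needed after transmit, because a successful transmit path may have already consumed (freed) the skb.

```c
static int do_hw_xmit(struct net_device *ndev, struct sk_buff *skb); /* placeholder */

static netdev_tx_t xmit_wrapper(struct sk_buff *skb, struct net_device *ndev)
{
	unsigned int len = skb->len;	/* read before skb can be freed */
	int rc = do_hw_xmit(ndev, skb);	/* may call dev_kfree_skb_any(skb) */

	if (rc == NETDEV_TX_OK)
		ndev->stats.tx_bytes += len;	/* safe: uses the snapshot */
	return rc;
}
```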
static int uvc_scan_chain_entity(struct uvc_video_chain *chain,
struct uvc_entity *entity)
{
switch (UVC_ENTITY_TYPE(entity)) {
case UVC_VC_EXTENSION_UNIT:
if (uvc_trace_param & UVC_TRACE_PROBE)
printk(KERN_CONT " <- XU %d", entity->id);
if (entity->bNrInPins != 1) {
uvc_trace(UVC_TRACE_DESCR, "Extension unit %d has more "
"than 1 input pin.\n", entity->id);
return -1;
}
break;
case UVC_VC_PROCESSING_UNIT:
if (uvc_trace_param & UVC_TRACE_PROBE)
printk(KERN_CONT " <- PU %d", entity->id);
if (chain->processing != NULL) {
uvc_trace(UVC_TRACE_DESCR, "Found multiple "
"Processing Units in chain.\n");
return -1;
}
chain->processing = entity;
break;
case UVC_VC_SELECTOR_UNIT:
if (uvc_trace_param & UVC_TRACE_PROBE)
printk(KERN_CONT " <- SU %d", entity->id);
/* Single-input selector units are ignored. */
if (entity->bNrInPins == 1)
break;
if (chain->selector != NULL) {
uvc_trace(UVC_TRACE_DESCR, "Found multiple Selector "
"Units in chain.\n");
return -1;
}
chain->selector = entity;
break;
case UVC_ITT_VENDOR_SPECIFIC:
case UVC_ITT_CAMERA:
case UVC_ITT_MEDIA_TRANSPORT_INPUT:
if (uvc_trace_param & UVC_TRACE_PROBE)
printk(KERN_CONT " <- IT %d\n", entity->id);
break;
case UVC_OTT_VENDOR_SPECIFIC:
case UVC_OTT_DISPLAY:
case UVC_OTT_MEDIA_TRANSPORT_OUTPUT:
if (uvc_trace_param & UVC_TRACE_PROBE)
printk(KERN_CONT " OT %d", entity->id);
break;
case UVC_TT_STREAMING:
if (UVC_ENTITY_IS_ITERM(entity)) {
if (uvc_trace_param & UVC_TRACE_PROBE)
printk(KERN_CONT " <- IT %d\n", entity->id);
} else {
if (uvc_trace_param & UVC_TRACE_PROBE)
printk(KERN_CONT " OT %d", entity->id);
}
break;
default:
uvc_trace(UVC_TRACE_DESCR, "Unsupported entity type "
"0x%04x found in chain.\n", UVC_ENTITY_TYPE(entity));
return -1;
}
list_add_tail(&entity->chain, &chain->entities);
return 0;
} | 0 | [
"CWE-269"
] | linux | 68035c80e129c4cfec659aac4180354530b26527 | 158,672,920,257,125,290,000,000,000,000,000,000,000 | 82 | media: uvcvideo: Avoid cyclic entity chains due to malformed USB descriptors
Way back in 2017, fuzzing the 4.14-rc2 USB stack with syzkaller kicked
up the following WARNING from the UVC chain scanning code:
| list_add double add: new=ffff880069084010, prev=ffff880069084010,
| next=ffff880067d22298.
| ------------[ cut here ]------------
| WARNING: CPU: 1 PID: 1846 at lib/list_debug.c:31 __list_add_valid+0xbd/0xf0
| Modules linked in:
| CPU: 1 PID: 1846 Comm: kworker/1:2 Not tainted
| 4.14.0-rc2-42613-g1488251d1a98 #238
| Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
| Workqueue: usb_hub_wq hub_event
| task: ffff88006b01ca40 task.stack: ffff880064358000
| RIP: 0010:__list_add_valid+0xbd/0xf0 lib/list_debug.c:29
| RSP: 0018:ffff88006435ddd0 EFLAGS: 00010286
| RAX: 0000000000000058 RBX: ffff880067d22298 RCX: 0000000000000000
| RDX: 0000000000000058 RSI: ffffffff85a58800 RDI: ffffed000c86bbac
| RBP: ffff88006435dde8 R08: 1ffff1000c86ba52 R09: 0000000000000000
| R10: 0000000000000002 R11: 0000000000000000 R12: ffff880069084010
| R13: ffff880067d22298 R14: ffff880069084010 R15: ffff880067d222a0
| FS: 0000000000000000(0000) GS:ffff88006c900000(0000) knlGS:0000000000000000
| CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
| CR2: 0000000020004ff2 CR3: 000000006b447000 CR4: 00000000000006e0
| Call Trace:
| __list_add ./include/linux/list.h:59
| list_add_tail+0x8c/0x1b0 ./include/linux/list.h:92
| uvc_scan_chain_forward.isra.8+0x373/0x416
| drivers/media/usb/uvc/uvc_driver.c:1471
| uvc_scan_chain drivers/media/usb/uvc/uvc_driver.c:1585
| uvc_scan_device drivers/media/usb/uvc/uvc_driver.c:1769
| uvc_probe+0x77f2/0x8f00 drivers/media/usb/uvc/uvc_driver.c:2104
Looking into the output from usbmon, the interesting part is the
following data packet:
ffff880069c63e00 30710169 C Ci:1:002:0 0 143 = 09028f00 01030080
00090403 00000e01 00000924 03000103 7c003328 010204db
If we drop the lead configuration and interface descriptors, we're left
with an output terminal descriptor describing a generic display:
/* Output terminal descriptor */
buf[0] 09
buf[1] 24
buf[2] 03 /* UVC_VC_OUTPUT_TERMINAL */
buf[3] 00 /* ID */
buf[4] 01 /* type == 0x0301 (UVC_OTT_DISPLAY) */
buf[5] 03
buf[6] 7c
buf[7] 00 /* source ID refers to self! */
buf[8] 33
The problem with this descriptor is that it is self-referential: the
source ID of 0 matches itself! This causes the 'struct uvc_entity'
representing the display to be added to its chain list twice during
'uvc_scan_chain()': once via 'uvc_scan_chain_entity()' when it is
processed directly from the 'dev->entities' list and then again
immediately afterwards when trying to follow the source ID in
'uvc_scan_chain_forward()'
Add a check before adding an entity to a chain list to ensure that the
entity is not already part of a chain.
Link: https://lore.kernel.org/linux-media/CAAeHK+z+Si69jUR+N-SjN9q4O+o5KFiNManqEa-PjUta7EOb7A@mail.gmail.com/
Cc: <[email protected]>
Fixes: c0efd232929c ("V4L/DVB (8145a): USB Video Class driver")
Reported-by: Andrey Konovalov <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Laurent Pinchart <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]> |
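A hedged sketch of the guard the commit describes (the helper name is hypothetical): before the list_add_tail() at the end of uvc_scan_chain_entity(), walk the chain and reject an entity that is already linked, which is exactly what the self-referential output terminal above would produce.

```c
static bool uvc_entity_in_chain(struct uvc_video_chain *chain,
				struct uvc_entity *entity)
{
	struct uvc_entity *e;

	list_for_each_entry(e, &chain->entities, chain) {
		if (e == entity)
			return true;	/* already linked: cyclic topology */
	}
	return false;
}
```

Calling this before the list_add_tail() would turn the double add into a clean error return instead of list corruption.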
String *val_str(String *to)
{
return has_value() ? Date(this).to_string(to) : NULL;
} | 0 | [
"CWE-617"
] | server | 807945f2eb5fa22e6f233cc17b85a2e141efe2c8 | 174,353,533,427,953,100,000,000,000,000,000,000,000 | 4 | MDEV-26402: A SEGV in Item_field::used_tables/update_depend_map_for_order...
When doing condition pushdown from HAVING into WHERE,
Item_equal::create_pushable_equalities() calls
item->set_extraction_flag(IMMUTABLE_FL) for constant items.
Then, Item::cleanup_excluding_immutables_processor() checks for this flag
to see if it should call item->cleanup() or leave the item as-is.
The failure happens when a constant item has a non-constant one inside it,
like:
(tbl.col=0 AND impossible_cond)
item->walk(cleanup_excluding_immutables_processor) works in a bottom-up
way so it
1. will call Item_func_eq(tbl.col=0)->cleanup()
2. will not call Item_cond_and->cleanup (as the AND is constant)
This creates an item tree where a fixed Item has an un-fixed Item inside
it which eventually causes an assertion failure.
Fixed by introducing this rule: instead of just calling
item->set_extraction_flag(IMMUTABLE_FL);
we call Item::walk() to set the flag for all sub-items of the item. |
xfrm_policy_lookup(struct net *net, const struct flowi *fl, u16 family,
u8 dir, struct flow_cache_object *old_obj, void *ctx)
{
struct xfrm_policy *pol;
if (old_obj)
xfrm_pol_put(container_of(old_obj, struct xfrm_policy, flo));
pol = __xfrm_policy_lookup(net, fl, family, flow_to_policy_dir(dir));
if (IS_ERR_OR_NULL(pol))
return ERR_CAST(pol);
/* Resolver returns two references:
* one for cache and one for caller of flow_cache_lookup() */
xfrm_pol_hold(pol);
return &pol->flo;
} | 0 | [
"CWE-125"
] | ipsec | 7bab09631c2a303f87a7eb7e3d69e888673b9b7e | 180,823,233,955,164,430,000,000,000,000,000,000,000 | 18 | xfrm: policy: check policy direction value
The 'dir' parameter in xfrm_migrate() is a user-controlled byte which is used
as an array index. This can lead to an out-of-bound access, kernel lockup and
DoS. Add a check for the 'dir' value.
This fixes CVE-2017-11600.
References: https://bugzilla.redhat.com/show_bug.cgi?id=1474928
Fixes: 80c9abaabf42 ("[XFRM]: Extension for dynamic update of endpoint address(es)")
Cc: <[email protected]> # v2.6.21-rc1
Reported-by: "bo Zhang" <[email protected]>
Signed-off-by: Vladis Dronov <[email protected]>
Signed-off-by: Steffen Klassert <[email protected]> |
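A minimal sketch of the added bounds check, assuming it sits where the user-supplied direction first enters xfrm_migrate() (the wrapper function is invented for illustration):

```c
static int xfrm_migrate_check_dir(u8 dir)
{
	/* 'dir' is user-controlled and later indexes per-direction tables */
	if (dir >= XFRM_POLICY_MAX)
		return -EINVAL;
	return 0;
}
```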
fill_rectangles (void *_dst,
cairo_operator_t op,
const cairo_color_t *color,
cairo_rectangle_int_t *rects,
int num_rects)
{
cairo_image_surface_t *dst = _dst;
uint32_t pixel;
int i;
TRACE ((stderr, "%s\n", __FUNCTION__));
if (fill_reduces_to_source (op, color, dst, &pixel)) {
for (i = 0; i < num_rects; i++) {
pixman_fill ((uint32_t *) dst->data, dst->stride / sizeof (uint32_t),
PIXMAN_FORMAT_BPP (dst->pixman_format),
rects[i].x, rects[i].y,
rects[i].width, rects[i].height,
pixel);
}
} else {
pixman_image_t *src = _pixman_image_for_color (color);
if (unlikely (src == NULL))
return _cairo_error (CAIRO_STATUS_NO_MEMORY);
op = _pixman_operator (op);
for (i = 0; i < num_rects; i++) {
pixman_image_composite32 (op,
src, NULL, dst->pixman_image,
0, 0,
0, 0,
rects[i].x, rects[i].y,
rects[i].width, rects[i].height);
}
pixman_image_unref (src);
}
return CAIRO_STATUS_SUCCESS;
} | 0 | [] | cairo | 03a820b173ed1fdef6ff14b4468f5dbc02ff59be | 85,111,554,050,739,900,000,000,000,000,000,000,000 | 40 | Fix mask usage in image-compositor |
static X509 *find_issuer(X509_STORE_CTX *ctx, STACK_OF(X509) *sk, X509 *x)
{
int i;
X509 *issuer, *rv = NULL;
for (i = 0; i < sk_X509_num(sk); i++) {
issuer = sk_X509_value(sk, i);
if (ctx->check_issued(ctx, x, issuer)
&& (((x->ex_flags & EXFLAG_SI) != 0 && sk_X509_num(ctx->chain) == 1)
|| !sk_X509_contains(ctx->chain, issuer))) {
rv = issuer;
if (x509_check_cert_time(ctx, rv, -1))
break;
}
}
return rv;
} | 0 | [
"CWE-295"
] | openssl | 2a40b7bc7b94dd7de897a74571e7024f0cf0d63b | 198,235,576,472,052,340,000,000,000,000,000,000,000 | 17 | check_chain_extensions: Do not override error return value by check_curve
The X509_V_FLAG_X509_STRICT flag enables additional security checks of the
certificates present in a certificate chain. It is not set by default.
Starting from OpenSSL version 1.1.1h a check to disallow certificates with
explicitly encoded elliptic curve parameters in the chain was added to the
strict checks.
An error in the implementation of this check meant that the result of a
previous check to confirm that certificates in the chain are valid CA
certificates was overwritten. This effectively bypasses the check
that non-CA certificates must not be able to issue other certificates.
If a "purpose" has been configured then a subsequent check that the
certificate is consistent with that purpose also checks that it is a
valid CA. Therefore where a purpose is set the certificate chain will
still be rejected even when the strict flag has been used. A purpose is
set by default in libssl client and server certificate verification
routines, but it can be overriden by an application.
Affected applications explicitly set the X509_V_FLAG_X509_STRICT
verification flag and either do not set a purpose for the certificate
verification or, in the case of TLS client or server applications,
override the default purpose to make it not set.
CVE-2021-3450
Reviewed-by: Matt Caswell <[email protected]>
Reviewed-by: Paul Dale <[email protected]> |
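A simplified, hypothetical illustration of the bug class (both helpers are invented for the sketch): reusing ret for a second check silently discards the first check's verdict, which is how the strict-mode curve check could mask the is-a-valid-CA result.

```c
int check_ca(void *x);		/* returns 0 when x is NOT a valid CA */
int check_curve(void *x);	/* returns <= 0 on failure */

int validate_buggy(void *x)
{
	int ret = check_ca(x);

	ret = check_curve(x);	/* BUG: CA verdict overwritten */
	return ret > 0;
}

int validate_fixed(void *x)
{
	if (check_ca(x) == 0)
		return 0;	/* CA verdict preserved */
	return check_curve(x) > 0;
}
```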
GF_Err stsz_Write(GF_Box *s, GF_BitStream *bs)
{
GF_Err e;
u32 i;
GF_SampleSizeBox *ptr = (GF_SampleSizeBox *)s;
e = gf_isom_full_box_write(s, bs);
if (e) return e;
//in both versions this is still valid
if (ptr->type == GF_ISOM_BOX_TYPE_STSZ) {
gf_bs_write_u32(bs, ptr->sampleSize);
} else {
gf_bs_write_u24(bs, 0);
gf_bs_write_u8(bs, ptr->sampleSize);
}
gf_bs_write_u32(bs, ptr->sampleCount);
if (ptr->type == GF_ISOM_BOX_TYPE_STSZ) {
if (! ptr->sampleSize) {
for (i = 0; i < ptr->sampleCount; i++) {
gf_bs_write_u32(bs, ptr->sizes ? ptr->sizes[i] : 0);
}
}
} else {
for (i = 0; i < ptr->sampleCount; ) {
switch (ptr->sampleSize) {
case 4:
gf_bs_write_int(bs, ptr->sizes[i], 4);
if (i+1 < ptr->sampleCount) {
gf_bs_write_int(bs, ptr->sizes[i+1], 4);
} else {
//0 padding in odd sample count
gf_bs_write_int(bs, 0, 4);
}
i += 2;
break;
default:
gf_bs_write_int(bs, ptr->sizes[i], ptr->sampleSize);
i += 1;
break;
}
}
}
return GF_OK;
} | 0 | [
"CWE-400",
"CWE-401"
] | gpac | d2371b4b204f0a3c0af51ad4e9b491144dd1225c | 16,176,520,240,806,798,000,000,000,000,000,000,000 | 45 | prevent dref memleak on invalid input (#1183) |
void posixtimer_rearm(struct siginfo *info)
{
struct k_itimer *timr;
unsigned long flags;
timr = lock_timer(info->si_tid, &flags);
if (!timr)
return;
if (timr->it_requeue_pending == info->si_sys_private) {
timr->kclock->timer_rearm(timr);
timr->it_active = 1;
timr->it_overrun_last = timr->it_overrun;
timr->it_overrun = -1LL;
++timr->it_requeue_pending;
info->si_overrun = timer_overrun_to_int(timr, info->si_overrun);
}
unlock_timer(timr, flags);
} | 0 | [
"CWE-190"
] | linux | 78c9c4dfbf8c04883941445a195276bb4bb92c76 | 166,408,312,872,166,350,000,000,000,000,000,000,000 | 22 | posix-timers: Sanitize overrun handling
The posix timer overrun handling is broken because the forwarding functions
can return a huge number of overruns which does not fit in an int. As a
consequence timer_getoverrun(2) and siginfo::si_overrun can turn into
random number generators.
The k_clock::timer_forward() callbacks return a 64 bit value now. Make
k_itimer::it_overrun[_last] 64 bit as well, so the kernel internal
accounting is correct. Remove the temporary (int) casts.
Add a helper function which clamps the overrun value returned to user space
via timer_getoverrun(2) or siginfo::si_overrun limited to a positive value
between 0 and INT_MAX. INT_MAX is an indicator for user space that the
overrun value has been clamped.
Reported-by: Team OWL337 <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Acked-by: John Stultz <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Michael Kerrisk <[email protected]>
Link: https://lkml.kernel.org/r/[email protected] |
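A sketch of the clamping helper the commit describes, with the signature simplified from the kernel's: accumulate in 64 bits, then fold into [0, INT_MAX] before the value reaches user space.

```c
#include <limits.h>

static int overrun_to_int(long long overrun_last, int baseval)
{
	long long sum = overrun_last + baseval;	/* no 32-bit wraparound here */

	if (sum < 0)
		return 0;
	return sum > INT_MAX ? INT_MAX : (int)sum; /* INT_MAX signals clamping */
}
```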
Integer& BER_Decoder::GetInteger(Integer& integer)
{
if (!source_.GetError().What())
integer.Decode(source_);
return integer;
} | 0 | [
"CWE-254"
] | mysql-server | e7061f7e5a96c66cb2e0bf46bec7f6ff35801a69 | 55,845,786,421,744,410,000,000,000,000,000,000,000 | 6 | Bug #22738607: YASSL FUNCTION X509_NAME_GET_INDEX_BY_NID IS NOT WORKING AS EXPECTED. |
static int brcmf_cfg80211_stop_ap(struct wiphy *wiphy, struct net_device *ndev)
{
struct brcmf_if *ifp = netdev_priv(ndev);
s32 err;
struct brcmf_fil_bss_enable_le bss_enable;
struct brcmf_join_params join_params;
brcmf_dbg(TRACE, "Enter\n");
if (ifp->vif->wdev.iftype == NL80211_IFTYPE_AP) {
/* Due to most likely deauths outstanding we sleep */
/* first to make sure they get processed by fw. */
msleep(400);
if (ifp->vif->mbss) {
err = brcmf_fil_cmd_int_set(ifp, BRCMF_C_DOWN, 1);
return err;
}
/* First BSS doesn't get a full reset */
if (ifp->bsscfgidx == 0)
brcmf_fil_iovar_int_set(ifp, "closednet", 0);
memset(&join_params, 0, sizeof(join_params));
err = brcmf_fil_cmd_data_set(ifp, BRCMF_C_SET_SSID,
&join_params, sizeof(join_params));
if (err < 0)
brcmf_err("SET SSID error (%d)\n", err);
err = brcmf_fil_cmd_int_set(ifp, BRCMF_C_DOWN, 1);
if (err < 0)
brcmf_err("BRCMF_C_DOWN error %d\n", err);
err = brcmf_fil_cmd_int_set(ifp, BRCMF_C_SET_AP, 0);
if (err < 0)
brcmf_err("setting AP mode failed %d\n", err);
err = brcmf_fil_cmd_int_set(ifp, BRCMF_C_SET_INFRA, 0);
if (err < 0)
brcmf_err("setting INFRA mode failed %d\n", err);
if (brcmf_feat_is_enabled(ifp, BRCMF_FEAT_MBSS))
brcmf_fil_iovar_int_set(ifp, "mbss", 0);
err = brcmf_fil_cmd_int_set(ifp, BRCMF_C_SET_REGULATORY,
ifp->vif->is_11d);
if (err < 0)
brcmf_err("restoring REGULATORY setting failed %d\n",
err);
/* Bring device back up so it can be used again */
err = brcmf_fil_cmd_int_set(ifp, BRCMF_C_UP, 1);
if (err < 0)
brcmf_err("BRCMF_C_UP error %d\n", err);
} else {
bss_enable.bsscfgidx = cpu_to_le32(ifp->bsscfgidx);
bss_enable.enable = cpu_to_le32(0);
err = brcmf_fil_iovar_data_set(ifp, "bss", &bss_enable,
sizeof(bss_enable));
if (err < 0)
brcmf_err("bss_enable config failed %d\n", err);
}
brcmf_set_mpc(ifp, 1);
brcmf_configure_arp_nd_offload(ifp, true);
clear_bit(BRCMF_VIF_STATUS_AP_CREATED, &ifp->vif->sme_state);
brcmf_net_setcarrier(ifp, false);
return err;
} | 0 | [
"CWE-119",
"CWE-703"
] | linux | ded89912156b1a47d940a0c954c43afbabd0c42c | 334,235,425,063,467,120,000,000,000,000,000,000,000 | 63 | brcmfmac: avoid potential stack overflow in brcmf_cfg80211_start_ap()
User-space can choose to omit NL80211_ATTR_SSID and only provide raw
IE TLV data. When doing so it can provide SSID IE with length exceeding
the allowed size. The driver further processes this IE copying it
into a local variable without checking the length. Hence stack can be
corrupted and used as exploit.
Cc: [email protected] # v4.7
Reported-by: Daxing Guo <[email protected]>
Reviewed-by: Hante Meuleman <[email protected]>
Reviewed-by: Pieter-Paul Giesberts <[email protected]>
Reviewed-by: Franky Lin <[email protected]>
Signed-off-by: Arend van Spriel <[email protected]>
Signed-off-by: Kalle Valo <[email protected]> |
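A hedged sketch of the missing validation (the variable names mirror the driver's style but are assumptions): bound the IE-supplied SSID length by the destination buffer before copying it into the fixed-size structure.

```c
u32 ie_len = ssid_ie->len;

if (ie_len > sizeof(ssid_le.SSID))
	ie_len = sizeof(ssid_le.SSID);	/* 802.11 SSIDs are at most 32 bytes */
memcpy(ssid_le.SSID, ssid_ie->data, ie_len);
ssid_le.SSID_len = cpu_to_le32(ie_len);
```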
void arch_prepare_kretprobe(struct kretprobe_instance *ri, struct pt_regs *regs)
{
unsigned long *sara = stack_addr(regs);
ri->ret_addr = (kprobe_opcode_t *) *sara;
/* Replace the return addr with trampoline addr */
*sara = (unsigned long) &kretprobe_trampoline;
} | 0 | [
"CWE-264"
] | linux | 548acf19234dbda5a52d5a8e7e205af46e9da840 | 211,321,243,335,554,820,000,000,000,000,000,000,000 | 9 | x86/mm: Expand the exception table logic to allow new handling options
Huge amounts of help from Andy Lutomirski and Borislav Petkov to
produce this. Andy provided the inspiration to add classes to the
exception table with a clever bit-squeezing trick, Boris pointed
out how much cleaner it would all be if we just had a new field.
Linus Torvalds blessed the expansion with:
' I'd rather not be clever in order to save just a tiny amount of space
in the exception table, which isn't really criticial for anybody. '
The third field is another relative function pointer, this one to a
handler that executes the actions.
We start out with three handlers:
1: Legacy - just jumps the to fixup IP
2: Fault - provide the trap number in %ax to the fixup code
3: Cleaned up legacy for the uaccess error hack
Signed-off-by: Tony Luck <[email protected]>
Reviewed-by: Borislav Petkov <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/f6af78fcbd348cf4939875cfda9c19689b5e50b8.1455732970.git.tony.luck@intel.com
Signed-off-by: Ingo Molnar <[email protected]> |
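A sketch of the expanded entry layout the commit describes: three 32-bit fields, each a relative (self-referencing) offset, with the new third field selecting one of the handler classes listed above.

```c
/* Per the commit message: all three fields are relative offsets. */
struct exception_table_entry {
	int insn;	/* offset to the potentially faulting instruction */
	int fixup;	/* offset to the fixup code */
	int handler;	/* offset to the handler that executes the actions */
};
```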
static inline void __tcp_add_write_queue_head(struct sock *sk, struct sk_buff *skb)
{
__skb_queue_head(&sk->sk_write_queue, skb);
} | 0 | [
"CWE-416",
"CWE-269"
] | linux | bb1fceca22492109be12640d49f5ea5a544c6bb4 | 275,665,182,148,590,750,000,000,000,000,000,000,000 | 4 | tcp: fix use after free in tcp_xmit_retransmit_queue()
When tcp_sendmsg() allocates a fresh and empty skb, it puts it at the
tail of the write queue using tcp_add_write_queue_tail()
Then it attempts to copy user data into this fresh skb.
If the copy fails, we undo the work and remove the fresh skb.
Unfortunately, this undo lacks the change done to tp->highest_sack and
we can leave a dangling pointer (to a freed skb)
Later, tcp_xmit_retransmit_queue() can dereference this pointer and
access freed memory. For regular kernels where memory is not unmapped,
this might cause SACK bugs because tcp_highest_sack_seq() is buggy,
returning garbage instead of tp->snd_nxt, but with various debug
features like CONFIG_DEBUG_PAGEALLOC, this can crash the kernel.
This bug was found by Marco Grassi thanks to syzkaller.
Fixes: 6859d49475d4 ("[TCP]: Abstract tp->highest_sack accessing & point to next skb")
Reported-by: Marco Grassi <[email protected]>
Signed-off-by: Eric Dumazet <[email protected]>
Cc: Ilpo Järvinen <[email protected]>
Cc: Yuchung Cheng <[email protected]>
Cc: Neal Cardwell <[email protected]>
Acked-by: Neal Cardwell <[email protected]>
Reviewed-by: Cong Wang <[email protected]>
Signed-off-by: David S. Miller <[email protected]> |
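A sketch consistent with the description (placement assumed: the helper the undo path uses when unlinking the fresh skb): whenever an skb leaves the write queue, clear any cached pointer that still references it.

```c
static inline void tcp_check_send_head(struct sock *sk,
				       struct sk_buff *skb_unlinked)
{
	if (sk->sk_send_head == skb_unlinked)
		sk->sk_send_head = NULL;
	if (tcp_sk(sk)->highest_sack == skb_unlinked)
		tcp_sk(sk)->highest_sack = NULL; /* no dangling pointer left */
}
```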
static noinline int search_ioctl(struct inode *inode,
struct btrfs_ioctl_search_key *sk,
size_t *buf_size,
char __user *ubuf)
{
struct btrfs_root *root;
struct btrfs_key key;
struct btrfs_path *path;
struct btrfs_fs_info *info = BTRFS_I(inode)->root->fs_info;
int ret;
int num_found = 0;
unsigned long sk_offset = 0;
if (*buf_size < sizeof(struct btrfs_ioctl_search_header)) {
*buf_size = sizeof(struct btrfs_ioctl_search_header);
return -EOVERFLOW;
}
path = btrfs_alloc_path();
if (!path)
return -ENOMEM;
if (sk->tree_id == 0) {
/* search the root of the inode that was passed */
root = BTRFS_I(inode)->root;
} else {
key.objectid = sk->tree_id;
key.type = BTRFS_ROOT_ITEM_KEY;
key.offset = (u64)-1;
root = btrfs_read_fs_root_no_name(info, &key);
if (IS_ERR(root)) {
btrfs_err(info, "could not find root %llu",
sk->tree_id);
btrfs_free_path(path);
return -ENOENT;
}
}
key.objectid = sk->min_objectid;
key.type = sk->min_type;
key.offset = sk->min_offset;
while (1) {
ret = btrfs_search_forward(root, &key, path, sk->min_transid);
if (ret != 0) {
if (ret > 0)
ret = 0;
goto err;
}
ret = copy_to_sk(root, path, &key, sk, buf_size, ubuf,
&sk_offset, &num_found);
btrfs_release_path(path);
if (ret)
break;
}
if (ret > 0)
ret = 0;
err:
sk->nr_items = num_found;
btrfs_free_path(path);
return ret;
} | 0 | [
"CWE-200"
] | linux | 8039d87d9e473aeb740d4fdbd59b9d2f89b2ced9 | 26,804,953,347,467,755,000,000,000,000,000,000,000 | 63 | Btrfs: fix file corruption and data loss after cloning inline extents
Currently the clone ioctl allows to clone an inline extent from one file
to another that already has other (non-inlined) extents. This is a problem
because btrfs is not designed to deal with files having inline and regular
extents, if a file has an inline extent then it must be the only extent
in the file and must start at file offset 0. Having a file with an inline
extent followed by regular extents results in EIO errors when doing reads
or writes against the first 4K of the file.
Also, the clone ioctl allows one to lose data if the source file consists
of a single inline extent, with a size of N bytes, and the destination
file consists of a single inline extent with a size of M bytes, where we
have M > N. In this case the clone operation removes the inline extent
from the destination file and then copies the inline extent from the
source file into the destination file - we lose the M - N bytes from the
destination file, a read operation will get the value 0x00 for any bytes
in the the range [N, M] (the destination inode's i_size remained as M,
that's why we can read past N bytes).
So fix this by not allowing such destructive operations to happen and
return errno EOPNOTSUPP to user space.
Currently the fstest btrfs/035 tests the data loss case but it totally
ignores this - i.e. expects the operation to succeed and does not check
that we got data loss.
The following test case for fstests exercises all these cases that result
in file corruption and data loss:
seq=`basename $0`
seqres=$RESULT_DIR/$seq
echo "QA output created by $seq"
tmp=/tmp/$$
status=1 # failure is the default!
trap "_cleanup; exit \$status" 0 1 2 3 15
_cleanup()
{
rm -f $tmp.*
}
# get standard environment, filters and checks
. ./common/rc
. ./common/filter
# real QA test starts here
_need_to_be_root
_supported_fs btrfs
_supported_os Linux
_require_scratch
_require_cloner
_require_btrfs_fs_feature "no_holes"
_require_btrfs_mkfs_feature "no-holes"
rm -f $seqres.full
test_cloning_inline_extents()
{
local mkfs_opts=$1
local mount_opts=$2
_scratch_mkfs $mkfs_opts >>$seqres.full 2>&1
_scratch_mount $mount_opts
# File bar, the source for all the following clone operations, consists
# of a single inline extent (50 bytes).
$XFS_IO_PROG -f -c "pwrite -S 0xbb 0 50" $SCRATCH_MNT/bar \
| _filter_xfs_io
# Test cloning into a file with an extent (non-inlined) where the
# destination offset overlaps that extent. It should not be possible to
# clone the inline extent from file bar into this file.
$XFS_IO_PROG -f -c "pwrite -S 0xaa 0K 16K" $SCRATCH_MNT/foo \
| _filter_xfs_io
$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo
# Doing IO against any range in the first 4K of the file should work.
# Due to a past clone ioctl bug which allowed cloning the inline extent,
# these operations resulted in EIO errors.
echo "File foo data after clone operation:"
# All bytes should have the value 0xaa (clone operation failed and did
# not modify our file).
od -t x1 $SCRATCH_MNT/foo
$XFS_IO_PROG -c "pwrite -S 0xcc 0 100" $SCRATCH_MNT/foo | _filter_xfs_io
# Test cloning the inline extent against a file which has a hole in its
# first 4K followed by a non-inlined extent. It should not be possible
# as well to clone the inline extent from file bar into this file.
$XFS_IO_PROG -f -c "pwrite -S 0xdd 4K 12K" $SCRATCH_MNT/foo2 \
| _filter_xfs_io
$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo2
# Doing IO against any range in the first 4K of the file should work.
# Due to a past clone ioctl bug which allowed cloning the inline extent,
# these operations resulted in EIO errors.
echo "File foo2 data after clone operation:"
# All bytes should have the value 0x00 (clone operation failed and did
# not modify our file).
od -t x1 $SCRATCH_MNT/foo2
$XFS_IO_PROG -c "pwrite -S 0xee 0 90" $SCRATCH_MNT/foo2 | _filter_xfs_io
# Test cloning the inline extent against a file which has a size of zero
# but has a prealloc extent. It should not be possible as well to clone
# the inline extent from file bar into this file.
$XFS_IO_PROG -f -c "falloc -k 0 1M" $SCRATCH_MNT/foo3 | _filter_xfs_io
$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo3
# Doing IO against any range in the first 4K of the file should work.
# Due to a past clone ioctl bug which allowed cloning the inline extent,
# these operations resulted in EIO errors.
echo "First 50 bytes of foo3 after clone operation:"
# Should not be able to read any bytes, file has 0 bytes i_size (the
# clone operation failed and did not modify our file).
od -t x1 $SCRATCH_MNT/foo3
$XFS_IO_PROG -c "pwrite -S 0xff 0 90" $SCRATCH_MNT/foo3 | _filter_xfs_io
# Test cloning the inline extent against a file which consists of a
# single inline extent that has a size not greater than the size of
# bar's inline extent (40 < 50).
# It should be possible to do the extent cloning from bar to this file.
$XFS_IO_PROG -f -c "pwrite -S 0x01 0 40" $SCRATCH_MNT/foo4 \
| _filter_xfs_io
$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo4
# Doing IO against any range in the first 4K of the file should work.
echo "File foo4 data after clone operation:"
# Must match file bar's content.
od -t x1 $SCRATCH_MNT/foo4
$XFS_IO_PROG -c "pwrite -S 0x02 0 90" $SCRATCH_MNT/foo4 | _filter_xfs_io
# Test cloning the inline extent against a file which consists of a
# single inline extent that has a size greater than the size of bar's
# inline extent (60 > 50).
# It should not be possible to clone the inline extent from file bar
# into this file.
$XFS_IO_PROG -f -c "pwrite -S 0x03 0 60" $SCRATCH_MNT/foo5 \
| _filter_xfs_io
$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo5
# Reading the file should not fail.
echo "File foo5 data after clone operation:"
# Must have a size of 60 bytes, with all bytes having a value of 0x03
# (the clone operation failed and did not modify our file).
od -t x1 $SCRATCH_MNT/foo5
# Test cloning the inline extent against a file which has no extents but
# has a size greater than bar's inline extent (16K > 50).
# It should not be possible to clone the inline extent from file bar
# into this file.
$XFS_IO_PROG -f -c "truncate 16K" $SCRATCH_MNT/foo6 | _filter_xfs_io
$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo6
# Reading the file should not fail.
echo "File foo6 data after clone operation:"
# Must have a size of 16K, with all bytes having a value of 0x00 (the
# clone operation failed and did not modify our file).
od -t x1 $SCRATCH_MNT/foo6
# Test cloning the inline extent against a file which has no extents but
# has a size not greater than bar's inline extent (30 < 50).
# It should be possible to clone the inline extent from file bar into
# this file.
$XFS_IO_PROG -f -c "truncate 30" $SCRATCH_MNT/foo7 | _filter_xfs_io
$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo7
# Reading the file should not fail.
echo "File foo7 data after clone operation:"
# Must have a size of 50 bytes, with all bytes having a value of 0xbb.
od -t x1 $SCRATCH_MNT/foo7
# Test cloning the inline extent against a file which has a size not
# greater than the size of bar's inline extent (20 < 50) but has
# a prealloc extent that goes beyond the file's size. It should not be
# possible to clone the inline extent from bar into this file.
$XFS_IO_PROG -f -c "falloc -k 0 1M" \
-c "pwrite -S 0x88 0 20" \
$SCRATCH_MNT/foo8 | _filter_xfs_io
$CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo8
echo "File foo8 data after clone operation:"
# Must have a size of 20 bytes, with all bytes having a value of 0x88
# (the clone operation did not modify our file).
od -t x1 $SCRATCH_MNT/foo8
_scratch_unmount
}
echo -e "\nTesting without compression and without the no-holes feature...\n"
test_cloning_inline_extents
echo -e "\nTesting with compression and without the no-holes feature...\n"
test_cloning_inline_extents "" "-o compress"
echo -e "\nTesting without compression and with the no-holes feature...\n"
test_cloning_inline_extents "-O no-holes" ""
echo -e "\nTesting with compression and with the no-holes feature...\n"
test_cloning_inline_extents "-O no-holes" "-o compress"
status=0
exit
Cc: [email protected]
Signed-off-by: Filipe Manana <[email protected]> |
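A hedged, pseudocode-level sketch of the guard the commit describes (the condition names are invented): because btrfs only supports an inline extent that is alone in the file and starts at offset 0, refuse any clone that would break that invariant rather than corrupt the destination.

```c
/* Inside the clone loop, when the source extent is inline: */
if (src_extent_is_inline &&
    (dest_offset != 0 || dest_has_other_extents ||
     src_inline_len < dest_inode_size))
	return -EOPNOTSUPP;	/* refuse destructive inline clones */
```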
std::string HttpIntegrationTest::listenerStatPrefix(const std::string& stat_name) {
if (version_ == Network::Address::IpVersion::v4) {
return "listener.127.0.0.1_0." + stat_name;
}
return "listener.[__1]_0." + stat_name;
} | 0 | [
"CWE-400",
"CWE-703"
] | envoy | afc39bea36fd436e54262f150c009e8d72db5014 | 183,056,808,179,796,200,000,000,000,000,000,000,000 | 6 | Track byteSize of HeaderMap internally.
Introduces a cached byte size updated internally in HeaderMap. The value
is stored as an optional, and is cleared whenever a non-const pointer or
reference to a HeaderEntry is accessed. The cached value can be set with
refreshByteSize() which performs an iteration over the HeaderMap to sum
the size of each key and value in the HeaderMap.
Signed-off-by: Asra Ali <[email protected]> |
int fib6_del(struct rt6_info *rt, struct nl_info *info)
{
struct net *net = info->nl_net;
struct fib6_node *fn = rt->rt6i_node;
struct rt6_info **rtp;
#if RT6_DEBUG >= 2
if (rt->dst.obsolete>0) {
WARN_ON(fn != NULL);
return -ENOENT;
}
#endif
if (!fn || rt == net->ipv6.ip6_null_entry)
return -ENOENT;
WARN_ON(!(fn->fn_flags & RTN_RTINFO));
if (!(rt->rt6i_flags & RTF_CACHE)) {
struct fib6_node *pn = fn;
#ifdef CONFIG_IPV6_SUBTREES
/* clones of this route might be in another subtree */
if (rt->rt6i_src.plen) {
while (!(pn->fn_flags & RTN_ROOT))
pn = pn->parent;
pn = pn->parent;
}
#endif
fib6_prune_clones(info->nl_net, pn, rt);
}
/*
* Walk the leaf entries looking for ourself
*/
for (rtp = &fn->leaf; *rtp; rtp = &(*rtp)->dst.rt6_next) {
if (*rtp == rt) {
fib6_del_route(fn, rtp, info);
return 0;
}
}
return -ENOENT;
} | 0 | [
"CWE-399"
] | linux | 307f2fb95e9b96b3577916e73d92e104f8f26494 | 81,195,022,790,075,810,000,000,000,000,000,000,000 | 42 | ipv6: only static routes qualify for equal cost multipathing
Static routes in this case are non-expiring routes which did not get
configured by autoconf or by icmpv6 redirects.
To make sure we actually get an ecmp route while searching for the first
one in this fib6_node's leafs, also make sure it matches the ecmp route
assumptions.
v2:
a) Removed RTF_EXPIRE check in dst.from chain. The check of RTF_ADDRCONF
already ensures that this route, even if added again without
RTF_EXPIRES (in case of a RA announcement with infinite timeout),
does not cause the rt6i_nsiblings logic to go wrong if a later RA
updates the expiration time later.
v3:
a) Allow RTF_EXPIRES routes to enter the ecmp route set. We have to do so,
because an pmtu event could update the RTF_EXPIRES flag and we would
not count this route, if another route joins this set. We now filter
only for RTF_GATEWAY|RTF_ADDRCONF|RTF_DYNAMIC, which are flags that
don't get changed after rt6_info construction.
Cc: Nicolas Dichtel <[email protected]>
Signed-off-by: Hannes Frederic Sowa <[email protected]>
Signed-off-by: David S. Miller <[email protected]> |
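A sketch of the qualifying predicate described in v3 of the message (the helper name follows later kernels but is hedged here): only non-expiring, statically configured gateway routes join an ECMP set.

```c
static bool rt6_qualify_for_ecmp(const struct rt6_info *rt)
{
	/* RTF_GATEWAY/RTF_ADDRCONF/RTF_DYNAMIC don't change after construction */
	return (rt->rt6i_flags & (RTF_GATEWAY | RTF_ADDRCONF | RTF_DYNAMIC)) ==
	       RTF_GATEWAY;
}
```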
void __init trace_init(void)
{
trace_event_init();
} | 0 | [
"CWE-415"
] | linux | 4397f04575c44e1440ec2e49b6302785c95fd2f8 | 253,365,684,591,997,220,000,000,000,000,000,000,000 | 4 | tracing: Fix possible double free on failure of allocating trace buffer
Jing Xia and Chunyan Zhang reported that on failing to allocate part of the
tracing buffer, memory is freed, but the pointers that point to them are not
initialized back to NULL, and later paths may try to free the freed memory
again. Jing and Chunyan fixed one of the locations that does this, but
missed a spot.
Link: http://lkml.kernel.org/r/[email protected]
Cc: [email protected]
Fixes: 737223fbca3b1 ("tracing: Consolidate buffer allocation code")
Reported-by: Jing Xia <[email protected]>
Reported-by: Chunyan Zhang <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]> |
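The underlying pattern is simple; a hedged sketch of the free-and-reset rule the fix enforces (names are hypothetical, not the actual trace.c symbols):
#include <stdlib.h>
struct buf_pair { void *pages; void *meta; };
/* On a partial allocation failure, free what was allocated AND reset the
 * pointers, so any later cleanup path cannot free the same memory twice. */
static void buf_pair_teardown(struct buf_pair *b)
{
	free(b->pages);
	b->pages = NULL;
	free(b->meta);
	b->meta = NULL;
}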
static int ct_rewind(struct media_player *mp, void *user_data)
{
struct avrcp_player *player = user_data;
return ct_hold(player, AVC_REWIND);
} | 0 | [
"CWE-200"
] | bluez | e2b0f0d8d63e1223bb714a9efb37e2257818268b | 125,469,163,235,459,980,000,000,000,000,000,000,000 | 6 | avrcp: Fix not checking if params_len match number of received bytes
This makes sure the number of bytes in the params_len matches the
remaining bytes received so the code doesn't end up accessing invalid
memory. |
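A minimal sketch of the length check the fix adds (a hypothetical helper, not the actual profiles/audio/avrcp.c code):
#include <stdint.h>
#include <stddef.h>
/* Reject a PDU whose declared parameter length disagrees with the number
 * of bytes actually received, so later parsing cannot run off the buffer. */
static int avrcp_params_len_ok(uint16_t params_len, size_t bytes_remaining)
{
	return (size_t)params_len == bytes_remaining;
}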
/* Return non-zero when operator "op" works on whole lines (OPF_LINES). */
op_on_lines(int op)
{
return opchars[op][2] & OPF_LINES;
} | 0 | [
"CWE-416",
"CWE-787"
] | vim | 9f8c304c8a390ade133bac29963dc8e56ab14cbc | 174,336,989,655,479,230,000,000,000,000,000,000,000 | 4 | patch 8.2.4120: block insert goes over the end of the line
Problem: Block insert goes over the end of the line.
Solution: Handle invalid byte better. Fix inserting the wrong text. |
int valid_phys_addr_range(unsigned long addr, size_t size)
{
if (addr < PHYS_OFFSET)
return 0;
if (addr + size > __pa(high_memory - 1) + 1)
return 0;
return 1;
} | 0 | [] | linux | 1d18c47c735e8adfe531fc41fae31e98f86b68fe | 171,403,487,503,070,370,000,000,000,000,000,000,000 | 9 | arm64: MMU fault handling and page table management
This patch adds support for the handling of the MMU faults (exception
entry code introduced by a previous patch) and page table management.
The user translation table is pointed to by TTBR0 and the kernel one
(swapper_pg_dir) by TTBR1. There is no translation information shared or
address space overlapping between user and kernel page tables.
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Acked-by: Tony Lindgren <[email protected]>
Acked-by: Nicolas Pitre <[email protected]>
Acked-by: Olof Johansson <[email protected]>
Acked-by: Santosh Shilimkar <[email protected]>
Acked-by: Arnd Bergmann <[email protected]> |
CtPtr ProtocolV1::handle_message_footer(char *buffer, int r) {
ldout(cct, 20) << __func__ << " r=" << r << dendl;
if (r < 0) {
ldout(cct, 1) << __func__ << " read footer data error " << dendl;
return _fault();
}
ceph_msg_footer footer;
ceph_msg_footer_old old_footer;
if (connection->has_feature(CEPH_FEATURE_MSG_AUTH)) {
footer = *((ceph_msg_footer *)buffer);
} else {
old_footer = *((ceph_msg_footer_old *)buffer);
footer.front_crc = old_footer.front_crc;
footer.middle_crc = old_footer.middle_crc;
footer.data_crc = old_footer.data_crc;
footer.sig = 0;
footer.flags = old_footer.flags;
}
int aborted = (footer.flags & CEPH_MSG_FOOTER_COMPLETE) == 0;
ldout(cct, 10) << __func__ << " aborted = " << aborted << dendl;
if (aborted) {
ldout(cct, 0) << __func__ << " got " << front.length() << " + "
<< middle.length() << " + " << data.length()
<< " byte message.. ABORTED" << dendl;
return _fault();
}
ldout(cct, 20) << __func__ << " got " << front.length() << " + "
<< middle.length() << " + " << data.length() << " byte message"
<< dendl;
Message *message = decode_message(cct, messenger->crcflags, current_header,
footer, front, middle, data, connection);
if (!message) {
ldout(cct, 1) << __func__ << " decode message failed " << dendl;
return _fault();
}
//
// Check the signature if one should be present. A zero return indicates
// success. PLR
//
if (session_security.get() == NULL) {
ldout(cct, 10) << __func__ << " no session security set" << dendl;
} else {
if (session_security->check_message_signature(message)) {
ldout(cct, 0) << __func__ << " Signature check failed" << dendl;
message->put();
return _fault();
}
}
message->set_byte_throttler(connection->policy.throttler_bytes);
message->set_message_throttler(connection->policy.throttler_messages);
// store reservation size in message, so we don't get confused
// by messages entering the dispatch queue through other paths.
message->set_dispatch_throttle_size(cur_msg_size);
message->set_recv_stamp(recv_stamp);
message->set_throttle_stamp(throttle_stamp);
message->set_recv_complete_stamp(ceph_clock_now());
// check received seq#. if it is old, drop the message.
// note that incoming messages may skip ahead. this is convenient for the
// client side queueing because messages can't be renumbered, but the (kernel)
// client will occasionally pull a message out of the sent queue to send
// elsewhere. in that case it doesn't matter if we "got" it or not.
uint64_t cur_seq = in_seq;
if (message->get_seq() <= cur_seq) {
ldout(cct, 0) << __func__ << " got old message " << message->get_seq()
<< " <= " << cur_seq << " " << message << " " << *message
<< ", discarding" << dendl;
message->put();
if (connection->has_feature(CEPH_FEATURE_RECONNECT_SEQ) &&
cct->_conf->ms_die_on_old_message) {
ceph_assert(0 == "old msgs despite reconnect_seq feature");
}
return nullptr;
}
if (message->get_seq() > cur_seq + 1) {
ldout(cct, 0) << __func__ << " missed message? skipped from seq "
<< cur_seq << " to " << message->get_seq() << dendl;
if (cct->_conf->ms_die_on_skipped_message) {
ceph_assert(0 == "skipped incoming seq");
}
}
#if defined(WITH_EVENTTRACE)
if (message->get_type() == CEPH_MSG_OSD_OP ||
message->get_type() == CEPH_MSG_OSD_OPREPLY) {
utime_t ltt_processed_stamp = ceph_clock_now();
double usecs_elapsed =
((double)(ltt_processed_stamp.to_nsec() - recv_stamp.to_nsec())) / 1000;
ostringstream buf;
if (message->get_type() == CEPH_MSG_OSD_OP)
OID_ELAPSED_WITH_MSG(message, usecs_elapsed, "TIME_TO_DECODE_OSD_OP",
false);
else
OID_ELAPSED_WITH_MSG(message, usecs_elapsed, "TIME_TO_DECODE_OSD_OPREPLY",
false);
}
#endif
// note last received message.
in_seq = message->get_seq();
ldout(cct, 5) << " rx " << message->get_source() << " seq "
<< message->get_seq() << " " << message << " " << *message
<< dendl;
bool need_dispatch_writer = false;
if (!connection->policy.lossy) {
ack_left++;
need_dispatch_writer = true;
}
state = OPENED;
ceph::mono_time fast_dispatch_time;
if (connection->is_blackhole()) {
ldout(cct, 10) << __func__ << " blackhole " << *message << dendl;
message->put();
goto out;
}
connection->logger->inc(l_msgr_recv_messages);
connection->logger->inc(
l_msgr_recv_bytes,
cur_msg_size + sizeof(ceph_msg_header) + sizeof(ceph_msg_footer));
messenger->ms_fast_preprocess(message);
fast_dispatch_time = ceph::mono_clock::now();
connection->logger->tinc(l_msgr_running_recv_time,
fast_dispatch_time - connection->recv_start_time);
if (connection->delay_state) {
double delay_period = 0;
if (rand() % 10000 < cct->_conf->ms_inject_delay_probability * 10000.0) {
delay_period =
cct->_conf->ms_inject_delay_max * (double)(rand() % 10000) / 10000.0;
ldout(cct, 1) << "queue_received will delay after "
<< (ceph_clock_now() + delay_period) << " on " << message
<< " " << *message << dendl;
}
connection->delay_state->queue(delay_period, message);
} else if (messenger->ms_can_fast_dispatch(message)) {
connection->lock.unlock();
connection->dispatch_queue->fast_dispatch(message);
connection->recv_start_time = ceph::mono_clock::now();
connection->logger->tinc(l_msgr_running_fast_dispatch_time,
connection->recv_start_time - fast_dispatch_time);
connection->lock.lock();
} else {
connection->dispatch_queue->enqueue(message, message->get_priority(),
connection->conn_id);
}
out:
// clean up local buffer references
data_buf.clear();
front.clear();
middle.clear();
data.clear();
if (need_dispatch_writer && connection->is_connected()) {
connection->center->dispatch_event_external(connection->write_handler);
}
return CONTINUE(wait_message);
} | 0 | [
"CWE-294"
] | ceph | 6c14c2fb5650426285428dfe6ca1597e5ea1d07d | 183,352,044,799,684,560,000,000,000,000,000,000,000 | 173 | mon/MonClient: bring back CEPHX_V2 authorizer challenges
Commit c58c5754dfd2 ("msg/async/ProtocolV1: use AuthServer and
AuthClient") introduced a backwards compatibility issue into msgr1.
To fix it, commit 321548010578 ("mon/MonClient: skip CEPHX_V2
challenge if client doesn't support it") set out to skip authorizer
challenges for peers that don't support CEPHX_V2. However, it
made it so that authorizer challenges are skipped for all peers in
both msgr1 and msgr2 cases, effectively disabling the protection
against replay attacks that was put in place in commit f80b848d3f83
("auth/cephx: add authorizer challenge", CVE-2018-1128).
This is because con->get_features() always returns 0 at that
point. In msgr1 case, the peer shares its features along with the
authorizer, but while they are available in connect_msg.features they
aren't assigned to con until ProtocolV1::open(). In msgr2 case, the
peer doesn't share its features until much later (in CLIENT_IDENT
frame, i.e. after the authentication phase). The result is that
!CEPHX_V2 branch is taken in all cases and replay attack protection
is lost.
Only clusters with cephx_service_require_version set to 2 on the
service daemons would not be silently downgraded. But, since the
default is 1 and there are no reports of looping on BADAUTHORIZER
faults, I'm pretty sure that no one has ever done that. Note that
cephx_require_version set to 2 would have no effect even though it
is supposed to be stronger than cephx_service_require_version
because MonClient::handle_auth_request() didn't check it.
To fix:
- for msgr1, check connect_msg.features (as was done before commit
c58c5754dfd2) and challenge if CEPHX_V2 is supported. Together
with two preceding patches that resurrect proper cephx_* option
handling in msgr1, this covers both "I want old clients to work"
and "I wish to require better authentication" use cases.
- for msgr2, don't check anything and always challenge. CEPHX_V2
predates msgr2, anyone speaking msgr2 must support it.
Signed-off-by: Ilya Dryomov <[email protected]>
(cherry picked from commit 4a82c72e3bdddcb625933e83af8b50a444b961f1) |
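The challenge decision described above can be sketched as follows; the feature-bit value is illustrative, not Ceph's actual constant:
#include <stdint.h>
#define CEPH_FEATURE_CEPHX_V2 (1ULL << 61)	/* illustrative bit */
/* msgr1: challenge only peers advertising CEPHX_V2 (features arrive with
 * the authorizer). msgr2: always challenge, since CEPHX_V2 predates msgr2. */
static int should_challenge(int is_msgr2, uint64_t peer_features)
{
	if (is_msgr2)
		return 1;
	return (peer_features & CEPH_FEATURE_CEPHX_V2) != 0;
}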
mono_loader_lock_track_ownership (gboolean track)
{
loader_lock_track_ownership = track;
} | 0 | [] | mono | 8e890a3bf80a4620e417814dc14886b1bbd17625 | 39,789,779,075,617,055,000,000,000,000,000,000,000 | 4 | Search for dllimported shared libs in the base directory, not cwd.
* loader.c: we don't search the current directory anymore for shared
libraries referenced in DllImport attributes, as it has a slight
security risk. We search in the same directory where the referencing
image was loaded from, instead. Fixes bug# 641915. |
GF_Err tsel_box_size(GF_Box *s)
{
GF_TrackSelectionBox *ptr = (GF_TrackSelectionBox *) s;
	/* 4 bytes for the switch group plus 4 bytes per attribute entry */
	ptr->size += 4 + (4*ptr->attributeListCount);
return GF_OK;
} | 0 | [
"CWE-476"
] | gpac | d527325a9b72218612455a534a508f9e1753f76e | 333,168,642,479,944,520,000,000,000,000,000,000,000 | 6 | fixed #1768 |
xmlElemDump(FILE * f, xmlDocPtr doc, xmlNodePtr cur)
{
xmlOutputBufferPtr outbuf;
xmlInitParser();
if (cur == NULL) {
#ifdef DEBUG_TREE
xmlGenericError(xmlGenericErrorContext,
"xmlElemDump : cur == NULL\n");
#endif
return;
}
#ifdef DEBUG_TREE
if (doc == NULL) {
xmlGenericError(xmlGenericErrorContext,
"xmlElemDump : doc == NULL\n");
}
#endif
outbuf = xmlOutputBufferCreateFile(f, NULL);
if (outbuf == NULL)
return;
if ((doc != NULL) && (doc->type == XML_HTML_DOCUMENT_NODE)) {
#ifdef LIBXML_HTML_ENABLED
htmlNodeDumpOutput(outbuf, doc, cur, NULL);
#else
xmlSaveErr(XML_ERR_INTERNAL_ERROR, cur, "HTML support not compiled in\n");
#endif /* LIBXML_HTML_ENABLED */
} else
xmlNodeDumpOutput(outbuf, doc, cur, 0, 1, NULL);
xmlOutputBufferClose(outbuf);
} | 0 | [
"CWE-502"
] | libxml2 | c97750d11bb8b6f3303e7131fe526a61ac65bcfd | 178,039,943,965,938,260,000,000,000,000,000,000,000 | 33 | Avoid an out of bound access when serializing malformed strings
For https://bugzilla.gnome.org/show_bug.cgi?id=766414
* xmlsave.c: xmlBufAttrSerializeTxtContent() if an attribute value
is not UTF-8 be more careful when serializing it as we may do an
out of bound access as a result. |
static __always_inline int do_insn_fetch_bytes(struct x86_emulate_ctxt *ctxt,
unsigned size)
{
unsigned done_size = ctxt->fetch.end - ctxt->fetch.ptr;
if (unlikely(done_size < size))
return __do_insn_fetch_bytes(ctxt, size - done_size);
else
return X86EMUL_CONTINUE;
} | 0 | [
"CWE-399"
] | kvm | 13e457e0eebf0a0c82c38ceb890d93eb826d62a6 | 242,738,953,655,044,330,000,000,000,000,000,000,000 | 10 | KVM: x86: Emulator does not decode clflush well
Currently, all group15 instructions are decoded as clflush (e.g., mfence,
xsave). In addition, the clflush instruction requires no prefix (66/f2/f3)
would exist. If prefix exists it may encode a different instruction (e.g.,
clflushopt).
Creating a group for clflush, and different group for each prefix.
This has been the case forever, but the next patch needs the cflush group
in order to fix a bug introduced in 3.17.
Fixes: 41061cdb98a0bec464278b4db8e894a3121671f5
Cc: [email protected]
Signed-off-by: Nadav Amit <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]> |
get_uncompressed_data(struct archive_read *a, const void **buff, size_t size,
size_t minimum)
{
struct _7zip *zip = (struct _7zip *)a->format->data;
ssize_t bytes_avail;
if (zip->codec == _7Z_COPY && zip->codec2 == (unsigned long)-1) {
/* Copy mode. */
/*
* Note: '1' here is a performance optimization.
* Recall that the decompression layer returns a count of
* available bytes; asking for more than that forces the
* decompressor to combine reads by copying data.
*/
*buff = __archive_read_ahead(a, 1, &bytes_avail);
if (bytes_avail <= 0) {
archive_set_error(&a->archive,
ARCHIVE_ERRNO_FILE_FORMAT,
"Truncated 7-Zip file data");
return (ARCHIVE_FATAL);
}
if ((size_t)bytes_avail >
zip->uncompressed_buffer_bytes_remaining)
bytes_avail = (ssize_t)
zip->uncompressed_buffer_bytes_remaining;
if ((size_t)bytes_avail > size)
bytes_avail = (ssize_t)size;
zip->pack_stream_bytes_unconsumed = bytes_avail;
} else if (zip->uncompressed_buffer_pointer == NULL) {
/* Decompression has failed. */
archive_set_error(&(a->archive),
ARCHIVE_ERRNO_MISC, "Damaged 7-Zip archive");
return (ARCHIVE_FATAL);
} else {
/* Packed mode. */
if (minimum > zip->uncompressed_buffer_bytes_remaining) {
/*
* If remaining uncompressed data size is less than
* the minimum size, fill the buffer up to the
* minimum size.
*/
if (extract_pack_stream(a, minimum) < 0)
return (ARCHIVE_FATAL);
}
if (size > zip->uncompressed_buffer_bytes_remaining)
bytes_avail = (ssize_t)
zip->uncompressed_buffer_bytes_remaining;
else
bytes_avail = (ssize_t)size;
*buff = zip->uncompressed_buffer_pointer;
zip->uncompressed_buffer_pointer += bytes_avail;
}
zip->uncompressed_buffer_bytes_remaining -= bytes_avail;
return (bytes_avail);
} | 1 | [
"CWE-125"
] | libarchive | 65a23f5dbee4497064e9bb467f81138a62b0dae1 | 31,051,076,667,023,103,000,000,000,000,000,000,000 | 57 | 7zip: fix crash when parsing certain archives
Fuzzing with CRCs disabled revealed that a call to get_uncompressed_data()
would sometimes fail to return at least 'minimum' bytes. This can cause
the crc32() invocation in header_bytes to read off into invalid memory.
A specially crafted archive can use this to cause a crash.
An ASAN trace is below, but ASAN is not required - an uninstrumented
binary will also crash.
==7719==ERROR: AddressSanitizer: SEGV on unknown address 0x631000040000 (pc 0x7fbdb3b3ec1d bp 0x7ffe77a51310 sp 0x7ffe77a51150 T0)
==7719==The signal is caused by a READ memory access.
#0 0x7fbdb3b3ec1c in crc32_z (/lib/x86_64-linux-gnu/libz.so.1+0x2c1c)
#1 0x84f5eb in header_bytes (/tmp/libarchive/bsdtar+0x84f5eb)
#2 0x856156 in read_Header (/tmp/libarchive/bsdtar+0x856156)
#3 0x84e134 in slurp_central_directory (/tmp/libarchive/bsdtar+0x84e134)
#4 0x849690 in archive_read_format_7zip_read_header (/tmp/libarchive/bsdtar+0x849690)
#5 0x5713b7 in _archive_read_next_header2 (/tmp/libarchive/bsdtar+0x5713b7)
#6 0x570e63 in _archive_read_next_header (/tmp/libarchive/bsdtar+0x570e63)
#7 0x6f08bd in archive_read_next_header (/tmp/libarchive/bsdtar+0x6f08bd)
#8 0x52373f in read_archive (/tmp/libarchive/bsdtar+0x52373f)
#9 0x5257be in tar_mode_x (/tmp/libarchive/bsdtar+0x5257be)
#10 0x51daeb in main (/tmp/libarchive/bsdtar+0x51daeb)
#11 0x7fbdb27cab96 in __libc_start_main /build/glibc-OTsEL5/glibc-2.27/csu/../csu/libc-start.c:310
#12 0x41dd09 in _start (/tmp/libarchive/bsdtar+0x41dd09)
This was primarly done with afl and FairFuzz. Some early corpus entries
may have been generated by qsym. |
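Per the commit message, the copy-mode branch above passed 1 to __archive_read_ahead() and could hand back fewer than 'minimum' bytes; the essence of the fix is a sketch like this (a generic helper, not the exact upstream hunk):
#include <sys/types.h>
#include <stddef.h>
/* A read that must yield at least 'minimum' bytes has to fail loudly
 * (ARCHIVE_FATAL upstream) instead of returning a shorter buffer that
 * callers such as header_bytes()/crc32() assume is fully populated. */
static ssize_t require_minimum(ssize_t bytes_avail, size_t minimum)
{
	if (bytes_avail < (ssize_t)minimum)
		return -1;
	return bytes_avail;
}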
static struct ewah_bitmap *read_bitmap_1(struct bitmap_index *index)
{
struct ewah_bitmap *b = ewah_pool_new();
int bitmap_size = ewah_read_mmap(b,
index->map + index->map_pos,
index->map_size - index->map_pos);
if (bitmap_size < 0) {
error("Failed to load bitmap index (corrupted?)");
ewah_pool_free(b);
return NULL;
}
index->map_pos += bitmap_size;
return b;
} | 0 | [
"CWE-119",
"CWE-787"
] | git | de1e67d0703894cb6ea782e36abb63976ab07e60 | 26,631,550,437,634,030,000,000,000,000,000,000,000 | 17 | list-objects: pass full pathname to callbacks
When we find a blob at "a/b/c", we currently pass this to
our show_object_fn callbacks as two components: "a/b/" and
"c". Callbacks which want the full value then call
path_name(), which concatenates the two. But this is an
inefficient interface; the path is a strbuf, and we could
simply append "c" to it temporarily, then roll back the
length, without creating a new copy.
So we could improve this by teaching the callsites of
path_name() this trick (and there are only 3). But we can
also notice that no callback actually cares about the
broken-down representation, and simply pass each callback
the full path "a/b/c" as a string. The callback code becomes
even simpler, then, as we do not have to worry about freeing
an allocated buffer, nor rolling back our modification to
the strbuf.
This is theoretically less efficient, as some callbacks
would not bother to format the final path component. But in
practice this is not measurable. Since we use the same
strbuf over and over, our work to grow it is amortized, and
we really only pay to memcpy a few bytes.
Signed-off-by: Jeff King <[email protected]>
Signed-off-by: Junio C Hamano <[email protected]> |
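The append-then-rollback trick described above, sketched in C with a fixed-size buffer (the real strbuf grows dynamically; names are illustrative):
#include <string.h>
struct path_buf { char buf[4096]; size_t len; };
/* Append one component, hand the full path to the callback, then roll
 * the length back: no allocation and no copy of the shared prefix. */
static void show_with_component(struct path_buf *path, const char *name,
				void (*show)(const char *full_path))
{
	size_t saved = path->len;
	size_t n = strlen(name);
	if (path->len + n + 1 > sizeof(path->buf))
		return;			/* sketch only: a real strbuf grows */
	memcpy(path->buf + path->len, name, n);
	path->len += n;
	path->buf[path->len] = '\0';
	show(path->buf);
	path->len = saved;		/* amortized: reuse the same buffer */
	path->buf[path->len] = '\0';
}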
void* Type_U16Fixed16_Dup(struct _cms_typehandler_struct* self, const void *Ptr, cmsUInt32Number n)
{
return _cmsDupMem(self ->ContextID, Ptr, n * sizeof(cmsFloat64Number));
} | 0 | [] | Little-CMS | 41d222df1bc6188131a8f46c32eab0a4d4cdf1b6 | 181,022,965,290,059,150,000,000,000,000,000,000,000 | 4 | Memory squeezing fix: lcms2 cmsPipeline construction
When creating a new pipeline, lcms would often try to allocate a stage
and pass it to cmsPipelineInsertStage without checking whether the
allocation succeeded. cmsPipelineInsertStage would then assert (or crash)
if it had not.
The fix here is to change cmsPipelineInsertStage to check and return
an error value. All calling code is then checked to test this return
value and cope. |
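After this change every caller must check the insertion result; a sketch against lcms2's public API (assuming the post-fix int return of cmsPipelineInsertStage):
#include "lcms2.h"
/* Check both the stage allocation and the insertion; on insertion
 * failure the stage is not owned by the pipeline and must be freed. */
static int add_identity_stage(cmsContext ctx, cmsPipeline *lut,
			      cmsUInt32Number channels)
{
	cmsStage *stage = cmsStageAllocIdentity(ctx, channels);
	if (stage == NULL)
		return 0;
	if (!cmsPipelineInsertStage(lut, cmsAT_END, stage)) {
		cmsStageFree(stage);
		return 0;
	}
	return 1;
}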
struct sock *ax25_make_new(struct sock *osk, struct ax25_dev *ax25_dev)
{
struct sock *sk;
ax25_cb *ax25, *oax25;
sk = sk_alloc(sock_net(osk), PF_AX25, GFP_ATOMIC, osk->sk_prot);
if (sk == NULL)
return NULL;
if ((ax25 = ax25_create_cb()) == NULL) {
sk_free(sk);
return NULL;
}
switch (osk->sk_type) {
case SOCK_DGRAM:
break;
case SOCK_SEQPACKET:
break;
default:
sk_free(sk);
ax25_cb_put(ax25);
return NULL;
}
sock_init_data(NULL, sk);
sk->sk_type = osk->sk_type;
sk->sk_priority = osk->sk_priority;
sk->sk_protocol = osk->sk_protocol;
sk->sk_rcvbuf = osk->sk_rcvbuf;
sk->sk_sndbuf = osk->sk_sndbuf;
sk->sk_state = TCP_ESTABLISHED;
sock_copy_flags(sk, osk);
oax25 = ax25_sk(osk);
ax25->modulus = oax25->modulus;
ax25->backoff = oax25->backoff;
ax25->pidincl = oax25->pidincl;
ax25->iamdigi = oax25->iamdigi;
ax25->rtt = oax25->rtt;
ax25->t1 = oax25->t1;
ax25->t2 = oax25->t2;
ax25->t3 = oax25->t3;
ax25->n2 = oax25->n2;
ax25->idle = oax25->idle;
ax25->paclen = oax25->paclen;
ax25->window = oax25->window;
ax25->ax25_dev = ax25_dev;
ax25->source_addr = oax25->source_addr;
if (oax25->digipeat != NULL) {
ax25->digipeat = kmemdup(oax25->digipeat, sizeof(ax25_digi),
GFP_ATOMIC);
if (ax25->digipeat == NULL) {
sk_free(sk);
ax25_cb_put(ax25);
return NULL;
}
}
sk->sk_protinfo = ax25;
sk->sk_destruct = ax25_free_sock;
ax25->sk = sk;
return sk;
} | 0 | [
"CWE-20",
"CWE-269"
] | linux | f3d3342602f8bcbf37d7c46641cb9bca7618eb1c | 221,447,642,808,834,000,000,000,000,000,000,000,000 | 69 | net: rework recvmsg handler msg_name and msg_namelen logic
This patch now always passes msg->msg_namelen as 0. recvmsg handlers must
set msg_namelen to the proper size <= sizeof(struct sockaddr_storage)
to return msg_name to the user.
This prevents numerous uninitialized memory leaks we had in the
recvmsg handlers and makes it harder for new code to accidentally leak
uninitialized memory.
Optimize for the case recvfrom is called with NULL as address. We don't
need to copy the address at all, so set it to NULL before invoking the
recvmsg handler. We can do so, because all the recvmsg handlers must
cope with the case a plain read() is called on them. read() also sets
msg_name to NULL.
Also document these changes in include/linux/net.h as suggested by David
Miller.
Changes since RFC:
Set msg->msg_name = NULL if user specified a NULL in msg_name but had a
non-null msg_namelen in verify_iovec/verify_compat_iovec. This doesn't
affect sendto as it would bail out earlier while trying to copy-in the
address. It also more naturally reflects the logic by the callers of
verify_iovec.
With this change in place I could remove "
if (!uaddr || msg_sys->msg_namelen == 0)
msg->msg_name = NULL
".
This change does not alter the user visible error logic as we ignore
msg_namelen as long as msg_name is NULL.
Also remove two unnecessary curly brackets in ___sys_recvmsg and change
comments to netdev style.
Cc: David Miller <[email protected]>
Suggested-by: Eric Dumazet <[email protected]>
Signed-off-by: Hannes Frederic Sowa <[email protected]>
Signed-off-by: David S. Miller <[email protected]> |
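The contract the rework imposes on recvmsg handlers can be sketched like this (userspace-style C for illustration; the kernel variant works on struct msghdr the same way):
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
/* msg_namelen now arrives as 0; a handler that wants to report a source
 * address must fill msg_name (when non-NULL) and only then set
 * msg_namelen, so uninitialized bytes can never leak to userspace. */
static void fill_msg_name(struct msghdr *msg, const struct sockaddr_in6 *src)
{
	if (msg->msg_name == NULL)
		return;			/* plain read(): nothing to report */
	memcpy(msg->msg_name, src, sizeof(*src));
	msg->msg_namelen = sizeof(*src);
}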
int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
{
unsigned int i, head, max;
hwaddr desc_pa = vq->vring.desc;
if (!virtqueue_num_heads(vq, vq->last_avail_idx))
return 0;
/* When we start there are none of either input nor output. */
elem->out_num = elem->in_num = 0;
max = vq->vring.num;
i = head = virtqueue_get_head(vq, vq->last_avail_idx++);
if (vq->vdev->guest_features & (1 << VIRTIO_RING_F_EVENT_IDX)) {
vring_avail_event(vq, vring_avail_idx(vq));
}
if (vring_desc_flags(desc_pa, i) & VRING_DESC_F_INDIRECT) {
if (vring_desc_len(desc_pa, i) % sizeof(VRingDesc)) {
error_report("Invalid size for indirect buffer table");
exit(1);
}
/* loop over the indirect descriptor table */
max = vring_desc_len(desc_pa, i) / sizeof(VRingDesc);
desc_pa = vring_desc_addr(desc_pa, i);
i = 0;
}
/* Collect all the descriptors */
do {
struct iovec *sg;
if (vring_desc_flags(desc_pa, i) & VRING_DESC_F_WRITE) {
if (elem->in_num >= ARRAY_SIZE(elem->in_sg)) {
error_report("Too many write descriptors in indirect table");
exit(1);
}
elem->in_addr[elem->in_num] = vring_desc_addr(desc_pa, i);
sg = &elem->in_sg[elem->in_num++];
} else {
if (elem->out_num >= ARRAY_SIZE(elem->out_sg)) {
error_report("Too many read descriptors in indirect table");
exit(1);
}
elem->out_addr[elem->out_num] = vring_desc_addr(desc_pa, i);
sg = &elem->out_sg[elem->out_num++];
}
sg->iov_len = vring_desc_len(desc_pa, i);
/* If we've got too many, that implies a descriptor loop. */
if ((elem->in_num + elem->out_num) > max) {
error_report("Looped descriptor");
exit(1);
}
} while ((i = virtqueue_next_desc(desc_pa, i, max)) != max);
/* Now map what we have collected */
virtqueue_map_sg(elem->in_sg, elem->in_addr, elem->in_num, 1);
virtqueue_map_sg(elem->out_sg, elem->out_addr, elem->out_num, 0);
elem->index = head;
vq->inuse++;
trace_virtqueue_pop(vq, elem, elem->in_num, elem->out_num);
return elem->in_num + elem->out_num;
} | 0 | [
"CWE-94"
] | qemu | cc45995294b92d95319b4782750a3580cabdbc0c | 218,538,161,143,472,700,000,000,000,000,000,000,000 | 70 | virtio: out-of-bounds buffer write on invalid state load
CVE-2013-4151 QEMU 1.0 out-of-bounds buffer write in
virtio_load@hw/virtio/virtio.c
So we have this code since way back when:
num = qemu_get_be32(f);
for (i = 0; i < num; i++) {
vdev->vq[i].vring.num = qemu_get_be32(f);
array of vqs has size VIRTIO_PCI_QUEUE_MAX, so
on invalid input this will write beyond end of buffer.
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Michael Roth <[email protected]>
Signed-off-by: Juan Quintela <[email protected]> |
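The fix pattern is a bounds check on the loaded count before it indexes the fixed-size vq array; a hedged sketch (the macro value is illustrative):
#include <stdint.h>
#define VQ_ARRAY_MAX 64		/* stands in for VIRTIO_PCI_QUEUE_MAX */
/* Validate a count read from the migration stream before using it to
 * index a fixed-size array; reject corrupted or malicious saved state. */
static int load_num_queues(uint32_t num_from_stream, uint32_t *num_out)
{
	if (num_from_stream > VQ_ARRAY_MAX)
		return -1;
	*num_out = num_from_stream;
	return 0;
}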
nautilus_file_operations_delete (GList *files,
GtkWindow *parent_window,
NautilusDeleteCallback done_callback,
gpointer done_callback_data)
{
trash_or_delete_internal (files, parent_window,
FALSE,
done_callback, done_callback_data);
} | 0 | [] | nautilus | ca2fd475297946f163c32dcea897f25da892b89d | 12,279,326,107,511,298,000,000,000,000,000,000,000 | 9 | Add nautilus_file_mark_desktop_file_trusted(), this now adds a #! line if
2009-02-24 Alexander Larsson <[email protected]>
* libnautilus-private/nautilus-file-operations.c:
* libnautilus-private/nautilus-file-operations.h:
Add nautilus_file_mark_desktop_file_trusted(), this now
adds a #! line if there is none as well as makes the file
executable.
* libnautilus-private/nautilus-mime-actions.c:
Use nautilus_file_mark_desktop_file_trusted() instead of
just setting the permissions.
svn path=/trunk/; revision=15006 |
pop_showcmd(void)
{
if (!p_sc)
return;
STRCPY(showcmd_buf, old_showcmd_buf);
display_showcmd();
} | 0 | [
"CWE-416"
] | vim | 35a9a00afcb20897d462a766793ff45534810dc3 | 153,333,867,390,518,450,000,000,000,000,000,000,000 | 9 | patch 8.2.3428: using freed memory when replacing
Problem: Using freed memory when replacing. (Dhiraj Mishra)
Solution: Get the line pointer after calling ins_copychar(). |
static bool parseOperands(char* str, ArmOp *op) {
char *t = strdup (str);
int operand = 0;
char *token = t;
char *x;
int imm_count = 0;
int mem_opt = 0;
if (!token) {
return false;
}
while (token) {
char *next = strchr (token, ',');
if (next) {
*next++ = 0;
}
while (token[0] == ' ') {
token++;
}
if (operand >= MAX_OPERANDS) {
eprintf ("Too many operands\n");
return false;
}
op->operands[operand].type = ARM_NOTYPE;
op->operands[operand].reg_type = ARM_UNDEFINED;
op->operands[operand].shift = ARM_NO_SHIFT;
while (token[0] == ' ' || token[0] == '[' || token[0] == ']') {
token ++;
}
if (!strncmp (token, "lsl", 3)) {
op->operands[operand].shift = ARM_LSL;
} else if (!strncmp (token, "lsr", 3)) {
op->operands[operand].shift = ARM_LSR;
} else if (!strncmp (token, "asr", 3)) {
op->operands[operand].shift = ARM_ASR;
}
if (strlen (token) > 4 && op->operands[operand].shift != ARM_NO_SHIFT) {
op->operands_count ++;
op->operands[operand].shift_amount = r_num_math (NULL, token + 4);
if (op->operands[operand].shift_amount > 63) {
return false;
}
operand ++;
token = next;
continue;
}
switch (token[0]) {
case 'x':
x = strchr (token, ',');
if (x) {
x[0] = '\0';
}
op->operands_count ++;
op->operands[operand].type = ARM_GPR;
op->operands[operand].reg_type = ARM_REG64;
op->operands[operand].reg = r_num_math (NULL, token + 1);
if (op->operands[operand].reg > 31) {
return false;
}
break;
case 'w':
op->operands_count ++;
op->operands[operand].type = ARM_GPR;
op->operands[operand].reg_type = ARM_REG32;
op->operands[operand].reg = r_num_math (NULL, token + 1);
if (op->operands[operand].reg > 31) {
return false;
}
break;
case 'v':
op->operands_count ++;
op->operands[operand].type = ARM_FP;
op->operands[operand].reg = r_num_math (NULL, token + 1);
break;
case 's':
case 'S':
if (token[1] == 'P' || token [1] == 'p') {
int i;
for (i = 0; msr_const[i].name; i++) {
if (!r_str_ncasecmp (token, msr_const[i].name, strlen (msr_const[i].name))) {
op->operands[operand].sp_val = msr_const[i].val;
break;
}
}
op->operands_count ++;
op->operands[operand].type = ARM_GPR;
op->operands[operand].reg_type = ARM_SP | ARM_REG64;
op->operands[operand].reg = 31;
break;
}
mem_opt = get_mem_option (token);
if (mem_opt != -1) {
op->operands_count ++;
op->operands[operand].type = ARM_MEM_OPT;
op->operands[operand].mem_option = mem_opt;
}
break;
case 'L':
case 'l':
case 'I':
case 'i':
case 'N':
case 'n':
case 'O':
case 'o':
case 'p':
case 'P':
mem_opt = get_mem_option (token);
if (mem_opt != -1) {
op->operands_count ++;
op->operands[operand].type = ARM_MEM_OPT;
op->operands[operand].mem_option = mem_opt;
}
break;
case '-':
op->operands[operand].sign = -1;
// fallthrough
default:
op->operands_count ++;
op->operands[operand].type = ARM_CONSTANT;
op->operands[operand].immediate = r_num_math (NULL, token);
imm_count++;
break;
}
token = next;
operand ++;
if (operand > MAX_OPERANDS) {
free (t);
return false;
}
}
free (t);
return true;
} | 0 | [
"CWE-125",
"CWE-787"
] | radare2 | e5c14c167b0dcf0a53d76bd50bacbbcc0dfc1ae7 | 198,292,456,631,788,930,000,000,000,000,000,000,000 | 138 | Fix #12417/#12418 (arm assembler heap overflows) |
Statement_Ptr Expand::operator()(Ruleset_Ptr r)
{
LOCAL_FLAG(old_at_root_without_rule, at_root_without_rule);
if (in_keyframes) {
Block_Ptr bb = operator()(r->block());
Keyframe_Rule_Obj k = SASS_MEMORY_NEW(Keyframe_Rule, r->pstate(), bb);
if (r->selector()) {
if (Selector_List_Ptr s = r->selector()) {
selector_stack.push_back(0);
k->name(s->eval(eval));
selector_stack.pop_back();
}
}
return k.detach();
}
// reset when leaving scope
LOCAL_FLAG(at_root_without_rule, false);
// `&` is allowed in `@at-root`!
bool has_parent_selector = false;
for (size_t i = 0, L = selector_stack.size(); i < L && !has_parent_selector; i++) {
Selector_List_Obj ll = selector_stack.at(i);
has_parent_selector = ll != 0 && ll->length() > 0;
}
Selector_List_Obj sel = r->selector();
if (sel) sel = sel->eval(eval);
// check for parent selectors in base level rules
if (r->is_root() || (block_stack.back() && block_stack.back()->is_root())) {
if (Selector_List_Ptr selector_list = Cast<Selector_List>(r->selector())) {
for (Complex_Selector_Obj complex_selector : selector_list->elements()) {
Complex_Selector_Ptr tail = complex_selector;
while (tail) {
if (tail->head()) for (Simple_Selector_Obj header : tail->head()->elements()) {
Parent_Selector_Ptr ptr = Cast<Parent_Selector>(header);
if (ptr == NULL || (!ptr->real() || has_parent_selector)) continue;
std::string sel_str(complex_selector->to_string(ctx.c_options));
error("Base-level rules cannot contain the parent-selector-referencing character '&'.", header->pstate(), traces);
}
tail = tail->tail();
}
}
}
}
else {
if (sel->length() == 0 || sel->has_parent_ref()) {
if (sel->has_real_parent_ref() && !has_parent_selector) {
error("Base-level rules cannot contain the parent-selector-referencing character '&'.", sel->pstate(), traces);
}
}
}
// do not connect parent again
sel->remove_parent_selectors();
selector_stack.push_back(sel);
Env env(environment());
if (block_stack.back()->is_root()) {
env_stack.push_back(&env);
}
sel->set_media_block(media_stack.back());
Block_Obj blk = 0;
if (r->block()) blk = operator()(r->block());
Ruleset_Ptr rr = SASS_MEMORY_NEW(Ruleset,
r->pstate(),
sel,
blk);
selector_stack.pop_back();
if (block_stack.back()->is_root()) {
env_stack.pop_back();
}
rr->is_root(r->is_root());
rr->tabs(r->tabs());
return rr;
} | 0 | [
"CWE-476"
] | libsass | 0bc35e3d26922229d5a3e3308860cf0fcee5d1cf | 191,010,609,221,070,580,000,000,000,000,000,000,000 | 79 | Fix segfault on empty custom properties
Originally reported in sass/sassc#225
Fixes sass/sassc#225
Spec sass/sass-spec#1249 |
static bool is_xmm_fast_hypercall(struct kvm_hv_hcall *hc)
{
	/* These Hyper-V hypercalls can pass their input via XMM registers. */
	switch (hc->code) {
case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
case HVCALL_SEND_IPI_EX:
return true;
}
return false;
} | 0 | [
"CWE-476"
] | linux | 7ec37d1cbe17d8189d9562178d8b29167fe1c31a | 238,829,125,946,659,230,000,000,000,000,000,000,000 | 13 | KVM: x86: Check lapic_in_kernel() before attempting to set a SynIC irq
When KVM_CAP_HYPERV_SYNIC{,2} is activated, KVM already checks for
irqchip_in_kernel() so normally SynIC irqs should never be set. It is,
however, possible for a misbehaving VMM to write to SYNIC/STIMER MSRs
causing erroneous behavior.
The immediate issue being fixed is that kvm_irq_delivery_to_apic()
(kvm_irq_delivery_to_apic_fast()) crashes when called with
'irq.shorthand = APIC_DEST_SELF' and 'src == NULL'.
Signed-off-by: Vitaly Kuznetsov <[email protected]>
Message-Id: <[email protected]>
Cc: [email protected]
Signed-off-by: Paolo Bonzini <[email protected]> |
void ip_rt_get_source(u8 *addr, struct sk_buff *skb, struct rtable *rt)
{
__be32 src;
if (rt_is_output_route(rt))
src = ip_hdr(skb)->saddr;
else {
struct fib_result res;
struct iphdr *iph = ip_hdr(skb);
struct flowi4 fl4 = {
.daddr = iph->daddr,
.saddr = iph->saddr,
.flowi4_tos = RT_TOS(iph->tos),
.flowi4_oif = rt->dst.dev->ifindex,
.flowi4_iif = skb->dev->ifindex,
.flowi4_mark = skb->mark,
};
rcu_read_lock();
if (fib_lookup(dev_net(rt->dst.dev), &fl4, &res, 0) == 0)
src = fib_result_prefsrc(dev_net(rt->dst.dev), &res);
else
src = inet_select_addr(rt->dst.dev,
rt_nexthop(rt, iph->daddr),
RT_SCOPE_UNIVERSE);
rcu_read_unlock();
}
memcpy(addr, &src, 4);
} | 0 | [
"CWE-327"
] | linux | aa6dd211e4b1dde9d5dc25d699d35f789ae7eeba | 359,054,203,419,884,750,000,000,000,000,000,000 | 29 | inet: use bigger hash table for IP ID generation
In commit 73f156a6e8c1 ("inetpeer: get rid of ip_id_count")
I used a very small hash table that could be abused
by patient attackers to reveal sensitive information.
Switch to a dynamic sizing, depending on RAM size.
Typical big hosts will now use 128x more storage (2 MB)
to get a similar increase in security and reduction
of hash collisions.
As a bonus, use of alloc_large_system_hash() spreads
allocated memory among all NUMA nodes.
Fixes: 73f156a6e8c1 ("inetpeer: get rid of ip_id_count")
Reported-by: Amit Klein <[email protected]>
Signed-off-by: Eric Dumazet <[email protected]>
Cc: Willy Tarreau <[email protected]>
Signed-off-by: David S. Miller <[email protected]> |
const char * olm_pk_encryption_last_error(
OlmPkEncryption * encryption
) {
auto error = encryption->last_error;
return _olm_error_to_string(error);
} | 0 | [
"CWE-787"
] | olm | ccc0d122ee1b4d5e5ca4ec1432086be17d5f901b | 23,084,313,783,276,715,000,000,000,000,000,000,000 | 6 | olm_pk_decrypt: Ensure inputs are of correct length. |
static void add_child(pid_t pid, struct sockaddr *addr, int addrlen)
{
struct child *newborn, **cradle;
/*
* This must be xcalloc() -- we'll compare the whole sockaddr_storage
* but individual address may be shorter.
*/
newborn = xcalloc(1, sizeof(*newborn));
live_children++;
newborn->pid = pid;
memcpy(&newborn->address, addr, addrlen);
for (cradle = &firstborn; *cradle; cradle = &(*cradle)->next)
if (!memcmp(&(*cradle)->address, &newborn->address,
sizeof(newborn->address)))
break;
newborn->next = *cradle;
*cradle = newborn;
} | 0 | [] | git | 73bb33a94ec67a53e7d805b12ad9264fa25f4f8d | 56,345,121,297,387,060,000,000,000,000,000,000,000 | 19 | daemon: Strictly parse the "extra arg" part of the command
Since 1.4.4.5 (49ba83fb67 "Add virtualization support to git-daemon")
git daemon enters an infinite loop and never terminates if a client
hides any extra arguments in the initial request line which is not
exactly "\0host=blah\0".
Since that change, a client must never insert additional extra
arguments, or attempt to use any argument other than "host=", as
any daemon will get stuck parsing the request line and will never
complete the request.
Since the client can't tell if the daemon is patched or not, it
is not possible to know if additional extra args might actually be
able to be safely requested.
If we ever need to extend the git daemon protocol to support a new
feature, we may have to do something like this to the exchange:
# If both support git:// v2
#
C: 000cgit://v2
S: 0010ok host user
C: 0018host git.kernel.org
C: 0027git-upload-pack /pub/linux-2.6.git
S: ...git-upload-pack header...
# If client supports git:// v2, server does not:
#
C: 000cgit://v2
S: <EOF>
C: 003bgit-upload-pack /pub/linux-2.6.git\0host=git.kernel.org\0
S: ...git-upload-pack header...
This requires the client to create two TCP connections to talk to
an older git daemon, however all daemons since the introduction of
daemon.c will safely reject the unknown "git://v2" command request,
so the client can quite easily determine the server supports an
older protocol.
Signed-off-by: Shawn O. Pearce <[email protected]>
Signed-off-by: Junio C Hamano <[email protected]> |
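A sketch of the strict parse the fix introduces (a hypothetical helper; assumes the extra-arg region is a run of NUL-terminated tokens inside [extra, extra+len)):
#include <string.h>
/* Every token must start with "host="; anything else is rejected
 * immediately, and the cursor always advances, so no input can loop. */
static int parse_extra_args(const char *extra, size_t len, const char **host)
{
	const char *end = extra + len;
	while (extra < end) {
		if (*extra) {
			if (strncmp(extra, "host=", 5) != 0)
				return -1;	/* unknown extra arg */
			*host = extra + 5;
			extra += strlen(extra);
		}
		extra++;			/* skip the NUL */
	}
	return 0;
}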
static int create_raw_packet_qp_tis(struct mlx5_ib_dev *dev,
struct mlx5_ib_qp *qp,
struct mlx5_ib_sq *sq, u32 tdn)
{
u32 in[MLX5_ST_SZ_DW(create_tis_in)] = {0};
void *tisc = MLX5_ADDR_OF(create_tis_in, in, ctx);
MLX5_SET(tisc, tisc, transport_domain, tdn);
if (qp->flags & MLX5_IB_QP_UNDERLAY)
MLX5_SET(tisc, tisc, underlay_qpn, qp->underlay_qpn);
return mlx5_core_create_tis(dev->mdev, in, sizeof(in), &sq->tisn);
} | 0 | [
"CWE-119",
"CWE-787"
] | linux | 0625b4ba1a5d4703c7fb01c497bd6c156908af00 | 171,220,468,399,583,140,000,000,000,000,000,000,000 | 13 | IB/mlx5: Fix leaking stack memory to userspace
mlx5_ib_create_qp_resp was never initialized and only the first 4 bytes
were written.
Fixes: 41d902cb7c32 ("RDMA/mlx5: Fix definition of mlx5_ib_create_qp_resp")
Cc: <[email protected]>
Acked-by: Leon Romanovsky <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]> |
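The fix boils down to zero-initializing the whole response before filling the computed fields; a sketch with hypothetical field names:
#include <string.h>
#include <stdint.h>
struct create_qp_resp { uint32_t bfreg_index; uint8_t rsvd[12]; };	/* hypothetical layout */
/* Zero the full stack object first: whatever is not explicitly written
 * afterwards is 0, not stale stack memory, when copied to userspace. */
static void init_resp(struct create_qp_resp *resp, uint32_t bfreg)
{
	memset(resp, 0, sizeof(*resp));
	resp->bfreg_index = bfreg;
}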
ExecAssignScanTypeFromOuterPlan(ScanState *scanstate)
{
PlanState *outerPlan;
TupleDesc tupDesc;
outerPlan = outerPlanState(scanstate);
tupDesc = ExecGetResultType(outerPlan);
ExecAssignScanType(scanstate, tupDesc);
} | 0 | [
"CWE-209"
] | postgres | 804b6b6db4dcfc590a468e7be390738f9f7755fb | 15,252,072,717,406,730,000,000,000,000,000,000,000 | 10 | Fix column-privilege leak in error-message paths
While building error messages to return to the user,
BuildIndexValueDescription, ExecBuildSlotValueDescription and
ri_ReportViolation would happily include the entire key or entire row in
the result returned to the user, even if the user didn't have access to
view all of the columns being included.
Instead, include only those columns which the user is providing or which
the user has select rights on. If the user does not have any rights
to view the table or any of the columns involved then no detail is
provided and a NULL value is returned from BuildIndexValueDescription
and ExecBuildSlotValueDescription. Note that, for key cases, the user
must have access to all of the columns for the key to be shown; a
partial key will not be returned.
Further, in master only, do not return any data for cases where row
security is enabled on the relation and row security should be applied
for the user. This required a bit of refactoring and moving of things
around related to RLS- note the addition of utils/misc/rls.c.
Back-patch all the way, as column-level privileges are now in all
supported versions.
This has been assigned CVE-2014-8161, but since the issue and the patch
have already been publicized on pgsql-hackers, there's no point in trying
to hide this commit. |
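The column-filtering rule can be sketched as follows (illustrative C, not the PostgreSQL source; the real builder uses StringInfo and ACL checks):
#include <stdbool.h>
#include <stdio.h>
/* Build the error detail only from columns the user may select; if none
 * are visible, return NULL so no detail is emitted at all. */
static char *build_value_description(int ncols,
				     const char *const *names,
				     const char *const *values,
				     bool (*user_can_select)(int col))
{
	static char buf[256];
	size_t off = 0;
	int shown = 0, i;
	for (i = 0; i < ncols; i++) {
		if (!user_can_select(i))
			continue;		/* skip unreadable columns */
		off += (size_t)snprintf(buf + off, sizeof(buf) - off, "%s%s=%s",
					shown++ ? ", " : "", names[i], values[i]);
		if (off >= sizeof(buf))
			break;
	}
	return shown ? buf : NULL;		/* NULL => nothing leaked */
}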
_public_ int sd_bus_get_exit_on_disconnect(sd_bus *bus) {
assert_return(bus, -EINVAL);
assert_return(bus = bus_resolve(bus), -ENOPKG);
return bus->exit_on_disconnect;
} | 0 | [
"CWE-416"
] | systemd | 1068447e6954dc6ce52f099ed174c442cb89ed54 | 72,486,465,211,245,030,000,000,000,000,000,000,000 | 6 | sd-bus: introduce API for re-enqueuing incoming messages
When authorizing via PolicyKit we want to process incoming method calls
twice: once to process and figure out that we need PK authentication,
and a second time after we acquired PK authentication to actually execute
the operation. With this new call sd_bus_enqueue_for_read() we have a
way to put an incoming message back into the read queue for this
purpose.
This might have other uses too, for example debugging. |
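A sketch of the intended usage in a method handler, against the real sd-bus API; the authorization helpers are stubs standing in for an async PolicyKit check:
#include <systemd/sd-bus.h>
static int already_authorized(sd_bus_message *m) { (void)m; return 0; }	/* stub */
static void start_async_polkit_check(sd_bus_message *m) { (void)m; }	/* stub */
static int do_operation(sd_bus_message *m) { (void)m; return 1; }	/* stub */
/* First pass: detect that authorization is needed, kick it off, and put
 * the message back on the read queue; once the PolicyKit reply arrives,
 * the same call is dispatched a second time and actually executed. */
static int method_handler(sd_bus_message *m, void *userdata, sd_bus_error *error)
{
	(void)userdata; (void)error;
	if (!already_authorized(m)) {
		start_async_polkit_check(m);
		return sd_bus_enqueue_for_read(sd_bus_message_get_bus(m), m);
	}
	return do_operation(m);
}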
FLAC_API FLAC__bool FLAC__stream_encoder_set_rice_parameter_search_dist(FLAC__StreamEncoder *encoder, uint32_t value)
{
FLAC__ASSERT(0 != encoder);
FLAC__ASSERT(0 != encoder->private_);
FLAC__ASSERT(0 != encoder->protected_);
if(encoder->protected_->state != FLAC__STREAM_ENCODER_UNINITIALIZED)
return false;
#if 0
/*@@@ deprecated: */
encoder->protected_->rice_parameter_search_dist = value;
#else
(void)value;
#endif
return true;
} | 0 | [
"CWE-787"
] | flac | e1575e4a7c5157cbf4e4a16dbd39b74f7174c7be | 314,032,663,791,153,760,000,000,000,000,000,000,000 | 15 | libFlac: Exit at EOS in verify mode
When verify mode is enabled, once decoder flags end of stream,
encode processing is considered complete.
CVE-2021-0561
Signed-off-by: Ralph Giles <[email protected]> |
void icmpv6_flow_init(struct sock *sk, struct flowi6 *fl6,
u8 type,
const struct in6_addr *saddr,
const struct in6_addr *daddr,
int oif)
{
memset(fl6, 0, sizeof(*fl6));
fl6->saddr = *saddr;
fl6->daddr = *daddr;
fl6->flowi6_proto = IPPROTO_ICMPV6;
fl6->fl6_icmp_type = type;
fl6->fl6_icmp_code = 0;
fl6->flowi6_oif = oif;
security_sk_classify_flow(sk, flowi6_to_flowi(fl6));
} | 0 | [
"CWE-20",
"CWE-200"
] | linux | 79dc7e3f1cd323be4c81aa1a94faa1b3ed987fb2 | 41,267,044,418,174,507,000,000,000,000,000,000,000 | 15 | net: handle no dst on skb in icmp6_send
Andrey reported the following while fuzzing the kernel with syzkaller:
kasan: CONFIG_KASAN_INLINE enabled
kasan: GPF could be caused by NULL-ptr deref or user memory access
general protection fault: 0000 [#1] SMP KASAN
Modules linked in:
CPU: 0 PID: 3859 Comm: a.out Not tainted 4.9.0-rc6+ #429
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
task: ffff8800666d4200 task.stack: ffff880067348000
RIP: 0010:[<ffffffff833617ec>] [<ffffffff833617ec>]
icmp6_send+0x5fc/0x1e30 net/ipv6/icmp.c:451
RSP: 0018:ffff88006734f2c0 EFLAGS: 00010206
RAX: ffff8800666d4200 RBX: 0000000000000000 RCX: 0000000000000000
RDX: 0000000000000000 RSI: dffffc0000000000 RDI: 0000000000000018
RBP: ffff88006734f630 R08: ffff880064138418 R09: 0000000000000003
R10: dffffc0000000000 R11: 0000000000000005 R12: 0000000000000000
R13: ffffffff84e7e200 R14: ffff880064138484 R15: ffff8800641383c0
FS: 00007fb3887a07c0(0000) GS:ffff88006cc00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020000000 CR3: 000000006b040000 CR4: 00000000000006f0
Stack:
ffff8800666d4200 ffff8800666d49f8 ffff8800666d4200 ffffffff84c02460
ffff8800666d4a1a 1ffff1000ccdaa2f ffff88006734f498 0000000000000046
ffff88006734f440 ffffffff832f4269 ffff880064ba7456 0000000000000000
Call Trace:
[<ffffffff83364ddc>] icmpv6_param_prob+0x2c/0x40 net/ipv6/icmp.c:557
[< inline >] ip6_tlvopt_unknown net/ipv6/exthdrs.c:88
[<ffffffff83394405>] ip6_parse_tlv+0x555/0x670 net/ipv6/exthdrs.c:157
[<ffffffff8339a759>] ipv6_parse_hopopts+0x199/0x460 net/ipv6/exthdrs.c:663
[<ffffffff832ee773>] ipv6_rcv+0xfa3/0x1dc0 net/ipv6/ip6_input.c:191
...
icmp6_send / icmpv6_send is invoked for both rx and tx paths. In both
cases the dst->dev should be preferred for determining the L3 domain
if the dst has been set on the skb. Fallback to the skb->dev if it has
not. This covers the case reported here where icmp6_send is invoked on
Rx before the route lookup.
Fixes: 5d41ce29e ("net: icmp6_send should use dst dev to determine L3 domain")
Reported-by: Andrey Konovalov <[email protected]>
Signed-off-by: David Ahern <[email protected]>
Signed-off-by: David S. Miller <[email protected]> |
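The device-selection rule from the fix, as a kernel-style sketch (assumes <linux/skbuff.h> and <net/dst.h>; not the exact upstream hunk):
#include <linux/skbuff.h>
#include <net/dst.h>
/* Prefer the dst's device when a dst is attached (tx path); fall back
 * to the receiving device (rx path, before any route lookup ran). */
static struct net_device *icmp6_l3_dev(const struct sk_buff *skb)
{
	const struct dst_entry *dst = skb_dst(skb);
	return (dst && dst->dev) ? dst->dev : skb->dev;
}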
static void FVMenuCopyWidth(GWindow gw, struct gmenuitem *mi, GEvent *UNUSED(e)) {
FontView *fv = (FontView *) GDrawGetUserData(gw);
if ( FVAnyCharSelected(fv)==-1 )
return;
if ( mi->mid==MID_CopyVWidth && !fv->b.sf->hasvmetrics )
return;
FVCopyWidth((FontViewBase *) fv,
mi->mid==MID_CopyWidth?ut_width:
mi->mid==MID_CopyVWidth?ut_vwidth:
mi->mid==MID_CopyLBearing?ut_lbearing:
ut_rbearing);
} | 0 | [
"CWE-119",
"CWE-787"
] | fontforge | 626f751752875a0ddd74b9e217b6f4828713573c | 285,499,905,503,902,620,000,000,000,000,000,000,000 | 13 | Warn users before discarding their unsaved scripts (#3852)
* Warn users before discarding their unsaved scripts
This closes #3846. |
/* GHashTable equality callback: two inode keys match only when both the
 * inode number and the device id are equal. */
static gboolean lo_key_equal(gconstpointer a, gconstpointer b)
{
const struct lo_key *la = a;
const struct lo_key *lb = b;
return la->ino == lb->ino && la->dev == lb->dev;
} | 0 | [] | qemu | 6084633dff3a05d63176e06d7012c7e15aba15be | 285,942,089,346,274,270,000,000,000,000,000,000,000 | 7 | tools/virtiofsd: xattr name mappings: Add option
Add an option to define mappings of xattr names so that
the client and server filesystems see different views.
This can be used to have different SELinux mappings as
seen by the guest, to run the virtiofsd with less privileges
(e.g. in a case where it can't set trusted/system/security
xattrs but you want the guest to be able to), or to isolate
multiple users of the same name; e.g. trusted attributes
used by stacking overlayfs.
A mapping engine is used with 3 simple rules; the rules can
be combined to allow most useful mapping scenarios.
The ruleset is defined by -o xattrmap='rules...'.
This patch doesn't use the rule maps yet.
Signed-off-by: Dr. David Alan Gilbert <[email protected]>
Message-Id: <[email protected]>
Reviewed-by: Stefan Hajnoczi <[email protected]>
Signed-off-by: Dr. David Alan Gilbert <[email protected]> |
build_pathname(struct archive_string *as, struct file_info *file, int depth)
{
// Plain ISO9660 only allows 8 dir levels; if we get
// to 1000, then something is very, very wrong.
if (depth > 1000) {
return NULL;
}
if (file->parent != NULL && archive_strlen(&file->parent->name) > 0) {
if (build_pathname(as, file->parent, depth + 1) == NULL) {
return NULL;
}
archive_strcat(as, "/");
}
if (archive_strlen(&file->name) == 0)
archive_strcat(as, ".");
else
archive_string_concat(as, &file->name);
return (as->s);
} | 0 | [
"CWE-125"
] | libarchive | f9569c086ff29259c73790db9cbf39fe8fb9d862 | 32,048,783,471,010,700,000,000,000,000,000,000,000 | 19 | iso9660: validate directory record length |
sg_start_req(Sg_request *srp, unsigned char *cmd)
{
int res;
struct request *rq;
struct scsi_request *req;
Sg_fd *sfp = srp->parentfp;
sg_io_hdr_t *hp = &srp->header;
int dxfer_len = (int) hp->dxfer_len;
int dxfer_dir = hp->dxfer_direction;
unsigned int iov_count = hp->iovec_count;
Sg_scatter_hold *req_schp = &srp->data;
Sg_scatter_hold *rsv_schp = &sfp->reserve;
struct request_queue *q = sfp->parentdp->device->request_queue;
struct rq_map_data *md, map_data;
int rw = hp->dxfer_direction == SG_DXFER_TO_DEV ? WRITE : READ;
unsigned char *long_cmdp = NULL;
SCSI_LOG_TIMEOUT(4, sg_printk(KERN_INFO, sfp->parentdp,
"sg_start_req: dxfer_len=%d\n",
dxfer_len));
if (hp->cmd_len > BLK_MAX_CDB) {
long_cmdp = kzalloc(hp->cmd_len, GFP_KERNEL);
if (!long_cmdp)
return -ENOMEM;
}
/*
* NOTE
*
* With scsi-mq enabled, there are a fixed number of preallocated
* requests equal in number to shost->can_queue. If all of the
* preallocated requests are already in use, then using GFP_ATOMIC with
* blk_get_request() will return -EWOULDBLOCK, whereas using GFP_KERNEL
* will cause blk_get_request() to sleep until an active command
* completes, freeing up a request. Neither option is ideal, but
* GFP_KERNEL is the better choice to prevent userspace from getting an
* unexpected EWOULDBLOCK.
*
* With scsi-mq disabled, blk_get_request() with GFP_KERNEL usually
* does not sleep except under memory pressure.
*/
rq = blk_get_request(q, hp->dxfer_direction == SG_DXFER_TO_DEV ?
REQ_OP_SCSI_OUT : REQ_OP_SCSI_IN, GFP_KERNEL);
if (IS_ERR(rq)) {
kfree(long_cmdp);
return PTR_ERR(rq);
}
req = scsi_req(rq);
if (hp->cmd_len > BLK_MAX_CDB)
req->cmd = long_cmdp;
memcpy(req->cmd, cmd, hp->cmd_len);
req->cmd_len = hp->cmd_len;
srp->rq = rq;
rq->end_io_data = srp;
req->retries = SG_DEFAULT_RETRIES;
if ((dxfer_len <= 0) || (dxfer_dir == SG_DXFER_NONE))
return 0;
if (sg_allow_dio && hp->flags & SG_FLAG_DIRECT_IO &&
dxfer_dir != SG_DXFER_UNKNOWN && !iov_count &&
!sfp->parentdp->device->host->unchecked_isa_dma &&
blk_rq_aligned(q, (unsigned long)hp->dxferp, dxfer_len))
md = NULL;
else
md = &map_data;
if (md) {
mutex_lock(&sfp->f_mutex);
if (dxfer_len <= rsv_schp->bufflen &&
!sfp->res_in_use) {
sfp->res_in_use = 1;
sg_link_reserve(sfp, srp, dxfer_len);
} else if (hp->flags & SG_FLAG_MMAP_IO) {
res = -EBUSY; /* sfp->res_in_use == 1 */
if (dxfer_len > rsv_schp->bufflen)
res = -ENOMEM;
mutex_unlock(&sfp->f_mutex);
return res;
} else {
res = sg_build_indirect(req_schp, sfp, dxfer_len);
if (res) {
mutex_unlock(&sfp->f_mutex);
return res;
}
}
mutex_unlock(&sfp->f_mutex);
md->pages = req_schp->pages;
md->page_order = req_schp->page_order;
md->nr_entries = req_schp->k_use_sg;
md->offset = 0;
md->null_mapped = hp->dxferp ? 0 : 1;
if (dxfer_dir == SG_DXFER_TO_FROM_DEV)
md->from_user = 1;
else
md->from_user = 0;
}
if (iov_count) {
struct iovec *iov = NULL;
struct iov_iter i;
res = import_iovec(rw, hp->dxferp, iov_count, 0, &iov, &i);
if (res < 0)
return res;
iov_iter_truncate(&i, hp->dxfer_len);
if (!iov_iter_count(&i)) {
kfree(iov);
return -EINVAL;
}
res = blk_rq_map_user_iov(q, rq, md, &i, GFP_ATOMIC);
kfree(iov);
} else
res = blk_rq_map_user(q, rq, md, hp->dxferp,
hp->dxfer_len, GFP_ATOMIC);
if (!res) {
srp->bio = rq->bio;
if (!md) {
req_schp->dio_in_use = 1;
hp->info |= SG_INFO_DIRECT_IO;
}
}
return res;
} | 0 | [
"CWE-200"
] | linux | 3e0097499839e0fe3af380410eababe5a47c4cf9 | 208,153,856,791,990,770,000,000,000,000,000,000,000 | 132 | scsi: sg: fixup infoleak when using SG_GET_REQUEST_TABLE
When calling SG_GET_REQUEST_TABLE ioctl only a half-filled table is
returned; the remaining part will then contain stale kernel memory
information. This patch zeroes out the entire table to avoid this
issue.
Signed-off-by: Hannes Reinecke <[email protected]>
Reviewed-by: Bart Van Assche <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Eric Dumazet <[email protected]>
Signed-off-by: Martin K. Petersen <[email protected]> |
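The infoleak fix is the classic zero-before-fill pattern; a sketch (assumes struct sg_req_info from <scsi/sg.h>):
#include <string.h>
#include <scsi/sg.h>
/* Zero the entire table before filling only the active entries, so the
 * unwritten tail copied to userspace cannot carry stale kernel bytes. */
static void prepare_request_table(struct sg_req_info *rinfo, size_t n_elems)
{
	memset(rinfo, 0, n_elems * sizeof(*rinfo));
	/* ...fill rinfo[0..active-1] from the live request list... */
}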
void uwsgi_emulate_cow_for_apps(int id) {
int i;
// check if we need to emulate fork() COW
if (uwsgi.mywid == 0) {
for (i = 1; i <= uwsgi.numproc; i++) {
memcpy(&uwsgi.workers[i].apps[id], &uwsgi.workers[0].apps[id], sizeof(struct uwsgi_app));
uwsgi.workers[i].apps_cnt = uwsgi_apps_cnt;
}
}
} | 0 | [
"CWE-119",
"CWE-703",
"CWE-787"
] | uwsgi | cb4636f7c0af2e97a4eef7a3cdcbd85a71247bfe | 65,568,442,820,357,500,000,000,000,000,000,000,000 | 10 | improve uwsgi_expand_path() to sanitize input, avoiding stack corruption and potential security issue |
static void imap_login_preinit(void)
{
login_set_roots = imap_login_setting_roots;
} | 0 | [] | core | 62061e8cf68f506c0ccaaba21fd4174764ca875f | 321,676,649,634,654,700,000,000,000,000,000,000,000 | 4 | imap-login: Split off client_invalid_command() |
libraw_processed_image_t *LibRaw::dcraw_make_mem_image(int *errcode)
{
int width, height, colors, bps;
get_mem_image_format(&width, &height, &colors, &bps);
int stride = width * (bps/8) * colors;
unsigned ds = height * stride;
libraw_processed_image_t *ret = (libraw_processed_image_t*)::malloc(sizeof(libraw_processed_image_t)+ds);
if(!ret)
{
if(errcode) *errcode= ENOMEM;
return NULL;
}
memset(ret,0,sizeof(libraw_processed_image_t));
// metadata init
ret->type = LIBRAW_IMAGE_BITMAP;
ret->height = height;
ret->width = width;
ret->colors = colors;
ret->bits = bps;
ret->data_size = ds;
copy_mem_image(ret->data, stride, 0);
return ret;
} | 0 | [
"CWE-129"
] | LibRaw | 89d065424f09b788f443734d44857289489ca9e2 | 112,563,906,367,239,100,000,000,000,000,000,000,000 | 26 | fixed two more problems found by fuzzer |
ssize_t cpu_show_spec_store_bypass(struct device *dev, struct device_attribute *attr, char *buf)
{
return cpu_show_common(dev, attr, buf, X86_BUG_SPEC_STORE_BYPASS);
} | 0 | [] | linux | a2059825986a1c8143fd6698774fa9d83733bb11 | 2,652,098,715,897,922,000,000,000,000,000,000,000 | 4 | x86/speculation: Enable Spectre v1 swapgs mitigations
The previous commit added macro calls in the entry code which mitigate the
Spectre v1 swapgs issue if the X86_FEATURE_FENCE_SWAPGS_* features are
enabled. Enable those features where applicable.
The mitigations may be disabled with "nospectre_v1" or "mitigations=off".
There are different features which can affect the risk of attack:
- When FSGSBASE is enabled, unprivileged users are able to place any
value in GS, using the wrgsbase instruction. This means they can
write a GS value which points to any value in kernel space, which can
be useful with the following gadget in an interrupt/exception/NMI
handler:
if (coming from user space)
swapgs
mov %gs:<percpu_offset>, %reg1
// dependent load or store based on the value of %reg
// for example: mov %(reg1), %reg2
If an interrupt is coming from user space, and the entry code
speculatively skips the swapgs (due to user branch mistraining), it
may speculatively execute the GS-based load and a subsequent dependent
load or store, exposing the kernel data to an L1 side channel leak.
Note that, on Intel, a similar attack exists in the above gadget when
coming from kernel space, if the swapgs gets speculatively executed to
switch back to the user GS. On AMD, this variant isn't possible
because swapgs is serializing with respect to future GS-based
accesses.
NOTE: The FSGSBASE patch set hasn't been merged yet, so the above case
doesn't exist quite yet.
- When FSGSBASE is disabled, the issue is mitigated somewhat because
unprivileged users must use prctl(ARCH_SET_GS) to set GS, which
restricts GS values to user space addresses only. That means the
gadget would need an additional step, since the target kernel address
needs to be read from user space first. Something like:
if (coming from user space)
swapgs
mov %gs:<percpu_offset>, %reg1
mov (%reg1), %reg2
// dependent load or store based on the value of %reg2
// for example: mov %(reg2), %reg3
It's difficult to audit for this gadget in all the handlers, so while
there are no known instances of it, it's entirely possible that it
exists somewhere (or could be introduced in the future). Without
tooling to analyze all such code paths, consider it vulnerable.
Effects of SMAP on the !FSGSBASE case:
- If SMAP is enabled, and the CPU reports RDCL_NO (i.e., not
susceptible to Meltdown), the kernel is prevented from speculatively
reading user space memory, even L1 cached values. This effectively
disables the !FSGSBASE attack vector.
- If SMAP is enabled, but the CPU *is* susceptible to Meltdown, SMAP
still prevents the kernel from speculatively reading user space
memory. But it does *not* prevent the kernel from reading the
user value from L1, if it has already been cached. This is probably
only a small hurdle for an attacker to overcome.
Thanks to Dave Hansen for contributing the speculative_smap() function.
Thanks to Andrew Cooper for providing the inside scoop on whether swapgs
is serializing on AMD.
[ tglx: Fixed the USER fence decision and polished the comment as suggested
by Dave Hansen ]
Signed-off-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Dave Hansen <[email protected]> |
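The core of the mitigation is a serializing fence ordered against the conditional swapgs; a minimal C sketch (the kernel itself emits this via ALTERNATIVE in the asm entry code, patching the lfence out on unaffected CPUs):
/* An lfence here guarantees no later GS-based load can execute
 * speculatively before the swapgs outcome is architecturally known. */
static inline void fence_swapgs(void)
{
	__asm__ __volatile__("lfence" ::: "memory");
}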
void do_notify_resume(struct pt_regs *regs, sigset_t *oldset,
__u32 thread_info_flags)
{
/* Pending single-step? */
if (thread_info_flags & _TIF_SINGLESTEP)
clear_thread_flag(TIF_SINGLESTEP);
/* deal with pending signal delivery */
if (thread_info_flags & _TIF_SIGPENDING)
do_signal(regs,oldset);
if (thread_info_flags & _TIF_NOTIFY_RESUME) {
clear_thread_flag(TIF_NOTIFY_RESUME);
tracehook_notify_resume(regs);
if (current->replacement_session_keyring)
key_replace_session_keyring();
}
clear_thread_flag(TIF_IRET);
} | 0 | [] | linux-2.6 | ee18d64c1f632043a02e6f5ba5e045bb26a5465f | 170,294,807,008,701,900,000,000,000,000,000,000,000 | 20 | KEYS: Add a keyctl to install a process's session keyring on its parent [try #6]
Add a keyctl to install a process's session keyring onto its parent. This
replaces the parent's session keyring. Because the COW credential code does
not permit one process to change another process's credentials directly, the
change is deferred until userspace next starts executing again. Normally this
will be after a wait*() syscall.
To support this, three new security hooks have been provided:
cred_alloc_blank() to allocate unset security creds, cred_transfer() to fill in
the blank security creds and key_session_to_parent() - which asks the LSM if
the process may replace its parent's session keyring.
The replacement may only happen if the process has the same ownership details
as its parent, and the process has LINK permission on the session keyring, and
the session keyring is owned by the process, and the LSM permits it.
Note that this requires alteration to each architecture's notify_resume path.
This has been done for all arches barring blackfin, m68k* and xtensa, all of
which need assembly alteration to support TIF_NOTIFY_RESUME. This allows the
replacement to be performed at the point the parent process resumes userspace
execution.
This allows the userspace AFS pioctl emulation to fully emulate newpag() and
the VIOCSETTOK and VIOCSETTOK2 pioctls, all of which require the ability to
alter the parent process's PAG membership. However, since kAFS doesn't use
PAGs per se, but rather dumps the keys into the session keyring, the session
keyring of the parent must be replaced if, for example, VIOCSETTOK is passed
the newpag flag.
This can be tested with the following program:
#include <stdio.h>
#include <stdlib.h>
#include <keyutils.h>
#define KEYCTL_SESSION_TO_PARENT 18
#define OSERROR(X, S) do { if ((long)(X) == -1) { perror(S); exit(1); } } while(0)
int main(int argc, char **argv)
{
key_serial_t keyring, key;
long ret;
keyring = keyctl_join_session_keyring(argv[1]);
OSERROR(keyring, "keyctl_join_session_keyring");
key = add_key("user", "a", "b", 1, keyring);
OSERROR(key, "add_key");
ret = keyctl(KEYCTL_SESSION_TO_PARENT);
OSERROR(ret, "KEYCTL_SESSION_TO_PARENT");
return 0;
}
Compiled and linked with -lkeyutils, you should see something like:
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: _ses
355907932 --alswrv 4043 -1 \_ keyring: _uid.4043
[dhowells@andromeda ~]$ /tmp/newpag
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: _ses
1055658746 --alswrv 4043 4043 \_ user: a
[dhowells@andromeda ~]$ /tmp/newpag hello
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: hello
340417692 --alswrv 4043 4043 \_ user: a
Where the test program creates a new session keyring, sticks a user key named
'a' into it and then installs it on its parent.
Signed-off-by: David Howells <[email protected]>
Signed-off-by: James Morris <[email protected]> |
static inline bool intel_vgpu_in_aperture(struct intel_vgpu *vgpu, uint64_t off)
{
return off >= vgpu_aperture_offset(vgpu) &&
off < vgpu_aperture_offset(vgpu) + vgpu_aperture_sz(vgpu);
} | 0 | [
"CWE-20"
] | linux | 51b00d8509dc69c98740da2ad07308b630d3eb7d | 229,521,181,008,533,200,000,000,000,000,000,000,000 | 5 | drm/i915/gvt: Fix mmap range check
This fixes a missed mmap range check on the vGPU bar2 region and only
allows mapping the vGPU-allocated GMADDR range, which means user space
should support sparse mmap to obtain the proper offset for mmapping the
vGPU aperture. It also honors the actual pgoff in the mmap request,
whereas the original code always mapped from the beginning of the vGPU
aperture.
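A hedged sketch of the added check, assuming req_size is the length of
the requested mapping; the merged patch validates both ends of the range:
	if (!intel_vgpu_in_aperture(vgpu, off) ||
	    !intel_vgpu_in_aperture(vgpu, off + req_size - 1))
		return -EINVAL;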
Fixes: 659643f7d814 ("drm/i915/gvt/kvmgt: add vfio/mdev support to KVMGT")
Cc: "Monroy, Rodrigo Axel" <[email protected]>
Cc: "Orrala Contreras, Alfredo" <[email protected]>
Cc: [email protected] # v4.10+
Reviewed-by: Hang Yuan <[email protected]>
Signed-off-by: Zhenyu Wang <[email protected]> |
CredentialData() : password(""), scram(), isExternal(false) {} | 0 | [
"CWE-613"
] | mongo | db19e7ce84cfd702a4ba9983ee2ea5019f470f82 | 89,568,138,844,150,700,000,000,000,000,000,000,000 | 1 | SERVER-38984 Validate unique User ID on UserCache hit
(cherry picked from commit e55d6e2292e5dbe2f97153251d8193d1cc89f5d7) |
R_API const char *r_core_anal_optype_colorfor(RCore *core, ut64 addr, bool verbose) {
ut64 type;
if (!(core->print->flags & R_PRINT_FLAGS_COLOR)) {
return NULL;
}
if (!r_config_get_i (core->config, "scr.color")) {
return NULL;
}
type = r_core_anal_address (core, addr);
if (type & R_ANAL_ADDR_TYPE_EXEC) {
return core->cons->context->pal.ai_exec; //Color_RED;
}
if (type & R_ANAL_ADDR_TYPE_WRITE) {
return core->cons->context->pal.ai_write; //Color_BLUE;
}
if (type & R_ANAL_ADDR_TYPE_READ) {
return core->cons->context->pal.ai_read; //Color_GREEN;
}
if (type & R_ANAL_ADDR_TYPE_SEQUENCE) {
return core->cons->context->pal.ai_seq; //Color_MAGENTA;
}
if (type & R_ANAL_ADDR_TYPE_ASCII) {
return core->cons->context->pal.ai_ascii; //Color_YELLOW;
}
return NULL;
} | 0 | [
"CWE-415",
"CWE-703"
] | radare2 | cb8b683758edddae2d2f62e8e63a738c39f92683 | 261,781,924,886,620,440,000,000,000,000,000,000,000 | 26 | Fix #16303 - c->table_query double free (#16318) |
int ip_route_input_rcu(struct sk_buff *skb, __be32 daddr, __be32 saddr,
u8 tos, struct net_device *dev, struct fib_result *res)
{
/* Multicast recognition logic is moved from route cache to here.
* The problem was that too many Ethernet cards have broken/missing
* hardware multicast filters :-( As result the host on multicasting
* network acquires a lot of useless route cache entries, sort of
* SDR messages from all the world. Now we try to get rid of them.
* Really, provided software IP multicast filter is organized
* reasonably (at least, hashed), it does not result in a slowdown
* comparing with route cache reject entries.
* Note, that multicast routers are not affected, because
* route cache entry is created eventually.
*/
if (ipv4_is_multicast(daddr)) {
struct in_device *in_dev = __in_dev_get_rcu(dev);
int our = 0;
int err = -EINVAL;
if (!in_dev)
return err;
our = ip_check_mc_rcu(in_dev, daddr, saddr,
ip_hdr(skb)->protocol);
/* check l3 master if no match yet */
if (!our && netif_is_l3_slave(dev)) {
struct in_device *l3_in_dev;
l3_in_dev = __in_dev_get_rcu(skb->dev);
if (l3_in_dev)
our = ip_check_mc_rcu(l3_in_dev, daddr, saddr,
ip_hdr(skb)->protocol);
}
if (our
#ifdef CONFIG_IP_MROUTE
||
(!ipv4_is_local_multicast(daddr) &&
IN_DEV_MFORWARD(in_dev))
#endif
) {
err = ip_route_input_mc(skb, daddr, saddr,
tos, dev, our);
}
return err;
}
return ip_route_input_slow(skb, daddr, saddr, tos, dev, res);
} | 0 | [
"CWE-327"
] | linux | aa6dd211e4b1dde9d5dc25d699d35f789ae7eeba | 158,856,833,068,806,880,000,000,000,000,000,000,000 | 49 | inet: use bigger hash table for IP ID generation
In commit 73f156a6e8c1 ("inetpeer: get rid of ip_id_count")
I used a very small hash table that could be abused
by patient attackers to reveal sensitive information.
Switch to a dynamic sizing, depending on RAM size.
Typical big hosts will now use 128x more storage (2 MB)
to get a similar increase in security and reduction
of hash collisions.
As a bonus, use of alloc_large_system_hash() spreads
allocated memory among all NUMA nodes.
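A hedged sketch of the sizing call; alloc_large_system_hash() picks a
table size from available memory, and the argument values here are
illustrative rather than the exact ones merged:
	idents_hash = alloc_large_system_hash("IP idents",
					      sizeof(*ip_idents) + sizeof(*ip_tstamps),
					      0,	/* scale with system RAM */
					      16,	/* one bucket per 64 KB */
					      HASH_ZERO,
					      NULL,
					      &ip_idents_mask,
					      2048,		/* minimum entries */
					      256 * 1024);	/* maximum entries */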
Fixes: 73f156a6e8c1 ("inetpeer: get rid of ip_id_count")
Reported-by: Amit Klein <[email protected]>
Signed-off-by: Eric Dumazet <[email protected]>
Cc: Willy Tarreau <[email protected]>
Signed-off-by: David S. Miller <[email protected]> |
static ssize_t netdev_queue_attr_show(struct kobject *kobj,
struct attribute *attr, char *buf)
{
const struct netdev_queue_attribute *attribute
= to_netdev_queue_attr(attr);
struct netdev_queue *queue = to_netdev_queue(kobj);
if (!attribute->show)
return -EIO;
	return attribute->show(queue, buf);
} | 0 | [
"CWE-401"
] | linux | 895a5e96dbd6386c8e78e5b78e067dcc67b7f0ab | 119,406,569,786,271,520,000,000,000,000,000,000,000 | 12 | net-sysfs: Fix mem leak in netdev_register_kobject
syzkaller report this:
BUG: memory leak
unreferenced object 0xffff88837a71a500 (size 256):
comm "syz-executor.2", pid 9770, jiffies 4297825125 (age 17.843s)
hex dump (first 32 bytes):
00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N..........
ff ff ff ff ff ff ff ff 20 c0 ef 86 ff ff ff ff ........ .......
backtrace:
[<00000000db12624b>] netdev_register_kobject+0x124/0x2e0 net/core/net-sysfs.c:1751
[<00000000dc49a994>] register_netdevice+0xcc1/0x1270 net/core/dev.c:8516
[<00000000e5f3fea0>] tun_set_iff drivers/net/tun.c:2649 [inline]
[<00000000e5f3fea0>] __tun_chr_ioctl+0x2218/0x3d20 drivers/net/tun.c:2883
[<000000001b8ac127>] vfs_ioctl fs/ioctl.c:46 [inline]
[<000000001b8ac127>] do_vfs_ioctl+0x1a5/0x10e0 fs/ioctl.c:690
[<0000000079b269f8>] ksys_ioctl+0x89/0xa0 fs/ioctl.c:705
[<00000000de649beb>] __do_sys_ioctl fs/ioctl.c:712 [inline]
[<00000000de649beb>] __se_sys_ioctl fs/ioctl.c:710 [inline]
[<00000000de649beb>] __x64_sys_ioctl+0x74/0xb0 fs/ioctl.c:710
[<000000007ebded1e>] do_syscall_64+0xc8/0x580 arch/x86/entry/common.c:290
[<00000000db315d36>] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[<00000000115be9bb>] 0xffffffffffffffff
It should call kset_unregister to free 'dev->queues_kset'
in the error path of register_queue_kobjects; otherwise it will cause a memory leak.
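A hedged sketch of the error path; the label and variable names are assumed:
	error:
		kset_unregister(dev->queues_kset);
		return error;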
Reported-by: Hulk Robot <[email protected]>
Fixes: 1d24eb4815d1 ("xps: Transmit Packet Steering")
Signed-off-by: YueHaibing <[email protected]>
Signed-off-by: David S. Miller <[email protected]> |
void setup_new_exec(struct linux_binprm * bprm)
{
arch_pick_mmap_layout(current->mm);
/* This is the point of no return */
current->sas_ss_sp = current->sas_ss_size = 0;
if (uid_eq(current_euid(), current_uid()) && gid_eq(current_egid(), current_gid()))
set_dumpable(current->mm, SUID_DUMP_USER);
else
set_dumpable(current->mm, suid_dumpable);
arch_setup_new_exec();
perf_event_exec();
__set_task_comm(current, kbasename(bprm->filename), true);
/* Set the new mm task size. We have to do that late because it may
* depend on TIF_32BIT which is only updated in flush_thread() on
* some architectures like powerpc
*/
current->mm->task_size = TASK_SIZE;
/* install the new credentials */
if (!uid_eq(bprm->cred->uid, current_euid()) ||
!gid_eq(bprm->cred->gid, current_egid())) {
current->pdeath_signal = 0;
} else {
if (bprm->interp_flags & BINPRM_FLAGS_ENFORCE_NONDUMP)
set_dumpable(current->mm, suid_dumpable);
}
/* An exec changes our domain. We are no longer part of the thread
group */
current->self_exec_id++;
flush_signal_handlers(current, 0);
} | 0 | [] | linux | 98da7d08850fb8bdeb395d6368ed15753304aa0c | 46,642,757,276,650,250,000,000,000,000,000,000,000 | 36 | fs/exec.c: account for argv/envp pointers
When limiting the argv/envp strings during exec to 1/4 of the stack limit,
the storage of the pointers to the strings was not included. This means
that an exec with huge numbers of tiny strings could eat 1/4 of the stack
limit in strings and then additional space would be later used by the
pointers to the strings.
For example, on 32-bit with a 8MB stack rlimit, an exec with 1677721
single-byte strings would consume less than 2MB of stack, the max (8MB /
4) amount allowed, but the pointers to the strings would consume the
remaining additional stack space (1677721 * 4 == 6710884).
The result (1677721 + 6710884 == 8388605) would exhaust stack space
entirely. Controlling this stack exhaustion could result in
pathological behavior in setuid binaries (CVE-2017-1000365).
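A hedged sketch of the accounting added to get_arg_page(); the merged
patch is slightly more involved, but the core idea is to charge the
pointer array against the same quarter-of-RLIMIT_STACK budget as the
strings:
	unsigned long size = bprm->vma->vm_end - bprm->p;	/* strings so far */
	unsigned long ptr_size = (bprm->argc + bprm->envc) * sizeof(void *);
	if (ptr_size > ULONG_MAX - size)
		goto fail;		/* overflow */
	size += ptr_size;		/* charge the argv/envp pointers too */
	if (size > rlimit(RLIMIT_STACK) / 4)
		goto fail;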
[[email protected]: additional commenting from Kees]
Fixes: b6a2fea39318 ("mm: variable length argument support")
Link: http://lkml.kernel.org/r/20170622001720.GA32173@beast
Signed-off-by: Kees Cook <[email protected]>
Acked-by: Rik van Riel <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Qualys Security Advisory <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> |
bool Item_sum_and::add()
{
ulonglong value= (ulonglong) args[0]->val_int();
if (!args[0]->null_value)
{
if (as_window_function)
return add_as_window(value);
bits&=value;
}
return 0;
} | 0 | [
"CWE-120"
] | server | eca207c46293bc72dd8d0d5622153fab4d3fccf1 | 223,779,360,688,196,020,000,000,000,000,000,000,000 | 11 | MDEV-25317 Assertion `scale <= precision' failed in decimal_bin_size And Assertion `scale >= 0 && precision > 0 && scale <= precision' failed in decimal_bin_size_inline/decimal_bin_size.
Precision should be kept below DECIMAL_MAX_SCALE for computations.
It can be bigger in Item_decimal. I'd fix this too but it changes the
existing behaviour, so it is problematic to fix.
UUIDNormalize(
slap_mask_t usage,
Syntax *syntax,
MatchingRule *mr,
struct berval *val,
struct berval *normalized,
void *ctx )
{
unsigned char octet = '\0';
int i;
int j;
if ( SLAP_MR_IS_DENORMALIZE( usage ) ) {
/* NOTE: must be a normalized UUID */
assert( val->bv_len == 16 );
normalized->bv_val = slap_sl_malloc( LDAP_LUTIL_UUIDSTR_BUFSIZE, ctx );
normalized->bv_len = lutil_uuidstr_from_normalized( val->bv_val,
val->bv_len, normalized->bv_val, LDAP_LUTIL_UUIDSTR_BUFSIZE );
assert( normalized->bv_len == STRLENOF( "BADBADBA-DBAD-0123-4567-BADBADBADBAD" ) );
return LDAP_SUCCESS;
}
normalized->bv_len = 16;
normalized->bv_val = slap_sl_malloc( normalized->bv_len + 1, ctx );
for( i=0, j=0; i<36; i++ ) {
unsigned char nibble;
if( val->bv_val[i] == '-' ) {
continue;
} else if( ASCII_DIGIT( val->bv_val[i] ) ) {
nibble = val->bv_val[i] - '0';
} else if( ASCII_HEXLOWER( val->bv_val[i] ) ) {
nibble = val->bv_val[i] - ('a'-10);
} else if( ASCII_HEXUPPER( val->bv_val[i] ) ) {
nibble = val->bv_val[i] - ('A'-10);
} else {
slap_sl_free( normalized->bv_val, ctx );
BER_BVZERO( normalized );
return LDAP_INVALID_SYNTAX;
}
if( j & 1 ) {
octet |= nibble;
normalized->bv_val[j>>1] = octet;
} else {
octet = nibble << 4;
}
j++;
}
normalized->bv_val[normalized->bv_len] = 0;
return LDAP_SUCCESS;
} | 0 | [
"CWE-617"
] | openldap | 67670f4544e28fb09eb7319c39f404e1d3229e65 | 23,492,407,327,006,870,000,000,000,000,000,000,000 | 59 | ITS#9383 remove assert in certificateListValidate |
static rfbBool MallocFrameBuffer(rfbClient* client) {
uint64_t allocSize;
if(client->frameBuffer)
free(client->frameBuffer);
/* SECURITY: promote 'width' into uint64_t so that the multiplication does not overflow
'width' and 'height' are 16-bit integers per RFB protocol design
SIZE_MAX is the maximum value that can fit into size_t
*/
allocSize = (uint64_t)client->width * client->height * client->format.bitsPerPixel/8;
if (allocSize >= SIZE_MAX) {
rfbClientErr("CRITICAL: cannot allocate frameBuffer, requested size is too large\n");
return FALSE;
}
client->frameBuffer=malloc( (size_t)allocSize );
if (client->frameBuffer == NULL)
rfbClientErr("CRITICAL: frameBuffer allocation failed, requested size too large or not enough memory?\n");
return client->frameBuffer?TRUE:FALSE;
} | 0 | [
"CWE-703",
"CWE-835"
] | libvncserver | 57433015f856cc12753378254ce4f1c78f5d9c7b | 102,273,844,770,207,630,000,000,000,000,000,000,000 | 24 | libvncclient: handle half-open TCP connections
When a connection is not reset properly at the TCP level (e.g. sudden
power loss or process crash) the TCP connection becomes half-open and
read() always returns -1 with errno = EAGAIN while select() always
returns 0. This leads to an infinite loop and can be fixed by closing
the connection after a certain number of retries (based on a timeout)
has been exceeded. |
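A hedged sketch of the retry cap; the timeout helper, field names and
the cleanup step are illustrative, not the exact libvncclient API:
	while ((n = read(client->sock, buf, len)) < 0 && errno == EAGAIN) {
		if (nowMs() - startMs > client->readTimeout * 1000) {
			rfbClientErr("read timed out on half-open connection\n");
			return FALSE;	/* caller closes the connection */
		}
		usleep(10000);	/* back off instead of spinning */
	}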
_libssh2_channel_flush(LIBSSH2_CHANNEL *channel, int streamid)
{
if(channel->flush_state == libssh2_NB_state_idle) {
LIBSSH2_PACKET *packet =
_libssh2_list_first(&channel->session->packets);
channel->flush_refund_bytes = 0;
channel->flush_flush_bytes = 0;
while(packet) {
LIBSSH2_PACKET *next = _libssh2_list_next(&packet->node);
unsigned char packet_type = packet->data[0];
if(((packet_type == SSH_MSG_CHANNEL_DATA)
|| (packet_type == SSH_MSG_CHANNEL_EXTENDED_DATA))
&& (_libssh2_ntohu32(packet->data + 1) == channel->local.id)) {
/* It's our channel at least */
long packet_stream_id =
(packet_type == SSH_MSG_CHANNEL_DATA) ? 0 :
_libssh2_ntohu32(packet->data + 5);
if((streamid == LIBSSH2_CHANNEL_FLUSH_ALL)
|| ((packet_type == SSH_MSG_CHANNEL_EXTENDED_DATA)
&& ((streamid == LIBSSH2_CHANNEL_FLUSH_EXTENDED_DATA)
|| (streamid == packet_stream_id)))
|| ((packet_type == SSH_MSG_CHANNEL_DATA)
&& (streamid == 0))) {
int bytes_to_flush = packet->data_len - packet->data_head;
_libssh2_debug(channel->session, LIBSSH2_TRACE_CONN,
"Flushing %d bytes of data from stream "
"%lu on channel %lu/%lu",
bytes_to_flush, packet_stream_id,
channel->local.id, channel->remote.id);
/* It's one of the streams we wanted to flush */
channel->flush_refund_bytes += packet->data_len - 13;
channel->flush_flush_bytes += bytes_to_flush;
LIBSSH2_FREE(channel->session, packet->data);
/* remove this packet from the parent's list */
_libssh2_list_remove(&packet->node);
LIBSSH2_FREE(channel->session, packet);
}
}
packet = next;
}
channel->flush_state = libssh2_NB_state_created;
}
channel->read_avail -= channel->flush_flush_bytes;
channel->remote.window_size -= channel->flush_flush_bytes;
if(channel->flush_refund_bytes) {
int rc;
rc = _libssh2_channel_receive_window_adjust(channel,
channel->flush_refund_bytes,
1, NULL);
if(rc == LIBSSH2_ERROR_EAGAIN)
return rc;
}
channel->flush_state = libssh2_NB_state_idle;
return channel->flush_flush_bytes;
} | 1 | [
"CWE-787"
] | libssh2 | dc109a7f518757741590bb993c0c8412928ccec2 | 271,367,579,830,276,080,000,000,000,000,000,000,000 | 67 | Security fixes (#315)
* Bounds checks
Fixes for CVEs
https://www.libssh2.org/CVE-2019-3863.html
https://www.libssh2.org/CVE-2019-3856.html
* Packet length bounds check
CVE
https://www.libssh2.org/CVE-2019-3855.html
* Response length check
CVE
https://www.libssh2.org/CVE-2019-3859.html
* Bounds check
CVE
https://www.libssh2.org/CVE-2019-3857.html
* Bounds checking
CVE
https://www.libssh2.org/CVE-2019-3859.html
and additional data validation
* Check bounds before reading into buffers
* Bounds checking
CVE
https://www.libssh2.org/CVE-2019-3859.html
* declare SIZE_MAX and UINT_MAX if needed |
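A hedged sketch of the bounds-check pattern these fixes apply before
consuming an attacker-influenced length field (LIBSSH2_ERROR_OUT_OF_BOUNDARY
was added as part of this change set; variable names are assumed):
	if(len < required_min || offset > datalen || len > datalen - offset) {
		return _libssh2_error(session, LIBSSH2_ERROR_OUT_OF_BOUNDARY,
				      "Packet length out of bounds");
	}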
int nfs_readdir_filler(nfs_readdir_descriptor_t *desc, struct page *page)
{
struct file *file = desc->file;
struct inode *inode = file->f_path.dentry->d_inode;
struct rpc_cred *cred = nfs_file_cred(file);
unsigned long timestamp;
int error;
dfprintk(DIRCACHE, "NFS: %s: reading cookie %Lu into page %lu\n",
__FUNCTION__, (long long)desc->entry->cookie,
page->index);
again:
timestamp = jiffies;
error = NFS_PROTO(inode)->readdir(file->f_path.dentry, cred, desc->entry->cookie, page,
NFS_SERVER(inode)->dtsize, desc->plus);
if (error < 0) {
/* We requested READDIRPLUS, but the server doesn't grok it */
if (error == -ENOTSUPP && desc->plus) {
NFS_SERVER(inode)->caps &= ~NFS_CAP_READDIRPLUS;
clear_bit(NFS_INO_ADVISE_RDPLUS, &NFS_FLAGS(inode));
desc->plus = 0;
goto again;
}
goto error;
}
desc->timestamp = timestamp;
desc->timestamp_valid = 1;
SetPageUptodate(page);
spin_lock(&inode->i_lock);
NFS_I(inode)->cache_validity |= NFS_INO_INVALID_ATIME;
spin_unlock(&inode->i_lock);
/* Ensure consistent page alignment of the data.
* Note: assumes we have exclusive access to this mapping either
* through inode->i_mutex or some other mechanism.
*/
if (page->index == 0 && invalidate_inode_pages2_range(inode->i_mapping, PAGE_CACHE_SIZE, -1) < 0) {
/* Should never happen */
nfs_zap_mapping(inode, inode->i_mapping);
}
unlock_page(page);
return 0;
error:
SetPageError(page);
unlock_page(page);
nfs_zap_caches(inode);
desc->error = error;
return -EIO;
} | 0 | [
"CWE-20"
] | linux-2.6 | 54af3bb543c071769141387a42deaaab5074da55 | 306,465,394,569,828,730,000,000,000,000,000,000,000 | 49 | NFS: Fix an Oops in encode_lookup()
It doesn't look as if the NFS file name limit is being initialised correctly
in the struct nfs_server. Make sure that we limit whatever is being set in
nfs_probe_fsinfo() and nfs_init_server().
Also ensure that readdirplus and nfs4_path_walk respect our file name
limits.
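A hedged sketch of the clamping, assuming the server-reported maximum is
in fsinfo.max_namelen; NFS2/NFS3 use their own MAXNAMLEN constants:
	server->namelen = fsinfo.max_namelen;
	if (server->namelen == 0 || server->namelen > NFS4_MAXNAMLEN)
		server->namelen = NFS4_MAXNAMLEN;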
Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> |
static int perf_event_stop(struct perf_event *event, int restart)
{
struct stop_event_data sd = {
.event = event,
.restart = restart,
};
int ret = 0;
do {
if (READ_ONCE(event->state) != PERF_EVENT_STATE_ACTIVE)
return 0;
/* matches smp_wmb() in event_sched_in() */
smp_rmb();
/*
* We only want to restart ACTIVE events, so if the event goes
* inactive here (event->oncpu==-1), there's nothing more to do;
* fall through with ret==-ENXIO.
*/
ret = cpu_function_call(READ_ONCE(event->oncpu),
__perf_event_stop, &sd);
} while (ret == -EAGAIN);
return ret;
} | 0 | [
"CWE-362",
"CWE-125"
] | linux | 321027c1fe77f892f4ea07846aeae08cefbbb290 | 51,819,340,412,472,050,000,000,000,000,000,000,000 | 26 | perf/core: Fix concurrent sys_perf_event_open() vs. 'move_group' race
Di Shen reported a race between two concurrent sys_perf_event_open()
calls where both try and move the same pre-existing software group
into a hardware context.
The problem is exactly that described in commit:
f63a8daa5812 ("perf: Fix event->ctx locking")
... where, while we wait for a ctx->mutex acquisition, the event->ctx
relation can have changed under us.
That very same commit failed to recognise sys_perf_event_context() as an
external access vector to the events and thereby didn't apply the
established locking rules correctly.
So while one sys_perf_event_open() call is stuck waiting on
mutex_lock_double(), the other (which owns said locks) moves the group
about. So by the time the former sys_perf_event_open() acquires the
locks, the context we've acquired is stale (and possibly dead).
Apply the established locking rules as per perf_event_ctx_lock_nested()
to the mutex_lock_double() for the 'move_group' case. This obviously means
we need to validate state after we acquire the locks.
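A hedged sketch of the revalidation; the merged fix performs additional
checks, but the shape is: take both mutexes, then confirm the event's
context is still the one we observed before waiting:
	again:
		gctx = READ_ONCE(group_leader->ctx);	/* may be stale */
		mutex_lock_double(&gctx->mutex, &ctx->mutex);
		if (group_leader->ctx != gctx) {
			/* the group moved while we waited; drop locks, retry */
			mutex_unlock(&ctx->mutex);
			mutex_unlock(&gctx->mutex);
			goto again;
		}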
Reported-by: Di Shen (Keen Lab)
Tested-by: John Dias <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Min Chong <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vince Weaver <[email protected]>
Fixes: f63a8daa5812 ("perf: Fix event->ctx locking")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]> |
CString CZNC::GetConfPath(bool bAllowMkDir) const {
CString sConfPath = m_sZNCPath + "/configs";
if (bAllowMkDir && !CFile::Exists(sConfPath)) {
CDir::MakeDir(sConfPath);
}
return sConfPath;
} | 0 | [
"CWE-20"
] | znc | 64613bc8b6b4adf1e32231f9844d99cd512b8973 | 69,267,561,492,691,010,000,000,000,000,000,000,000 | 8 | Don't crash if user specified invalid encoding.
This is CVE-2019-9917 |
other_wireless_activate_cb (GtkWidget *menu_item,
NMApplet *applet)
{
GtkWidget *dialog;
dialog = nma_wireless_dialog_new_for_other (applet);
if (!dialog)
return;
g_signal_connect (dialog, "response",
G_CALLBACK (wireless_dialog_response_cb),
applet);
show_ignore_focus_stealing_prevention (dialog);
} | 0 | [
"CWE-310"
] | network-manager-applet | 4020594dfbf566f1852f0acb36ad631a9e73a82b | 287,673,235,231,355,320,000,000,000,000,000,000,000 | 15 | core: fix CA cert mishandling after cert file deletion (deb #560067) (rh #546793)
If a connection was created with a CA certificate, but the user later
moved or deleted that CA certificate, the applet would simply provide the
connection to NetworkManager without any CA certificate. This could cause
NM to connect to the original network (or a network spoofing the original
network) without verifying the identity of the network as the user
expects.
In the future we can/should do better here by (1) alerting the user that
some connection is now no longer complete by flagging it in the connection
editor or notifying the user somehow, and (2) by using a freaking' cert
store already (not that Linux has one yet). |
SM_io_parser<Decorator_>::
SM_io_parser(std::ostream& iout, const Base& D)
: Base(D), in(std::cin), out(iout),
VI(this->svertices_begin(),this->svertices_end(),'v'),
EI(this->shalfedges_begin(),this->shalfedges_end(),'e'),
FI(this->sfaces_begin(),this->sfaces_end(),'f'),
vn(this->number_of_svertices()),
en(this->number_of_shalfedges()),
ln(this->number_of_shalfloops()),
fn(this->number_of_sfaces())
{ verbose = (get_mode(out) != CGAL::IO::ASCII &&
get_mode(out) != CGAL::IO::BINARY);
} | 0 | [
"CWE-269"
] | cgal | 618b409b0fbcef7cb536a4134ae3a424ef5aae45 | 116,608,554,412,191,050,000,000,000,000,000,000,000 | 13 | Fix Nef_2 and Nef_S2 IO |
LEX_USER *create_definer(THD *thd, LEX_CSTRING *user_name,
LEX_CSTRING *host_name)
{
LEX_USER *definer;
/* Create and initialize. */
if (unlikely(!(definer= (LEX_USER*) thd->alloc(sizeof(LEX_USER)))))
return 0;
definer->user= *user_name;
definer->host= *host_name;
definer->auth= NULL;
return definer;
} | 0 | [
"CWE-703"
] | server | 39feab3cd31b5414aa9b428eaba915c251ac34a2 | 147,070,318,723,130,490,000,000,000,000,000,000,000 | 16 | MDEV-26412 Server crash in Item_field::fix_outer_field for INSERT SELECT
If an INSERT/REPLACE SELECT statement contained an ON expression in the top
level select and this expression used a subquery with a column reference
that could not be resolved then an attempt to resolve this reference as
an outer reference caused a crash of the server. This happened because the
outer context field in the Name_resolution_context structure was not set
to NULL for such references. Rather it pointed to the first element in
the select_stack.
Note that starting from 10.4 we cannot use the SELECT_LEX::outer_select()
method when parsing a SELECT construct.
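A hedged sketch of the intended state when setting up the name resolution
context for such an ON expression (member name per Name_resolution_context;
surrounding code assumed):
	context->outer_context= NULL;	// top-level SELECT has no outer scope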
Approved by Oleksandr Byelkin <[email protected]> |
TEST(HeaderMapImplTest, InlineHeaderByteSize) {
{
TestRequestHeaderMapImpl headers;
std::string foo = "foo";
headers.setHost(foo);
EXPECT_EQ(headers.byteSize(), 13);
}
{
// Overwrite an inline headers with set.
TestRequestHeaderMapImpl headers;
std::string foo = "foo";
headers.setHost(foo);
std::string big_foo = "big_foo";
headers.setHost(big_foo);
EXPECT_EQ(headers.byteSize(), 17);
}
{
// Overwrite an inline headers with setReference and clear.
TestRequestHeaderMapImpl headers;
std::string foo = "foo";
headers.setHost(foo);
EXPECT_EQ(headers.byteSize(), 13);
std::string big_foo = "big_foo";
headers.setReferenceHost(big_foo);
EXPECT_EQ(headers.byteSize(), 17);
EXPECT_EQ(1UL, headers.removeHost());
EXPECT_EQ(headers.byteSize(), 0);
}
{
// Overwrite an inline headers with set integer value.
TestResponseHeaderMapImpl headers;
uint64_t status = 200;
headers.setStatus(status);
EXPECT_EQ(headers.byteSize(), 10);
uint64_t newStatus = 500;
headers.setStatus(newStatus);
EXPECT_EQ(headers.byteSize(), 10);
EXPECT_EQ(1UL, headers.removeStatus());
EXPECT_EQ(headers.byteSize(), 0);
}
{
// Set an inline header, remove, and rewrite.
TestResponseHeaderMapImpl headers;
uint64_t status = 200;
headers.setStatus(status);
EXPECT_EQ(headers.byteSize(), 10);
EXPECT_EQ(1UL, headers.removeStatus());
EXPECT_EQ(headers.byteSize(), 0);
uint64_t newStatus = 500;
headers.setStatus(newStatus);
EXPECT_EQ(headers.byteSize(), 10);
}
} | 0 | [] | envoy | 2c60632d41555ec8b3d9ef5246242be637a2db0f | 87,119,269,160,961,580,000,000,000,000,000,000,000 | 53 | http: header map security fixes for duplicate headers (#197)
Previously header matching did not match on all headers for
non-inline headers. This patch changes the default behavior to
always logically match on all headers. Multiple individual
headers will be logically concatenated with ',' similar to what
is done with inline headers. This makes the behavior effectively
consistent. This behavior can be temporarily reverted by setting
the runtime value "envoy.reloadable_features.header_match_on_all_headers"
to "false".
Targeted fixes have been additionally performed on the following
extensions which make them consider all duplicate headers by default as
a comma concatenated list:
1) Any extension using CEL matching on headers.
2) The header to metadata filter.
3) The JWT filter.
4) The Lua filter.
Like primary header matching used in routing, RBAC, etc. this behavior
can be disabled by setting the runtime value
"envoy.reloadable_features.header_match_on_all_headers" to false.
Finally, the setCopy() header map API previously only set the first
header in the case of duplicate non-inline headers. setCopy() now
behaves similiarly to the other set*() APIs and replaces all found
headers with a single value. This may have had security implications
in the extauth filter which uses this API. This behavior can be disabled
by setting the runtime value
"envoy.reloadable_features.http_set_copy_replace_all_headers" to false.
Fixes https://github.com/envoyproxy/envoy-setec/issues/188
Signed-off-by: Matt Klein <[email protected]> |
static __inline__ int __sk_nulls_del_node_init_rcu(struct sock *sk)
{
if (sk_hashed(sk)) {
hlist_nulls_del_init_rcu(&sk->sk_nulls_node);
return 1;
}
return 0;
} | 0 | [
"CWE-400"
] | linux-2.6 | c377411f2494a931ff7facdbb3a6839b1266bcf6 | 121,699,187,988,652,950,000,000,000,000,000,000,000 | 8 | net: sk_add_backlog() take rmem_alloc into account
Current socket backlog limit is not enough to really stop DDOS attacks,
because user thread spend many time to process a full backlog each
round, and user might crazy spin on socket lock.
We should add backlog size and receive_queue size (aka rmem_alloc) to
pace writers, and let user run without being slow down too much.
Introduce a sk_rcvqueues_full() helper, to avoid taking socket lock in
stress situations.
Under huge stress from a multiqueue/RPS enabled NIC, a single flow udp
receiver can now process ~200.000 pps (instead of ~100 pps before the
patch) on an 8 core machine.
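A hedged sketch of the helper described above, close to the version added
to include/net/sock.h:
	static inline bool sk_rcvqueues_full(const struct sock *sk,
					     const struct sk_buff *skb)
	{
		unsigned int qsize = sk->sk_backlog.len +
				     atomic_read(&sk->sk_rmem_alloc);
		return qsize > sk->sk_rcvbuf;
	}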
Signed-off-by: Eric Dumazet <[email protected]>
Signed-off-by: David S. Miller <[email protected]> |
TEE_Result syscall_cryp_obj_get_info(unsigned long obj, TEE_ObjectInfo *info)
{
TEE_Result res;
struct tee_ta_session *sess;
struct tee_obj *o;
res = tee_ta_get_current_session(&sess);
if (res != TEE_SUCCESS)
goto exit;
res = tee_obj_get(to_user_ta_ctx(sess->ctx),
tee_svc_uref_to_vaddr(obj), &o);
if (res != TEE_SUCCESS)
goto exit;
res = tee_svc_copy_to_user(info, &o->info, sizeof(o->info));
exit:
return res;
} | 0 | [
"CWE-119",
"CWE-787"
] | optee_os | a637243270fc1faae16de059091795c32d86e65e | 2,766,648,136,780,979,000,000,000,000,000,000,000 | 20 | svc: check for allocation overflow in crypto calls
Without checking for overflow there is a risk of allocating a buffer
with a size smaller than anticipated and, as a consequence, it might
lead to a heap-based overflow with attacker-controlled data written
outside the boundaries of the buffer.
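A hedged sketch of the guard applied before each allocation; MUL_OVERFLOW
is optee_os's checked-multiply helper, and the variable names are assumed:
	size_t alloc_size = 0;
	if (MUL_OVERFLOW(num_elements, element_size, &alloc_size))
		return TEE_ERROR_OVERFLOW;
	buf = malloc(alloc_size);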
Fixes: OP-TEE-2018-0010: "Integer overflow in crypto system calls (x2)"
Signed-off-by: Joakim Bech <[email protected]>
Tested-by: Joakim Bech <[email protected]> (QEMU v7, v8)
Reviewed-by: Jens Wiklander <[email protected]>
Reported-by: Riscure <[email protected]>
Reported-by: Alyssa Milburn <[email protected]>
Acked-by: Etienne Carriere <[email protected]> |
static int tcp_peek_sndq(struct sock *sk, struct msghdr *msg, int len)
{
struct sk_buff *skb;
int copied = 0, err = 0;
/* XXX -- need to support SO_PEEK_OFF */
skb_queue_walk(&sk->sk_write_queue, skb) {
err = skb_copy_datagram_msg(skb, 0, msg, skb->len);
if (err)
break;
copied += skb->len;
}
return err ?: copied;
} | 0 | [
"CWE-399",
"CWE-835"
] | linux | ccf7abb93af09ad0868ae9033d1ca8108bdaec82 | 195,646,805,350,110,400,000,000,000,000,000,000,000 | 17 | tcp: avoid infinite loop in tcp_splice_read()
Splicing from a TCP socket is vulnerable when a packet with the URG flag
is received and stored in the receive queue.
__tcp_splice_read() returns 0, and sk_wait_data() returns immediately
since the problematic skb is already in the queue.
This is a nice way to burn cpu (aka infinite loop) and trigger
soft lockups.
Again, this gem was found by syzkaller tool.
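A hedged sketch of the loop fix in tcp_splice_read(): if the non-blocking
read copied nothing while an skb (e.g. one carrying URG data) is still
queued, bail out instead of spinning:
	if (!ret && !skb_queue_empty(&sk->sk_receive_queue))
		break;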
Fixes: 9c55e01c0cc8 ("[TCP]: Splice receive support.")
Signed-off-by: Eric Dumazet <[email protected]>
Reported-by: Dmitry Vyukov <[email protected]>
Cc: Willy Tarreau <[email protected]>
Signed-off-by: David S. Miller <[email protected]> |
static int rtnl_dump_all(struct sk_buff *skb, struct netlink_callback *cb)
{
int idx;
int s_idx = cb->family;
if (s_idx == 0)
s_idx = 1;
for (idx = 1; idx <= RTNL_FAMILY_MAX; idx++) {
int type = cb->nlh->nlmsg_type-RTM_BASE;
struct rtnl_link *handlers;
rtnl_dumpit_func dumpit;
if (idx < s_idx || idx == PF_PACKET)
continue;
handlers = rtnl_dereference(rtnl_msg_handlers[idx]);
if (!handlers)
continue;
dumpit = READ_ONCE(handlers[type].dumpit);
if (!dumpit)
continue;
if (idx > s_idx) {
memset(&cb->args[0], 0, sizeof(cb->args));
cb->prev_seq = 0;
cb->seq = 0;
}
if (dumpit(skb, cb))
break;
}
cb->family = idx;
return skb->len;
} | 0 | [
"CWE-476"
] | linux | f428fe4a04cc339166c8bbd489789760de3a0cee | 293,619,444,131,425,100,000,000,000,000,000,000,000 | 36 | rtnetlink: give a user socket to get_target_net()
This function is used from two places: rtnl_dump_ifinfo and
rtnl_getlink. In rtnl_getlink(), we give a request skb into
get_target_net(), but in rtnl_dump_ifinfo, we give a response skb
into get_target_net().
The problem here is that NETLINK_CB() isn't initialized for the response
skb. In both cases we can get a user socket and give it instead of skb
into get_target_net().
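A hedged sketch of the call after the fix; the target namespace is
derived from the user socket, which is initialized in both paths:
	tgt_net = get_target_net(NETLINK_CB(cb->skb).sk, netnsid);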
This bug was found by syzkaller with this call-trace:
kasan: GPF could be caused by NULL-ptr deref or user memory access
general protection fault: 0000 [#1] SMP KASAN
Modules linked in:
CPU: 1 PID: 3149 Comm: syzkaller140561 Not tainted 4.15.0-rc4-mm1+ #47
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
RIP: 0010:__netlink_ns_capable+0x8b/0x120 net/netlink/af_netlink.c:868
RSP: 0018:ffff8801c880f348 EFLAGS: 00010206
RAX: dffffc0000000000 RBX: 0000000000000000 RCX: ffffffff8443f900
RDX: 000000000000007b RSI: ffffffff86510f40 RDI: 00000000000003d8
RBP: ffff8801c880f360 R08: 0000000000000000 R09: 1ffff10039101e4f
R10: 0000000000000000 R11: 0000000000000001 R12: ffffffff86510f40
R13: 000000000000000c R14: 0000000000000004 R15: 0000000000000011
FS: 0000000001a1a880(0000) GS:ffff8801db300000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020151000 CR3: 00000001c9511005 CR4: 00000000001606e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
netlink_ns_capable+0x26/0x30 net/netlink/af_netlink.c:886
get_target_net+0x9d/0x120 net/core/rtnetlink.c:1765
rtnl_dump_ifinfo+0x2e5/0xee0 net/core/rtnetlink.c:1806
netlink_dump+0x48c/0xce0 net/netlink/af_netlink.c:2222
__netlink_dump_start+0x4f0/0x6d0 net/netlink/af_netlink.c:2319
netlink_dump_start include/linux/netlink.h:214 [inline]
rtnetlink_rcv_msg+0x7f0/0xb10 net/core/rtnetlink.c:4485
netlink_rcv_skb+0x21e/0x460 net/netlink/af_netlink.c:2441
rtnetlink_rcv+0x1c/0x20 net/core/rtnetlink.c:4540
netlink_unicast_kernel net/netlink/af_netlink.c:1308 [inline]
netlink_unicast+0x4be/0x6a0 net/netlink/af_netlink.c:1334
netlink_sendmsg+0xa4a/0xe60 net/netlink/af_netlink.c:1897
Cc: Jiri Benc <[email protected]>
Fixes: 79e1ad148c84 ("rtnetlink: use netnsid to query interface")
Signed-off-by: Andrei Vagin <[email protected]>
Signed-off-by: David S. Miller <[email protected]> |
static int decode_pixdata(deark *c, lctx *d, struct fmtutil_macbitmap_info *bi, i64 pos)
{
int retval = 0;
de_dbg_indent(c, 1);
if(bi->npwidth==0 || bi->height==0) {
de_warn(c, "Ignoring zero-size bitmap (%d"DE_CHAR_TIMES"%d)",
(int)bi->npwidth, (int)bi->height);
goto done;
}
if(!de_good_image_dimensions(c, bi->npwidth, bi->height)) goto done;
if(bi->pixelsize!=1 && bi->pixelsize!=2 && bi->pixelsize!=4 && bi->pixelsize!=8 &&
bi->pixelsize!=16 && bi->pixelsize!=24 && bi->pixelsize!=32)
{
de_err(c, "%d bits/pixel images are not supported", (int)bi->pixelsize);
goto done;
}
if((bi->uses_pal && bi->pixeltype!=0) || (!bi->uses_pal && bi->pixeltype!=16)) {
de_err(c, "Pixel type %d is not supported", (int)bi->pixeltype);
goto done;
}
if(bi->cmpcount!=1 && bi->cmpcount!=3 && bi->cmpcount!=4) {
de_err(c, "Component count %d is not supported", (int)bi->cmpcount);
goto done;
}
if(bi->cmpsize!=1 && bi->cmpsize!=2 && bi->cmpsize!=4 && bi->cmpsize!=5 &&
bi->cmpsize!=8)
{
de_err(c, "%d-bit components are not supported", (int)bi->cmpsize);
goto done;
}
if(bi->packing_type!=0 && bi->packing_type!=1 && bi->packing_type!=3 && bi->packing_type!=4) {
de_err(c, "Packing type %d is not supported", (int)bi->packing_type);
goto done;
}
if((bi->uses_pal &&
(bi->packing_type==0 || bi->packing_type==1) &&
(bi->pixelsize==1 || bi->pixelsize==2 || bi->pixelsize==4 || bi->pixelsize==8) &&
bi->cmpcount==1 && bi->cmpsize==bi->pixelsize) ||
(!bi->uses_pal && bi->packing_type==3 && bi->pixelsize==16 && bi->cmpcount==3 && bi->cmpsize==5) ||
(!bi->uses_pal && bi->packing_type==4 && bi->pixelsize==32 && bi->cmpcount==3 && bi->cmpsize==8) ||
(!bi->uses_pal && bi->packing_type==4 && bi->pixelsize==32 && bi->cmpcount==4 && bi->cmpsize==8))
{
;
}
else {
de_err(c, "This type of image is not supported");
goto done;
}
if(bi->cmpcount==4) {
de_warn(c, "This image might have transparency, which is not supported.");
}
decode_bitmap(c, d, bi, pos);
done:
de_dbg_indent(c, -1);
return retval;
} | 0 | [
"CWE-476"
] | deark | 287f5ac31dfdc074669182f51ece637706070eeb | 336,469,903,190,616,630,000,000,000,000,000,000,000 | 62 | pict: Fixed a bug with ICC profile extraction
Could cause a NULL pointer dereference.
Found by F. Çelik. |
int numa_zonelist_order_handler(struct ctl_table *table, int write,
void __user *buffer, size_t *length,
loff_t *ppos)
{
char *str;
int ret;
if (!write)
return proc_dostring(table, write, buffer, length, ppos);
str = memdup_user_nul(buffer, 16);
if (IS_ERR(str))
return PTR_ERR(str);
ret = __parse_numa_zonelist_order(str);
kfree(str);
return ret;
} | 0 | [] | linux | 400e22499dd92613821374c8c6c88c7225359980 | 93,665,319,128,016,260,000,000,000,000,000,000,000 | 17 | mm: don't warn about allocations which stall for too long
Commit 63f53dea0c98 ("mm: warn about allocations which stall for too
long") was a great step for reducing possibility of silent hang up
problem caused by memory allocation stalls. But this commit reverts it,
for it is possible to trigger OOM lockup and/or soft lockups when many
threads concurrently called warn_alloc() (in order to warn about memory
allocation stalls) due to current implementation of printk(), and it is
difficult to obtain useful information due to limitation of synchronous
warning approach.
Current printk() implementation flushes all pending logs using the
context of a thread which called console_unlock(). printk() should be
able to flush all pending logs eventually unless somebody continues
appending to printk() buffer.
Since warn_alloc() started appending to printk() buffer while waiting
for oom_kill_process() to make forward progress when oom_kill_process()
is processing pending logs, it became possible for warn_alloc() to force
oom_kill_process() to loop inside printk(). As a result, warn_alloc()
significantly increased the possibility of preventing oom_kill_process()
from making forward progress.
---------- Pseudo code start ----------
Before warn_alloc() was introduced:
retry:
if (mutex_trylock(&oom_lock)) {
while (atomic_read(&printk_pending_logs) > 0) {
atomic_dec(&printk_pending_logs);
print_one_log();
}
// Send SIGKILL here.
mutex_unlock(&oom_lock)
}
goto retry;
After warn_alloc() was introduced:
retry:
if (mutex_trylock(&oom_lock)) {
while (atomic_read(&printk_pending_logs) > 0) {
atomic_dec(&printk_pending_logs);
print_one_log();
}
// Send SIGKILL here.
mutex_unlock(&oom_lock)
} else if (waited_for_10seconds()) {
atomic_inc(&printk_pending_logs);
}
goto retry;
---------- Pseudo code end ----------
Although waited_for_10seconds() becomes true once per 10 seconds,
an unbounded number of threads can call waited_for_10seconds() at the same
time. Also, since threads doing waited_for_10seconds() keep spinning in a
near-busy loop, the thread doing print_one_log() gets little CPU time.
Therefore, this situation can be simplified like
---------- Pseudo code start ----------
retry:
if (mutex_trylock(&oom_lock)) {
while (atomic_read(&printk_pending_logs) > 0) {
atomic_dec(&printk_pending_logs);
print_one_log();
}
// Send SIGKILL here.
mutex_unlock(&oom_lock)
} else {
atomic_inc(&printk_pending_logs);
}
goto retry;
---------- Pseudo code end ----------
when printk() is called faster than print_one_log() can process a log.
One of possible mitigation would be to introduce a new lock in order to
make sure that no other series of printk() (either oom_kill_process() or
warn_alloc()) can append to printk() buffer when one series of printk()
(either oom_kill_process() or warn_alloc()) is already in progress.
Such serialization will also help obtaining kernel messages in readable
form.
---------- Pseudo code start ----------
retry:
if (mutex_trylock(&oom_lock)) {
mutex_lock(&oom_printk_lock);
while (atomic_read(&printk_pending_logs) > 0) {
atomic_dec(&printk_pending_logs);
print_one_log();
}
// Send SIGKILL here.
mutex_unlock(&oom_printk_lock);
mutex_unlock(&oom_lock)
} else {
if (mutex_trylock(&oom_printk_lock)) {
atomic_inc(&printk_pending_logs);
mutex_unlock(&oom_printk_lock);
}
}
goto retry;
---------- Pseudo code end ----------
But this commit does not go that direction, for we don't want to
introduce a new lock dependency, and we unlikely be able to obtain
useful information even if we serialized oom_kill_process() and
warn_alloc().
The synchronous approach is prone to unexpected results (e.g. too late [1],
too frequent [2], overlooked [3]). As far as I know, warn_alloc() never
helped with providing information other than "something is going wrong".
I want to consider an asynchronous approach which can obtain information
during stalls with possibly relevant threads (e.g. the owner of
oom_lock and kswapd-like threads) and serve as a trigger for actions
(e.g. turn on/off tracepoints, ask libvirt daemon to take a memory dump
of stalling KVM guest for diagnostic purpose).
This commit temporarily loses ability to report e.g. OOM lockup due to
unable to invoke the OOM killer due to !__GFP_FS allocation request.
But asynchronous approach will be able to detect such situation and emit
warning. Thus, let's remove warn_alloc().
[1] https://bugzilla.kernel.org/show_bug.cgi?id=192981
[2] http://lkml.kernel.org/r/CAM_iQpWuPVGc2ky8M-9yukECtS+zKjiDasNymX7rMcBjBFyM_A@mail.gmail.com
[3] commit db73ee0d46379922 ("mm, vmscan: do not loop on too_many_isolated for ever"))
Link: http://lkml.kernel.org/r/1509017339-4802-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp
Signed-off-by: Tetsuo Handa <[email protected]>
Reported-by: Cong Wang <[email protected]>
Reported-by: yuwang.yuwang <[email protected]>
Reported-by: Johannes Weiner <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Sergey Senozhatsky <[email protected]>
Cc: Petr Mladek <[email protected]>
Cc: Steven Rostedt <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]> |
static const char* format() { return "%.17g"; } | 0 | [
"CWE-125"
] | CImg | 10af1e8c1ad2a58a0a3342a856bae63e8f257abb | 338,005,108,986,134,200,000,000,000,000,000,000,000 | 1 | Fix other issues in 'CImg<T>::load_bmp()'. |
CtPtr ProtocolV1::read_message_data_prepare() {
ldout(cct, 20) << __func__ << dendl;
unsigned data_len = current_header.data_len;
unsigned data_off = current_header.data_off;
if (data_len) {
// get a buffer
#if 0
// rx_buffers is broken by design... see
// http://tracker.ceph.com/issues/22480
map<ceph_tid_t, pair<bufferlist, int> >::iterator p =
connection->rx_buffers.find(current_header.tid);
if (p != connection->rx_buffers.end()) {
ldout(cct, 10) << __func__ << " seleting rx buffer v " << p->second.second
<< " at offset " << data_off << " len "
<< p->second.first.length() << dendl;
data_buf = p->second.first;
// make sure it's big enough
if (data_buf.length() < data_len)
data_buf.push_back(buffer::create(data_len - data_buf.length()));
data_blp = data_buf.begin();
} else {
ldout(cct, 20) << __func__ << " allocating new rx buffer at offset "
<< data_off << dendl;
alloc_aligned_buffer(data_buf, data_len, data_off);
data_blp = data_buf.begin();
}
#else
ldout(cct, 20) << __func__ << " allocating new rx buffer at offset "
<< data_off << dendl;
alloc_aligned_buffer(data_buf, data_len, data_off);
data_blp = data_buf.begin();
#endif
}
msg_left = data_len;
return CONTINUE(read_message_data);
} | 0 | [
"CWE-294"
] | ceph | 6c14c2fb5650426285428dfe6ca1597e5ea1d07d | 242,382,627,456,153,940,000,000,000,000,000,000,000 | 40 | mon/MonClient: bring back CEPHX_V2 authorizer challenges
Commit c58c5754dfd2 ("msg/async/ProtocolV1: use AuthServer and
AuthClient") introduced a backwards compatibility issue into msgr1.
To fix it, commit 321548010578 ("mon/MonClient: skip CEPHX_V2
challenge if client doesn't support it") set out to skip authorizer
challenges for peers that don't support CEPHX_V2. However, it
made it so that authorizer challenges are skipped for all peers in
both msgr1 and msgr2 cases, effectively disabling the protection
against replay attacks that was put in place in commit f80b848d3f83
("auth/cephx: add authorizer challenge", CVE-2018-1128).
This is because con->get_features() always returns 0 at that
point. In msgr1 case, the peer shares its features along with the
authorizer, but while they are available in connect_msg.features they
aren't assigned to con until ProtocolV1::open(). In msgr2 case, the
peer doesn't share its features until much later (in CLIENT_IDENT
frame, i.e. after the authentication phase). The result is that
the !CEPHX_V2 branch is taken in all cases and replay attack protection
is lost.
Only clusters with cephx_service_require_version set to 2 on the
service daemons would not be silently downgraded. But, since the
default is 1 and there are no reports of looping on BADAUTHORIZER
faults, I'm pretty sure that no one has ever done that. Note that
cephx_require_version set to 2 would have no effect even though it
is supposed to be stronger than cephx_service_require_version
because MonClient::handle_auth_request() didn't check it.
To fix:
- for msgr1, check connect_msg.features (as was done before commit
c58c5754dfd2) and challenge if CEPHX_V2 is supported. Together
with two preceding patches that resurrect proper cephx_* option
handling in msgr1, this covers both "I want old clients to work"
and "I wish to require better authentication" use cases.
- for msgr2, don't check anything and always challenge. CEPHX_V2
predates msgr2, anyone speaking msgr2 must support it.
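A hedged sketch of the decision above (HAVE_FEATURE is ceph's feature-test
macro; the is_msgr2 flag and surrounding logic are assumed):
  // msgr1 peers advertise features with the authorizer, so challenge
  // only when the peer supports CEPHX_V2; msgr2 peers always get one.
  bool challenge = is_msgr2 || HAVE_FEATURE(connect_msg.features, CEPHX_V2);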
Signed-off-by: Ilya Dryomov <[email protected]>
(cherry picked from commit 4a82c72e3bdddcb625933e83af8b50a444b961f1) |
static void keep_sockalive(SOCKETTYPE fd)
{
const int tcp_one = 1;
#ifndef WIN32
const int tcp_keepidle = 45;
const int tcp_keepintvl = 30;
int flags = fcntl(fd, F_GETFL, 0);
fcntl(fd, F_SETFL, O_NONBLOCK | flags);
#else
u_long flags = 1;
ioctlsocket(fd, FIONBIO, &flags);
#endif
setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, (const char *)&tcp_one, sizeof(tcp_one));
if (!opt_delaynet)
#ifndef __linux
setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, (const char *)&tcp_one, sizeof(tcp_one));
#else /* __linux */
setsockopt(fd, SOL_TCP, TCP_NODELAY, (const void *)&tcp_one, sizeof(tcp_one));
setsockopt(fd, SOL_TCP, TCP_KEEPCNT, &tcp_one, sizeof(tcp_one));
setsockopt(fd, SOL_TCP, TCP_KEEPIDLE, &tcp_keepidle, sizeof(tcp_keepidle));
setsockopt(fd, SOL_TCP, TCP_KEEPINTVL, &tcp_keepintvl, sizeof(tcp_keepintvl));
#endif /* __linux__ */
#ifdef __APPLE_CC__
setsockopt(fd, IPPROTO_TCP, TCP_KEEPALIVE, &tcp_keepintvl, sizeof(tcp_keepintvl));
#endif /* __APPLE_CC__ */
} | 0 | [
"CWE-20",
"CWE-703"
] | sgminer | 910c36089940e81fb85c65b8e63dcd2fac71470c | 28,197,108,781,907,504,000,000,000,000,000,000,000 | 31 | stratum: parse_notify(): Don't die on malformed bbversion/prev_hash/nbit/ntime.
Might have introduced a memory leak, don't have time to check. :(
Should the other hex2bin()'s be checked?
Thanks to Mick Ayzenberg <mick.dejavusecurity.com> for finding this. |
static PyObject *Input_close(InputObject *self, PyObject *args)
{
if (!self->r) {
PyErr_SetString(PyExc_RuntimeError, "request object has expired");
return NULL;
}
if (!PyArg_ParseTuple(args, ":close"))
return NULL;
Py_INCREF(Py_None);
return Py_None;
} | 0 | [
"CWE-264"
] | mod_wsgi | d9d5fea585b23991f76532a9b07de7fcd3b649f4 | 137,662,448,633,373,120,000,000,000,000,000,000,000 | 13 | Local privilege escalation when using daemon mode. (CVE-2014-0240) |
static void f2fs_put_super(struct super_block *sb)
{
struct f2fs_sb_info *sbi = F2FS_SB(sb);
if (sbi->s_proc) {
remove_proc_entry("segment_info", sbi->s_proc);
remove_proc_entry("segment_bits", sbi->s_proc);
remove_proc_entry(sb->s_id, f2fs_proc_root);
}
kobject_del(&sbi->s_kobj);
stop_gc_thread(sbi);
/* prevent remaining shrinker jobs */
mutex_lock(&sbi->umount_mutex);
/*
* We don't need to do checkpoint when superblock is clean.
* But, the previous checkpoint was not done by umount, it needs to do
* clean checkpoint again.
*/
if (is_sbi_flag_set(sbi, SBI_IS_DIRTY) ||
!is_set_ckpt_flags(sbi, CP_UMOUNT_FLAG)) {
struct cp_control cpc = {
.reason = CP_UMOUNT,
};
write_checkpoint(sbi, &cpc);
}
/* be sure to wait for any on-going discard commands */
f2fs_wait_discard_bios(sbi);
if (!sbi->discard_blks) {
struct cp_control cpc = {
.reason = CP_UMOUNT | CP_TRIMMED,
};
write_checkpoint(sbi, &cpc);
}
	/* write_checkpoint can update stat information */
f2fs_destroy_stats(sbi);
/*
* normally superblock is clean, so we need to release this.
* In addition, EIO will skip do checkpoint, we need this as well.
*/
release_ino_entry(sbi, true);
f2fs_leave_shrinker(sbi);
mutex_unlock(&sbi->umount_mutex);
/* our cp_error case, we can wait for any writeback page */
f2fs_flush_merged_bios(sbi);
iput(sbi->node_inode);
iput(sbi->meta_inode);
/* destroy f2fs internal modules */
destroy_node_manager(sbi);
destroy_segment_manager(sbi);
kfree(sbi->ckpt);
kobject_put(&sbi->s_kobj);
wait_for_completion(&sbi->s_kobj_unregister);
sb->s_fs_info = NULL;
if (sbi->s_chksum_driver)
crypto_free_shash(sbi->s_chksum_driver);
kfree(sbi->raw_super);
destroy_device_list(sbi);
mempool_destroy(sbi->write_io_dummy);
destroy_percpu_info(sbi);
kfree(sbi);
} | 0 | [
"CWE-20",
"CWE-129"
] | linux | 15d3042a937c13f5d9244241c7a9c8416ff6e82a | 340,252,574,903,581,340,000,000,000,000,000,000,000 | 75 | f2fs: sanity check checkpoint segno and blkoff
Make sure segno and blkoff read from raw image are valid.
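A hedged sketch of the validation; MAIN_SEGS() and blocks_per_seg are
f2fs accessors, and the merged check in sanity_check_ckpt() iterates over
all current segments:
	if (segno >= MAIN_SEGS(sbi) || blkoff >= sbi->blocks_per_seg)
		return 1;	/* reject the corrupted checkpoint */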
Cc: [email protected]
Signed-off-by: Jin Qian <[email protected]>
[Jaegeuk Kim: adjust minor coding style]
Signed-off-by: Jaegeuk Kim <[email protected]> |
static int zlib_wrap_decompress(const char* input, size_t compressed_length,
char* output, size_t maxout) {
int status;
uLongf ul = (uLongf)maxout;
status = uncompress(
(Bytef*)output, &ul, (Bytef*)input, (uLong)compressed_length);
if (status != Z_OK) {
return 0;
}
return (int)ul;
} | 0 | [
"CWE-787"
] | c-blosc2 | c4c6470e88210afc95262c8b9fcc27e30ca043ee | 146,064,921,760,417,230,000,000,000,000,000,000,000 | 11 | Fixed asan heap buffer overflow when not enough space to write compressed block size. |
void ComparisonMatchExpressionBase::serialize(BSONObjBuilder* out) const {
out->append(path(), BSON(name() << _rhs));
} | 0 | [] | mongo | b0ef26c639112b50648a02d969298650fbd402a4 | 319,051,095,833,872,030,000,000,000,000,000,000,000 | 3 | SERVER-51083 Reject invalid UTF-8 from $regex match expressions |
create_display_for_user_session (GdmManager *self,
GdmSession *session,
const char *session_id)
{
GdmDisplay *display;
/* at the moment we only create GdmLocalDisplay objects on seat0 */
const char *seat_id = "seat0";
display = gdm_local_display_new ();
g_object_set (G_OBJECT (display),
"session-class", "user",
"seat-id", seat_id,
"session-id", session_id,
NULL);
gdm_display_store_add (self->priv->display_store,
display);
g_object_set_data (G_OBJECT (session), "gdm-display", display);
} | 0 | [] | gdm | ff98b2817014684ae1acec78ff06f0f461a56a9f | 39,528,340,421,942,616,000,000,000,000,000,000,000 | 19 | manager: if falling back to X11 retry autologin
Right now, we get one shot to autologin. If it fails, we fall back to
the greeter. We should give it another go if the reason for the failure
was wayland fallback to X.
https://bugzilla.gnome.org/show_bug.cgi?id=780520 |
write_pid_file(const char *path)
{
FILE *file;
unsigned long pid;
file = fopen(path, "w");
if (file == NULL)
return errno;
pid = (unsigned long) getpid();
if (fprintf(file, "%ld\n", pid) < 0 || fclose(file) == EOF)
return errno;
return 0;
} | 0 | [
"CWE-476"
] | krb5 | 5d2d9a1abe46a2c1a8614d4672d08d9d30a5f8bf | 128,433,977,669,026,770,000,000,000,000,000,000,000 | 13 | Multi-realm KDC null deref [CVE-2013-1418]
If a KDC serves multiple realms, certain requests can cause
setup_server_realm() to dereference a null pointer, crashing the KDC.
CVSSv2: AV:N/AC:M/Au:N/C:N/I:N/A:P/E:POC/RL:OF/RC:C
A related but more minor vulnerability requires authentication to
exploit, and is only present if a third-party KDC database module can
dereference a null pointer under certain conditions.
ticket: 7755 (new)
target_version: 1.12
tags: pullup |
parser_set_continues_to_current_position (parser_context_t *context_p, /**< context */
parser_branch_node_t *current_p) /**< branch list */
{
while (current_p != NULL)
{
if (current_p->branch.offset & CBC_HIGHEST_BIT_MASK)
{
parser_set_branch_to_current_position (context_p, ¤t_p->branch);
}
current_p = current_p->next_p;
}
} /* parser_set_continues_to_current_position */ | 0 | [
"CWE-416"
] | jerryscript | 3bcd48f72d4af01d1304b754ef19fe1a02c96049 | 253,312,294,480,172,800,000,000,000,000,000,000,000 | 12 | Improve parse_identifier (#4691)
Ascii string length is no longer computed during string allocation.
JerryScript-DCO-1.0-Signed-off-by: Daniel Batiz [email protected] |
void send_nt_replies(connection_struct *conn,
struct smb_request *req, NTSTATUS nt_error,
char *params, int paramsize,
char *pdata, int datasize)
{
int data_to_send = datasize;
int params_to_send = paramsize;
int useable_space;
char *pp = params;
char *pd = pdata;
int params_sent_thistime, data_sent_thistime, total_sent_thistime;
int alignment_offset = 1;
int data_alignment_offset = 0;
struct smbd_server_connection *sconn = req->sconn;
int max_send = sconn->smb1.sessions.max_send;
/*
* If there genuinely are no parameters or data to send just send
* the empty packet.
*/
if(params_to_send == 0 && data_to_send == 0) {
reply_outbuf(req, 18, 0);
if (NT_STATUS_V(nt_error)) {
error_packet_set((char *)req->outbuf,
0, 0, nt_error,
__LINE__,__FILE__);
}
show_msg((char *)req->outbuf);
if (!srv_send_smb(sconn,
(char *)req->outbuf,
true, req->seqnum+1,
IS_CONN_ENCRYPTED(conn),
&req->pcd)) {
exit_server_cleanly("send_nt_replies: srv_send_smb failed.");
}
TALLOC_FREE(req->outbuf);
return;
}
/*
* When sending params and data ensure that both are nicely aligned.
* Only do this alignment when there is also data to send - else
* can cause NT redirector problems.
*/
if (((params_to_send % 4) != 0) && (data_to_send != 0)) {
data_alignment_offset = 4 - (params_to_send % 4);
}
/*
* Space is bufsize minus Netbios over TCP header minus SMB header.
* The alignment_offset is to align the param bytes on a four byte
* boundary (2 bytes for data len, one byte pad).
* NT needs this to work correctly.
*/
useable_space = max_send - (smb_size
+ 2 * 18 /* wct */
+ alignment_offset
+ data_alignment_offset);
if (useable_space < 0) {
char *msg = talloc_asprintf(
talloc_tos(),
"send_nt_replies failed sanity useable_space = %d!!!",
useable_space);
DEBUG(0, ("%s\n", msg));
exit_server_cleanly(msg);
}
while (params_to_send || data_to_send) {
/*
* Calculate whether we will totally or partially fill this packet.
*/
total_sent_thistime = params_to_send + data_to_send;
/*
* We can never send more than useable_space.
*/
total_sent_thistime = MIN(total_sent_thistime, useable_space);
reply_outbuf(req, 18,
total_sent_thistime + alignment_offset
+ data_alignment_offset);
/*
* Set total params and data to be sent.
*/
SIVAL(req->outbuf,smb_ntr_TotalParameterCount,paramsize);
SIVAL(req->outbuf,smb_ntr_TotalDataCount,datasize);
/*
* Calculate how many parameters and data we can fit into
* this packet. Parameters get precedence.
*/
params_sent_thistime = MIN(params_to_send,useable_space);
data_sent_thistime = useable_space - params_sent_thistime;
data_sent_thistime = MIN(data_sent_thistime,data_to_send);
SIVAL(req->outbuf, smb_ntr_ParameterCount,
params_sent_thistime);
if(params_sent_thistime == 0) {
SIVAL(req->outbuf,smb_ntr_ParameterOffset,0);
SIVAL(req->outbuf,smb_ntr_ParameterDisplacement,0);
} else {
/*
* smb_ntr_ParameterOffset is the offset from the start of the SMB header to the
* parameter bytes, however the first 4 bytes of outbuf are
* the Netbios over TCP header. Thus use smb_base() to subtract
* them from the calculation.
*/
SIVAL(req->outbuf,smb_ntr_ParameterOffset,
((smb_buf(req->outbuf)+alignment_offset)
- smb_base(req->outbuf)));
/*
* Absolute displacement of param bytes sent in this packet.
*/
SIVAL(req->outbuf, smb_ntr_ParameterDisplacement,
pp - params);
}
/*
* Deal with the data portion.
*/
SIVAL(req->outbuf, smb_ntr_DataCount, data_sent_thistime);
if(data_sent_thistime == 0) {
SIVAL(req->outbuf,smb_ntr_DataOffset,0);
SIVAL(req->outbuf,smb_ntr_DataDisplacement, 0);
} else {
/*
* The offset of the data bytes is the offset of the
* parameter bytes plus the number of parameters being sent this time.
*/
SIVAL(req->outbuf, smb_ntr_DataOffset,
((smb_buf(req->outbuf)+alignment_offset) -
smb_base(req->outbuf))
+ params_sent_thistime + data_alignment_offset);
SIVAL(req->outbuf,smb_ntr_DataDisplacement, pd - pdata);
}
/*
* Copy the param bytes into the packet.
*/
if(params_sent_thistime) {
if (alignment_offset != 0) {
memset(smb_buf(req->outbuf), 0,
alignment_offset);
}
memcpy((smb_buf(req->outbuf)+alignment_offset), pp,
params_sent_thistime);
}
/*
* Copy in the data bytes
*/
if(data_sent_thistime) {
if (data_alignment_offset != 0) {
memset((smb_buf(req->outbuf)+alignment_offset+
params_sent_thistime), 0,
data_alignment_offset);
}
memcpy(smb_buf(req->outbuf)+alignment_offset
+params_sent_thistime+data_alignment_offset,
pd,data_sent_thistime);
}
DEBUG(9,("nt_rep: params_sent_thistime = %d, data_sent_thistime = %d, useable_space = %d\n",
params_sent_thistime, data_sent_thistime, useable_space));
DEBUG(9,("nt_rep: params_to_send = %d, data_to_send = %d, paramsize = %d, datasize = %d\n",
params_to_send, data_to_send, paramsize, datasize));
if (NT_STATUS_V(nt_error)) {
error_packet_set((char *)req->outbuf,
0, 0, nt_error,
__LINE__,__FILE__);
}
/* Send the packet */
show_msg((char *)req->outbuf);
if (!srv_send_smb(sconn,
(char *)req->outbuf,
true, req->seqnum+1,
IS_CONN_ENCRYPTED(conn),
&req->pcd)) {
exit_server_cleanly("send_nt_replies: srv_send_smb failed.");
}
TALLOC_FREE(req->outbuf);
pp += params_sent_thistime;
pd += data_sent_thistime;
params_to_send -= params_sent_thistime;
data_to_send -= data_sent_thistime;
/*
* Sanity check
*/
if(params_to_send < 0 || data_to_send < 0) {
DEBUG(0,("send_nt_replies failed sanity check pts = %d, dts = %d\n!!!",
params_to_send, data_to_send));
exit_server_cleanly("send_nt_replies: internal error");
}
}
} | 0 | [
"CWE-189"
] | samba | efdbcabbe97a594572d71d714d258a5854c5d8ce | 109,472,872,616,177,600,000,000,000,000,000,000,000 | 220 | Fix bug #10010 - Missing integer wrap protection in EA list reading can cause server to loop with DOS.
Ensure we never wrap whilst adding client-provided input.
CVE-2013-4124
Signed-off-by: Jeremy Allison <[email protected]> |
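The fix described above is, at heart, a guarded offset walk over a
client-supplied buffer. A minimal sketch of the wrap check it implies,
assuming a loop that advances by a client-controlled next_offset field
(the helper name and parameters are illustrative, not the verbatim
Samba patch):

/* Hypothetical helper: advance *offset by a client-controlled step,
 * refusing zero steps and any step that would wrap or overrun bufsize.
 * Without these checks a crafted EA list can make the server loop
 * forever on the same entry. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

static bool advance_offset(size_t *offset, uint32_t next_offset,
                           size_t bufsize)
{
        if (*offset >= bufsize) {
                return false;   /* already at or past the end */
        }
        if (next_offset == 0 || next_offset > bufsize - *offset) {
                return false;   /* zero step, wrap, or overrun */
        }
        *offset += next_offset;
        return true;
}

The key detail is comparing next_offset against bufsize - *offset rather
than computing *offset + next_offset first, so the addition itself can
never wrap.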
static void tcp_v6_send_ack(const struct sock *sk, struct sk_buff *skb, u32 seq,
u32 ack, u32 win, u32 tsval, u32 tsecr, int oif,
struct tcp_md5sig_key *key, u8 tclass,
u32 label)
{
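/* A bare ACK: delegate to tcp_v6_send_response() with rst == 0. */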
tcp_v6_send_response(sk, skb, seq, ack, win, tsval, tsecr, oif, key, 0,
tclass, label);
} | 0 | [
"CWE-416",
"CWE-284",
"CWE-264"
] | linux | 45f6fad84cc305103b28d73482b344d7f5b76f39 | 71,620,936,399,864,660,000,000,000,000,000,000,000 | 8 | ipv6: add complete rcu protection around np->opt
This patch addresses multiple problems:
UDP/RAW sendmsg() needs to get a stable struct ipv6_txoptions
while the socket is not locked: other threads can change np->opt
concurrently. Dmitry posted a syzkaller
(http://github.com/google/syzkaller) program demonstrating a
use-after-free.
Starting with TCP/DCCP lockless listeners, tcp_v6_syn_recv_sock()
and dccp_v6_request_recv_sock() also need to use RCU protection
to dereference np->opt once (before calling ipv6_dup_options()).
This patch adds full RCU protection to np->opt.
Signed-off-by: Eric Dumazet <[email protected]>
Acked-by: Hannes Frederic Sowa <[email protected]>
Signed-off-by: David S. Miller <[email protected]> |
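The access pattern the patch introduces is, approximately, the helper
below (modelled on the txopt_get() helper the commit adds to
include/net/ipv6.h; simplified here, so treat it as a sketch rather
than the exact kernel code):

/* Take a stable, refcounted snapshot of np->opt without holding the
 * socket lock: readers dereference under RCU and bump a refcount;
 * writers publish a replacement ipv6_txoptions and free the old one
 * only after a grace period. */
static inline struct ipv6_txoptions *txopt_get(const struct ipv6_pinfo *np)
{
	struct ipv6_txoptions *opt;

	rcu_read_lock();
	opt = rcu_dereference(np->opt);		/* np->opt is __rcu-annotated */
	if (opt && !atomic_inc_not_zero(&opt->refcnt))
		opt = NULL;			/* options being torn down */
	rcu_read_unlock();
	return opt;
}

Callers pair this with a matching txopt_put() once finished, and setters
swap the pointer with rcu_assign_pointer(), so concurrent readers never
see a half-freed structure.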
virtual uint exists2in_reserved_items() { return 0; }; | 0 | [
"CWE-617"
] | server | 2e7891080667c59ac80f788eef4d59d447595772 | 199,525,163,383,755,500,000,000,000,000,000,000,000 | 1 | MDEV-25635 Assertion failure when pushing from HAVING into WHERE of view
This bug could manifest itself after pushing a WHERE condition over a
mergeable derived table / view / CTE DT into a grouping view / derived
table / CTE V whose item list contained set functions with constant
arguments such as MIN(2), SUM(1) etc. In such cases the field references
used in the condition pushed into the view V that correspond to set
functions are wrapped into Item_direct_view_ref wrappers. Due to a wrong
implementation of the virtual method const_item() for the class
Item_direct_view_ref, the wrapped set functions with constant arguments
could be erroneously taken for constant items. This could lead to a
wrong result set returned by the main select query in 10.2. In 10.4,
where the possibility of pushing conditions from HAVING into WHERE had
been added, this could cause a crash.
Approved by Sergey Petrunya <[email protected]> |
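A toy model of the constness bug the message describes (the class names
are illustrative stand-ins for MariaDB's Item_direct_view_ref machinery,
not the actual patch):

// A set function such as MIN(2) has constant arguments but is not a
// constant item: its value only exists after aggregation has run.
struct Item {
  virtual ~Item() = default;
  virtual bool const_item() const = 0;
};

struct Item_sum_min : Item {            // models MIN(2)
  bool const_item() const override { return false; }
};

struct Item_view_ref : Item {           // models the view-ref wrapper
  const Item *ref;
  explicit Item_view_ref(const Item *r) : ref(r) {}
  // The buggy wrapper reported true for wrapped set functions with
  // constant arguments; delegating to the wrapped item avoids treating
  // MIN(2) as a constant during condition pushdown.
  bool const_item() const override { return ref->const_item(); }
};

Under the buggy behaviour the pushed-down condition would fold the
wrapper as a constant, yielding wrong results in 10.2 and a crash in
10.4's HAVING-to-WHERE pushdown.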