CVE ID (string, 13–43 chars, nullable) | CVE Page (string, 45–48 chars, nullable) | CWE ID (string, 90 classes) | codeLink (string, 46–139 chars) | commit_id (string, 6–81 chars) | commit_message (string, 3–13.3k chars, nullable) | func_after (string, 14–241k chars) | func_before (string, 14–241k chars) | lang (string, 3 classes) | project (string, 309 classes) | vul (int8: 0 or 1)
---|---|---|---|---|---|---|---|---|---|---|
CVE-2015-6791
|
https://www.cvedetails.com/cve/CVE-2015-6791/
| null |
https://github.com/chromium/chromium/commit/7e995b26a5a503adefc0ad40435f7e16a45434c2
|
7e995b26a5a503adefc0ad40435f7e16a45434c2
|
Add a fake DriveFS launcher client.
Using DriveFS requires building and deploying ChromeOS. Add a client for
the fake DriveFS launcher so that a real DriveFS from a ChromeOS chroot
can be used with a target_os="chromeos" build of chrome.
This connects to the fake DriveFS launcher using mojo over a unix domain
socket named by a command-line flag, using the launcher to create
DriveFS instances.
Bug: 848126
Change-Id: I22dcca154d41bda196dd7c1782bb503f6bcba5b1
Reviewed-on: https://chromium-review.googlesource.com/1098434
Reviewed-by: Xiyuan Xia <[email protected]>
Commit-Queue: Sam McNally <[email protected]>
Cr-Commit-Position: refs/heads/master@{#567513}
|
void OnGotTpmIsReady(base::Optional<bool> tpm_is_ready) {
if (!tpm_is_ready.has_value() || !tpm_is_ready.value()) {
VLOG(1) << "SystemTokenCertDBInitializer: TPM is not ready - not loading "
"system token.";
if (ShallAttemptTpmOwnership()) {
LOG(WARNING) << "Request attempting TPM ownership.";
DBusThreadManager::Get()->GetCryptohomeClient()->TpmCanAttemptOwnership(
EmptyVoidDBusMethodCallback());
}
return;
}
VLOG(1)
<< "SystemTokenCertDBInitializer: TPM is ready, loading system token.";
TPMTokenLoader::Get()->EnsureStarted();
base::Callback<void(crypto::ScopedPK11Slot)> callback =
base::BindRepeating(&SystemTokenCertDBInitializer::InitializeDatabase,
weak_ptr_factory_.GetWeakPtr());
content::BrowserThread::PostTask(
content::BrowserThread::IO, FROM_HERE,
base::BindOnce(&GetSystemSlotOnIOThread, callback));
}
|
void OnGotTpmIsReady(base::Optional<bool> tpm_is_ready) {
if (!tpm_is_ready.has_value() || !tpm_is_ready.value()) {
VLOG(1) << "SystemTokenCertDBInitializer: TPM is not ready - not loading "
"system token.";
if (ShallAttemptTpmOwnership()) {
LOG(WARNING) << "Request attempting TPM ownership.";
DBusThreadManager::Get()->GetCryptohomeClient()->TpmCanAttemptOwnership(
EmptyVoidDBusMethodCallback());
}
return;
}
VLOG(1)
<< "SystemTokenCertDBInitializer: TPM is ready, loading system token.";
TPMTokenLoader::Get()->EnsureStarted();
base::Callback<void(crypto::ScopedPK11Slot)> callback =
base::BindRepeating(&SystemTokenCertDBInitializer::InitializeDatabase,
weak_ptr_factory_.GetWeakPtr());
content::BrowserThread::PostTask(
content::BrowserThread::IO, FROM_HERE,
base::BindOnce(&GetSystemSlotOnIOThread, callback));
}
|
C
|
Chrome
| 0 |
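The row above describes connecting to a fake DriveFS launcher over a unix domain socket named by a command-line flag. As a rough, hedged illustration of that transport only (this is not the Chromium/mojo code; the --launcher-socket flag name and all error handling here are invented for the sketch):

```c
/* Minimal sketch of connecting to a unix domain socket named by a
 * command-line flag. NOT the Chromium mojo code; the flag name and
 * error handling are illustrative assumptions only. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(int argc, char **argv) {
    const char *path = NULL;
    for (int i = 1; i < argc - 1; i++)     /* e.g. --launcher-socket <path> */
        if (strcmp(argv[i], "--launcher-socket") == 0)
            path = argv[i + 1];
    if (!path) {
        fprintf(stderr, "usage: %s --launcher-socket <path>\n", argv[0]);
        return 1;
    }
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }
    /* The real client would speak mojo over this fd. */
    close(fd);
    return 0;
}
```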
CVE-2018-20784
|
https://www.cvedetails.com/cve/CVE-2018-20784/
|
CWE-400
|
https://github.com/torvalds/linux/commit/c40f7d74c741a907cfaeb73a7697081881c497d0
|
c40f7d74c741a907cfaeb73a7697081881c497d0
|
sched/fair: Fix infinite loop in update_blocked_averages() by reverting a9e7f6544b9c
Zhipeng Xie, Xie XiuQi and Sargun Dhillon reported lockups in the
scheduler under high loads, starting at around the v4.18 time frame,
and Zhipeng Xie tracked it down to bugs in the rq->leaf_cfs_rq_list
manipulation.
Do a (manual) revert of:
a9e7f6544b9c ("sched/fair: Fix O(nr_cgroups) in load balance path")
It turns out that the list_del_leaf_cfs_rq() introduced by this commit
has a surprising property that was not considered in followup commits
such as:
9c2791f936ef ("sched/fair: Fix hierarchical order in rq->leaf_cfs_rq_list")
As Vincent Guittot explains:
"I think that there is a bigger problem with commit a9e7f6544b9c and
cfs_rq throttling:
Let take the example of the following topology TG2 --> TG1 --> root:
1) The 1st time a task is enqueued, we will add TG2 cfs_rq then TG1
cfs_rq to leaf_cfs_rq_list and we are sure to do the whole branch in
one path because it has never been used and can't be throttled so
tmp_alone_branch will point to leaf_cfs_rq_list at the end.
2) Then TG1 is throttled
3) and we add TG3 as a new child of TG1.
4) The 1st enqueue of a task on TG3 will add TG3 cfs_rq just before TG1
cfs_rq and tmp_alone_branch will stay on rq->leaf_cfs_rq_list.
With commit a9e7f6544b9c, we can del a cfs_rq from rq->leaf_cfs_rq_list.
So if the load of TG1 cfs_rq becomes NULL before step 2) above, TG1
cfs_rq is removed from the list.
Then at step 4), TG3 cfs_rq is added at the beginning of rq->leaf_cfs_rq_list
but tmp_alone_branch still points to TG3 cfs_rq because its throttled
parent can't be enqueued when the lock is released.
tmp_alone_branch doesn't point to rq->leaf_cfs_rq_list whereas it should.
So if TG3 cfs_rq is removed or destroyed before tmp_alone_branch
points on another TG cfs_rq, the next TG cfs_rq that will be added,
will be linked outside rq->leaf_cfs_rq_list - which is bad.
In addition, we can break the ordering of the cfs_rq in
rq->leaf_cfs_rq_list but this ordering is used to update and
propagate the update from leaf down to root."
Instead of trying to work through all these cases and trying to reproduce
the very high loads that produced the lockup to begin with, simplify
the code temporarily by reverting a9e7f6544b9c, a change that was clearly
not thought through completely.
This (hopefully) gives us a kernel that doesn't lock up so people
can continue to enjoy their holidays without worrying about regressions. ;-)
[ mingo: Wrote changelog, fixed weird spelling in code comment while at it. ]
Analyzed-by: Xie XiuQi <[email protected]>
Analyzed-by: Vincent Guittot <[email protected]>
Reported-by: Zhipeng Xie <[email protected]>
Reported-by: Sargun Dhillon <[email protected]>
Reported-by: Xie XiuQi <[email protected]>
Tested-by: Zhipeng Xie <[email protected]>
Tested-by: Sargun Dhillon <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Acked-by: Vincent Guittot <[email protected]>
Cc: <[email protected]> # v4.13+
Cc: Bin Li <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Fixes: a9e7f6544b9c ("sched/fair: Fix O(nr_cgroups) in load balance path")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
group_type group_classify(struct sched_group *group,
struct sg_lb_stats *sgs)
{
if (sgs->group_no_capacity)
return group_overloaded;
if (sg_imbalanced(group))
return group_imbalanced;
if (sgs->group_misfit_task_load)
return group_misfit_task;
return group_other;
}
|
group_type group_classify(struct sched_group *group,
struct sg_lb_stats *sgs)
{
if (sgs->group_no_capacity)
return group_overloaded;
if (sg_imbalanced(group))
return group_imbalanced;
if (sgs->group_misfit_task_load)
return group_misfit_task;
return group_other;
}
|
C
|
linux
| 0 |
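Vincent Guittot's analysis above turns on a cursor (tmp_alone_branch) that can be left pointing at a node after that node is removed from rq->leaf_cfs_rq_list, so later insertions land outside the list. A toy doubly-linked-list sketch of that failure mode (hypothetical names, not the scheduler code):

```c
/* Toy illustration (not kernel code) of the failure mode described
 * above: an external cursor keeps pointing at a list node that is
 * later deleted, so a subsequent insertion happens "outside" the list. */
#include <stdio.h>

struct node { struct node *prev, *next; int id; };

static void list_insert_before(struct node *pos, struct node *n) {
    n->prev = pos->prev; n->next = pos;
    pos->prev->next = n; pos->prev = n;
}
static void list_del(struct node *n) {
    n->prev->next = n->next; n->next->prev = n->prev;
    n->prev = n->next = n;   /* node now floats outside the list */
}

int main(void) {
    struct node head = { &head, &head, 0 };
    struct node tg1 = { .id = 1 }, tg3 = { .id = 3 };
    list_insert_before(&head, &tg1);   /* list: head <-> tg1 */
    struct node *cursor = &tg1;        /* plays the role of tmp_alone_branch */
    list_del(&tg1);                    /* tg1 removed, cursor now stale */
    list_insert_before(cursor, &tg3);  /* tg3 linked to tg1, not to the list! */
    for (struct node *p = head.next; p != &head; p = p->next)
        printf("in list: %d\n", p->id);            /* prints nothing */
    printf("tg3 is reachable only via the stale cursor\n");
    return 0;
}
```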
CVE-2016-8339
|
https://www.cvedetails.com/cve/CVE-2016-8339/
|
CWE-119
|
https://github.com/antirez/redis/commit/6d9f8e2462fc2c426d48c941edeb78e5df7d2977
|
6d9f8e2462fc2c426d48c941edeb78e5df7d2977
|
Security: CONFIG SET client-output-buffer-limit overflow fixed.
This commit fixes a vulnerability reported by Cory Duplantis
of Cisco Talos; see TALOS-2016-0206 for reference.
CONFIG SET client-output-buffer-limit accepts as client class "master"
which is actually only used to implement CLIENT KILL. The "master" class
has ID 3. What happens is that the global structure:
server.client_obuf_limits[class]
Is accessed with class = 3. However, it is a 3-element array, so writing
the 4th element means writing up to 24 bytes of memory *after* the end
of the array, since the structure is defined as:
typedef struct clientBufferLimitsConfig {
unsigned long long hard_limit_bytes;
unsigned long long soft_limit_bytes;
time_t soft_limit_seconds;
} clientBufferLimitsConfig;
EVALUATION OF IMPACT:
Checking what's past the boundaries of the array in the global
'server' structure, we find AOF state fields:
clientBufferLimitsConfig client_obuf_limits[CLIENT_TYPE_OBUF_COUNT];
/* AOF persistence */
int aof_state; /* AOF_(ON|OFF|WAIT_REWRITE) */
int aof_fsync; /* Kind of fsync() policy */
char *aof_filename; /* Name of the AOF file */
int aof_no_fsync_on_rewrite; /* Don't fsync if a rewrite is in prog. */
int aof_rewrite_perc; /* Rewrite AOF if % growth is > M and... */
off_t aof_rewrite_min_size; /* the AOF file is at least N bytes. */
off_t aof_rewrite_base_size; /* AOF size on latest startup or rewrite. */
off_t aof_current_size; /* AOF current size. */
Writing to most of these fields should be harmless and only cause problems in
Redis persistence that should not escalate to security problems.
However, unfortunately, writing to "aof_filename" could potentially be a
security issue depending on the access pattern.
Searching for "aof.filename" accesses in the source code returns many different
usages of the field, including using it as input for open(), logging to the
Redis log file or syslog, and calling the rename() syscall.
It looks possible that attacks could lead at least to information
disclosure of the state and data inside Redis. However note that the
attacker must already have access to the server. But, worse than that,
it looks possible that being able to change the AOF filename can be used
to mount more powerful attacks: like overwriting random files with AOF
data (easily a potential security issue as demonstrated here:
http://antirez.com/news/96), or even more subtle attacks where the
AOF filename is changed to a path where a malicious AOF file is loaded
in order to exploit other potential issues when the AOF parser is fed
with untrusted input (no such issue is currently known).
The fix checks the places where the 'master' class is specified in
order to access configuration data structures, and returns an error in
these cases.
WHO IS AT RISK?
The "master" client class was introduced in Redis in Jul 28 2015.
Every Redis instance released past this date is not vulnerable
while all the releases after this date are. Notably:
Redis 3.0.x is NOT vunlerable.
Redis 3.2.x IS vulnerable.
Redis unstable is vulnerable.
In order for the instance to be at risk, at least one of the following
conditions must be true:
1. The attacker can access Redis remotely and is able to send
the CONFIG SET command (often banned in managed Redis instances).
2. The attacker is able to control the "redis.conf" file and
can wait or trigger a server restart.
The problem was fixed 26th September 2016 in all the releases affected.
|
const char *configEnumGetNameOrUnknown(configEnum *ce, int val) {
const char *name = configEnumGetName(ce,val);
return name ? name : "unknown";
}
|
const char *configEnumGetNameOrUnknown(configEnum *ce, int val) {
const char *name = configEnumGetName(ce,val);
return name ? name : "unknown";
}
|
C
|
redis
| 0 |
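To make the overflow above concrete, here is a minimal stand-in (not the actual Redis source; names are simplified) showing how class = 3 indexes past a 3-element client_obuf_limits array into adjacent AOF fields, and the kind of bounds check the fix adds:

```c
/* Simplified stand-in (not the actual Redis source) for the overflow
 * described above: indexing a 3-element limits array with class == 3
 * writes past its end into neighbouring fields of the global struct. */
#include <stdio.h>

#define CLIENT_TYPE_OBUF_COUNT 3   /* normal, slave, pubsub */

typedef struct clientBufferLimitsConfig {
    unsigned long long hard_limit_bytes;
    unsigned long long soft_limit_bytes;
    long soft_limit_seconds;
} clientBufferLimitsConfig;

struct server_like {
    clientBufferLimitsConfig client_obuf_limits[CLIENT_TYPE_OBUF_COUNT];
    char *aof_filename;            /* sits right after the array in memory */
} server;

/* Fixed version: reject classes outside the configurable range. */
int set_obuf_limit(int class, unsigned long long hard) {
    if (class < 0 || class >= CLIENT_TYPE_OBUF_COUNT)
        return -1;                 /* "master" (class 3) is rejected */
    server.client_obuf_limits[class].hard_limit_bytes = hard;
    return 0;
}

int main(void) {
    if (set_obuf_limit(3, 0xdeadbeefULL) < 0)
        puts("class 3 rejected: no write past client_obuf_limits[]");
    return 0;
}
```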
CVE-2017-9985
|
https://www.cvedetails.com/cve/CVE-2017-9985/
|
CWE-125
|
https://github.com/torvalds/linux/commit/20e2b791796bd68816fa115f12be5320de2b8021
|
20e2b791796bd68816fa115f12be5320de2b8021
|
ALSA: msnd: Optimize / harden DSP and MIDI loops
The ISA msnd drivers have loops fetching the ring-buffer head, tail
and size values inside the loops. Such code is inefficient and
fragile.
This patch optimizes it, and also adds the sanity check to avoid the
endless loops.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=196131
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=196133
Signed-off-by: Takashi Iwai <[email protected]>
|
static int snd_msnd_probe(struct snd_card *card)
{
struct snd_msnd *chip = card->private_data;
unsigned char info;
#ifndef MSND_CLASSIC
char *xv, *rev = NULL;
char *pin = "TB Pinnacle", *fiji = "TB Fiji";
char *pinfiji = "TB Pinnacle/Fiji";
#endif
if (!request_region(chip->io, DSP_NUMIO, "probing")) {
snd_printk(KERN_ERR LOGNAME ": I/O port conflict\n");
return -ENODEV;
}
if (snd_msnd_reset_dsp(chip->io, &info) < 0) {
release_region(chip->io, DSP_NUMIO);
return -ENODEV;
}
#ifdef MSND_CLASSIC
strcpy(card->shortname, "Classic/Tahiti/Monterey");
strcpy(card->longname, "Turtle Beach Multisound");
printk(KERN_INFO LOGNAME ": %s, "
"I/O 0x%lx-0x%lx, IRQ %d, memory mapped to 0x%lX-0x%lX\n",
card->shortname,
chip->io, chip->io + DSP_NUMIO - 1,
chip->irq,
chip->base, chip->base + 0x7fff);
#else
switch (info >> 4) {
case 0xf:
xv = "<= 1.15";
break;
case 0x1:
xv = "1.18/1.2";
break;
case 0x2:
xv = "1.3";
break;
case 0x3:
xv = "1.4";
break;
default:
xv = "unknown";
break;
}
switch (info & 0x7) {
case 0x0:
rev = "I";
strcpy(card->shortname, pin);
break;
case 0x1:
rev = "F";
strcpy(card->shortname, pin);
break;
case 0x2:
rev = "G";
strcpy(card->shortname, pin);
break;
case 0x3:
rev = "H";
strcpy(card->shortname, pin);
break;
case 0x4:
rev = "E";
strcpy(card->shortname, fiji);
break;
case 0x5:
rev = "C";
strcpy(card->shortname, fiji);
break;
case 0x6:
rev = "D";
strcpy(card->shortname, fiji);
break;
case 0x7:
rev = "A-B (Fiji) or A-E (Pinnacle)";
strcpy(card->shortname, pinfiji);
break;
}
strcpy(card->longname, "Turtle Beach Multisound Pinnacle");
printk(KERN_INFO LOGNAME ": %s revision %s, Xilinx version %s, "
"I/O 0x%lx-0x%lx, IRQ %d, memory mapped to 0x%lX-0x%lX\n",
card->shortname,
rev, xv,
chip->io, chip->io + DSP_NUMIO - 1,
chip->irq,
chip->base, chip->base + 0x7fff);
#endif
release_region(chip->io, DSP_NUMIO);
return 0;
}
|
static int snd_msnd_probe(struct snd_card *card)
{
struct snd_msnd *chip = card->private_data;
unsigned char info;
#ifndef MSND_CLASSIC
char *xv, *rev = NULL;
char *pin = "TB Pinnacle", *fiji = "TB Fiji";
char *pinfiji = "TB Pinnacle/Fiji";
#endif
if (!request_region(chip->io, DSP_NUMIO, "probing")) {
snd_printk(KERN_ERR LOGNAME ": I/O port conflict\n");
return -ENODEV;
}
if (snd_msnd_reset_dsp(chip->io, &info) < 0) {
release_region(chip->io, DSP_NUMIO);
return -ENODEV;
}
#ifdef MSND_CLASSIC
strcpy(card->shortname, "Classic/Tahiti/Monterey");
strcpy(card->longname, "Turtle Beach Multisound");
printk(KERN_INFO LOGNAME ": %s, "
"I/O 0x%lx-0x%lx, IRQ %d, memory mapped to 0x%lX-0x%lX\n",
card->shortname,
chip->io, chip->io + DSP_NUMIO - 1,
chip->irq,
chip->base, chip->base + 0x7fff);
#else
switch (info >> 4) {
case 0xf:
xv = "<= 1.15";
break;
case 0x1:
xv = "1.18/1.2";
break;
case 0x2:
xv = "1.3";
break;
case 0x3:
xv = "1.4";
break;
default:
xv = "unknown";
break;
}
switch (info & 0x7) {
case 0x0:
rev = "I";
strcpy(card->shortname, pin);
break;
case 0x1:
rev = "F";
strcpy(card->shortname, pin);
break;
case 0x2:
rev = "G";
strcpy(card->shortname, pin);
break;
case 0x3:
rev = "H";
strcpy(card->shortname, pin);
break;
case 0x4:
rev = "E";
strcpy(card->shortname, fiji);
break;
case 0x5:
rev = "C";
strcpy(card->shortname, fiji);
break;
case 0x6:
rev = "D";
strcpy(card->shortname, fiji);
break;
case 0x7:
rev = "A-B (Fiji) or A-E (Pinnacle)";
strcpy(card->shortname, pinfiji);
break;
}
strcpy(card->longname, "Turtle Beach Multisound Pinnacle");
printk(KERN_INFO LOGNAME ": %s revision %s, Xilinx version %s, "
"I/O 0x%lx-0x%lx, IRQ %d, memory mapped to 0x%lX-0x%lX\n",
card->shortname,
rev, xv,
chip->io, chip->io + DSP_NUMIO - 1,
chip->irq,
chip->base, chip->base + 0x7fff);
#endif
release_region(chip->io, DSP_NUMIO);
return 0;
}
|
C
|
linux
| 0 |
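The msnd commit above hoists the ring-buffer head/tail/size reads out of the loop and adds a sanity check so corrupt values cannot cause an endless loop. A generic sketch of that hardening pattern (not the actual driver code):

```c
/* Generic sketch (not the actual msnd driver) of the hardening the
 * commit describes: read head/tail/size once, validate them, and cap
 * the number of loop iterations so corrupt device-supplied values
 * cannot cause an endless loop. */
#include <stdio.h>

struct ring { volatile int head, tail, size; };

static int consume(struct ring *r, void (*handle)(int)) {
    /* Fetch the (device-writable) values once instead of re-reading
     * them on every iteration. */
    int head = r->head, tail = r->tail, size = r->size;
    if (size <= 0 || head < 0 || head >= size || tail < 0 || tail >= size)
        return -1;                   /* sanity check on untrusted values */
    int budget = size;               /* at most one full lap */
    while (tail != head && budget-- > 0) {
        handle(tail);
        tail = (tail + 1) % size;
    }
    r->tail = tail;
    return 0;
}

static void handle_slot(int slot) { printf("slot %d\n", slot); }

int main(void) {
    struct ring r = { .head = 2, .tail = 0, .size = 4 };
    return consume(&r, handle_slot) ? 1 : 0;
}
```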
CVE-2015-7510
|
https://www.cvedetails.com/cve/CVE-2015-7510/
|
CWE-119
|
https://github.com/keszybz/systemd/commit/cb31827d62066a04b02111df3052949fda4b6888
|
cb31827d62066a04b02111df3052949fda4b6888
|
nss-mymachines: do not allow overlong machine names
https://github.com/systemd/systemd/issues/2002
|
enum nss_status _nss_mymachines_getgrgid_r(
gid_t gid,
struct group *gr,
char *buffer, size_t buflen,
int *errnop) {
_cleanup_bus_error_free_ sd_bus_error error = SD_BUS_ERROR_NULL;
_cleanup_bus_message_unref_ sd_bus_message* reply = NULL;
_cleanup_bus_flush_close_unref_ sd_bus *bus = NULL;
const char *machine, *object;
uint32_t mapped;
int r;
if (!gid_is_valid(gid)) {
r = -EINVAL;
goto fail;
}
/* We consider all gids < 65536 host gids */
if (gid < 0x10000)
goto not_found;
r = sd_bus_open_system(&bus);
if (r < 0)
goto fail;
r = sd_bus_call_method(bus,
"org.freedesktop.machine1",
"/org/freedesktop/machine1",
"org.freedesktop.machine1.Manager",
"MapToMachineGroup",
&error,
&reply,
"u",
(uint32_t) gid);
if (r < 0) {
if (sd_bus_error_has_name(&error, BUS_ERROR_NO_SUCH_GROUP_MAPPING))
goto not_found;
goto fail;
}
r = sd_bus_message_read(reply, "sou", &machine, &object, &mapped);
if (r < 0)
goto fail;
if (buflen < sizeof(char*) + 1) {
*errnop = ENOMEM;
return NSS_STATUS_TRYAGAIN;
}
memzero(buffer, sizeof(char*));
if (snprintf(buffer + sizeof(char*), buflen - sizeof(char*), "vg-%s-" GID_FMT, machine, (gid_t) mapped) >= (int) buflen) {
*errnop = ENOMEM;
return NSS_STATUS_TRYAGAIN;
}
gr->gr_name = buffer + sizeof(char*);
gr->gr_gid = gid;
gr->gr_passwd = (char*) "*"; /* locked */
gr->gr_mem = (char**) buffer;
*errnop = 0;
return NSS_STATUS_SUCCESS;
not_found:
*errnop = 0;
return NSS_STATUS_NOTFOUND;
fail:
*errnop = -r;
return NSS_STATUS_UNAVAIL;
}
|
enum nss_status _nss_mymachines_getgrgid_r(
gid_t gid,
struct group *gr,
char *buffer, size_t buflen,
int *errnop) {
_cleanup_bus_error_free_ sd_bus_error error = SD_BUS_ERROR_NULL;
_cleanup_bus_message_unref_ sd_bus_message* reply = NULL;
_cleanup_bus_flush_close_unref_ sd_bus *bus = NULL;
const char *machine, *object;
uint32_t mapped;
int r;
if (!gid_is_valid(gid)) {
r = -EINVAL;
goto fail;
}
/* We consider all gids < 65536 host gids */
if (gid < 0x10000)
goto not_found;
r = sd_bus_open_system(&bus);
if (r < 0)
goto fail;
r = sd_bus_call_method(bus,
"org.freedesktop.machine1",
"/org/freedesktop/machine1",
"org.freedesktop.machine1.Manager",
"MapToMachineGroup",
&error,
&reply,
"u",
(uint32_t) gid);
if (r < 0) {
if (sd_bus_error_has_name(&error, BUS_ERROR_NO_SUCH_GROUP_MAPPING))
goto not_found;
goto fail;
}
r = sd_bus_message_read(reply, "sou", &machine, &object, &mapped);
if (r < 0)
goto fail;
if (buflen < sizeof(char*) + 1) {
*errnop = ENOMEM;
return NSS_STATUS_TRYAGAIN;
}
memzero(buffer, sizeof(char*));
if (snprintf(buffer + sizeof(char*), buflen - sizeof(char*), "vg-%s-" GID_FMT, machine, (gid_t) mapped) >= (int) buflen) {
*errnop = ENOMEM;
return NSS_STATUS_TRYAGAIN;
}
gr->gr_name = buffer + sizeof(char*);
gr->gr_gid = gid;
gr->gr_passwd = (char*) "*"; /* locked */
gr->gr_mem = (char**) buffer;
*errnop = 0;
return NSS_STATUS_SUCCESS;
not_found:
*errnop = 0;
return NSS_STATUS_NOTFOUND;
fail:
*errnop = -r;
return NSS_STATUS_UNAVAIL;
}
|
C
|
systemd
| 0 |
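The systemd fix above rejects machine names too long to format into the caller-supplied NSS buffer. A hedged sketch of that kind of length validation (the limit and helper below are illustrative assumptions, not the actual patch):

```c
/* Hedged sketch (not the actual systemd patch) of rejecting an
 * overlong machine name up front, before formatting it into a
 * caller-supplied NSS buffer, rather than relying solely on
 * snprintf truncation detection. */
#include <stdio.h>
#include <string.h>

#define MACHINE_NAME_MAX 64   /* illustrative limit, an assumption */

static int format_group_name(char *buf, size_t buflen,
                             const char *machine, unsigned gid) {
    if (strlen(machine) > MACHINE_NAME_MAX)
        return -1;                           /* treat as "not found" */
    if ((size_t)snprintf(buf, buflen, "vg-%s-%u", machine, gid) >= buflen)
        return -1;                           /* would not fit */
    return 0;
}

int main(void) {
    char buf[256];
    if (format_group_name(buf, sizeof(buf), "mycontainer", 65536) == 0)
        puts(buf);
    return 0;
}
```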
CVE-2016-9806
|
https://www.cvedetails.com/cve/CVE-2016-9806/
|
CWE-415
|
https://github.com/torvalds/linux/commit/92964c79b357efd980812c4de5c1fd2ec8bb5520
|
92964c79b357efd980812c4de5c1fd2ec8bb5520
|
netlink: Fix dump skb leak/double free
When we free cb->skb after a dump, we do it after releasing the
lock. This means that a new dump could have started in the meantime,
and we'll end up freeing their skb instead of ours.
This patch saves the skb and module before we unlock so we free
the right memory.
Fixes: 16b304f3404f ("netlink: Eliminate kmalloc in netlink dump operation.")
Reported-by: Baozeng Ding <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Acked-by: Cong Wang <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
int netlink_attachskb(struct sock *sk, struct sk_buff *skb,
long *timeo, struct sock *ssk)
{
struct netlink_sock *nlk;
nlk = nlk_sk(sk);
if ((atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf ||
test_bit(NETLINK_S_CONGESTED, &nlk->state))) {
DECLARE_WAITQUEUE(wait, current);
if (!*timeo) {
if (!ssk || netlink_is_kernel(ssk))
netlink_overrun(sk);
sock_put(sk);
kfree_skb(skb);
return -EAGAIN;
}
__set_current_state(TASK_INTERRUPTIBLE);
add_wait_queue(&nlk->wait, &wait);
if ((atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf ||
test_bit(NETLINK_S_CONGESTED, &nlk->state)) &&
!sock_flag(sk, SOCK_DEAD))
*timeo = schedule_timeout(*timeo);
__set_current_state(TASK_RUNNING);
remove_wait_queue(&nlk->wait, &wait);
sock_put(sk);
if (signal_pending(current)) {
kfree_skb(skb);
return sock_intr_errno(*timeo);
}
return 1;
}
netlink_skb_set_owner_r(skb, sk);
return 0;
}
|
int netlink_attachskb(struct sock *sk, struct sk_buff *skb,
long *timeo, struct sock *ssk)
{
struct netlink_sock *nlk;
nlk = nlk_sk(sk);
if ((atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf ||
test_bit(NETLINK_S_CONGESTED, &nlk->state))) {
DECLARE_WAITQUEUE(wait, current);
if (!*timeo) {
if (!ssk || netlink_is_kernel(ssk))
netlink_overrun(sk);
sock_put(sk);
kfree_skb(skb);
return -EAGAIN;
}
__set_current_state(TASK_INTERRUPTIBLE);
add_wait_queue(&nlk->wait, &wait);
if ((atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf ||
test_bit(NETLINK_S_CONGESTED, &nlk->state)) &&
!sock_flag(sk, SOCK_DEAD))
*timeo = schedule_timeout(*timeo);
__set_current_state(TASK_RUNNING);
remove_wait_queue(&nlk->wait, &wait);
sock_put(sk);
if (signal_pending(current)) {
kfree_skb(skb);
return sock_intr_errno(*timeo);
}
return 1;
}
netlink_skb_set_owner_r(skb, sk);
return 0;
}
|
C
|
linux
| 0 |
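The netlink fix above snapshots cb->skb (and the module pointer) while still holding the lock, so a dump started concurrently cannot swap the field and have its skb freed instead. A minimal userspace sketch of the save-under-lock pattern (generic names, not the kernel code):

```c
/* Minimal sketch (generic names, not the kernel code) of the fix
 * pattern: snapshot the pointer you intend to free while still
 * holding the lock, then free the snapshot after unlocking, so a
 * concurrent producer cannot swap the field underneath you. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct ctx {
    pthread_mutex_t lock;
    char *skb;                   /* stand-in for cb->skb */
};

void finish_dump(struct ctx *c) {
    pthread_mutex_lock(&c->lock);
    char *to_free = c->skb;      /* save under the lock */
    c->skb = NULL;
    pthread_mutex_unlock(&c->lock);
    free(to_free);               /* safe: nobody else sees this pointer */
}

int main(void) {
    struct ctx c = { PTHREAD_MUTEX_INITIALIZER, malloc(16) };
    finish_dump(&c);
    puts("freed our own buffer, not a successor's");
    return 0;
}
```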
CVE-2011-4930
|
https://www.cvedetails.com/cve/CVE-2011-4930/
|
CWE-134
|
https://htcondor-git.cs.wisc.edu/?p=condor.git;a=commitdiff;h=5e5571d1a431eb3c61977b6dd6ec90186ef79867
|
5e5571d1a431eb3c61977b6dd6ec90186ef79867
| null |
GahpClient::condor_job_stage_in(const char *schedd_name, ClassAd *job_ad)
{
static const char* command = "CONDOR_JOB_STAGE_IN";
MyString ad_string;
if (server->m_commands_supported->contains_anycase(command)==FALSE) {
return GAHPCLIENT_COMMAND_NOT_SUPPORTED;
}
if (!schedd_name) schedd_name=NULLSTRING;
if (!job_ad) {
ad_string=NULLSTRING;
} else {
if ( useXMLClassads ) {
ClassAdXMLUnparser unparser;
unparser.SetUseCompactSpacing( true );
unparser.SetOutputType( false );
unparser.SetOutputTargetType( false );
unparser.Unparse( job_ad, ad_string );
} else {
NewClassAdUnparser unparser;
unparser.SetUseCompactSpacing( true );
unparser.SetOutputType( false );
unparser.SetOutputTargetType( false );
unparser.Unparse( job_ad, ad_string );
}
}
std::string reqline;
char *esc1 = strdup( escapeGahpString(schedd_name) );
char *esc2 = strdup( escapeGahpString(ad_string.Value()) );
int x = sprintf(reqline, "%s %s", esc1, esc2);
free( esc1 );
free( esc2 );
ASSERT( x > 0 );
const char *buf = reqline.c_str();
if ( !is_pending(command,buf) ) {
if ( m_mode == results_only ) {
return GAHPCLIENT_COMMAND_NOT_SUBMITTED;
}
now_pending(command,buf,deleg_proxy);
}
Gahp_Args* result = get_pending_result(command,buf);
if ( result ) {
if (result->argc != 3) {
EXCEPT("Bad %s Result",command);
}
int rc = 1;
if ( result->argv[1][0] == 'S' ) {
rc = 0;
}
if ( strcasecmp(result->argv[2], NULLSTRING) ) {
error_string = result->argv[2];
} else {
error_string = "";
}
delete result;
return rc;
}
if ( check_pending_timeout(command,buf) ) {
sprintf( error_string, "%s timed out", command );
return GAHPCLIENT_COMMAND_TIMED_OUT;
}
return GAHPCLIENT_COMMAND_PENDING;
}
|
GahpClient::condor_job_stage_in(const char *schedd_name, ClassAd *job_ad)
{
static const char* command = "CONDOR_JOB_STAGE_IN";
MyString ad_string;
if (server->m_commands_supported->contains_anycase(command)==FALSE) {
return GAHPCLIENT_COMMAND_NOT_SUPPORTED;
}
if (!schedd_name) schedd_name=NULLSTRING;
if (!job_ad) {
ad_string=NULLSTRING;
} else {
if ( useXMLClassads ) {
ClassAdXMLUnparser unparser;
unparser.SetUseCompactSpacing( true );
unparser.SetOutputType( false );
unparser.SetOutputTargetType( false );
unparser.Unparse( job_ad, ad_string );
} else {
NewClassAdUnparser unparser;
unparser.SetUseCompactSpacing( true );
unparser.SetOutputType( false );
unparser.SetOutputTargetType( false );
unparser.Unparse( job_ad, ad_string );
}
}
std::string reqline;
char *esc1 = strdup( escapeGahpString(schedd_name) );
char *esc2 = strdup( escapeGahpString(ad_string.Value()) );
int x = sprintf(reqline, "%s %s", esc1, esc2);
free( esc1 );
free( esc2 );
ASSERT( x > 0 );
const char *buf = reqline.c_str();
if ( !is_pending(command,buf) ) {
if ( m_mode == results_only ) {
return GAHPCLIENT_COMMAND_NOT_SUBMITTED;
}
now_pending(command,buf,deleg_proxy);
}
Gahp_Args* result = get_pending_result(command,buf);
if ( result ) {
if (result->argc != 3) {
EXCEPT("Bad %s Result",command);
}
int rc = 1;
if ( result->argv[1][0] == 'S' ) {
rc = 0;
}
if ( strcasecmp(result->argv[2], NULLSTRING) ) {
error_string = result->argv[2];
} else {
error_string = "";
}
delete result;
return rc;
}
if ( check_pending_timeout(command,buf) ) {
sprintf( error_string, "%s timed out", command );
return GAHPCLIENT_COMMAND_TIMED_OUT;
}
return GAHPCLIENT_COMMAND_PENDING;
}
|
CPP
|
htcondor
| 0 |
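This row is classified CWE-134 (uncontrolled format string). As a generic illustration of that bug class rather than the specific condor code path, compare passing untrusted text as the format string with passing it as an argument to a fixed format:

```c
/* Generic CWE-134 illustration (not the specific condor code path):
 * passing attacker-influenced text as the format string lets
 * conversion specifiers like %s or %n be interpreted; always pass a
 * fixed format and supply the text as an argument. */
#include <stdio.h>

void log_error_bad(const char *msg) {
    char buf[128];
    snprintf(buf, sizeof(buf), msg);          /* BAD: msg is the format */
    fputs(buf, stderr);
}

void log_error_good(const char *msg) {
    char buf[128];
    snprintf(buf, sizeof(buf), "%s", msg);    /* GOOD: fixed format */
    fputs(buf, stderr);
}

int main(void) {
    /* Specifiers in the text are printed literally, not interpreted. */
    log_error_good("CONDOR_JOB_STAGE_IN timed out %x %x\n");
    return 0;
}
```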
CVE-2014-0131
|
https://www.cvedetails.com/cve/CVE-2014-0131/
|
CWE-416
|
https://github.com/torvalds/linux/commit/1fd819ecb90cc9b822cd84d3056ddba315d3340f
|
1fd819ecb90cc9b822cd84d3056ddba315d3340f
|
skbuff: skb_segment: orphan frags before copying
skb_segment copies frags around, so we need
to copy them carefully to avoid accessing
user memory after reporting completion to userspace
through a callback.
skb_segment doesn't normally happen on datapath:
TSO needs to be disabled - so disabling zero copy
in this case does not look like a big deal.
Signed-off-by: Michael S. Tsirkin <[email protected]>
Acked-by: Herbert Xu <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
gfp_t gfp_mask)
{
int i;
u8 *data;
int size = nhead + skb_end_offset(skb) + ntail;
long off;
BUG_ON(nhead < 0);
if (skb_shared(skb))
BUG();
size = SKB_DATA_ALIGN(size);
if (skb_pfmemalloc(skb))
gfp_mask |= __GFP_MEMALLOC;
data = kmalloc_reserve(size + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)),
gfp_mask, NUMA_NO_NODE, NULL);
if (!data)
goto nodata;
size = SKB_WITH_OVERHEAD(ksize(data));
/* Copy only real data... and, alas, header. This should be
* optimized for the cases when header is void.
*/
memcpy(data + nhead, skb->head, skb_tail_pointer(skb) - skb->head);
memcpy((struct skb_shared_info *)(data + size),
skb_shinfo(skb),
offsetof(struct skb_shared_info, frags[skb_shinfo(skb)->nr_frags]));
/*
* if shinfo is shared we must drop the old head gracefully, but if it
* is not we can just drop the old head and let the existing refcount
* be since all we did is relocate the values
*/
if (skb_cloned(skb)) {
/* copy this zero copy skb frags */
if (skb_orphan_frags(skb, gfp_mask))
goto nofrags;
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
skb_frag_ref(skb, i);
if (skb_has_frag_list(skb))
skb_clone_fraglist(skb);
skb_release_data(skb);
} else {
skb_free_head(skb);
}
off = (data + nhead) - skb->head;
skb->head = data;
skb->head_frag = 0;
skb->data += off;
#ifdef NET_SKBUFF_DATA_USES_OFFSET
skb->end = size;
off = nhead;
#else
skb->end = skb->head + size;
#endif
skb->tail += off;
skb_headers_offset_update(skb, nhead);
skb->cloned = 0;
skb->hdr_len = 0;
skb->nohdr = 0;
atomic_set(&skb_shinfo(skb)->dataref, 1);
return 0;
nofrags:
kfree(data);
nodata:
return -ENOMEM;
}
|
int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
gfp_t gfp_mask)
{
int i;
u8 *data;
int size = nhead + skb_end_offset(skb) + ntail;
long off;
BUG_ON(nhead < 0);
if (skb_shared(skb))
BUG();
size = SKB_DATA_ALIGN(size);
if (skb_pfmemalloc(skb))
gfp_mask |= __GFP_MEMALLOC;
data = kmalloc_reserve(size + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)),
gfp_mask, NUMA_NO_NODE, NULL);
if (!data)
goto nodata;
size = SKB_WITH_OVERHEAD(ksize(data));
/* Copy only real data... and, alas, header. This should be
* optimized for the cases when header is void.
*/
memcpy(data + nhead, skb->head, skb_tail_pointer(skb) - skb->head);
memcpy((struct skb_shared_info *)(data + size),
skb_shinfo(skb),
offsetof(struct skb_shared_info, frags[skb_shinfo(skb)->nr_frags]));
/*
* if shinfo is shared we must drop the old head gracefully, but if it
* is not we can just drop the old head and let the existing refcount
* be since all we did is relocate the values
*/
if (skb_cloned(skb)) {
/* copy this zero copy skb frags */
if (skb_orphan_frags(skb, gfp_mask))
goto nofrags;
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
skb_frag_ref(skb, i);
if (skb_has_frag_list(skb))
skb_clone_fraglist(skb);
skb_release_data(skb);
} else {
skb_free_head(skb);
}
off = (data + nhead) - skb->head;
skb->head = data;
skb->head_frag = 0;
skb->data += off;
#ifdef NET_SKBUFF_DATA_USES_OFFSET
skb->end = size;
off = nhead;
#else
skb->end = skb->head + size;
#endif
skb->tail += off;
skb_headers_offset_update(skb, nhead);
skb->cloned = 0;
skb->hdr_len = 0;
skb->nohdr = 0;
atomic_set(&skb_shinfo(skb)->dataref, 1);
return 0;
nofrags:
kfree(data);
nodata:
return -ENOMEM;
}
|
C
|
linux
| 0 |
CVE-2018-15572
|
https://www.cvedetails.com/cve/CVE-2018-15572/
| null |
https://github.com/torvalds/linux/commit/fdf82a7856b32d905c39afc85e34364491e46346
|
fdf82a7856b32d905c39afc85e34364491e46346
|
x86/speculation: Protect against userspace-userspace spectreRSB
The article "Spectre Returns! Speculation Attacks using the Return Stack
Buffer" [1] describes two new (sub-)variants of spectrev2-like attacks,
making use solely of the RSB contents even on CPUs that don't fall back to
BTB on RSB underflow (Skylake+).
Mitigate userspace-userspace attacks by always unconditionally filling RSB on
context switch when the generic spectrev2 mitigation has been enabled.
[1] https://arxiv.org/pdf/1807.07940.pdf
Signed-off-by: Jiri Kosina <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Josh Poimboeuf <[email protected]>
Acked-by: Tim Chen <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: David Woodhouse <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
|
static void __init spectre_v2_select_mitigation(void)
{
enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
enum spectre_v2_mitigation mode = SPECTRE_V2_NONE;
/*
* If the CPU is not affected and the command line mode is NONE or AUTO
* then nothing to do.
*/
if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2) &&
(cmd == SPECTRE_V2_CMD_NONE || cmd == SPECTRE_V2_CMD_AUTO))
return;
switch (cmd) {
case SPECTRE_V2_CMD_NONE:
return;
case SPECTRE_V2_CMD_FORCE:
case SPECTRE_V2_CMD_AUTO:
if (IS_ENABLED(CONFIG_RETPOLINE))
goto retpoline_auto;
break;
case SPECTRE_V2_CMD_RETPOLINE_AMD:
if (IS_ENABLED(CONFIG_RETPOLINE))
goto retpoline_amd;
break;
case SPECTRE_V2_CMD_RETPOLINE_GENERIC:
if (IS_ENABLED(CONFIG_RETPOLINE))
goto retpoline_generic;
break;
case SPECTRE_V2_CMD_RETPOLINE:
if (IS_ENABLED(CONFIG_RETPOLINE))
goto retpoline_auto;
break;
}
pr_err("Spectre mitigation: kernel not compiled with retpoline; no mitigation available!");
return;
retpoline_auto:
if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) {
retpoline_amd:
if (!boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) {
pr_err("Spectre mitigation: LFENCE not serializing, switching to generic retpoline\n");
goto retpoline_generic;
}
mode = retp_compiler() ? SPECTRE_V2_RETPOLINE_AMD :
SPECTRE_V2_RETPOLINE_MINIMAL_AMD;
setup_force_cpu_cap(X86_FEATURE_RETPOLINE_AMD);
setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
} else {
retpoline_generic:
mode = retp_compiler() ? SPECTRE_V2_RETPOLINE_GENERIC :
SPECTRE_V2_RETPOLINE_MINIMAL;
setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
}
spectre_v2_enabled = mode;
pr_info("%s\n", spectre_v2_strings[mode]);
/*
* If spectre v2 protection has been enabled, unconditionally fill
* RSB during a context switch; this protects against two independent
* issues:
*
* - RSB underflow (and switch to BTB) on Skylake+
* - SpectreRSB variant of spectre v2 on X86_BUG_SPECTRE_V2 CPUs
*/
setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
/* Initialize Indirect Branch Prediction Barrier if supported */
if (boot_cpu_has(X86_FEATURE_IBPB)) {
setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
}
/*
* Retpoline means the kernel is safe because it has no indirect
* branches. But firmware isn't, so use IBRS to protect that.
*/
if (boot_cpu_has(X86_FEATURE_IBRS)) {
setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
pr_info("Enabling Restricted Speculation for firmware calls\n");
}
}
|
static void __init spectre_v2_select_mitigation(void)
{
enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
enum spectre_v2_mitigation mode = SPECTRE_V2_NONE;
/*
* If the CPU is not affected and the command line mode is NONE or AUTO
* then nothing to do.
*/
if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2) &&
(cmd == SPECTRE_V2_CMD_NONE || cmd == SPECTRE_V2_CMD_AUTO))
return;
switch (cmd) {
case SPECTRE_V2_CMD_NONE:
return;
case SPECTRE_V2_CMD_FORCE:
case SPECTRE_V2_CMD_AUTO:
if (IS_ENABLED(CONFIG_RETPOLINE))
goto retpoline_auto;
break;
case SPECTRE_V2_CMD_RETPOLINE_AMD:
if (IS_ENABLED(CONFIG_RETPOLINE))
goto retpoline_amd;
break;
case SPECTRE_V2_CMD_RETPOLINE_GENERIC:
if (IS_ENABLED(CONFIG_RETPOLINE))
goto retpoline_generic;
break;
case SPECTRE_V2_CMD_RETPOLINE:
if (IS_ENABLED(CONFIG_RETPOLINE))
goto retpoline_auto;
break;
}
pr_err("Spectre mitigation: kernel not compiled with retpoline; no mitigation available!");
return;
retpoline_auto:
if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) {
retpoline_amd:
if (!boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) {
pr_err("Spectre mitigation: LFENCE not serializing, switching to generic retpoline\n");
goto retpoline_generic;
}
mode = retp_compiler() ? SPECTRE_V2_RETPOLINE_AMD :
SPECTRE_V2_RETPOLINE_MINIMAL_AMD;
setup_force_cpu_cap(X86_FEATURE_RETPOLINE_AMD);
setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
} else {
retpoline_generic:
mode = retp_compiler() ? SPECTRE_V2_RETPOLINE_GENERIC :
SPECTRE_V2_RETPOLINE_MINIMAL;
setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
}
spectre_v2_enabled = mode;
pr_info("%s\n", spectre_v2_strings[mode]);
/*
* If neither SMEP nor PTI are available, there is a risk of
* hitting userspace addresses in the RSB after a context switch
* from a shallow call stack to a deeper one. To prevent this fill
* the entire RSB, even when using IBRS.
*
* Skylake era CPUs have a separate issue with *underflow* of the
* RSB, when they will predict 'ret' targets from the generic BTB.
* The proper mitigation for this is IBRS. If IBRS is not supported
* or deactivated in favour of retpolines the RSB fill on context
* switch is required.
*/
if ((!boot_cpu_has(X86_FEATURE_PTI) &&
!boot_cpu_has(X86_FEATURE_SMEP)) || is_skylake_era()) {
setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
pr_info("Spectre v2 mitigation: Filling RSB on context switch\n");
}
/* Initialize Indirect Branch Prediction Barrier if supported */
if (boot_cpu_has(X86_FEATURE_IBPB)) {
setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
}
/*
* Retpoline means the kernel is safe because it has no indirect
* branches. But firmware isn't, so use IBRS to protect that.
*/
if (boot_cpu_has(X86_FEATURE_IBRS)) {
setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
pr_info("Enabling Restricted Speculation for firmware calls\n");
}
}
|
C
|
linux
| 1 |
CVE-2016-2550
|
https://www.cvedetails.com/cve/CVE-2016-2550/
|
CWE-399
|
https://github.com/torvalds/linux/commit/415e3d3e90ce9e18727e8843ae343eda5a58fad6
|
415e3d3e90ce9e18727e8843ae343eda5a58fad6
|
unix: correctly track in-flight fds in sending process user_struct
The commit referenced in the Fixes tag incorrectly accounted the number
of in-flight fds over a unix domain socket to the original opener
of the file-descriptor. This allows another process to arbitrary
deplete the original file-openers resource limit for the maximum of
open files. Instead the sending processes and its struct cred should
be credited.
To do so, we add a reference counted struct user_struct pointer to the
scm_fp_list and use it to account for the number of inflight unix fds.
Fixes: 712f4aad406bb1 ("unix: properly account for FDs passed over unix sockets")
Reported-by: David Herrmann <[email protected]>
Cc: David Herrmann <[email protected]>
Cc: Willy Tarreau <[email protected]>
Cc: Linus Torvalds <[email protected]>
Suggested-by: Linus Torvalds <[email protected]>
Signed-off-by: Hannes Frederic Sowa <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
static bool unix_skb_scm_eq(struct sk_buff *skb,
struct scm_cookie *scm)
{
const struct unix_skb_parms *u = &UNIXCB(skb);
return u->pid == scm->pid &&
uid_eq(u->uid, scm->creds.uid) &&
gid_eq(u->gid, scm->creds.gid) &&
unix_secdata_eq(scm, skb);
}
|
static bool unix_skb_scm_eq(struct sk_buff *skb,
struct scm_cookie *scm)
{
const struct unix_skb_parms *u = &UNIXCB(skb);
return u->pid == scm->pid &&
uid_eq(u->uid, scm->creds.uid) &&
gid_eq(u->gid, scm->creds.gid) &&
unix_secdata_eq(scm, skb);
}
|
C
|
linux
| 0 |
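The commit above moves in-flight fd accounting onto a reference-counted user_struct pointer stored in scm_fp_list, so the sender is charged rather than the original opener. A simplified sketch of charging and releasing against the right principal (not the kernel code):

```c
/* Simplified sketch (not the kernel code) of the accounting change:
 * charge the in-flight fd count to the *sending* user's counter and
 * keep a reference on that user in the in-flight list, instead of
 * charging whoever originally opened the file. */
#include <stdio.h>

struct user_struct { int refcount; int unix_inflight; };

struct scm_fp_list_like {
    struct user_struct *user;    /* added: who to credit on cleanup */
    int count;
};

static void charge_inflight(struct scm_fp_list_like *fpl,
                            struct user_struct *sender, int nfds) {
    sender->refcount++;          /* pin the sender's user_struct */
    sender->unix_inflight += nfds;
    fpl->user = sender;
    fpl->count = nfds;
}

static void uncharge_inflight(struct scm_fp_list_like *fpl) {
    fpl->user->unix_inflight -= fpl->count;
    fpl->user->refcount--;       /* drop our reference */
}

int main(void) {
    struct user_struct sender = { 1, 0 };
    struct scm_fp_list_like fpl;
    charge_inflight(&fpl, &sender, 3);
    uncharge_inflight(&fpl);
    printf("inflight=%d refs=%d\n", sender.unix_inflight, sender.refcount);
    return 0;
}
```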
CVE-2016-2451
|
https://www.cvedetails.com/cve/CVE-2016-2451/
|
CWE-264
|
https://android.googlesource.com/platform/frameworks/av/+/f9ed2fe6d61259e779a37d4c2d7edb33a1c1f8ba
|
f9ed2fe6d61259e779a37d4c2d7edb33a1c1f8ba
|
Add VPX output buffer size check
and handle dead observers more gracefully
Bug: 27597103
Change-Id: Id7acb25d5ef69b197da15ec200a9e4f9e7b03518
|
status_t OMX::getGraphicBufferUsage(
node_id node, OMX_U32 port_index, OMX_U32* usage) {
return findInstance(node)->getGraphicBufferUsage(port_index, usage);
}
|
status_t OMX::getGraphicBufferUsage(
node_id node, OMX_U32 port_index, OMX_U32* usage) {
return findInstance(node)->getGraphicBufferUsage(port_index, usage);
}
|
C
|
Android
| 0 |
CVE-2018-1066
|
https://www.cvedetails.com/cve/CVE-2018-1066/
|
CWE-476
|
https://github.com/torvalds/linux/commit/cabfb3680f78981d26c078a26e5c748531257ebb
|
cabfb3680f78981d26c078a26e5c748531257ebb
|
CIFS: Enable encryption during session setup phase
In order to allow encryption on SMB connection we need to exchange
a session key and generate encryption and decryption keys.
Signed-off-by: Pavel Shilovsky <[email protected]>
|
SMB2_rmdir(const unsigned int xid, struct cifs_tcon *tcon,
u64 persistent_fid, u64 volatile_fid)
{
__u8 delete_pending = 1;
void *data;
unsigned int size;
data = &delete_pending;
size = 1; /* sizeof __u8 */
return send_set_info(xid, tcon, persistent_fid, volatile_fid,
current->tgid, FILE_DISPOSITION_INFORMATION, 1, &data,
&size);
}
|
SMB2_rmdir(const unsigned int xid, struct cifs_tcon *tcon,
u64 persistent_fid, u64 volatile_fid)
{
__u8 delete_pending = 1;
void *data;
unsigned int size;
data = &delete_pending;
size = 1; /* sizeof __u8 */
return send_set_info(xid, tcon, persistent_fid, volatile_fid,
current->tgid, FILE_DISPOSITION_INFORMATION, 1, &data,
&size);
}
|
C
|
linux
| 0 |
CVE-2016-10044
|
https://www.cvedetails.com/cve/CVE-2016-10044/
|
CWE-264
|
https://github.com/torvalds/linux/commit/22f6b4d34fcf039c63a94e7670e0da24f8575a5a
|
22f6b4d34fcf039c63a94e7670e0da24f8575a5a
|
aio: mark AIO pseudo-fs noexec
This ensures that do_mmap() won't implicitly make AIO memory mappings
executable if the READ_IMPLIES_EXEC personality flag is set. Such
behavior is problematic because the security_mmap_file LSM hook doesn't
catch this case, potentially permitting an attacker to bypass a W^X
policy enforced by SELinux.
I have tested the patch on my machine.
To test the behavior, compile and run this:
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/personality.h>
#include <linux/aio_abi.h>
#include <err.h>
#include <stdlib.h>
#include <stdio.h>
#include <sys/syscall.h>
int main(void) {
personality(READ_IMPLIES_EXEC);
aio_context_t ctx = 0;
if (syscall(__NR_io_setup, 1, &ctx))
err(1, "io_setup");
char cmd[1000];
sprintf(cmd, "cat /proc/%d/maps | grep -F '/[aio]'",
(int)getpid());
system(cmd);
return 0;
}
In the output, "rw-s" is good, "rwxs" is bad.
Signed-off-by: Jann Horn <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
void exit_aio(struct mm_struct *mm)
{
struct kioctx_table *table = rcu_dereference_raw(mm->ioctx_table);
struct ctx_rq_wait wait;
int i, skipped;
if (!table)
return;
atomic_set(&wait.count, table->nr);
init_completion(&wait.comp);
skipped = 0;
for (i = 0; i < table->nr; ++i) {
struct kioctx *ctx = table->table[i];
if (!ctx) {
skipped++;
continue;
}
/*
* We don't need to bother with munmap() here - exit_mmap(mm)
* is coming and it'll unmap everything. And we simply can't,
* this is not necessarily our ->mm.
* Since kill_ioctx() uses non-zero ->mmap_size as indicator
* that it needs to unmap the area, just set it to 0.
*/
ctx->mmap_size = 0;
kill_ioctx(mm, ctx, &wait);
}
if (!atomic_sub_and_test(skipped, &wait.count)) {
/* Wait until all IO for the context are done. */
wait_for_completion(&wait.comp);
}
RCU_INIT_POINTER(mm->ioctx_table, NULL);
kfree(table);
}
|
void exit_aio(struct mm_struct *mm)
{
struct kioctx_table *table = rcu_dereference_raw(mm->ioctx_table);
struct ctx_rq_wait wait;
int i, skipped;
if (!table)
return;
atomic_set(&wait.count, table->nr);
init_completion(&wait.comp);
skipped = 0;
for (i = 0; i < table->nr; ++i) {
struct kioctx *ctx = table->table[i];
if (!ctx) {
skipped++;
continue;
}
/*
* We don't need to bother with munmap() here - exit_mmap(mm)
* is coming and it'll unmap everything. And we simply can't,
* this is not necessarily our ->mm.
* Since kill_ioctx() uses non-zero ->mmap_size as indicator
* that it needs to unmap the area, just set it to 0.
*/
ctx->mmap_size = 0;
kill_ioctx(mm, ctx, &wait);
}
if (!atomic_sub_and_test(skipped, &wait.count)) {
/* Wait until all IO for the context are done. */
wait_for_completion(&wait.comp);
}
RCU_INIT_POINTER(mm->ioctx_table, NULL);
kfree(table);
}
|
C
|
linux
| 0 |
CVE-2017-5547
|
https://www.cvedetails.com/cve/CVE-2017-5547/
|
CWE-119
|
https://github.com/torvalds/linux/commit/6d104af38b570d37aa32a5803b04c354f8ed513d
|
6d104af38b570d37aa32a5803b04c354f8ed513d
|
HID: corsair: fix DMA buffers on stack
Not all platforms support DMA to the stack, and specifically since v4.9
this is no longer supported on x86 with VMAP_STACK either.
Note that the macro-mode buffer was larger than necessary.
Fixes: 6f78193ee9ea ("HID: corsair: Add Corsair Vengeance K90 driver")
Cc: stable <[email protected]>
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Jiri Kosina <[email protected]>
|
static void k90_backlight_work(struct work_struct *work)
{
int ret;
struct k90_led *led = container_of(work, struct k90_led, work);
struct device *dev;
struct usb_interface *usbif;
struct usb_device *usbdev;
if (led->removed)
return;
dev = led->cdev.dev->parent;
usbif = to_usb_interface(dev->parent);
usbdev = interface_to_usbdev(usbif);
ret = usb_control_msg(usbdev, usb_sndctrlpipe(usbdev, 0),
K90_REQUEST_BRIGHTNESS,
USB_DIR_OUT | USB_TYPE_VENDOR |
USB_RECIP_DEVICE, led->brightness, 0,
NULL, 0, USB_CTRL_SET_TIMEOUT);
if (ret != 0)
dev_warn(dev, "Failed to set backlight brightness (error: %d).\n",
ret);
}
|
static void k90_backlight_work(struct work_struct *work)
{
int ret;
struct k90_led *led = container_of(work, struct k90_led, work);
struct device *dev;
struct usb_interface *usbif;
struct usb_device *usbdev;
if (led->removed)
return;
dev = led->cdev.dev->parent;
usbif = to_usb_interface(dev->parent);
usbdev = interface_to_usbdev(usbif);
ret = usb_control_msg(usbdev, usb_sndctrlpipe(usbdev, 0),
K90_REQUEST_BRIGHTNESS,
USB_DIR_OUT | USB_TYPE_VENDOR |
USB_RECIP_DEVICE, led->brightness, 0,
NULL, 0, USB_CTRL_SET_TIMEOUT);
if (ret != 0)
dev_warn(dev, "Failed to set backlight brightness (error: %d).\n",
ret);
}
|
C
|
linux
| 0 |
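The corsair fix above replaces on-stack USB transfer buffers with heap allocations, since DMA to the stack is unsupported on some platforms and fails on x86 with VMAP_STACK. A hedged, userspace-compilable sketch of the pattern (kmalloc and the USB call are stubbed here; this is not the driver diff):

```c
/* Hedged sketch (not the actual driver diff) of the fix pattern:
 * allocate the transfer buffer on the heap so it is DMA-able,
 * instead of passing a pointer to an on-stack array. kmalloc and
 * the USB transfer are stubbed for a self-contained example. */
#include <stdlib.h>
#include <string.h>

#define GFP_KERNEL 0
static void *kmalloc(size_t n, int flags) { (void)flags; return malloc(n); }
static void kfree(void *p) { free(p); }
static int usb_control_msg_stub(void *buf, int len) { (void)buf; return len; }

int send_report(const char *data, int len) {
    /* BAD (pre-fix): char buf[8]; ...transfer(buf)... -- DMA to the
     * stack is not supported everywhere and breaks with VMAP_STACK. */
    char *buf = kmalloc(len, GFP_KERNEL);   /* GOOD: heap buffer */
    if (!buf)
        return -1;
    memcpy(buf, data, len);
    int ret = usb_control_msg_stub(buf, len);
    kfree(buf);
    return ret < 0 ? ret : 0;
}

int main(void) { return send_report("\x01\x02", 2) ? 1 : 0; }
```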
CVE-2018-13006
|
https://www.cvedetails.com/cve/CVE-2018-13006/
|
CWE-125
|
https://github.com/gpac/gpac/commit/bceb03fd2be95097a7b409ea59914f332fb6bc86
|
bceb03fd2be95097a7b409ea59914f332fb6bc86
|
fixed 2 possible heap overflows (inc. #1088)
|
GF_Err uuid_dump(GF_Box *a, FILE * trace)
{
gf_isom_box_dump_start(a, "UUIDBox", trace);
fprintf(trace, ">\n");
gf_isom_box_dump_done("UUIDBox", a, trace);
return GF_OK;
}
|
GF_Err uuid_dump(GF_Box *a, FILE * trace)
{
gf_isom_box_dump_start(a, "UUIDBox", trace);
fprintf(trace, ">\n");
gf_isom_box_dump_done("UUIDBox", a, trace);
return GF_OK;
}
|
C
|
gpac
| 0 |
CVE-2013-0892
|
https://www.cvedetails.com/cve/CVE-2013-0892/
| null |
https://github.com/chromium/chromium/commit/da5e5f78f02bc0af5ddc5694090defbef7853af1
|
da5e5f78f02bc0af5ddc5694090defbef7853af1
|
DevTools: remove references to modules/device_orientation from core
BUG=340221
Review URL: https://codereview.chromium.org/150913003
git-svn-id: svn://svn.chromium.org/blink/trunk@166493 bbb929c8-8fbe-4397-9dbb-9b2b20218538
|
void InspectorPageAgent::didClearWindowObjectInMainWorld(Frame* frame)
{
if (frame == m_page->mainFrame())
m_injectedScriptManager->discardInjectedScripts();
if (!m_frontend)
return;
RefPtr<JSONObject> scripts = m_state->getObject(PageAgentState::pageAgentScriptsToEvaluateOnLoad);
if (scripts) {
JSONObject::const_iterator end = scripts->end();
for (JSONObject::const_iterator it = scripts->begin(); it != end; ++it) {
String scriptText;
if (it->value->asString(&scriptText))
frame->script().executeScriptInMainWorld(scriptText);
}
}
if (!m_scriptToEvaluateOnLoadOnce.isEmpty())
frame->script().executeScriptInMainWorld(m_scriptToEvaluateOnLoadOnce);
}
|
void InspectorPageAgent::didClearWindowObjectInMainWorld(Frame* frame)
{
if (frame == m_page->mainFrame())
m_injectedScriptManager->discardInjectedScripts();
if (!m_frontend)
return;
RefPtr<JSONObject> scripts = m_state->getObject(PageAgentState::pageAgentScriptsToEvaluateOnLoad);
if (scripts) {
JSONObject::const_iterator end = scripts->end();
for (JSONObject::const_iterator it = scripts->begin(); it != end; ++it) {
String scriptText;
if (it->value->asString(&scriptText))
frame->script().executeScriptInMainWorld(scriptText);
}
}
if (!m_scriptToEvaluateOnLoadOnce.isEmpty())
frame->script().executeScriptInMainWorld(m_scriptToEvaluateOnLoadOnce);
}
|
C
|
Chrome
| 0 |
CVE-2017-14170
|
https://www.cvedetails.com/cve/CVE-2017-14170/
|
CWE-834
|
https://github.com/FFmpeg/FFmpeg/commit/900f39692ca0337a98a7cf047e4e2611071810c2
|
900f39692ca0337a98a7cf047e4e2611071810c2
|
avformat/mxfdec: Fix DoS issues in mxf_read_index_entry_array()
Fixes: 20170829A.mxf
Co-Author: 张洪亮(望初) <[email protected]>
Found-by: Xiaohei and Wangchu from Alibaba Security Team
Signed-off-by: Michael Niedermayer <[email protected]>
|
static void mxf_handle_small_eubc(AVFormatContext *s)
{
MXFContext *mxf = s->priv_data;
/* assuming non-OPAtom == frame wrapped
* no sane writer would wrap 2 byte PCM packets with 20 byte headers.. */
AVStream *st = mxf_get_opatom_stream(mxf);
if (!st)
return;
/* expect PCM with exactly one index table segment and a small (< 32) EUBC */
if (st->codecpar->codec_type != AVMEDIA_TYPE_AUDIO ||
!is_pcm(st->codecpar->codec_id) ||
mxf->nb_index_tables != 1 ||
mxf->index_tables[0].nb_segments != 1 ||
mxf->index_tables[0].segments[0]->edit_unit_byte_count >= 32)
return;
/* arbitrarily default to 48 kHz PAL audio frame size */
/* TODO: We could compute this from the ratio between the audio
* and video edit rates for 48 kHz NTSC we could use the
* 1802-1802-1802-1802-1801 pattern. */
mxf->edit_units_per_packet = 1920;
}
|
static void mxf_handle_small_eubc(AVFormatContext *s)
{
MXFContext *mxf = s->priv_data;
/* assuming non-OPAtom == frame wrapped
* no sane writer would wrap 2 byte PCM packets with 20 byte headers.. */
AVStream *st = mxf_get_opatom_stream(mxf);
if (!st)
return;
/* expect PCM with exactly one index table segment and a small (< 32) EUBC */
if (st->codecpar->codec_type != AVMEDIA_TYPE_AUDIO ||
!is_pcm(st->codecpar->codec_id) ||
mxf->nb_index_tables != 1 ||
mxf->index_tables[0].nb_segments != 1 ||
mxf->index_tables[0].segments[0]->edit_unit_byte_count >= 32)
return;
/* arbitrarily default to 48 kHz PAL audio frame size */
/* TODO: We could compute this from the ratio between the audio
* and video edit rates for 48 kHz NTSC we could use the
* 1802-1802-1802-1802-1801 pattern. */
mxf->edit_units_per_packet = 1920;
}
|
C
|
FFmpeg
| 0 |
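The FFmpeg fix above bounds the index entry count read from the file before allocating and looping, so a crafted MXF cannot drive an effectively unbounded loop. A hedged sketch of that validation pattern (generic names and the per-entry size are assumptions, not the actual patch):

```c
/* Hedged sketch (generic names, not the actual FFmpeg patch) of the
 * DoS fix pattern: validate an element count read from the file
 * against the bytes actually available before allocating and looping. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define ENTRY_SIZE 11  /* illustrative per-entry byte cost, an assumption */

static int read_index_entries(uint32_t nb_entries, size_t bytes_left,
                              uint64_t **out) {
    /* A 32-bit count can claim billions of entries; cap it by what
     * the remaining input could possibly contain. */
    if ((uint64_t)nb_entries * ENTRY_SIZE > bytes_left)
        return -1;
    *out = calloc(nb_entries, sizeof(**out));
    if (!*out && nb_entries)
        return -1;
    for (uint32_t i = 0; i < nb_entries; i++)
        (*out)[i] = i;          /* parse one entry here */
    return 0;
}

int main(void) {
    uint64_t *entries = NULL;
    /* claims ~4 billion entries but only 1000 bytes remain: rejected */
    if (read_index_entries(0xffffffffu, 1000, &entries) < 0)
        puts("bogus index entry count rejected");
    free(entries);
    return 0;
}
```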
null | null | null |
https://github.com/chromium/chromium/commit/b9e2ecab97a8a7f3cce06951ab92a3eaef559206
|
b9e2ecab97a8a7f3cce06951ab92a3eaef559206
|
Do not discount a MANUAL_SUBFRAME load just because it involved
some redirects.
R=brettw
BUG=21353
TEST=none
Review URL: http://codereview.chromium.org/246073
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@27887 0039d316-1c4b-4281-b951-d872f2087c98
|
void NavigationController::GoBack() {
if (!CanGoBack()) {
NOTREACHED();
return;
}
int current_index = GetCurrentEntryIndex();
DiscardNonCommittedEntries();
pending_entry_index_ = current_index - 1;
NavigateToPendingEntry(false);
}
|
void NavigationController::GoBack() {
if (!CanGoBack()) {
NOTREACHED();
return;
}
int current_index = GetCurrentEntryIndex();
DiscardNonCommittedEntries();
pending_entry_index_ = current_index - 1;
NavigateToPendingEntry(false);
}
|
C
|
Chrome
| 0 |
CVE-2016-1678
|
https://www.cvedetails.com/cve/CVE-2016-1678/
|
CWE-119
|
https://github.com/chromium/chromium/commit/1f5ad409dbf5334523931df37598ea49e9849c87
|
1f5ad409dbf5334523931df37598ea49e9849c87
|
Allow origin lock for WebUI pages.
Returning true for WebUI pages in DoesSiteRequireDedicatedProcess helps
to keep enforcing a SiteInstance swap during chrome://foo ->
chrome://bar navigation, even after relaxing
BrowsingInstance::GetSiteInstanceForURL to consider RPH::IsSuitableHost
(see https://crrev.com/c/783470, which fixes process sharing in the
isolated(b(c),d(c)) scenario).
I've manually tested this CL by visiting the following URLs:
- chrome://welcome/
- chrome://settings
- chrome://extensions
- chrome://history
- chrome://help and chrome://chrome (both redirect to chrome://settings/help)
Bug: 510588, 847127
Change-Id: I55073bce00f32cb8bc5c1c91034438ff9a3f8971
Reviewed-on: https://chromium-review.googlesource.com/1237392
Commit-Queue: Łukasz Anforowicz <[email protected]>
Reviewed-by: François Doray <[email protected]>
Reviewed-by: Nasko Oskov <[email protected]>
Reviewed-by: Avi Drissman <[email protected]>
Cr-Commit-Position: refs/heads/master@{#595259}
|
void AssertForegroundAndRepost(const base::Process& renderer_process,
base::PortProvider* port_provider) {
ASSERT_FALSE(renderer_process.IsProcessBackgrounded(port_provider));
base::ThreadTaskRunnerHandle::Get()->PostDelayedTask(
FROM_HERE,
base::BindOnce(&AssertForegroundHelper::AssertForegroundAndRepost,
weak_ptr_factory_.GetWeakPtr(),
base::ConstRef(renderer_process), port_provider),
base::TimeDelta::FromMilliseconds(1));
}
|
void AssertForegroundAndRepost(const base::Process& renderer_process,
base::PortProvider* port_provider) {
ASSERT_FALSE(renderer_process.IsProcessBackgrounded(port_provider));
base::ThreadTaskRunnerHandle::Get()->PostDelayedTask(
FROM_HERE,
base::BindOnce(&AssertForegroundHelper::AssertForegroundAndRepost,
weak_ptr_factory_.GetWeakPtr(),
base::ConstRef(renderer_process), port_provider),
base::TimeDelta::FromMilliseconds(1));
}
|
C
|
Chrome
| 0 |
CVE-2018-14568
|
https://www.cvedetails.com/cve/CVE-2018-14568/
| null |
https://github.com/OISF/suricata/pull/3428/commits/843d0b7a10bb45627f94764a6c5d468a24143345
|
843d0b7a10bb45627f94764a6c5d468a24143345
|
stream: support RST getting lost/ignored
In case of a valid RST on a SYN, the state is switched to 'TCP_CLOSED'.
However, the target of the RST may not have received it, or may not
have accepted it. Also, the RST may have been injected, so the supposed
sender may not actually be aware of the RST that was sent in its name.
In this case the previous behavior was to switch the state to CLOSED and
accept no further TCP updates or stream reassembly.
This patch changes this. It still switches the state to CLOSED, as this
is by far the most likely to be correct. However, it will reconsider
the state if the receiver continues to talk.
To do this on each state change the previous state will be recorded in
TcpSession::pstate. If a non-RST packet is received after a RST, this
TcpSession::pstate is used to try to continue the conversation.
If the (supposed) sender of the RST is also continuing the conversation
as normal, it's highly likely it didn't send the RST. In this case
a stream event is generated.
Ticket: #2501
Reported-By: Kirill Shipulin
|
static int StreamTcp4WHSTest03 (void)
{
int ret = 0;
Packet *p = SCMalloc(SIZE_OF_PACKET);
FAIL_IF(unlikely(p == NULL));
Flow f;
ThreadVars tv;
StreamTcpThread stt;
TCPHdr tcph;
memset(p, 0, SIZE_OF_PACKET);
PacketQueue pq;
memset(&pq,0,sizeof(PacketQueue));
memset (&f, 0, sizeof(Flow));
memset(&tv, 0, sizeof (ThreadVars));
memset(&stt, 0, sizeof (StreamTcpThread));
memset(&tcph, 0, sizeof (TCPHdr));
FLOW_INITIALIZE(&f);
p->flow = &f;
StreamTcpUTInit(&stt.ra_ctx);
tcph.th_win = htons(5480);
tcph.th_seq = htonl(10);
tcph.th_ack = 0;
tcph.th_flags = TH_SYN;
p->tcph = &tcph;
if (StreamTcpPacket(&tv, p, &stt, &pq) == -1)
goto end;
p->tcph->th_seq = htonl(20);
p->tcph->th_ack = 0;
p->tcph->th_flags = TH_SYN;
p->flowflags = FLOW_PKT_TOCLIENT;
if (StreamTcpPacket(&tv, p, &stt, &pq) == -1)
goto end;
if ((!(((TcpSession *)(p->flow->protoctx))->flags & STREAMTCP_FLAG_4WHS))) {
printf("STREAMTCP_FLAG_4WHS flag not set: ");
goto end;
}
p->tcph->th_seq = htonl(30);
p->tcph->th_ack = htonl(11);
p->tcph->th_flags = TH_SYN|TH_ACK;
p->flowflags = FLOW_PKT_TOCLIENT;
if (StreamTcpPacket(&tv, p, &stt, &pq) == -1)
goto end;
p->tcph->th_seq = htonl(11);
p->tcph->th_ack = htonl(31);
p->tcph->th_flags = TH_ACK;
p->flowflags = FLOW_PKT_TOSERVER;
if (StreamTcpPacket(&tv, p, &stt, &pq) == -1)
goto end;
if (((TcpSession *)(p->flow->protoctx))->state != TCP_ESTABLISHED) {
printf("state is not ESTABLISHED: ");
goto end;
}
ret = 1;
end:
StreamTcpSessionClear(p->flow->protoctx);
SCFree(p);
FLOW_DESTROY(&f);
StreamTcpUTDeinit(stt.ra_ctx);
return ret;
}
|
static int StreamTcp4WHSTest03 (void)
{
int ret = 0;
Packet *p = SCMalloc(SIZE_OF_PACKET);
FAIL_IF(unlikely(p == NULL));
Flow f;
ThreadVars tv;
StreamTcpThread stt;
TCPHdr tcph;
memset(p, 0, SIZE_OF_PACKET);
PacketQueue pq;
memset(&pq,0,sizeof(PacketQueue));
memset (&f, 0, sizeof(Flow));
memset(&tv, 0, sizeof (ThreadVars));
memset(&stt, 0, sizeof (StreamTcpThread));
memset(&tcph, 0, sizeof (TCPHdr));
FLOW_INITIALIZE(&f);
p->flow = &f;
StreamTcpUTInit(&stt.ra_ctx);
tcph.th_win = htons(5480);
tcph.th_seq = htonl(10);
tcph.th_ack = 0;
tcph.th_flags = TH_SYN;
p->tcph = &tcph;
if (StreamTcpPacket(&tv, p, &stt, &pq) == -1)
goto end;
p->tcph->th_seq = htonl(20);
p->tcph->th_ack = 0;
p->tcph->th_flags = TH_SYN;
p->flowflags = FLOW_PKT_TOCLIENT;
if (StreamTcpPacket(&tv, p, &stt, &pq) == -1)
goto end;
if ((!(((TcpSession *)(p->flow->protoctx))->flags & STREAMTCP_FLAG_4WHS))) {
printf("STREAMTCP_FLAG_4WHS flag not set: ");
goto end;
}
p->tcph->th_seq = htonl(30);
p->tcph->th_ack = htonl(11);
p->tcph->th_flags = TH_SYN|TH_ACK;
p->flowflags = FLOW_PKT_TOCLIENT;
if (StreamTcpPacket(&tv, p, &stt, &pq) == -1)
goto end;
p->tcph->th_seq = htonl(11);
p->tcph->th_ack = htonl(31);
p->tcph->th_flags = TH_ACK;
p->flowflags = FLOW_PKT_TOSERVER;
if (StreamTcpPacket(&tv, p, &stt, &pq) == -1)
goto end;
if (((TcpSession *)(p->flow->protoctx))->state != TCP_ESTABLISHED) {
printf("state is not ESTABLISHED: ");
goto end;
}
ret = 1;
end:
StreamTcpSessionClear(p->flow->protoctx);
SCFree(p);
FLOW_DESTROY(&f);
StreamTcpUTDeinit(stt.ra_ctx);
return ret;
}
|
C
|
suricata
| 0 |
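The patch above records the previous state in TcpSession::pstate so the session can resume if a RST turns out to have been lost or injected. A minimal state-machine sketch of that mechanism (simplified names, not the actual suricata code):

```c
/* Minimal sketch (not the actual suricata code) of the mechanism the
 * message describes: remember the pre-RST state, and if non-RST
 * traffic continues after the session was CLOSED by a RST, fall back
 * to the previous state and flag the event. */
#include <stdio.h>

enum tcp_state { TCP_NONE, TCP_ESTABLISHED, TCP_CLOSED };

struct session {
    enum tcp_state state;
    enum tcp_state pstate;   /* previous state, per the patch */
    int closed_by_rst;
};

static void set_state(struct session *s, enum tcp_state st) {
    s->pstate = s->state;    /* record on every state change */
    s->state = st;
}

static void on_packet(struct session *s, int is_rst) {
    if (is_rst) {
        set_state(s, TCP_CLOSED);
        s->closed_by_rst = 1;
    } else if (s->state == TCP_CLOSED && s->closed_by_rst) {
        /* Peer keeps talking: the RST was likely lost or injected. */
        set_state(s, s->pstate);
        puts("event: traffic after RST, resuming previous state");
    }
}

int main(void) {
    struct session s = { TCP_ESTABLISHED, TCP_NONE, 0 };
    on_packet(&s, 1);                /* RST arrives */
    on_packet(&s, 0);                /* data continues anyway */
    printf("state=%d\n", s.state);   /* back to ESTABLISHED */
    return 0;
}
```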
CVE-2013-2237
|
https://www.cvedetails.com/cve/CVE-2013-2237/
|
CWE-119
|
https://github.com/torvalds/linux/commit/85dfb745ee40232876663ae206cba35f24ab2a40
|
85dfb745ee40232876663ae206cba35f24ab2a40
|
af_key: initialize satype in key_notify_policy_flush()
This field was left uninitialized. Some user daemons perform check against this
field.
Signed-off-by: Nicolas Dichtel <[email protected]>
Signed-off-by: Steffen Klassert <[email protected]>
|
static int pfkey_promisc(struct sock *sk, struct sk_buff *skb, const struct sadb_msg *hdr, void * const *ext_hdrs)
{
struct pfkey_sock *pfk = pfkey_sk(sk);
int satype = hdr->sadb_msg_satype;
bool reset_errno = false;
if (hdr->sadb_msg_len == (sizeof(*hdr) / sizeof(uint64_t))) {
reset_errno = true;
if (satype != 0 && satype != 1)
return -EINVAL;
pfk->promisc = satype;
}
if (reset_errno && skb_cloned(skb))
skb = skb_copy(skb, GFP_KERNEL);
else
skb = skb_clone(skb, GFP_KERNEL);
if (reset_errno && skb) {
struct sadb_msg *new_hdr = (struct sadb_msg *) skb->data;
new_hdr->sadb_msg_errno = 0;
}
pfkey_broadcast(skb, GFP_KERNEL, BROADCAST_ALL, NULL, sock_net(sk));
return 0;
}
|
static int pfkey_promisc(struct sock *sk, struct sk_buff *skb, const struct sadb_msg *hdr, void * const *ext_hdrs)
{
struct pfkey_sock *pfk = pfkey_sk(sk);
int satype = hdr->sadb_msg_satype;
bool reset_errno = false;
if (hdr->sadb_msg_len == (sizeof(*hdr) / sizeof(uint64_t))) {
reset_errno = true;
if (satype != 0 && satype != 1)
return -EINVAL;
pfk->promisc = satype;
}
if (reset_errno && skb_cloned(skb))
skb = skb_copy(skb, GFP_KERNEL);
else
skb = skb_clone(skb, GFP_KERNEL);
if (reset_errno && skb) {
struct sadb_msg *new_hdr = (struct sadb_msg *) skb->data;
new_hdr->sadb_msg_errno = 0;
}
pfkey_broadcast(skb, GFP_KERNEL, BROADCAST_ALL, NULL, sock_net(sk));
return 0;
}
|
C
|
linux
| 0 |
CVE-2019-16910
|
https://www.cvedetails.com/cve/CVE-2019-16910/
|
CWE-200
|
https://github.com/ARMmbed/mbedtls/commit/298a43a77ec0ed2c19a8c924ddd8571ef3e65dfd
|
298a43a77ec0ed2c19a8c924ddd8571ef3e65dfd
|
Merge remote-tracking branch 'upstream-restricted/pr/549' into mbedtls-2.7-restricted
|
int mbedtls_ecdsa_genkey( mbedtls_ecdsa_context *ctx, mbedtls_ecp_group_id gid,
int (*f_rng)(void *, unsigned char *, size_t), void *p_rng )
{
int ret = 0;
ret = mbedtls_ecp_group_load( &ctx->grp, gid );
if( ret != 0 )
return( ret );
return( mbedtls_ecp_gen_keypair( &ctx->grp, &ctx->d,
&ctx->Q, f_rng, p_rng ) );
}
|
int mbedtls_ecdsa_genkey( mbedtls_ecdsa_context *ctx, mbedtls_ecp_group_id gid,
int (*f_rng)(void *, unsigned char *, size_t), void *p_rng )
{
int ret = 0;
ret = mbedtls_ecp_group_load( &ctx->grp, gid );
if( ret != 0 )
return( ret );
return( mbedtls_ecp_gen_keypair( &ctx->grp, &ctx->d,
&ctx->Q, f_rng, p_rng ) );
}
|
C
|
mbedtls
| 0 |
CVE-2018-12436
|
https://www.cvedetails.com/cve/CVE-2018-12436/
|
CWE-200
|
https://github.com/wolfSSL/wolfssl/commit/9b9568d500f31f964af26ba8d01e542e1f27e5ca
|
9b9568d500f31f964af26ba8d01e542e1f27e5ca
|
Change ECDSA signing to use blinding.
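The one-line message leaves the mechanism implicit. Below is a toy sketch of multiplicative blinding in the ECDSA s-computation, with small integers standing in for bignums; all names and the toy modulus are illustrative, not wolfSSL's API. Since (k*b)^-1 * b * (z + r*d) = k^-1 * (z + r*d) (mod n), the modular inversion runs on a randomized value instead of directly on the secret nonce k.
#include <stdio.h>
#include <stdint.h>
/* Modular inverse via the extended Euclidean algorithm (n assumed prime). */
static int64_t modinv(int64_t a, int64_t n)
{
    int64_t t = 0, newt = 1, r = n, newr = a % n, tmp;
    while (newr != 0) {
        int64_t q = r / newr;
        tmp = t - q * newt; t = newt; newt = tmp;
        tmp = r - q * newr; r = newr; newr = tmp;
    }
    return ((t % n) + n) % n;
}
int main(void)
{
    const int64_t n = 7919;                      /* toy group order (prime) */
    int64_t k = 1234, z = 42, r = 77, d = 3141;  /* nonce, hash, sig r, key */
    int64_t b = 555;                             /* random blinding value   */
    /* unblinded: invert the secret nonce directly */
    int64_t direct  = (modinv(k, n) * ((z + r * d) % n)) % n;
    /* blinded: invert k*b, multiply the other factor by b */
    int64_t blinded = (modinv((k * b) % n, n) *
                       ((b * ((z + r * d) % n)) % n)) % n;
    printf("direct=%lld blinded=%lld\n", (long long)direct, (long long)blinded);
    return direct != blinded;                    /* 0: the two results agree */
}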
|
int wc_ecc_sign_hash(const byte* in, word32 inlen, byte* out, word32 *outlen,
WC_RNG* rng, ecc_key* key)
{
int err;
mp_int *r = NULL, *s = NULL;
#ifndef WOLFSSL_ASYNC_CRYPT
mp_int r_lcl, s_lcl;
#endif
if (in == NULL || out == NULL || outlen == NULL || key == NULL ||
rng == NULL) {
return ECC_BAD_ARG_E;
}
#ifdef WOLF_CRYPTO_DEV
if (key->devId != INVALID_DEVID) {
err = wc_CryptoDev_EccSign(in, inlen, out, outlen, rng, key);
if (err != NOT_COMPILED_IN)
return err;
}
#endif
#ifdef WOLFSSL_ASYNC_CRYPT
err = wc_ecc_alloc_async(key);
if (err != 0)
return err;
r = key->r;
s = key->s;
#else
r = &r_lcl;
s = &s_lcl;
#endif
switch(key->state) {
case ECC_STATE_NONE:
case ECC_STATE_SIGN_DO:
key->state = ECC_STATE_SIGN_DO;
if ((err = mp_init_multi(r, s, NULL, NULL, NULL, NULL)) != MP_OKAY){
break;
}
/* hardware crypto */
#if defined(WOLFSSL_ATECC508A) || defined(PLUTON_CRYPTO_ECC)
err = wc_ecc_sign_hash_hw(in, inlen, r, s, out, outlen, rng, key);
#else
err = wc_ecc_sign_hash_ex(in, inlen, rng, key, r, s);
#endif
if (err < 0) {
break;
}
FALL_THROUGH;
case ECC_STATE_SIGN_ENCODE:
key->state = ECC_STATE_SIGN_ENCODE;
#if defined(WOLFSSL_ASYNC_CRYPT) && defined(WC_ASYNC_ENABLE_ECC)
if (key->asyncDev.marker == WOLFSSL_ASYNC_MARKER_ECC) {
#ifdef HAVE_CAVIUM_V
/* Nitrox requires r and s in sep buffer, so split it */
NitroxEccRsSplit(key, &r->raw, &s->raw);
#endif
#ifndef WOLFSSL_ASYNC_CRYPT_TEST
/* only do this if not simulator, since it overwrites result */
wc_bigint_to_mp(&r->raw, r);
wc_bigint_to_mp(&s->raw, s);
#endif
}
#endif /* WOLFSSL_ASYNC_CRYPT */
/* encoded with DSA header */
err = StoreECC_DSA_Sig(out, outlen, r, s);
/* done with R/S */
mp_clear(r);
mp_clear(s);
break;
default:
err = BAD_STATE_E;
break;
}
/* if async pending then return and skip done cleanup below */
if (err == WC_PENDING_E) {
key->state++;
return err;
}
/* cleanup */
#ifdef WOLFSSL_ASYNC_CRYPT
wc_ecc_free_async(key);
#endif
key->state = ECC_STATE_NONE;
return err;
}
|
int wc_ecc_sign_hash(const byte* in, word32 inlen, byte* out, word32 *outlen,
WC_RNG* rng, ecc_key* key)
{
int err;
mp_int *r = NULL, *s = NULL;
#ifndef WOLFSSL_ASYNC_CRYPT
mp_int r_lcl, s_lcl;
#endif
if (in == NULL || out == NULL || outlen == NULL || key == NULL ||
rng == NULL) {
return ECC_BAD_ARG_E;
}
#ifdef WOLF_CRYPTO_DEV
if (key->devId != INVALID_DEVID) {
err = wc_CryptoDev_EccSign(in, inlen, out, outlen, rng, key);
if (err != NOT_COMPILED_IN)
return err;
}
#endif
#ifdef WOLFSSL_ASYNC_CRYPT
err = wc_ecc_alloc_async(key);
if (err != 0)
return err;
r = key->r;
s = key->s;
#else
r = &r_lcl;
s = &s_lcl;
#endif
switch(key->state) {
case ECC_STATE_NONE:
case ECC_STATE_SIGN_DO:
key->state = ECC_STATE_SIGN_DO;
if ((err = mp_init_multi(r, s, NULL, NULL, NULL, NULL)) != MP_OKAY){
break;
}
/* hardware crypto */
#if defined(WOLFSSL_ATECC508A) || defined(PLUTON_CRYPTO_ECC)
err = wc_ecc_sign_hash_hw(in, inlen, r, s, out, outlen, rng, key);
#else
err = wc_ecc_sign_hash_ex(in, inlen, rng, key, r, s);
#endif
if (err < 0) {
break;
}
FALL_THROUGH;
case ECC_STATE_SIGN_ENCODE:
key->state = ECC_STATE_SIGN_ENCODE;
#if defined(WOLFSSL_ASYNC_CRYPT) && defined(WC_ASYNC_ENABLE_ECC)
if (key->asyncDev.marker == WOLFSSL_ASYNC_MARKER_ECC) {
#ifdef HAVE_CAVIUM_V
/* Nitrox requires r and s in sep buffer, so split it */
NitroxEccRsSplit(key, &r->raw, &s->raw);
#endif
#ifndef WOLFSSL_ASYNC_CRYPT_TEST
/* only do this if not simulator, since it overwrites result */
wc_bigint_to_mp(&r->raw, r);
wc_bigint_to_mp(&s->raw, s);
#endif
}
#endif /* WOLFSSL_ASYNC_CRYPT */
/* encoded with DSA header */
err = StoreECC_DSA_Sig(out, outlen, r, s);
/* done with R/S */
mp_clear(r);
mp_clear(s);
break;
default:
err = BAD_STATE_E;
break;
}
/* if async pending then return and skip done cleanup below */
if (err == WC_PENDING_E) {
key->state++;
return err;
}
/* cleanup */
#ifdef WOLFSSL_ASYNC_CRYPT
wc_ecc_free_async(key);
#endif
key->state = ECC_STATE_NONE;
return err;
}
|
C
|
wolfssl
| 0 |
CVE-2011-4718
|
https://www.cvedetails.com/cve/CVE-2011-4718/
|
CWE-264
|
https://git.php.net/?p=php-src.git;a=commit;h=25e8fcc88fa20dc9d4c47184471003f436927cde
|
25e8fcc88fa20dc9d4c47184471003f436927cde
| null |
static void ps_sd_destroy(ps_mm *data, ps_sd *sd)
{
php_uint32 slot;
slot = ps_sd_hash(sd->key, strlen(sd->key)) & data->hash_max;
if (data->hash[slot] == sd) {
data->hash[slot] = sd->next;
} else {
ps_sd *prev;
/* There must be some entry before the one we want to delete */
for (prev = data->hash[slot]; prev->next != sd; prev = prev->next);
prev->next = sd->next;
}
data->hash_cnt--;
if (sd->data) {
mm_free(data->mm, sd->data);
}
mm_free(data->mm, sd);
}
|
static void ps_sd_destroy(ps_mm *data, ps_sd *sd)
{
php_uint32 slot;
slot = ps_sd_hash(sd->key, strlen(sd->key)) & data->hash_max;
if (data->hash[slot] == sd) {
data->hash[slot] = sd->next;
} else {
ps_sd *prev;
/* There must be some entry before the one we want to delete */
for (prev = data->hash[slot]; prev->next != sd; prev = prev->next);
prev->next = sd->next;
}
data->hash_cnt--;
if (sd->data) {
mm_free(data->mm, sd->data);
}
mm_free(data->mm, sd);
}
|
C
|
php
| 0 |
CVE-2016-0850
|
https://www.cvedetails.com/cve/CVE-2016-0850/
|
CWE-264
|
https://android.googlesource.com/platform/external/bluetooth/bluedroid/+/c677ee92595335233eb0e7b59809a1a94e7a678a
|
c677ee92595335233eb0e7b59809a1a94e7a678a
|
DO NOT MERGE Remove Porsche car-kit pairing workaround
Bug: 26551752
Change-Id: I14c5e3fcda0849874c8a94e48aeb7d09585617e1
|
tBTM_STATUS BTM_SetEncryption (BD_ADDR bd_addr, tBT_TRANSPORT transport, tBTM_SEC_CBACK *p_callback,
void *p_ref_data)
{
tBTM_SEC_DEV_REC *p_dev_rec;
tBTM_STATUS rc;
#if BLE_INCLUDED == TRUE
tACL_CONN *p = btm_bda_to_acl(bd_addr, transport);
#endif
p_dev_rec = btm_find_dev (bd_addr);
if (!p_dev_rec ||
(transport == BT_TRANSPORT_BR_EDR && p_dev_rec->hci_handle == BTM_SEC_INVALID_HANDLE)
#if BLE_INCLUDED == TRUE
|| (transport == BT_TRANSPORT_LE && p_dev_rec->ble_hci_handle == BTM_SEC_INVALID_HANDLE)
#endif
)
{
/* Connection should be up and running */
BTM_TRACE_WARNING ("Security Manager: BTM_SetEncryption not connected");
if (p_callback)
(*p_callback) (bd_addr, transport, p_ref_data, BTM_WRONG_MODE);
return(BTM_WRONG_MODE);
}
if ((transport == BT_TRANSPORT_BR_EDR &&
(p_dev_rec->sec_flags & BTM_SEC_ENCRYPTED))
#if BLE_INCLUDED == TRUE && SMP_INCLUDED == TRUE
|| (transport == BT_TRANSPORT_LE &&
(p_dev_rec->sec_flags & BTM_SEC_LE_ENCRYPTED))
#endif
)
{
BTM_TRACE_EVENT ("Security Manager: BTM_SetEncryption already encrypted");
if (p_callback)
(*p_callback) (bd_addr, transport, p_ref_data, BTM_SUCCESS);
return(BTM_SUCCESS);
}
if (p_dev_rec->p_callback)
{
/* Connection should be up and running */
BTM_TRACE_WARNING ("Security Manager: BTM_SetEncryption busy");
if (p_callback)
(*p_callback) (bd_addr, transport, p_ref_data, BTM_BUSY);
return(BTM_BUSY);
}
p_dev_rec->p_callback = p_callback;
p_dev_rec->p_ref_data = p_ref_data;
p_dev_rec->security_required |= (BTM_SEC_IN_AUTHENTICATE | BTM_SEC_IN_ENCRYPT);
p_dev_rec->is_originator = FALSE;
BTM_TRACE_API ("Security Manager: BTM_SetEncryption Handle:%d State:%d Flags:0x%x Required:0x%x",
p_dev_rec->hci_handle, p_dev_rec->sec_state, p_dev_rec->sec_flags,
p_dev_rec->security_required);
#if BLE_INCLUDED == TRUE && SMP_INCLUDED == TRUE
if (transport == BT_TRANSPORT_LE)
{
rc = btm_ble_set_encryption(bd_addr, p_ref_data, p->link_role);
}
else
#endif
rc = btm_sec_execute_procedure (p_dev_rec);
if (rc != BTM_CMD_STARTED && rc != BTM_BUSY)
{
if (p_callback)
{
p_dev_rec->p_callback = NULL;
(*p_callback) (bd_addr, transport, p_dev_rec->p_ref_data, rc);
}
}
return(rc);
}
|
tBTM_STATUS BTM_SetEncryption (BD_ADDR bd_addr, tBT_TRANSPORT transport, tBTM_SEC_CBACK *p_callback,
void *p_ref_data)
{
tBTM_SEC_DEV_REC *p_dev_rec;
tBTM_STATUS rc;
#if BLE_INCLUDED == TRUE
tACL_CONN *p = btm_bda_to_acl(bd_addr, transport);
#endif
p_dev_rec = btm_find_dev (bd_addr);
if (!p_dev_rec ||
(transport == BT_TRANSPORT_BR_EDR && p_dev_rec->hci_handle == BTM_SEC_INVALID_HANDLE)
#if BLE_INCLUDED == TRUE
|| (transport == BT_TRANSPORT_LE && p_dev_rec->ble_hci_handle == BTM_SEC_INVALID_HANDLE)
#endif
)
{
/* Connection should be up and running */
BTM_TRACE_WARNING ("Security Manager: BTM_SetEncryption not connected");
if (p_callback)
(*p_callback) (bd_addr, transport, p_ref_data, BTM_WRONG_MODE);
return(BTM_WRONG_MODE);
}
if ((transport == BT_TRANSPORT_BR_EDR &&
(p_dev_rec->sec_flags & BTM_SEC_ENCRYPTED))
#if BLE_INCLUDED == TRUE && SMP_INCLUDED == TRUE
|| (transport == BT_TRANSPORT_LE &&
(p_dev_rec->sec_flags & BTM_SEC_LE_ENCRYPTED))
#endif
)
{
BTM_TRACE_EVENT ("Security Manager: BTM_SetEncryption already encrypted");
if (p_callback)
(*p_callback) (bd_addr, transport, p_ref_data, BTM_SUCCESS);
return(BTM_SUCCESS);
}
if (p_dev_rec->p_callback)
{
/* Connection should be up and running */
BTM_TRACE_WARNING ("Security Manager: BTM_SetEncryption busy");
if (p_callback)
(*p_callback) (bd_addr, transport, p_ref_data, BTM_BUSY);
return(BTM_BUSY);
}
p_dev_rec->p_callback = p_callback;
p_dev_rec->p_ref_data = p_ref_data;
p_dev_rec->security_required |= (BTM_SEC_IN_AUTHENTICATE | BTM_SEC_IN_ENCRYPT);
p_dev_rec->is_originator = FALSE;
BTM_TRACE_API ("Security Manager: BTM_SetEncryption Handle:%d State:%d Flags:0x%x Required:0x%x",
p_dev_rec->hci_handle, p_dev_rec->sec_state, p_dev_rec->sec_flags,
p_dev_rec->security_required);
#if BLE_INCLUDED == TRUE && SMP_INCLUDED == TRUE
if (transport == BT_TRANSPORT_LE)
{
rc = btm_ble_set_encryption(bd_addr, p_ref_data, p->link_role);
}
else
#endif
rc = btm_sec_execute_procedure (p_dev_rec);
if (rc != BTM_CMD_STARTED && rc != BTM_BUSY)
{
if (p_callback)
{
p_dev_rec->p_callback = NULL;
(*p_callback) (bd_addr, transport, p_dev_rec->p_ref_data, rc);
}
}
return(rc);
}
|
C
|
Android
| 0 |
CVE-2014-3571
|
https://www.cvedetails.com/cve/CVE-2014-3571/
| null |
https://github.com/openssl/openssl/commit/248385c606620b29ecc96ca9d3603463f879652b
|
248385c606620b29ecc96ca9d3603463f879652b
|
Follow on from CVE-2014-3571. This fixes the code that was the original source
of the crash due to p being NULL. Steve's fix prevents this situation from
occuring - however this is by no means obvious by looking at the code for
dtls1_get_record. This fix just makes things look a bit more sane.
Reviewed-by: Dr Stephen Henson <[email protected]>
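A generic sketch of the defensive pattern the message describes (stand-in types, not the actual OpenSSL diff): verify every allocation before handing the pointer to a consumer, so no later code can dereference NULL.
#include <stdio.h>
#include <stdlib.h>
struct item { void *data; };
static struct item *item_new(void *data)
{
    struct item *it = malloc(sizeof(*it));
    if (it) it->data = data;
    return it;
}
int main(void)
{
    void *rdata = malloc(64);
    if (rdata == NULL)               /* check before use, not after */
        return 1;
    struct item *it = item_new(rdata);
    if (it == NULL) {                /* unwind cleanly on partial failure */
        free(rdata);
        return 1;
    }
    printf("buffered\n");
    free(it);
    free(rdata);
    return 0;
}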
|
dtls1_buffer_record(SSL *s, record_pqueue *queue, unsigned char *priority)
{
DTLS1_RECORD_DATA *rdata;
pitem *item;
/* Limit the size of the queue to prevent DOS attacks */
if (pqueue_size(queue->q) >= 100)
return 0;
rdata = OPENSSL_malloc(sizeof(DTLS1_RECORD_DATA));
item = pitem_new(priority, rdata);
if (rdata == NULL || item == NULL)
{
if (rdata != NULL) OPENSSL_free(rdata);
if (item != NULL) pitem_free(item);
SSLerr(SSL_F_DTLS1_BUFFER_RECORD, ERR_R_INTERNAL_ERROR);
return(0);
}
rdata->packet = s->packet;
rdata->packet_length = s->packet_length;
memcpy(&(rdata->rbuf), &(s->s3->rbuf), sizeof(SSL3_BUFFER));
memcpy(&(rdata->rrec), &(s->s3->rrec), sizeof(SSL3_RECORD));
item->data = rdata;
#ifndef OPENSSL_NO_SCTP
/* Store bio_dgram_sctp_rcvinfo struct */
if (BIO_dgram_is_sctp(SSL_get_rbio(s)) &&
(s->state == SSL3_ST_SR_FINISHED_A || s->state == SSL3_ST_CR_FINISHED_A)) {
BIO_ctrl(SSL_get_rbio(s), BIO_CTRL_DGRAM_SCTP_GET_RCVINFO, sizeof(rdata->recordinfo), &rdata->recordinfo);
}
#endif
s->packet = NULL;
s->packet_length = 0;
memset(&(s->s3->rbuf), 0, sizeof(SSL3_BUFFER));
memset(&(s->s3->rrec), 0, sizeof(SSL3_RECORD));
if (!ssl3_setup_buffers(s))
{
SSLerr(SSL_F_DTLS1_BUFFER_RECORD, ERR_R_INTERNAL_ERROR);
OPENSSL_free(rdata);
pitem_free(item);
return(0);
}
/* insert should not fail, since duplicates are dropped */
if (pqueue_insert(queue->q, item) == NULL)
{
SSLerr(SSL_F_DTLS1_BUFFER_RECORD, ERR_R_INTERNAL_ERROR);
OPENSSL_free(rdata);
pitem_free(item);
return(0);
}
return(1);
}
|
dtls1_buffer_record(SSL *s, record_pqueue *queue, unsigned char *priority)
{
DTLS1_RECORD_DATA *rdata;
pitem *item;
/* Limit the size of the queue to prevent DOS attacks */
if (pqueue_size(queue->q) >= 100)
return 0;
rdata = OPENSSL_malloc(sizeof(DTLS1_RECORD_DATA));
item = pitem_new(priority, rdata);
if (rdata == NULL || item == NULL)
{
if (rdata != NULL) OPENSSL_free(rdata);
if (item != NULL) pitem_free(item);
SSLerr(SSL_F_DTLS1_BUFFER_RECORD, ERR_R_INTERNAL_ERROR);
return(0);
}
rdata->packet = s->packet;
rdata->packet_length = s->packet_length;
memcpy(&(rdata->rbuf), &(s->s3->rbuf), sizeof(SSL3_BUFFER));
memcpy(&(rdata->rrec), &(s->s3->rrec), sizeof(SSL3_RECORD));
item->data = rdata;
#ifndef OPENSSL_NO_SCTP
/* Store bio_dgram_sctp_rcvinfo struct */
if (BIO_dgram_is_sctp(SSL_get_rbio(s)) &&
(s->state == SSL3_ST_SR_FINISHED_A || s->state == SSL3_ST_CR_FINISHED_A)) {
BIO_ctrl(SSL_get_rbio(s), BIO_CTRL_DGRAM_SCTP_GET_RCVINFO, sizeof(rdata->recordinfo), &rdata->recordinfo);
}
#endif
s->packet = NULL;
s->packet_length = 0;
memset(&(s->s3->rbuf), 0, sizeof(SSL3_BUFFER));
memset(&(s->s3->rrec), 0, sizeof(SSL3_RECORD));
if (!ssl3_setup_buffers(s))
{
SSLerr(SSL_F_DTLS1_BUFFER_RECORD, ERR_R_INTERNAL_ERROR);
OPENSSL_free(rdata);
pitem_free(item);
return(0);
}
/* insert should not fail, since duplicates are dropped */
if (pqueue_insert(queue->q, item) == NULL)
{
SSLerr(SSL_F_DTLS1_BUFFER_RECORD, ERR_R_INTERNAL_ERROR);
OPENSSL_free(rdata);
pitem_free(item);
return(0);
}
return(1);
}
|
C
|
openssl
| 0 |
CVE-2013-1774
|
https://www.cvedetails.com/cve/CVE-2013-1774/
|
CWE-264
|
https://github.com/torvalds/linux/commit/1ee0a224bc9aad1de496c795f96bc6ba2c394811
|
1ee0a224bc9aad1de496c795f96bc6ba2c394811
|
USB: io_ti: Fix NULL dereference in chase_port()
The tty is NULL when the port is hanging up.
chase_port() needs to check for this.
This patch is intended for stable series.
The behavior was observed and tested in Linux 3.2 and 3.7.1.
Johan Hovold submitted a more elaborate patch for the mainline kernel.
[ 56.277883] usb 1-1: edge_bulk_in_callback - nonzero read bulk status received: -84
[ 56.278811] usb 1-1: USB disconnect, device number 3
[ 56.278856] usb 1-1: edge_bulk_in_callback - stopping read!
[ 56.279562] BUG: unable to handle kernel NULL pointer dereference at 00000000000001c8
[ 56.280536] IP: [<ffffffff8144e62a>] _raw_spin_lock_irqsave+0x19/0x35
[ 56.281212] PGD 1dc1b067 PUD 1e0f7067 PMD 0
[ 56.282085] Oops: 0002 [#1] SMP
[ 56.282744] Modules linked in:
[ 56.283512] CPU 1
[ 56.283512] Pid: 25, comm: khubd Not tainted 3.7.1 #1 innotek GmbH VirtualBox/VirtualBox
[ 56.283512] RIP: 0010:[<ffffffff8144e62a>] [<ffffffff8144e62a>] _raw_spin_lock_irqsave+0x19/0x35
[ 56.283512] RSP: 0018:ffff88001fa99ab0 EFLAGS: 00010046
[ 56.283512] RAX: 0000000000000046 RBX: 00000000000001c8 RCX: 0000000000640064
[ 56.283512] RDX: 0000000000010000 RSI: ffff88001fa99b20 RDI: 00000000000001c8
[ 56.283512] RBP: ffff88001fa99b20 R08: 0000000000000000 R09: 0000000000000000
[ 56.283512] R10: 0000000000000000 R11: ffffffff812fcb4c R12: ffff88001ddf53c0
[ 56.283512] R13: 0000000000000000 R14: 00000000000001c8 R15: ffff88001e19b9f4
[ 56.283512] FS: 0000000000000000(0000) GS:ffff88001fd00000(0000) knlGS:0000000000000000
[ 56.283512] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 56.283512] CR2: 00000000000001c8 CR3: 000000001dc51000 CR4: 00000000000006e0
[ 56.283512] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 56.283512] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 56.283512] Process khubd (pid: 25, threadinfo ffff88001fa98000, task ffff88001fa94f80)
[ 56.283512] Stack:
[ 56.283512] 0000000000000046 00000000000001c8 ffffffff810578ec ffffffff812fcb4c
[ 56.283512] ffff88001e19b980 0000000000002710 ffffffff812ffe81 0000000000000001
[ 56.283512] ffff88001fa94f80 0000000000000202 ffffffff00000001 0000000000000296
[ 56.283512] Call Trace:
[ 56.283512] [<ffffffff810578ec>] ? add_wait_queue+0x12/0x3c
[ 56.283512] [<ffffffff812fcb4c>] ? usb_serial_port_work+0x28/0x28
[ 56.283512] [<ffffffff812ffe81>] ? chase_port+0x84/0x2d6
[ 56.283512] [<ffffffff81063f27>] ? try_to_wake_up+0x199/0x199
[ 56.283512] [<ffffffff81263a5c>] ? tty_ldisc_hangup+0x222/0x298
[ 56.283512] [<ffffffff81300171>] ? edge_close+0x64/0x129
[ 56.283512] [<ffffffff810612f7>] ? __wake_up+0x35/0x46
[ 56.283512] [<ffffffff8106135b>] ? should_resched+0x5/0x23
[ 56.283512] [<ffffffff81264916>] ? tty_port_shutdown+0x39/0x44
[ 56.283512] [<ffffffff812fcb4c>] ? usb_serial_port_work+0x28/0x28
[ 56.283512] [<ffffffff8125d38c>] ? __tty_hangup+0x307/0x351
[ 56.283512] [<ffffffff812e6ddc>] ? usb_hcd_flush_endpoint+0xde/0xed
[ 56.283512] [<ffffffff8144e625>] ? _raw_spin_lock_irqsave+0x14/0x35
[ 56.283512] [<ffffffff812fd361>] ? usb_serial_disconnect+0x57/0xc2
[ 56.283512] [<ffffffff812ea99b>] ? usb_unbind_interface+0x5c/0x131
[ 56.283512] [<ffffffff8128d738>] ? __device_release_driver+0x7f/0xd5
[ 56.283512] [<ffffffff8128d9cd>] ? device_release_driver+0x1a/0x25
[ 56.283512] [<ffffffff8128d393>] ? bus_remove_device+0xd2/0xe7
[ 56.283512] [<ffffffff8128b7a3>] ? device_del+0x119/0x167
[ 56.283512] [<ffffffff812e8d9d>] ? usb_disable_device+0x6a/0x180
[ 56.283512] [<ffffffff812e2ae0>] ? usb_disconnect+0x81/0xe6
[ 56.283512] [<ffffffff812e4435>] ? hub_thread+0x577/0xe82
[ 56.283512] [<ffffffff8144daa7>] ? __schedule+0x490/0x4be
[ 56.283512] [<ffffffff8105798f>] ? abort_exclusive_wait+0x79/0x79
[ 56.283512] [<ffffffff812e3ebe>] ? usb_remote_wakeup+0x2f/0x2f
[ 56.283512] [<ffffffff812e3ebe>] ? usb_remote_wakeup+0x2f/0x2f
[ 56.283512] [<ffffffff810570b4>] ? kthread+0x81/0x89
[ 56.283512] [<ffffffff81057033>] ? __kthread_parkme+0x5c/0x5c
[ 56.283512] [<ffffffff8145387c>] ? ret_from_fork+0x7c/0xb0
[ 56.283512] [<ffffffff81057033>] ? __kthread_parkme+0x5c/0x5c
[ 56.283512] Code: 8b 7c 24 08 e8 17 0b c3 ff 48 8b 04 24 48 83 c4 10 c3 53 48 89 fb 41 50 e8 e0 0a c3 ff 48 89 04 24 e8 e7 0a c3 ff ba 00 00 01 00
<f0> 0f c1 13 48 8b 04 24 89 d1 c1 ea 10 66 39 d1 74 07 f3 90 66
[ 56.283512] RIP [<ffffffff8144e62a>] _raw_spin_lock_irqsave+0x19/0x35
[ 56.283512] RSP <ffff88001fa99ab0>
[ 56.283512] CR2: 00000000000001c8
[ 56.283512] ---[ end trace 49714df27e1679ce ]---
Signed-off-by: Wolfgang Frisch <[email protected]>
Cc: Johan Hovold <[email protected]>
Cc: stable <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
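The shape of the guard, as a hedged stand-alone sketch (struct and function names are stand-ins, not the real driver types): chase_port() bails out once the tty is already gone.
#include <stddef.h>
struct tty_s  { int hw_stopped; };
struct port_s { struct tty_s *tty; };
/* Stand-in for tty_port_tty_get(); returns NULL once the port hangs up. */
static struct tty_s *port_tty_get(struct port_s *p) { return p->tty; }
static void chase_port(struct port_s *p)
{
    struct tty_s *tty = port_tty_get(p);
    if (tty == NULL)    /* the added guard: port is hanging up, nothing to do */
        return;
    /* ... only past this point is it safe to dereference tty ... */
}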
|
static void handle_new_msr(struct edgeport_port *edge_port, __u8 msr)
{
struct async_icount *icount;
struct tty_struct *tty;
dev_dbg(&edge_port->port->dev, "%s - %02x\n", __func__, msr);
if (msr & (EDGEPORT_MSR_DELTA_CTS | EDGEPORT_MSR_DELTA_DSR |
EDGEPORT_MSR_DELTA_RI | EDGEPORT_MSR_DELTA_CD)) {
icount = &edge_port->icount;
/* update input line counters */
if (msr & EDGEPORT_MSR_DELTA_CTS)
icount->cts++;
if (msr & EDGEPORT_MSR_DELTA_DSR)
icount->dsr++;
if (msr & EDGEPORT_MSR_DELTA_CD)
icount->dcd++;
if (msr & EDGEPORT_MSR_DELTA_RI)
icount->rng++;
wake_up_interruptible(&edge_port->delta_msr_wait);
}
/* Save the new modem status */
edge_port->shadow_msr = msr & 0xf0;
tty = tty_port_tty_get(&edge_port->port->port);
/* handle CTS flow control */
if (tty && C_CRTSCTS(tty)) {
if (msr & EDGEPORT_MSR_CTS) {
tty->hw_stopped = 0;
tty_wakeup(tty);
} else {
tty->hw_stopped = 1;
}
}
tty_kref_put(tty);
}
|
static void handle_new_msr(struct edgeport_port *edge_port, __u8 msr)
{
struct async_icount *icount;
struct tty_struct *tty;
dev_dbg(&edge_port->port->dev, "%s - %02x\n", __func__, msr);
if (msr & (EDGEPORT_MSR_DELTA_CTS | EDGEPORT_MSR_DELTA_DSR |
EDGEPORT_MSR_DELTA_RI | EDGEPORT_MSR_DELTA_CD)) {
icount = &edge_port->icount;
/* update input line counters */
if (msr & EDGEPORT_MSR_DELTA_CTS)
icount->cts++;
if (msr & EDGEPORT_MSR_DELTA_DSR)
icount->dsr++;
if (msr & EDGEPORT_MSR_DELTA_CD)
icount->dcd++;
if (msr & EDGEPORT_MSR_DELTA_RI)
icount->rng++;
wake_up_interruptible(&edge_port->delta_msr_wait);
}
/* Save the new modem status */
edge_port->shadow_msr = msr & 0xf0;
tty = tty_port_tty_get(&edge_port->port->port);
/* handle CTS flow control */
if (tty && C_CRTSCTS(tty)) {
if (msr & EDGEPORT_MSR_CTS) {
tty->hw_stopped = 0;
tty_wakeup(tty);
} else {
tty->hw_stopped = 1;
}
}
tty_kref_put(tty);
}
|
C
|
linux
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/a151041807a7e3c702c5f935a742368333aa69d4
|
a151041807a7e3c702c5f935a742368333aa69d4
|
Don't move the window when resizing
When the GPU code resizes the window to make it non-0x0 size, it should not move the window.
BUG=137523
Review URL: https://chromiumcodereview.appspot.com/10829206
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@150176 0039d316-1c4b-4281-b951-d872f2087c98
|
GpuProcessHostUIShim::GpuProcessHostUIShim(int host_id)
: host_id_(host_id) {
g_hosts_by_id.Pointer()->AddWithID(this, host_id_);
}
|
GpuProcessHostUIShim::GpuProcessHostUIShim(int host_id)
: host_id_(host_id) {
g_hosts_by_id.Pointer()->AddWithID(this, host_id_);
}
|
C
|
Chrome
| 0 |
CVE-2012-2888
|
https://www.cvedetails.com/cve/CVE-2012-2888/
|
CWE-399
|
https://github.com/chromium/chromium/commit/3b0d77670a0613f409110817455d2137576b485a
|
3b0d77670a0613f409110817455d2137576b485a
|
Revert 143656 - Add an IPC channel between the NaCl loader process and the renderer.
BUG=116317
TEST=ppapi, nacl tests, manual testing for experimental IPC proxy.
Review URL: https://chromiumcodereview.appspot.com/10641016
[email protected]
Review URL: https://chromiumcodereview.appspot.com/10625007
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@143665 0039d316-1c4b-4281-b951-d872f2087c98
|
UrlSchemeType Plugin::GetUrlScheme(const std::string& url) {
CHECK(url_util_ != NULL);
PP_URLComponents_Dev comps;
pp::Var canonicalized =
url_util_->Canonicalize(pp::Var(url), &comps);
if (canonicalized.is_null() ||
(comps.scheme.begin == 0 && comps.scheme.len == -1)) {
return SCHEME_OTHER;
}
CHECK(comps.scheme.begin <
static_cast<int>(canonicalized.AsString().size()));
CHECK(comps.scheme.begin + comps.scheme.len <
static_cast<int>(canonicalized.AsString().size()));
std::string scheme = canonicalized.AsString().substr(comps.scheme.begin,
comps.scheme.len);
if (scheme == kChromeExtensionUriScheme)
return SCHEME_CHROME_EXTENSION;
if (scheme == kDataUriScheme)
return SCHEME_DATA;
return SCHEME_OTHER;
}
|
UrlSchemeType Plugin::GetUrlScheme(const std::string& url) {
CHECK(url_util_ != NULL);
PP_URLComponents_Dev comps;
pp::Var canonicalized =
url_util_->Canonicalize(pp::Var(url), &comps);
if (canonicalized.is_null() ||
(comps.scheme.begin == 0 && comps.scheme.len == -1)) {
return SCHEME_OTHER;
}
CHECK(comps.scheme.begin <
static_cast<int>(canonicalized.AsString().size()));
CHECK(comps.scheme.begin + comps.scheme.len <
static_cast<int>(canonicalized.AsString().size()));
std::string scheme = canonicalized.AsString().substr(comps.scheme.begin,
comps.scheme.len);
if (scheme == kChromeExtensionUriScheme)
return SCHEME_CHROME_EXTENSION;
if (scheme == kDataUriScheme)
return SCHEME_DATA;
return SCHEME_OTHER;
}
|
C
|
Chrome
| 0 |
CVE-2019-13045
|
https://www.cvedetails.com/cve/CVE-2019-13045/
|
CWE-416
|
https://github.com/irssi/irssi/commit/d23b0d22cc611e43c88d99192a59f413f951a955
|
d23b0d22cc611e43c88d99192a59f413f951a955
|
Merge pull request #1058 from ailin-nemui/sasl-reconnect
copy sasl username and password values
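A minimal model of the fix, assuming illustrative struct and function names (not irssi's actual types): the clone path deep-copies the credential strings, and the destroy path frees the copies, mirroring the g_free_not_null calls added above.
#include <stdlib.h>
#include <string.h>
struct conn_rec {
    char *sasl_username;
    char *sasl_password;
};
static int conn_rec_clone(struct conn_rec *dst, const struct conn_rec *src)
{
    /* deep copy: the clone must not alias the original's buffers */
    dst->sasl_username = src->sasl_username ? strdup(src->sasl_username) : NULL;
    dst->sasl_password = src->sasl_password ? strdup(src->sasl_password) : NULL;
    if ((src->sasl_username && !dst->sasl_username) ||
        (src->sasl_password && !dst->sasl_password))
        return -1;                   /* caller runs conn_rec_destroy() */
    return 0;
}
static void conn_rec_destroy(struct conn_rec *c)
{
    free(c->sasl_username);          /* free(NULL) is a no-op, like g_free_not_null */
    free(c->sasl_password);
    c->sasl_username = c->sasl_password = NULL;
}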
|
static void destroy_server_connect(SERVER_CONNECT_REC *conn)
{
IRC_SERVER_CONNECT_REC *ircconn;
ircconn = IRC_SERVER_CONNECT(conn);
if (ircconn == NULL)
return;
g_free_not_null(ircconn->usermode);
g_free_not_null(ircconn->alternate_nick);
g_free_not_null(ircconn->sasl_username);
g_free_not_null(ircconn->sasl_password);
}
|
static void destroy_server_connect(SERVER_CONNECT_REC *conn)
{
IRC_SERVER_CONNECT_REC *ircconn;
ircconn = IRC_SERVER_CONNECT(conn);
if (ircconn == NULL)
return;
g_free_not_null(ircconn->usermode);
g_free_not_null(ircconn->alternate_nick);
}
|
C
|
irssi
| 1 |
CVE-2015-1296
|
https://www.cvedetails.com/cve/CVE-2015-1296/
|
CWE-254
|
https://github.com/chromium/chromium/commit/5fc08cfb098acce49344d2e89cc27c915903f81c
|
5fc08cfb098acce49344d2e89cc27c915903f81c
|
Clean up Android DownloadManager code as most downloads now go through the Chrome network stack
The only exception is OMA DRM download.
And it only applies to context menu download interception.
Clean up the remaining unused code now.
BUG=647755
Review-Url: https://codereview.chromium.org/2371773003
Cr-Commit-Position: refs/heads/master@{#421332}
|
void DownloadController::CreateGETDownload(
|
void DownloadController::CreateGETDownload(
const content::ResourceRequestInfo::WebContentsGetter& wc_getter,
bool must_download,
const DownloadInfo& info) {
DCHECK_CURRENTLY_ON(BrowserThread::IO);
BrowserThread::PostTask(
BrowserThread::UI, FROM_HERE,
base::Bind(&DownloadController::StartAndroidDownload,
base::Unretained(this),
wc_getter, must_download, info));
}
|
C
|
Chrome
| 1 |
CVE-2016-3861
|
https://www.cvedetails.com/cve/CVE-2016-3861/
|
CWE-119
|
https://android.googlesource.com/platform/frameworks/native/+/1f4b49e64adf4623eefda503bca61e253597b9bf
|
1f4b49e64adf4623eefda503bca61e253597b9bf
|
Add bound checks to utf16_to_utf8
Bug: 29250543
Change-Id: I518e7b2fe10aaa3f1c1987586a09b1110aff7e1a
(cherry picked from commit 7e93b2ddcb49b5365fbe1dab134ffb38e6f1c719)
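The record pairs this commit with readNativeHandle(), but the message itself is about bounding utf16_to_utf8. A hedged, self-contained sketch of such a bounded converter (not the actual frameworks/native patch) checks remaining capacity before every write:
#include <stdint.h>
#include <stddef.h>
/* Convert UTF-16 (host order) to UTF-8 without writing past dst_cap bytes.
 * Returns the number of bytes written, or -1 on overflow or invalid input. */
static long utf16_to_utf8_bounded(const uint16_t *src, size_t src_len,
                                  uint8_t *dst, size_t dst_cap)
{
    size_t o = 0;
    for (size_t i = 0; i < src_len; i++) {
        uint32_t cp = src[i];
        if (cp >= 0xD800 && cp <= 0xDBFF) {        /* high surrogate */
            if (i + 1 >= src_len) return -1;
            uint32_t lo = src[++i];
            if (lo < 0xDC00 || lo > 0xDFFF) return -1;
            cp = 0x10000 + (((cp - 0xD800) << 10) | (lo - 0xDC00));
        } else if (cp >= 0xDC00 && cp <= 0xDFFF) {
            return -1;                              /* stray low surrogate */
        }
        size_t need = cp < 0x80 ? 1 : cp < 0x800 ? 2 : cp < 0x10000 ? 3 : 4;
        if (dst_cap - o < need)                     /* the missing bound check */
            return -1;
        switch (need) {
        case 1: dst[o++] = (uint8_t)cp; break;
        case 2: dst[o++] = 0xC0 | (cp >> 6);
                dst[o++] = 0x80 | (cp & 0x3F); break;
        case 3: dst[o++] = 0xE0 | (cp >> 12);
                dst[o++] = 0x80 | ((cp >> 6) & 0x3F);
                dst[o++] = 0x80 | (cp & 0x3F); break;
        default: dst[o++] = 0xF0 | (cp >> 18);
                dst[o++] = 0x80 | ((cp >> 12) & 0x3F);
                dst[o++] = 0x80 | ((cp >> 6) & 0x3F);
                dst[o++] = 0x80 | (cp & 0x3F); break;
        }
    }
    return (long)o;
}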
|
native_handle* Parcel::readNativeHandle() const
{
int numFds, numInts;
status_t err;
err = readInt32(&numFds);
if (err != NO_ERROR) return 0;
err = readInt32(&numInts);
if (err != NO_ERROR) return 0;
native_handle* h = native_handle_create(numFds, numInts);
if (!h) {
return 0;
}
for (int i=0 ; err==NO_ERROR && i<numFds ; i++) {
h->data[i] = dup(readFileDescriptor());
if (h->data[i] < 0) {
for (int j = 0; j < i; j++) {
close(h->data[j]);
}
native_handle_delete(h);
return 0;
}
}
err = read(h->data + numFds, sizeof(int)*numInts);
if (err != NO_ERROR) {
native_handle_close(h);
native_handle_delete(h);
h = 0;
}
return h;
}
|
native_handle* Parcel::readNativeHandle() const
{
int numFds, numInts;
status_t err;
err = readInt32(&numFds);
if (err != NO_ERROR) return 0;
err = readInt32(&numInts);
if (err != NO_ERROR) return 0;
native_handle* h = native_handle_create(numFds, numInts);
if (!h) {
return 0;
}
for (int i=0 ; err==NO_ERROR && i<numFds ; i++) {
h->data[i] = dup(readFileDescriptor());
if (h->data[i] < 0) {
for (int j = 0; j < i; j++) {
close(h->data[j]);
}
native_handle_delete(h);
return 0;
}
}
err = read(h->data + numFds, sizeof(int)*numInts);
if (err != NO_ERROR) {
native_handle_close(h);
native_handle_delete(h);
h = 0;
}
return h;
}
|
C
|
Android
| 0 |
CVE-2016-5219
|
https://www.cvedetails.com/cve/CVE-2016-5219/
|
CWE-416
|
https://github.com/chromium/chromium/commit/a4150b688a754d3d10d2ca385155b1c95d77d6ae
|
a4150b688a754d3d10d2ca385155b1c95d77d6ae
|
Add GL_PROGRAM_COMPLETION_QUERY_CHROMIUM
This makes querying GL_COMPLETION_STATUS_KHR on programs much
cheaper by minimizing round-trips to the GPU thread.
Bug: 881152, 957001
Change-Id: Iadfa798af29225e752c710ca5c25f50b3dd3101a
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1586630
Commit-Queue: Kenneth Russell <[email protected]>
Reviewed-by: Kentaro Hara <[email protected]>
Reviewed-by: Geoff Lang <[email protected]>
Reviewed-by: Kenneth Russell <[email protected]>
Cr-Commit-Position: refs/heads/master@{#657568}
|
error::Error GLES2DecoderPassthroughImpl::DoUniformMatrix3fv(
GLint location,
GLsizei count,
GLboolean transpose,
const volatile GLfloat* value) {
api()->glUniformMatrix3fvFn(location, count, transpose,
const_cast<const GLfloat*>(value));
return error::kNoError;
}
|
error::Error GLES2DecoderPassthroughImpl::DoUniformMatrix3fv(
GLint location,
GLsizei count,
GLboolean transpose,
const volatile GLfloat* value) {
api()->glUniformMatrix3fvFn(location, count, transpose,
const_cast<const GLfloat*>(value));
return error::kNoError;
}
|
C
|
Chrome
| 0 |
CVE-2011-2918
|
https://www.cvedetails.com/cve/CVE-2011-2918/
|
CWE-399
|
https://github.com/torvalds/linux/commit/a8b0ca17b80e92faab46ee7179ba9e99ccb61233
|
a8b0ca17b80e92faab46ee7179ba9e99ccb61233
|
perf: Remove the nmi parameter from the swevent and overflow interface
The nmi parameter indicated if we could do wakeups from the current
context, if not, we would set some state and self-IPI and let the
resulting interrupt do the wakeup.
For the various event classes:
- hardware: nmi=0; PMI is in fact an NMI or we run irq_work_run from
the PMI-tail (ARM etc.)
- tracepoint: nmi=0; since tracepoint could be from NMI context.
- software: nmi=[0,1]; some, like the schedule thing cannot
perform wakeups, and hence need 0.
As one can see, there is very little nmi=1 usage, and the down-side of
not using it is that on some platforms some software events can have a
jiffy delay in wakeup (when arch_irq_work_raise isn't implemented).
The up-side however is that we can remove the nmi parameter and save a
bunch of conditionals in fast paths.
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: Michael Cree <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Deng-Cheng Zhu <[email protected]>
Cc: Anton Blanchard <[email protected]>
Cc: Eric B Munson <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Jason Wessel <[email protected]>
Cc: Don Zickus <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
static inline unsigned long __user *__fetch_reg_addr_user(unsigned int reg,
struct pt_regs *regs)
{
BUG_ON(reg < 16);
BUG_ON(regs->tstate & TSTATE_PRIV);
if (test_thread_flag(TIF_32BIT)) {
struct reg_window32 __user *win32;
win32 = (struct reg_window32 __user *)((unsigned long)((u32)regs->u_regs[UREG_FP]));
return (unsigned long __user *)&win32->locals[reg - 16];
} else {
struct reg_window __user *win;
win = (struct reg_window __user *)(regs->u_regs[UREG_FP] + STACK_BIAS);
return &win->locals[reg - 16];
}
}
|
static inline unsigned long __user *__fetch_reg_addr_user(unsigned int reg,
struct pt_regs *regs)
{
BUG_ON(reg < 16);
BUG_ON(regs->tstate & TSTATE_PRIV);
if (test_thread_flag(TIF_32BIT)) {
struct reg_window32 __user *win32;
win32 = (struct reg_window32 __user *)((unsigned long)((u32)regs->u_regs[UREG_FP]));
return (unsigned long __user *)&win32->locals[reg - 16];
} else {
struct reg_window __user *win;
win = (struct reg_window __user *)(regs->u_regs[UREG_FP] + STACK_BIAS);
return &win->locals[reg - 16];
}
}
|
C
|
linux
| 0 |
CVE-2016-8658
|
https://www.cvedetails.com/cve/CVE-2016-8658/
|
CWE-119
|
https://github.com/torvalds/linux/commit/ded89912156b1a47d940a0c954c43afbabd0c42c
|
ded89912156b1a47d940a0c954c43afbabd0c42c
|
brcmfmac: avoid potential stack overflow in brcmf_cfg80211_start_ap()
User-space can choose to omit NL80211_ATTR_SSID and only provide raw
IE TLV data. When doing so it can provide an SSID IE whose length exceeds
the allowed size. The driver further processes this IE, copying it
into a local variable without checking the length. The stack can therefore be
corrupted and used as an exploit vector.
Cc: [email protected] # v4.7
Reported-by: Daxing Guo <[email protected]>
Reviewed-by: Hante Meuleman <[email protected]>
Reviewed-by: Pieter-Paul Giesberts <[email protected]>
Reviewed-by: Franky Lin <[email protected]>
Signed-off-by: Arend van Spriel <[email protected]>
Signed-off-by: Kalle Valo <[email protected]>
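A minimal sketch of the missing check, with stand-in types (the real fix clamps the SSID length inside brcmf_cfg80211_start_ap()): the attacker-controlled IE length byte is clamped to the destination buffer size before the memcpy().
#include <string.h>
#include <stdint.h>
#include <stddef.h>
#define MAX_SSID_LEN 32  /* 802.11 maximum SSID length */
struct parsed_ssid { uint8_t len; uint8_t ssid[MAX_SSID_LEN]; };
/* ie points at an information element: ie[0]=id, ie[1]=len, ie[2..]=data.
 * Clamp the attacker-controlled length before copying into the fixed buffer. */
static void copy_ssid_ie(struct parsed_ssid *out, const uint8_t *ie)
{
    size_t len = ie[1];
    if (len > MAX_SSID_LEN)          /* the missing check */
        len = MAX_SSID_LEN;
    memcpy(out->ssid, &ie[2], len);
    out->len = (uint8_t)len;
}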
|
static int brcmf_cfg80211_get_channel(struct wiphy *wiphy,
struct wireless_dev *wdev,
struct cfg80211_chan_def *chandef)
{
struct brcmf_cfg80211_info *cfg = wiphy_to_cfg(wiphy);
struct net_device *ndev = wdev->netdev;
struct brcmf_if *ifp;
struct brcmu_chan ch;
enum nl80211_band band = 0;
enum nl80211_chan_width width = 0;
u32 chanspec;
int freq, err;
if (!ndev)
return -ENODEV;
ifp = netdev_priv(ndev);
err = brcmf_fil_iovar_int_get(ifp, "chanspec", &chanspec);
if (err) {
brcmf_err("chanspec failed (%d)\n", err);
return err;
}
ch.chspec = chanspec;
cfg->d11inf.decchspec(&ch);
switch (ch.band) {
case BRCMU_CHAN_BAND_2G:
band = NL80211_BAND_2GHZ;
break;
case BRCMU_CHAN_BAND_5G:
band = NL80211_BAND_5GHZ;
break;
}
switch (ch.bw) {
case BRCMU_CHAN_BW_80:
width = NL80211_CHAN_WIDTH_80;
break;
case BRCMU_CHAN_BW_40:
width = NL80211_CHAN_WIDTH_40;
break;
case BRCMU_CHAN_BW_20:
width = NL80211_CHAN_WIDTH_20;
break;
case BRCMU_CHAN_BW_80P80:
width = NL80211_CHAN_WIDTH_80P80;
break;
case BRCMU_CHAN_BW_160:
width = NL80211_CHAN_WIDTH_160;
break;
}
freq = ieee80211_channel_to_frequency(ch.control_ch_num, band);
chandef->chan = ieee80211_get_channel(wiphy, freq);
chandef->width = width;
chandef->center_freq1 = ieee80211_channel_to_frequency(ch.chnum, band);
chandef->center_freq2 = 0;
return 0;
}
|
static int brcmf_cfg80211_get_channel(struct wiphy *wiphy,
struct wireless_dev *wdev,
struct cfg80211_chan_def *chandef)
{
struct brcmf_cfg80211_info *cfg = wiphy_to_cfg(wiphy);
struct net_device *ndev = wdev->netdev;
struct brcmf_if *ifp;
struct brcmu_chan ch;
enum nl80211_band band = 0;
enum nl80211_chan_width width = 0;
u32 chanspec;
int freq, err;
if (!ndev)
return -ENODEV;
ifp = netdev_priv(ndev);
err = brcmf_fil_iovar_int_get(ifp, "chanspec", &chanspec);
if (err) {
brcmf_err("chanspec failed (%d)\n", err);
return err;
}
ch.chspec = chanspec;
cfg->d11inf.decchspec(&ch);
switch (ch.band) {
case BRCMU_CHAN_BAND_2G:
band = NL80211_BAND_2GHZ;
break;
case BRCMU_CHAN_BAND_5G:
band = NL80211_BAND_5GHZ;
break;
}
switch (ch.bw) {
case BRCMU_CHAN_BW_80:
width = NL80211_CHAN_WIDTH_80;
break;
case BRCMU_CHAN_BW_40:
width = NL80211_CHAN_WIDTH_40;
break;
case BRCMU_CHAN_BW_20:
width = NL80211_CHAN_WIDTH_20;
break;
case BRCMU_CHAN_BW_80P80:
width = NL80211_CHAN_WIDTH_80P80;
break;
case BRCMU_CHAN_BW_160:
width = NL80211_CHAN_WIDTH_160;
break;
}
freq = ieee80211_channel_to_frequency(ch.control_ch_num, band);
chandef->chan = ieee80211_get_channel(wiphy, freq);
chandef->width = width;
chandef->center_freq1 = ieee80211_channel_to_frequency(ch.chnum, band);
chandef->center_freq2 = 0;
return 0;
}
|
C
|
linux
| 0 |
CVE-2011-4324
|
https://www.cvedetails.com/cve/CVE-2011-4324/
| null |
https://github.com/torvalds/linux/commit/dc0b027dfadfcb8a5504f7d8052754bf8d501ab9
|
dc0b027dfadfcb8a5504f7d8052754bf8d501ab9
|
NFSv4: Convert the open and close ops to use fmode
Signed-off-by: Trond Myklebust <[email protected]>
|
static int nfs4_recover_expired_lease(struct nfs_server *server)
{
struct nfs_client *clp = server->nfs_client;
int ret;
for (;;) {
ret = nfs4_wait_clnt_recover(clp);
if (ret != 0)
return ret;
if (!test_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state) &&
!test_bit(NFS4CLNT_CHECK_LEASE,&clp->cl_state))
break;
nfs4_schedule_state_recovery(clp);
}
return 0;
}
|
static int nfs4_recover_expired_lease(struct nfs_server *server)
{
struct nfs_client *clp = server->nfs_client;
int ret;
for (;;) {
ret = nfs4_wait_clnt_recover(clp);
if (ret != 0)
return ret;
if (!test_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state) &&
!test_bit(NFS4CLNT_CHECK_LEASE,&clp->cl_state))
break;
nfs4_schedule_state_recovery(clp);
}
return 0;
}
|
C
|
linux
| 0 |
CVE-2012-0879
|
https://www.cvedetails.com/cve/CVE-2012-0879/
|
CWE-20
|
https://github.com/torvalds/linux/commit/b69f2292063d2caf37ca9aec7d63ded203701bf3
|
b69f2292063d2caf37ca9aec7d63ded203701bf3
|
block: Fix io_context leak after failure of clone with CLONE_IO
With CLONE_IO, parent's io_context->nr_tasks is incremented, but never
decremented whenever copy_process() fails afterwards, which prevents
exit_io_context() from calling IO schedulers exit functions.
Give a task_struct to exit_io_context(), and call exit_io_context() instead of
put_io_context() in copy_process() cleanup path.
Signed-off-by: Louis Rilling <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
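A toy model of the leak and the fix, using plain ints where the kernel uses atomics and illustrative names throughout: a failed clone must run the full exit path so nr_tasks is decremented, not merely drop a reference.
struct io_context { int refcount; int nr_tasks; };
static void put_io_context(struct io_context *ioc)
{
    ioc->refcount--;                 /* drops the reference only */
}
static void exit_io_context(struct io_context *ioc)
{
    ioc->nr_tasks--;                 /* what a bare put_io_context() forgets */
    put_io_context(ioc);
}
static int copy_io(struct io_context *parent, int clone_io, int fail)
{
    if (clone_io) {
        parent->refcount++;
        parent->nr_tasks++;          /* parent's context gains a task */
    }
    if (fail) {
        if (clone_io)
            exit_io_context(parent); /* the fix: full unwind on error */
        return -1;
    }
    return 0;
}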
|
static void exit_notify(struct task_struct *tsk, int group_dead)
{
int signal;
void *cookie;
/*
* This does two things:
*
* A. Make init inherit all the child processes
* B. Check to see if any process groups have become orphaned
* as a result of our exiting, and if they have any stopped
* jobs, send them a SIGHUP and then a SIGCONT. (POSIX 3.2.2.2)
*/
forget_original_parent(tsk);
exit_task_namespaces(tsk);
write_lock_irq(&tasklist_lock);
if (group_dead)
kill_orphaned_pgrp(tsk->group_leader, NULL);
/* Let father know we died
*
* Thread signals are configurable, but you aren't going to use
* that to send signals to arbitrary processes.
* That stops right now.
*
* If the parent exec id doesn't match the exec id we saved
* when we started then we know the parent has changed security
* domain.
*
* If our self_exec id doesn't match our parent_exec_id then
* we have changed execution domain as these two values started
* the same after a fork.
*/
if (tsk->exit_signal != SIGCHLD && !task_detached(tsk) &&
(tsk->parent_exec_id != tsk->real_parent->self_exec_id ||
tsk->self_exec_id != tsk->parent_exec_id))
tsk->exit_signal = SIGCHLD;
signal = tracehook_notify_death(tsk, &cookie, group_dead);
if (signal >= 0)
signal = do_notify_parent(tsk, signal);
tsk->exit_state = signal == DEATH_REAP ? EXIT_DEAD : EXIT_ZOMBIE;
/* mt-exec, de_thread() is waiting for us */
if (thread_group_leader(tsk) &&
tsk->signal->group_exit_task &&
tsk->signal->notify_count < 0)
wake_up_process(tsk->signal->group_exit_task);
write_unlock_irq(&tasklist_lock);
tracehook_report_death(tsk, signal, cookie, group_dead);
/* If the process is dead, release it - nobody will wait for it */
if (signal == DEATH_REAP)
release_task(tsk);
}
|
static void exit_notify(struct task_struct *tsk, int group_dead)
{
int signal;
void *cookie;
/*
* This does two things:
*
* A. Make init inherit all the child processes
* B. Check to see if any process groups have become orphaned
* as a result of our exiting, and if they have any stopped
* jobs, send them a SIGHUP and then a SIGCONT. (POSIX 3.2.2.2)
*/
forget_original_parent(tsk);
exit_task_namespaces(tsk);
write_lock_irq(&tasklist_lock);
if (group_dead)
kill_orphaned_pgrp(tsk->group_leader, NULL);
/* Let father know we died
*
* Thread signals are configurable, but you aren't going to use
* that to send signals to arbitrary processes.
* That stops right now.
*
* If the parent exec id doesn't match the exec id we saved
* when we started then we know the parent has changed security
* domain.
*
* If our self_exec id doesn't match our parent_exec_id then
* we have changed execution domain as these two values started
* the same after a fork.
*/
if (tsk->exit_signal != SIGCHLD && !task_detached(tsk) &&
(tsk->parent_exec_id != tsk->real_parent->self_exec_id ||
tsk->self_exec_id != tsk->parent_exec_id))
tsk->exit_signal = SIGCHLD;
signal = tracehook_notify_death(tsk, &cookie, group_dead);
if (signal >= 0)
signal = do_notify_parent(tsk, signal);
tsk->exit_state = signal == DEATH_REAP ? EXIT_DEAD : EXIT_ZOMBIE;
/* mt-exec, de_thread() is waiting for us */
if (thread_group_leader(tsk) &&
tsk->signal->group_exit_task &&
tsk->signal->notify_count < 0)
wake_up_process(tsk->signal->group_exit_task);
write_unlock_irq(&tasklist_lock);
tracehook_report_death(tsk, signal, cookie, group_dead);
/* If the process is dead, release it - nobody will wait for it */
if (signal == DEATH_REAP)
release_task(tsk);
}
|
C
|
linux
| 0 |
CVE-2017-9207
|
https://www.cvedetails.com/cve/CVE-2017-9207/
|
CWE-125
|
https://github.com/jsummers/imageworsener/commit/b45cb1b665a14b0175b9cb1502ef7168e1fe0d5d
|
b45cb1b665a14b0175b9cb1502ef7168e1fe0d5d
|
Fixed invalid memory access bugs when decoding JPEG Exif data
Fixes issues #22, #23, #24, #25
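One hedged example of the class of checks such a fix adds (illustrative, not the actual patch): every offset derived from Exif data is validated against the buffer length, in overflow-safe form, before the bytes are read.
#include <stdint.h>
#include <stddef.h>
/* Read a 16-bit value at pos from an Exif buffer, refusing out-of-range
 * positions. Written as pos > buf_len first so the subtraction cannot wrap. */
static int get_u16(const uint8_t *buf, size_t buf_len, size_t pos,
                   int little_endian, unsigned *out)
{
    if (pos > buf_len || buf_len - pos < 2)   /* overflow-safe bound check */
        return 0;
    *out = little_endian ? (unsigned)(buf[pos] | (buf[pos + 1] << 8))
                         : (unsigned)((buf[pos] << 8) | buf[pos + 1]);
    return 1;
}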
|
IW_IMPL(int) iw_read_jpeg_file(struct iw_context *ctx, struct iw_iodescr *iodescr)
{
int retval=0;
struct jpeg_decompress_struct cinfo;
struct my_error_mgr jerr;
int cinfo_valid=0;
int colorspace;
JDIMENSION rownum;
JSAMPLE *jsamprow;
int numchannels=0;
struct iw_image img;
struct iwjpegrcontext rctx;
JSAMPLE *tmprow = NULL;
int cmyk_flag = 0;
iw_zeromem(&img,sizeof(struct iw_image));
iw_zeromem(&cinfo,sizeof(struct jpeg_decompress_struct));
iw_zeromem(&jerr,sizeof(struct my_error_mgr));
iw_zeromem(&rctx,sizeof(struct iwjpegrcontext));
cinfo.err = jpeg_std_error(&jerr.pub);
jerr.pub.error_exit = my_error_exit;
jerr.pub.output_message = my_output_message;
if (setjmp(jerr.setjmp_buffer)) {
char buffer[JMSG_LENGTH_MAX];
(*cinfo.err->format_message) ((j_common_ptr)&cinfo, buffer);
iw_set_errorf(ctx,"libjpeg reports read error: %s",buffer);
goto done;
}
jpeg_create_decompress(&cinfo);
cinfo_valid=1;
rctx.pub.init_source = my_init_source_fn;
rctx.pub.fill_input_buffer = my_fill_input_buffer_fn;
rctx.pub.skip_input_data = my_skip_input_data_fn;
rctx.pub.resync_to_restart = jpeg_resync_to_restart; // libjpeg default
rctx.pub.term_source = my_term_source_fn;
rctx.ctx = ctx;
rctx.iodescr = iodescr;
rctx.buffer_len = 32768;
rctx.buffer = iw_malloc(ctx, rctx.buffer_len);
if(!rctx.buffer) goto done;
rctx.exif_density_x = -1.0;
rctx.exif_density_y = -1.0;
cinfo.src = (struct jpeg_source_mgr*)&rctx;
jpeg_save_markers(&cinfo, 0xe1, 65535);
jpeg_read_header(&cinfo, TRUE);
rctx.is_jfif = cinfo.saw_JFIF_marker;
iwjpeg_read_density(ctx,&img,&cinfo);
iwjpeg_read_saved_markers(&rctx,&cinfo);
jpeg_start_decompress(&cinfo);
colorspace=cinfo.out_color_space;
numchannels=cinfo.output_components;
if(colorspace==JCS_GRAYSCALE && numchannels==1) {
img.imgtype = IW_IMGTYPE_GRAY;
img.native_grayscale = 1;
}
else if((colorspace==JCS_RGB) && numchannels==3) {
img.imgtype = IW_IMGTYPE_RGB;
}
else if((colorspace==JCS_CMYK) && numchannels==4) {
img.imgtype = IW_IMGTYPE_RGB;
cmyk_flag = 1;
}
else {
iw_set_error(ctx,"Unsupported type of JPEG");
goto done;
}
img.width = cinfo.output_width;
img.height = cinfo.output_height;
if(!iw_check_image_dimensions(ctx,img.width,img.height)) {
goto done;
}
img.bit_depth = 8;
img.bpr = iw_calc_bytesperrow(img.width,img.bit_depth*numchannels);
img.pixels = (iw_byte*)iw_malloc_large(ctx, img.bpr, img.height);
if(!img.pixels) {
goto done;
}
if(cmyk_flag) {
tmprow = iw_malloc(ctx,4*img.width);
if(!tmprow) goto done;
}
while(cinfo.output_scanline < cinfo.output_height) {
rownum=cinfo.output_scanline;
jsamprow = &img.pixels[img.bpr * rownum];
if(cmyk_flag) {
jpeg_read_scanlines(&cinfo, &tmprow, 1);
convert_cmyk_to_rbg(ctx,tmprow,jsamprow,img.width);
}
else {
jpeg_read_scanlines(&cinfo, &jsamprow, 1);
}
if(cinfo.output_scanline<=rownum) {
iw_set_error(ctx,"Error reading JPEG file");
goto done;
}
}
jpeg_finish_decompress(&cinfo);
handle_exif_density(&rctx, &img);
iw_set_input_image(ctx, &img);
img.pixels = NULL;
if(rctx.exif_orientation>=2 && rctx.exif_orientation<=8) {
static const unsigned int exif_orient_to_transform[9] =
{ 0,0, 1,3,2,4,5,7,6 };
if(rctx.is_jfif) {
iw_warning(ctx,"JPEG image has an ambiguous orientation");
}
iw_reorient_image(ctx,exif_orient_to_transform[rctx.exif_orientation]);
}
retval=1;
done:
iw_free(ctx, img.pixels);
if(cinfo_valid) jpeg_destroy_decompress(&cinfo);
if(rctx.buffer) iw_free(ctx,rctx.buffer);
if(tmprow) iw_free(ctx,tmprow);
return retval;
}
|
IW_IMPL(int) iw_read_jpeg_file(struct iw_context *ctx, struct iw_iodescr *iodescr)
{
int retval=0;
struct jpeg_decompress_struct cinfo;
struct my_error_mgr jerr;
int cinfo_valid=0;
int colorspace;
JDIMENSION rownum;
JSAMPLE *jsamprow;
int numchannels=0;
struct iw_image img;
struct iwjpegrcontext rctx;
JSAMPLE *tmprow = NULL;
int cmyk_flag = 0;
iw_zeromem(&img,sizeof(struct iw_image));
iw_zeromem(&cinfo,sizeof(struct jpeg_decompress_struct));
iw_zeromem(&jerr,sizeof(struct my_error_mgr));
iw_zeromem(&rctx,sizeof(struct iwjpegrcontext));
cinfo.err = jpeg_std_error(&jerr.pub);
jerr.pub.error_exit = my_error_exit;
jerr.pub.output_message = my_output_message;
if (setjmp(jerr.setjmp_buffer)) {
char buffer[JMSG_LENGTH_MAX];
(*cinfo.err->format_message) ((j_common_ptr)&cinfo, buffer);
iw_set_errorf(ctx,"libjpeg reports read error: %s",buffer);
goto done;
}
jpeg_create_decompress(&cinfo);
cinfo_valid=1;
rctx.pub.init_source = my_init_source_fn;
rctx.pub.fill_input_buffer = my_fill_input_buffer_fn;
rctx.pub.skip_input_data = my_skip_input_data_fn;
rctx.pub.resync_to_restart = jpeg_resync_to_restart; // libjpeg default
rctx.pub.term_source = my_term_source_fn;
rctx.ctx = ctx;
rctx.iodescr = iodescr;
rctx.buffer_len = 32768;
rctx.buffer = iw_malloc(ctx, rctx.buffer_len);
if(!rctx.buffer) goto done;
rctx.exif_density_x = -1.0;
rctx.exif_density_y = -1.0;
cinfo.src = (struct jpeg_source_mgr*)&rctx;
jpeg_save_markers(&cinfo, 0xe1, 65535);
jpeg_read_header(&cinfo, TRUE);
rctx.is_jfif = cinfo.saw_JFIF_marker;
iwjpeg_read_density(ctx,&img,&cinfo);
iwjpeg_read_saved_markers(&rctx,&cinfo);
jpeg_start_decompress(&cinfo);
colorspace=cinfo.out_color_space;
numchannels=cinfo.output_components;
if(colorspace==JCS_GRAYSCALE && numchannels==1) {
img.imgtype = IW_IMGTYPE_GRAY;
img.native_grayscale = 1;
}
else if((colorspace==JCS_RGB) && numchannels==3) {
img.imgtype = IW_IMGTYPE_RGB;
}
else if((colorspace==JCS_CMYK) && numchannels==4) {
img.imgtype = IW_IMGTYPE_RGB;
cmyk_flag = 1;
}
else {
iw_set_error(ctx,"Unsupported type of JPEG");
goto done;
}
img.width = cinfo.output_width;
img.height = cinfo.output_height;
if(!iw_check_image_dimensions(ctx,img.width,img.height)) {
goto done;
}
img.bit_depth = 8;
img.bpr = iw_calc_bytesperrow(img.width,img.bit_depth*numchannels);
img.pixels = (iw_byte*)iw_malloc_large(ctx, img.bpr, img.height);
if(!img.pixels) {
goto done;
}
if(cmyk_flag) {
tmprow = iw_malloc(ctx,4*img.width);
if(!tmprow) goto done;
}
while(cinfo.output_scanline < cinfo.output_height) {
rownum=cinfo.output_scanline;
jsamprow = &img.pixels[img.bpr * rownum];
if(cmyk_flag) {
jpeg_read_scanlines(&cinfo, &tmprow, 1);
convert_cmyk_to_rbg(ctx,tmprow,jsamprow,img.width);
}
else {
jpeg_read_scanlines(&cinfo, &jsamprow, 1);
}
if(cinfo.output_scanline<=rownum) {
iw_set_error(ctx,"Error reading JPEG file");
goto done;
}
}
jpeg_finish_decompress(&cinfo);
handle_exif_density(&rctx, &img);
iw_set_input_image(ctx, &img);
img.pixels = NULL;
if(rctx.exif_orientation>=2 && rctx.exif_orientation<=8) {
static const unsigned int exif_orient_to_transform[9] =
{ 0,0, 1,3,2,4,5,7,6 };
if(rctx.is_jfif) {
iw_warning(ctx,"JPEG image has an ambiguous orientation");
}
iw_reorient_image(ctx,exif_orient_to_transform[rctx.exif_orientation]);
}
retval=1;
done:
iw_free(ctx, img.pixels);
if(cinfo_valid) jpeg_destroy_decompress(&cinfo);
if(rctx.buffer) iw_free(ctx,rctx.buffer);
if(tmprow) iw_free(ctx,tmprow);
return retval;
}
|
C
|
imageworsener
| 0 |
CVE-2016-8860
|
https://www.cvedetails.com/cve/CVE-2016-8860/
|
CWE-119
|
https://github.com/torproject/tor/commit/3cea86eb2fbb65949673eb4ba8ebb695c87a57ce
|
3cea86eb2fbb65949673eb4ba8ebb695c87a57ce
|
Add a one-word sentinel value of 0x0 at the end of each buf_t chunk
This helps protect against bugs where any part of a buf_t's memory
is passed to a function that expects a NUL-terminated input.
It also closes TROVE-2016-10-001 (aka bug 20384).
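A minimal sketch of the mechanism, assuming illustrative names rather than tor's buf_t internals: the allocator reserves one extra zeroed word past the payload, so code that wrongly treats chunk memory as NUL-terminated stops at the sentinel instead of running off the end.
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#define SENTINEL_LEN sizeof(uint32_t)   /* one-word sentinel of 0x0 */
/* Allocate payload_len usable bytes followed by a zeroed sentinel word.
 * Writers must never touch the final SENTINEL_LEN bytes. */
static char *chunk_alloc(size_t payload_len)
{
    char *p = malloc(payload_len + SENTINEL_LEN);
    if (p == NULL)
        return NULL;
    memset(p + payload_len, 0, SENTINEL_LEN);
    return p;
}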
|
read_to_chunk_tls(buf_t *buf, chunk_t *chunk, tor_tls_t *tls,
size_t at_most)
{
int read_result;
tor_assert(CHUNK_REMAINING_CAPACITY(chunk) >= at_most);
read_result = tor_tls_read(tls, CHUNK_WRITE_PTR(chunk), at_most);
if (read_result < 0)
return read_result;
buf->datalen += read_result;
chunk->datalen += read_result;
return read_result;
}
|
read_to_chunk_tls(buf_t *buf, chunk_t *chunk, tor_tls_t *tls,
size_t at_most)
{
int read_result;
tor_assert(CHUNK_REMAINING_CAPACITY(chunk) >= at_most);
read_result = tor_tls_read(tls, CHUNK_WRITE_PTR(chunk), at_most);
if (read_result < 0)
return read_result;
buf->datalen += read_result;
chunk->datalen += read_result;
return read_result;
}
|
C
|
tor
| 0 |
CVE-2018-18352
|
https://www.cvedetails.com/cve/CVE-2018-18352/
|
CWE-732
|
https://github.com/chromium/chromium/commit/a9cbaa7a40e2b2723cfc2f266c42f4980038a949
|
a9cbaa7a40e2b2723cfc2f266c42f4980038a949
|
Simplify "WouldTaintOrigin" concept in media/blink
Currently WebMediaPlayer has three predicates:
- DidGetOpaqueResponseFromServiceWorker
- HasSingleSecurityOrigin
- DidPassCORSAccessCheck
These are used to determine whether the response body is available
for scripts. They are known to be confusing, and actually
MediaElementAudioSourceHandler::WouldTaintOrigin misuses them.
This CL merges the three predicates to one, WouldTaintOrigin, to remove
the confusion. Now the "response type" concept is available and we
don't need a custom CORS check, so this CL removes
BaseAudioContext::WouldTaintOrigin. This CL also renames
URLData::has_opaque_data_ and its (direct and indirect) data accessors
to match the spec.
Bug: 849942, 875153
Change-Id: I6acf50169d7445c4ff614e80ac606f79ee577d2a
Reviewed-on: https://chromium-review.googlesource.com/c/1238098
Reviewed-by: Fredrik Hubinette <[email protected]>
Reviewed-by: Kinuko Yasuda <[email protected]>
Reviewed-by: Raymond Toy <[email protected]>
Commit-Queue: Yutaka Hirano <[email protected]>
Cr-Commit-Position: refs/heads/master@{#598258}
|
void ReceiveData(int size) {
ReceiveDataLow(size);
base::RunLoop().RunUntilIdle();
}
|
void ReceiveData(int size) {
ReceiveDataLow(size);
base::RunLoop().RunUntilIdle();
}
|
C
|
Chrome
| 0 |
CVE-2015-5289
|
https://www.cvedetails.com/cve/CVE-2015-5289/
|
CWE-119
|
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=08fa47c4850cea32c3116665975bca219fbf2fe6
|
08fa47c4850cea32c3116665975bca219fbf2fe6
| null |
hash_object_field_end(void *state, char *fname, bool isnull)
{
JHashState *_state = (JHashState *) state;
JsonHashEntry *hashentry;
bool found;
/*
* Ignore nested fields.
*/
if (_state->lex->lex_level > 2)
return;
/*
* Ignore field names >= NAMEDATALEN - they can't match a record field.
* (Note: without this test, the hash code would truncate the string at
* NAMEDATALEN-1, and could then match against a similarly-truncated
* record field name. That would be a reasonable behavior, but this code
* has previously insisted on exact equality, so we keep this behavior.)
*/
if (strlen(fname) >= NAMEDATALEN)
return;
hashentry = hash_search(_state->hash, fname, HASH_ENTER, &found);
/*
* found being true indicates a duplicate. We don't do anything about
* that, a later field with the same name overrides the earlier field.
*/
hashentry->isnull = isnull;
if (_state->save_json_start != NULL)
{
int len = _state->lex->prev_token_terminator - _state->save_json_start;
char *val = palloc((len + 1) * sizeof(char));
memcpy(val, _state->save_json_start, len);
val[len] = '\0';
hashentry->val = val;
}
else
{
/* must have had a scalar instead */
hashentry->val = _state->saved_scalar;
}
}
|
hash_object_field_end(void *state, char *fname, bool isnull)
{
JHashState *_state = (JHashState *) state;
JsonHashEntry *hashentry;
bool found;
/*
* Ignore nested fields.
*/
if (_state->lex->lex_level > 2)
return;
/*
* Ignore field names >= NAMEDATALEN - they can't match a record field.
* (Note: without this test, the hash code would truncate the string at
* NAMEDATALEN-1, and could then match against a similarly-truncated
* record field name. That would be a reasonable behavior, but this code
* has previously insisted on exact equality, so we keep this behavior.)
*/
if (strlen(fname) >= NAMEDATALEN)
return;
hashentry = hash_search(_state->hash, fname, HASH_ENTER, &found);
/*
* found being true indicates a duplicate. We don't do anything about
* that, a later field with the same name overrides the earlier field.
*/
hashentry->isnull = isnull;
if (_state->save_json_start != NULL)
{
int len = _state->lex->prev_token_terminator - _state->save_json_start;
char *val = palloc((len + 1) * sizeof(char));
memcpy(val, _state->save_json_start, len);
val[len] = '\0';
hashentry->val = val;
}
else
{
/* must have had a scalar instead */
hashentry->val = _state->saved_scalar;
}
}
|
C
|
postgresql
| 0 |
CVE-2011-2918
|
https://www.cvedetails.com/cve/CVE-2011-2918/
|
CWE-399
|
https://github.com/torvalds/linux/commit/a8b0ca17b80e92faab46ee7179ba9e99ccb61233
|
a8b0ca17b80e92faab46ee7179ba9e99ccb61233
|
perf: Remove the nmi parameter from the swevent and overflow interface
The nmi parameter indicated if we could do wakeups from the current
context, if not, we would set some state and self-IPI and let the
resulting interrupt do the wakeup.
For the various event classes:
- hardware: nmi=0; PMI is in fact an NMI or we run irq_work_run from
the PMI-tail (ARM etc.)
- tracepoint: nmi=0; since tracepoint could be from NMI context.
- software: nmi=[0,1]; some, like the schedule thing cannot
perform wakeups, and hence need 0.
As one can see, there is very little nmi=1 usage, and the down-side of
not using it is that on some platforms some software events can have a
jiffy delay in wakeup (when arch_irq_work_raise isn't implemented).
The up-side however is that we can remove the nmi parameter and save a
bunch of conditionals in fast paths.
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: Michael Cree <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Deng-Cheng Zhu <[email protected]>
Cc: Anton Blanchard <[email protected]>
Cc: Eric B Munson <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Jason Wessel <[email protected]>
Cc: Don Zickus <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
void account_system_vtime(struct task_struct *curr)
{
unsigned long flags;
s64 delta;
int cpu;
if (!sched_clock_irqtime)
return;
local_irq_save(flags);
cpu = smp_processor_id();
delta = sched_clock_cpu(cpu) - __this_cpu_read(irq_start_time);
__this_cpu_add(irq_start_time, delta);
irq_time_write_begin();
/*
* We do not account for softirq time from ksoftirqd here.
* We want to continue accounting softirq time to ksoftirqd thread
* in that case, so as not to confuse the scheduler with a special task
* that does not consume any time, but still wants to run.
*/
if (hardirq_count())
__this_cpu_add(cpu_hardirq_time, delta);
else if (in_serving_softirq() && curr != this_cpu_ksoftirqd())
__this_cpu_add(cpu_softirq_time, delta);
irq_time_write_end();
local_irq_restore(flags);
}
|
void account_system_vtime(struct task_struct *curr)
{
unsigned long flags;
s64 delta;
int cpu;
if (!sched_clock_irqtime)
return;
local_irq_save(flags);
cpu = smp_processor_id();
delta = sched_clock_cpu(cpu) - __this_cpu_read(irq_start_time);
__this_cpu_add(irq_start_time, delta);
irq_time_write_begin();
/*
* We do not account for softirq time from ksoftirqd here.
* We want to continue accounting softirq time to ksoftirqd thread
* in that case, so as not to confuse the scheduler with a special task
* that does not consume any time, but still wants to run.
*/
if (hardirq_count())
__this_cpu_add(cpu_hardirq_time, delta);
else if (in_serving_softirq() && curr != this_cpu_ksoftirqd())
__this_cpu_add(cpu_softirq_time, delta);
irq_time_write_end();
local_irq_restore(flags);
}
|
C
|
linux
| 0 |
CVE-2016-7916
|
https://www.cvedetails.com/cve/CVE-2016-7916/
|
CWE-362
|
https://github.com/torvalds/linux/commit/8148a73c9901a8794a50f950083c00ccf97d43b3
|
8148a73c9901a8794a50f950083c00ccf97d43b3
|
proc: prevent accessing /proc/<PID>/environ until it's ready
If /proc/<PID>/environ gets read before the envp[] array is fully set up
in create_{aout,elf,elf_fdpic,flat}_tables(), we might end up trying to
read more bytes than are actually written, as env_start will already be
set but env_end will still be zero, making the range calculation
underflow, allowing reads beyond the end of what has been written.
Fix this as it is done for /proc/<PID>/cmdline by testing env_end for
zero. It is, apparently, intentionally set last in create_*_tables().
This bug was found by the PaX size_overflow plugin that detected the
arithmetic underflow of 'this_len = env_end - (env_start + src)' when
env_end is still zero.
The expected consequence is that userland trying to access
/proc/<PID>/environ of a not yet fully set up process may get
inconsistent data as we're in the middle of copying in the environment
variables.
Fixes: https://forums.grsecurity.net/viewtopic.php?f=3&t=4363
Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=116461
Signed-off-by: Mathias Krause <[email protected]>
Cc: Emese Revfy <[email protected]>
Cc: Pax Team <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Mateusz Guzik <[email protected]>
Cc: Alexey Dobriyan <[email protected]>
Cc: Cyrill Gorcunov <[email protected]>
Cc: Jarod Wilson <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
proc_map_files_instantiate(struct inode *dir, struct dentry *dentry,
struct task_struct *task, const void *ptr)
{
fmode_t mode = (fmode_t)(unsigned long)ptr;
struct proc_inode *ei;
struct inode *inode;
inode = proc_pid_make_inode(dir->i_sb, task);
if (!inode)
return -ENOENT;
ei = PROC_I(inode);
ei->op.proc_get_link = map_files_get_link;
inode->i_op = &proc_map_files_link_inode_operations;
inode->i_size = 64;
inode->i_mode = S_IFLNK;
if (mode & FMODE_READ)
inode->i_mode |= S_IRUSR;
if (mode & FMODE_WRITE)
inode->i_mode |= S_IWUSR;
d_set_d_op(dentry, &tid_map_files_dentry_operations);
d_add(dentry, inode);
return 0;
}
|
proc_map_files_instantiate(struct inode *dir, struct dentry *dentry,
struct task_struct *task, const void *ptr)
{
fmode_t mode = (fmode_t)(unsigned long)ptr;
struct proc_inode *ei;
struct inode *inode;
inode = proc_pid_make_inode(dir->i_sb, task);
if (!inode)
return -ENOENT;
ei = PROC_I(inode);
ei->op.proc_get_link = map_files_get_link;
inode->i_op = &proc_map_files_link_inode_operations;
inode->i_size = 64;
inode->i_mode = S_IFLNK;
if (mode & FMODE_READ)
inode->i_mode |= S_IRUSR;
if (mode & FMODE_WRITE)
inode->i_mode |= S_IWUSR;
d_set_d_op(dentry, &tid_map_files_dentry_operations);
d_add(dentry, inode);
return 0;
}
|
C
|
linux
| 0 |
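A minimal sketch of the env_end guard described in the CVE-2016-7916 commit message above; the struct is a stand-in for the kernel's mm_struct and the helper name is invented for illustration:
struct mm_layout {
    unsigned long env_start;
    unsigned long env_end;      /* intentionally set last in create_*_tables() */
};
/* Returns the readable length, or 0 while exec setup is still incomplete. */
static unsigned long environ_len(const struct mm_layout *mm)
{
    if (mm->env_end == 0)       /* envp[] not fully written yet */
        return 0;
    return mm->env_end - mm->env_start;
}
With the check in place, the range calculation can no longer underflow while env_start is already set but env_end is still zero.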
CVE-2015-5330
|
https://www.cvedetails.com/cve/CVE-2015-5330/
|
CWE-200
|
https://git.samba.org/?p=samba.git;a=commit;h=0454b95657846fcecf0f51b6f1194faac02518bd
|
0454b95657846fcecf0f51b6f1194faac02518bd
| null |
static bool ldb_dn_casefold_internal(struct ldb_dn *dn)
{
unsigned int i;
int ret;
if ( ! dn || dn->invalid) return false;
if (dn->valid_case) return true;
if (( ! dn->components) && ( ! ldb_dn_explode(dn))) {
return false;
}
for (i = 0; i < dn->comp_num; i++) {
const struct ldb_schema_attribute *a;
dn->components[i].cf_name =
ldb_attr_casefold(dn->components,
dn->components[i].name);
if (!dn->components[i].cf_name) {
goto failed;
}
a = ldb_schema_attribute_by_name(dn->ldb,
dn->components[i].cf_name);
ret = a->syntax->canonicalise_fn(dn->ldb, dn->components,
&(dn->components[i].value),
&(dn->components[i].cf_value));
if (ret != 0) {
goto failed;
}
}
dn->valid_case = true;
return true;
failed:
for (i = 0; i < dn->comp_num; i++) {
LDB_FREE(dn->components[i].cf_name);
LDB_FREE(dn->components[i].cf_value.data);
}
return false;
}
|
static bool ldb_dn_casefold_internal(struct ldb_dn *dn)
{
unsigned int i;
int ret;
if ( ! dn || dn->invalid) return false;
if (dn->valid_case) return true;
if (( ! dn->components) && ( ! ldb_dn_explode(dn))) {
return false;
}
for (i = 0; i < dn->comp_num; i++) {
const struct ldb_schema_attribute *a;
dn->components[i].cf_name =
ldb_attr_casefold(dn->components,
dn->components[i].name);
if (!dn->components[i].cf_name) {
goto failed;
}
a = ldb_schema_attribute_by_name(dn->ldb,
dn->components[i].cf_name);
ret = a->syntax->canonicalise_fn(dn->ldb, dn->components,
&(dn->components[i].value),
&(dn->components[i].cf_value));
if (ret != 0) {
goto failed;
}
}
dn->valid_case = true;
return true;
failed:
for (i = 0; i < dn->comp_num; i++) {
LDB_FREE(dn->components[i].cf_name);
LDB_FREE(dn->components[i].cf_value.data);
}
return false;
}
|
C
|
samba
| 0 |
CVE-2016-5216
|
https://www.cvedetails.com/cve/CVE-2016-5216/
|
CWE-416
|
https://github.com/chromium/chromium/commit/bf6a6765d44b09c64b8c75d749efb84742a250e7
|
bf6a6765d44b09c64b8c75d749efb84742a250e7
|
[pdf] Defer page unloading in JS callback.
One of the callbacks from PDFium JavaScript into the embedder is to get the
current page number. In Chromium, this will trigger a call to
CalculateMostVisiblePage that method will determine the visible pages and unload
any non-visible pages. But, if the originating JS is on a non-visible page
we'll delete the page and annotations associated with that page. This will
cause issues as we are currently working with those objects when the JavaScript
returns.
This CL defers the page unloading triggered by getting the most visible page
until the next event is handled by the Chromium embedder.
BUG=chromium:653090
Review-Url: https://codereview.chromium.org/2418533002
Cr-Commit-Position: refs/heads/master@{#424781}
|
pp::URLLoader PDFiumEngine::CreateURLLoader() {
return client_->CreateURLLoader();
}
|
pp::URLLoader PDFiumEngine::CreateURLLoader() {
return client_->CreateURLLoader();
}
|
C
|
Chrome
| 0 |
CVE-2018-16068
|
https://www.cvedetails.com/cve/CVE-2018-16068/
|
CWE-20
|
https://github.com/chromium/chromium/commit/66e24a8793615bd9d5c238b1745b093090e1f72d
|
66e24a8793615bd9d5c238b1745b093090e1f72d
|
[mojo-core] Validate data pipe endpoint metadata
Ensures that we don't blindly trust specified buffer size and offset
metadata when deserializing data pipe consumer and producer handles.
Bug: 877182
Change-Id: I30f3eceafb5cee06284c2714d08357ef911d6fd9
Reviewed-on: https://chromium-review.googlesource.com/1192922
Reviewed-by: Reilly Grant <[email protected]>
Commit-Queue: Ken Rockot <[email protected]>
Cr-Commit-Position: refs/heads/master@{#586704}
|
void DataPipeConsumerDispatcher::CancelTransit() {
base::AutoLock lock(lock_);
DCHECK(in_transit_);
in_transit_ = false;
UpdateSignalsStateNoLock();
}
|
void DataPipeConsumerDispatcher::CancelTransit() {
base::AutoLock lock(lock_);
DCHECK(in_transit_);
in_transit_ = false;
UpdateSignalsStateNoLock();
}
|
C
|
Chrome
| 0 |
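The CVE-2018-16068 fix above is about not trusting deserialized size and offset metadata; a minimal sketch of such a validation, with field names assumed rather than taken from Mojo:
#include <stdbool.h>
#include <stdint.h>
static bool endpoint_metadata_valid(uint64_t buffer_size,
                                    uint64_t read_offset,
                                    uint64_t bytes_available)
{
    if (buffer_size == 0 || read_offset >= buffer_size)
        return false;            /* offset points outside the ring buffer */
    if (bytes_available > buffer_size)
        return false;            /* claims more queued data than capacity */
    return true;
}
A deserializer would reject the handle when this returns false instead of indexing the shared buffer with attacker-controlled values.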
CVE-2014-3690
|
https://www.cvedetails.com/cve/CVE-2014-3690/
|
CWE-399
|
https://github.com/torvalds/linux/commit/d974baa398f34393db76be45f7d4d04fbdbb4a0a
|
d974baa398f34393db76be45f7d4d04fbdbb4a0a
|
x86,kvm,vmx: Preserve CR4 across VM entry
CR4 isn't constant; at least the TSD and PCE bits can vary.
TBH, treating CR0 and CR3 as constant scares me a bit, too, but it looks
like it's correct.
This adds a branch and a read from cr4 to each vm entry. Because it is
extremely likely that consecutive entries into the same vcpu will have
the same host cr4 value, this fixes up the vmcs instead of restoring cr4
after the fact. A subsequent patch will add a kernel-wide cr4 shadow,
reducing the overhead in the common case to just two memory reads and a
branch.
Signed-off-by: Andy Lutomirski <[email protected]>
Acked-by: Paolo Bonzini <[email protected]>
Cc: [email protected]
Cc: Petr Matousek <[email protected]>
Cc: Gleb Natapov <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
static void vmx_inject_page_fault_nested(struct kvm_vcpu *vcpu,
struct x86_exception *fault)
{
struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
WARN_ON(!is_guest_mode(vcpu));
/* TODO: also check PFEC_MATCH/MASK, not just EB.PF. */
if (vmcs12->exception_bitmap & (1u << PF_VECTOR))
nested_vmx_vmexit(vcpu, to_vmx(vcpu)->exit_reason,
vmcs_read32(VM_EXIT_INTR_INFO),
vmcs_readl(EXIT_QUALIFICATION));
else
kvm_inject_page_fault(vcpu, fault);
}
|
static void vmx_inject_page_fault_nested(struct kvm_vcpu *vcpu,
struct x86_exception *fault)
{
struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
WARN_ON(!is_guest_mode(vcpu));
/* TODO: also check PFEC_MATCH/MASK, not just EB.PF. */
if (vmcs12->exception_bitmap & (1u << PF_VECTOR))
nested_vmx_vmexit(vcpu, to_vmx(vcpu)->exit_reason,
vmcs_read32(VM_EXIT_INTR_INFO),
vmcs_readl(EXIT_QUALIFICATION));
else
kvm_inject_page_fault(vcpu, fault);
}
|
C
|
linux
| 0 |
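A sketch of the CR4-preservation scheme the CVE-2014-3690 commit message describes: re-read host CR4 on each VM entry and rewrite the VMCS field only when it changed. read_cr4, vmcs_writel and HOST_CR4 are real kernel symbols of that era; keeping the cache in a single static, rather than per loaded VMCS as the actual patch does, is a simplification:
static unsigned long cached_host_cr4;
static void refresh_host_cr4(void)
{
    unsigned long cr4 = read_cr4();   /* at least the TSD and PCE bits can vary */
    if (cr4 != cached_host_cr4) {     /* consecutive entries rarely differ, so cheap */
        vmcs_writel(HOST_CR4, cr4);
        cached_host_cr4 = cr4;
    }
}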
CVE-2017-7277
|
https://www.cvedetails.com/cve/CVE-2017-7277/
|
CWE-125
|
https://github.com/torvalds/linux/commit/8605330aac5a5785630aec8f64378a54891937cc
|
8605330aac5a5785630aec8f64378a54891937cc
|
tcp: fix SCM_TIMESTAMPING_OPT_STATS for normal skbs
__sock_recv_timestamp can be called for both normal skbs (for
receive timestamps) and for skbs on the error queue (for transmit
timestamps).
Commit 1c885808e456
(tcp: SOF_TIMESTAMPING_OPT_STATS option for SO_TIMESTAMPING)
assumes any skb passed to __sock_recv_timestamp are from
the error queue, containing OPT_STATS in the content of the skb.
This results in accessing invalid memory or generating junk
data.
To fix this, set skb->pkt_type to PACKET_OUTGOING for packets
on the error queue. This is safe because on the receive path
on local sockets skb->pkt_type is never set to PACKET_OUTGOING.
With that, copy OPT_STATS from a packet, only if its pkt_type
is PACKET_OUTGOING.
Fixes: 1c885808e456 ("tcp: SOF_TIMESTAMPING_OPT_STATS option for SO_TIMESTAMPING")
Reported-by: JongHwan Kim <[email protected]>
Signed-off-by: Soheil Hassas Yeganeh <[email protected]>
Signed-off-by: Eric Dumazet <[email protected]>
Signed-off-by: Willem de Bruijn <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
static void *__netdev_alloc_frag(unsigned int fragsz, gfp_t gfp_mask)
{
struct page_frag_cache *nc;
unsigned long flags;
void *data;
local_irq_save(flags);
nc = this_cpu_ptr(&netdev_alloc_cache);
data = page_frag_alloc(nc, fragsz, gfp_mask);
local_irq_restore(flags);
return data;
}
|
static void *__netdev_alloc_frag(unsigned int fragsz, gfp_t gfp_mask)
{
struct page_frag_cache *nc;
unsigned long flags;
void *data;
local_irq_save(flags);
nc = this_cpu_ptr(&netdev_alloc_cache);
data = page_frag_alloc(nc, fragsz, gfp_mask);
local_irq_restore(flags);
return data;
}
|
C
|
linux
| 0 |
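A minimal sketch of the pkt_type discrimination from the CVE-2017-7277 commit message: error-queue (transmit-timestamp) skbs are marked PACKET_OUTGOING when queued, and OPT_STATS is copied only for such skbs. The helper is illustrative; skb->pkt_type and PACKET_OUTGOING are real kernel definitions:
/* True only for skbs that came from the error queue and therefore
 * actually carry OPT_STATS in their contents. */
static bool skb_has_opt_stats(const struct sk_buff *skb)
{
    return skb->pkt_type == PACKET_OUTGOING;
}
This works because, as the message notes, the local receive path never sets pkt_type to PACKET_OUTGOING.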
CVE-2012-2673
|
https://www.cvedetails.com/cve/CVE-2012-2673/
|
CWE-189
|
https://github.com/ivmai/bdwgc/commit/be9df82919960214ee4b9d3313523bff44fd99e1
|
be9df82919960214ee4b9d3313523bff44fd99e1
|
Fix allocation size overflows due to rounding.
* malloc.c (GC_generic_malloc): Check if the allocation size is
rounded to a smaller value.
* mallocx.c (GC_generic_malloc_ignore_off_page): Likewise.
|
GC_API char * GC_CALL GC_strndup(const char *str, size_t size)
{
char *copy;
size_t len = strlen(str); /* str is expected to be non-NULL */
if (len > size)
len = size;
copy = GC_malloc_atomic(len + 1);
if (copy == NULL) {
# ifndef MSWINCE
errno = ENOMEM;
# endif
return NULL;
}
BCOPY(str, copy, len);
copy[len] = '\0';
return copy;
}
|
GC_API char * GC_CALL GC_strndup(const char *str, size_t size)
{
char *copy;
size_t len = strlen(str); /* str is expected to be non-NULL */
if (len > size)
len = size;
copy = GC_malloc_atomic(len + 1);
if (copy == NULL) {
# ifndef MSWINCE
errno = ENOMEM;
# endif
return NULL;
}
BCOPY(str, copy, len);
copy[len] = '\0';
return copy;
}
|
C
|
bdwgc
| 0 |
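The CVE-2012-2673 fix checks whether rounding an allocation size up to the collector's granularity wrapped around; a self-contained sketch of that comparison, with the granule size and names invented for the example:
#include <stddef.h>
#define GRANULE 16u   /* illustrative allocation granularity */
/* Round lb up to a multiple of GRANULE; *ok is cleared if the addition
 * wrapped, i.e. the "rounded" size came out smaller than the request. */
static size_t round_alloc_size(size_t lb, int *ok)
{
    size_t rounded = (lb + GRANULE - 1) & ~(size_t)(GRANULE - 1);
    *ok = (rounded >= lb);
    return rounded;
}
For lb near SIZE_MAX the addition wraps, rounded ends up smaller than lb, and the caller can fail the allocation instead of returning an undersized block.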
CVE-2013-4623
|
https://www.cvedetails.com/cve/CVE-2013-4623/
|
CWE-20
|
https://github.com/polarssl/polarssl/commit/1922a4e6aade7b1d685af19d4d9339ddb5c02859
|
1922a4e6aade7b1d685af19d4d9339ddb5c02859
|
ssl_parse_certificate() now calls x509parse_crt_der() directly
|
static void ssl_update_checksum_sha256( ssl_context *ssl, unsigned char *buf,
size_t len )
{
sha2_update( &ssl->handshake->fin_sha2, buf, len );
}
|
static void ssl_update_checksum_sha256( ssl_context *ssl, unsigned char *buf,
size_t len )
{
sha2_update( &ssl->handshake->fin_sha2, buf, len );
}
|
C
|
polarssl
| 0 |
CVE-2014-9665
|
https://www.cvedetails.com/cve/CVE-2014-9665/
|
CWE-119
|
https://git.savannah.gnu.org/cgit/freetype/freetype2.git/commit/?id=b3500af717010137046ec4076d1e1c0641e33727
|
b3500af717010137046ec4076d1e1c0641e33727
| null |
ftc_snode_load( FTC_SNode snode,
FTC_Manager manager,
FT_UInt gindex,
FT_ULong *asize )
{
FT_Error error;
FTC_GNode gnode = FTC_GNODE( snode );
FTC_Family family = gnode->family;
FT_Memory memory = manager->memory;
FT_Face face;
FTC_SBit sbit;
FTC_SFamilyClass clazz;
if ( (FT_UInt)(gindex - gnode->gindex) >= snode->count )
{
FT_ERROR(( "ftc_snode_load: invalid glyph index" ));
return FT_THROW( Invalid_Argument );
}
sbit = snode->sbits + ( gindex - gnode->gindex );
clazz = (FTC_SFamilyClass)family->clazz;
sbit->buffer = 0;
error = clazz->family_load_glyph( family, gindex, manager, &face );
if ( error )
goto BadGlyph;
{
FT_Int temp;
FT_GlyphSlot slot = face->glyph;
FT_Bitmap* bitmap = &slot->bitmap;
FT_Pos xadvance, yadvance; /* FT_GlyphSlot->advance.{x|y} */
if ( slot->format != FT_GLYPH_FORMAT_BITMAP )
{
FT_TRACE0(( "ftc_snode_load:"
" glyph loaded didn't return a bitmap\n" ));
goto BadGlyph;
}
/* Check whether our values fit into 8-bit containers! */
/* If this is not the case, our bitmap is too large */
/* and we will leave it as `missing' with sbit.buffer = 0 */
#define CHECK_CHAR( d ) ( temp = (FT_Char)d, (FT_Int) temp == (FT_Int) d )
#define CHECK_BYTE( d ) ( temp = (FT_Byte)d, (FT_UInt)temp == (FT_UInt)d )
/* horizontal advance in pixels */
xadvance = ( slot->advance.x + 32 ) >> 6;
yadvance = ( slot->advance.y + 32 ) >> 6;
if ( !CHECK_BYTE( bitmap->rows ) ||
!CHECK_BYTE( bitmap->width ) ||
!CHECK_CHAR( bitmap->pitch ) ||
!CHECK_CHAR( slot->bitmap_left ) ||
!CHECK_CHAR( slot->bitmap_top ) ||
!CHECK_CHAR( xadvance ) ||
!CHECK_CHAR( yadvance ) )
{
FT_TRACE2(( "ftc_snode_load:"
" glyph too large for small bitmap cache\n"));
goto BadGlyph;
}
sbit->width = (FT_Byte)bitmap->width;
sbit->height = (FT_Byte)bitmap->rows;
sbit->pitch = (FT_Char)bitmap->pitch;
sbit->left = (FT_Char)slot->bitmap_left;
sbit->top = (FT_Char)slot->bitmap_top;
sbit->xadvance = (FT_Char)xadvance;
sbit->yadvance = (FT_Char)yadvance;
sbit->format = (FT_Byte)bitmap->pixel_mode;
sbit->max_grays = (FT_Byte)(bitmap->num_grays - 1);
/* copy the bitmap into a new buffer -- ignore error */
error = ftc_sbit_copy_bitmap( sbit, bitmap, memory );
/* now, compute size */
if ( asize )
*asize = FT_ABS( sbit->pitch ) * sbit->height;
} /* glyph loading successful */
/* ignore the errors that might have occurred -- */
/* we mark unloaded glyphs with `sbit.buffer == 0' */
/* and `width == 255', `height == 0' */
/* */
if ( error && FT_ERR_NEQ( error, Out_Of_Memory ) )
{
BadGlyph:
sbit->width = 255;
sbit->height = 0;
sbit->buffer = NULL;
error = FT_Err_Ok;
if ( asize )
*asize = 0;
}
return error;
}
|
ftc_snode_load( FTC_SNode snode,
FTC_Manager manager,
FT_UInt gindex,
FT_ULong *asize )
{
FT_Error error;
FTC_GNode gnode = FTC_GNODE( snode );
FTC_Family family = gnode->family;
FT_Memory memory = manager->memory;
FT_Face face;
FTC_SBit sbit;
FTC_SFamilyClass clazz;
if ( (FT_UInt)(gindex - gnode->gindex) >= snode->count )
{
FT_ERROR(( "ftc_snode_load: invalid glyph index" ));
return FT_THROW( Invalid_Argument );
}
sbit = snode->sbits + ( gindex - gnode->gindex );
clazz = (FTC_SFamilyClass)family->clazz;
sbit->buffer = 0;
error = clazz->family_load_glyph( family, gindex, manager, &face );
if ( error )
goto BadGlyph;
{
FT_Int temp;
FT_GlyphSlot slot = face->glyph;
FT_Bitmap* bitmap = &slot->bitmap;
FT_Pos xadvance, yadvance; /* FT_GlyphSlot->advance.{x|y} */
if ( slot->format != FT_GLYPH_FORMAT_BITMAP )
{
FT_TRACE0(( "ftc_snode_load:"
" glyph loaded didn't return a bitmap\n" ));
goto BadGlyph;
}
/* Check that our values fit into 8-bit containers! */
/* If this is not the case, our bitmap is too large */
/* and we will leave it as `missing' with sbit.buffer = 0 */
#define CHECK_CHAR( d ) ( temp = (FT_Char)d, temp == d )
#define CHECK_BYTE( d ) ( temp = (FT_Byte)d, temp == d )
/* horizontal advance in pixels */
xadvance = ( slot->advance.x + 32 ) >> 6;
yadvance = ( slot->advance.y + 32 ) >> 6;
if ( !CHECK_BYTE( bitmap->rows ) ||
!CHECK_BYTE( bitmap->width ) ||
!CHECK_CHAR( bitmap->pitch ) ||
!CHECK_CHAR( slot->bitmap_left ) ||
!CHECK_CHAR( slot->bitmap_top ) ||
!CHECK_CHAR( xadvance ) ||
!CHECK_CHAR( yadvance ) )
{
FT_TRACE2(( "ftc_snode_load:"
" glyph too large for small bitmap cache\n"));
goto BadGlyph;
}
sbit->width = (FT_Byte)bitmap->width;
sbit->height = (FT_Byte)bitmap->rows;
sbit->pitch = (FT_Char)bitmap->pitch;
sbit->left = (FT_Char)slot->bitmap_left;
sbit->top = (FT_Char)slot->bitmap_top;
sbit->xadvance = (FT_Char)xadvance;
sbit->yadvance = (FT_Char)yadvance;
sbit->format = (FT_Byte)bitmap->pixel_mode;
sbit->max_grays = (FT_Byte)(bitmap->num_grays - 1);
/* copy the bitmap into a new buffer -- ignore error */
error = ftc_sbit_copy_bitmap( sbit, bitmap, memory );
/* now, compute size */
if ( asize )
*asize = FT_ABS( sbit->pitch ) * sbit->height;
} /* glyph loading successful */
/* ignore the errors that might have occurred -- */
/* we mark unloaded glyphs with `sbit.buffer == 0' */
/* and `width == 255', `height == 0' */
/* */
if ( error && FT_ERR_NEQ( error, Out_Of_Memory ) )
{
BadGlyph:
sbit->width = 255;
sbit->height = 0;
sbit->buffer = NULL;
error = FT_Err_Ok;
if ( asize )
*asize = 0;
}
return error;
}
|
C
|
savannah
| 1 |
CVE-2018-6103
|
https://www.cvedetails.com/cve/CVE-2018-6103/
|
CWE-20
|
https://github.com/chromium/chromium/commit/12c876ae82355de6285bf0879023f1d1f1822ecf
|
12c876ae82355de6285bf0879023f1d1f1822ecf
|
Fix MediaObserver notifications in MediaStreamManager.
This CL fixes the stream type used to notify MediaObserver about
cancelled MediaStream requests.
Before this CL, NUM_MEDIA_TYPES was used as stream type to indicate
that all stream types should be cancelled.
However, the MediaObserver end does not interpret NUM_MEDIA_TYPES this
way and the request to update the UI is ignored.
This CL sends a separate notification for each stream type so that the
UI actually gets updated for all stream types in use.
Bug: 816033
Change-Id: Ib7d3b3046d1dd0976627f8ab38abf086eacc9405
Reviewed-on: https://chromium-review.googlesource.com/939630
Commit-Queue: Guido Urdaneta <[email protected]>
Reviewed-by: Raymes Khoury <[email protected]>
Cr-Commit-Position: refs/heads/master@{#540122}
|
DeviceRequest(
int requesting_process_id,
int requesting_frame_id,
int page_request_id,
const url::Origin& security_origin,
bool user_gesture,
MediaStreamRequestType request_type,
const StreamControls& controls,
const std::string& salt,
DeviceStoppedCallback device_stopped_cb = DeviceStoppedCallback())
: requesting_process_id(requesting_process_id),
requesting_frame_id(requesting_frame_id),
page_request_id(page_request_id),
security_origin(security_origin),
user_gesture(user_gesture),
request_type(request_type),
controls(controls),
salt(salt),
device_stopped_cb(std::move(device_stopped_cb)),
state_(NUM_MEDIA_TYPES, MEDIA_REQUEST_STATE_NOT_REQUESTED),
audio_type_(MEDIA_NO_SERVICE),
video_type_(MEDIA_NO_SERVICE),
target_process_id_(-1),
target_frame_id_(-1) {}
|
DeviceRequest(
int requesting_process_id,
int requesting_frame_id,
int page_request_id,
const url::Origin& security_origin,
bool user_gesture,
MediaStreamRequestType request_type,
const StreamControls& controls,
const std::string& salt,
DeviceStoppedCallback device_stopped_cb = DeviceStoppedCallback())
: requesting_process_id(requesting_process_id),
requesting_frame_id(requesting_frame_id),
page_request_id(page_request_id),
security_origin(security_origin),
user_gesture(user_gesture),
request_type(request_type),
controls(controls),
salt(salt),
device_stopped_cb(std::move(device_stopped_cb)),
state_(NUM_MEDIA_TYPES, MEDIA_REQUEST_STATE_NOT_REQUESTED),
audio_type_(MEDIA_NO_SERVICE),
video_type_(MEDIA_NO_SERVICE),
target_process_id_(-1),
target_frame_id_(-1) {}
|
C
|
Chrome
| 0 |
CVE-2014-7822
|
https://www.cvedetails.com/cve/CVE-2014-7822/
|
CWE-264
|
https://github.com/torvalds/linux/commit/8d0207652cbe27d1f962050737848e5ad4671958
|
8d0207652cbe27d1f962050737848e5ad4671958
|
->splice_write() via ->write_iter()
iter_file_splice_write() - a ->splice_write() instance that gathers the
pipe buffers, builds a bio_vec-based iov_iter covering those and feeds
it to ->write_iter(). A bunch of simple cases coverted to that...
[AV: fixed the braino spotted by Cyrill]
Signed-off-by: Al Viro <[email protected]>
|
static int exofs_file_fsync(struct file *filp, loff_t start, loff_t end,
int datasync)
{
struct inode *inode = filp->f_mapping->host;
int ret;
ret = filemap_write_and_wait_range(inode->i_mapping, start, end);
if (ret)
return ret;
mutex_lock(&inode->i_mutex);
ret = sync_inode_metadata(filp->f_mapping->host, 1);
mutex_unlock(&inode->i_mutex);
return ret;
}
|
static int exofs_file_fsync(struct file *filp, loff_t start, loff_t end,
int datasync)
{
struct inode *inode = filp->f_mapping->host;
int ret;
ret = filemap_write_and_wait_range(inode->i_mapping, start, end);
if (ret)
return ret;
mutex_lock(&inode->i_mutex);
ret = sync_inode_metadata(filp->f_mapping->host, 1);
mutex_unlock(&inode->i_mutex);
return ret;
}
|
C
|
linux
| 0 |
CVE-2016-5164
|
https://www.cvedetails.com/cve/CVE-2016-5164/
|
CWE-79
|
https://github.com/chromium/chromium/commit/93bc623489bdcfc7e9127614fcfb3258edf3f0f9
|
93bc623489bdcfc7e9127614fcfb3258edf3f0f9
|
[DevTools] Copy objects from debugger context to inspected context properly.
BUG=637594
Review-Url: https://codereview.chromium.org/2253643002
Cr-Commit-Position: refs/heads/master@{#412436}
|
void V8Console::timeStampCallback(const v8::FunctionCallbackInfo<v8::Value>& info)
{
ConsoleHelper helper(info);
if (V8InspectorClient* client = helper.ensureDebuggerClient())
client->consoleTimeStamp(helper.firstArgToString(String16()));
}
|
void V8Console::timeStampCallback(const v8::FunctionCallbackInfo<v8::Value>& info)
{
ConsoleHelper helper(info);
if (V8InspectorClient* client = helper.ensureDebuggerClient())
client->consoleTimeStamp(helper.firstArgToString(String16()));
}
|
C
|
Chrome
| 0 |
CVE-2018-17204
|
https://www.cvedetails.com/cve/CVE-2018-17204/
|
CWE-617
|
https://github.com/openvswitch/ovs/commit/4af6da3b275b764b1afe194df6499b33d2bf4cde
|
4af6da3b275b764b1afe194df6499b33d2bf4cde
|
ofp-group: Don't assert-fail decoding bad OF1.5 group mod type or command.
When decoding a group mod, the current code validates the group type and
command after the whole group mod has been decoded. The OF1.5 decoder,
however, tries to use the type and command earlier, when it might still be
invalid. This caused an assertion failure (via OVS_NOT_REACHED). This
commit fixes the problem.
ovs-vswitchd does not enable support for OpenFlow 1.5 by default.
Reported-at: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=9249
Signed-off-by: Ben Pfaff <[email protected]>
Reviewed-by: Yifeng Sun <[email protected]>
|
ofputil_pull_queue_get_config_reply(struct ofpbuf *msg,
struct ofputil_queue_config *queue)
{
enum ofpraw raw;
if (!msg->header) {
/* Pull OpenFlow header. */
raw = ofpraw_pull_assert(msg);
/* Pull protocol-specific ofp_queue_get_config_reply header (OF1.4
* doesn't have one at all). */
if (raw == OFPRAW_OFPT10_QUEUE_GET_CONFIG_REPLY) {
ofpbuf_pull(msg, sizeof(struct ofp10_queue_get_config_reply));
} else if (raw == OFPRAW_OFPT11_QUEUE_GET_CONFIG_REPLY) {
ofpbuf_pull(msg, sizeof(struct ofp11_queue_get_config_reply));
} else {
ovs_assert(raw == OFPRAW_OFPST14_QUEUE_DESC_REPLY);
}
} else {
raw = ofpraw_decode_assert(msg->header);
}
queue->min_rate = UINT16_MAX;
queue->max_rate = UINT16_MAX;
if (!msg->size) {
return EOF;
} else if (raw == OFPRAW_OFPST14_QUEUE_DESC_REPLY) {
return ofputil_pull_queue_get_config_reply14(msg, queue);
} else {
return ofputil_pull_queue_get_config_reply10(msg, queue);
}
}
|
ofputil_pull_queue_get_config_reply(struct ofpbuf *msg,
struct ofputil_queue_config *queue)
{
enum ofpraw raw;
if (!msg->header) {
/* Pull OpenFlow header. */
raw = ofpraw_pull_assert(msg);
/* Pull protocol-specific ofp_queue_get_config_reply header (OF1.4
* doesn't have one at all). */
if (raw == OFPRAW_OFPT10_QUEUE_GET_CONFIG_REPLY) {
ofpbuf_pull(msg, sizeof(struct ofp10_queue_get_config_reply));
} else if (raw == OFPRAW_OFPT11_QUEUE_GET_CONFIG_REPLY) {
ofpbuf_pull(msg, sizeof(struct ofp11_queue_get_config_reply));
} else {
ovs_assert(raw == OFPRAW_OFPST14_QUEUE_DESC_REPLY);
}
} else {
raw = ofpraw_decode_assert(msg->header);
}
queue->min_rate = UINT16_MAX;
queue->max_rate = UINT16_MAX;
if (!msg->size) {
return EOF;
} else if (raw == OFPRAW_OFPST14_QUEUE_DESC_REPLY) {
return ofputil_pull_queue_get_config_reply14(msg, queue);
} else {
return ofputil_pull_queue_get_config_reply10(msg, queue);
}
}
|
C
|
ovs
| 0 |
CVE-2015-3834
|
https://www.cvedetails.com/cve/CVE-2015-3834/
|
CWE-189
|
https://android.googlesource.com/platform/frameworks/av/+/c82e31a7039a03dca7b37c65b7890ba5c1e18ced
|
c82e31a7039a03dca7b37c65b7890ba5c1e18ced
|
HDCP: buffer over flow check -- DO NOT MERGE
bug: 20222489
Change-Id: I3a64a5999d68ea243d187f12ec7717b7f26d93a3
(cherry picked from commit 532cd7b86a5fdc7b9a30a45d8ae2d16ef7660a72)
|
virtual status_t decrypt(
const void *inData, size_t size,
uint32_t streamCTR, uint64_t inputCTR,
void *outData) {
Parcel data, reply;
data.writeInterfaceToken(IHDCP::getInterfaceDescriptor());
data.writeInt32(size);
data.write(inData, size);
data.writeInt32(streamCTR);
data.writeInt64(inputCTR);
remote()->transact(HDCP_DECRYPT, data, &reply);
status_t err = reply.readInt32();
if (err != OK) {
return err;
}
reply.read(outData, size);
return err;
}
|
virtual status_t decrypt(
const void *inData, size_t size,
uint32_t streamCTR, uint64_t inputCTR,
void *outData) {
Parcel data, reply;
data.writeInterfaceToken(IHDCP::getInterfaceDescriptor());
data.writeInt32(size);
data.write(inData, size);
data.writeInt32(streamCTR);
data.writeInt64(inputCTR);
remote()->transact(HDCP_DECRYPT, data, &reply);
status_t err = reply.readInt32();
if (err != OK) {
return err;
}
reply.read(outData, size);
return err;
}
|
C
|
Android
| 0 |
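The CVE-2015-3834 fix adds a bounds check on the service side of this interface before trusting a size read from the parcel; a generic sketch of that kind of guard, where the limit and names are assumptions rather than the Android source:
#include <stdint.h>
enum { MAX_TRANSFER = 64 * 1024 };   /* illustrative upper bound */
/* Reject IPC-supplied lengths before they reach any copy or read call. */
static int ipc_size_ok(int32_t size)
{
    return size >= 0 && size <= MAX_TRANSFER;
}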
CVE-2014-2739
|
https://www.cvedetails.com/cve/CVE-2014-2739/
|
CWE-20
|
https://github.com/torvalds/linux/commit/b2853fd6c2d0f383dbdf7427e263eb576a633867
|
b2853fd6c2d0f383dbdf7427e263eb576a633867
|
IB/core: Don't resolve passive side RoCE L2 address in CMA REQ handler
The code that resolves the passive side source MAC within the rdma_cm
connection request handler was both redundant and buggy, so remove it.
It was redundant since later, when an RC QP is modified to RTR state,
the resolution will take place in the ib_core module. It was buggy
because this callback also deals with UD SIDR exchange, for which we
incorrectly looked at the REQ member of the CM event and dereferenced
a random value.
Fixes: dd5f03beb4f7 ("IB/core: Ethernet L2 attributes in verbs/cm structures")
Signed-off-by: Moni Shoua <[email protected]>
Signed-off-by: Or Gerlitz <[email protected]>
Signed-off-by: Roland Dreier <[email protected]>
|
static struct cm_timewait_info * cm_insert_remote_qpn(struct cm_timewait_info
*timewait_info)
{
struct rb_node **link = &cm.remote_qp_table.rb_node;
struct rb_node *parent = NULL;
struct cm_timewait_info *cur_timewait_info;
__be64 remote_ca_guid = timewait_info->remote_ca_guid;
__be32 remote_qpn = timewait_info->remote_qpn;
while (*link) {
parent = *link;
cur_timewait_info = rb_entry(parent, struct cm_timewait_info,
remote_qp_node);
if (be32_lt(remote_qpn, cur_timewait_info->remote_qpn))
link = &(*link)->rb_left;
else if (be32_gt(remote_qpn, cur_timewait_info->remote_qpn))
link = &(*link)->rb_right;
else if (be64_lt(remote_ca_guid, cur_timewait_info->remote_ca_guid))
link = &(*link)->rb_left;
else if (be64_gt(remote_ca_guid, cur_timewait_info->remote_ca_guid))
link = &(*link)->rb_right;
else
return cur_timewait_info;
}
timewait_info->inserted_remote_qp = 1;
rb_link_node(&timewait_info->remote_qp_node, parent, link);
rb_insert_color(&timewait_info->remote_qp_node, &cm.remote_qp_table);
return NULL;
}
|
static struct cm_timewait_info * cm_insert_remote_qpn(struct cm_timewait_info
*timewait_info)
{
struct rb_node **link = &cm.remote_qp_table.rb_node;
struct rb_node *parent = NULL;
struct cm_timewait_info *cur_timewait_info;
__be64 remote_ca_guid = timewait_info->remote_ca_guid;
__be32 remote_qpn = timewait_info->remote_qpn;
while (*link) {
parent = *link;
cur_timewait_info = rb_entry(parent, struct cm_timewait_info,
remote_qp_node);
if (be32_lt(remote_qpn, cur_timewait_info->remote_qpn))
link = &(*link)->rb_left;
else if (be32_gt(remote_qpn, cur_timewait_info->remote_qpn))
link = &(*link)->rb_right;
else if (be64_lt(remote_ca_guid, cur_timewait_info->remote_ca_guid))
link = &(*link)->rb_left;
else if (be64_gt(remote_ca_guid, cur_timewait_info->remote_ca_guid))
link = &(*link)->rb_right;
else
return cur_timewait_info;
}
timewait_info->inserted_remote_qp = 1;
rb_link_node(&timewait_info->remote_qp_node, parent, link);
rb_insert_color(&timewait_info->remote_qp_node, &cm.remote_qp_table);
return NULL;
}
|
C
|
linux
| 0 |
CVE-2011-4324
|
https://www.cvedetails.com/cve/CVE-2011-4324/
| null |
https://github.com/torvalds/linux/commit/dc0b027dfadfcb8a5504f7d8052754bf8d501ab9
|
dc0b027dfadfcb8a5504f7d8052754bf8d501ab9
|
NFSv4: Convert the open and close ops to use fmode
Signed-off-by: Trond Myklebust <[email protected]>
|
static void nfs4_open_confirm_done(struct rpc_task *task, void *calldata)
{
struct nfs4_opendata *data = calldata;
data->rpc_status = task->tk_status;
if (RPC_ASSASSINATED(task))
return;
if (data->rpc_status == 0) {
memcpy(data->o_res.stateid.data, data->c_res.stateid.data,
sizeof(data->o_res.stateid.data));
nfs_confirm_seqid(&data->owner->so_seqid, 0);
renew_lease(data->o_res.server, data->timestamp);
data->rpc_done = 1;
}
}
|
static void nfs4_open_confirm_done(struct rpc_task *task, void *calldata)
{
struct nfs4_opendata *data = calldata;
data->rpc_status = task->tk_status;
if (RPC_ASSASSINATED(task))
return;
if (data->rpc_status == 0) {
memcpy(data->o_res.stateid.data, data->c_res.stateid.data,
sizeof(data->o_res.stateid.data));
nfs_confirm_seqid(&data->owner->so_seqid, 0);
renew_lease(data->o_res.server, data->timestamp);
data->rpc_done = 1;
}
}
|
C
|
linux
| 0 |
CVE-2015-4116
|
https://www.cvedetails.com/cve/CVE-2015-4116/
| null |
https://git.php.net/?p=php-src.git;a=commit;h=1cbd25ca15383394ffa9ee8601c5de4c0f2f90e1
|
1cbd25ca15383394ffa9ee8601c5de4c0f2f90e1
| null |
SPL_METHOD(SplHeap, rewind)
{
if (zend_parse_parameters_none() == FAILURE) {
return;
}
/* do nothing, the iterator always points to the top element */
}
|
SPL_METHOD(SplHeap, rewind)
{
if (zend_parse_parameters_none() == FAILURE) {
return;
}
/* do nothing, the iterator always points to the top element */
}
|
C
|
php
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/1161a49d663dd395bd639549c2dfe7324f847938
|
1161a49d663dd395bd639549c2dfe7324f847938
|
Don't populate URL data in WebDropData when dragging files.
This is considered a potential security issue as well, since it leaks
filesystem paths.
BUG=332579
Review URL: https://codereview.chromium.org/135633002
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@244538 0039d316-1c4b-4281-b951-d872f2087c98
|
base::string16 OmniboxViewViews::GetGrayTextAutocompletion() const {
#if defined(OS_WIN) || defined(USE_AURA)
return location_bar_view_->GetGrayTextAutocompletion();
#else
return base::string16();
#endif
}
|
base::string16 OmniboxViewViews::GetGrayTextAutocompletion() const {
#if defined(OS_WIN) || defined(USE_AURA)
return location_bar_view_->GetGrayTextAutocompletion();
#else
return base::string16();
#endif
}
|
C
|
Chrome
| 0 |
CVE-2011-2347
|
https://www.cvedetails.com/cve/CVE-2011-2347/
|
CWE-119
|
https://github.com/chromium/chromium/commit/60cc89e8d2e761dea28bb9e4cf9ebbad516bff09
|
60cc89e8d2e761dea28bb9e4cf9ebbad516bff09
|
iwyu: Include callback_old.h where appropriate, final.
BUG=82098
TEST=none
Review URL: http://codereview.chromium.org/7003003
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@85003 0039d316-1c4b-4281-b951-d872f2087c98
|
STDMETHODIMP UrlmonUrlRequest::OnDataAvailable(DWORD flags, DWORD size,
FORMATETC* formatetc,
STGMEDIUM* storage) {
DCHECK_EQ(thread_, base::PlatformThread::CurrentId());
DVLOG(1) << __FUNCTION__ << me() << "bytes available: " << size;
if (terminate_requested()) {
DVLOG(1) << " Download requested. INET_E_TERMINATED_BIND returned";
return INET_E_TERMINATED_BIND;
}
if (!storage || (storage->tymed != TYMED_ISTREAM)) {
NOTREACHED();
return E_INVALIDARG;
}
IStream* read_stream = storage->pstm;
if (!read_stream) {
NOTREACHED();
return E_UNEXPECTED;
}
if (size > 0)
pending_data_ = read_stream;
if (pending_read_size_) {
size_t bytes_copied = SendDataToDelegate(pending_read_size_);
DVLOG(1) << __FUNCTION__ << me() << "size read: " << bytes_copied;
} else {
DVLOG(1) << __FUNCTION__ << me() << "- waiting for remote read";
}
if (BSCF_LASTDATANOTIFICATION & flags) {
if (!is_expecting_download_ || pending()) {
DVLOG(1) << __FUNCTION__ << me() << "EOF";
return S_OK;
}
DVLOG(1) << __FUNCTION__ << " EOF: INET_E_TERMINATED_BIND returned";
return INET_E_TERMINATED_BIND;
}
return S_OK;
}
|
STDMETHODIMP UrlmonUrlRequest::OnDataAvailable(DWORD flags, DWORD size,
FORMATETC* formatetc,
STGMEDIUM* storage) {
DCHECK_EQ(thread_, base::PlatformThread::CurrentId());
DVLOG(1) << __FUNCTION__ << me() << "bytes available: " << size;
if (terminate_requested()) {
DVLOG(1) << " Download requested. INET_E_TERMINATED_BIND returned";
return INET_E_TERMINATED_BIND;
}
if (!storage || (storage->tymed != TYMED_ISTREAM)) {
NOTREACHED();
return E_INVALIDARG;
}
IStream* read_stream = storage->pstm;
if (!read_stream) {
NOTREACHED();
return E_UNEXPECTED;
}
if (size > 0)
pending_data_ = read_stream;
if (pending_read_size_) {
size_t bytes_copied = SendDataToDelegate(pending_read_size_);
DVLOG(1) << __FUNCTION__ << me() << "size read: " << bytes_copied;
} else {
DVLOG(1) << __FUNCTION__ << me() << "- waiting for remote read";
}
if (BSCF_LASTDATANOTIFICATION & flags) {
if (!is_expecting_download_ || pending()) {
DVLOG(1) << __FUNCTION__ << me() << "EOF";
return S_OK;
}
DVLOG(1) << __FUNCTION__ << " EOF: INET_E_TERMINATED_BIND returned";
return INET_E_TERMINATED_BIND;
}
return S_OK;
}
|
C
|
Chrome
| 0 |
CVE-2013-1790
|
https://www.cvedetails.com/cve/CVE-2013-1790/
|
CWE-119
|
https://cgit.freedesktop.org/poppler/poppler/commit/?h=poppler-0.22&id=b1026b5978c385328f2a15a2185c599a563edf91
|
b1026b5978c385328f2a15a2185c599a563edf91
| null |
FilterStream::FilterStream(Stream *strA) {
str = strA;
}
|
FilterStream::FilterStream(Stream *strA) {
str = strA;
}
|
CPP
|
poppler
| 0 |
CVE-2011-4086
|
https://www.cvedetails.com/cve/CVE-2011-4086/
|
CWE-119
|
https://github.com/torvalds/linux/commit/15291164b22a357cb211b618adfef4fa82fc0de3
|
15291164b22a357cb211b618adfef4fa82fc0de3
|
jbd2: clear BH_Delay & BH_Unwritten in journal_unmap_buffer
journal_unmap_buffer()'s zap_buffer: code clears a lot of buffer head
state ala discard_buffer(), but does not touch _Delay or _Unwritten as
discard_buffer() does.
This can be problematic in some areas of the ext4 code which assume
that if they have found a buffer marked unwritten or delay, then it's
a live one. Perhaps those spots should check whether it is mapped
as well, but if jbd2 is going to tear down a buffer, let's really
tear it down completely.
Without this I get some fsx failures on sub-page-block filesystems
up until v3.2, at which point 4e96b2dbbf1d7e81f22047a50f862555a6cb87cb
and 189e868fa8fdca702eb9db9d8afc46b5cb9144c9 make the failures go
away, because buried within that large change is some more flag
clearing. I still think it's worth doing in jbd2, since
->invalidatepage leads here directly, and it's the right place
to clear away these flags.
Signed-off-by: Eric Sandeen <[email protected]>
Signed-off-by: "Theodore Ts'o" <[email protected]>
Cc: [email protected]
|
__blist_del_buffer(struct journal_head **list, struct journal_head *jh)
{
if (*list == jh) {
*list = jh->b_tnext;
if (*list == jh)
*list = NULL;
}
jh->b_tprev->b_tnext = jh->b_tnext;
jh->b_tnext->b_tprev = jh->b_tprev;
}
|
__blist_del_buffer(struct journal_head **list, struct journal_head *jh)
{
if (*list == jh) {
*list = jh->b_tnext;
if (*list == jh)
*list = NULL;
}
jh->b_tprev->b_tnext = jh->b_tnext;
jh->b_tnext->b_tprev = jh->b_tprev;
}
|
C
|
linux
| 0 |
CVE-2014-1446
|
https://www.cvedetails.com/cve/CVE-2014-1446/
|
CWE-399
|
https://github.com/torvalds/linux/commit/8e3fbf870481eb53b2d3a322d1fc395ad8b367ed
|
8e3fbf870481eb53b2d3a322d1fc395ad8b367ed
|
hamradio/yam: fix info leak in ioctl
The yam_ioctl() code fails to initialise the cmd field
of the struct yamdrv_ioctl_cfg. Add an explicit memset(0)
before filling the structure to avoid the 4-byte info leak.
Signed-off-by: Salva Peiró <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
static void yam_seq_stop(struct seq_file *seq, void *v)
{
}
|
static void yam_seq_stop(struct seq_file *seq, void *v)
{
}
|
C
|
linux
| 0 |
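A sketch of the memset(0) fix described in the CVE-2014-1446 commit message above: clear the whole structure before filling it, so the never-written cmd field and any padding cannot leak stale kernel memory. The field names approximate the yam driver's layout:
static void fill_yam_cfg(struct yamdrv_ioctl_cfg *co, int iobase, int irq)
{
    memset(co, 0, sizeof(*co));   /* zero cmd and padding first */
    co->cfg.mask = 0xffffffff;    /* ...then set only the live fields */
    co->cfg.iobase = iobase;
    co->cfg.irq = irq;
}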
CVE-2018-6091
|
https://www.cvedetails.com/cve/CVE-2018-6091/
| null |
https://github.com/chromium/chromium/commit/59ad2dcbe6dd5c5d846944258e6cd26a700ade83
|
59ad2dcbe6dd5c5d846944258e6cd26a700ade83
|
service worker: Disable interception when OBJECT/EMBED uses ImageLoader.
Per the specification, service worker should not intercept requests for
OBJECT/EMBED elements.
R=kinuko
Bug: 771933
Change-Id: Ia6da6107dc5c68aa2c2efffde14bd2c51251fbd4
Reviewed-on: https://chromium-review.googlesource.com/927303
Reviewed-by: Kinuko Yasuda <[email protected]>
Commit-Queue: Matt Falkenhagen <[email protected]>
Cr-Commit-Position: refs/heads/master@{#538027}
|
void ImageLoader::DecodeRequest::ProcessForTask() {
if (!loader_)
return;
DCHECK_EQ(state_, kPendingMicrotask);
state_ = kPendingLoad;
loader_->DispatchDecodeRequestsIfComplete();
}
|
void ImageLoader::DecodeRequest::ProcessForTask() {
if (!loader_)
return;
DCHECK_EQ(state_, kPendingMicrotask);
state_ = kPendingLoad;
loader_->DispatchDecodeRequestsIfComplete();
}
|
C
|
Chrome
| 0 |
CVE-2014-2669
|
https://www.cvedetails.com/cve/CVE-2014-2669/
|
CWE-189
|
https://github.com/postgres/postgres/commit/31400a673325147e1205326008e32135a78b4d8a
|
31400a673325147e1205326008e32135a78b4d8a
|
Predict integer overflow to avoid buffer overruns.
Several functions, mostly type input functions, calculated an allocation
size such that the calculation wrapped to a small positive value when
arguments implied a sufficiently-large requirement. Writes past the end
of the inadvertent small allocation followed shortly thereafter.
Coverity identified the path_in() vulnerability; code inspection led to
the rest. In passing, add check_stack_depth() to prevent stack overflow
in related functions.
Back-patch to 8.4 (all supported versions). The non-comment hstore
changes touch code that did not exist in 8.4, so that part stops at 9.0.
Noah Misch and Heikki Linnakangas, reviewed by Tom Lane.
Security: CVE-2014-0064
|
box_fill(BOX *result, double x1, double x2, double y1, double y2)
{
if (x1 > x2)
{
result->high.x = x1;
result->low.x = x2;
}
else
{
result->high.x = x2;
result->low.x = x1;
}
if (y1 > y2)
{
result->high.y = y1;
result->low.y = y2;
}
else
{
result->high.y = y2;
result->low.y = y1;
}
return result;
}
|
box_fill(BOX *result, double x1, double x2, double y1, double y2)
{
if (x1 > x2)
{
result->high.x = x1;
result->low.x = x2;
}
else
{
result->high.x = x2;
result->low.x = x1;
}
if (y1 > y2)
{
result->high.y = y1;
result->low.y = y2;
}
else
{
result->high.y = y2;
result->low.y = y1;
}
return result;
}
|
C
|
postgres
| 0 |
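A standalone version of the "predict integer overflow" pattern from the CVE-2014-2669 commit message: test the element count before the multiplication that sizes the allocation, assuming elem_size is nonzero. Illustrative code, not the PostgreSQL source:
#include <stddef.h>
#include <stdint.h>
/* Returns nonzero iff header + nelems * elem_size fits in size_t. */
static int alloc_size_fits(size_t nelems, size_t elem_size, size_t header)
{
    return nelems <= (SIZE_MAX - header) / elem_size;
}
path_in() and the other type input functions would reject the input when this fails, rather than wrapping to a small positive request and writing past the end of the allocation.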
CVE-2017-5091
|
https://www.cvedetails.com/cve/CVE-2017-5091/
|
CWE-416
|
https://github.com/chromium/chromium/commit/d007b8b750851fe1b375c463009ea3b24e5c021d
|
d007b8b750851fe1b375c463009ea3b24e5c021d
|
[IndexedDB] Fix Cursor UAF
If the connection is closed before we return a cursor, it dies in
IndexedDBCallbacks::IOThreadHelper::SendSuccessCursor. It's deleted on
the correct thread, but we also need to makes sure to remove it from its
transaction.
To make things simpler, we have the cursor remove itself from its
transaction on destruction.
R: [email protected]
Bug: 728887
Change-Id: I8c76e6195c2490137a05213e47c635d12f4d3dd2
Reviewed-on: https://chromium-review.googlesource.com/526284
Commit-Queue: Daniel Murphy <[email protected]>
Reviewed-by: Victor Costan <[email protected]>
Cr-Commit-Position: refs/heads/master@{#477504}
|
void CursorImpl::Continue(
const IndexedDBKey& key,
const IndexedDBKey& primary_key,
::indexed_db::mojom::CallbacksAssociatedPtrInfo callbacks_info) {
scoped_refptr<IndexedDBCallbacks> callbacks(
new IndexedDBCallbacks(dispatcher_host_->AsWeakPtr(), origin_,
std::move(callbacks_info), idb_runner_));
idb_runner_->PostTask(
FROM_HERE,
base::Bind(&IDBThreadHelper::Continue, base::Unretained(helper_), key,
primary_key, base::Passed(&callbacks)));
}
|
void CursorImpl::Continue(
const IndexedDBKey& key,
const IndexedDBKey& primary_key,
::indexed_db::mojom::CallbacksAssociatedPtrInfo callbacks_info) {
scoped_refptr<IndexedDBCallbacks> callbacks(
new IndexedDBCallbacks(dispatcher_host_->AsWeakPtr(), origin_,
std::move(callbacks_info), idb_runner_));
idb_runner_->PostTask(
FROM_HERE,
base::Bind(&IDBThreadHelper::Continue, base::Unretained(helper_), key,
primary_key, base::Passed(&callbacks)));
}
|
C
|
Chrome
| 0 |
CVE-2011-1943
|
https://www.cvedetails.com/cve/CVE-2011-1943/
|
CWE-200
|
https://cgit.freedesktop.org/NetworkManager/NetworkManager/commit/?id=78ce088843d59d4494965bfc40b30a2e63d065f6
|
78ce088843d59d4494965bfc40b30a2e63d065f6
| null |
destroy_one_secret (gpointer data)
{
char *secret = (char *) data;
/* Don't leave the secret lying around in memory */
memset (secret, 0, strlen (secret));
g_free (secret);
}
|
destroy_one_secret (gpointer data)
{
char *secret = (char *) data;
/* Don't leave the secret lying around in memory */
g_message ("%s: destroying %s", __func__, secret);
memset (secret, 0, strlen (secret));
g_free (secret);
}
|
C
|
NetworkManager
| 1 |
CVE-2015-8126
|
https://www.cvedetails.com/cve/CVE-2015-8126/
|
CWE-119
|
https://github.com/chromium/chromium/commit/7f3d85b096f66870a15b37c2f40b219b2e292693
|
7f3d85b096f66870a15b37c2f40b219b2e292693
|
third_party/libpng: update to 1.2.54
[email protected]
BUG=560291
Review URL: https://codereview.chromium.org/1467263003
Cr-Commit-Position: refs/heads/master@{#362298}
|
png_get_y_pixels_per_inch(png_structp png_ptr, png_infop info_ptr)
{
return ((png_uint_32)((float)png_get_y_pixels_per_meter(png_ptr, info_ptr)
*.0254 +.5));
}
|
png_get_y_pixels_per_inch(png_structp png_ptr, png_infop info_ptr)
{
return ((png_uint_32)((float)png_get_y_pixels_per_meter(png_ptr, info_ptr)
*.0254 +.5));
}
|
C
|
Chrome
| 0 |
CVE-2014-3610
|
https://www.cvedetails.com/cve/CVE-2014-3610/
|
CWE-264
|
https://github.com/torvalds/linux/commit/854e8bb1aa06c578c2c9145fa6bfe3680ef63b23
|
854e8bb1aa06c578c2c9145fa6bfe3680ef63b23
|
KVM: x86: Check non-canonical addresses upon WRMSR
Upon WRMSR, the CPU should inject #GP if a non-canonical value (address) is
written to certain MSRs. The behavior is "almost" identical for AMD and Intel
(ignoring MSRs that are not implemented in either architecture since they would
anyhow #GP). However, IA32_SYSENTER_ESP and IA32_SYSENTER_EIP cause #GP if
non-canonical address is written on Intel but not on AMD (which ignores the top
32-bits).
Accordingly, this patch injects a #GP on the MSRs which behave identically on
Intel and AMD. To eliminate the differences between the architectures, the
value which is written to IA32_SYSENTER_ESP and IA32_SYSENTER_EIP is turned to
canonical value before writing instead of injecting a #GP.
Some references from Intel and AMD manuals:
According to Intel SDM description of WRMSR instruction #GP is expected on
WRMSR "If the source register contains a non-canonical address and ECX
specifies one of the following MSRs: IA32_DS_AREA, IA32_FS_BASE, IA32_GS_BASE,
IA32_KERNEL_GS_BASE, IA32_LSTAR, IA32_SYSENTER_EIP, IA32_SYSENTER_ESP."
According to the AMD instruction manual:
LSTAR/CSTAR (SYSCALL): "The WRMSR instruction loads the target RIP into the
LSTAR and CSTAR registers. If an RIP written by WRMSR is not in canonical
form, a general-protection exception (#GP) occurs."
IA32_GS_BASE and IA32_FS_BASE (WRFSBASE/WRGSBASE): "The address written to the
base field must be in canonical form or a #GP fault will occur."
IA32_KERNEL_GS_BASE (SWAPGS): "The address stored in the KernelGSbase MSR must
be in canonical form."
This patch fixes CVE-2014-3610.
Cc: [email protected]
Signed-off-by: Nadav Amit <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
static int wrmsr_interception(struct vcpu_svm *svm)
{
struct msr_data msr;
u32 ecx = svm->vcpu.arch.regs[VCPU_REGS_RCX];
u64 data = (svm->vcpu.arch.regs[VCPU_REGS_RAX] & -1u)
| ((u64)(svm->vcpu.arch.regs[VCPU_REGS_RDX] & -1u) << 32);
msr.data = data;
msr.index = ecx;
msr.host_initiated = false;
svm->next_rip = kvm_rip_read(&svm->vcpu) + 2;
if (kvm_set_msr(&svm->vcpu, &msr)) {
trace_kvm_msr_write_ex(ecx, data);
kvm_inject_gp(&svm->vcpu, 0);
} else {
trace_kvm_msr_write(ecx, data);
skip_emulated_instruction(&svm->vcpu);
}
return 1;
}
|
static int wrmsr_interception(struct vcpu_svm *svm)
{
struct msr_data msr;
u32 ecx = svm->vcpu.arch.regs[VCPU_REGS_RCX];
u64 data = (svm->vcpu.arch.regs[VCPU_REGS_RAX] & -1u)
| ((u64)(svm->vcpu.arch.regs[VCPU_REGS_RDX] & -1u) << 32);
msr.data = data;
msr.index = ecx;
msr.host_initiated = false;
svm->next_rip = kvm_rip_read(&svm->vcpu) + 2;
if (svm_set_msr(&svm->vcpu, &msr)) {
trace_kvm_msr_write_ex(ecx, data);
kvm_inject_gp(&svm->vcpu, 0);
} else {
trace_kvm_msr_write(ecx, data);
skip_emulated_instruction(&svm->vcpu);
}
return 1;
}
|
C
|
linux
| 1 |
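The canonical-address test this entry revolves around, as a self-contained sketch for 48-bit virtual addresses (the kernel's version is parameterized on the CPU's virtual address width):
#include <stdint.h>
/* Canonical iff bits 63..47 are all copies of bit 47, which a
 * sign-extension round trip through the top 16 bits verifies. */
static int is_canonical_48(uint64_t addr)
{
    return ((int64_t)(addr << 16) >> 16) == (int64_t)addr;
}
As the diff above shows, the fixed wrmsr_interception calls kvm_set_msr instead of svm_set_msr, which is where the patch centralizes this check so both the SVM and VMX paths inject #GP on non-canonical values.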
CVE-2012-5148
|
https://www.cvedetails.com/cve/CVE-2012-5148/
|
CWE-20
|
https://github.com/chromium/chromium/commit/e89cfcb9090e8c98129ae9160c513f504db74599
|
e89cfcb9090e8c98129ae9160c513f504db74599
|
Remove TabContents from TabStripModelObserver::TabDetachedAt.
BUG=107201
TEST=no visible change
Review URL: https://chromiumcodereview.appspot.com/11293205
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@167122 0039d316-1c4b-4281-b951-d872f2087c98
|
void InitialLoadObserver::Observe(int type,
const content::NotificationSource& source,
const content::NotificationDetails& details) {
if (type == content::NOTIFICATION_LOAD_START) {
if (outstanding_tab_count_ > loading_tabs_.size())
loading_tabs_.insert(TabTimeMap::value_type(
source.map_key(),
TabTime(base::TimeTicks::Now())));
} else if (type == content::NOTIFICATION_LOAD_STOP) {
if (outstanding_tab_count_ > finished_tabs_.size()) {
TabTimeMap::iterator iter = loading_tabs_.find(source.map_key());
if (iter != loading_tabs_.end()) {
finished_tabs_.insert(source.map_key());
iter->second.set_stop_time(base::TimeTicks::Now());
}
}
} else if (type == content::NOTIFICATION_RENDERER_PROCESS_CLOSED) {
base::TerminationStatus status =
content::Details<content::RenderProcessHost::RendererClosedDetails>(
details)->status;
switch (status) {
case base::TERMINATION_STATUS_NORMAL_TERMINATION:
break;
case base::TERMINATION_STATUS_ABNORMAL_TERMINATION:
case base::TERMINATION_STATUS_PROCESS_WAS_KILLED:
case base::TERMINATION_STATUS_PROCESS_CRASHED:
crashed_tab_count_++;
break;
case base::TERMINATION_STATUS_STILL_RUNNING:
LOG(ERROR) << "Got RENDERER_PROCESS_CLOSED notification, "
<< "but the process is still running. We may miss further "
<< "crash notification, resulting in hangs.";
break;
default:
LOG(ERROR) << "Unhandled termination status " << status;
NOTREACHED();
break;
}
} else {
NOTREACHED();
}
if (finished_tabs_.size() + crashed_tab_count_ >= outstanding_tab_count_)
ConditionMet();
}
|
void InitialLoadObserver::Observe(int type,
const content::NotificationSource& source,
const content::NotificationDetails& details) {
if (type == content::NOTIFICATION_LOAD_START) {
if (outstanding_tab_count_ > loading_tabs_.size())
loading_tabs_.insert(TabTimeMap::value_type(
source.map_key(),
TabTime(base::TimeTicks::Now())));
} else if (type == content::NOTIFICATION_LOAD_STOP) {
if (outstanding_tab_count_ > finished_tabs_.size()) {
TabTimeMap::iterator iter = loading_tabs_.find(source.map_key());
if (iter != loading_tabs_.end()) {
finished_tabs_.insert(source.map_key());
iter->second.set_stop_time(base::TimeTicks::Now());
}
}
} else if (type == content::NOTIFICATION_RENDERER_PROCESS_CLOSED) {
base::TerminationStatus status =
content::Details<content::RenderProcessHost::RendererClosedDetails>(
details)->status;
switch (status) {
case base::TERMINATION_STATUS_NORMAL_TERMINATION:
break;
case base::TERMINATION_STATUS_ABNORMAL_TERMINATION:
case base::TERMINATION_STATUS_PROCESS_WAS_KILLED:
case base::TERMINATION_STATUS_PROCESS_CRASHED:
crashed_tab_count_++;
break;
case base::TERMINATION_STATUS_STILL_RUNNING:
LOG(ERROR) << "Got RENDERER_PROCESS_CLOSED notification, "
<< "but the process is still running. We may miss further "
<< "crash notification, resulting in hangs.";
break;
default:
LOG(ERROR) << "Unhandled termination status " << status;
NOTREACHED();
break;
}
} else {
NOTREACHED();
}
if (finished_tabs_.size() + crashed_tab_count_ >= outstanding_tab_count_)
ConditionMet();
}
|
C
|
Chrome
| 0 |
CVE-2011-2906
|
https://www.cvedetails.com/cve/CVE-2011-2906/
|
CWE-189
|
https://github.com/torvalds/linux/commit/b5b515445f4f5a905c5dd27e6e682868ccd6c09d
|
b5b515445f4f5a905c5dd27e6e682868ccd6c09d
|
[SCSI] pmcraid: reject negative request size
There's a code path in pmcraid that can be reached via device ioctl that
causes all sorts of ugliness, including heap corruption or triggering the
OOM killer due to consecutive allocation of large numbers of pages.
First, the user can call pmcraid_chr_ioctl(), with a type
PMCRAID_PASSTHROUGH_IOCTL. This calls through to
pmcraid_ioctl_passthrough(). Next, a pmcraid_passthrough_ioctl_buffer
is copied in, and the request_size variable is set to
buffer->ioarcb.data_transfer_length, which is an arbitrary 32-bit
signed value provided by the user. If a negative value is provided
here, bad things can happen. For example,
pmcraid_build_passthrough_ioadls() is called with this request_size,
which immediately calls pmcraid_alloc_sglist() with a negative size.
The resulting math on allocating a scatter list can result in an
overflow in the kzalloc() call (if num_elem is 0, the sglist will be
smaller than expected), or if num_elem is unexpectedly large the
subsequent loop will call alloc_pages() repeatedly, a high number of
pages will be allocated and the OOM killer might be invoked.
It looks like preventing this value from being negative in
pmcraid_ioctl_passthrough() would be sufficient.
Signed-off-by: Dan Rosenberg <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
|
static ssize_t pmcraid_show_drv_version(
struct device *dev,
struct device_attribute *attr,
char *buf
)
{
return snprintf(buf, PAGE_SIZE, "version: %s\n",
PMCRAID_DRIVER_VERSION);
}
|
static ssize_t pmcraid_show_drv_version(
struct device *dev,
struct device_attribute *attr,
char *buf
)
{
return snprintf(buf, PAGE_SIZE, "version: %s\n",
PMCRAID_DRIVER_VERSION);
}
|
C
|
linux
| 0 |
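A minimal sketch of the CVE-2011-2906 fix: data_transfer_length arrives as a user-supplied signed 32-bit value, so reject negatives before it is ever used as an allocation size, where conversion would turn it into a huge size_t. Names are illustrative:
#include <stdint.h>
static int check_request_size(int32_t request_size)
{
    if (request_size < 0)
        return -1;   /* the kernel fix returns -EINVAL here */
    return 0;
}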
CVE-2016-9588
|
https://www.cvedetails.com/cve/CVE-2016-9588/
|
CWE-388
|
https://github.com/torvalds/linux/commit/ef85b67385436ddc1998f45f1d6a210f935b3388
|
ef85b67385436ddc1998f45f1d6a210f935b3388
|
kvm: nVMX: Allow L1 to intercept software exceptions (#BP and #OF)
When L2 exits to L0 due to "exception or NMI", software exceptions
(#BP and #OF) for which L1 has requested an intercept should be
handled by L1 rather than L0. Previously, only hardware exceptions
were forwarded to L1.
Signed-off-by: Jim Mattson <[email protected]>
Cc: [email protected]
Signed-off-by: Paolo Bonzini <[email protected]>
|
static void vmx_arm_hv_timer(struct kvm_vcpu *vcpu)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
u64 tscl;
u32 delta_tsc;
if (vmx->hv_deadline_tsc == -1)
return;
tscl = rdtsc();
if (vmx->hv_deadline_tsc > tscl)
/* sure to be 32 bit only because checked on set_hv_timer */
delta_tsc = (u32)((vmx->hv_deadline_tsc - tscl) >>
cpu_preemption_timer_multi);
else
delta_tsc = 0;
vmcs_write32(VMX_PREEMPTION_TIMER_VALUE, delta_tsc);
}
|
static void vmx_arm_hv_timer(struct kvm_vcpu *vcpu)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
u64 tscl;
u32 delta_tsc;
if (vmx->hv_deadline_tsc == -1)
return;
tscl = rdtsc();
if (vmx->hv_deadline_tsc > tscl)
/* sure to be 32 bit only because checked on set_hv_timer */
delta_tsc = (u32)((vmx->hv_deadline_tsc - tscl) >>
cpu_preemption_timer_multi);
else
delta_tsc = 0;
vmcs_write32(VMX_PREEMPTION_TIMER_VALUE, delta_tsc);
}
|
C
|
linux
| 0 |
CVE-2013-7421
|
https://www.cvedetails.com/cve/CVE-2013-7421/
|
CWE-264
|
https://github.com/torvalds/linux/commit/5d26a105b5a73e5635eae0629b42fa0a90e07b7b
|
5d26a105b5a73e5635eae0629b42fa0a90e07b7b
|
crypto: prefix module autoloading with "crypto-"
This prefixes all crypto module loading with "crypto-" so we never run
the risk of exposing module auto-loading to userspace via a crypto API,
as demonstrated by Mathias Krause:
https://lkml.org/lkml/2013/3/4/70
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
static void xtea_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
u32 y, z, sum;
struct tea_ctx *ctx = crypto_tfm_ctx(tfm);
const __le32 *in = (const __le32 *)src;
__le32 *out = (__le32 *)dst;
y = le32_to_cpu(in[0]);
z = le32_to_cpu(in[1]);
sum = XTEA_DELTA * XTEA_ROUNDS;
while (sum) {
z -= ((y << 4 ^ y >> 5) + y) ^ (sum + ctx->KEY[sum>>11 & 3]);
sum -= XTEA_DELTA;
y -= ((z << 4 ^ z >> 5) + z) ^ (sum + ctx->KEY[sum & 3]);
}
out[0] = cpu_to_le32(y);
out[1] = cpu_to_le32(z);
}
|
static void xtea_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
u32 y, z, sum;
struct tea_ctx *ctx = crypto_tfm_ctx(tfm);
const __le32 *in = (const __le32 *)src;
__le32 *out = (__le32 *)dst;
y = le32_to_cpu(in[0]);
z = le32_to_cpu(in[1]);
sum = XTEA_DELTA * XTEA_ROUNDS;
while (sum) {
z -= ((y << 4 ^ y >> 5) + y) ^ (sum + ctx->KEY[sum>>11 & 3]);
sum -= XTEA_DELTA;
y -= ((z << 4 ^ z >> 5) + z) ^ (sum + ctx->KEY[sum & 3]);
}
out[0] = cpu_to_le32(y);
out[1] = cpu_to_le32(z);
}
|
C
|
linux
| 0 |
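The CVE-2013-7421 hardening in one line: crypto algorithm names requested through the API are resolved via "crypto-"-prefixed module aliases, so a crafted algorithm name can only load modules that opted in with that alias. request_module() is the real kernel API; the wrapper is illustrative:
static int crypto_autoload(const char *alg_name)
{
    /* Only modules declaring MODULE_ALIAS_CRYPTO("<name>"), which this
     * patch series makes expand to a "crypto-<name>" alias, can match. */
    return request_module("crypto-%s", alg_name);
}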
CVE-2018-16435
|
https://www.cvedetails.com/cve/CVE-2018-16435/
|
CWE-190
|
https://github.com/mm2/Little-CMS/commit/768f70ca405cd3159d990e962d54456773bb8cf8
|
768f70ca405cd3159d990e962d54456773bb8cf8
|
Upgrade Visual studio 2017 15.8
- Upgrade to 15.8
- Add check on CGATS memory allocation (thanks to Quang Nguyen for
pointing this out)
|
cmsBool DataSection (cmsIT8* it8)
{
int iField = 0;
int iSet = 0;
char Buffer[256];
TABLE* t = GetTable(it8);
InSymbol(it8); // Eats "BEGIN_DATA"
CheckEOLN(it8);
if (!t->Data)
AllocateDataSet(it8);
while (it8->sy != SEND_DATA && it8->sy != SEOF)
{
if (iField >= t -> nSamples) {
iField = 0;
iSet++;
}
if (it8->sy != SEND_DATA && it8->sy != SEOF) {
if (!GetVal(it8, Buffer, 255, "Sample data expected"))
return FALSE;
if (!SetData(it8, iSet, iField, Buffer))
return FALSE;
iField++;
InSymbol(it8);
SkipEOLN(it8);
}
}
SkipEOLN(it8);
Skip(it8, SEND_DATA);
SkipEOLN(it8);
if ((iSet+1) != t -> nPatches)
return SynError(it8, "Count mismatch. NUMBER_OF_SETS was %d, found %d\n", t ->nPatches, iSet+1);
return TRUE;
}
|
cmsBool DataSection (cmsIT8* it8)
{
int iField = 0;
int iSet = 0;
char Buffer[256];
TABLE* t = GetTable(it8);
InSymbol(it8); // Eats "BEGIN_DATA"
CheckEOLN(it8);
if (!t->Data)
AllocateDataSet(it8);
while (it8->sy != SEND_DATA && it8->sy != SEOF)
{
if (iField >= t -> nSamples) {
iField = 0;
iSet++;
}
if (it8->sy != SEND_DATA && it8->sy != SEOF) {
if (!GetVal(it8, Buffer, 255, "Sample data expected"))
return FALSE;
if (!SetData(it8, iSet, iField, Buffer))
return FALSE;
iField++;
InSymbol(it8);
SkipEOLN(it8);
}
}
SkipEOLN(it8);
Skip(it8, SEND_DATA);
SkipEOLN(it8);
if ((iSet+1) != t -> nPatches)
return SynError(it8, "Count mismatch. NUMBER_OF_SETS was %d, found %d\n", t ->nPatches, iSet+1);
return TRUE;
}
|
C
|
Little-CMS
| 0 |
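A sketch of the allocation check added for CVE-2018-16435: DataSection() above calls AllocateDataSet(it8) and then stores into t->Data, so a failed allocation must be caught in between. The helper shape is an assumption; cmsBool, TABLE, AllocateDataSet and SynError all appear in the quoted code:
static cmsBool EnsureDataSet(cmsIT8* it8, TABLE* t)
{
    if (!t->Data)
        AllocateDataSet(it8);
    if (!t->Data)   /* AllocateDataSet() could not get memory */
        return SynError(it8, "AllocateDataSet: Out of memory");
    return TRUE;
}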
CVE-2016-1641
|
https://www.cvedetails.com/cve/CVE-2016-1641/
| null |
https://github.com/chromium/chromium/commit/75ca8ffd7bd7c58ace1144df05e1307d8d707662
|
75ca8ffd7bd7c58ace1144df05e1307d8d707662
|
Don't call WebContents::DownloadImage() callback if the WebContents were deleted
BUG=583718
Review URL: https://codereview.chromium.org/1685343004
Cr-Commit-Position: refs/heads/master@{#375700}
|
void WebContentsImpl::OnOpenColorChooser(
int color_chooser_id,
SkColor color,
const std::vector<ColorSuggestion>& suggestions) {
if (!HasValidFrameSource())
return;
ColorChooser* new_color_chooser = delegate_ ?
delegate_->OpenColorChooser(this, color, suggestions) :
NULL;
if (!new_color_chooser)
return;
if (color_chooser_info_.get())
color_chooser_info_->chooser->End();
color_chooser_info_.reset(new ColorChooserInfo(
render_frame_message_source_->GetProcess()->GetID(),
render_frame_message_source_->GetRoutingID(),
new_color_chooser,
color_chooser_id));
}
|
void WebContentsImpl::OnOpenColorChooser(
int color_chooser_id,
SkColor color,
const std::vector<ColorSuggestion>& suggestions) {
if (!HasValidFrameSource())
return;
ColorChooser* new_color_chooser = delegate_ ?
delegate_->OpenColorChooser(this, color, suggestions) :
NULL;
if (!new_color_chooser)
return;
if (color_chooser_info_.get())
color_chooser_info_->chooser->End();
color_chooser_info_.reset(new ColorChooserInfo(
render_frame_message_source_->GetProcess()->GetID(),
render_frame_message_source_->GetRoutingID(),
new_color_chooser,
color_chooser_id));
}
|
C
|
Chrome
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/87c15175997b0103166020d79fe9048dcf4025f4
|
87c15175997b0103166020d79fe9048dcf4025f4
|
Add support for horizontal mouse wheel messages in Windows Desktop Aura.
This is simply a matter of recognizing the WM_MOUSEHWHEEL message as a valid mouse wheel message.
Tested this on web pages with horizontal scrollbars and it works well.
BUG=332797
[email protected], sky
Review URL: https://codereview.chromium.org/140653006
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@245651 0039d316-1c4b-4281-b951-d872f2087c98
|
int GetTouchId(const base::NativeEvent& xev) {
NOTIMPLEMENTED();
return 0;
}
|
int GetTouchId(const base::NativeEvent& xev) {
NOTIMPLEMENTED();
return 0;
}
|
C
|
Chrome
| 0 |
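The commit above simply widens what counts as a wheel message. A sketch of the kind of predicate involved; WM_MOUSEWHEEL and WM_MOUSEHWHEEL are real Win32 constants, while the function name is a placeholder rather than the actual Aura code:

    #include <windows.h>

    static int is_mouse_wheel_message(UINT message)
    {
        /* Treat horizontal wheel ticks the same as vertical ones. */
        return message == WM_MOUSEWHEEL || message == WM_MOUSEHWHEEL;
    }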
CVE-2016-5219
|
https://www.cvedetails.com/cve/CVE-2016-5219/
|
CWE-416
|
https://github.com/chromium/chromium/commit/a4150b688a754d3d10d2ca385155b1c95d77d6ae
|
a4150b688a754d3d10d2ca385155b1c95d77d6ae
|
Add GL_PROGRAM_COMPLETION_QUERY_CHROMIUM
This makes the query of GL_COMPLETION_STATUS_KHR to programs much
cheaper by minimizing the round-trip to the GPU thread.
Bug: 881152, 957001
Change-Id: Iadfa798af29225e752c710ca5c25f50b3dd3101a
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1586630
Commit-Queue: Kenneth Russell <[email protected]>
Reviewed-by: Kentaro Hara <[email protected]>
Reviewed-by: Geoff Lang <[email protected]>
Reviewed-by: Kenneth Russell <[email protected]>
Cr-Commit-Position: refs/heads/master@{#657568}
|
void GLES2DecoderPassthroughImpl::RestoreState(const ContextState* prev_state) {
}
|
void GLES2DecoderPassthroughImpl::RestoreState(const ContextState* prev_state) {
}
|
C
|
Chrome
| 0 |
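For context on the query the commit makes cheaper: GL_COMPLETION_STATUS_KHR comes from KHR_parallel_shader_compile and is polled through glGetProgramiv. A usage sketch, assuming a current GL context and a program whose link was requested asynchronously:

    GLint done = GL_FALSE;
    glGetProgramiv(program, GL_COMPLETION_STATUS_KHR, &done);
    if (done == GL_TRUE) {
        /* Linking finished; querying GL_LINK_STATUS is now cheap. */
        GLint ok = GL_FALSE;
        glGetProgramiv(program, GL_LINK_STATUS, &ok);
    }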
CVE-2014-7909
|
https://www.cvedetails.com/cve/CVE-2014-7909/
|
CWE-189
|
https://github.com/chromium/chromium/commit/2571533bbb5b554ff47205c8ef1513ccc0817c3e
|
2571533bbb5b554ff47205c8ef1513ccc0817c3e
|
DocumentThreadableLoader: Add guards for sync notifyFinished() in setResource()
In loadRequest(), setResource() can call clear() synchronously:
DocumentThreadableLoader::clear()
DocumentThreadableLoader::handleError()
Resource::didAddClient()
RawResource::didAddClient()
and thus |m_client| can be null while resource() isn't null after setResource(),
causing crashes (Issue 595964).
This CL checks whether |*this| is destructed and
whether |m_client| is null after setResource().
BUG=595964
Review-Url: https://codereview.chromium.org/1902683002
Cr-Commit-Position: refs/heads/master@{#391001}
|
void DocumentThreadableLoader::start(const ResourceRequest& request)
{
ASSERT(m_async || request.httpReferrer().isEmpty());
m_sameOriginRequest = getSecurityOrigin()->canRequestNoSuborigin(request.url());
m_requestContext = request.requestContext();
m_redirectMode = request.fetchRedirectMode();
if (!m_sameOriginRequest && m_options.crossOriginRequestPolicy == DenyCrossOriginRequests) {
InspectorInstrumentation::documentThreadableLoaderFailedToStartLoadingForClient(m_document, m_client);
ThreadableLoaderClient* client = m_client;
clear();
client->didFail(ResourceError(errorDomainBlinkInternal, 0, request.url().getString(), "Cross origin requests are not supported."));
return;
}
m_requestStartedSeconds = monotonicallyIncreasingTime();
const HTTPHeaderMap& headerMap = request.httpHeaderFields();
for (const auto& header : headerMap) {
if (FetchUtils::isSimpleHeader(header.key, header.value)) {
m_simpleRequestHeaders.add(header.key, header.value);
} else if (equalIgnoringCase(header.key, HTTPNames::Range) && m_options.crossOriginRequestPolicy == UseAccessControl && m_options.preflightPolicy == PreventPreflight) {
m_simpleRequestHeaders.add(header.key, header.value);
}
}
if (request.httpMethod() != HTTPNames::GET) {
if (Page* page = m_document->page())
page->chromeClient().didObserveNonGetFetchFromScript();
}
if (m_async && !request.skipServiceWorker() && SchemeRegistry::shouldTreatURLSchemeAsAllowingServiceWorkers(request.url().protocol()) && m_document->fetcher()->isControlledByServiceWorker()) {
ResourceRequest newRequest(request);
const WebURLRequest::RequestContext requestContext(request.requestContext());
if (requestContext != WebURLRequest::RequestContextFetch) {
switch (m_options.crossOriginRequestPolicy) {
case DenyCrossOriginRequests:
newRequest.setFetchRequestMode(WebURLRequest::FetchRequestModeSameOrigin);
break;
case UseAccessControl:
if (m_options.preflightPolicy == ForcePreflight)
newRequest.setFetchRequestMode(WebURLRequest::FetchRequestModeCORSWithForcedPreflight);
else
newRequest.setFetchRequestMode(WebURLRequest::FetchRequestModeCORS);
break;
case AllowCrossOriginRequests:
SECURITY_CHECK(requestContext == WebURLRequest::RequestContextAudio || requestContext == WebURLRequest::RequestContextVideo || requestContext == WebURLRequest::RequestContextObject || requestContext == WebURLRequest::RequestContextFavicon || requestContext == WebURLRequest::RequestContextImage || requestContext == WebURLRequest::RequestContextScript);
newRequest.setFetchRequestMode(WebURLRequest::FetchRequestModeNoCORS);
break;
}
if (m_resourceLoaderOptions.allowCredentials == AllowStoredCredentials)
newRequest.setFetchCredentialsMode(WebURLRequest::FetchCredentialsModeInclude);
else
newRequest.setFetchCredentialsMode(WebURLRequest::FetchCredentialsModeSameOrigin);
}
if (newRequest.fetchRequestMode() == WebURLRequest::FetchRequestModeCORS || newRequest.fetchRequestMode() == WebURLRequest::FetchRequestModeCORSWithForcedPreflight) {
m_fallbackRequestForServiceWorker = ResourceRequest(request);
m_fallbackRequestForServiceWorker.setSkipServiceWorker(true);
}
loadRequest(newRequest, m_resourceLoaderOptions);
return;
}
dispatchInitialRequest(request);
}
|
void DocumentThreadableLoader::start(const ResourceRequest& request)
{
ASSERT(m_async || request.httpReferrer().isEmpty());
m_sameOriginRequest = getSecurityOrigin()->canRequestNoSuborigin(request.url());
m_requestContext = request.requestContext();
m_redirectMode = request.fetchRedirectMode();
if (!m_sameOriginRequest && m_options.crossOriginRequestPolicy == DenyCrossOriginRequests) {
InspectorInstrumentation::documentThreadableLoaderFailedToStartLoadingForClient(m_document, m_client);
ThreadableLoaderClient* client = m_client;
clear();
client->didFail(ResourceError(errorDomainBlinkInternal, 0, request.url().getString(), "Cross origin requests are not supported."));
return;
}
m_requestStartedSeconds = monotonicallyIncreasingTime();
const HTTPHeaderMap& headerMap = request.httpHeaderFields();
for (const auto& header : headerMap) {
if (FetchUtils::isSimpleHeader(header.key, header.value)) {
m_simpleRequestHeaders.add(header.key, header.value);
} else if (equalIgnoringCase(header.key, HTTPNames::Range) && m_options.crossOriginRequestPolicy == UseAccessControl && m_options.preflightPolicy == PreventPreflight) {
m_simpleRequestHeaders.add(header.key, header.value);
}
}
if (request.httpMethod() != HTTPNames::GET) {
if (Page* page = m_document->page())
page->chromeClient().didObserveNonGetFetchFromScript();
}
if (m_async && !request.skipServiceWorker() && SchemeRegistry::shouldTreatURLSchemeAsAllowingServiceWorkers(request.url().protocol()) && m_document->fetcher()->isControlledByServiceWorker()) {
ResourceRequest newRequest(request);
const WebURLRequest::RequestContext requestContext(request.requestContext());
if (requestContext != WebURLRequest::RequestContextFetch) {
switch (m_options.crossOriginRequestPolicy) {
case DenyCrossOriginRequests:
newRequest.setFetchRequestMode(WebURLRequest::FetchRequestModeSameOrigin);
break;
case UseAccessControl:
if (m_options.preflightPolicy == ForcePreflight)
newRequest.setFetchRequestMode(WebURLRequest::FetchRequestModeCORSWithForcedPreflight);
else
newRequest.setFetchRequestMode(WebURLRequest::FetchRequestModeCORS);
break;
case AllowCrossOriginRequests:
SECURITY_CHECK(requestContext == WebURLRequest::RequestContextAudio || requestContext == WebURLRequest::RequestContextVideo || requestContext == WebURLRequest::RequestContextObject || requestContext == WebURLRequest::RequestContextFavicon || requestContext == WebURLRequest::RequestContextImage || requestContext == WebURLRequest::RequestContextScript);
newRequest.setFetchRequestMode(WebURLRequest::FetchRequestModeNoCORS);
break;
}
if (m_resourceLoaderOptions.allowCredentials == AllowStoredCredentials)
newRequest.setFetchCredentialsMode(WebURLRequest::FetchCredentialsModeInclude);
else
newRequest.setFetchCredentialsMode(WebURLRequest::FetchCredentialsModeSameOrigin);
}
if (newRequest.fetchRequestMode() == WebURLRequest::FetchRequestModeCORS || newRequest.fetchRequestMode() == WebURLRequest::FetchRequestModeCORSWithForcedPreflight) {
m_fallbackRequestForServiceWorker = ResourceRequest(request);
m_fallbackRequestForServiceWorker.setSkipServiceWorker(true);
}
loadRequest(newRequest, m_resourceLoaderOptions);
return;
}
dispatchInitialRequest(request);
}
|
C
|
Chrome
| 0 |
CVE-2015-8767
|
https://www.cvedetails.com/cve/CVE-2015-8767/
|
CWE-362
|
https://github.com/torvalds/linux/commit/635682a14427d241bab7bbdeebb48a7d7b91638e
|
635682a14427d241bab7bbdeebb48a7d7b91638e
|
sctp: Prevent soft lockup when sctp_accept() is called during a timeout event
A case can occur when sctp_accept() is called by the user during
a heartbeat timeout event after the 4-way handshake. Since
sctp_assoc_migrate() changes both assoc->base.sk and assoc->ep, the
bh_sock_lock in sctp_generate_heartbeat_event() will be taken with
the listening socket but released with the new association socket.
The result is a deadlock on any future attempts to take the listening
socket lock.
Note that this race can occur with other SCTP timeouts that take
the bh_lock_sock() in the event sctp_accept() is called.
BUG: soft lockup - CPU#9 stuck for 67s! [swapper:0]
...
RIP: 0010:[<ffffffff8152d48e>] [<ffffffff8152d48e>] _spin_lock+0x1e/0x30
RSP: 0018:ffff880028323b20 EFLAGS: 00000206
RAX: 0000000000000002 RBX: ffff880028323b20 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffff880028323be0 RDI: ffff8804632c4b48
RBP: ffffffff8100bb93 R08: 0000000000000000 R09: 0000000000000000
R10: ffff880610662280 R11: 0000000000000100 R12: ffff880028323aa0
R13: ffff8804383c3880 R14: ffff880028323a90 R15: ffffffff81534225
FS: 0000000000000000(0000) GS:ffff880028320000(0000) knlGS:0000000000000000
CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b
CR2: 00000000006df528 CR3: 0000000001a85000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process swapper (pid: 0, threadinfo ffff880616b70000, task ffff880616b6cab0)
Stack:
ffff880028323c40 ffffffffa01c2582 ffff880614cfb020 0000000000000000
<d> 0100000000000000 00000014383a6c44 ffff8804383c3880 ffff880614e93c00
<d> ffff880614e93c00 0000000000000000 ffff8804632c4b00 ffff8804383c38b8
Call Trace:
<IRQ>
[<ffffffffa01c2582>] ? sctp_rcv+0x492/0xa10 [sctp]
[<ffffffff8148c559>] ? nf_iterate+0x69/0xb0
[<ffffffff814974a0>] ? ip_local_deliver_finish+0x0/0x2d0
[<ffffffff8148c716>] ? nf_hook_slow+0x76/0x120
[<ffffffff814974a0>] ? ip_local_deliver_finish+0x0/0x2d0
[<ffffffff8149757d>] ? ip_local_deliver_finish+0xdd/0x2d0
[<ffffffff81497808>] ? ip_local_deliver+0x98/0xa0
[<ffffffff81496ccd>] ? ip_rcv_finish+0x12d/0x440
[<ffffffff81497255>] ? ip_rcv+0x275/0x350
[<ffffffff8145cfeb>] ? __netif_receive_skb+0x4ab/0x750
...
With lockdep debugging:
=====================================
[ BUG: bad unlock balance detected! ]
-------------------------------------
CslRx/12087 is trying to release lock (slock-AF_INET) at:
[<ffffffffa01bcae0>] sctp_generate_timeout_event+0x40/0xe0 [sctp]
but there are no more locks to release!
other info that might help us debug this:
2 locks held by CslRx/12087:
#0: (&asoc->timers[i]){+.-...}, at: [<ffffffff8108ce1f>] run_timer_softirq+0x16f/0x3e0
#1: (slock-AF_INET){+.-...}, at: [<ffffffffa01bcac3>] sctp_generate_timeout_event+0x23/0xe0 [sctp]
Ensure the socket taken is also the same one that is released by
saving a copy of the socket before entering the timeout event
critical section.
Signed-off-by: Karl Heiss <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
void sctp_generate_proto_unreach_event(unsigned long data)
{
struct sctp_transport *transport = (struct sctp_transport *) data;
struct sctp_association *asoc = transport->asoc;
struct sock *sk = asoc->base.sk;
struct net *net = sock_net(sk);
bh_lock_sock(sk);
if (sock_owned_by_user(sk)) {
pr_debug("%s: sock is busy\n", __func__);
/* Try again later. */
if (!mod_timer(&transport->proto_unreach_timer,
jiffies + (HZ/20)))
sctp_association_hold(asoc);
goto out_unlock;
}
/* Is this structure just waiting around for us to actually
* get destroyed?
*/
if (asoc->base.dead)
goto out_unlock;
sctp_do_sm(net, SCTP_EVENT_T_OTHER,
SCTP_ST_OTHER(SCTP_EVENT_ICMP_PROTO_UNREACH),
asoc->state, asoc->ep, asoc, transport, GFP_ATOMIC);
out_unlock:
bh_unlock_sock(sk);
sctp_association_put(asoc);
}
|
void sctp_generate_proto_unreach_event(unsigned long data)
{
struct sctp_transport *transport = (struct sctp_transport *) data;
struct sctp_association *asoc = transport->asoc;
struct net *net = sock_net(asoc->base.sk);
bh_lock_sock(asoc->base.sk);
if (sock_owned_by_user(asoc->base.sk)) {
pr_debug("%s: sock is busy\n", __func__);
/* Try again later. */
if (!mod_timer(&transport->proto_unreach_timer,
jiffies + (HZ/20)))
sctp_association_hold(asoc);
goto out_unlock;
}
/* Is this structure just waiting around for us to actually
* get destroyed?
*/
if (asoc->base.dead)
goto out_unlock;
sctp_do_sm(net, SCTP_EVENT_T_OTHER,
SCTP_ST_OTHER(SCTP_EVENT_ICMP_PROTO_UNREACH),
asoc->state, asoc->ep, asoc, transport, GFP_ATOMIC);
out_unlock:
bh_unlock_sock(asoc->base.sk);
sctp_association_put(asoc);
}
|
C
|
linux
| 1 |
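This record's before/after functions actually differ (vul = 1), and the diff is the whole fix: capture asoc->base.sk once, so the same socket is locked and unlocked even if sctp_assoc_migrate() swaps the pointer mid-event. Reduced to its core:

    struct sock *sk = asoc->base.sk;   /* captured once, before the critical section */

    bh_lock_sock(sk);
    /* ... timeout handling; asoc->base.sk may be migrated meanwhile ... */
    bh_unlock_sock(sk);                /* releases the lock that was actually taken */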
CVE-2013-0892
|
https://www.cvedetails.com/cve/CVE-2013-0892/
| null |
https://github.com/chromium/chromium/commit/0ab5fab4939150bd0f30ada8a4bf6eb0f69d66c1
|
0ab5fab4939150bd0f30ada8a4bf6eb0f69d66c1
|
Sizes going across an IPC should be uint32.
BUG=164946
Review URL: https://chromiumcodereview.appspot.com/11472038
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@171944 0039d316-1c4b-4281-b951-d872f2087c98
|
~GpuCommandBufferMemoryTracker() {
delete gpu_memory_manager_tracking_group_;
}
|
~GpuCommandBufferMemoryTracker() {
delete gpu_memory_manager_tracking_group_;
}
|
C
|
Chrome
| 0 |
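The rule in the commit title exists because size_t differs in width between 32- and 64-bit processes sharing an IPC channel, so a fixed-width field is needed to keep the wire layout stable. A minimal illustration with a hypothetical struct, not Chromium's IPC macros:

    #include <stdint.h>

    struct ipc_payload_header {
        uint32_t size;   /* fixed width on the wire; size_t would not be */
    };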
CVE-2018-16863
|
https://www.cvedetails.com/cve/CVE-2018-16863/
|
CWE-78
|
http://git.ghostscript.com/?p=ghostpdl.git;a=commit;h=79cccf641486
|
79cccf641486a6595c43f1de1cd7ade696020a31
| null |
gs_copyscanlines(gx_device * dev, int start_y, byte * data, uint size,
int *plines_copied, uint * pbytes_copied)
{
uint line_size = gx_device_raster(dev, 0);
uint count = size / line_size;
uint i;
byte *dest = data;
for (i = 0; i < count; i++, dest += line_size) {
int code = (*dev_proc(dev, get_bits)) (dev, start_y + i, dest, NULL);
if (code < 0) {
/* Might just be an overrun. */
if (start_y + i == dev->height)
break;
return_error(code);
}
}
if (plines_copied != NULL)
*plines_copied = i;
if (pbytes_copied != NULL)
*pbytes_copied = i * line_size;
return 0;
}
|
gs_copyscanlines(gx_device * dev, int start_y, byte * data, uint size,
int *plines_copied, uint * pbytes_copied)
{
uint line_size = gx_device_raster(dev, 0);
uint count = size / line_size;
uint i;
byte *dest = data;
for (i = 0; i < count; i++, dest += line_size) {
int code = (*dev_proc(dev, get_bits)) (dev, start_y + i, dest, NULL);
if (code < 0) {
/* Might just be an overrun. */
if (start_y + i == dev->height)
break;
return_error(code);
}
}
if (plines_copied != NULL)
*plines_copied = i;
if (pbytes_copied != NULL)
*pbytes_copied = i * line_size;
return 0;
}
|
C
|
ghostscript
| 0 |
CVE-2019-13106
|
https://www.cvedetails.com/cve/CVE-2019-13106/
|
CWE-787
|
https://github.com/u-boot/u-boot/commits/master
|
master
|
Merge branch '2020-01-22-master-imports'
- Re-add U8500 platform support
- Add bcm968360bg support
- Assorted Keymile fixes
- Other assorted bugfixes
|
static void get_keyword(struct token *t)
{
int i;
for (i = 0; keywords[i].val; i++) {
if (!strcmp(t->val, keywords[i].val)) {
t->type = keywords[i].type;
break;
}
}
}
|
static void get_keyword(struct token *t)
{
int i;
for (i = 0; keywords[i].val; i++) {
if (!strcmp(t->val, keywords[i].val)) {
t->type = keywords[i].type;
break;
}
}
}
|
C
|
u-boot
| 0 |
CVE-2014-5336
|
https://www.cvedetails.com/cve/CVE-2014-5336/
|
CWE-20
|
https://github.com/monkey/monkey/commit/b2d0e6f92310bb14a15aa2f8e96e1fb5379776dd
|
b2d0e6f92310bb14a15aa2f8e96e1fb5379776dd
|
Request: new request session flag to mark those files opened by FDT
This patch aims to fix a potential DDoS problem that can be caused
by the server querying repetitive non-existent resources.
When serving a static file, the core uses the Vhost FDT mechanism, but if
it sends a static error page it does a direct open(2). When closing
the resources for the same request it was just calling mk_vhost_close(),
which did not properly clear the file descriptor.
This patch adds a new field on the struct session_request called 'fd_is_fdt',
which contains MK_TRUE or MK_FALSE depending on how fd_file was opened.
Thanks to Matthew Daley <[email protected]> for report and troubleshoot this
problem.
Signed-off-by: Eduardo Silva <[email protected]>
|
struct vhost_fdt_hash_table *mk_vhost_fdt_table_lookup(int id, struct host *host)
{
struct mk_list *head;
struct mk_list *vhost_list;
struct vhost_fdt_host *fdt_host;
struct vhost_fdt_hash_table *ht = NULL;
vhost_list = mk_vhost_fdt_key;
mk_list_foreach(head, vhost_list) {
fdt_host = mk_list_entry(head, struct vhost_fdt_host, _head);
if (fdt_host->host == host) {
ht = &fdt_host->hash_table[id];
return ht;
}
}
return ht;
}
|
struct vhost_fdt_hash_table *mk_vhost_fdt_table_lookup(int id, struct host *host)
{
struct mk_list *head;
struct mk_list *vhost_list;
struct vhost_fdt_host *fdt_host;
struct vhost_fdt_hash_table *ht = NULL;
vhost_list = mk_vhost_fdt_key;
mk_list_foreach(head, vhost_list) {
fdt_host = mk_list_entry(head, struct vhost_fdt_host, _head);
if (fdt_host->host == host) {
ht = &fdt_host->hash_table[id];
return ht;
}
}
return ht;
}
|
C
|
monkey
| 0 |
CVE-2015-1233
|
https://www.cvedetails.com/cve/CVE-2015-1233/
|
CWE-17
|
https://github.com/chromium/chromium/commit/31b81d4cf8b6a063391839816c82fc61c8272e53
|
31b81d4cf8b6a063391839816c82fc61c8272e53
|
Avoid Showing rotation change notification when source is accelerometer
BUG=717252
TEST=Manually rotate device with accelerometer and observe there's no notification
Review-Url: https://codereview.chromium.org/2853113005
Cr-Commit-Position: refs/heads/master@{#469058}
|
ScreenLayoutObserverTest::ScreenLayoutObserverTest() {}
|
ScreenLayoutObserverTest::ScreenLayoutObserverTest() {}
|
C
|
Chrome
| 0 |
CVE-2012-2816
|
https://www.cvedetails.com/cve/CVE-2012-2816/
| null |
https://github.com/chromium/chromium/commit/cd0bd79d6ebdb72183e6f0833673464cc10b3600
|
cd0bd79d6ebdb72183e6f0833673464cc10b3600
|
Convert plugin and GPU process to brokered handle duplication.
BUG=119250
Review URL: https://chromiumcodereview.appspot.com/9958034
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@132303 0039d316-1c4b-4281-b951-d872f2087c98
|
void WebGraphicsContext3DCommandBufferImpl::FlipVertically(
uint8* framebuffer,
unsigned int width,
unsigned int height) {
if (width == 0)
return;
scanline_.resize(width * 4);
uint8* scanline = &scanline_[0];
unsigned int row_bytes = width * 4;
unsigned int count = height / 2;
for (unsigned int i = 0; i < count; i++) {
uint8* row_a = framebuffer + i * row_bytes;
uint8* row_b = framebuffer + (height - i - 1) * row_bytes;
memcpy(scanline, row_b, row_bytes);
memcpy(row_b, row_a, row_bytes);
memcpy(row_a, scanline, row_bytes);
}
}
|
void WebGraphicsContext3DCommandBufferImpl::FlipVertically(
uint8* framebuffer,
unsigned int width,
unsigned int height) {
if (width == 0)
return;
scanline_.resize(width * 4);
uint8* scanline = &scanline_[0];
unsigned int row_bytes = width * 4;
unsigned int count = height / 2;
for (unsigned int i = 0; i < count; i++) {
uint8* row_a = framebuffer + i * row_bytes;
uint8* row_b = framebuffer + (height - i - 1) * row_bytes;
memcpy(scanline, row_b, row_bytes);
memcpy(row_b, row_a, row_bytes);
memcpy(row_a, scanline, row_bytes);
}
}
|
C
|
Chrome
| 0 |
CVE-2018-6088
|
https://www.cvedetails.com/cve/CVE-2018-6088/
|
CWE-20
|
https://github.com/chromium/chromium/commit/94b3728a2836da335a10085d4089c9d8e1c9d225
|
94b3728a2836da335a10085d4089c9d8e1c9d225
|
Copy visible_pages_ when iterating over it.
On this case, a call inside the loop may cause visible_pages_ to
change.
Bug: 822091
Change-Id: I41b0715faa6fe3e39203cd9142cf5ea38e59aefb
Reviewed-on: https://chromium-review.googlesource.com/964592
Reviewed-by: dsinclair <[email protected]>
Commit-Queue: Henrique Nakashima <[email protected]>
Cr-Commit-Position: refs/heads/master@{#543494}
|
PDFiumEngine::PDFiumEngine(PDFEngine::Client* client)
: client_(client),
current_zoom_(1.0),
current_rotation_(0),
doc_(nullptr),
form_(nullptr),
defer_page_unload_(false),
selecting_(false),
mouse_down_state_(PDFiumPage::NONSELECTABLE_AREA,
PDFiumPage::LinkTarget()),
in_form_text_area_(false),
editable_form_text_area_(false),
mouse_left_button_down_(false),
permissions_(0),
permissions_handler_revision_(-1),
fpdf_availability_(nullptr),
last_page_mouse_down_(-1),
most_visible_page_(-1),
called_do_document_action_(false),
render_grayscale_(false),
render_annots_(true),
edit_mode_(false) {
find_factory_.Initialize(this);
password_factory_.Initialize(this);
file_access_.m_FileLen = 0;
file_access_.m_GetBlock = &GetBlock;
file_access_.m_Param = this;
file_availability_.version = 1;
file_availability_.IsDataAvail = &IsDataAvail;
file_availability_.engine = this;
download_hints_.version = 1;
download_hints_.AddSegment = &AddSegment;
download_hints_.engine = this;
FPDF_FORMFILLINFO::version = 1;
FPDF_FORMFILLINFO::m_pJsPlatform = this;
FPDF_FORMFILLINFO::Release = nullptr;
FPDF_FORMFILLINFO::FFI_Invalidate = Form_Invalidate;
FPDF_FORMFILLINFO::FFI_OutputSelectedRect = Form_OutputSelectedRect;
FPDF_FORMFILLINFO::FFI_SetCursor = Form_SetCursor;
FPDF_FORMFILLINFO::FFI_SetTimer = Form_SetTimer;
FPDF_FORMFILLINFO::FFI_KillTimer = Form_KillTimer;
FPDF_FORMFILLINFO::FFI_GetLocalTime = Form_GetLocalTime;
FPDF_FORMFILLINFO::FFI_OnChange = Form_OnChange;
FPDF_FORMFILLINFO::FFI_GetPage = Form_GetPage;
FPDF_FORMFILLINFO::FFI_GetCurrentPage = Form_GetCurrentPage;
FPDF_FORMFILLINFO::FFI_GetRotation = Form_GetRotation;
FPDF_FORMFILLINFO::FFI_ExecuteNamedAction = Form_ExecuteNamedAction;
FPDF_FORMFILLINFO::FFI_SetTextFieldFocus = Form_SetTextFieldFocus;
FPDF_FORMFILLINFO::FFI_DoURIAction = Form_DoURIAction;
FPDF_FORMFILLINFO::FFI_DoGoToAction = Form_DoGoToAction;
#if defined(PDF_ENABLE_XFA)
FPDF_FORMFILLINFO::version = 2;
FPDF_FORMFILLINFO::FFI_EmailTo = Form_EmailTo;
FPDF_FORMFILLINFO::FFI_DisplayCaret = Form_DisplayCaret;
FPDF_FORMFILLINFO::FFI_SetCurrentPage = Form_SetCurrentPage;
FPDF_FORMFILLINFO::FFI_GetCurrentPageIndex = Form_GetCurrentPageIndex;
FPDF_FORMFILLINFO::FFI_GetPageViewRect = Form_GetPageViewRect;
FPDF_FORMFILLINFO::FFI_GetPlatform = Form_GetPlatform;
FPDF_FORMFILLINFO::FFI_PageEvent = nullptr;
FPDF_FORMFILLINFO::FFI_PopupMenu = Form_PopupMenu;
FPDF_FORMFILLINFO::FFI_PostRequestURL = Form_PostRequestURL;
FPDF_FORMFILLINFO::FFI_PutRequestURL = Form_PutRequestURL;
FPDF_FORMFILLINFO::FFI_UploadTo = Form_UploadTo;
FPDF_FORMFILLINFO::FFI_DownloadFromURL = Form_DownloadFromURL;
FPDF_FORMFILLINFO::FFI_OpenFile = Form_OpenFile;
FPDF_FORMFILLINFO::FFI_GotoURL = Form_GotoURL;
FPDF_FORMFILLINFO::FFI_GetLanguage = Form_GetLanguage;
#endif // defined(PDF_ENABLE_XFA)
IPDF_JSPLATFORM::version = 3;
IPDF_JSPLATFORM::app_alert = Form_Alert;
IPDF_JSPLATFORM::app_beep = Form_Beep;
IPDF_JSPLATFORM::app_response = Form_Response;
IPDF_JSPLATFORM::Doc_getFilePath = Form_GetFilePath;
IPDF_JSPLATFORM::Doc_mail = Form_Mail;
IPDF_JSPLATFORM::Doc_print = Form_Print;
IPDF_JSPLATFORM::Doc_submitForm = Form_SubmitForm;
IPDF_JSPLATFORM::Doc_gotoPage = Form_GotoPage;
IPDF_JSPLATFORM::Field_browse = nullptr;
IFSDK_PAUSE::version = 1;
IFSDK_PAUSE::user = nullptr;
IFSDK_PAUSE::NeedToPauseNow = Pause_NeedToPauseNow;
#if defined(OS_LINUX)
pp::Instance* instance = client_->GetPluginInstance();
if (instance)
g_last_instance_id = instance->pp_instance();
#endif
}
|
PDFiumEngine::PDFiumEngine(PDFEngine::Client* client)
: client_(client),
current_zoom_(1.0),
current_rotation_(0),
doc_(nullptr),
form_(nullptr),
defer_page_unload_(false),
selecting_(false),
mouse_down_state_(PDFiumPage::NONSELECTABLE_AREA,
PDFiumPage::LinkTarget()),
in_form_text_area_(false),
editable_form_text_area_(false),
mouse_left_button_down_(false),
permissions_(0),
permissions_handler_revision_(-1),
fpdf_availability_(nullptr),
last_page_mouse_down_(-1),
most_visible_page_(-1),
called_do_document_action_(false),
render_grayscale_(false),
render_annots_(true),
edit_mode_(false) {
find_factory_.Initialize(this);
password_factory_.Initialize(this);
file_access_.m_FileLen = 0;
file_access_.m_GetBlock = &GetBlock;
file_access_.m_Param = this;
file_availability_.version = 1;
file_availability_.IsDataAvail = &IsDataAvail;
file_availability_.engine = this;
download_hints_.version = 1;
download_hints_.AddSegment = &AddSegment;
download_hints_.engine = this;
FPDF_FORMFILLINFO::version = 1;
FPDF_FORMFILLINFO::m_pJsPlatform = this;
FPDF_FORMFILLINFO::Release = nullptr;
FPDF_FORMFILLINFO::FFI_Invalidate = Form_Invalidate;
FPDF_FORMFILLINFO::FFI_OutputSelectedRect = Form_OutputSelectedRect;
FPDF_FORMFILLINFO::FFI_SetCursor = Form_SetCursor;
FPDF_FORMFILLINFO::FFI_SetTimer = Form_SetTimer;
FPDF_FORMFILLINFO::FFI_KillTimer = Form_KillTimer;
FPDF_FORMFILLINFO::FFI_GetLocalTime = Form_GetLocalTime;
FPDF_FORMFILLINFO::FFI_OnChange = Form_OnChange;
FPDF_FORMFILLINFO::FFI_GetPage = Form_GetPage;
FPDF_FORMFILLINFO::FFI_GetCurrentPage = Form_GetCurrentPage;
FPDF_FORMFILLINFO::FFI_GetRotation = Form_GetRotation;
FPDF_FORMFILLINFO::FFI_ExecuteNamedAction = Form_ExecuteNamedAction;
FPDF_FORMFILLINFO::FFI_SetTextFieldFocus = Form_SetTextFieldFocus;
FPDF_FORMFILLINFO::FFI_DoURIAction = Form_DoURIAction;
FPDF_FORMFILLINFO::FFI_DoGoToAction = Form_DoGoToAction;
#if defined(PDF_ENABLE_XFA)
FPDF_FORMFILLINFO::version = 2;
FPDF_FORMFILLINFO::FFI_EmailTo = Form_EmailTo;
FPDF_FORMFILLINFO::FFI_DisplayCaret = Form_DisplayCaret;
FPDF_FORMFILLINFO::FFI_SetCurrentPage = Form_SetCurrentPage;
FPDF_FORMFILLINFO::FFI_GetCurrentPageIndex = Form_GetCurrentPageIndex;
FPDF_FORMFILLINFO::FFI_GetPageViewRect = Form_GetPageViewRect;
FPDF_FORMFILLINFO::FFI_GetPlatform = Form_GetPlatform;
FPDF_FORMFILLINFO::FFI_PageEvent = nullptr;
FPDF_FORMFILLINFO::FFI_PopupMenu = Form_PopupMenu;
FPDF_FORMFILLINFO::FFI_PostRequestURL = Form_PostRequestURL;
FPDF_FORMFILLINFO::FFI_PutRequestURL = Form_PutRequestURL;
FPDF_FORMFILLINFO::FFI_UploadTo = Form_UploadTo;
FPDF_FORMFILLINFO::FFI_DownloadFromURL = Form_DownloadFromURL;
FPDF_FORMFILLINFO::FFI_OpenFile = Form_OpenFile;
FPDF_FORMFILLINFO::FFI_GotoURL = Form_GotoURL;
FPDF_FORMFILLINFO::FFI_GetLanguage = Form_GetLanguage;
#endif // defined(PDF_ENABLE_XFA)
IPDF_JSPLATFORM::version = 3;
IPDF_JSPLATFORM::app_alert = Form_Alert;
IPDF_JSPLATFORM::app_beep = Form_Beep;
IPDF_JSPLATFORM::app_response = Form_Response;
IPDF_JSPLATFORM::Doc_getFilePath = Form_GetFilePath;
IPDF_JSPLATFORM::Doc_mail = Form_Mail;
IPDF_JSPLATFORM::Doc_print = Form_Print;
IPDF_JSPLATFORM::Doc_submitForm = Form_SubmitForm;
IPDF_JSPLATFORM::Doc_gotoPage = Form_GotoPage;
IPDF_JSPLATFORM::Field_browse = nullptr;
IFSDK_PAUSE::version = 1;
IFSDK_PAUSE::user = nullptr;
IFSDK_PAUSE::NeedToPauseNow = Pause_NeedToPauseNow;
#if defined(OS_LINUX)
pp::Instance* instance = client_->GetPluginInstance();
if (instance)
g_last_instance_id = instance->pp_instance();
#endif
}
|
C
|
Chrome
| 0 |
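The pattern named in the commit title, sketched generically in C with placeholder names rather than PDFium APIs: iterate over a snapshot so the per-element callback may safely mutate the live list.

    #include <stdlib.h>
    #include <string.h>

    static void for_each_page(const int *visible, size_t n, void (*fn)(int))
    {
        int *snap = malloc(n * sizeof *snap);
        if (snap == NULL)
            return;
        memcpy(snap, visible, n * sizeof *snap);
        for (size_t i = 0; i < n; i++)
            fn(snap[i]);            /* fn may add or remove visible pages */
        free(snap);
    }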
CVE-2017-15924
|
https://www.cvedetails.com/cve/CVE-2017-15924/
|
CWE-78
|
https://github.com/shadowsocks/shadowsocks-libev/commit/c67d275803dc6ea22c558d06b1f7ba9f94cd8de3
|
c67d275803dc6ea22c558d06b1f7ba9f94cd8de3
|
Fix #1734
|
create_server_socket(const char *host, const char *port)
{
struct addrinfo hints;
struct addrinfo *result, *rp, *ipv4v6bindall;
int s, server_sock;
memset(&hints, 0, sizeof(struct addrinfo));
hints.ai_family = AF_UNSPEC; /* Return IPv4 and IPv6 choices */
hints.ai_socktype = SOCK_DGRAM; /* We want a UDP socket */
hints.ai_flags = AI_PASSIVE | AI_ADDRCONFIG; /* For wildcard IP address */
hints.ai_protocol = IPPROTO_UDP;
s = getaddrinfo(host, port, &hints, &result);
if (s != 0) {
LOGE("getaddrinfo: %s", gai_strerror(s));
return -1;
}
rp = result;
/*
* On Linux, with net.ipv6.bindv6only = 0 (the default), getaddrinfo(NULL) with
* AI_PASSIVE returns 0.0.0.0 and :: (in this order). AI_PASSIVE was meant to
* return a list of addresses to listen on, but it is impossible to listen on
* 0.0.0.0 and :: at the same time, if :: implies dualstack mode.
*/
if (!host) {
ipv4v6bindall = result;
/* Loop over all address infos found until a IPV6 address is found. */
while (ipv4v6bindall) {
if (ipv4v6bindall->ai_family == AF_INET6) {
rp = ipv4v6bindall; /* Take first IPV6 address available */
break;
}
ipv4v6bindall = ipv4v6bindall->ai_next; /* Get next address info, if any */
}
}
for (/*rp = result*/; rp != NULL; rp = rp->ai_next) {
server_sock = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
if (server_sock == -1) {
continue;
}
if (rp->ai_family == AF_INET6) {
int ipv6only = host ? 1 : 0;
setsockopt(server_sock, IPPROTO_IPV6, IPV6_V6ONLY, &ipv6only, sizeof(ipv6only));
}
int opt = 1;
setsockopt(server_sock, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));
s = bind(server_sock, rp->ai_addr, rp->ai_addrlen);
if (s == 0) {
/* We managed to bind successfully! */
break;
} else {
ERROR("bind");
}
close(server_sock);
}
if (rp == NULL) {
LOGE("cannot bind");
return -1;
}
freeaddrinfo(result);
return server_sock;
}
|
create_server_socket(const char *host, const char *port)
{
struct addrinfo hints;
struct addrinfo *result, *rp, *ipv4v6bindall;
int s, server_sock;
memset(&hints, 0, sizeof(struct addrinfo));
hints.ai_family = AF_UNSPEC; /* Return IPv4 and IPv6 choices */
hints.ai_socktype = SOCK_DGRAM; /* We want a UDP socket */
hints.ai_flags = AI_PASSIVE | AI_ADDRCONFIG; /* For wildcard IP address */
hints.ai_protocol = IPPROTO_UDP;
s = getaddrinfo(host, port, &hints, &result);
if (s != 0) {
LOGE("getaddrinfo: %s", gai_strerror(s));
return -1;
}
rp = result;
/*
* On Linux, with net.ipv6.bindv6only = 0 (the default), getaddrinfo(NULL) with
* AI_PASSIVE returns 0.0.0.0 and :: (in this order). AI_PASSIVE was meant to
* return a list of addresses to listen on, but it is impossible to listen on
* 0.0.0.0 and :: at the same time, if :: implies dualstack mode.
*/
if (!host) {
ipv4v6bindall = result;
/* Loop over all address infos found until a IPV6 address is found. */
while (ipv4v6bindall) {
if (ipv4v6bindall->ai_family == AF_INET6) {
rp = ipv4v6bindall; /* Take first IPV6 address available */
break;
}
ipv4v6bindall = ipv4v6bindall->ai_next; /* Get next address info, if any */
}
}
for (/*rp = result*/; rp != NULL; rp = rp->ai_next) {
server_sock = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
if (server_sock == -1) {
continue;
}
if (rp->ai_family == AF_INET6) {
int ipv6only = host ? 1 : 0;
setsockopt(server_sock, IPPROTO_IPV6, IPV6_V6ONLY, &ipv6only, sizeof(ipv6only));
}
int opt = 1;
setsockopt(server_sock, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));
s = bind(server_sock, rp->ai_addr, rp->ai_addrlen);
if (s == 0) {
/* We managed to bind successfully! */
break;
} else {
ERROR("bind");
}
close(server_sock);
}
if (rp == NULL) {
LOGE("cannot bind");
return -1;
}
freeaddrinfo(result);
return server_sock;
}
|
C
|
shadowsocks-libev
| 0 |
CVE-2018-16075
|
https://www.cvedetails.com/cve/CVE-2018-16075/
|
CWE-254
|
https://github.com/chromium/chromium/commit/d913f72b4875cf0814fc3f03ad7c00642097c4a4
|
d913f72b4875cf0814fc3f03ad7c00642097c4a4
|
Remove RequireCSSExtensionForFile runtime enabled flag.
The feature has long since been stable (since M64) and doesn't seem
to be a need for this flag.
BUG=788936
Change-Id: I666390b869289c328acb4a2daa5bf4154e1702c0
Reviewed-on: https://chromium-review.googlesource.com/c/1324143
Reviewed-by: Mike West <[email protected]>
Reviewed-by: Camille Lamy <[email protected]>
Commit-Queue: Dave Tapuska <[email protected]>
Cr-Commit-Position: refs/heads/master@{#607329}
|
void WebRuntimeFeatures::EnablePaymentApp(bool enable) {
RuntimeEnabledFeatures::SetPaymentAppEnabled(enable);
}
|
void WebRuntimeFeatures::EnablePaymentApp(bool enable) {
RuntimeEnabledFeatures::SetPaymentAppEnabled(enable);
}
|
C
|
Chrome
| 0 |
CVE-2017-18200
|
https://www.cvedetails.com/cve/CVE-2017-18200/
|
CWE-20
|
https://github.com/torvalds/linux/commit/638164a2718f337ea224b747cf5977ef143166a4
|
638164a2718f337ea224b747cf5977ef143166a4
|
f2fs: fix potential panic during fstrim
As Ju Hyung Park reported:
"When 'fstrim' is called for manual trim, a BUG() can be triggered
randomly with this patch.
I'm seeing this issue on both x86 Desktop and arm64 Android phone.
On x86 Desktop, this was caused during Ubuntu boot-up. I have a
cronjob installed which calls 'fstrim -v /' during boot. On arm64
Android, this was caused during GC looping with 1ms gc_min_sleep_time
& gc_max_sleep_time."
The root cause of this issue is that f2fs_wait_discard_bios can only be
used by f2fs_put_super, because during put_super there must be no
other referrers, so it can ignore the discard entry's reference count
when removing the entry. Otherwise, in other callers we will hit the bug_on
in __remove_discard_cmd, as another issuer may have added to the reference
count of the discard entry.
Thread A Thread B
- issue_discard_thread
- f2fs_ioc_fitrim
- f2fs_trim_fs
- f2fs_wait_discard_bios
- __issue_discard_cmd
- __submit_discard_cmd
- __wait_discard_cmd
- dc->ref++
- __wait_one_discard_bio
- __wait_discard_cmd
- __remove_discard_cmd
- f2fs_bug_on(sbi, dc->ref)
Fixes: 969d1b180d987c2be02de890d0fff0f66a0e80de
Reported-by: Ju Hyung Park <[email protected]>
Signed-off-by: Chao Yu <[email protected]>
Signed-off-by: Jaegeuk Kim <[email protected]>
|
static int build_dirty_segmap(struct f2fs_sb_info *sbi)
{
struct dirty_seglist_info *dirty_i;
unsigned int bitmap_size, i;
/* allocate memory for dirty segments list information */
dirty_i = kzalloc(sizeof(struct dirty_seglist_info), GFP_KERNEL);
if (!dirty_i)
return -ENOMEM;
SM_I(sbi)->dirty_info = dirty_i;
mutex_init(&dirty_i->seglist_lock);
bitmap_size = f2fs_bitmap_size(MAIN_SEGS(sbi));
for (i = 0; i < NR_DIRTY_TYPE; i++) {
dirty_i->dirty_segmap[i] = kvzalloc(bitmap_size, GFP_KERNEL);
if (!dirty_i->dirty_segmap[i])
return -ENOMEM;
}
init_dirty_segmap(sbi);
return init_victim_secmap(sbi);
}
|
static int build_dirty_segmap(struct f2fs_sb_info *sbi)
{
struct dirty_seglist_info *dirty_i;
unsigned int bitmap_size, i;
/* allocate memory for dirty segments list information */
dirty_i = kzalloc(sizeof(struct dirty_seglist_info), GFP_KERNEL);
if (!dirty_i)
return -ENOMEM;
SM_I(sbi)->dirty_info = dirty_i;
mutex_init(&dirty_i->seglist_lock);
bitmap_size = f2fs_bitmap_size(MAIN_SEGS(sbi));
for (i = 0; i < NR_DIRTY_TYPE; i++) {
dirty_i->dirty_segmap[i] = kvzalloc(bitmap_size, GFP_KERNEL);
if (!dirty_i->dirty_segmap[i])
return -ENOMEM;
}
init_dirty_segmap(sbi);
return init_victim_secmap(sbi);
}
|
C
|
linux
| 0 |
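The commit message reduces to a reference-count invariant: a discard entry may only be removed once no issuer still holds a reference. A sketch of that guard; dc->ref, __remove_discard_cmd and f2fs_bug_on come from the message, while the surrounding shape is an assumption:

    if (dc->ref) {
        /* fstrim (or another waiter) still holds a reference; it must
           observe completion and drop dc->ref before the entry is freed. */
        return;
    }
    __remove_discard_cmd(sbi, dc);   /* now f2fs_bug_on(sbi, dc->ref) cannot fire */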
CVE-2018-18397
|
https://www.cvedetails.com/cve/CVE-2018-18397/
| null |
https://github.com/torvalds/linux/commit/29ec90660d68bbdd69507c1c8b4e33aa299278b1
|
29ec90660d68bbdd69507c1c8b4e33aa299278b1
|
userfaultfd: shmem/hugetlbfs: only allow to register VM_MAYWRITE vmas
After the VMA to register the uffd onto is found, check that it has
VM_MAYWRITE set before allowing registration. This way we inherit all
common code checks before allowing to fill file holes in shmem and
hugetlbfs with UFFDIO_COPY.
The userfaultfd memory model is not applicable for readonly files unless
it's a MAP_PRIVATE.
Link: http://lkml.kernel.org/r/[email protected]
Fixes: ff62a3421044 ("hugetlb: implement memfd sealing")
Signed-off-by: Andrea Arcangeli <[email protected]>
Reviewed-by: Mike Rapoport <[email protected]>
Reviewed-by: Hugh Dickins <[email protected]>
Reported-by: Jann Horn <[email protected]>
Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support")
Cc: <[email protected]>
Cc: "Dr. David Alan Gilbert" <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Peter Xu <[email protected]>
Cc: [email protected]
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
static void userfaultfd_ctx_get(struct userfaultfd_ctx *ctx)
{
if (!atomic_inc_not_zero(&ctx->refcount))
BUG();
}
|
static void userfaultfd_ctx_get(struct userfaultfd_ctx *ctx)
{
if (!atomic_inc_not_zero(&ctx->refcount))
BUG();
}
|
C
|
linux
| 0 |
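The described check, reduced to a single predicate; its exact placement in the real registration path is an assumption:

    static bool uffd_vma_writable(const struct vm_area_struct *vma)
    {
        /* Readonly file mappings must not become fillable via UFFDIO_COPY;
           MAP_PRIVATE mappings keep VM_MAYWRITE and stay eligible. */
        return (vma->vm_flags & VM_MAYWRITE) != 0;
    }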
CVE-2015-5287
|
https://www.cvedetails.com/cve/CVE-2015-5287/
|
CWE-59
|
https://github.com/abrt/abrt/commit/3c1b60cfa62d39e5fff5a53a5bc53dae189e740e
|
3c1b60cfa62d39e5fff5a53a5bc53dae189e740e
|
ccpp: save abrt core files only to new files
Prior to this commit abrt-hook-ccpp saved a core file generated by a
process running a program whose name starts with "abrt" in
DUMP_LOCATION/$(basename program)-coredump. If the file was a symlink,
the hook followed and wrote core file to the symlink's target.
Addresses CVE-2015-5287
Signed-off-by: Jakub Filak <[email protected]>
|
static int open_user_core(uid_t uid, uid_t fsuid, gid_t fsgid, pid_t pid, char **percent_values)
{
proc_cwd = open_cwd(pid);
if (proc_cwd == NULL)
return -1;
/* http://article.gmane.org/gmane.comp.security.selinux/21842 */
security_context_t newcon;
if (compute_selinux_con_for_new_file(pid, dirfd(proc_cwd), &newcon) < 0)
{
log_notice("Not going to create a user core due to SELinux errors");
return -1;
}
if (strcmp(core_basename, "core") == 0)
{
/* Mimic "core.PID" if requested */
char buf[] = "0\n";
int fd = open("/proc/sys/kernel/core_uses_pid", O_RDONLY);
if (fd >= 0)
{
IGNORE_RESULT(read(fd, buf, sizeof(buf)));
close(fd);
}
if (strcmp(buf, "1\n") == 0)
{
core_basename = xasprintf("%s.%lu", core_basename, (long)pid);
}
}
else
{
/* Expand old core pattern, put expanded name in core_basename */
core_basename = xstrdup(core_basename);
unsigned idx = 0;
while (1)
{
char c = core_basename[idx];
if (!c)
break;
idx++;
if (c != '%')
continue;
/* We just copied %, look at following char and expand %c */
c = core_basename[idx];
unsigned specifier_num = strchrnul(percent_specifiers, c) - percent_specifiers;
if (percent_specifiers[specifier_num] != '\0') /* valid %c (might be %% too) */
{
const char *val = "%";
if (specifier_num > 0) /* not %% */
val = percent_values[specifier_num - 1];
/* Replace %c at core_basename[idx] by its value */
idx--;
char *old = core_basename;
core_basename = xasprintf("%.*s%s%s", idx, core_basename, val, core_basename + idx + 2);
free(old);
idx += strlen(val);
}
/* else: invalid %c, % is already copied verbatim,
* next loop iteration will copy c */
}
}
if (g_need_nonrelative && core_basename[0] != '/')
{
error_msg("Current suid_dumpable policy prevents from saving core dumps according to relative core_pattern");
return -1;
}
/* Open (create) compat core file.
* man core:
* There are various circumstances in which a core dump file
* is not produced:
*
* [skipped obvious ones]
* The process does not have permission to write the core file.
* ...if a file with the same name exists and is not writable
* or is not a regular file (e.g., it is a directory or a symbolic link).
*
* A file with the same name already exists, but there is more
* than one hard link to that file.
*
* The file system where the core dump file would be created is full;
* or has run out of inodes; or is mounted read-only;
* or the user has reached their quota for the file system.
*
* The RLIMIT_CORE or RLIMIT_FSIZE resource limits for the process
* are set to zero.
* [we check RLIMIT_CORE, but how can we check RLIMIT_FSIZE?]
*
* The binary being executed by the process does not have
* read permission enabled. [how we can check it here?]
*
* The process is executing a set-user-ID (set-group-ID) program
* that is owned by a user (group) other than the real
* user (group) ID of the process. [TODO?]
* (However, see the description of the prctl(2) PR_SET_DUMPABLE operation,
* and the description of the /proc/sys/fs/suid_dumpable file in proc(5).)
*/
int user_core_fd = -1;
int selinux_fail = 1;
/*
* These calls must be reverted as soon as possible.
*/
xsetegid(fsgid);
xseteuid(fsuid);
/* Set SELinux context like kernel when creating core dump file.
* This condition is TRUE if */
if (/* SELinux is disabled */ newcon == NULL
|| /* or the call succeeds */ setfscreatecon_raw(newcon) >= 0)
{
/* Do not O_TRUNC: if later checks fail, we do not want to have file already modified here */
user_core_fd = openat(dirfd(proc_cwd), core_basename, O_WRONLY | O_CREAT | O_NOFOLLOW | g_user_core_flags, 0600); /* kernel makes 0600 too */
/* Do the error check here and print the error message in order to
* avoid interference in 'errno' usage caused by SELinux functions */
if (user_core_fd < 0)
perror_msg("Can't open '%s' at '%s'", core_basename, user_pwd);
/* Fail if SELinux is enabled and the call fails */
if (newcon != NULL && setfscreatecon_raw(NULL) < 0)
perror_msg("setfscreatecon_raw(NULL)");
else
selinux_fail = 0;
}
else
perror_msg("setfscreatecon_raw(%s)", newcon);
/*
* DON'T JUMP OVER THIS REVERT OF THE UID/GID CHANGES
*/
xsetegid(0);
xseteuid(0);
if (user_core_fd < 0 || selinux_fail)
goto user_core_fail;
struct stat sb;
if (fstat(user_core_fd, &sb) != 0
|| !S_ISREG(sb.st_mode)
|| sb.st_nlink != 1
|| sb.st_uid != fsuid
) {
perror_msg("'%s' at '%s' is not a regular file with link count 1 owned by UID(%d)", core_basename, user_pwd, fsuid);
goto user_core_fail;
}
if (ftruncate(user_core_fd, 0) != 0) {
/* perror first, otherwise unlink may trash errno */
perror_msg("Can't truncate '%s' at '%s' to size 0", core_basename, user_pwd);
goto user_core_fail;
}
return user_core_fd;
user_core_fail:
if (user_core_fd >= 0)
close(user_core_fd);
return -1;
}
|
static int open_user_core(uid_t uid, uid_t fsuid, gid_t fsgid, pid_t pid, char **percent_values)
{
proc_cwd = open_cwd(pid);
if (proc_cwd == NULL)
return -1;
/* http://article.gmane.org/gmane.comp.security.selinux/21842 */
security_context_t newcon;
if (compute_selinux_con_for_new_file(pid, dirfd(proc_cwd), &newcon) < 0)
{
log_notice("Not going to create a user core due to SELinux errors");
return -1;
}
if (strcmp(core_basename, "core") == 0)
{
/* Mimic "core.PID" if requested */
char buf[] = "0\n";
int fd = open("/proc/sys/kernel/core_uses_pid", O_RDONLY);
if (fd >= 0)
{
IGNORE_RESULT(read(fd, buf, sizeof(buf)));
close(fd);
}
if (strcmp(buf, "1\n") == 0)
{
core_basename = xasprintf("%s.%lu", core_basename, (long)pid);
}
}
else
{
/* Expand old core pattern, put expanded name in core_basename */
core_basename = xstrdup(core_basename);
unsigned idx = 0;
while (1)
{
char c = core_basename[idx];
if (!c)
break;
idx++;
if (c != '%')
continue;
/* We just copied %, look at following char and expand %c */
c = core_basename[idx];
unsigned specifier_num = strchrnul(percent_specifiers, c) - percent_specifiers;
if (percent_specifiers[specifier_num] != '\0') /* valid %c (might be %% too) */
{
const char *val = "%";
if (specifier_num > 0) /* not %% */
val = percent_values[specifier_num - 1];
/* Replace %c at core_basename[idx] by its value */
idx--;
char *old = core_basename;
core_basename = xasprintf("%.*s%s%s", idx, core_basename, val, core_basename + idx + 2);
free(old);
idx += strlen(val);
}
/* else: invalid %c, % is already copied verbatim,
* next loop iteration will copy c */
}
}
if (g_need_nonrelative && core_basename[0] != '/')
{
error_msg("Current suid_dumpable policy prevents from saving core dumps according to relative core_pattern");
return -1;
}
/* Open (create) compat core file.
* man core:
* There are various circumstances in which a core dump file
* is not produced:
*
* [skipped obvious ones]
* The process does not have permission to write the core file.
* ...if a file with the same name exists and is not writable
* or is not a regular file (e.g., it is a directory or a symbolic link).
*
* A file with the same name already exists, but there is more
* than one hard link to that file.
*
* The file system where the core dump file would be created is full;
* or has run out of inodes; or is mounted read-only;
* or the user has reached their quota for the file system.
*
* The RLIMIT_CORE or RLIMIT_FSIZE resource limits for the process
* are set to zero.
* [we check RLIMIT_CORE, but how can we check RLIMIT_FSIZE?]
*
* The binary being executed by the process does not have
* read permission enabled. [how we can check it here?]
*
* The process is executing a set-user-ID (set-group-ID) program
* that is owned by a user (group) other than the real
* user (group) ID of the process. [TODO?]
* (However, see the description of the prctl(2) PR_SET_DUMPABLE operation,
* and the description of the /proc/sys/fs/suid_dumpable file in proc(5).)
*/
int user_core_fd = -1;
int selinux_fail = 1;
/*
* These calls must be reverted as soon as possible.
*/
xsetegid(fsgid);
xseteuid(fsuid);
/* Set SELinux context like kernel when creating core dump file.
* This condition is TRUE if */
if (/* SELinux is disabled */ newcon == NULL
|| /* or the call succeeds */ setfscreatecon_raw(newcon) >= 0)
{
/* Do not O_TRUNC: if later checks fail, we do not want to have file already modified here */
user_core_fd = openat(dirfd(proc_cwd), core_basename, O_WRONLY | O_CREAT | O_NOFOLLOW | g_user_core_flags, 0600); /* kernel makes 0600 too */
/* Do the error check here and print the error message in order to
* avoid interference in 'errno' usage caused by SELinux functions */
if (user_core_fd < 0)
perror_msg("Can't open '%s' at '%s'", core_basename, user_pwd);
/* Fail if SELinux is enabled and the call fails */
if (newcon != NULL && setfscreatecon_raw(NULL) < 0)
perror_msg("setfscreatecon_raw(NULL)");
else
selinux_fail = 0;
}
else
perror_msg("setfscreatecon_raw(%s)", newcon);
/*
* DON'T JUMP OVER THIS REVERT OF THE UID/GID CHANGES
*/
xsetegid(0);
xseteuid(0);
if (user_core_fd < 0 || selinux_fail)
goto user_core_fail;
struct stat sb;
if (fstat(user_core_fd, &sb) != 0
|| !S_ISREG(sb.st_mode)
|| sb.st_nlink != 1
|| sb.st_uid != fsuid
) {
perror_msg("'%s' at '%s' is not a regular file with link count 1 owned by UID(%d)", core_basename, user_pwd, fsuid);
goto user_core_fail;
}
if (ftruncate(user_core_fd, 0) != 0) {
/* perror first, otherwise unlink may trash errno */
perror_msg("Can't truncate '%s' at '%s' to size 0", core_basename, user_pwd);
goto user_core_fail;
}
return user_core_fd;
user_core_fail:
if (user_core_fd >= 0)
close(user_core_fd);
return -1;
}
|
C
|
abrt
| 0 |
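The hook above opens with O_NOFOLLOW and then validates the inode; the subject line's "only to new files" can be expressed even more strictly with O_EXCL, which makes openat() fail on any pre-existing name. A hedged variant of the call shown above:

    int user_core_fd = openat(dirfd(proc_cwd), core_basename,
                              O_WRONLY | O_CREAT | O_EXCL | O_NOFOLLOW, 0600);
    if (user_core_fd < 0)
        perror_msg("Can't create new '%s' at '%s'", core_basename, user_pwd);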
CVE-2007-6761
|
https://www.cvedetails.com/cve/CVE-2007-6761/
|
CWE-119
|
https://github.com/torvalds/linux/commit/0b29669c065f60501e7289e1950fa2a618962358
|
0b29669c065f60501e7289e1950fa2a618962358
|
V4L/DVB (6751): V4L: Memory leak! Fix count in videobuf-vmalloc mmap
This is a pretty serious bug. map->count is never initialized after the
call to kmalloc, making the count start at some random trash value. The
end result is leaking videobufs.
Also, fix up the debug statements to print unsigned values.
Pushed to http://ifup.org/hg/v4l-dvb too
Signed-off-by: Brandon Philips <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
|
void videobuf_queue_vmalloc_init(struct videobuf_queue* q,
struct videobuf_queue_ops *ops,
void *dev,
spinlock_t *irqlock,
enum v4l2_buf_type type,
enum v4l2_field field,
unsigned int msize,
void *priv)
{
videobuf_queue_core_init(q, ops, dev, irqlock, type, field, msize,
priv, &qops);
}
|
void videobuf_queue_vmalloc_init(struct videobuf_queue* q,
struct videobuf_queue_ops *ops,
void *dev,
spinlock_t *irqlock,
enum v4l2_buf_type type,
enum v4l2_field field,
unsigned int msize,
void *priv)
{
videobuf_queue_core_init(q, ops, dev, irqlock, type, field, msize,
priv, &qops);
}
|
C
|
linux
| 0 |
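The essence of the fix the commit describes, sketched around the allocation site; the surrounding videobuf mmap code is assumed:

    map = kmalloc(sizeof(struct videobuf_mapping), GFP_KERNEL);
    if (NULL == map)
        return -ENOMEM;
    map->count = 1;   /* previously left as uninitialized kmalloc() garbage */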
CVE-2016-1696
|
https://www.cvedetails.com/cve/CVE-2016-1696/
|
CWE-284
|
https://github.com/chromium/chromium/commit/c0569cc04741cccf6548c2169fcc1609d958523f
|
c0569cc04741cccf6548c2169fcc1609d958523f
|
[Extensions] Expand bindings access checks
BUG=601149
BUG=601073
Review URL: https://codereview.chromium.org/1866103002
Cr-Commit-Position: refs/heads/master@{#387710}
|
void Dispatcher::UpdateOriginPermissions(const GURL& extension_url,
const URLPatternSet& old_patterns,
const URLPatternSet& new_patterns) {
static const char* kSchemes[] = {
url::kHttpScheme,
url::kHttpsScheme,
url::kFileScheme,
content::kChromeUIScheme,
url::kFtpScheme,
#if defined(OS_CHROMEOS)
content::kExternalFileScheme,
#endif
extensions::kExtensionScheme,
};
for (size_t i = 0; i < arraysize(kSchemes); ++i) {
const char* scheme = kSchemes[i];
for (URLPatternSet::const_iterator pattern = old_patterns.begin();
pattern != old_patterns.end(); ++pattern) {
if (pattern->MatchesScheme(scheme)) {
WebSecurityPolicy::removeOriginAccessWhitelistEntry(
extension_url,
WebString::fromUTF8(scheme),
WebString::fromUTF8(pattern->host()),
pattern->match_subdomains());
}
}
for (URLPatternSet::const_iterator pattern = new_patterns.begin();
pattern != new_patterns.end(); ++pattern) {
if (pattern->MatchesScheme(scheme)) {
WebSecurityPolicy::addOriginAccessWhitelistEntry(
extension_url,
WebString::fromUTF8(scheme),
WebString::fromUTF8(pattern->host()),
pattern->match_subdomains());
}
}
}
}
|
void Dispatcher::UpdateOriginPermissions(const GURL& extension_url,
const URLPatternSet& old_patterns,
const URLPatternSet& new_patterns) {
static const char* kSchemes[] = {
url::kHttpScheme,
url::kHttpsScheme,
url::kFileScheme,
content::kChromeUIScheme,
url::kFtpScheme,
#if defined(OS_CHROMEOS)
content::kExternalFileScheme,
#endif
extensions::kExtensionScheme,
};
for (size_t i = 0; i < arraysize(kSchemes); ++i) {
const char* scheme = kSchemes[i];
for (URLPatternSet::const_iterator pattern = old_patterns.begin();
pattern != old_patterns.end(); ++pattern) {
if (pattern->MatchesScheme(scheme)) {
WebSecurityPolicy::removeOriginAccessWhitelistEntry(
extension_url,
WebString::fromUTF8(scheme),
WebString::fromUTF8(pattern->host()),
pattern->match_subdomains());
}
}
for (URLPatternSet::const_iterator pattern = new_patterns.begin();
pattern != new_patterns.end(); ++pattern) {
if (pattern->MatchesScheme(scheme)) {
WebSecurityPolicy::addOriginAccessWhitelistEntry(
extension_url,
WebString::fromUTF8(scheme),
WebString::fromUTF8(pattern->host()),
pattern->match_subdomains());
}
}
}
}
|
C
|
Chrome
| 0 |
CVE-2016-6213
|
https://www.cvedetails.com/cve/CVE-2016-6213/
|
CWE-400
|
https://github.com/torvalds/linux/commit/d29216842a85c7970c536108e093963f02714498
|
d29216842a85c7970c536108e093963f02714498
|
mnt: Add a per mount namespace limit on the number of mounts
CAI Qian <[email protected]> pointed out that the semantics
of shared subtrees make it possible to create an exponentially
increasing number of mounts in a mount namespace.
mkdir /tmp/1 /tmp/2
mount --make-rshared /
for i in $(seq 1 20) ; do mount --bind /tmp/1 /tmp/2 ; done
Will create 2^20 or 1048576 mounts, which is a practical problem
as some people have managed to hit this by accident.
As such CVE-2016-6213 was assigned.
Ian Kent <[email protected]> described the situation for autofs users
as follows:
> The number of mounts for direct mount maps is usually not very large because of
> the way they are implemented, large direct mount maps can have performance
> problems. There can be anywhere from a few (likely case a few hundred) to less
> than 10000, plus mounts that have been triggered and not yet expired.
>
> Indirect mounts have one autofs mount at the root plus the number of mounts that
> have been triggered and not yet expired.
>
> The number of autofs indirect map entries can range from a few to the common
> case of several thousand and in rare cases up to between 30000 and 50000. I've
> not heard of people with maps larger than 50000 entries.
>
> The larger the number of map entries the greater the possibility for a large
> number of active mounts so it's not hard to expect cases of a 1000 or somewhat
> more active mounts.
So I am setting the default number of mounts allowed per mount
namespace at 100,000. This is more than enough for any use case I
know of, but small enough to quickly stop an exponential increase
in mounts, which should be perfect to catch misconfigurations and
malfunctioning programs.
For anyone who needs a higher limit this can be changed by writing
to the new /proc/sys/fs/mount-max sysctl.
Tested-by: CAI Qian <[email protected]>
Signed-off-by: "Eric W. Biederman" <[email protected]>
|
static bool disconnect_mount(struct mount *mnt, enum umount_tree_flags how)
{
/* Leaving mounts connected is only valid for lazy umounts */
if (how & UMOUNT_SYNC)
return true;
/* A mount without a parent has nothing to be connected to */
if (!mnt_has_parent(mnt))
return true;
/* Because the reference counting rules change when mounts are
* unmounted and connected, umounted mounts may not be
* connected to mounted mounts.
*/
if (!(mnt->mnt_parent->mnt.mnt_flags & MNT_UMOUNT))
return true;
/* Has it been requested that the mount remain connected? */
if (how & UMOUNT_CONNECTED)
return false;
/* Is the mount locked such that it needs to remain connected? */
if (IS_MNT_LOCKED(mnt))
return false;
/* By default disconnect the mount */
return true;
}
|
static bool disconnect_mount(struct mount *mnt, enum umount_tree_flags how)
{
/* Leaving mounts connected is only valid for lazy umounts */
if (how & UMOUNT_SYNC)
return true;
/* A mount without a parent has nothing to be connected to */
if (!mnt_has_parent(mnt))
return true;
/* Because the reference counting rules change when mounts are
* unmounted and connected, umounted mounts may not be
* connected to mounted mounts.
*/
if (!(mnt->mnt_parent->mnt.mnt_flags & MNT_UMOUNT))
return true;
/* Has it been requested that the mount remain connected? */
if (how & UMOUNT_CONNECTED)
return false;
/* Is the mount locked such that it needs to remain connected? */
if (IS_MNT_LOCKED(mnt))
return false;
/* By default disconnect the mount */
return true;
}
|
C
|
linux
| 0 |
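A minimal sketch of the accounting the commit introduces; sysctl_mount_max and ns->mounts follow the commit message, while the helper's shape is an assumption:

    static int mnt_ns_may_add(struct mnt_namespace *ns, unsigned int pending)
    {
        unsigned int max = READ_ONCE(sysctl_mount_max);  /* /proc/sys/fs/mount-max */

        if (ns->mounts + pending > max)
            return -ENOSPC;       /* stops the 2^N bind-mount blow-up early */
        ns->mounts += pending;
        return 0;
    }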
CVE-2016-5842
|
https://www.cvedetails.com/cve/CVE-2016-5842/
|
CWE-125
|
https://github.com/ImageMagick/ImageMagick/commit/d8ab7f046587f2e9f734b687ba7e6e10147c294b
|
d8ab7f046587f2e9f734b687ba7e6e10147c294b
|
Improve checking of EXIF profile to prevent integer overflow (bug report from Ibrahim el-sayed)
|
MagickExport void DestroyImageProperties(Image *image)
{
assert(image != (Image *) NULL);
assert(image->signature == MagickCoreSignature);
if (image->debug != MagickFalse)
(void) LogMagickEvent(TraceEvent,GetMagickModule(),"%s",
image->filename);
if (image->properties != (void *) NULL)
image->properties=(void *) DestroySplayTree((SplayTreeInfo *)
image->properties);
}
|
MagickExport void DestroyImageProperties(Image *image)
{
assert(image != (Image *) NULL);
assert(image->signature == MagickCoreSignature);
if (image->debug != MagickFalse)
(void) LogMagickEvent(TraceEvent,GetMagickModule(),"%s",
image->filename);
if (image->properties != (void *) NULL)
image->properties=(void *) DestroySplayTree((SplayTreeInfo *)
image->properties);
}
|
C
|
ImageMagick
| 0 |
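A hedged sketch of the overflow-safe bounds check the commit title implies when walking EXIF directory entries; the names are placeholders, not the actual property.c code:

    #include <stddef.h>

    static int exif_range_ok(size_t offset, size_t length, size_t profile_size)
    {
        if (length > profile_size)
            return 0;
        if (offset > profile_size - length)  /* checks offset+length without wrapping */
            return 0;
        return 1;
    }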
CVE-2015-3885
|
https://www.cvedetails.com/cve/CVE-2015-3885/
|
CWE-189
|
https://github.com/rawstudio/rawstudio/commit/983bda1f0fa5fa86884381208274198a620f006e
|
983bda1f0fa5fa86884381208274198a620f006e
|
Avoid overflow in ljpeg_start().
|
void CLASS parse_foveon()
{
int entries, img=0, off, len, tag, save, i, wide, high, pent, poff[256][2];
char name[64], value[64];
order = 0x4949; /* Little-endian */
fseek (ifp, 36, SEEK_SET);
flip = get4();
fseek (ifp, -4, SEEK_END);
fseek (ifp, get4(), SEEK_SET);
if (get4() != 0x64434553) return; /* SECd */
entries = (get4(),get4());
while (entries--) {
off = get4();
len = get4();
tag = get4();
save = ftell(ifp);
fseek (ifp, off, SEEK_SET);
if (get4() != (unsigned)(0x20434553 | (tag << 24))) return;
switch (tag) {
case 0x47414d49: /* IMAG */
case 0x32414d49: /* IMA2 */
fseek (ifp, 8, SEEK_CUR);
if (get4() == 30) { /* SIGMA SD15/SD1/DP* are unsupported */
is_foveon = 0;
return;
}
wide = get4();
high = get4();
if (wide > raw_width && high > raw_height) {
raw_width = wide;
raw_height = high;
data_offset = off+24;
}
fseek (ifp, off+28, SEEK_SET);
if (fgetc(ifp) == 0xff && fgetc(ifp) == 0xd8
&& thumb_length < (unsigned)(len-28)) {
thumb_offset = off+28;
thumb_length = len-28;
write_thumb = &CLASS jpeg_thumb;
}
if (++img == 2 && !thumb_length) {
thumb_offset = off+24;
thumb_width = wide;
thumb_height = high;
write_thumb = &CLASS foveon_thumb;
}
break;
case 0x464d4143: /* CAMF */
meta_offset = off+24;
meta_length = len-28;
if (meta_length > 0x20000)
meta_length = 0x20000;
break;
case 0x504f5250: /* PROP */
pent = (get4(),get4());
fseek (ifp, 12, SEEK_CUR);
off += pent*8 + 24;
if ((unsigned) pent > 256) pent=256;
for (i=0; i < pent*2; i++)
poff[0][i] = off + get4()*2;
for (i=0; i < pent; i++) {
foveon_gets (poff[i][0], name, 64);
foveon_gets (poff[i][1], value, 64);
if (!strcmp (name, "ISO"))
iso_speed = atoi(value);
if (!strcmp (name, "CAMMANUF"))
strcpy (make, value);
if (!strcmp (name, "CAMMODEL"))
strcpy (model, value);
if (!strcmp (name, "WB_DESC"))
strcpy (model2, value);
if (!strcmp (name, "TIME"))
timestamp = atoi(value);
if (!strcmp (name, "EXPTIME"))
shutter = atoi(value) / 1000000.0;
if (!strcmp (name, "APERTURE"))
aperture = atof(value);
if (!strcmp (name, "FLENGTH"))
focal_len = atof(value);
}
#ifdef LOCALTIME
timestamp = mktime (gmtime (&timestamp));
#endif
}
fseek (ifp, save, SEEK_SET);
}
is_foveon = 1;
}
|
void CLASS parse_foveon()
{
int entries, img=0, off, len, tag, save, i, wide, high, pent, poff[256][2];
char name[64], value[64];
order = 0x4949; /* Little-endian */
fseek (ifp, 36, SEEK_SET);
flip = get4();
fseek (ifp, -4, SEEK_END);
fseek (ifp, get4(), SEEK_SET);
if (get4() != 0x64434553) return; /* SECd */
entries = (get4(),get4());
while (entries--) {
off = get4();
len = get4();
tag = get4();
save = ftell(ifp);
fseek (ifp, off, SEEK_SET);
if (get4() != (unsigned)(0x20434553 | (tag << 24))) return;
switch (tag) {
case 0x47414d49: /* IMAG */
case 0x32414d49: /* IMA2 */
fseek (ifp, 8, SEEK_CUR);
if (get4() == 30) { /* SIGMA SD15/SD1/DP* are unsupported */
is_foveon = 0;
return;
}
wide = get4();
high = get4();
if (wide > raw_width && high > raw_height) {
raw_width = wide;
raw_height = high;
data_offset = off+24;
}
fseek (ifp, off+28, SEEK_SET);
if (fgetc(ifp) == 0xff && fgetc(ifp) == 0xd8
&& thumb_length < (unsigned)(len-28)) {
thumb_offset = off+28;
thumb_length = len-28;
write_thumb = &CLASS jpeg_thumb;
}
if (++img == 2 && !thumb_length) {
thumb_offset = off+24;
thumb_width = wide;
thumb_height = high;
write_thumb = &CLASS foveon_thumb;
}
break;
case 0x464d4143: /* CAMF */
meta_offset = off+24;
meta_length = len-28;
if (meta_length > 0x20000)
meta_length = 0x20000;
break;
case 0x504f5250: /* PROP */
pent = (get4(),get4());
fseek (ifp, 12, SEEK_CUR);
off += pent*8 + 24;
if ((unsigned) pent > 256) pent=256;
for (i=0; i < pent*2; i++)
poff[0][i] = off + get4()*2;
for (i=0; i < pent; i++) {
foveon_gets (poff[i][0], name, 64);
foveon_gets (poff[i][1], value, 64);
if (!strcmp (name, "ISO"))
iso_speed = atoi(value);
if (!strcmp (name, "CAMMANUF"))
strcpy (make, value);
if (!strcmp (name, "CAMMODEL"))
strcpy (model, value);
if (!strcmp (name, "WB_DESC"))
strcpy (model2, value);
if (!strcmp (name, "TIME"))
timestamp = atoi(value);
if (!strcmp (name, "EXPTIME"))
shutter = atoi(value) / 1000000.0;
if (!strcmp (name, "APERTURE"))
aperture = atof(value);
if (!strcmp (name, "FLENGTH"))
focal_len = atof(value);
}
#ifdef LOCALTIME
timestamp = mktime (gmtime (&timestamp));
#endif
}
fseek (ifp, save, SEEK_SET);
}
is_foveon = 1;
}
|
C
|
rawstudio
| 0 |
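The one-line commit message above ("Avoid overflow in ljpeg_start().") names the fix, but the sampled parse_foveon() does not contain it. A minimal sketch of the idea, under the assumption that header-supplied dimensions feed an allocation (illustrative names, not the actual rawstudio patch):
#include <stdint.h>
#include <stdlib.h>
/* Hypothetical sketch: allocate a decode buffer only when
   wide * high * clrs * sizeof(uint16_t) provably fits in size_t. */
static uint16_t *alloc_ljpeg_buffer(unsigned wide, unsigned high, unsigned clrs)
{
  size_t pixels;
  if (wide == 0 || high == 0 || clrs == 0 || clrs > 4)
    return NULL;                            /* implausible JPEG header */
  if (wide > SIZE_MAX / high)
    return NULL;                            /* wide * high would overflow */
  pixels = (size_t) wide * high;
  if (pixels > SIZE_MAX / clrs / sizeof(uint16_t))
    return NULL;                            /* total byte count would overflow */
  return malloc(pixels * clrs * sizeof(uint16_t));
}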
CVE-2017-18241
|
https://www.cvedetails.com/cve/CVE-2017-18241/
|
CWE-476
|
https://github.com/torvalds/linux/commit/d4fdf8ba0e5808ba9ad6b44337783bd9935e0982
|
d4fdf8ba0e5808ba9ad6b44337783bd9935e0982
|
f2fs: fix a panic caused by NULL flush_cmd_control
Mounting the filesystem with the noflush_merge option made boot fail on an
illegal (NULL) fcc address in f2fs_issue_flush:
if (!test_opt(sbi, FLUSH_MERGE)) {
ret = submit_flush_wait(sbi);
atomic_inc(&fcc->issued_flush); -> here, fcc is illegal (NULL)
return ret;
}
Signed-off-by: Yunlei He <[email protected]>
Signed-off-by: Jaegeuk Kim <[email protected]>
|
void drop_inmem_page(struct inode *inode, struct page *page)
{
struct f2fs_inode_info *fi = F2FS_I(inode);
struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
struct list_head *head = &fi->inmem_pages;
struct inmem_pages *cur = NULL;
f2fs_bug_on(sbi, !IS_ATOMIC_WRITTEN_PAGE(page));
mutex_lock(&fi->inmem_lock);
list_for_each_entry(cur, head, list) {
if (cur->page == page)
break;
}
f2fs_bug_on(sbi, !cur || cur->page != page);
list_del(&cur->list);
mutex_unlock(&fi->inmem_lock);
dec_page_count(sbi, F2FS_INMEM_PAGES);
kmem_cache_free(inmem_entry_slab, cur);
ClearPageUptodate(page);
set_page_private(page, 0);
ClearPagePrivate(page);
f2fs_put_page(page, 0);
trace_f2fs_commit_inmem_page(page, INMEM_INVALIDATE);
}
|
void drop_inmem_page(struct inode *inode, struct page *page)
{
struct f2fs_inode_info *fi = F2FS_I(inode);
struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
struct list_head *head = &fi->inmem_pages;
struct inmem_pages *cur = NULL;
f2fs_bug_on(sbi, !IS_ATOMIC_WRITTEN_PAGE(page));
mutex_lock(&fi->inmem_lock);
list_for_each_entry(cur, head, list) {
if (cur->page == page)
break;
}
f2fs_bug_on(sbi, !cur || cur->page != page);
list_del(&cur->list);
mutex_unlock(&fi->inmem_lock);
dec_page_count(sbi, F2FS_INMEM_PAGES);
kmem_cache_free(inmem_entry_slab, cur);
ClearPageUptodate(page);
set_page_private(page, 0);
ClearPagePrivate(page);
f2fs_put_page(page, 0);
trace_f2fs_commit_inmem_page(page, INMEM_INVALIDATE);
}
|
C
|
linux
| 0 |
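The commit message in this record pinpoints the failure: with noflush_merge the flush_cmd_control structure is never allocated, yet the quoted branch of f2fs_issue_flush dereferences it unconditionally. A self-contained userspace analogue of the bug and its guard (simplified names; a sketch of the failure mode, not the upstream kernel diff):
#include <stdio.h>
/* Userspace analogue: an optional control structure is only allocated when
   a feature flag is set, but a counter inside it used to be bumped on every
   call. The NULL guard mirrors the intent of the fix. */
struct flush_cmd_control { long issued_flush; };
struct sb_info {
  int flush_merge_enabled;           /* 0 when mounted with noflush_merge */
  struct flush_cmd_control *fcc;     /* NULL when flush merging is off    */
};
static int issue_flush(struct sb_info *sbi)
{
  if (!sbi->flush_merge_enabled) {
    int ret = 0;                     /* stand-in for submit_flush_wait()  */
    if (sbi->fcc)                    /* guard: fcc may legitimately be NULL */
      sbi->fcc->issued_flush++;
    return ret;
  }
  sbi->fcc->issued_flush++;          /* safe: the flag implies fcc exists */
  return 0;
}
int main(void)
{
  struct sb_info sbi = { 0, NULL };  /* simulates a noflush_merge mount */
  printf("flush returned %d\n", issue_flush(&sbi));
  return 0;
}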
CVE-2017-5118
|
https://www.cvedetails.com/cve/CVE-2017-5118/
|
CWE-732
|
https://github.com/chromium/chromium/commit/0ab2412a104d2f235d7b9fe19d30ef605a410832
|
0ab2412a104d2f235d7b9fe19d30ef605a410832
|
Inherit CSP when we inherit the security origin
This prevents attacks that use main-window navigation to get out of the
existing CSP constraints, such as the related bug
Bug: 747847
Change-Id: I1e57b50da17f65d38088205b0a3c7c49ef2ae4d8
Reviewed-on: https://chromium-review.googlesource.com/592027
Reviewed-by: Mike West <[email protected]>
Commit-Queue: Andy Paicu <[email protected]>
Cr-Commit-Position: refs/heads/master@{#492333}
|
void Document::UpdateStyleAndLayoutIgnorePendingStylesheetsForNode(Node* node) {
DCHECK(node);
if (!node->InActiveDocument())
return;
UpdateStyleAndLayoutIgnorePendingStylesheets();
}
|
void Document::UpdateStyleAndLayoutIgnorePendingStylesheetsForNode(Node* node) {
DCHECK(node);
if (!node->InActiveDocument())
return;
UpdateStyleAndLayoutIgnorePendingStylesheets();
}
|
C
|
Chrome
| 0 |
CVE-2017-6345
|
https://www.cvedetails.com/cve/CVE-2017-6345/
|
CWE-20
|
https://github.com/torvalds/linux/commit/8b74d439e1697110c5e5c600643e823eb1dd0762
|
8b74d439e1697110c5e5c600643e823eb1dd0762
|
net/llc: avoid BUG_ON() in skb_orphan()
It seems nobody used LLC since linux-3.12.
Fortunately fuzzers like syzkaller still know how to run this code,
otherwise it would be no fun.
Setting skb->sk without skb->destructor leads to all kinds of
bugs, we now prefer to be very strict about it.
Ideally here we would use skb_set_owner() but this helper does not exist yet,
only CAN seems to have a private helper for that.
Fixes: 376c7311bdb6 ("net: add a temporary sanity check in skb_orphan()")
Signed-off-by: Eric Dumazet <[email protected]>
Reported-by: Andrey Konovalov <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
static inline bool llc_listener_match(const struct llc_sap *sap,
const struct llc_addr *laddr,
const struct sock *sk)
{
struct llc_sock *llc = llc_sk(sk);
return sk->sk_type == SOCK_STREAM && sk->sk_state == TCP_LISTEN &&
llc->laddr.lsap == laddr->lsap &&
ether_addr_equal(llc->laddr.mac, laddr->mac);
}
|
static inline bool llc_listener_match(const struct llc_sap *sap,
const struct llc_addr *laddr,
const struct sock *sk)
{
struct llc_sock *llc = llc_sk(sk);
return sk->sk_type == SOCK_STREAM && sk->sk_state == TCP_LISTEN &&
llc->laddr.lsap == laddr->lsap &&
ether_addr_equal(llc->laddr.mac, laddr->mac);
}
|
C
|
linux
| 0 |
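The commit message in this record states the invariant that was violated: an skb must not carry skb->sk without a matching skb->destructor, or the sanity check in skb_orphan() triggers BUG_ON(). An illustrative kernel-style sketch of the ownership pattern the fix enforces (sock_hold() and sock_efree() are real kernel helpers; the wrapper name is invented for this example, and this is not the exact upstream patch):
#include <linux/skbuff.h>
#include <net/sock.h>
/* Illustrative sketch: when attaching an skb to a socket, take a reference
 * and install a destructor so skb_orphan() has something to call, instead
 * of assigning skb->sk bare. */
static void llc_skb_set_owner(struct sk_buff *skb, struct sock *sk)
{
	sock_hold(sk);                 /* reference released by the destructor */
	skb->sk = sk;
	skb->destructor = sock_efree;  /* sock_efree() does sock_put(skb->sk) */
}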